AI Velocity

Enhancing applications and workflows with GPT

Unless you live under a rock, you have probably heard of OpenAI and GPT models. Likely, you have tried GPT out and seen some of the areas where it shines and where it struggles. I want to describe three ways you can enhance your applications and workflows using GPT-3 models’ strengths: you can offer new functionality that wasn’t possible before, accelerate debugging, and automate the boring stuff.

Before we dive in, here is a brief overview and history of GPT-3. GPT stands for Generative Pre-trained Transformer, a technique for training neural networks that was described in a paper released in 2018. The most widely used models are GPT-3 and ChatGPT (sometimes referred to as GPT-3.5). ChatGPT belongs to a new generation of chatbots that interpret natural language messages and respond like a “real person”. The biggest limitation of GPT models is that they can only respond with information from their training datasets.

These datasets can be massive: GPT-3 was trained on roughly 570GB of text, including all of English Wikipedia. However, the models can only draw on data from that training set. For example, the training data for the text-davinci-003 model (the most advanced model, which powers ChatGPT) stops in June 2021, meaning it has no knowledge of events or developments after June 2021. If you haven’t already, you can create an account for free and interact with ChatGPT at https://chat.openai.com/. If you are curious and want to read more, a brief search will find many thorough articles covering every part of the process, from model generation, curation, and parameters to development and usage.
The two biggest strengths of vanilla[1] GPT models are giving a close-to-human text/chat experience and regurgitating and applying solutions to (relatively) common problems. They struggle in environments where they do not know the context/content discussed (e.g. Supreme Court opinions) or where the solutions are niche (e.g. they will confidently use coding libraries that don’t exist but that they think should).

Photo by AltumCode on Unsplash

New Functionality

One attractive use for GPT-3 models is to expand your application to offer functionality that is not possible without them. The ideal use case is one where any extra data required is data you can supply yourself, or where you want a cheaper alternative to a dedicated hosted AI API, and where the goal is to assist users in creating freeform (short or long) text. For example, electronic greeting card companies like Jacquie Lawson or Greeting Island could help you write your thank-you notes (brief plug: if you want to see a PoC of this, check out ThankYou Assistant). Marketing copy sites like Hubspot / Buffer can help flesh out social media posts. Survey sites like SurveyMonkey / Qualtrics can help create short-form and long-form questions (or responses). Grammarly could be extended to correct and write full paragraphs. Code-generation sites like Spring Initializr could not just generate boilerplate code but customize it for the specific user. Postgres (a popular relational database) could allow users to write queries in natural language and convert those to SQL.
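To make the last idea concrete, here is a minimal sketch of how the natural-language-to-SQL case could be wired up. The `build_sql_prompt` helper, the table schema, and the commented-out API call are all illustrative assumptions, not a real product API:

```python
# Hypothetical sketch: turn a user's plain-English request into a prompt
# that asks a GPT model for a SQL query. The schema and helper name are
# assumptions for illustration only.

def build_sql_prompt(schema: str, question: str) -> str:
    """Assemble a completion prompt asking the model for a single SQL query."""
    return (
        "Given the following PostgreSQL schema:\n"
        f"{schema}\n"
        "Write one SQL query that answers this question. "
        "Respond with SQL only.\n"
        f"Question: {question}\n"
        "SQL:"
    )

schema = "CREATE TABLE orders (id int, customer text, total numeric, placed_at date);"
prompt = build_sql_prompt(schema, "total revenue per customer last month")

# The prompt would then be sent to a completion endpoint, e.g.:
# response = openai.Completion.create(model="text-davinci-003", prompt=prompt)
print(prompt)
```

Because GPT can produce invalid or even destructive SQL, a real integration would show the generated query to the user for review before running it.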

What these cases have in common is that the generated text is evaluated and iterated on by a real person to make sure it makes sense. Currently, you would not (for example) want to use GPT models as a sales assistant, as they could easily sell features that don’t exist yet! More generally, there is no way to GUARANTEE that a model will stick to certain topics or “approved” responses, so you want to be careful about putting the models in places where they are making promises or generalizations about your brand to customers. That said, there are plenty of ways they can help make teams more effective that don’t directly contribute to functionality.

Automate the Boring Stuff

Allow smart people to focus on smart problems. If you’ve spent any time around me you will have heard that before, and a key way to do this is to automate and abstract common workflows and problems that don’t directly exercise your skillset. For example, if you were working as a legal clerk or at a VC firm, you could have GPT-3 summarize term sheets and contracts for you and highlight unusual terms and key points, allowing you to focus on the negotiation and finding precedents, and giving you some insight before reading often dry legal material.

In a technical capacity, the applications are even more ubiquitous. The introduction and popularization of GitHub Copilot (itself based on GPT-3 models) shows what these models can do. In brief, they can:

  • Create or extend CI/CD pipelines (e.g. Groovy syntax)
  • Automate Python/bash utility scripts (creation & running)
  • Write Dockerfiles
  • Create cron jobs
  • Create PR descriptions based on commit messages
  • Review PRs and code for common issues (with proper prompting, this can be somewhere between a linter and a vulnerability scan)
  • Generate documentation for your code / APIs. For example, you could ask GPT-3 to create a README file, and the model would generate a detailed description of the project, including instructions on how to install, run, and deploy it.

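As one concrete illustration of the PR-description item, here is a sketch of how commit messages could be bundled into a prompt for the model. The `build_pr_prompt` helper and the hard-coded commit messages are assumptions; in practice the messages would come from `git log`:

```python
# Hypothetical sketch: build a prompt asking a GPT model to draft a
# pull-request description from commit messages. The helper name and the
# sample commits are illustrative; real input would come from `git log`.

def build_pr_prompt(commits: list) -> str:
    """Join commit messages into a single summarization prompt."""
    bullet_list = "\n".join(f"- {msg}" for msg in commits)
    return (
        "Write a concise pull-request description summarizing these commits:\n"
        f"{bullet_list}\n"
        "Description:"
    )

commits = [
    "Add retry logic to payment client",
    "Fix NPE when order total is null",
    "Bump Jackson to 2.14.1",
]
prompt = build_pr_prompt(commits)
# Send `prompt` to a completion endpoint and paste the result into the PR.
print(prompt)
```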
As mentioned above, where there is existing code or a schema to guide it, GPT-3 can often get you to 95% completion quickly. It has significantly helped me in areas where I am not current (e.g. coming back to bash scripting after a long hiatus, or writing some niche search optimization), getting me close enough to trigger some neurons to fire and carry the work across the finish line. Disclaimer: sometimes GPT-3 has gotten me 100% of the way there; most of the time it gets to 95%, and you can either help it get across the finish line or fix the script yourself. I find this is much faster than doing it all myself, YMMV.

Here are three examples showing how easy it is to have it create a Docker container for you. If you have questions, ideas for more, or want to dive deeper, drop a comment and we can explore it together!

Example 1: Run a Spring Boot app in docker
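The kind of Dockerfile GPT-3 typically produces for this prompt looks something like the following. The base image tag and the jar path are assumptions; your build may differ:

```dockerfile
# Hypothetical example of a GPT-generated Dockerfile for a Spring Boot app.
# Assumes the jar has already been built into target/ by Maven.
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY target/app.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```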

Example 2: Run Kafka in docker-compose
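For this prompt, the model tends to produce a single-node setup along these lines. The image tags, ports, and environment values are assumptions, sized for local development rather than production:

```yaml
# Hypothetical example of a GPT-generated docker-compose file for a
# single-node Kafka with ZooKeeper; tags and ports are assumptions.
version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:7.3.0
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```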

Example 3: Moderate complexity bash commands
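As a sketch of the “moderate complexity” level, here is the sort of bash GPT-3 handles well when asked to “count ERROR lines per log file and sort by count, highest first”. The sample log files and the /tmp path are assumptions created just to make the example self-contained:

```shell
# Hypothetical GPT-3 answer to: "count ERROR lines per log file and sort
# by count, highest first". Sample logs are created so the commands run
# standalone; a real run would point at your own log directory.
mkdir -p /tmp/gpt_demo_logs
printf 'ERROR a\nINFO b\nERROR c\n' > /tmp/gpt_demo_logs/app1.log
printf 'INFO x\nERROR y\n' > /tmp/gpt_demo_logs/app2.log

# grep -c prints "file:count"; sort on the count field, numeric, reversed.
grep -c 'ERROR' /tmp/gpt_demo_logs/*.log | sort -t: -k2 -nr
```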

Photo by Diego PH on Unsplash

Accelerate Development & Debugging

On top of automating the boring stuff, GPT-3 can also accelerate development and debugging. For example, you could extend your testing framework so that when a test fails, it feeds the failing class & error message into GPT-3 and asks it what the problem is. Often it will point you in the right direction. This could also be done as part of the production bug workflow: throw the bug and the relevant code into GPT-3 and get a good idea of what the issue may be. GPT-3 is good at thinking generally about problems, so if it does not see an issue with the code it may suggest other avenues for investigation (e.g. networking, status outages, etc.). To stay on the testing track, one of its strengths is creating more test cases and thinking of edge cases we might miss. Simply copy your test code, add a final `@Test` (or the equivalent syntax for your language), and see what it completes for you.
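The test-failure hook could be sketched like this. The `build_debug_prompt` helper, the sample failing code, and the commented-out completion call are illustrative assumptions, not a real framework integration:

```python
# Hypothetical sketch: when a test fails, bundle the failing source and
# the error message into a prompt asking a GPT model for likely causes.
# Helper names and sample values are assumptions for illustration.

def build_debug_prompt(source: str, error: str) -> str:
    """Combine failing code and its error into a diagnostic prompt."""
    return (
        "The following code fails its test.\n"
        f"Code:\n{source}\n"
        f"Error:\n{error}\n"
        "What is the most likely cause, and what else should be checked?"
    )

source = "def divide(a, b):\n    return a / b"
error = "ZeroDivisionError: division by zero"
prompt = build_debug_prompt(source, error)
# answer = openai.Completion.create(model="text-davinci-003", prompt=prompt)
print(prompt)
```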

In Closing

We are only just scratching the surface of what is possible with these models. There is a huge treasure trove of use cases to explore, and if you would like to explore some of these options and extend your app with GPT-3 models in any way, please reach out to me at ajw@enfuse.io. We’d love to see how we can partner to help you.

What’s next? This all uses the default models and relies on data that GPT-3 models have been trained on. You can use fine-tuning to use cheaper models and receive standardized, expected outputs, or you can use embeddings (and some math) to “extend” GPT-3’s knowledge without needing to retrain the models.

[1] I believe that these are the strengths of basic ChatGPT models. The next stage (and a future blog) will be to discuss embeddings and the ability to query NEW data with these same capabilities.

Author

Cahlen Humphreys