How I use ChatGPT (and other LLMs)

As ChatGPT & other LLMs (large language models) have become more mainstream & usable in everyday working life, there has been a lot of discussion & hype about how they can be used. Particularly in the field of software engineering, they are being heralded as a way to boost developer productivity & reduce the need for engineers.

While that all sounds great & I'm all for it (I'd love to put my feet up & get a computer to do my work for me), I think their use & applicability are wildly overblown. I'm going to tell you how I've used or currently use LLMs, as well as what I've tried them on & decided against using them for in the future. Hopefully, along the way, I can explain a bit about the difficult parts of being an engineer & what the more straightforward parts are.

LLMs (of which ChatGPT is one) are a type of artificial intelligence (AI) that can generate some kind of output in response to human-like input. What this essentially means is that you can talk to an LLM & it can respond with output of its own creation (as opposed to retrieving information that already exists, as a search engine would).

At a base level this sounds like a great idea: I can ask an LLM to generate some code for me to solve a problem. While it can do such a thing, I would hesitate to say that its output is frequently useful.

For starters, the most difficult part of my job as an engineer is not simply typing out code to do a certain task. More commonly, it's understanding the requirements given to me by my Product Owner or Manager & working out how our existing applications, data & tools can be leveraged to fulfil the task at hand.

When it comes to actually writing code that does what I want it to do, the hardest part of my job is already over. That being said, you can get an LLM to write code for you, but my argument is that the benefit you receive is frequently not worth the cost of tuning your prompt & adapting the code it gives you to fit your specific scenario.

For example, I recently had to create a fairly straightforward regex. It had to match a credit card number & return the last 4 digits as a captured group. I asked ChatGPT to create a regex to do this & what it gave me did not work.

Because it did not work, I then had to pick apart the regex it had provided, figure out what it was doing & fix it. I would argue that if I had instead spent this time learning more about regexes & developing my own, I would understand them better the next time I had to do something similar, & I would be more confident in what I'd written & more comfortable making changes to it in the future.
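For what it's worth, a working version of what I was after looks something like the following. This is a minimal sketch in Python, & it assumes a simple 16-digit card format with optional space or dash separators; real card numbers vary in length & issuer prefix, so treat it as an illustration rather than a production-ready validator.

```python
import re

# Matches a 16-digit card number written with optional space/dash
# separators & captures the last four digits as group 1.
# Assumes a simple 16-digit format; real cards vary by issuer.
CARD_RE = re.compile(r"\b(?:\d{4}[- ]?){3}(\d{4})\b")

match = CARD_RE.search("Customer paid with card 4111-1111-1111-1234.")
if match:
    print(match.group(1))  # prints: 1234
```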

In the grand scheme of things, creating such a regex is a small task & was a small part of implementing the bigger requested feature. If ChatGPT is not able to do something this small, I think it would struggle even more with something of a reasonable size. More to the point, if what it gave me on a larger task didn't work the first time, it would take far longer to understand what it had written & attempt to fix it. I think in such a case it would decrease my productivity: I would spend so much time massaging ChatGPT & the code it outputs that I could have just done the work myself, gained the knowledge & levelled up as an employee.

I have also tried GitHub Copilot in the past & had a very similar experience. Even though it had access to all the code in my project, it was still unable to give me a working example.

However, it is not all disappointment on the side of LLMs. Something I have used ChatGPT for recently & will continue to use it for is documentation tasks. Recently I had to re-word some documentation to change its point of view. I quickly fired the text into ChatGPT, asked it to change it to a second-person viewpoint, reviewed the results & then pasted them back into the documentation file. In this case, it provided me with great results, perhaps even better than I could have managed by myself & in a more timely fashion.
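The prompt needed was nothing sophisticated either; something along the lines of (paraphrasing, not my exact wording): "Rewrite the following documentation in the second person, keeping the technical content unchanged", followed by the pasted text.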

Editing documentation like this is something I think LLMs can help out with quite easily, particularly if you already have a base to work with & are merely editing rather than starting from scratch.

Another thing that I am excited to try out with an LLM is writing boilerplate code. In some programming languages, it can be common to write vast swathes of near-identical code that is very simple but time-consuming to write. For this purpose, I would be interested in giving an LLM a try & seeing what it can do. In this use case, I don't think the LLM is doing anything particularly complex. The meat & complexity of any programming task is not in the boilerplate; it's just a time-consuming chore that you can get a computer to do. The key thing I am trying to impress upon you is that using an LLM has not made the job of engineering any easier; it's just taken a potentially time-consuming task & offloaded it to a computer that is intelligent enough to carry it out.
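To make "boilerplate" concrete, here's the sort of thing I mean; a minimal, hypothetical Python example (the class & its fields are invented for illustration):

```python
# The kind of repetitive plumbing code I mean: nothing here is
# intellectually hard, it's just tedious to type out by hand.
class Customer:
    def __init__(self, name: str, email: str, card_last4: str):
        self.name = name
        self.email = email
        self.card_last4 = card_last4

    def __repr__(self) -> str:
        return (f"Customer(name={self.name!r}, email={self.email!r}, "
                f"card_last4={self.card_last4!r})")

    def __eq__(self, other) -> bool:
        if not isinstance(other, Customer):
            return NotImplemented
        return (self.name, self.email, self.card_last4) == \
               (other.name, other.email, other.card_last4)
```

None of this is difficult to write; it just eats time, which is exactly the kind of work I'd be happy to hand to an LLM & then review.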

I'm very interested in how I can continue to utilise LLMs in such a way going forward: if I can offload some of the simple, rote, time-consuming tasks onto a machine, then I can spend more of my time on the interesting, complex tasks that will ultimately make me a better engineer & employee for the future.

To conclude this article, I think that LLMs are largely over-hyped for the task of software engineering, but they could have niche uses for the repetitive, simple tasks that may be part of an engineer's job in any role. They should not be expected to carry out complex engineering tasks & would not be able to replace any engineer worth their salt (at least not based upon the current ability of LLMs).