It’s been a tremendous help to me as I relearn how to code on some personal projects. I have written 5 little apps that are very useful to me for my hobbies.
It’s also been helpful at work with some random database type stuff.
But it definitely gets stuff wrong. A lot of stuff.
The funny thing is, if you point out its mistakes, it often does better on subsequent attempts. It’s more an iterative process of refinement than a single prompt giving you the final answer.
Or it gets stuck in an endless loop of two different but equally wrong solutions.
Me: This is my system, version x. I want to achieve this.
ChatGPT: Here’s the solution.
Me: But this only works with version y of the given system, not x.
ChatGPT: <Apology> Try this.
Me: This is using a method that never existed in the framework.
ChatGPT: <Apology> <Gives the first solution again>
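“Oh, I see the problem. In order to correct (what went wrong with the last implementation), we can (complete code re-implementation which also doesn’t work).”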
I used to have this issue more often as well. I’ve had good results recently by **not** pointing out mistakes in replies, but by going back to the message before GPT’s response and editing it to say “do not include y.”
Agreed. I send my first prompt, review the output, smack my head (“obviously it couldn’t read my mind on that missing requirement”), and go back and edit the first prompt as if I had been a competent, clear communicator all along.
It’s actually not a bad strategy, because it makes some adept assumptions about requirements you hadn’t thought to spell out. So instead of typing out every requirement you can think of, you speech-to-text* a half-assed prompt and then know exactly what to fix a few seconds later.
*[ad] free Ecco Dictate on iOS, TypingMind’s built-in dictation… anything using OpenAI Whisper, godly accuracy. btw TypingMind is great - stick in GPT-4o & Claude 3 Opus API keys and boom
Ha! That definitely happens sometimes, too.
But only sometimes. Not often enough that I don’t still find it more useful than not.
I’ve seen ChatGPT contradict itself in the middle of a paragraph while explaining BTRFS. When I call it out, it apologizes and then contradicts itself again with slightly different verbiage.
It’s incredibly useful for learning. ChatGPT is what taught me to unlearn writing C in every language, essentially, and to write idiomatic Python and JavaScript instead.
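To make that concrete (a toy example of my own, not something ChatGPT produced): the C habit of manual index bookkeeping versus the idiomatic Python version.

```python
# The C habit carried into Python: manual index bookkeeping.
names = ["ada", "grace", "linus"]
upper = []
i = 0
while i < len(names):
    upper.append(names[i].upper())
    i += 1

# Idiomatic Python: iterate directly, or use a comprehension.
upper = [name.upper() for name in names]
```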
It is very good for boilerplate code or fleshing out a big module without you having to do the typing. My experience was just like yours; once you’re past a certain (not very high) level of complexity, you’re looking at multiple rounds of improvement or else just doing it yourself.
Exactly. And for me, being in middle age, it’s a big help with recalling syntax. I generally know how to do stuff, but need a little refresher on the spelling, parameters, etc.
Personally I find LLMs in general not that great at writing larger blocks of code. They’re fine for smaller stuff, but the more you expect out of them, the more they get wrong.
I find they work best with existing code that you provide, like “make this block of code more efficient” or “rewrite this function to do X”.
I was recently asked to make a small Android app using Flutter, which I had never touched before.
I used ChatGPT at first and it was so painful to get correct answers, but then I made an agent (or whatever it’s called) where I gave it instructions saying it was a Flutter dev, along with a bunch of specifics about what I was working on (a rough sketch of the idea is below).
Suddenly it became really useful: I could throw it chunks of code and it would straight away tell me where the error was and what I needed to change.
I could ask it to write an example method for something that I could then easily adapt for my own use.
One thing I would do was ask it to write a method to do X while I was writing the part that would use that method.
This wasn’t a big project and the whole thing took less than 40 hours, but for me to pick up a new language, set up the development environment, and make a working app for a specific task in 40 hours was a huge deal… I think without ChatGPT, just learning the basics and debugging would have taken more than 40 hours on its own.
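If you want to reproduce that “Flutter dev agent” setup through the API rather than the ChatGPT UI, here’s a minimal sketch using the official openai Python package; the model name, project details, and question are placeholders, not the actual prompts I used.

```python
# Minimal sketch of the "give it a persona plus project specifics" approach.
# Assumes the official `openai` Python package (>=1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are an experienced Flutter developer. "
    "The project is a small Android app for tracking workshop inventory, "  # placeholder specifics
    "targeting Flutter 3.x with null safety. "
    "When I paste code, identify the exact error and show only the lines that need to change."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat model works
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Here is my build() method: <paste code>. Why does it overflow on small screens?"},
    ],
)
print(response.choices[0].message.content)
```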
This is because all LLMs function primarily based on the token context you feed them.
The best way to use any LLM is to completely fill up its history with relevant context, then ask your question.
I worked on a creative writing project with it, and the more context I added, the better its responses got. And GPT-4 is a noticeable improvement over 3.5.
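Here’s a rough sketch of what “fill up the history with relevant context, then ask” can look like in practice; the file paths and the question are placeholders for whatever your project actually contains.

```python
# Sketch: front-load the prompt with relevant project files before asking the question.
# File paths and the question are placeholders, not a real project.
from pathlib import Path

context_files = ["models/order.py", "services/billing.py", "tests/test_billing.py"]

context = "\n\n".join(
    f"### {path}\n{Path(path).read_text()}" for path in context_files
)

question = "Given the code above, why does test_refund_rounding fail on amounts like 10.005?"

prompt = f"{context}\n\n{question}"
# Paste `prompt` into ChatGPT, or send it as a single user message via the API.
print(prompt[:500])  # quick preview of what the model will actually see
```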