I use ChatGPT semi-often… for generating stuff in a repeating pattern. Any time I have used it to make code, I don’t save any time, because I have to debug most of the generated code anyway. My main use case lately is making Python dicts with empty values (e.g. key1, key2… becomes “key1”: “”, “key2”: “”,…) or making a gold/prod-level SQL view by passing in the backend names and frontend names (e.g. value_1, value_2… and Value 1, Value 2… becomes value_1 as Value 1, value_2 as Value 2,…).
I know this is gonna sound annoying, but I just use vim for stuff like this. Even Notepad++ has a macro feature, right? My coworkers keep saying how much of a productivity boost it is, but all I see it do is mess up stuff like this that only takes a few seconds to set up in vim, and I know it’ll be correct every time.
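For reference, either pattern is also just a few lines of Python; here’s a minimal sketch, with placeholder column names standing in for a real schema:

```python
# Sketch: generate both repetitive patterns from lists of names.
# The column names below are placeholders, not anyone's real schema.
backend = ["value_1", "value_2", "value_3"]
frontend = ["Value 1", "Value 2", "Value 3"]

# Python dict body with empty values: "value_1": "", "value_2": "", ...
print(",\n".join(f'"{k}": ""' for k in backend))

# SQL select list mapping backend names to frontend names:
# value_1 AS "Value 1", value_2 AS "Value 2", ...
print(",\n".join(f'{b} AS "{f}"' for b, f in zip(backend, frontend)))
```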
I use vim keybinds (via Doom Emacs) for this sort of stuff if I’m doing it for personal projects. My professional work is all done in an online platform (no way around it), so it’s just faster and easier to throw the pattern and columns at the integrated ChatGPT terminal rather than hop to a local editor and back.
I’ll use Copilot in place of most of the times I’ve searched Stack Overflow, or to do mundane things like generating repeated code, but relying solely on it is the same as relying solely on Stack Overflow.
It’s been a tremendous help to me as I relearn how to code on some personal projects. I have written 5 little apps that are very useful to me for my hobbies.
It’s also been helpful at work with some random database type stuff.
But it definitely gets stuff wrong. A lot of stuff.
The funny thing is, if you point out its mistakes, it often does better on subsequent attempts. It’s more an iterative process of refinement than a single prompt giving you the final answer.
It’s incredibly useful for learning. ChatGPT was what taught me to unlearn, essentially, writing C in every language, and how to write idiomatic Python and JavaScript.
It is very good for boilerplate code or fleshing out a big module without you having to do the typing. My experience was just like yours; once you’re past a certain (not real high) level of complexity, you’re looking at multiple rounds of improvement or else just doing it yourself.
Exactly. And for me, being in middle age, it’s a big help with recalling syntax. I generally know how to do stuff, but need a little refresher on the spelling, parameters, etc.
This is because all LLMs function primarily based on the token context you feed them.
The best way to use any LLM is to completely fill up its history with relevant context, then ask your question.
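In API terms, that might look something like this; a rough sketch using the OpenAI Python client, with the model name, context messages, and question all purely illustrative:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Front-load the relevant context, then put the actual question last.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whatever model you have access to
    messages=[
        {"role": "system", "content": "You are a careful coding assistant."},
        {"role": "user", "content": "Context: here is my table schema ..."},
        {"role": "user", "content": "Context: here is the failing query ..."},
        {"role": "user", "content": "Question: why does this return duplicates?"},
    ],
)
print(response.choices[0].message.content)
```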
I worked on a creative writing thing with it, and the more I added, the better its responses got. And 4 is a noticeable improvement over 3.5.
The funny thing is, if you point out its mistakes, it often does better on subsequent attempts.
Or it gets stuck in an endless loop of two different but wrong solutions.
Me: This is my system, version x. I want to achieve this.
ChatGPT: Here’s the solution.
Me: But this only works with version y of the given system, not x.
ChatGPT: <Apology> Try this.
Me: This is using a method that never existed in the framework.
ChatGPT: <Apology> <Gives first solution again>
While explaining BTRFS I’ve seen ChatGPT contradict itself in the middle of a paragraph. Then when I call it out it apologizes and then contradicts itself again with slightly different verbiage.
- “Oh, I see the problem. In order to correct (what went wrong with the last implementation), we can (complete code re-implementation which also doesn’t work)”
- Goto 1
I used to have this issue more often as well. I’ve had good results recently by **not** pointing out mistakes in replies, but by going back to the message before GPT’s response and saying “do not include y.”
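In API terms, that edit is the difference between appending a correction turn and resending a fixed version of the original prompt; a hypothetical sketch, with made-up prompts:

```python
# Appending a correction keeps the bad answer in the context,
# where the model can keep anchoring on it:
messages = [
    {"role": "user", "content": "Write a parser for this log format: ..."},
    {"role": "assistant", "content": "(response that wrongly included y)"},
    {"role": "user", "content": "Do not include y."},
]

# Editing the original message drops the bad answer entirely,
# so the next attempt starts from a clean, clearer prompt:
messages = [
    {"role": "user", "content": "Write a parser for this log format: ... "
                                "Do not include y."},
]
```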
Agreed, I send my first prompt, review the output, smack my head “obviously it couldn’t read my mind on that missing requirement”, and go back and edit the first prompt as if I really was a competent and clear communicator all along.
It’s actually not a bad strategy because it can make some adept assumptions that may have seemed pertinent to include, so instead of typing out every requirement you can think of, you speech-to-text* a half-assed prompt and then know exactly what to fix a few seconds later.
*[ad] Free Ecco Dictate on iOS, TypingMind’s built-in dictation… anything using OpenAI Whisper has godly accuracy. BTW, TypingMind is great: stick in GPT-4o & Claude 3 Opus API keys and boom.
Ha! That definitely happens sometimes, too.
But only sometimes. Not often enough that I don’t still find it more useful than not.
I was recently asked to make a small Android app using Flutter, which I had never touched before.
I used ChatGPT at first and it was so painful to get correct answers, but then I made an agent (or whatever it’s called) where I gave it instructions saying it was a Flutter dev, and gave it a bunch of specifics about what I was working on.
Suddenly it became really useful… I could throw it chunks of code and it would straight away tell me where the error was and what I needed to change.
I could ask it to write me an example method for something that I could then easily adapt for my use.
One thing I would do is ask it to write a method to do X while I was writing the part that would use that method.
This wasn’t a big project and the whole thing took less than 40 hours, but for me to pick up a new language, set up the development environment, and make a working app for a specific task in 40 hours was a huge deal to me… I think without ChatGPT, just learning all the basics and debugging would have taken more than 40 hours alone.
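The “agent” setup described here is essentially a persistent system prompt; it might look something like this, with the wording and project details entirely invented:

```
You are an experienced Flutter/Dart developer. My app targets Android,
uses Material widgets, and talks to a REST backend. When I paste code,
identify the error and the exact change needed; keep explanations brief.
```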
We need a comparison against an average coder. Some fucking baseline ffs.
Worth noting this study was done on GPT-3.5; 4 is leagues better than 3.5. I’d be interested to see how this number has changed.
There is a huge gap between 3.5 and 4, especially on coding-related questions. GPT-3.5 does not have a large enough context window to handle harder code-related questions.
4 made up functions that didn’t exist the last time I asked it a programming question.
Sure, I’m not saying GPT-4 is perfect, just that it’s known to be a lot better than 3.5. That’s kinda why I’d be interested to see how much better it actually is.
This is why I like Bing Chat for this kind of thing, it does a web search in the background and will often be working right from the API documentation.
AI Defenders! Assemble!
No need to defend it.
Either its value is sufficient that businesses can make money by implementing it and it gets used, or it isn’t.
I’m personally already using it to make money, so I suspect it’s going to stick around.
I wonder if the AI is using bad code pulled from threads where people are asking questions about why their code isn’t working, but ChatGPT can’t tell the difference and just assumes all code is good code.
For someone doing a study on LLMs, they don’t seem to know much about LLMs.
They don’t even mention which model was used…
Here’s the study used for this clickbait garbage:
What’s especially troubling is that many human programmers seem to prefer the ChatGPT answers. The Purdue researchers polled 12 programmers — admittedly a small sample size — and found they preferred ChatGPT at a rate of 35 percent and didn’t catch AI-generated mistakes at 39 percent.
Why is this happening? It might just be that ChatGPT is more polite than people online.
It’s probably more because you can ask it your exact question (not just search for something more or less similar) and it will at least give you a lead that you can use to discover the answer, even if it doesn’t give you a perfect answer.
Also, who does a survey of 12 people and publishes the results? Is that normal?
Even this Lemmy thread has more participants than the survey
I have 13 friends who are researchers and they publish surveys like that all the time.
(You can trust this comment because I peer reviewed it.)
In the short term it really helps productivity, but in the end the reward for working faster is more work. Just doing the hard parts all day is going to burn developers out.
I program for a living, and I think of it more as doing the interesting tasks all day rather than the mundane and repetitive. ChatGPT and GitHub Copilot are great for getting something roughly right that you can tweak to work the way you want.
I think we must change the way we see AI. A lot of people see it as the holy grail that can do everything we can do, even though it can’t. AI is a tool for humans to become more efficient in their work. It can do easy tasks for you and sometimes assist you with harder stuff. It’s the same as with mathematicians and calculators: a good mathematician is able to calculate everything he needs without a calculator, but the calculator makes him much more efficient at calculating stuff. The calculator didn’t replace mathematicians, because you still have to know how to do the stuff you’re doing.
I’m surprised it scores that well.
Well, ok… that seems about right for languages like JavaScript or Python, but try it on languages with a reputation for being widely used to write terrible code, like Java or PHP (hence having been trained on terrible code), and it’s actively detrimental to even experienced developers.
You forgot the “at least” before the 52%.
Developing with ChatGPT feels bizarrely like when Tony Stark invented a new element with Jarvis’ assistance.
It’s a prolonged back and forth, and you need to point out the AI’s mistakes and work through a ton of iterations to get something close enough that you can tweak it and use it, but it’s SO much faster than trawling through Stack Overflow or hoping someone who knows more than you will answer a post for you.
Yeah, if you treat it as a junior engineer with the ability to instantly research a topic, and are prepared to engage in a conversation to work toward a working answer, then it can work extremely well.
Some of the best outcomes I’ve had have needed 20+ prompts, but I still arrived at a solution faster than any other method.
In the end, there is this great fear that “the AI is going to fully replace us developers,” and the reality is that while that may be a possibility one day, it won’t be any day soon.
You still need people with deep technical knowledge to pilot the AI and drive it to an implemented solution.
AI isn’t the end of the industry; it has just greatly sped up the industry.
The interesting bit for me is that if you ask a rando some programming questions, they will be 99% wrong on average, I think.
Stack overflow still makes more sense though.
The best method I’ve found for using it is to help with languages you may have lost familiarity with, and to walk it through what you need step by step. This lets you evaluate its reasoning. When it gets stuck in a loop:
Try A!
Actually A doesn’t work because that method doesn’t exist.
Oh sorry Try B!
Yeah B doesn’t work either.
You’re right, so sorry about that, Try A!
Yeah… we just did this.
At that point it’s time to just close it down and try another AI.