Forum Index : Microcontroller and PC projects : ChatGPT conversation anyone?
Page 2 of 4
twofingers Guru Joined: 02/06/2014 Location: Germany Posts: 1236
Is there a larger size? Hmm, what if we want to discuss this seriously? It's not as if ChatGPT doesn't admit to hallucinating sometimes. But does that make it worthless? Isn't it the job of science to evaluate information and data and determine their truth content? Shouldn't most people here be able to do that? By the way, ChatGPT 4.0 itself says (when asked about its capabilities and limitations): and later You could just as well say: only trust your professor if you already know the answer. Edited 2024-09-08 02:50 by twofingers causality ≠ correlation ≠ coincidence
Mixtel90 Guru Joined: 05/10/2019 Location: United Kingdom Posts: 6771
Your professor might give you incorrect information, but that's because he may have limited understanding of the matter. When an AI program gives incorrect information it's because it is working from incorrect data and "polishing" the answer to make it appear more plausible. The professor is intelligent but can make mistakes. The AI has no intelligence and is totally dependent on data, much of which it has no means to verify because there is too little of it or it is too contradictory. As an AI system is totally dependent on the amount *and accuracy* of its data, the worst place from which to get it is probably the internet in general. :) It would be very helpful if all AI replies came with an accuracy probability score. Mick Zilog Inside! nascom.info for Nascom & Gemini Preliminary MMBasic docs & my PCB designs
twofingers Guru Joined: 02/06/2014 Location: Germany Posts: 1236
Wouldn't the correctness and accuracy of the input data have to be known for this? (The problem known as GIGO: garbage in, garbage out.) Incidentally, I believe that it's not only organic systems that have the potential for intelligence, however intelligence is defined. I asked ChatGPT: ChatGPT: I asked about the size of the database. The answer: "huge amounts of text from various sources until September 2023". Later it said something about terabytes to petabytes. Well, I think a terabyte is a gross understatement; that's the amount of data I have at home. A better estimate? ChatGPT: That seems way too small to me. Edited 2024-09-08 04:27 by twofingers causality ≠ correlation ≠ coincidence
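For a sense of scale, here is a rough back-of-envelope calculation, assuming a text corpus on the order of ten trillion tokens at about four bytes of text per token. Both figures are round-number assumptions for illustration, not published numbers for any particular model:

```python
# Rough estimate of raw training-text size.
# Both figures below are round-number assumptions, not specifications.
tokens = 10e12           # assume ~10 trillion training tokens
bytes_per_token = 4      # assume ~4 bytes of UTF-8 text per token

total_tb = tokens * bytes_per_token / 1e12   # decimal terabytes
print(f"~{total_tb:.0f} TB of raw text")     # ~40 TB
```

So tens of terabytes of plain text is a plausible order of magnitude for a filtered training corpus; petabyte figures tend to describe the raw web crawl before filtering, or multimodal data.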
twofingers Guru Joined: 02/06/2014 Location: Germany Posts: 1236
I asked ChatGPT: "How could your skills be improved?" I think this is a look into the future. causality ≠ correlation ≠ coincidence
Volhout Guru Joined: 05/03/2018 Location: Netherlands Posts: 4222
The internet is full of information, often contradicting other information. And you would rely on AI to tell you what is correct? It is a toy, and many like playing with toys. Have fun... Volhout PicomiteVGA PETSCII ROBOTS
Malibu Senior Member Joined: 07/07/2018 Location: Australia Posts: 228
I did use AI a lot when I was trying to figure out the WiFi connection in the WebMite source code. With no experience in C/C++, I struggled with the syntax, and Google searches gave too many links to find a simple answer. AI did a good job of condensing the masses of info into a short paragraph, and it certainly helped with the particular problem I was chasing. I think it was Phind I used. From the ChatGPT 'conversation', it looked to me as though "I don't know" was not an option as an answer, so basically it just started making stuff up - literally. I also asked where the company I work for is situated. It seems I work in 'The Bahamas'. Hmmmm... I don't recall driving 16000km to work every day. Still, the whole thing is pretty funny. John
Mixtel90 Guru Joined: 05/10/2019 Location: United Kingdom Posts: 6771
I think you have a point. "I don't know" or "I have insufficient data to give a meaningful answer" do seem to be replaced by hallucinations. I wonder if that's a function of the algorithm, always attempting to provide an answer, or if it's simply not possible for an AI system to know enough about its data? Mick Zilog Inside! nascom.info for Nascom & Gemini Preliminary MMBasic docs & my PCB designs
TassyJim Guru Joined: 07/08/2011 Location: Australia Posts: 6094
When I went to university, the first thing I was told was: "You thought you were here to learn to be engineers. No. You are here to learn how to learn." In my day you could discuss things with the professor and your classmates, not just believe what is first said. You could trace the source of the "information" and question its validity. Now the students do remote learning and lose a lot of the ability to discuss things. Add to that, we now believe the garbage we get from Facebook and TikTok and whatever the next fad is - currently AI. Now I sound like Marvin the Paranoid Android. Sigh... Jim VK7JH MMedit MMBasic Help
pwillard Senior Member Joined: 07/06/2022 Location: United States Posts: 292
But not without good reason...
zeitfest Guru Joined: 31/07/2019 Location: Australia Posts: 482
As long as humans have bullsh*t detectors - and are able to use them, instead of being herded around - we'll be fine.
zeitfest Guru Joined: 31/07/2019 Location: Australia Posts: 482
Hmm. Re Marvin the Paranoid Android... Andy Grove, who ran Intel Corp -
EDNEDN Senior Member Joined: 18/02/2023 Location: United States Posts: 118
Except... that is obviously not true. Humans have been subjected to the manipulations of the 'powers that be' for centuries. And we keep getting sucked deeper and deeper into their web of lies. The good news is... that period in humanity's history is almost over.
Malibu Senior Member Joined: 07/07/2018 Location: Australia Posts: 228
Nahhh, I think you're going OK there, Jim... Maybe if you had said "Life... Don't talk to me about life.", we might be a bit worried. Those were my thoughts on how it worked. AI (or any code, for that matter) doesn't know anything about its data, except what has been coded by humans. The answers I saw indicated that there weren't enough 'error traps' coded in for the unexpected circumstances of the questions. A simple

  Select Case [question]
    blah... blah...
    Case Else
      ? "Sorry, I don't know the answer to that"
  End Select

would have covered a lot of the garbage. ...but of course, no one using it would be impressed with AI if it admitted to not knowing the answer. John
TassyJim Guru Joined: 07/08/2011 Location: Australia Posts: 6094
To paraphrase another quote VK7JH MMedit MMBasic Help
twofingers Guru Joined: 02/06/2014 Location: Germany Posts: 1236
The discussion reminds me of The Fisherman and His Wife. Yes, ChatGPT is a tool, and like any tool you have to know how to use it. If I use a hammer to remove screws, do I complain? Should I complain? Wouldn't that be childish? Shouldn't we rather be amazed at what is technically doable, and talk about the benefits... and the dangers? Have we forgotten how to be amazed? Edited 2024-09-08 15:10 by twofingers causality ≠ correlation ≠ coincidence
phil99 Guru Joined: 11/02/2018 Location: Australia Posts: 2134
The reason it is called AI is that it isn't explicitly "coded by humans". Only a broad outline of what the programmers want is coded, along with the tools needed to search vast amounts of data and look for patterns and connections. From that point on the system is self-modifying (including the original outline and search methods) in a continuously iterative web of loops. After the learning period it bears little resemblance to human-generated code. Attempts have been made to understand how it got to where it is and does what it does, but it has become so convoluted, with myriad loops and dense webs of conditional branches between loops, that no-one can unravel it.
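Strictly speaking, in most modern systems it is the learned numeric parameters that change during training, not the program code itself: the human-written part is a small fixed outline (the training loop), and the behaviour emerges from the data. A toy sketch of that division of labour follows; everything in it (the AND-function task, the learning rate, the epoch count) is purely illustrative and nothing like the scale of a real LLM:

```python
# The "outline" (this loop) is written by a human; the behaviour
# comes from parameters w and b learned from data.
# Toy perceptron learning the AND function - illustrative only.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # learned weights, start at zero
b = 0.0          # learned bias
lr = 0.1         # learning rate (arbitrary choice)

for _ in range(20):                      # fixed, human-written training loop
    for (x1, x2), target in data:
        out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
        err = target - out               # the update rule adjusts the
        w[0] += lr * err * x1            # parameters, never the code
        w[1] += lr * err * x2
        b += lr * err

predict = lambda x1, x2: 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

The loop itself never changes; only w and b do. Yet after training the program "knows" something that no line of its code states explicitly, which is why inspecting the finished system tells you so little about how it was built.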
EDNEDN Senior Member Joined: 18/02/2023 Location: United States Posts: 118
Spaghetti code.
Grogster Admin Group Joined: 31/12/2012 Location: New Zealand Posts: 9305
"That information, is not available." Zen (the computer on the Liberator in the Blake's 7 sci-fi series) Smoke makes things work. When the smoke gets out, it stops!
Mixtel90 Guru Joined: 05/10/2019 Location: United Kingdom Posts: 6771
AI is extremely clever and what we see is definitely amazing! It "learns" - well, sort of - and we are told that its answers improve as it does so. (That may or may not be true - it depends on the accuracy of the new data and whether it learns to discard old data correctly.)

Yes, it can be used as a tool. No, it's not necessarily an accurate tool. It's not "Deep Thought" - the fount of all human knowledge. Or, more precisely, it is, but it can't tell which parts of its knowledge are true and which are false other than by comparison. Ask it a question and you'll get an answer of some sort. You then need to verify that answer against known truths, as it's a toss-up whether the answer is correct or merely sounds like the sort of answer that question would get - a false truth, if you like.

AI shouldn't (at the current stage, anyway) be used as a question-and-answer box. It's fun to do so, but that's all. Always expect lies and inaccuracies. Its real use is for deep analysis of systems that can be defined mathematically. It is then capable of using "fuzzy logic" to find results that a human couldn't find in a lifetime.

There is a danger that people start to attribute human characteristics to AI systems and start to trust them. But there is no humanity there. There is no empathy. There is no intelligence. The human-machine interface is based on things like ELIZA, so it *sounds* plausible. Everything it tells you can usually be found using an internet search engine (and not necessarily from verifiable sources) or is made to sound like something factual even though it isn't. The case comes to mind of legal researchers quoting non-existent cases in court because they'd been using AI to find typical cases. They *sounded* genuine and had all the hallmarks of being genuine, but they never existed.

Mick Zilog Inside! nascom.info for Nascom & Gemini Preliminary MMBasic docs & my PCB designs
Mixtel90 Guru Joined: 05/10/2019 Location: United Kingdom Posts: 6771
YAY! :) I loved Blake's 7 - even the cardboard sets and, especially, Servalan :) There is a photo of one of the original Orac machines. Lovely, built out of plastic tube and relays. :) Mick Zilog Inside! nascom.info for Nascom & Gemini Preliminary MMBasic docs & my PCB designs