JAQForum Ver 24.01

Forum Index : Microcontroller and PC projects : ChatGPT conversation anyone?

     Page 2 of 4    
Author Message
twofingers
Guru

Joined: 02/06/2014
Location: Germany
Posts: 1236
Posted: 03:20pm 07 Sep 2024

  pwillard said  ... Here's your shirt...

Is there a larger size?

  Quote  Trust ChatGPT only if you already know the answer.

Hmm, can we discuss this seriously? It's not as if ChatGPT doesn't admit to hallucinating sometimes. But does that make it worthless? Isn't it the job and function of science to evaluate information and data and determine their truth content? Shouldn't most people here be able to do that?

By the way, ChatGPT 4.0 itself says (when asked about its capabilities and limitations):
  Quote  ... Risk of "Hallucination": I might sometimes generate responses that seem plausible but are incorrect or nonsensical—a phenomenon known as "hallucination" in AI ...

and later
  Quote  ... These capabilities and limitations define what I can do effectively and where I might fall short. My goal is to assist and provide value, but I rely on users to critically evaluate my output, especially in situations requiring high accuracy or up-to-date information.


You could just as well say: Only trust your professor if you already know the answer.
Edited 2024-09-08 02:50 by twofingers
causality ≠ correlation ≠ coincidence
 
Mixtel90

Guru

Joined: 05/10/2019
Location: United Kingdom
Posts: 6771
Posted: 05:02pm 07 Sep 2024

Your professor might give you incorrect information, but that's because he may have a limited understanding of the matter.

When an AI program gives incorrect information it's because it is working from incorrect data and "polishing" the answer to make it appear more plausible. The professor is intelligent but can make mistakes. The AI has no intelligence and is totally dependent on data, much of which it has no means to verify because there is too little of it or it is too contradictory.

As an AI system is totally dependent on the amount *and accuracy* of its data, the worst place from which to get it is probably the internet in general. :)

It would be very helpful if all AI replies came with an accuracy probability score.
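A crude version of such a score could be sketched from a model's per-token log-probabilities, if the API exposes them. This is only an illustration: the numbers below are invented, and a high score would only mean the model found its own wording likely, not that the answer is true.

```python
import math

def confidence_score(token_logprobs):
    """Geometric mean of token probabilities, as a crude 'sureness' score.
    NOTE: this measures how unsurprising the wording was to the model,
    which is not the same thing as the answer being correct."""
    if not token_logprobs:
        return 0.0
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

# Invented per-token log-probabilities for two hypothetical replies
confident_reply = [-0.05, -0.10, -0.02, -0.08]   # model very sure of its wording
shaky_reply     = [-1.2, -2.5, -0.9, -3.1]       # model effectively guessing

print(round(confidence_score(confident_reply), 2))   # close to 1.0
print(round(confidence_score(shaky_reply), 2))       # much lower
```

Even this toy version shows the catch: a fluent hallucination can score just as highly as a true statement.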
Mick

Zilog Inside! nascom.info for Nascom & Gemini
Preliminary MMBasic docs & my PCB designs
 
twofingers
Guru

Joined: 02/06/2014
Location: Germany
Posts: 1236
Posted: 05:16pm 07 Sep 2024

  Mixtel90 said  ... It would be very helpful if all AI replies came with an accuracy probability score.

Wouldn't the correctness and accuracy of the input data have to be known for that? (The GIGO problem: garbage in, garbage out.)

Incidentally, I believe that organic systems are not the only ones with the potential for intelligence, however intelligence is defined.

I asked ChatGPT:
  Quote  do you have the ability to mark the areas of your answers where you are unsure or hallucinating?


ChatGPT:
  Quote  I do not have the ability to explicitly mark areas of my responses where I might be unsure or hallucinating. However, I can provide disclaimers or express uncertainty when responding to complex or ambiguous questions, indicating that my response might not be fully reliable.

If you ever feel that a response might be uncertain or require further verification, you can ask me to clarify, or double-check the information using trusted sources.


I asked about the size of the database. The answer: "huge amounts of text from various sources up to September 2023". Later it said something about terabytes to petabytes.

Well, I think a terabyte is a gross understatement. That's the amount of data I have at home. So I asked for a better estimate.

ChatGPT:
  Quote  The size of the data used to train a model like GPT can vary and isn't directly comparable to the data one might store on terabyte hard drives at home. The data used for training consists of a vast collection of text from the internet, specifically prepared for machine learning. Here's a better estimate:

Raw Data Volume: The amount of raw text data used for training could reach several hundred terabytes. These data come from a wide range of sources and are unstructured.

Processed Data: After processing and cleaning the raw data to remove duplicates and optimize the text for training, the actual data volume could be reduced to around 45-70 terabytes. This estimate is based on data volumes used for similar models like GPT-3.

Training Data: The data actually used in the training process, fed into the algorithm, is highly compressed and optimized. This data volume might be in the range of 10-20 terabytes, considering that GPT-3 used about 570 GB of deduplicated, processed text for training. However, since GPT-4 is more advanced, this figure might be higher.

Trained Model: After training, the model itself is much smaller, in the range of several hundred gigabytes, not terabytes.

In summary, the raw data might encompass hundreds of terabytes, while the optimized training data is more likely in the range of 10 to 70 terabytes.

That seems way too small to me.
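For what it's worth, the GPT-3 figures quoted above (about 570 GB of filtered text, and roughly 300 billion training tokens per the published paper) are at least self-consistent on the back of an envelope. The 4-bytes-per-token figure below is a rough rule of thumb for English text, not an exact constant:

```python
# Back-of-envelope check of the GPT-3 numbers quoted above.
# Assumption: one token averages roughly 4 bytes of English text.

bytes_per_token = 4
dataset_gb = 570                                  # filtered dataset size quoted above
tokens_in_dataset = dataset_gb * 1e9 / bytes_per_token

print(f"~{tokens_in_dataset / 1e9:.0f} billion tokens in {dataset_gb} GB")
# With roughly 300 billion tokens seen during training, the dataset was
# sampled a few times over, so hundreds of gigabytes (not petabytes) of
# processed text is a plausible scale for GPT-3.
```

So "terabytes, not petabytes" of cleaned training text is believable for that generation of model, even though the raw crawled data behind it is far larger.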
Edited 2024-09-08 04:27 by twofingers
causality ≠ correlation ≠ coincidence
 
twofingers
Guru

Joined: 02/06/2014
Location: Germany
Posts: 1236
Posted: 07:09pm 07 Sep 2024

I asked ChatGPT: "How could your skills be improved?"
  Quote  The improvement of my capabilities could occur on several levels:

1. Real-time access to current information:

Web browsing: Having the ability to search the internet in real-time would allow me to provide more accurate and timely responses.
Database integration: Access to specialized databases or scientific journals would enable me to offer more informed and specialized information.

2. Updating and expanding the training dataset:

Continuous training: Regular updates to my training dataset would ensure that I stay informed about the latest developments and information across various fields.
Dataset expansion: A broader and more diversified dataset could improve my ability to respond to an even wider range of topics.

3. Better understanding of uncertainties and misinterpretations:

Self-reflection modules: An enhanced ability to recognize uncertainties in my answers and clearly communicate them could increase trust in my responses.
Error detection: Integrating mechanisms to detect and correct potential errors in my answers could improve accuracy.

4. Improved interactivity and user customization:

Personalization: The ability to remember previous interactions and tailor my responses based on the preferences and knowledge level of a user could enhance the usefulness of my answers.
Dynamic adaptation: Improvements in my ability to adjust responses based on context and user feedback in real-time could make interactions smoother and more relevant.

5. Expanded modalities:

Integration of visual data: The ability to interpret and generate images, diagrams, or videos would significantly expand my functionality.
Speech processing: Enhanced speech processing capabilities could allow me to better understand and generate spoken language.

6. Ethics and transparency:

Ethics modules: An improved ability to incorporate ethical considerations into my responses could increase the responsibility and safety of my usage.
Explanation of decisions: The ability to more transparently explain my decision-making processes or the basis of my responses could strengthen trust in my usage.
Each of these improvements would help to expand my capabilities and enhance the quality of my interactions.

I think this is a look into the future.
causality ≠ correlation ≠ coincidence
 
Volhout
Guru

Joined: 05/03/2018
Location: Netherlands
Posts: 4222
Posted: 07:57pm 07 Sep 2024

The internet is full of information, often contradicting other information.  And you would rely on AI to tell you what is correct?

It is a toy, and many like playing with toys. Have fun...

Volhout
PicomiteVGA PETSCII ROBOTS
 
Malibu
Senior Member

Joined: 07/07/2018
Location: Australia
Posts: 228
Posted: 08:09pm 07 Sep 2024

  Gizmo said  I use chat GPT daily for my work. As a software developer who works with well documented languages, it does a really good job.
...
I mean, a question about writing a simple loop in php, it would have millions of examples in its library, but maybe only a few dozen in PicoMite basic.


I did use AI a lot when I was trying to figure out the WiFi connection in the WebMite source code. With no experience in C/C++, I struggled with the syntax and Google searches gave too many links to find a simple answer.
AI did a good job of condensing the masses of info into a short paragraph. It certainly helped with the particular problem I was chasing.
I think it was Phind I used.

From the ChatGPT 'conversation', it looked to me like "I don't know" was not an option as an answer, so basically it just started making stuff up - literally.

I also asked where the company I work for is situated. It seems I work in 'The Bahamas'.
Hmmmm... I don't recall driving 16,000 km to work every day.

Still, the whole thing is pretty funny though  
John
 
Mixtel90

Guru

Joined: 05/10/2019
Location: United Kingdom
Posts: 6771
Posted: 08:57pm 07 Sep 2024

I think you have a point. "I don't know" or "I have insufficient data to give a meaningful answer" do seem to be replaced by hallucinations. I wonder if that's a function of the algorithm, always attempting to provide an answer, or if it's simply not possible for an AI system to know enough about its data?
Mick

Zilog Inside! nascom.info for Nascom & Gemini
Preliminary MMBasic docs & my PCB designs
 
TassyJim

Guru

Joined: 07/08/2011
Location: Australia
Posts: 6094
Posted: 09:01pm 07 Sep 2024

  twofingers said  
You could just as well say: Only trust your professor if you already know the answer.


When I went to university, the first thing I was told was: "You thought you were here to learn to be engineers. No. You are here to learn how to learn."

In my day you could discuss things with the professor and your classmates, not just believe the first thing said.
You could trace the source of the "information" and question its validity.

Now the students do remote learning and lose a lot of the ability to discuss things.
Add to that, we now believe the garbage we get from Facebook and TikTok and whatever the next fad is - currently AI.

Now I sound like Marvin the Paranoid Android.

Sigh...
Jim
VK7JH
MMedit   MMBasic Help
 
pwillard
Senior Member

Joined: 07/06/2022
Location: United States
Posts: 292
Posted: 12:37am 08 Sep 2024

  Quote  Now I sound like Marvin the Paranoid Android.


But not without good reason...
 
zeitfest
Guru

Joined: 31/07/2019
Location: Australia
Posts: 482
Posted: 12:50am 08 Sep 2024

As long as humans have bullsh*t detectors - and are able to use them, instead of being herded around -  we'll be fine.    
 
zeitfest
Guru

Joined: 31/07/2019
Location: Australia
Posts: 482
Posted: 12:58am 08 Sep 2024

Hmm.

Re Marvin the Paranoid Android..

Andy Grove, who ran Intel Corp  -
  Quote  "Only the paranoid survive"
 
EDNEDN
Senior Member

Joined: 18/02/2023
Location: United States
Posts: 118
Posted: 03:27am 08 Sep 2024

  zeitfest said  As long as humans have bullsh*t detectors - and are able to use them, instead of being herded around -  we'll be fine.    


Except...   that is obviously not true.   Humans have been subjected to manipulation by the 'Powers that Be' for centuries.   And we keep getting sucked deeper and deeper into their web of lies.

The good news is...   That period in humanity's history is almost over.
 
Malibu
Senior Member

Joined: 07/07/2018
Location: Australia
Posts: 228
Posted: 04:03am 08 Sep 2024

  Quote  Now I sound like Marvin the Paranoid Android.

Nahhh, I think you're going OK there Jim...
Maybe if you had said: "Life... Don't talk to me about life.", we might be a bit worried  

  Quote  I wonder if that's a function of the algorithm, always attempting to provide an answer

Those were my thoughts on how it worked, too. AI (or any code, for that matter) doesn't know anything about its data, except what has been coded by humans.
The answers I saw indicated that there weren't enough 'error traps' coded in for the unexpected circumstances of the questions. A simple
Select Case question$
 Case "..." ' blah...
 Case "..." ' blah...
 Case Else
  Print "Sorry, I don't know the answer to that"
End Select

would have covered a lot of the garbage.
...but of course, no one using it would be impressed with AI if it admitted to not knowing the answer.
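In a modern model the `Case Else` is better thought of as a confidence threshold than a branch. A toy sketch of that idea - the lookup table and scores here are invented for illustration, and a real system works on model probabilities, not a dictionary:

```python
# Toy "refuse when unsure" rule. KNOWN_ANSWERS and its scores are
# invented for illustration; they stand in for whatever confidence
# estimate a real system might attach to its answers.
KNOWN_ANSWERS = {
    "what is a picomite": ("A Raspberry Pi Pico running MMBasic", 0.95),
    "where do i work":    ("The Bahamas", 0.30),   # confident-sounding guess
}

def answer(question, threshold=0.6):
    text, score = KNOWN_ANSWERS.get(question.lower(), ("", 0.0))
    if score < threshold:
        # The 'Case Else': refuse instead of polishing a guess
        return "Sorry, I don't know the answer to that"
    return text

print(answer("What is a PicoMite"))
print(answer("Where do I work"))   # low score, so it falls through
```

The catch, as noted above, is commercial: a chatbot that says "I don't know" half the time impresses nobody, so the threshold gets set low.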
John
 
TassyJim

Guru

Joined: 07/08/2011
Location: Australia
Posts: 6094
Posted: 04:08am 08 Sep 2024

  Quote  Case Else
? "Sorry, I don't know the answer to that"
End select

To paraphrase another quote
  Quote  COMPUTER SAYS DON'T NO

VK7JH
MMedit   MMBasic Help
 
twofingers
Guru

Joined: 02/06/2014
Location: Germany
Posts: 1236
Posted: 04:32am 08 Sep 2024

The discussion reminds me of The Fisherman and His Wife.
Yes, ChatGPT is a tool and like any tool you have to know how to use it. If I use a hammer to remove screws, do I complain? Should I complain? Wouldn't that be childish?
Shouldn't we rather be amazed at what is technically doable, and talk about the benefits ... and the dangers?
Have we forgotten how to be amazed?
Edited 2024-09-08 15:10 by twofingers
causality ≠ correlation ≠ coincidence
 
phil99

Guru

Joined: 11/02/2018
Location: Australia
Posts: 2134
Posted: 05:17am 08 Sep 2024

  Quote   AI (or any code for that matter) doesn't know anything about its data, except what has been coded by humans.
The reason it is called AI is it isn't explicitly "coded by humans". Only a broad outline of what the programmers want is coded, along with the tools needed to search vast amounts of data and look for patterns and connections.

From that point on the code is self-modifying (including the original outline and search methods) in a continuously iterative web of loops. After the learning period it bears little resemblance to human-generated code.

Attempts have been made to understand how it got to where it is and does what it does, but the code has become so convoluted with myriad loops and dense webs of conditional branches between loops that no-one can unravel it.
 
EDNEDN
Senior Member

Joined: 18/02/2023
Location: United States
Posts: 118
Posted: 05:40am 08 Sep 2024

  phil99 said   Attempts have been made to understand how it got to where it is and does what it does, but the code has become so convoluted with myriad loops and dense webs of conditional branches between loops that no-one can unravel it.


Spaghetti code.
 
Grogster

Admin Group

Joined: 31/12/2012
Location: New Zealand
Posts: 9305
Posted: 06:26am 08 Sep 2024

"That information, is not available."
Zen (computer on The Liberator ship in the Blake's 7 Sci-Fi series)
Smoke makes things work. When the smoke gets out, it stops!
 
Mixtel90

Guru

Joined: 05/10/2019
Location: United Kingdom
Posts: 6771
Posted: 06:58am 08 Sep 2024

AI is extremely clever and what we see is definitely amazing! It "learns" - well sort of - and we are told that its answers improve as it does so. (That may or may not be true - it depends on the accuracy of the new data and whether it learns to discard old data correctly.)

Yes, it can be used as a tool. No, it's not necessarily an accurate tool. It's not "Deep Thought" - the fount of all human knowledge. Or, more precisely, it is, but it can't tell which parts of its knowledge are true and which are false other than by comparison. Ask it a question and you'll get an answer of some sort. You then need to verify that answer against known truths, as it's a toss-up whether the answer is correct or merely sounds like the sort of answer such a question would get - a false truth, if you like.

AI shouldn't (at the current stage anyway) be used as a question-and-answer box. It's fun to do so, but that's all. Always expect lies and inaccuracies. Its real use is for deep analysis of systems that can be defined mathematically. It is then capable of using "fuzzy logic" to find results that a human couldn't find in a lifetime.

There is a danger that people start to attribute human characteristics to AI systems and start to trust them. But there is no humanity there. There is no empathy. There is no intelligence. The human-machine interface is based on things like ELIZA so it *sounds* plausible. Everything it tells you can usually be found using an internet search engine (and not necessarily from verifiable sources) or is made to sound like something factual even though it isn't. The case comes to mind of legal researchers quoting non-existent cases in court because they'd been using AI to find typical cases. They *sounded* genuine and had all the hallmarks of being genuine but they never existed.
Mick

Zilog Inside! nascom.info for Nascom & Gemini
Preliminary MMBasic docs & my PCB designs
 
Mixtel90

Guru

Joined: 05/10/2019
Location: United Kingdom
Posts: 6771
Posted: 07:06am 08 Sep 2024

  Grogster said  "That information, is not available."
Zen (computer on The Liberator ship in the Blake's 7 Sci-Fi series)


YAY! :)

I loved Blake's 7 - even the cardboard sets and, especially, Servalan :) .

There is a photo of one of the original Orac machines. Lovely, built out of plastic tube and relays. :)
Mick

Zilog Inside! nascom.info for Nascom & Gemini
Preliminary MMBasic docs & my PCB designs
 
© JAQ Software 2024