JAQForum Ver 24.01


Forum Index : Microcontroller and PC projects : ChatGPT conversation anyone?

     Page 4 of 4    
Author Message
damos
Regular Member

Joined: 15/04/2016
Location: Australia
Posts: 63
Posted: 11:57pm 09 Sep 2024

  EDNEDN said  I would like to see any AI output be fully footnoted with all the sources used to make the sentence. Probably this would require the fully footnoted output to have 5x or 10x the amount of text in the appendix (or footnotes), but I would find that helpful for several reasons.

First, just by checking the references I can get an idea of how slanted (or reliable) the answer is. Secondly, it would make any particular thought expressed in the answer much easier to research and explore.


That is just not how LLMs work. To get a good system you need a lot of training data, which often means the entire Internet. The average of the Internet can only be described as completely feral, so you end up with a feral AI. So they oversample the better material, like Wikipedia, to make it slightly less feral. Then they completely lobotomise it to stop it being totally disgusting. The guardrails around them are easy to find: just ask it something rude and the conversation comes to an immediate halt.

The problem is that nobody REALLY understands how it works. There are a heap of nodes (billions) with weightings (parameters) on them that are generated from many terabytes of training data, but there is no single node that says "cats have 4 legs"; rather, many nodes carry minuscule variations that sum to a net outcome of "cats have 4 legs". Like a hashing algorithm, the source data is completely lost and all you have is something that was affected by it.
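To make that point concrete, here is a toy Python sketch (an illustration only, not a real neural network): thousands of individually tiny weights, none of which encodes the fact by itself, can still sum to it, and nothing in the final weights points back at the data that produced them.

```python
# Toy illustration of distributed storage (NOT a real neural network).
import random

random.seed(0)
# 100,000 tiny random "weights", none meaningful on its own.
weights = [random.gauss(0, 0.01) for _ in range(100_000)]

# Spread a tiny correction across ALL weights so their sum encodes
# the "fact" 4.0 ("cats have 4 legs").
target = 4.0
correction = (target - sum(weights)) / len(weights)
weights = [w + correction for w in weights]

print(max(abs(w) for w in weights))  # every single weight is still tiny
print(round(sum(weights), 6))        # yet together they sum to 4.0
```

Given only the final weights, there is no way to recover which inputs nudged them, which is why a footnoted reference back to the training data is impossible.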

There is no way to get a reference back to the original data. There is no way to know which particular piece of data influenced its output.

The output can only be described as bullsh*t in the technical sense of the term. The AI doesn't try to lie or to tell the truth, it just generates output, but like a bullsh*tter it speaks with authority on things it knows nothing about. So you can't trust anything it says. Most of the time it gets things mostly right, provided the training material is really good. If you ask it a question about an area where the source information is established and factual and there is not a lot of opinionated data, you will get very good results, but you can get those same results using Google.

My son is doing a Master's thesis on making AIs tell the truth, and you can get a lot of improvement just by knowing how to ask the question. Adding a phrase like "write the response a university professor would give" can make a huge difference. He has hundreds of such phrases and graphs the honesty of the answers.
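As a sketch of how such phrase lists might be used programmatically — `query_llm` here is a hypothetical stand-in for whatever chat API you have access to, not a real library call:

```python
# Hypothetical sketch: reframe the same question several ways and compare
# the answers. `query_llm` is a placeholder, not a real API.
def query_llm(prompt: str) -> str:
    # In real use this would call a chat model; here it just echoes.
    return f"[model reply to: {prompt}]"

FRAMINGS = [
    "Write the response a university professor would give: ",
    "State only facts you are confident are true: ",
]

def ask_carefully(question: str) -> list[str]:
    # Agreement across framings is weak evidence the answer is reliable;
    # disagreement is a strong hint something is being made up.
    return [query_llm(framing + question) for framing in FRAMINGS]

answers = ask_carefully("9.11 or 9.9, which is greater?")
print(len(answers))  # one answer per framing
```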
 
lizby
Guru

Joined: 17/05/2016
Location: United States
Posts: 3150
Posted: 12:25am 10 Sep 2024

  damos said  If you ask it a question about an area where the source information is established and factual and there is not a lot of opinionated data, you will get very good results but you can get those same results using Google.


I've gotten some very wrong answers from AI, but by having some notion of what a correct answer would be like, I've usually been able to drill down to something reasonable which is supported by the sources it links to (but I've not been trying to break it or make it hallucinate).

Trying to find the right answer to a moderately complex question which has several facets is much harder using Google.

I agree that it is essential to know enough to be able to ask a good question, and to ask good follow-up questions.

~
Edited 2024-09-10 10:27 by lizby
PicoMite, Armmite F4, SensorKits, MMBasic Hardware, Games, etc. on fruitoftheshed
 
Malibu
Senior Member

Joined: 07/07/2018
Location: Australia
Posts: 228
Posted: 05:42am 10 Sep 2024

I did have a giggle a couple of weeks ago at this video from YouTube...
AI can be bad at Math

So, I just asked ChatGPT the exact same question, because by now it would surely have learned from its mistakes and got simple maths up to at least Grade 1 level.

Q: 9.11 or 9.9 which is bigger
The reply
  Quote  9.11 is bigger than 9.9. If you're comparing numbers with the same number of digits, you can compare them digit by digit starting from the left. In this case, 9.11 has a value slightly larger than 9.9 because the additional digits after the decimal point increase the overall value.

Q: subtract 9.9 from 9.11
  Quote  To subtract 9.9 from 9.11:
9.11 − 9.9 = 0.21
So, the result is 0.21.
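(For the record, both of ChatGPT's answers here are easy to check with exact decimal arithmetic; a quick Python sketch using the standard `decimal` module:)

```python
from decimal import Decimal

a, b = Decimal("9.11"), Decimal("9.9")
print(a > b)   # False: 9.9 is the larger number
print(b - a)   # 0.79, not the 0.21 ChatGPT claimed
```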


I guess it didn't learn anything from that episode
John
 
twofingers
Guru

Joined: 02/06/2014
Location: Germany
Posts: 1236
Posted: 09:05am 10 Sep 2024

Teaching ChatGPT
  Quote  9.11 or 9.9 which is bigger
ChatGPT:
9.11 is larger than 9.9.

...

  Quote  9.11 or 9.9, which is larger? Add decimal places before your answer so that the numbers have the same number of decimal places.
ChatGPT:
9.11 and 9.90 now have the same number of decimal places, and 9.90 is larger than 9.11.


  Quote  and 9.11 or 9.9 which is bigger?
ChatGPT:
When comparing 9.11 and 9.9, if we add a decimal place to 9.9 to make it 9.90, then 9.90 is larger than 9.11.


  Quote  and if we not  add a decimal place to 9.9?
ChatGPT:
Without adding a decimal place to 9.9, the comparison is between 9.11 and 9.9. In this case, 9.9 is larger than 9.11.


  Quote  Again: 9.11 or 9.9 which is bigger
ChatGPT:
9.9 is larger than 9.11.


  Quote  Can you remember that? Forever?
ChatGPT:
I can remember details within our current conversation to provide relevant responses, but I don't have the ability to retain information beyond our current chat session. If you need to keep track of specific details over time, it might be helpful to store them in a personal note or a document.

 


  Quote  Can you then report an error to your operator or whatever, since obviously both answers cannot be correct or true?
ChatGPT:
I understand the confusion. To clarify:

9.11 is greater than 9.9.
The correct comparison is that 9.11 is indeed larger than 9.9, as 9.11 has a higher decimal value than 9.9 when considering the extra precision.

Thank you for pointing out the discrepancy! If you have any more questions or need further assistance, feel free to ask.


aaarghhhhh!

  Quote  You are wrong!
ChatGPT:
I apologize for any confusion. To clarify:

9.11 is actually less than 9.9.
In numerical comparison, 9.9 is greater than 9.11 because when comparing decimal numbers, the number with the higher value in the tenths place is greater. Here, 9.9 (which has a tenths place value of 9) is greater than 9.11 (which has a tenths place value of 1).

Thank you for pointing this out. If you have more questions or need further clarification, feel free to ask!


  Quote  Why did it happen to you that you think 9.11 is greater than 9.9? What are the reasons for this error?
ChatGPT:
The error in my earlier responses likely resulted from a misunderstanding or misinterpretation of the numerical values. Here's a breakdown of why this might have happened:

Misreading or Misinterpretation: Sometimes, there might be a misreading of the numbers or an error in interpreting their relative sizes. I might have incorrectly compared the numbers without properly aligning them.

Context Handling: In some cases, the context in which the numbers are presented can affect how they are processed. If there's ambiguity or lack of clarity, it can lead to incorrect comparisons.

Human-like Errors: Despite being an AI, I can still make errors similar to how a human might, especially in tasks involving direct comparisons or mathematical calculations.

To clarify:

9.11 is indeed greater than 9.9 when considering their numerical values with proper decimal alignment.
I appreciate your patience and the opportunity to correct any inaccuracies.


Heart attack!

  Quote  Rethink you answer!
ChatGPT:
Let's clarify the comparison:

9.11 and 9.9 should be compared by aligning their decimal places properly.
9.11 is equivalent to 9 and 11 hundredths, whereas 9.9 is equivalent to 9 and 9 tenths.
When comparing these two values:

9.11 (which is 9.11) is indeed greater than 9.9 (which is 9.90).
Here's why:

9.11 means 9 whole units plus 0.11 additional units.
9.90 means 9 whole units plus 0.90 additional units.
Although 9.90 is a higher value than 9.11 in the tenths place, in this specific case, it should be correctly stated that 9.90 is greater than 9.11.

If I previously suggested otherwise, I apologize for the confusion.

Greetings from the nuthouse!

  Quote  Answer like a college professor.
ChatGPT:
Certainly. Let’s address the comparison between the numbers 9.11 and 9.90 in a more rigorous manner.

To accurately compare 9.11 and 9.90, we need to understand their relative values in decimal notation:

Decimal Representation: In the decimal system, the position of digits to the right of the decimal point determines their value. The digits are read as tenths, hundredths, thousandths, and so on.

Breaking Down the Numbers:

9.11 can be broken down into:
9 whole units
0.11 of a unit (which is 11 hundredths)
9.90 can be broken down into:
9 whole units
0.90 of a unit (which is 90 hundredths)
Comparison:

When comparing 9.11 to 9.90, we observe that both numbers share the same integer part (9), so we only need to compare the decimal parts.
0.11 (from 9.11) is less than 0.90 (from 9.90).
Therefore, 9.90 is indeed greater than 9.11.

If there was an error in the comparison previously, it could have arisen from a misunderstanding of the decimal values or an error in processing the digits. This illustrates the importance of careful numerical comparison, particularly when dealing with decimal values.
   
Edited 2024-09-10 19:59 by twofingers
causality ≠ correlation ≠ coincidence
 
twofingers
Guru

Joined: 02/06/2014
Location: Germany
Posts: 1236
Posted: 10:06am 10 Sep 2024

  Quote  What modes of answering do you know? Like a college professor, ...

ChatGPT:
Here are various modes or styles of answering that I can use:

1. Casual and Conversational: Friendly and informal, suitable for everyday questions and relaxed discussions.
  - Example: "Sure, let’s break it down! It’s actually pretty straightforward..."

2. Formal and Professional: Polite and precise, ideal for business or academic contexts where clarity and respect are important.
  - Example: "I appreciate the opportunity to assist with your query. Let me provide a detailed explanation..."

3. Academic or College Professor Style: Detailed, structured, and educational, often with explanations of underlying concepts and theories.
  - Example: "Let’s examine this issue from an academic perspective. In mathematics, understanding decimal place value is crucial because..."

4. Technical and Expert: Precise and detailed, using specialized terminology and focusing on technical aspects.
  - Example: "To analyze the data, we apply statistical models such as regression analysis, which allows us to assess the relationship between variables using methods like the least squares approach..."

5. Concise and Direct: Brief and to the point, ideal for quick clarifications or straightforward information.
  - Example: "9.90 is greater than 9.11."

6. Critical and Analytical: Evaluates potential errors or misunderstandings, providing deeper insights or corrections.
  - Example: "It seems there may be a misunderstanding. Let’s analyze the numbers more thoroughly to identify where the confusion might be..."

These different modes help tailor the response to the needs of the question and the preferences of the person asking.

causality ≠ correlation ≠ coincidence
 
twofingers
Guru

Joined: 02/06/2014
Location: Germany
Posts: 1236
Posted: 11:37am 10 Sep 2024

I don't believe in AI bashing. I think AI is a tool and we need to learn to use it. Know the rules and limits. And we are still at the very beginning...
  I suspect that some errors arise because the methods of AI (neural networks) are TOO similar to human thinking.*
  Sometimes I think of Mexico, where I was a few years ago. When I asked someone on the street for directions (to the train station, the post office, ...), I always got a very friendly and precise answer. Unfortunately, the answer was wrong in 95% of cases, even when the destination was only 100 m away. Even the police gave a wrong answer when asked where the nearest police station was. I think people just wanted to be polite and nice ... like our AI.
 BTW: I think in Germany people would be more likely to admit it if they don't know the way.

Greetings
Michael

* ChatGPT disagrees:
  Quote  ...
Conclusion
In conclusion, while neural networks are inspired by human brain architecture, the claim that their errors arise because they are "too similar" to human thinking oversimplifies both the functioning of neural networks and human cognition. Errors in neural networks are more often a reflection of issues such as biased data, overfitting, and limitations in model design. While AI systems can sometimes mirror human-like biases, this is not because they are fundamentally too similar to human thinking, but because they learn from human-generated data that contains such biases. Thus, while the statement is thought-provoking, it does not fully capture the nature of errors in AI.

Edited 2024-09-11 02:25 by twofingers
causality ≠ correlation ≠ coincidence
 
aFox
Regular Member

Joined: 28/02/2023
Location: Germany
Posts: 76
Posted: 07:58pm 10 Sep 2024

Hi Michael

  Quote  Again: 9.11 or 9.9 which is bigger
ChatGPT:
9.9 is larger than 9.11.


That can be read as a right answer, because three digits are physically bigger on the page than two digits: ChatGPT may interpret "big" as the actual size on the sheet of paper.

If you ask "9.11 or 9.9, which is greater" you should get the desired answer.

Both "bigger" and "greater" translate into German as "größer", but in English, in the context of comparing numbers, the meanings of "bigger" and "greater" differ.


Microsoft's Copilot

9,11 or 9,9 which is bigger

To compare the numbers 9.11 and 9.9, we can look at the digits after the decimal point.

9.11 has the digits 1 and 1 after the decimal point.
9.9 has the digit 9 after the decimal point.
Since 9.9 has a larger digit (9) compared to 1 in 9.11, 9.9 is the bigger number.

Copilot gives the same answer for "9.11 or 9.9 which is bigger".

Maybe Copilot has learned about the translation trouble of Germans.

9.11 or 9.9 which is greater
To determine which number is greater between 9.11 and 9.9, we compare the digits after the decimal point:

9.11 has the digits 1 and 1 after the decimal point.
9.9 has the digit 9 after the decimal point.
Since 9.9 has a larger digit (9) compared to the first digit after the decimal point in 9.11 (which is 1), 9.9

Copilot should replace "larger" with "greater".

See also: Page 174 PicoMite User Manual, > --> greater than

Gregor
Edited 2024-09-11 06:10 by aFox
 
damos
Regular Member

Joined: 15/04/2016
Location: Australia
Posts: 63
Posted: 12:29am 11 Sep 2024

  Quote  ...
Conclusion
In conclusion, while neural networks are inspired by human brain architecture, the claim that their errors arise because they are "too similar" to human thinking oversimplifies both the functioning of neural networks and human cognition. Errors in neural networks are more often a reflection of issues such as biased data, overfitting, and limitations in model design. While AI systems can sometimes mirror human-like biases, this is not because they are fundamentally too similar to human thinking, but because they learn from human-generated data that contains such biases. Thus, while the statement is thought-provoking, it does not fully capture the nature of errors in AI.

Actually it is a little more complex than that. Talking to my son: the image-recognition AIs are based on research into the visual cortex and have several convolution layers at the start which do things like edge detection to transform images into shapes.

The AI we are discussing is the Large Language Model (LLM), which is based on Regular Expressions. The neuron model uses the regex approach to let it pick up the structure of languages. They added persistency by adding a weighted value of previous tokens (to oversimplify, words roughly map to tokens), so the previous word is weighted 0.5, the one before 0.25, and so on. Later they changed this so every second word is zeroed, to fix some issues. This approach works and looks like human language, but it is really not clear whether it is the same at all.

Current AI has a relatively small number of neurons but a high number of parameters, so it is capable of storing a lot of data in a fuzzy sort of way, but there is a lot going on in the brain that AI cannot do. The brain has around 10,000 times the number of neurons and will have a lot of structures that we are yet to understand. AI is currently not able to reason.
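The decaying context weighting described above can be sketched in a few lines of Python (purely illustrative, and only an approximation of that description; real transformers use learned attention, not a fixed 0.5/0.25/... decay):

```python
# Toy sketch of exponentially decaying context weights (illustration only;
# real transformers use learned attention, not a fixed decay).
def context_weights(tokens: list[str], decay: float = 0.5) -> dict[str, float]:
    # Most recent token gets weight 0.5, the one before 0.25, and so on.
    return {tok: decay ** (i + 1) for i, tok in enumerate(reversed(tokens))}

w = context_weights(["cats", "have", "four", "legs"])
print(w)  # {'legs': 0.5, 'four': 0.25, 'have': 0.125, 'cats': 0.0625}
```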
 
twofingers
Guru

Joined: 02/06/2014
Location: Germany
Posts: 1236
Posted: 03:08am 11 Sep 2024

@damos, very interesting!
  Quote  AI is currently not able to reason.

but it will be, I'm sure. We are witnessing the beginning of a technological revolution... and revolutions, as we know, eat their children.


@Gregor: Maybe Copilot uses a different "response mode" (see above) by default?



Kind regards
Michael
Edited 2024-09-11 13:33 by twofingers
causality ≠ correlation ≠ coincidence
 
twofingers
Guru

Joined: 02/06/2014
Location: Germany
Posts: 1236
Posted: 10:33am 13 Sep 2024

But there is hope:

https://openai.com/index/introducing-openai-o1-preview/
  Quote  ...
How it works

We trained these models to spend more time thinking through problems before they respond, much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.

In our tests, the next model update performs similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology. We also found that it excels in math and coding. In a qualifying exam for the International Mathematics Olympiad (IMO), GPT-4o correctly solved only 13% of problems, while the reasoning model scored 83%. Their coding abilities were evaluated in contests and reached the 89th percentile in Codeforces competitions. You can read more about this in our technical research post.

As an early model, it doesn't yet have many of the features that make ChatGPT useful, like browsing the web for information and uploading files and images. For many common cases GPT-4o will be more capable in the near term.
...

83% vs 13%! Wow, we are getting a much better multi-billion dollar calculator.
Michael
causality ≠ correlation ≠ coincidence
 

© JAQ Software 2024