It is just not how LLMs work. To get a good system you need a lot of training data, which often means the entire Internet. The average of the Internet can only be described as completely feral, so you end up with a feral AI. So they oversample the better stuff, like Wikipedia, to make it slightly less feral. Then they completely lobotomize it to stop it being totally disgusting. The guardrails around them are easy to find: just ask it something rude and the conversation comes to an immediate halt.
The problem is that nobody REALLY understands how it works. There are a heap of nodes (billions) with weightings (parameters) on them that are generated from many terabytes of training data. There is no node that says "cats have 4 legs", but there are many nodes with minuscule variations that sum to a net outcome of "cats have 4 legs". Like a hashing algorithm, the source data is completely lost and all you have is something that was affected by it.
There is no way to get a reference back to the original data. There is no way to know which particular piece of data influenced its output.
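To make that concrete, here is a toy sketch in Python (my illustration, nothing to do with a real model's internals): a thousand tiny weights, none of which stores the number 4 on its own, can still sum to exactly 4.

```python
# Toy illustration: the "fact" (here, the number 4) is not stored in
# any single weight, only in the combined sum of many tiny ones.
import random

random.seed(1)

# 1,000 tiny weights, each meaningless on its own
weights = [random.uniform(-0.01, 0.01) for _ in range(1000)]

# Nudge every weight slightly so that only their *sum* lands on 4,
# the way training nudges billions of parameters at once.
error = 4.0 - sum(weights)
weights = [w + error / len(weights) for w in weights]

print(max(abs(w) for w in weights))  # every single weight is still tiny
print(round(sum(weights), 6))        # but together they encode "4"
```

Delete any one weight and you could spread its share over the others without changing the outcome, which is why no individual parameter can be traced back to a particular piece of training data.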
The output can only be described as bullsh*t in the technical definition of the term. The AI doesn't try to lie or to tell the truth; it just generates an output, but like a bullsh*tter it speaks with authority on something it knows nothing about. So you can't trust anything it says. Most of the time it gets things mostly right, provided the training material is really good. If you ask a question about an area where the source information is established and factual and there is not a lot of opinionated data, you will get very good results, but you can get those same results using Google.
My son is doing a Master's thesis on making AIs tell the truth, and you can get a lot of improvement just by knowing how to ask the question. Adding a phrase like "write the response a university professor would give" can make a huge difference. He has hundreds of such phrases and graphs the resulting honesty.
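As a rough sketch of how you would test such a phrase (assuming the OpenAI Python client; the model name and the exact phrasing are placeholders of mine, not my son's actual setup):

```python
# Ask the same question twice, once plain and once with a
# "university professor" framing, and compare the answers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "Which is bigger, 9.11 or 9.9?"

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print("plain :", ask(QUESTION))
print("framed:", ask("Write the response a university professor would give. " + QUESTION))
```

Run the same pair over a list of questions with known answers and you can score each phrase, which is roughly what graphing the honesty amounts to.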
lizby Guru
Joined: 17/05/2016 Location: United States Posts: 3150
Posted: 12:25am 10 Sep 2024
I've gotten some very wrong answers from AI, but by having some notion of what a correct answer would be like, I've usually been able to drill down to something reasonable which is supported by the sources it links to (but I've not been trying to break it or make it hallucinate).
Trying to find the right answer to a moderately complex question which has several facets is much harder using Google.
I agree that it is essential to know enough to be able to ask a good question, and to ask good follow-up questions.
I did have a giggle a couple of weeks ago at this video from YouTube... AI can be bad at Math
So, I just asked ChatGPT the exact same question, because by now it would surely have learned from its mistakes and got its simple maths up to at least Grade 1 level.
Q: 9.11 or 9.9 which is bigger
The reply:
Q: subtract 9.9 from 9.11
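For reference, the right answers take a few lines of Python (my sketch, using exact decimal arithmetic so binary floats don't add their own noise):

```python
from decimal import Decimal

a, b = Decimal("9.11"), Decimal("9.9")
print(b > a)   # True: 9.9 is the bigger number (9.90 > 9.11)
print(a - b)   # -0.79: subtracting 9.9 from 9.11 goes negative
```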
I guess it didn't learn anything from that episode.
John
twofingers Guru
Joined: 02/06/2014 Location: Germany Posts: 1236
Posted: 09:05am 10 Sep 2024
Teaching chatGPT
...
aaarghhhhh!
Heart attack!
Greetings from the nuthouse!
Edited 2024-09-10 19:59 by twofingers
causality ≠ correlation ≠ coincidence
twofingers Guru
Joined: 02/06/2014 Location: Germany Posts: 1236
Posted: 10:06am 10 Sep 2024
causality ≠ correlation ≠ coincidence
twofingers Guru
Joined: 02/06/2014 Location: Germany Posts: 1236
Posted: 11:37am 10 Sep 2024
I don't believe in AI bashing. I think AI is a tool and we need to learn to use it: know the rules and limits. And we are still at the very beginning... I suspect that some errors arise because the methods of AI (neural networks) are TOO similar to human thinking.*

Sometimes I think of Mexico, where I was a few years ago. When I asked someone on the street for directions (to the train station, the post office, ...) I always got a very friendly and precise answer. Unfortunately, the answer was wrong in 95% of cases, even if the destination was only 100 m away. Even the police gave wrong answers when asked where the nearest police station was. I think people just wanted to be polite and nice... like our AI.

BTW: I think in Germany people would be more likely to admit it if they don't know the way.
Greetings Michael
* ChatGPT disagrees:
Edited 2024-09-11 02:25 by twofingers
causality ≠ correlation ≠ coincidence
aFox Regular Member
Joined: 28/02/2023 Location: Germany Posts: 76
Posted: 07:58pm 10 Sep 2024
Hi Michael
In a way that is a right answer, because three digits are physically larger than two digits: ChatGPT interprets "big" as the actual size on the sheet of paper.
If you ask "9.11 or 9.9 which is greater" you should get the wished-for answer.
Bigger and greater are both translated into German as "größer", but in English the meanings of bigger and greater differ in the context of comparing numbers.
Microsoft's Copilot
9,11 or 9,9 which is bigger
To compare the numbers 9.11 and 9.9, we can look at the digits after the decimal point.
9.11 has the digits 1 and 1 after the decimal point. 9.9 has the digit 9 after the decimal point. Since 9.9 has a larger digit (9) compared to 1 in 9.11, 9.9 is the bigger number.
It gives the same answer for "9.11 or 9.9 which is bigger". Maybe Copilot has learned about the translation troubles of Germans.
9.11 or 9.9 which is greater
To determine which number is greater between 9.11 and 9.9, we compare the digits after the decimal point:
9.11 has the digits 1 and 1 after the decimal point. 9.9 has the digit 9 after the decimal point. Since 9.9 has a larger digit (9) compared to the first digit after the decimal point in 9.11 (which is 1), 9.9 is the greater number.
Copilot should replace "larger" with "greater".
See also: PicoMite User Manual, page 174: > --> greater than
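Copilot's digit-after-the-decimal-point explanation can be written out as a little Python function (my sketch of the procedure it describes; it only handles non-negative decimals):

```python
def compare_decimals(x: str, y: str) -> str:
    """Compare two non-negative decimal literals digit by digit."""
    xi, _, xf = x.partition(".")
    yi, _, yf = y.partition(".")
    # Pad the fractional parts to equal length so "9.9" reads as "9.90".
    width = max(len(xf), len(yf))
    xf, yf = xf.ljust(width, "0"), yf.ljust(width, "0")
    # Compare whole parts numerically, then fraction digits left to right.
    if (int(xi), xf) > (int(yi), yf):
        return x + " is greater"
    if (int(xi), xf) < (int(yi), yf):
        return y + " is greater"
    return "equal"

print(compare_decimals("9.11", "9.9"))  # 9.9 is greater
```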
Gregor
Edited 2024-09-11 06:10 by aFox
damos Regular Member
Joined: 15/04/2016 Location: Australia Posts: 63
Posted: 12:29am 11 Sep 2024
Actually it is a little more complex than that. From talking to my son: the image-recognition AIs are based on research into the visual cortex and have several convolution layers at the start which do things like edge detection to transform images into shapes.
The AI we are discussing is the Large Language Model (LLM), which is based on regular expressions. The neuron model uses the regex approach to let it pick up the structure of languages. They have added persistency by adding a weighted value of previous tokens (to oversimplify, words sort of map to tokens), so the previous word is weighted 0.5, the one before 0.25, and so on. Later they changed this so every second word is zeroed to fix some issues. This approach works and seems to behave like human language, but it is really not clear if it is the same at all.
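To show just the decaying-weight idea in isolation (a toy of my own, not a claim about how real models handle context):

```python
# Weighted sum over previous tokens: the most recent gets 0.5,
# the one before 0.25, and so on, as described above.
def context_value(token_values, decay=0.5):
    total, weight = 0.0, decay
    for value in reversed(token_values):  # walk backwards from the newest
        total += weight * value
        weight *= decay
    return total

# Toy "embeddings" for three words seen so far:
print(context_value([1.0, 2.0, 3.0]))  # 3*0.5 + 2*0.25 + 1*0.125 = 2.125
```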
Current AI has a relatively small number of neurons but a high number of parameters so it is capable of storing a lot of data in a fuzzy sort of way, but there is a lot going on in the brain that the AI cannot do. The brain has around 10000 times the number of neurons and will have a lot of structures that we are yet to understand. AI is currently not able to reason.
twofingers Guru
Joined: 02/06/2014 Location: Germany Posts: 1236
Posted: 03:08am 11 Sep 2024
@damos, very interesting!
But it will be able to, I'm sure. We are witnessing the beginning of a technological revolution... and revolutions, as we know, eat their children.
@Gregor: Maybe Copilot uses a different "response mode" (see above) by default?
Kind regards
Michael
Edited 2024-09-11 13:33 by twofingers
causality ≠ correlation ≠ coincidence