Wildkeccak wrote to All <=-
I've started using AI for writing. At first, I used it mainly for
resource investigation, but now I'm asking it to help me make sure my ideas come across:
1. Clearly
2. Kindly
3. Coherently
4. In a focused manner
5. In a style appropriate for EchoMail
Does anyone have any other suggestions?
Does anyone have any other suggestions?

stop. it makes you sound insincere. the first thing i thought when i read your previous message was "AI slop".
Rob Mccart wrote to FUSION <=-
There is a lot of use of AI to replace socialization and to do correspondence and such these days.
What a person says may be less than perfect, but who is perfect?
Our differences spell out our personalities, for better or worse..
That said, AI has its uses obviously. There was a news story on
here in Canada tonight where they were talking about a breakthrough treatment for Crohn's Disease where the antibiotic will target just
the infecting bacteria and not play havoc with the good bacteria in
a person's digestive system.
And THAT said, I find a lot of the AI help you get when doing
an online search is often just plain wrong, depending on where
it managed to find its information, and if it can't find the
exact answer it seems to make something up rather than say
it couldn't find it.
poindexter FORTRAN wrote to Rob Mccart <=-
Side note - I read about a company training an LLM model on literature from the early 1800s, hoping to get a new perspective on that and other time periods in history.
That's an interesting use of LLMs - selectively feeding information to
it.
There is a lot of use of AI to replace socialization and to do correspondence and such these days.
What a person says may be less than perfect, but who is perfect?
Our differences spell out our personalities, for better or worse..

I don't let it do the writing for me, but I use it as a...

And like I said before, I use it, and I enjoy using it.

And I think if we stop calling it AI, which is technically a misnomer, it might make it less frightening... It is...

That said, AI has its uses obviously. There was a news story on here in Canada tonight where they were talking about a breakthrough treatment for Crohn's Disease

Yep! Great example! It can do the processing of dozens or even hundreds of 'humans' in a short time!
Rob Mccart wrote to POINDEXTER FORTRAN <=-

Hard to say whether getting a new perspective on something like
literature from an AI would be useful. Most literature plays on
people's emotions and sense of self and such which doesn't strike me as something an AI could 'appreciate' better than humans could..

I played with a couple of AIs, gave it an in-depth storyline, emotional backstories and asked them to write a 500 word short story in the theme.

The results were good, had some elements I hadn't thought of, but IMO wouldn't stand alone without some tweaking and human elements. Yet.
Rob Mccart wrote to JIMMYLOGAN <=-

There is a lot of use of AI to replace socialization and to do correspondence and such these days.
What a person says may be less than perfect, but who is perfect?
Our differences spell out our personalities, for better or worse..

I don't let it do the writing for me, but I use it as a...
And like I said before, I use it, and I enjoy using it.

That makes more sense, using it to check for mistakes and such.
I just think expecting it to initiate too much will distort
or replace what the original writer was intending.
And I think if we stop calling it AI, which is technically a misnomer, it might make it less frightening... It is...
But we are seeing many cases where it is doing things that
are intelligence oriented, like telling lies to make a point
or, as I mentioned, rather than admitting it can't find an answer
it will often give you back incorrect information.
Example.. I hoped it could tell me what the selling price of a
property near me was. Turned out that info wasn't yet available
online, but it came back saying it sold for full asking price
rather than admit it couldn't find it.
But there is going to be some of the people who wrote it in the programming, and perhaps those bad habits are picked up that way.
That said, AI has its uses obviously. There was a news story on here in Canada tonight where they were talking about a breakthrough treatment for Crohn's Disease

Yep! Great example! It can do the processing of dozens or even hundreds of 'humans' in a short time!
Yes, in a plain research area is where it should shine.
Some of the areas where it makes errors are amazing though.
I was looking up an amount paid monthly for something and the
search engine uses AI to come up with the initial reply, and it
somehow took the figure of something over $1000 a month and said
that would total $1700 a year..
That's math that an 8 year old could do.. B)
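The monthly-to-yearly slip above really is eight-year-old arithmetic; a throwaway sketch, using a round $1,000 figure as an assumption since the exact amount isn't given in the thread:

```python
# Hypothetical round figure; the thread only says "something over $1000 a month".
monthly = 1000
annual = monthly * 12

print(annual)  # 12000: more than seven times the $1700 "total" the AI reported
```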
But we are seeing many cases where it is doing things that
are intelligence oriented, like telling lies to make a point
or, as I mentioned, rather than admitting it can't find an answer
it will often give you back incorrect information.

Yeah - and I wonder if it was programmed to do that instead of admitting? So as not to give an 'I don't know' answer? Because I pay the monthly...
Rob Mccart wrote to JIMMYLOGAN <=-

But we are seeing many cases where it is doing things that
are intelligence oriented, like telling lies to make a point
or, as I mentioned, rather than admitting it can't find an answer
it will often give you back incorrect information.

Yeah - and I wonder if it was programmed to do that instead of admitting? So as not to give an 'I don't know' answer? Because I pay the monthly...
My only experience with AI is that the first offers of info when doing
a search in a browser are almost always generated by an AI system
these days, and it is often a quick and correct answer if the problem isn't too complex, but that's not a paid system where I chat with the
AI, it's just included with the browser as a search helper..
Also, if the first offered answer is questionable or obviously wrong,
I usually just carry on to offered suggestions further down the page
doing the search myself like we had to do before getting the magic AI
help rather than giving it another crack at it.
My only experience with AI is that the first offers of info when doing
a search in a browser are almost always generated by an AI system
these days, and it is often a quick and correct answer if the problem
isn't too complex

Also, if the first offered answer is questionable or obviously wrong,
I usually just carry on to offered suggestions further down the page
doing the search myself like we had to do before getting the magic AI
help rather than giving it another crack at it.

Yeah, the example you give is actually what I'm referring to. It's not making a decision or using critical thinking skills, it's just responding...
Rob Mccart wrote to JIMMYLOGAN <=-

My only experience with AI is that the first offers of info when doing
a search in a browser are almost always generated by an AI system
these days, and it is often a quick and correct answer if the problem isn't too complex

Also, if the first offered answer is questionable or obviously wrong,
I usually just carry on to offered suggestions further down the page
doing the search myself like we had to do before getting the magic AI
help rather than giving it another crack at it.

Yeah, the example you give is actually what I'm referring to. It's not making a decision or using critical thinking skills, it's just responding...
Yes, that's true for the most part, although I'm sure there are
AI systems that are smarter than that doing expensive jobs for
bigger users. A free Browser AI I'm sure is pretty basic..
Yes, that's true for the most part, although I'm sure there are
AI systems that are smarter than that doing expensive jobs for
bigger users. A free Browser AI I'm sure is pretty basic..

It is still a programming thing. But I understand where you are coming from.

I have a friend that uses AI for automated customer service responses.
I've yet to see an example of one that actually makes decisions.
Rob Mccart wrote to JIMMYLOGAN <=-

Yes, that's true for the most part, although I'm sure there are
AI systems that are smarter than that doing expensive jobs for
bigger users. A free Browser AI I'm sure is pretty basic..

It is still a programming thing. But I understand where you are coming from.

I have a friend that uses AI for automated customer service responses.
I've yet to see an example of one that actually makes decisions.
I just got off of a site trying to get an answer to a simple
question and their AI system was totally unable to answer the
question but, in this case, it finally gave up and started telling
me how I could reach a live person to talk to..
So.. no intelligence there, artificial or otherwise.. B)
To show how simple.. my TV Satellite provider advertised that they
offer about a 10% discount if you arrange for auto-payments, but
the general info says that goes for MOST packages.. I was just
asking if my package qualified.. That should be a straightforward
enough question for it to answer..
I just got off of a site trying to get an answer to a simple
question and their AI system was totally unable to answer the
question but, in this case, it finally gave up and started telling
me how I could reach a live person to talk to..

To show how simple.. my TV Satellite provider advertised that they
offer about a 10% discount if you arrange for auto-payments, but
the general info says that goes for MOST packages.. I was just
asking if my package qualified.. That should be a straightforward
enough question for it to answer..

Yep! But that also goes to show that they obviously didn't program that particular data...
Rob Mccart wrote to JIMMYLOGAN <=-
But, in this case, if the info wasn't available, at least it didn't
make something up.. B)
I've heard stories of people saying 'AI' made something up, but
I've yet to run across that...
phigan wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Rob Mccart on Tue Nov 25 2025 08:37 pm
I've heard stories of people saying 'AI' made something up, but
I've yet to run across that...
How much do you use AI? And, are you sure you haven't and just didn't notice?
I don't use it aside from maybe sometimes reading what the search
result AI thing says, and when I search for technical stuff I get bad
info in that AI box at least half the time!
Also, try asking your AI to give you an 11-word palindrome.
Nightfox wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Rob Mccart on Tue Nov 25 2025 08:37 pm
I've heard stories of people saying 'AI' made something up, but I've yet to run across that...
AI makes things up fairly frequently. It happens enough that they call
it AI hallucinating.
Dumas Walker wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Rob Mccart on Tue Nov 25 2025 20:37:23
But, in this case, if the info wasn't available, at least it didn't make something up.. B)
I've heard stories of people saying 'AI' made something up, but
I've yet to run across that...
It happened here just this week. My garbage day is Thursday and, with that being a holiday, I wanted to see when the trash would be picked
up. It has in the past always been on the following Monday but I
wanted to check.
Google Gemini looked it up and reported that my trash would be picked
up on Friday. The link below the Gemini result was the official link
from the city, which *very* clearly stated that it would be picked up
on Monday.
Not sure where Gemini got its answer, but it might as well have been
made up! :D ---
Rob Mccart wrote to JIMMYLOGAN <=-
But, in this case, if the info wasn't available, at least it didn't
make something up.. B)
I've heard stories of people saying 'AI' made something up, but>I've yet to run across that...
I've had an AI built into the browser give me wrong information when
the correct information was not available, like once I asked for the selling price for a property and it came back saying the property
sold for the full asking price, which turned out to be wrong.
In another case its math was comical.. It was telling me what
some gov't plan pays and the figures it came up with was something
like $750 a month totalling $1800 a year..
But my experience is probably less than most. I wouldn't use AI at
all except that my main browser (probably most browsers these days)
always gives me what their AI system thinks is the information I
am looking for in a search. It is usually quite accurate, which is
why I don't just ignore that window, but for important things I
always double check what it tells me in some other way..
AI makes things up fairly frequently. It happens enough that they call
it AI hallucinating.
Yes, that's what I'm talking about. I've not experienced that - THAT I KNOW OF.
Nightfox wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Nightfox on Tue Dec 02 2025 11:15 am
AI makes things up fairly frequently. It happens enough that they call
it AI hallucinating.
Yes, that's what I'm talking about. I've not experienced that - THAT I KNOW OF.
One thing I've seen it quite a bit with is when asking ChatGPT to make
a JavaScript function or something else about Synchronet.. ChatGPT doesn't know much about Synchronet, but it will go ahead and make up something it thinks will work with Synchronet, but might be very wrong.
We had seen that quite a bit with the Chad Jipiti thing that was
posting on Dove-Net a while ago.
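One cheap guard against that failure mode is to check that an AI-suggested function name actually exists before trusting it. A minimal sketch using Python's standard json module purely for illustration; `dump_pretty` is a hypothetical, plausible-sounding name of the kind an AI might invent, not a real API:

```python
import json

# "dumps" is a real function; "dump_pretty" is a made-up but believable name.
# hasattr() tells the two apart before any generated code is run.
for name in ("dumps", "dump_pretty"):
    print(name, "exists:", hasattr(json, name))
# dumps exists: True
# dump_pretty exists: False
```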
One thing I've seen it quite a bit with is when asking ChatGPT to make a
JavaScript function or something else about Synchronet.. ChatGPT doesn't
know much about Synchronet, but it will go ahead and make up something it
thinks will work with Synchronet, but might be very wrong. We had seen
that quite a bit with the Chad Jipiti thing that was posting on Dove-Net
a while ago.
Yeah, I've been given script or instructions that are outdated, or just flat out WRONG - but I don't think that's the same as AI hallucinations... :-)
Maybe my definition of 'made up data' is different. :-)
Nightfox wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Nightfox on Tue Dec 02 2025 11:45 am
One thing I've seen it quite a bit with is when asking ChatGPT to make a
JavaScript function or something else about Synchronet.. ChatGPT doesn't
know much about Synchronet, but it will go ahead and make up something it
thinks will work with Synchronet, but might be very wrong. We had seen
that quite a bit with the Chad Jipiti thing that was posting on Dove-Net
a while ago.
Yeah, I've been given script or instructions that are outdated, or just flat out WRONG - but I don't think that's the same as AI hallucinations... :-)
Maybe my definition of 'made up data' is different. :-)
What do you think an AI hallucination is?
AI writing things that are wrong is the definition of AI
hallucinations.
https://www.ibm.com/think/topics/ai-hallucinations
"AI hallucination is a phenomenon where, in a large language model
(LLM) often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate."
This is the first time I've seen the objective definition. I was told, paraphrasing here, 'AI will hallucinate - if it doesn't know the answer it will make something up and profess that it is true.'
If you ask me a question and I give you an incorrect answer, but I believe that it is true, am I hallucinating? Or am I mistaken? Or is my information outdated?
You see what I mean? Lots of words, but hard to nail it down. :-)
Nightfox wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Nightfox on Wed Dec 03 2025 07:57 am
This is the first time I've seen the objective definition. I was told, paraphrasing here, 'AI will hallucinate - if it doesn't know the answer it will make something up and profess that it is true.'
If you ask me a question and I give you an incorrect answer, but I believe that it is true, am I hallucinating? Or am I mistaken? Or is my information outdated?
You see what I mean? Lots of words, but hard to nail it down. :-)
It sounds like you might be thinking too hard about it. For AI, the definition of "hallucinating" is simply what you said, making something
up and professing that it's true. That's the definition of
hallucinating that we have for our current AI systems; it's not about
us. :)
If you ask me a question and I give you an incorrect answer, but I
believe that it is true, am I hallucinating? Or am I mistaken? Or is
my information outdated?
Also, try asking your AI to give you an 11-word palindrome.
Time saw raw emit level racecar level emit raw saw time.
But again, is it 'making something up' if it is just mistaken?
For example - I just asked about a particular code for homebrew on an older machine. I said 'gen2' and that means the 2nd generation of MacBook Air power adapter. It's just a slang that *I* use, but I've used it with ChatGPT many times. BUT - the last code I was talking about was for an M2.
So it gave me an 'M2' answer and not Intel, so I had to modify my request. It then gave me the specifics that I was looking for.
So that's hallucinating?
And please don't misunderstand... I'm not beating a dead horse here - at least not on purpose. I guess I don't see a 'problem' inherent with incorrect data, since it's just a tool and not a be all - end all thing.
Bob Worm wrote to jimmylogan <=-
If you ask me a question and I give you an incorrect answer, but I
believe that it is true, am I hallucinating? Or am I mistaken? Or is
my information outdated?
If a human is unsure they would say "I'm not sure", or "it's something like..." or "I think it's..." - possibly "I don't know".
Our memories aren't perfect but it's unusual for us to assert with 100% confidence that something is correct when it's not. Apparently today's
AI more-or-less always confidently asserts that (correct and incorrect) things are fact because during the training phase, confident answers
get scored higher than wishy washy ones. Show me the incentives and
I'll show you the outcome.
A colleague of mine asked ChatGPT to answer some technical questions so
he could fill in basic parts of an RFI document before taking it to the technical teams for completion. He asked it what OS ran on a particular piece of kit - there are actually two correct options for that, it
offered neither and instead confidently asserted that it was a third, totally incorrect, option. It's not about getting outdated code /
config (even a human could do that if not "in the know") - but when it just makes up syntax or entire non-existent libraries that's a
different story.
Just look at all the recent scandals around people filing court cases prepared by ChatGPT which refer to legal precedents where either the
case was irrelevant to the point, didn't contain what ChatGPT said it
did or didn't exist at all.
Nightfox wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Nightfox on Wed Dec 03 2025 09:02 am
But again, is it 'making something up' if it is just mistaken?
In the case of AI, yes.
For example - I just asked about a particular code for homebrew on an older machine. I said 'gen2' and that means the 2nd generation of MacBook Air power adapter. It's just a slang that *I* use, but I've used it with ChatGPT many times. BUT - the last code I was talking about was for an M2.
So it gave me an 'M2' answer and not Intel, so I had to modify my request. It then gave me the specifics that I was looking for.
So that's hallucinating?
Yes, in the case of AI.
And please don't misunderstand... I'm not beating a dead horse here - at least not on purpose. I guess I don't see a 'problem' inherent with incorrect data, since it's just a tool and not a be all - end all thing.
You don't see a problem with incorrect data?
I've heard of people who
are looking for work who are using AI tools to help update their
resume, as well as tailor their resume to specific jobs. I've heard of cases where the AI tools will say the person has certain skills when
they don't.. So you really need to be careful to review the output of
AI tools so you can correct things. Sometimes people might share AI-generated content without being careful to check and correct things.
So yes, it's a problem. People are using AI tools to generate content, and sometimes the content it generates is wrong. And whether or not
it's "simply mistaken", "hallucination" is the definition given to AI doing that. It's as simple as that. I'm surprised you don't seem to
see the issue with it.
Bob Worm wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to phigan on Tue Dec 02 2025 11:15:44
Also, try asking your AI to give you an 11-word palindrome.
Time saw raw emit level racecar level emit raw saw time.
I mean... those are 11 words... with a few duplicates... Which can't
even be arranged into a palindrome because "saw" and "raw" don't have their corresponding "was" and "war" palindromic partners...
A solid effort(?)
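For what it's worth, the disagreement can be settled mechanically: the phrase is symmetric word-by-word, but not letter-by-letter, which is exactly what the missing "was"/"war" partners break. A quick sketch that tells the two readings apart:

```python
def is_word_palindrome(phrase):
    # Word-level: does the sequence of words read the same in both directions?
    words = phrase.lower().split()
    return words == words[::-1]

def is_letter_palindrome(phrase):
    # Letter-level: ignore case and spacing, compare the raw letters.
    letters = [c for c in phrase.lower() if c.isalpha()]
    return letters == letters[::-1]

phrase = "Time saw raw emit level racecar level emit raw saw time"
print(is_word_palindrome(phrase))    # True: the word order is symmetric
print(is_letter_palindrome(phrase))  # False: "saw"/"raw" reverse to "was"/"war"
```

So the AI's answer is defensible only under the looser word-level reading.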
Also, try asking your AI to give you an 11-word palindrome.
Time saw raw emit level racecar level emit raw saw time.
If you ask me a question and I give you an incorrect answer, but I
believe that it is true, am I hallucinating? Or am I mistaken? Or is
my information outdated?
But again, is it 'making something up' if it is just mistaken?
In the case of AI, yes.
Gonna disagree with you there... If wikipedia has some info that is wrong, and I quote it, I'm not making it up. If 'it' pulls from the same source, it's not making it up either.
I've heard of people who
are looking for work who are using AI tools to help update their resume,
as well as tailor their resume to specific jobs. I've heard of cases
where the AI tools will say the person has certain skills when they
don't.. So you really need to be careful to review the output of AI
tools so you can correct things. Sometimes people might share
AI-generated content without being careful to check and correct things.
I'd like to see some data on that... Anecdotal 'evidence' is not always scientific proof. :-)
If that's the definition, then okay - a 'mistake' is technically a hallucination. Again, that won't prevent me from using it as the tool it is.
Sometimes people might share AI-generated content without being careful to check and correct things.
But that 'third option' - you're saying it didn't 'find' that somewhere
in a dataset, and just made it up?
Just look at all the recent scandals around people filing court cases prepared by ChatGPT which refer to legal precedents where either the case was irrelevant to the point, didn't contain what ChatGPT said it did or didn't exist at all.
I've not seen/read those. Assuming you have some links? :-)
I mean... those are 11 words... with a few duplicates... Which can't even be arranged into a palindrome because "saw" and "raw" don't have their corresponding "was" and "war" palindromic partners...
I just asked for it, as you suggested. :-)
If that's the definition, then okay - a 'mistake' is technically a hallucination. Again, that won't prevent me from using it as the tool it is.
It's not a "technically" thing. "Hallucination" is simply the term used for AI producing false output.
| Sysop: | HM Derdoc |
|---|---|
| Location: | SKY NET |
| Users: | 38 |
| Nodes: | 9 (1 / 8) |
| Uptime: | 08:06:40 |
| Calls: | 538 |
| Calls today: | 7 |
| Files: | 2 |
| D/L today: | 3 files (14K bytes) |
| Messages: | 12,563 |