Apparently it's time to do this again:
ChatGPT and other LLM tools do not "know" anything. They work by stringing words together based on how frequently those words appear together in the model's training data.
LLM tools are notorious for literally making shit up, particularly when it comes to complex legal topics (like immigration) and material that originated in a language other than English (like Japanese). For this reason we do not recommend that anyone use ChatGPT or any other such tools for the purposes of researching their move to Japan. If you feel you must use it, at least spend some time confirming the information it gives you.
As far as the subreddit is concerned, LLMs impact two rules in particular:
Rule 2: Do your own research before posting
As mentioned above, LLMs are notoriously bad at the very subjects this subreddit is focused on. As such, "I asked ChatGPT" is not considered sufficient research for the purposes of Rule 2.
We're happy to help you confirm or deny ChatGPT's claims, but you still need to show some evidence of doing your own research beyond just asking ChatGPT.
Rule 6: Don't know? Don't post!
LLMs do not know anything. They are not experts in any subject. As such they fall squarely into "Don't know? Don't post!"
Do not use ChatGPT/LLMs to answer people's questions. No "please" here. Do not do it.
Do not use ChatGPT to "clean up" your answers. Use your own words. It's ok to use these tools for translation purposes, but please limit your use to just translation.
Any comments that we believe are LLM-created will be removed by the moderators immediately. Persistent or serial offenders will be banned from the subreddit.
by dalkyr82
Yea I’ve noticed more and more people use LLMs to find stuff, format their replies (and posts) and so on, which is so off-putting to me…
I’m still old school and just Google stuff and use my brain to figure out if what I find is correct or just weird shit on the internet. It’s fine to do that with LLMs I guess, but always use your brain.
Glad you’re sending a PSA about it.
One thing I’d say is, if someone is using LLMs to translate their post or replies, because their English isn’t good, I don’t mind that as much, but Google translate is also fine by me lol
So here’s what I personally advise with regards to AI/LLM/Chatbots in general:
You remember in school when teachers prohibited you from citing Wikipedia in any of your papers? Because, you know, just any old random person can go on there and edit stuff. It's the same thing with ChatGPT. It could just be really well-written made-up stuff and you would never know.
BUT
The students with a lot of ingenuity (maybe you were one) knew that each Wikipedia article had cited sources for the things written in it down at the bottom of the page. So you can ***check for yourself*** to see if it's made up, then ***cite that source.***
There’s a little-known chatbot out there called **Perplexity**. It's a little more limited than ChatGPT, BUT it cites the sources it uses for the information it retrieves right in the response. You can click on the sources and ***check for yourself*** whether the chatbot is right or just making stuff up.
I mean, formatting your answers so they're easily understood and comprehensible is a net good for everyone. I don't see what the problem is with that one 🤔 You want people to write incoherent thoughts as-is?
Thank you for taking this position!! I totally agree 👍
Some people I know tell me using Gemini is just better than using Google search…I’d at least verify info before taking action on it. Common sense.
IF you are to use AI, use Copilot or GPT with the web search function.
Don’t trust what it says, but use it for the links. Then do the reading.
DO NOT TRUST WHAT IT SAYS, just USE IT FOR THE LINKS
That was my PSA for the day