or you found the most passive-aggressive AI chatbot. I hope it’s the latter because it’s funnier if you imagine the bot was mimicking sarcasm.
Or the bot was actively excited that it was about to frustrate the fuck out of a meat bag.
yeah that would be kinda funny too
What is the difference between an AI chatbot and a non-AI chatbot in this context?
“AI” = Stuff like ChatGPT that use Large Language Models (LLM)
“Non-AI” = Bots that don’t use LLMs.
So without hardcoded I/O choice options à la 20 Questions, how is the latter supposed to function?
Kind of exactly like that. They’re not capable of very meaningful conversations, but they can be convincing for a minute or two. Plenty of examples and info on https://en.wikipedia.org/wiki/Chatbot
Edit: namely https://en.wikipedia.org/wiki/ELIZA is a great example.
I’m not super familiar with ELIZA but this section of the text
“ELIZA starts its process of responding to an input by a user by first examining the text input for a “keyword”. A “keyword” is a word designated as important by the acting ELIZA script,…”
makes it sound like an LLM with only a small pool of language data? An LM, if you will.
Sure, but in this case the responses are programmed directly to the keywords rather than learned by trawling through datasets for patterns to replicate.
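A minimal Python sketch of that kind of keyword matching (a toy example I made up, not ELIZA’s actual script format or keyword-ranking logic):

# Toy keyword-based chatbot: every reply is hand-written, nothing is learned from data
import random

RULES = {
    "mother": ["Tell me more about your family.", "How do you feel about your mother?"],
    "sad":    ["Why do you feel sad?", "How long have you felt that way?"],
    "help":   ["What kind of help are you looking for?"],
}
FALLBACKS = ["Please go on.", "I see. Can you elaborate?"]

def respond(user_input):
    # Scan the input for any known keyword and return a canned response for it
    words = user_input.lower().split()
    for keyword, replies in RULES.items():
        if keyword in words:
            return random.choice(replies)
    return random.choice(FALLBACKS)

print(respond("I had a fight with my mother"))  # -> one of the "mother" replies

That’s basically the whole trick: keyword in, canned reply out, fallback when nothing matches.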
It never lies to you, at least. Unless you format your questions with lots of double negatives.
Neither one of which can actually answer questions or provide any real help.
In this case, a simple chatbot like the one she interacted with falls under AI. AI companies have marketed AI as synonymous with genAI and especially transformer models like GPTs. However, AI as a field is split into two types: machine learning and non-machine-learning (traditional algorithms).
Where the latter starts gets kind of fuzzy, but think algorithms with hard-coded rules like traditional chess engines, video game NPCs, and simple rules-based chatbots. There’s no training data; a human is sitting down and manually programming the AI’s response to every input.
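To give a sense of how mundane that can be, here’s a made-up NPC dialogue handler (purely illustrative, not from any real game) where a human has written the response to every input:

# Hypothetical hard-coded NPC: no training data, every branch written by hand
def npc_reply(player_choice):
    if player_choice == "ask about quest":
        return "The old mill has been overrun. Clear it out and I'll pay you."
    elif player_choice == "haggle":
        return "Fifty gold. Not a coin more."
    elif player_choice == "leave":
        return "Safe travels."
    else:
        return "I don't understand."  # anything unanticipated falls through to a default

print(npc_reply("haggle"))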
By an AI chatbot, she’d be referring to something like a large language model (LLM) – usually a GPT. That’s specifically a generative pretrained transformer – a type of transformer, which is a deep learning model, which is a subset of machine learning, which is a type of AI (you don’t really need to know exactly what that means right now). Because it doesn’t need hard-coded rules and is instead a highly parallelized, massive model trained on a gargantuan corpus of text data, it’ll be vastly better at its job of mimicking human behavior than a simple chatbot in 99.9% of cases.
TL;DR: What she’s seeing here technically is AI, just a very primitive form of an entirely different type that’s apparently super shitty at its job.
Is CleverBot still around, and as stupid as always?
Cleverbot is still around, but he’s got nothing on SmarterChild.
https://computerhistory.org/blog/smarterchild-a-chatbot-buddy-from-2001/
Oh, SmarterChild. Now that brings me back. I remember there being a slew of chatbots that followed it. I didn’t know what Radiohead was at the time; I learned about the band that way and thought it odd they had their own chatbot.
Funny enough, Slackbot (or at least the current incarnation of it) is definitely based on an LLM. Although I suspect this screenshot is older, because when the current Slackbot gives bad responses it does so a lot more verbosely.
I’ve found it to usually work better than most AI, actually, at least if you ask it stuff like “which Slack threads do I need to follow up on?” or other stuff it can work out based on your Slack activity.
Does this imply an AI chatbot would perform better? Because I reject that notion.
I mean probably. Most non-AI bots would work better, even.
Slack does have AI features now, mostly focused around summarization. I found the features pretty useless and turned them off (as much as they let you, at least). It’s not very often that I don’t care to know the whole context of messages I receive at work. And even for the channels I usually just skim or ignore, the summaries weren’t super helpful; they strip out way too much of the conversation.
Similarly, I really dislike the Apple Intelligence summarization features. They drove me to finally turn off Apple Intelligence on all my devices. Do people find summarization useful? Genuinely curious about the use cases.
HELLO?
I would ignore this poor Slack (and general communication) etiquette myself.
All chatbots are AI, from ELIZA to GPT-4. Did you mean LLM-based, maybe?
Your pedantry is acknowledged and appreciated, but everyone knows exactly what they meant, including you. Languages evolve and words’ meanings shift over time.
Merriam-Webster: https://www.merriam-webster.com/dictionary/ai
Quote, some emphasis mine:
artificial intelligence
[…]
specifically : a program or set of programs developed using tools (such as machine learning and neural networks) and used to generate content, analyze complex patterns (as in speech or digital images), or automate complex tasks
see also generative AI
You really can’t tell?
Technically you can call a chain of three if/else conditions an AI, but come on, you KNOW that’s not what we mean.
It would be worse if it were an AI; then it would just straight up mislead you.