I coded an Alexa Skill once. It was tedious, and the platform was garbage. After a while my skill was delisted for spurious reasons; even worse DX than the Google and Apple app stores. A complete dumpster fire from start to finish.
All obsolete now that LLMs are here. I don’t think any devs will miss it.
Alexa and LLMs are fundamentally not that different from each other. It's just a slightly different architecture and, most importantly, a much larger network.
The problem with LLMs is that they require immense compute power.
I don’t see how LLMs will get into the households any time soon. It’s not economical.
To train, yes. But you can run a relatively simple one like Phi-3 on quite modest hardware.
The immense computing power for AI is needed for training LLMs; far less is needed to run a pre-trained model on a local machine.
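As a rough illustration: with a tool like Ollama installed, running a small pre-trained model locally can be as simple as the commands below (assuming the `phi3` model tag; model names and sizes vary):

```shell
# Pull Microsoft's Phi-3 Mini (~3.8B parameters, a few GB quantized)
ollama pull phi3

# Run it locally; inference happens on your own CPU/GPU, no cloud round-trip
ollama run phi3 "Turn my living room lights on"
```

This is a sketch of the general idea, not an endorsement of any particular runtime; llama.cpp and similar tools offer comparable workflows.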
Well, yeah. You could slap Gemini onto Google Home today; you probably wouldn't even need a new device for that. The reason they don't do that is economic.
My point is that LLMs aren’t replacing those devices. They are the same thing essentially. Just one a trimmed version of the other for economic reasons.
The Alexa skill store is a "prime" example of Amazon's we-don't-give-a-shit attitude. For years they've turned their back on third-party developers by limiting skill integration. A well-designed skill on that store gets a two-star rating. When everything in your app store is total shit, maybe the problem is you, Amazon?! It's been like that for years; I completely avoid using skills because they only lead to frustration.
LLM integration into an Alexa device could be a big improvement, but at current speeds, doing it at that scale seems likely to produce a laggy or very dumbed-down system. Frankly, I'd be happy if Alexa could just grasp the concept of synonyms, and could attempt a second-guess interpretation of my speech rather than assuming I've just asked the exact same question in rapid succession with a more frustrated tone.
Every damn smart light skill has different syntax, and there is no way to get the Alexa app to just fucking tell me what the syntax is. The "NUI" (no user interface) approach is cute but really falls flat when trying to do complex tasks or mix brands of smart devices.
Also, it might be Google that does this more often, so I won't blame Alexa necessarily, but a lot of the time when I ask it to play my liked songs, I end up getting a song called "my liked songs" instead. It hasn't happened in a while, so however I'm phrasing it now must be correct, but it's not something I'm consciously aware of.
Yeah the syntax stuff was the biggest disappointment for me as a dev, too. There’s very little natural language processing going on, just simple template-based pattern matching. So basic and inflexible.
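For anyone who hasn't seen it: an Alexa skill's "interaction model" is roughly a JSON file of fixed sample utterances with slots, which is where the template matching comes from. A simplified fragment (skill and slot names here are made up):

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "my lights",
      "intents": [
        {
          "name": "TurnOnIntent",
          "slots": [
            { "name": "room", "type": "ROOM_TYPE" }
          ],
          "samples": [
            "turn on the {room} lights",
            "switch on the lights in the {room}"
          ]
        }
      ]
    }
  }
}
```

Anything phrased outside those templates ("make the {room} bright") simply doesn't match, which is why every skill ends up with its own rigid syntax.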
Whoever made a song called my liked songs is an evil genius.
I never dove into the skill API, but I’d imagine you’re setting phrases up. Can LLMs really help there? Like asking Alexa general information, I could see how LLMs were helpful, but asking it to turn lights on, how would that help?
It may be better at identifying intents, especially across different dialects and languages. You could tell it to return the response in a specific format, say JSON. I've never tried it, but it might work.
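A minimal sketch of that idea: prompt the model to map an utterance onto a fixed intent schema and reply with JSON only, then validate the reply defensively (LLMs sometimes wrap JSON in prose, so parsing must not crash). The intent names and schema here are invented for illustration, and a canned string stands in for the real model call:

```python
import json

# Hypothetical prompt asking the model to act as a smart-home intent parser.
PROMPT_TEMPLATE = """You are a smart-home intent parser.
Map the user's request to one of these intents: TurnOnDevice, TurnOffDevice,
PlayMusic, Unknown. Reply with JSON only, shaped like:
{{"intent": "...", "device": "...", "room": "..."}}

User request: {utterance}
"""

KNOWN_INTENTS = {"TurnOnDevice", "TurnOffDevice", "PlayMusic"}

def parse_intent(model_reply: str) -> dict:
    """Validate the model's reply; fall back to Unknown on any garbage."""
    try:
        data = json.loads(model_reply)
    except json.JSONDecodeError:
        return {"intent": "Unknown", "device": None, "room": None}
    if data.get("intent") not in KNOWN_INTENTS:
        data["intent"] = "Unknown"
    return data

# Canned reply standing in for a real model call:
reply = '{"intent": "TurnOnDevice", "device": "lights", "room": "kitchen"}'
print(parse_intent(reply)["intent"])  # → TurnOnDevice
```

The downstream home-automation code then dispatches on `intent` exactly as it would on a matched Alexa template, so the LLM only replaces the brittle phrase-matching layer.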