My take on the news:
Apple in talks with Google to use Gemini: It’s kind of weird, but Apple is taking a hybrid approach to catching up in Generative AI (the first step was to realize they are lagging behind in AI and that AI is critical for staying afloat, not just a fancy fad). On the one hand, they are heavily investing in Generative AI (see the news on MM1 below), but on the other hand, they are considering using Google’s Gemini systems in the meantime. To me, this means that Apple’s Generative AI is not good enough yet. Apple was distracted for years by things like the “Apple Car” (Titan project) or, well, the Apple Vision Pro, which has an uncertain future. Now they have to focus on AI.
Apple unveils MM1 model for text and images: Yes, Apple is starting to get tangible results in Generative AI, such as the multimodal MM1 generative model that was reported recently. I took a look at the research paper posted on arXiv that describes its capabilities. It’s not like Apple is breaking new ground here, but at least they are taking their weak position in AI seriously. Apple urgently needs good Generative AI capabilities for things like revamping Siri, which by now is clearly an outdated piece of software, as I commented in a blog post a year ago.
Fitbit uses Google's advanced AI to become a personal coach: As you may know, Fitbit was acquired by Google a few years ago, and now they are leveraging Gemini’s powerful conversational capabilities to bring personalized reports and advice to users, as reported by PC Magazine. That makes sense for Google, but I guess it would make sense for Apple as well once they have better AI to use in their systems. In recent years, Apple has been incorporating more and more health-related functionality into their iPhones and watches, but without Generative AI so far.
This week’s anecdote
I recently found in Wired magazine an account of how the Transformer architecture became the next big thing in technology, serving as the technical foundation of the Generative AI revolution now underway. It’s well written and entertaining. You shouldn’t miss it if you are interested in Generative AI. You can check the original Wired article here.
What is…?
Quantum dots.
Initially, I thought “Quantum Dots” was a marketing term that tried to sound futuristic but had no technical substance. Boy, was I wrong.
If your TV set is recent, it may have the “Quantum Dots” enhancement to its screen.
Quantum dots are tiny crystals that absorb light of one wavelength and re-emit it at another. They were invented at Bell Labs in 1982. The crystals are added as a layer above the backlight of an LED TV. That’s it. But compared with traditional LCD TV sets, quantum dots translate into more brightness, because instead of filtering out (and wasting) the unwanted colors at each pixel, they convert the light from one color to another.
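If you’re curious how the same crystal can give off different colors just by changing its size, here is a rough, purely illustrative Python sketch. It uses a simplified particle-in-a-box estimate with approximate CdSe material constants (my own assumptions for illustration, not figures from any TV spec, and it ignores the Coulomb term of the full Brus equation): the smaller the dot, the bigger the energy gap, and the bluer the re-emitted light.

```python
# Illustrative sketch only: a simplified particle-in-a-box estimate of how a
# quantum dot's size shifts the color of the light it re-emits.
# Constants are rough values for CdSe; real display dots behave differently.

H = 6.626e-34      # Planck constant (J*s)
C = 2.998e8        # speed of light (m/s)
M0 = 9.109e-31     # electron rest mass (kg)
EV = 1.602e-19     # joules per electronvolt

BULK_GAP_EV = 1.74   # approximate bulk band gap of CdSe (eV)
M_E = 0.13 * M0      # approximate electron effective mass in CdSe
M_H = 0.45 * M0      # approximate hole effective mass in CdSe

def emission_wavelength_nm(diameter_nm: float) -> float:
    """Estimate the emission wavelength of a dot with the given diameter."""
    d = diameter_nm * 1e-9
    # Confinement energy grows as 1/d^2: smaller dot -> bigger gap -> bluer light.
    confinement_j = (H ** 2 / (8 * d ** 2)) * (1 / M_E + 1 / M_H)
    total_gap_ev = BULK_GAP_EV + confinement_j / EV
    return (H * C / (total_gap_ev * EV)) * 1e9

for d in (2.0, 3.0, 4.0, 5.0, 6.0):
    print(f"{d:.0f} nm dot  ->  ~{emission_wavelength_nm(d):.0f} nm emission")
# Roughly: 2 nm dots come out blue, 5-6 nm dots come out red --
# same material, different size, different color.
```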
Now you know what “quantum dots” are.
This week’s blog piece
I’m publishing the article “Are the Ray-Ban | Meta Real Smart Glasses?” on Medium. Its highlights are:
Many of us assume (perhaps mistakenly) that “smart glasses” have to include augmented reality.
The Ray-Ban Meta glasses have no AR at all.
That doesn’t make them useless.
I include the list of out-of-the-box functionalities of the Ray-Ban Meta glasses.
One of their functions lets you ask about what you have in front of you, using a conversational AI chatbot that takes the question and gives the answer by voice.
Then I propose several possible use cases, including a tourist guide, translating street signs, reading a book aloud, guiding vision-impaired users, and taking “provisional” pictures to be enhanced later.
I end with a discussion of where smart glasses are heading.
This is the end of the free preview of SkepTech. Please become a paying subscriber for more exclusive content, including a “Friend Link” to the article mentioned above and links to some curated articles on Medium.