News
Google makes a load of announcements: Besides renaming Bard to Gemini (and giving it a needed overhaul), the Mountain View behemoth also announced Lumiere, a text-to-video model that aims for photorealism. I know, compared to OpenAI’s Sora it looks rather lukewarm, but anyway, it marks (alongside Sora) the beginning of the video-generation era. We can shortly expect such systems to be used in movie studios and, of course, in fake news videos… Oh, and another Google announcement is Gemma, its entry in the open-source LLM arena.
Apple shuts down its EV car project: After billions of dollars invested and rivers of ink spilled in news and commentary, Apple finally pushed the shutdown button and ended the “Titan” project. It started in 2014, when self-driving cars seemed to be the next big thing. Over the years, the Titan project went through several vision changes (from a fully autonomous car to something more Tesla-like) and several leadership changes. In the end, the roughly 2,000 employees are mostly being transferred to generative AI projects or laid off.
Andrej Karpathy leaves OpenAI (again): The OpenAI lead researcher announced his departure on X but clarified that “nothing “happened” and it’s not a result of any particular event, issue or drama.” We wonder how much this could affect OpenAI’s technical capabilities. Karpathy had left OpenAI before, around the time of the Sam Altman firing-and-rehiring drama.
BYD plans to build a factory in Mexico: Tesla is facing increasing European and Asian competition. Word is that the Chinese car manufacturer BYD (“Build Your Dreams”) is seriously considering building a plant in Mexico. Why Mexico? Because of the TLCAN (NAFTA, now the USMCA), of course! They intend to manufacture the cars in Mexico, where labor is cheaper, and sell them in the US and Canada without paying import fees. It seems the BYD factory will be built in Nuevo León, just like the Tesla plant currently under construction, but I couldn’t find out exactly when construction will start.
This week’s quote
This time, I have another non-technical one, more on the personal-development side (or perhaps also the entrepreneurial side):
Feeling ready is too late
– Kim Witten, PhD
This quote is actually the title of an article on Medium (the link is below, for premium subscribers). Witten argues that if you wait to feel comfortable with your project before jumping into action, you risk waiting too long. She says it’s preferable to “be a little bit uncomfortable.”
This week’s product
I discovered one of Google’s least popular products: NotebookLM, which is basically a note-taking application with generative AI capabilities. You can think of it as “the Evernote of AI times.”
NotebookLM can ingest most of the content you have generated (your blog posts, your notes, anything) and configure itself as your “thought partner.” At least, that’s how it partners with Steven Johnson, a book author now working at Google on the NotebookLM project.
I haven’t really tested NotebookLM because it’s not available where I’m living now (Mexico), but I will surely test it later on (perhaps using a VPN or during a trip to the US). It looks like a very promising product, although Google has an extensive record of developing and then shutting down its projects…
What is…?
LoRA (Low-Rank Adaptation)
In deep learning, LoRA is a technique for fine-tuning large language models (LLMs) efficiently. It lets you adapt a pre-trained LLM to new tasks without retraining all of its original parameters, which makes it faster and more memory-efficient than traditional fine-tuning approaches (if we can talk about “traditional” LLM fine-tuning methods yet).
LoRA freezes the pre-trained weights and inserts a set of small low-rank matrices into the model. These matrices act as adapters that learn to modify the model's behavior for the specific task at hand.
By training only these low-rank adapters, LoRA performs well on the new task while updating significantly fewer parameters than full fine-tuning of the pre-trained model would.
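To make the idea concrete, here is a minimal NumPy sketch of a LoRA-adapted linear layer (my own illustrative toy, not code from any real LoRA library): the frozen weight W gets a trainable low-rank update B·A, and you can see how few parameters the adapters add.

```python
import numpy as np

class LoRALinear:
    """Toy LoRA-adapted linear layer: y = W x + (alpha/r) * B A x."""

    def __init__(self, in_features, out_features, rank=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pre-trained weight (in practice, loaded from the base model).
        self.W = rng.standard_normal((out_features, in_features))
        # Trainable low-rank adapters. B starts at zero, so at initialization
        # the adapted layer behaves exactly like the pre-trained one.
        self.A = rng.standard_normal((rank, in_features)) * 0.01
        self.B = np.zeros((out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # The low-rank product B A approximates the weight delta that
        # full fine-tuning would have learned for W.
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

    def trainable_params(self):
        # Only A and B are trained; W stays frozen.
        return self.A.size + self.B.size

layer = LoRALinear(in_features=1024, out_features=1024, rank=4)
full = layer.W.size              # parameters full fine-tuning would update
lora = layer.trainable_params()  # parameters LoRA actually trains
print(f"full: {full:,}  lora: {lora:,}  ratio: {lora / full:.2%}")
```

For a 1024×1024 layer with rank 4, the adapters hold 8,192 trainable parameters versus the 1,048,576 of the frozen weight, under 1% of the original. That ratio is where LoRA's memory savings come from.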
There you have it. I admit this is a very high-level summary of what I picked up from a few explanations, not something I could claim to be an expert on. But you can check the nitty-gritty details at Hugging Face or on arXiv.
Blog Piece Highlights
My Medium post this time is “How Generative AI is Changing Language Learning Forever,” published in the “Generative AI” publication. It’s available to my premium subscribers (Friend Link below). Its highlights are:
For most of history, “language learning” simply didn’t exist for most of the population.
Proper language-learning technology started with books (yes, book tech, courtesy of Gutenberg).
Many technologies have been used for language learning, including records, cassettes, CDs, PC software, Internet-powered interactions, and mobile apps.
I was myself involved in a language-learning startup, “Avalinguo,” which shut down a while ago.
Currently, Duolingo is the dominant language-learning app, but it has limitations, particularly in making the user speak: of the four language skills, speaking is Duolingo’s weakest.
The “AI way” of learning a language uses generative AI to make the user speak, thus overcoming Duolingo’s limitations.
In a few years, the “AI way” will become the default language-learning method.
A couple of AI apps for language learning are reviewed.