AI is very cloud-centric - usually because AI is difficult to set up on a local machine, and because it often has high hardware requirements. But I for one - in the name of privacy as well as personal computing - prefer to tinker with AI in an offline environment.
A lot of what is going on today happens because of convenience. It happens because people are lazy, and because the people who offer services know users are lazy. In my opinion, this has led to a situation where some amazing products simply cannot happen - and to a market where only the biggest platform makers can compete.
I want to sing the praises of all the open source projects whose developers make their innovations available to those who want to run things on their own hardware. These developers make the world a better place.
What do you guys think? In the name of personal computing and for the sake of tinkerers, do you think it’s ok to run the future on corporate clouds and sacrifice the personal computer?
While I do agree that we need more open source and on-premises software, people vote with their wallets (and their attention). The cloud just makes it much easier for a lot of people to deploy apps and, most importantly, to integrate with other apps.
There are now platforms where you can launch a fully functional AI app in 5 minutes. Obviously this has a lot of value.
I agree that it has a lot of value - and I am using it myself. But I also think that recreating the mainframe paradigm from which the personal computer emerged has a lot of undesirable side effects. The cloud gives corporations a lot of power to abuse, which can end up working against the interests of users. The personal computer was about making expensive, corporate solutions consumer-friendly.
The cloud is also an enabler of waste, in the sense that it's heavily framework-driven - which is great for lazy developers, but may end up creating unmanageable bloat that consumes hardware resources without any good design reason.
In the end, I guess AI will make software more efficient, but the question is whether there will be any privacy left.
In the beginning, computers were large mainframes accessed by remote terminals. When personal computers came onto the scene, resident programs became commonplace. Coming full circle, it’s now all about “the cloud,” with computers acting more like the dumb terminals of old.
For privacy, for cost, and to eliminate the need for an Internet connection, I would much prefer a resident program for AI text-to-speech. Predicting technology is always fraught with peril, but I think we'll have excellent local programs to choose from within a year. At least I hope so.
I agree with a lot of what you say.
But the whole generation that grew up around the turn of the 90s and in the early 2000s knows very well what enormous advantages the cloud has brought.
Fortunately, a lot is also being done for offline machine learning.
For example, you can perform sensor data analysis on a very low-power device, typically in the milliwatt (mW) range and below, which enables a variety of always-on use cases on battery-powered devices. Let's talk about open boards like Arduino.
Machine Learning is not necessarily neural networks: TinyML libraries provide viable alternative learning techniques that apply even to 8-bit microcontrollers with a small amount of RAM.
Starting from these two Machine Learning projects and their libraries, several application examples have been developed within the Arduino community, taking advantage of one of the latest-generation Arduino boards, the Arduino Nano 33 BLE Sense. This board (with an ARM Cortex-M4 processor) was also chosen because it packs several interesting sensors (IMU, microphone, BLE module, environmental sensors) onto a really small surface. Combining the Machine Learning libraries with this fantastic board has produced a number of examples.
I am sure that in the future we will feel more and more the need for offline machine learning systems, but for now there is still a long way to go.