Web 3.0: The Evolution of the Internet towards IoT
The so-called Web 2.0 and 3.0 are two evolutionary stages of the Internet. At the moment we are between these two phases. Let’s explore the differences and how we got here.
The Web: how it all started
Today the internet is almost a primary need: a service taken for granted, permeating our daily lives.
And to think that this technology was born for military applications. If you’ve read our article on cryptography, you’ll remember that it has military origins as well.
“Polemos [conflict] is the father of all and the king of all; some it reveals as gods, others as humans; some it makes slaves, others free.” - Heraclitus
It was during the Cold War, and especially after the launch of Sputnik into space, that the U.S. Department of Defense started investing in research into new communication systems, first explored by researcher Paul Baran and later entrusted to a dedicated research agency, ARPA.
Between 1967 and 1972, ARPA built the first network between computers, using telephone lines. Naturally, the network linked together, and was accessible only to, researchers and students connected to the project.
That network became known as ARPANET, and in 1973 work began on the TCP/IP protocol, which allowed researchers to build an open network that was not controlled by any single entity.
After years of development under a predominantly military direction, the National Science Foundation decided in 1992 that the Internet would not be a government service, but one provided by dedicated companies: the commercial Internet providers we know today, technically called ISPs.
This event, along with the invention of the World Wide Web, brought the internet into its second phase, commercialisation.
The global network, the WWW, was invented by a single researcher at CERN, Tim Berners-Lee. In 1993 came Mosaic, the first widely adopted graphical browser, on which Netscape was directly modelled.
Netscape was the first of the “dot-com” companies to go public, the firms behind the famous dot-com bubble. When the bubble burst in 2000, public opinion held that the internet was just a passing fad with no real future. At the time, only 6.7% of the world’s population had access to the internet.
Many companies went bankrupt, but the technological advances made during the bubble remained.
It’s funny to think that the same thing happened during the Gold Rush, and is probably happening with cryptocurrencies: during and after the gold rush, it wasn’t so much the miners who made money, but the traders who sold them shovels and picks, and those who developed new mining technologies.
Likewise, the value created by new crypto projects will remain, regardless of market trends.
So between the 1990s and 2000s the HTTP protocol was born; companies began using the first APIs; and Ward Cunningham invented the wiki, web pages that can be edited by anyone.
The wiki model transformed the internet from a “read-only” internet into a “read-write” internet, with users creating content without any commercial purpose.
What also drove the advancement of technology in these years was the pressing need to simplify transactions through what would become e-commerce.
Web 2.0: people sink, problems swim
From 2004 until today, the most disruptive changes have occurred. Access to the internet has moved to mobile devices, images have become an ever more important medium, and social media has created an almost parallel reality. With the widespread adoption of technology naturally comes the crime associated with it.
Another component that changes the paradigm is control by governments and the Big 5.
This brings us back to the internet’s military origins. It is precisely this imprint that seems to have made the internet an easy way to “police” its users.
As is well known, this predisposition has been exploited to the extreme by the NSA, the US national security agency, as well as the Chinese government.
Google and Facebook (and to some extent Yahoo), on the other hand, exploit this capability for profit. Their business model stems from the fact that internet users tend not to accept paying for a service such as search or email.
It is thus Big Data that becomes the product. The result is a near-total symbiosis between user and Google/Facebook, in an exchange of value that is not always equal, especially in the case of Facebook. On top of that, the total dependence of companies and people on a few giants is felt the moment their systems go down, turning off the voice of the world, which is reduced to a whisper on alternative channels.
The power wielded by the other members of the Big 5 – Apple, Amazon and Microsoft – is less of an issue because we’re used to the model they embody: providers of software, hardware and online services that usually charge for their products, products everyone more or less understands.
Web 3.0: Humans and Machines
Where Web 2.0 was driven by the advent of mobile, social media and the cloud, Web 3.0 is built largely on three new layers of technological innovation: decentralised networks, artificial intelligence and edge computing.
In response to the problems that emerged in phase 2.0, DeFiers and proponents of decentralisation are making their voices heard.
DeFi’s technological proposal is to decentralise any financial service through smart contracts and dapps. However, decentralisation and blockchain could be applied even outside finance, to any service that requires secure data sharing or peer-to-peer interaction.
Decentralising apps and services would solve the problems of monopoly and of downtime due to a “single point of failure”.
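The availability benefit can be sketched with a toy simulation in Python (purely illustrative; the function names and peer counts are invented for this example): a centralised service dies with its single server, while a decentralised network keeps answering as long as any peer is still up.

```python
# Toy availability model (illustrative only, not a real network protocol).

def centralised_available(server_up: bool) -> bool:
    # Every request goes through one server: if it is down,
    # the whole service is down -- a single point of failure.
    return server_up

def decentralised_available(peers_up: list[bool]) -> bool:
    # A request succeeds as long as at least one peer can serve it.
    return any(peers_up)

# The single server fails: the centralised service is unavailable.
print(centralised_available(False))

# A network of five peers survives the failure of four of them.
print(decentralised_available([False, False, True, False, False]))
```

The point of the sketch is that availability grows with the number of independent peers, whereas the centralised model is only ever as reliable as its one provider.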
In the off-chain world, the innovation of Web 3.0 comes from the use of artificial intelligence. Applications of AI that we already know are search engines, virtual assistants, facial recognition, machine translation and many video games where you can play “against the computer”.
This is just the surface of the sea of possible applications in so many sectors, from military to healthcare, from industrial to transportation.
Devices with built-in AI, such as cars, drones, smart home security systems and the Amazon Echo, are all elements of the famous Internet of Things. You’ll often hear IoT and Web 3.0 used synonymously, as the former is the direct application of the latter.
The main challenge right now is to make AI applications able to update themselves by learning, since today they still need human intervention. Note that the less human intervention is needed, the more autonomous, and in a sense decentralised, an application becomes.
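As a rough sketch of what “self-updating by learning” means, here is a hypothetical online learner in Python: a tiny linear model that adjusts its own parameters with every incoming data point, with no human in the loop. The class name, learning rate and data stream are all invented for illustration.

```python
# Minimal online-learning sketch (hypothetical example, not a real product).

class OnlineLinearModel:
    def __init__(self, lr: float = 0.1):
        self.w = 0.0   # weight
        self.b = 0.0   # bias
        self.lr = lr   # learning rate

    def predict(self, x: float) -> float:
        return self.w * x + self.b

    def update(self, x: float, y: float) -> None:
        # One stochastic-gradient step on squared error: the model
        # corrects itself each time a new observation arrives.
        error = self.predict(x) - y
        self.w -= self.lr * error * x
        self.b -= self.lr * error

model = OnlineLinearModel()
for step in range(1000):
    x = (step % 10) / 10          # stream of incoming data
    model.update(x, 2 * x + 1)    # true relationship: y = 2x + 1

# The parameters converge towards the true values 2 and 1.
print(round(model.w, 2), round(model.b, 2))
```

Real self-updating systems are of course far more complex, but the principle is the same: the arrival of data, not a human operator, triggers each improvement.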
Artificial intelligence requires huge amounts of data to be processed by complex algorithms.
It is precisely in response to the need to process far larger amounts of data than in the previous evolutionary phase that we now speak of edge computing.
Edge computing is a catchy name for something we already know: it is our smartphones, cars, computers, smartwatches, tablets and so on that process the data they themselves generate.
Before the proliferation of network-connected smart devices, the Internet of Things, there was only talk of cloud computing.
Cloud computing relies on a single cloud service provider, to which the devices are connected in a centralised structure. It is the provider that is responsible for processing the data generated by the connected devices.
Edge computing, by comparison, is a much more decentralised structure, as data processing is delegated towards the periphery of the system, i.e. to the individual devices.
This speeds up processing, reduces data traffic, and causes fewer problems in case of connection failures.
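The traffic reduction can be illustrated with a small back-of-the-envelope simulation in Python (the device counts, reading counts and the averaging step are invented for this example): under the cloud model every raw reading travels to the provider, while under the edge model each device sends only a local summary.

```python
import random

# Toy cloud-vs-edge traffic comparison (illustrative numbers only).
random.seed(0)

def device_readings(n: int) -> list[float]:
    # Simulated sensor data generated on the device itself.
    return [random.random() for _ in range(n)]

devices = [device_readings(1000) for _ in range(50)]

# Cloud model: ship every raw reading to a central server for processing.
cloud_traffic = sum(len(readings) for readings in devices)

# Edge model: each device averages its own readings locally and
# transmits only the one-number summary upstream.
summaries = [sum(readings) / len(readings) for readings in devices]
edge_traffic = len(summaries)

print(f"cloud: {cloud_traffic} messages, edge: {edge_traffic} messages")
```

In this toy setup the edge model sends 50 messages instead of 50,000; the trade-off is that the central provider sees only summaries rather than the raw data.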
WWW creator Tim Berners-Lee ideally anticipated Web 3.0, which he called “Semantic Web”. He envisioned a world in which daily life would be managed by “machines talking to machines”.
All in all, whether on blockchain or not, the future looks increasingly decentralised.