Web 3.0: Towards The Decentralized Web
The Internet has fundamentally changed how human beings organize societies. Cheap access to information from all over the world has made organizing political movements much easier, and anonymous posting on various websites has created a culture where people freely exchange their ideas and create new socio-political movements. On the other hand, governments have scaled up surveillance operations and are cracking down on civil liberties more than ever.
The current internet space faces a bottleneck: a handful of internet companies run more than seventy per cent of the landscape. The internet was always meant to be a space where decentralised nodes share information. The current landscape is alarmingly centralized, and we need new technology to build systems and infrastructure that decentralize the internet once again.
In this guide, we will analyze the current status of the Internet as it transitions to Web 3.0 thanks to blockchain technology.
Evolution of the Internet
By the middle of the twentieth century, computers were fast becoming an integral part of the work of governments, multinational companies and academic institutions. From storing data to computing complex mathematical problems, computers were revolutionising scientific research. Sharing computational resources amongst many users simultaneously was becoming important for multitasking, and computer scientists all over the world were trying to solve the problem of time-sharing.
In the early 1960s, Paul Baran proposed the idea of distributed networks that store data in message blocks. By the mid-1960s, Donald Davies had independently arrived at the same idea and coined the term packet switching, proposing to build a commercial data network for the UK.
The Advanced Research Projects Agency (ARPA) of the US Department of Defense launched its own network, known as ARPANET, in 1969. Basing its design on the packet switching proposed by Davies and Baran, together with the mathematical queueing work of Leonard Kleinrock at UCLA, this network was instrumental in moving research and development of the internet in the right direction.
In the early 1970s, French researchers started their own network, called CYCLADES. It was also built on the principles of packet switching, but it differed from the American ARPANET in proposing an interconnected network in which users could exchange data across multiple nodes.
These networks were working independently of each other in the early 1970s, until ARPA and a global group of computer scientists known as the International Network Working Group created the first protocols to connect them into a network of networks.
A universal protocol was required to connect all these networks into one. In 1974, Bob Kahn and Vint Cerf, then at Stanford University, published extensive research drawing on the French CYCLADES network which formed the basis of the Transmission Control Protocol (TCP) and the Internet Protocol (IP). In the 1980s the US National Science Foundation created a network connecting the supercomputers in the USA to share research data. Demand to access NSFNET from outside the USA brought about the Domain Name System (DNS) and the rapid adoption of the TCP/IP protocols. These advancements eventually led to the first version of the internet.
The first iteration of the internet is called Web 1.0. Although there is no clear date for when Web 2.0 emerged, historians generally place the Web 1.0 era between 1991 and 2004. Web 1.0 consisted mainly of static web pages hosted by Internet Service Providers or free website hosting servers. The pages were static HTML documents with little to no dynamic engagement with the user, typically written content sprinkled with GIFs and pictures.
The only prevalent form of dynamic interaction amongst Web 1.0 users was the Bulletin Board System (BBS). Internet users would visit these BBS sites and post comments, as one would on a physical bulletin board.
The term Web 2.0 started appearing in the media by the late 90s. Early in the new millennium, websites started taking on dynamic characteristics. Users could create profiles and log in to these pages, forming communities, and people started creating content catering to those communities.
The age of social media was born. Users started to rely on their browsers for direct interaction with websites rather than downloading content. Internet startups emerged with business models based on user interaction and participation. Monetization of attention is still the primary source of income for most internet social media companies.
Web 2.0 brings together server-side software, web browser capabilities, content syndication and standard practices that persist today.
Tim Berners-Lee, the creator of the World Wide Web, predicted early on that the eventual iteration of the Internet would be a semantic web. Interpreting human input accurately and seamlessly integrating devices, humans and artificial intelligence is the primary goal of Web 3.0. Integrating data with semantics requires technologies like the Resource Description Framework (RDF), the Web Ontology Language (OWL) and the Extensible Markup Language (XML), which are currently used for representing metadata.
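The core idea behind RDF-style metadata is that knowledge is expressed as subject-predicate-object triples, which machines can then query. The following is a minimal, dependency-free sketch of that model in Python; the class, the `ex:` names and the wildcard query are illustrative stand-ins, not a real RDF library.

```python
# Toy in-memory triple store illustrating the Semantic Web's
# (subject, predicate, object) data model. Real systems use RDF
# libraries and SPARQL; all names here are illustrative.
from typing import List, Optional, Set, Tuple

Triple = Tuple[str, str, str]

class TripleStore:
    def __init__(self) -> None:
        self.triples: Set[Triple] = set()

    def add(self, subject: str, predicate: str, obj: str) -> None:
        self.triples.add((subject, predicate, obj))

    def query(self, subject: Optional[str] = None,
              predicate: Optional[str] = None,
              obj: Optional[str] = None) -> List[Triple]:
        # None acts as a wildcard, like a simple SPARQL triple pattern.
        return [t for t in self.triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

store = TripleStore()
store.add("ex:Alice", "ex:knows", "ex:Bob")
store.add("ex:Alice", "ex:age", "30")
store.add("ex:Bob", "ex:knows", "ex:Alice")

# Who does Alice know?
print(store.query(subject="ex:Alice", predicate="ex:knows"))
```

The point of the triple representation is that generic software, rather than page-specific code, can answer questions over the data, which is what "machine-understandable metadata" means in practice.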
Artificial intelligence systems would help interpret this metadata at a human level and bridge the gap between thought and action. Another important aspect of Web 3.0 is the decentralization of data storage. Web 2.0, the present-day iteration of the internet, is plagued by centralised data storage: a single point of attack makes the data unreliable and easy to seize and manipulate.
The Internet of Things (IoT) is a long-held vision that computer scientists have been building towards, and Web 3.0 will help make it a reality. Increasing data speeds will make the internet ubiquitous, and that connectivity can be leveraged to create smarter, better consumer products.
Problems With Web 2.0
The current structure of our internet is rife with problems. Internet companies prey on human emotions to increase the time users spend engaging with their websites. In an age where data is perhaps the most valuable resource, storage servers do not assure data security.
Nefarious actors, from identity thieves to big-data skimmers, are all after your data, and protecting this valuable resource is of the utmost importance. Let us look at these problems with Web 2.0 in detail.
Lack of Privacy
When the internet was first developed, one of its primary features was its culture of anonymity. Anonymous message boards were the basis of communication in the first iteration of the Internet. As the social media revolution progressed, internet companies based their business models on acquiring highly accurate and verifiable user data. Instead of being the customers, users became the product.
Centralization of Data
The bulk of internet data is stored in centralized servers around the world, a major security issue for today's internet companies. These centralized data hubs provide a single point of attack for hackers, including Distributed Denial of Service (DDoS) attacks. Data that various institutions stored with these companies has repeatedly leaked, leading to identity theft.
Centralised storage servers also make it easy for governments and corporations to suppress uncomfortable discussions by censoring at specific points without drawing public attention.
Why is Web 3.0 Important?
In 1999, an American innovator named Kevin Ashton proclaimed that the future of the world would be defined by the internet. Every household item in an average home, from the air-conditioner to the bathroom mirror to the walls that divide the space into rooms, would run smartly and efficiently with the help of the Internet. This future came to be known as the Internet of Things. In the current infrastructure of the Internet, we have started to see the beginnings of this future: most things have a 'smart' version of themselves.
Be it a smart TV or a smart bed, designers have been successfully integrating the Internet into everyday products. The front-end of this future is here, but the back-end is still lagging. Web 3.0 has the power to propel this future forward: artificially intelligent systems and data autonomy will augment the Internet into a decentralized and more secure version of today's Internet.
Today, a consumer sitting in Asia can easily procure goods from America and vice versa. But with all this innovation comes a fundamental problem. Say Adam sells an apple to Lisa via one of these websites: Adam lists his product on a website serving his part of the world, and Lisa, scrolling through the site, finds these nice apples and orders them.

At this point, website X procures the product from the seller and delivers it to the buyer. The problem is that website X skims a certain amount of value from the transaction: it provides a delivery service, but it markets itself as if it were selling the product.
Web 3.0 can revolutionize this industry. Open-source marketplaces can be built to replace the existing systems: marketplaces that run with artificial intelligence, require no grunt work to maintain, and thereby return the value of the product to the buyer and seller.
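The disintermediation idea above can be sketched as a toy escrow contract: the buyer locks the full price, and on delivery 100% of it goes to the seller, with no platform skimming value in between. This is plain Python with an in-memory ledger, an illustrative assumption rather than a real smart-contract API.

```python
# Toy escrow illustrating a fee-free peer-to-peer sale. A real
# implementation would be an on-chain smart contract; the ledger
# dict and the names here are purely illustrative.

class Escrow:
    def __init__(self, buyer: str, seller: str, price: int) -> None:
        self.buyer, self.seller, self.price = buyer, seller, price
        self.funded = False
        self.released = False

    def fund(self, balances: dict) -> None:
        # Buyer locks the full price in the escrow.
        assert balances[self.buyer] >= self.price, "insufficient funds"
        balances[self.buyer] -= self.price
        self.funded = True

    def confirm_delivery(self, balances: dict) -> None:
        # On delivery, 100% of the price goes to the seller.
        assert self.funded and not self.released
        balances[self.seller] += self.price
        self.released = True

balances = {"lisa": 100, "adam": 0}
deal = Escrow(buyer="lisa", seller="adam", price=30)
deal.fund(balances)
deal.confirm_delivery(balances)
print(balances)  # no intermediary fee was deducted along the way
```

Contrast this with the marketplace scenario above, where website X would have taken a cut of the 30 before it reached the seller.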
Medical technology will also transform radically with the advent of Web 3.0. In the current infrastructure of medicine and health care, the solutions available to patients are generic and conforming. Everybody has a slightly different chemistry, and therefore their biology varies from person to person. Generic medicine can only go so far in treating atypical cases, and research and development are predicated on financial incentives. Web 3.0 can bring about a revolution in personalised medicine: artificial intelligence and 3D imaging technology can be leveraged to create solutions geared towards individuals, making the industry more cost-effective and precise.
These are just a few examples of industries waiting to be renovated and revolutionized. In a broader sense, the advent of Web 3.0 will radically reshape the world of capitalism.
The Role of Blockchain Technology in Web 3.0
In 2008, Satoshi Nakamoto published the Bitcoin whitepaper. Computer scientists around the world rapidly started coming up with ideas and innovations for revolutionizing existing industries with the power of decentralization. Web 3.0, or as Berners-Lee called it, the Semantic Web, was one such idea. Creating a new version of the internet was an urgent matter: the power structure of the internet had to be decentralized.
The Web 3.0 software stack is radically different from its previous versions. The entire client-server relationship is upended by decentralizing server storage. The transition has to be gradual and measured, because blockchain technology is still in its nascent stage and converting fully to Web 3.0 at once would result in a loss of bandwidth.
The Web 3.0 architecture built by leveraging blockchain technology will have the following layers:
- Application layer: This is the software layer through which users will interact. It will consist of dApp browsers, platforms for hosting applications, decentralized applications and various programming languages. The purpose of this layer is to cater to users by working out viable business models according to user preference patterns.
- Service layer: This layer will operate and maintain the application layer. It consists of smart contracts, decentralized autonomous organizations (DAOs) which will ratify smart contracts, data feeds from the application layer, an off-chain computation layer for reducing the processing load on the blockchain (sidechain), private two-way channels for direct transactions between users, digital wallets for storing cryptocurrencies, blockchain oracles used to transmit real-world information for the smart contracts and many other optional components.
- Protocol layer: This layer constitutes the root of all blockchain applications. It consists of consensus algorithms, which form the logic by which the different nodes stay aligned; virtual machines, which are sandboxed execution environments that isolate untrusted code from various nodes and execute it safely, thereby preventing denial of service attacks; sidechains; etc.
- Network and Transport Layer: This layer forms the peer-to-peer network used by the blockchain. Its most important function is as a transport medium: data is transferred via this layer, which also seeks out new nodes and connects to them, adding an extra layer of protection in transport.
- Infrastructure Layer: This layer will provide the means to create a blockchain. Its systems and algorithms will provide blockchain-as-a-service, mining-as-a-service, a robust network of nodes, tokens, NFTs, etc. Developers can leverage this technology to create solutions geared towards consumers.
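To make the protocol layer's consensus role concrete, here is a minimal proof-of-work sketch, one common consensus mechanism (not the only one; many chains use other algorithms). The block structure, field names and tiny difficulty are illustrative assumptions.

```python
# Minimal proof-of-work sketch: mining is expensive, verification is
# cheap, and that asymmetry lets decentralized nodes agree on a chain
# without a central server. Difficulty and fields are toy-sized.
import hashlib
import json

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def mine(block: dict, difficulty: int = 3) -> dict:
    """Find a nonce so the block hash starts with `difficulty` zeros."""
    block = dict(block, nonce=0)
    while not block_hash(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

def is_valid(block: dict, difficulty: int = 3) -> bool:
    # Any node can check the work with a single hash computation.
    return block_hash(block).startswith("0" * difficulty)

mined = mine({"index": 1, "prev": "0" * 64, "data": "hello web3"})
print(mined["nonce"], is_valid(mined))
```

Because each block also commits to the previous block's hash (`prev`), tampering with any historical block would force redoing the work for every block after it, which is what makes the shared ledger hard to manipulate.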
How is RSK Enabling Web 3.0 Adoption?
Many people in developing countries have yet to be brought into the Internet age. Doing so would enable them to build enterprises and earn a living for their families and communities. The current internet ecosystem does not invite these people to participate in a way that lets them accrue real value from it.
RSK & RIF tackle the complexities of building Web 3.0 with a complete tech stack covering the most important pieces of the puzzle. Let's go over two clear examples:
Decentralized Data Networks
As previously mentioned, decentralization of data storage is one of the primary drivers of Web 3.0. RIF Storage is the decentralized storage layer protocol that provides a clear solution to this problem.
RIF Storage interfaces with users who upload data, providers who offer storage space, and developers who can integrate the protocol into their dApps. The user side of the protocol is generally composed of third-party solutions.
The two main components of the user layer are the storage gateway, which allows users to store data for a price without hosting a node, and the pinning service, which keeps data available even after the uploading node has gone offline. The storage side of the protocol consists of integrated third-party storage protocols like Swarm and IPFS; users can store their data with either of them seamlessly.
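The key idea behind IPFS/Swarm-style storage is content addressing: data is located by the hash of its content rather than by a server location, so any node holding the bytes can serve them and the fetcher can verify integrity. The sketch below illustrates that idea; the dict standing in for a network of peers is an illustrative assumption.

```python
# Content-addressed storage sketch: the address of a blob is the
# SHA-256 hash of its bytes. The dict below stands in for a network
# of storage peers; real systems (IPFS, Swarm) chunk data and use
# their own multihash formats.
import hashlib

class ContentStore:
    def __init__(self) -> None:
        self._blobs: dict = {}

    def put(self, data: bytes) -> str:
        address = hashlib.sha256(data).hexdigest()
        self._blobs[address] = data
        return address  # a content address, not a server location

    def get(self, address: str) -> bytes:
        data = self._blobs[address]
        # Re-hashing on retrieval proves no node tampered with the data.
        assert hashlib.sha256(data).hexdigest() == address
        return data

store = ContentStore()
addr = store.put(b"decentralized document")
assert store.get(addr) == b"decentralized document"
```

Pinning, in these terms, is simply paying some node to keep the bytes behind an address available after the original uploader goes offline.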
Self-Sovereign Identity
Digital identity is one of the pivotal problems that must be solved in order to have a functioning decentralized web. The physical identities that form the backbone of the modern economic system are, in a virtual setting, more often than not unverifiable without a central authority ratifying them.
The industry needs a protocol through which users can hold self-sovereign identification. Self-sovereign identities (SSIDs) form a cornerstone of Web 3.0 by bringing new actors into the mainstream economy. If users own verifiable identities that are private and can be used to verify peer-to-peer transactions, they can participate in trade and commerce. The main hurdle on the road to digital identity is building a robust SSID protocol.
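The mechanics of a self-sovereign identity can be sketched as: derive an identifier from key material the holder controls, sign claims with that key, and let peers verify the signature. Real SSID systems use asymmetric signatures (sign with a private key, verify with a public key); to keep this sketch dependency-free, HMAC with a shared secret stands in for the signature step, and the `did:example:` identifier format is illustrative.

```python
# Sketch of self-sovereign identity verification. HMAC here is a
# stand-in for real asymmetric signatures: in practice the holder
# signs with a private key and anyone verifies with the public key,
# without a central authority issuing the identity.
import hashlib
import hmac

def derive_did(key_material: bytes) -> str:
    # A decentralized identifier derived from the holder's own key
    # material, not assigned by a registrar.
    return "did:example:" + hashlib.sha256(key_material).hexdigest()[:16]

def sign_claim(secret: bytes, claim: bytes) -> str:
    return hmac.new(secret, claim, hashlib.sha256).hexdigest()

def verify_claim(secret: bytes, claim: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign_claim(secret, claim), signature)

secret = b"holder-key-material"
did = derive_did(secret)
claim = f"{did} is over 18".encode()
sig = sign_claim(secret, claim)

assert verify_claim(secret, claim, sig)        # peer accepts the claim
assert not verify_claim(secret, claim + b"!", sig)  # tampering detected
```

The essential property is that verification needs only the claim, the signature and key material, so two peers can establish trust directly, which is what lets SSIDs support peer-to-peer commerce.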
RSK provides an industry-standard protocol for its identity service and is inviting developers from all around the world to create decentralized applications with the help of RSK libraries. By providing an API layer, RSK enables developers to create solutions that are compatible with SSIDs through decentralized domain names.