XLP-Manual Chapter 6. Remix: The XLP Online Platform


Think of the world you live in, and imagine the classroom of the future:

In a world that is becoming increasingly digital, it is essential to teach the skills required to navigate the digital environment as early as possible. A gap is opening between those who are technologically literate and those who are not yet comfortable handling software. The ones on the wrong side of this gap will eventually suffer detrimental consequences in their careers and lives: they will be unable to take part in collaborative work, which limits their ability to grow their knowledge. Already, individuals and institutions lack the tools to create new digital assets or to work with existing digital information, both of which diminish their overall potential. Clearly, students need a way out of this situation.

Imagine a classroom of the future that is modeled after the reality outside the classroom. If the reality outside the classroom is one pervaded by digital tools, so should the classroom be. Remix makes sure this happens. Students no longer study for exams but prepare for the challenges that await them in the future. Remix helps them work in and self-manage teams with tools like Phabricator and Git, while allowing teachers to track student progress and individual contributions (captured using a digital data processing system) to their respective projects using GitLab. Without such digital tools, organizational consciousness cannot be brought to human attention, and organizational learning cannot happen.

Trustworthy computing technologies such as blockchain are integrated into Remix. These ensure immutability and that credit is given where credit is due.

Remix Tools

The Remix platform offers a large array of powerful tools, manageable by anyone and scalable to any size.

These tools will be an important asset for any modern educational institution, with which it can fully develop the intellectual potential of its students and staff. The platform democratizes services that were previously so limited that only established companies could utilize their full potential. These services are now combined into a single platform, allowing any individual or institution to start their digital transformation. Because it does not require continuous online connectivity, the platform can be brought to the farthest corners of the globe, where a new generation of individuals can start fulfilling their computational needs. In short, Remix can bring the tools used by established software companies into any classroom or home, enabling anyone to take part in the future of the digital world.

Together, participants can create new data using tools like Jupyter and OpenModelica, or analyze and optimize existing information using Elasticsearch and TensorFlow. Ultimately, every individual can create or take part in a Digital Publishing Workflow – a cycle going from Data Input to Data Management to Data Publishing, all from their own laptop in one single application. This is Remix.

What is a Microservice?

The central idea behind microservices is that some types of applications become easier to build and maintain when they are broken down into smaller, composable pieces that work together. Each component is continuously developed and separately maintained, and the application is then simply the sum of its constituent components. This contrasts with a traditional, “monolithic” application, which is developed all in one piece.

Applications built as a set of modular components are easier to understand, easier to test, and, most importantly, easier to maintain over the life of the application. This modularity enables organizations to achieve much higher agility and vastly improve the time it takes to get working improvements into production. The approach has proven especially valuable for large enterprise applications developed by geographically and culturally diverse teams<ref>An Introduction to Microservices, Opensource.com, https://opensource.com/resources/what-are-microservices</ref>.

Remix Microservice Overview

Figure 6.1: Remix platform's microservices

Data Input

Data Input describes the different ways through which data may enter the Remix platform. In the Digital Publishing Workflow, Data Input lies between Data Publishing and Data Management, as previously published data can act as input for Data Management.

Generally, data may enter the system from three types of sources, or combinations of them.

Data Creation

Remix comes with microservices with which new data can be created. For instance, OpenModelica enables users to create simulations of complex systems; the data produced by these simulations can then be saved for later use. Another service, Jupyter, allows for coding, solving equations, and visualizing the results, all of which can be shared in real time among different users. Again, all the data created can be saved for later use.

Online Data

Remix also allows users to source data from the internet, such as databases, wikis, or pictures. Again, different microservices are available and integrated for this purpose. Data may be accessed directly from the internet or downloaded first (for instance, to be brought to regions where internet access is sparse or slow). Kiwix, for instance, offers the entire Wikipedia, WikiVoyage, TED Talks, and more as free downloads on its website. This data may also be used as input for Data Creation, subsequently representing a combination of data.

Private Data

Remix enables users to use their own private data as input for Data Management and Publishing. For this purpose, the platform comes with microservices such as Nuxeo, a digital asset library already used by institutions, companies, and individuals to manage their digital assets, and NextCloud, for file storage in the cloud. Users can thus tap into their existing data and use it as input for other microservices to create new data, again representing a combination. Such private data may belong to an individual, an institution, or a company, who can scale up their data storage as required without having to scale the other microservices, thanks to the underlying system architecture.

Data Management

The data management part of Remix utilizes four different tools to perform a number of tasks. In the Digital Publishing Workflow, Data Management lies between Data Input and Data Publishing. The purpose of data management is to load the given information from the different data inputs, then optimize it so that it becomes searchable and produces the best results, which in turn can be presented visually to the user. This is made possible by the ELK stack from Elastic, consisting of Logstash, Elasticsearch, and Kibana, in combination with the machine learning capabilities of TensorFlow. Together, these tools allow users to perform queries on a wide array of data, which in turn can be optimized on any number of characteristics. The following sections describe each tool and how it forms an integral part of the Remix framework.

Logstash: Data extraction, transformation, and loading

Remix comprises a large number of data sources that each produce a large quantity of data, in many different formats. It is therefore necessary to have a tool that can extract, transform, and load the input data into the next step of the process. This is where Logstash is used. Logstash is an open-source, server-side data processing pipeline based on the extract, transform, load (ETL) process.

Each step varies in complexity, as explained in the following sections:

The first job carried out by Logstash is the extraction of data from the different defined sources. Extraction is conceptually the simplest task of the whole process, but also the most important. In theory, data from multiple source systems is collected and piped into the system, later to be transformed and eventually loaded. In practice, however, extraction can easily become the most complex part of the process: it must take data from the different sources, each with its own data organization format, and ensure that the extraction happens correctly so that the data remains uncorrupted. This is where validation is used. The extraction process uses validation to confirm whether the extracted data has the correct values, in terms of what was expected. It works by setting up a set of rules and patterns against which all data can be validated. The provided data must pass validation to ensure that the subsequent transform and load steps only receive proper, manageable data. If validation fails, the data is either fully rejected or passed back to the source system for further analysis to identify improper records, if they exist.

The extracted data then moves on to the data transformation stage. The purpose of this stage is to prepare all submitted data for loading into the end target. This is done by applying a series of rules or functions to ensure that all business and technical needs are met. Logstash does this by applying up to 40 different filters to all submitted data. When filtering is completed, the information is transformed into a common format for easier, accelerated analysis. At the same time, Logstash identifies named fields to build structure from previously unstructured data. At the end of the transformation process, all data in the system is structured and in a common format that is easily accepted by subsequent processes.

The last part of the ETL process is the load phase, which takes the submitted, transformed data and loads it into the end target. Certain requirements defined by the system must be upheld, pertaining to how frequently extracted data is updated and which existing information is overwritten at any given point. Logstash can load data into a number of target systems; Remix, however, only requires that the data reach Elasticsearch.
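To make the extract, transform, load flow concrete, here is a minimal sketch in Python. It is not Logstash itself; the field names, validation rule, input file, and index name are illustrative assumptions, and a local Elasticsearch instance is assumed to be running:

import json
import requests  # assumes Elasticsearch is reachable at localhost:9200

REQUIRED_FIELDS = {"id", "title", "body"}  # illustrative validation rule

def extract(raw_line: str) -> dict:
    # Extract: parse one raw record from a source (here, a JSON line).
    return json.loads(raw_line)

def validate(record: dict) -> bool:
    # Reject records that do not match the expected shape.
    return REQUIRED_FIELDS.issubset(record)

def transform(record: dict) -> dict:
    # Transform: normalize into a common format (trimmed, lower-cased fields).
    return {
        "id": str(record["id"]),
        "title": record["title"].strip().lower(),
        "body": record["body"].strip(),
    }

def load(doc: dict) -> None:
    # Load: index the document into Elasticsearch via its REST API.
    url = f"http://localhost:9200/articles/_doc/{doc['id']}"
    requests.put(url, json=doc, timeout=10).raise_for_status()

for line in open("input.jsonl"):
    rec = extract(line)
    if validate(rec):
        load(transform(rec))
    # invalid records would be rejected or returned to the source system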

Elasticsearch: Search and Optimize

Search and optimization are two key attributes of any data management system. They allow a system to filter away all unwanted data and prioritize the results based on a number of given attributes. Search and optimization are not limited to basic keyword searches; they can serve a wide variety of purposes, from choosing the correct strategy in a game of chess to simulating the trajectory of a moving vehicle. In all these cases, the function combines the available information from the different data sources with machine learning to produce the desired outcome. To achieve this, Remix uses Elasticsearch and TensorFlow. Elasticsearch is a distributed, RESTful search and analytics engine that stores data in a searchable manner. All the data that passes through Logstash eventually ends up in Elasticsearch, where it is structured and analyzed so that users can search based on their chosen parameters. These parameters are used to filter away all unwanted results; what remains is a list of results that are in one way or another linked to the original search criteria. This list is, in turn, handed to TensorFlow.
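By way of illustration, a parameter-driven search against Elasticsearch's REST API might look like the following sketch; the index name and fields are illustrative assumptions, not part of Remix:

import requests

# A filtered full-text search: match on content, filter on a user parameter.
query = {
    "query": {
        "bool": {
            "must": [{"match": {"body": "butterfly flight patterns"}}],
            "filter": [{"term": {"field_of_study": "biology"}}],
        }
    }
}
resp = requests.get("http://localhost:9200/articles/_search",
                    json=query, timeout=10)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])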

TensorFlow is a mathematical library that uses deep neural networks to analyze data. It takes the data selected by Elasticsearch and prioritizes and orders it according to the pre-determined criteria. This gives the user a selected number of results suited exactly to their defined needs.
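The hand-off from Elasticsearch to TensorFlow can be sketched as follows. The model is untrained and the features are invented, so this only illustrates the shape of the re-ranking step, not Remix's actual criteria:

import numpy as np
import tensorflow as tf

# A tiny scoring model: maps a document's feature vector to a relevance score.
# In practice the model would be trained on user feedback; here it is untrained.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Hypothetical features per result returned by Elasticsearch:
# [text match score, recency, author reputation, overlap with user profile]
results = ["doc-a", "doc-b", "doc-c"]
features = np.array([[0.9, 0.2, 0.7, 0.8],
                     [0.5, 0.9, 0.4, 0.3],
                     [0.7, 0.1, 0.9, 0.6]], dtype="float32")

# Score and re-rank the Elasticsearch results.
scores = model.predict(features).ravel()
for doc, score in sorted(zip(results, scores), key=lambda p: -p[1]):
    print(doc, float(score))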

When it comes to searching and optimizing, Remix's key difference compared to other services is that results are based purely on the user. If a user specifies a certain interest or academic field they are studying, the optimization is built with that parameter as a focus point. This opens up focused research in which advertising-based rankings and unwanted results are removed.

Kibana: Data Visualization

In certain scenarios, the outcome of the data management process does not come in the form of links or lines of text. In these cases, the data often needs to go through some form of visualization to become manageable.

This is carried out by Kibana, the last tool in the data management process. Kibana is a data visualization plugin that works with Elasticsearch to provide visualization on top of the indexed content. It takes all the data the user has asked for and provides a visualization where applicable. Kibana can therefore also be seen as part of the data creation aspect of Remix.

Data Publishing

The final aspect of any standard research project concerns the publishing of results and conclusions.

For this reason, the final part of the Remix framework is data publishing. The purpose of this step is to ensure distribution of new data to a wide audience while guaranteeing rightful credit and ownership of published research and findings. To do this, the platform uses two main tools, MediaWiki and Hyperledger, in combination with the machine learning capabilities of TensorFlow.

MediaWiki is a digital publishing tool created by the Wikimedia Foundation. It allows information to be published in a structured and navigation-friendly way. Remix uses MediaWiki to let institutions or individuals create closed or open wiki spaces in which all their information and research can be published. Each publisher creates a distinct name for each newly published article or piece of information. All the information in a given wiki space is then individually connected using the deep neural network capabilities of TensorFlow, as described in the previous section. TensorFlow analyzes each piece of information and carefully links it to other related information. The result is a wiki space full of research articles and other information, combined with existing Wikipedia data, that is fully connected. Furthermore, connections and recommendations of articles can be made based on user preferences. For example, if a user studying biology is doing research on the flight patterns of butterflies, Remix will start creating more links and finding more research articles on that specific topic. In that scenario, it might connect the flight patterns of butterflies to the evolution of airplane wing structures.
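As an illustration of how content published in such a wiki space can be read back out for analysis and linking, here is a minimal sketch against the standard MediaWiki action API. Wikipedia is used as an example endpoint, and the sketch assumes a wiki with the TextExtracts extension installed:

import requests

# Fetch plain-text content of a page through MediaWiki's api.php.
API = "https://en.wikipedia.org/w/api.php"
params = {
    "action": "query",
    "prop": "extracts",
    "explaintext": 1,
    "titles": "Butterfly",
    "format": "json",
}
pages = requests.get(API, params=params, timeout=10).json()["query"]["pages"]
for page in pages.values():
    print(page["title"], page["extract"][:200])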

Ultimately, a person can, through Remix, have access to a deeply interconnected network of research, published articles, and existing data from the internet.

To ensure all information in the system is untampered with and that publishers are rightfully credited, Remix uses trustworthy computing technologies such as Hyperledger. Hyperledger is an open-source, collaborative blockchain technology that ensures transparency and immutability of all data in the system. This, in turn, ensures that any information which has a rightful owner is credited as such. These tools open up a new dimension of the digital publishing process, allowing institutions and individuals to contribute their knowledge to a greater audience.

Groupware

Remix provides Phabricator and GitLab as a platform for participants to collaborate. GitLab is an open-source, web-based Git repository manager with wiki and issue-tracking features, allowing users to upload and share code and digital assets while ensuring no conflicts occur between versions.

Phabricator is a suite of web-based software development collaboration tools, including a code reviewer, repository browser, change monitoring tool, bug tracker, and wiki. Phabricator integrates with Git, Mercurial, and Subversion. Participants can use it for project management and coordination within (and among) teams.

Architecture

The interconnectedness of services in Remix is made possible by the Docker platform. Docker uses container technology to let microservices and other digital assets run in easily replicable sandboxes without interfering with each other. Unlike virtual machines, containers do not require a guest operating system, which makes them more lightweight and allows more or bigger applications to run on a single server or computer.

With Docker container technology, we can redefine all digital assets into three main categories: content data, software data, and configuration data. These categories can be tagged using a hash code (similar to Git or blockchain labels) and can be traded by swapping out “tokens”, i.e. hash IDs of the assets. This allows participants to easily install different software services and start using an asset almost instantly. The end result is that participants and instructors can better manage learning activities and use one consistent namespace to organize all their assets.
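As an illustration of such hash tagging, the sketch below derives a content-addressed tag for an arbitrary file; the file name is illustrative:

import hashlib

# Derive a content-addressed tag for a digital asset, similar in spirit to
# Git object IDs or blockchain labels. Any change to the bytes changes the tag.
def asset_tag(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(asset_tag("lecture-notes.pdf"))  # the asset's "token" / hash ID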

Figure 6.2: Architecture of Remix platform

Hence, a multitude of containers can easily be combined into a single application. This can be done through Docker Compose. While Docker focuses on individual containers, Docker Compose enables scripting the installation of multiple containers that work together to form a bigger application. Microservices in Remix talk to each other to modify and move data from its creation, through its management, to its publishing. At the same time, since the microservices are still housed in their respective containers, any service may be added or removed at any time without damaging the other containers.
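A minimal sketch of this composition idea using the Docker SDK for Python (Docker Compose itself is driven by a YAML file; here the same effect is scripted directly, with illustrative image, network, and container names):

import docker

# Two containerized services joined on one network, Compose-style.
client = docker.from_env()
client.networks.create("remix-net", driver="bridge")

es = client.containers.run(
    "docker.elastic.co/elasticsearch/elasticsearch:7.17.0",
    detach=True, name="elasticsearch", network="remix-net",
    environment={"discovery.type": "single-node"},
)
kibana = client.containers.run(
    "docker.elastic.co/kibana/kibana:7.17.0",
    detach=True, name="kibana", network="remix-net",
    ports={"5601/tcp": 5601},  # Kibana reaches Elasticsearch by container name
)
print(es.status, kibana.status)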

Remix also enables deploying, monitoring, and scaling microservices with Kubernetes, a tool specifically designed for this task. Thanks to their containerization, microservices can be scaled individually and independently of each other, specific to the needs of the user. Matomo (formerly known as Piwik) is an open-source analytics software package (similar to Google Analytics) for tracking online visitors, analyzing important information, and tracking key performance indicators. Remix uses this software to track participants' usage of the platform.
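For example, scaling a single microservice independently might look like the following sketch with the official Kubernetes Python client; the deployment name, namespace, and replica count are illustrative assumptions:

from kubernetes import client, config

# Scale only one microservice, leaving all others untouched.
config.load_kube_config()  # uses your local kubeconfig
apps = client.AppsV1Api()
apps.patch_namespaced_deployment_scale(
    name="jupyter",
    namespace="remix",
    body={"spec": {"replicas": 3}},  # e.g. three Jupyter replicas
)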

Finally, Mesosphere DC/OS acts as the foundation of the system, adding a layer of abstraction between Kubernetes and Docker and the user's underlying OS. This operating system for datacenters works especially well with microservices, takes care of resource allocation, and makes the system fault-tolerant.

In summary, Remix is a platform that is lightweight, modular, and easy to install, use, and scale, enabling everyone to make use of the powerful microservices it includes. The platform achieves this using a three-part structure, with the microservices as the highest layer of abstraction, followed by the combination of Kubernetes and Docker, and completed by Mesosphere DC/OS.

Education on the Blockchain

The blockchain is a decentralized technology that acts as a digital ledger for recording transactions between different entities, both human and machine. Originally devised for the digital currency Bitcoin, the blockchain is now being used for many other purposes, including in education. Below is our guide to making education fairer, deeper, and more accessible to all by using the blockchain.

“The blockchain is an incorruptible digital ledger of economic transactions that can be programmed to record not just financial transactions but virtually everything of value.” - Don & Alex Tapscott, authors of Blockchain Revolution (2016)

Historical Context

Like many phenomena, over time education has moved from a centralized cathedral model to a more decentralized bazaar model<ref>From Open Programming to Open Learning: The Cathedral, the Bazaar, and the Open Classroom</ref>:

Education 1.0: Traditional Education

Figure 6.3: Education 1.0: Centralized monoliths

Traditional education consists of physical university campuses, with students attending classes on-site. Students do coursework, lab work, theses, and exams, and these are graded by professors, usually at the end of the semester. In Education 1.0, students are generally passive consumers of education, receiving information from academic staff<ref>Higher Education 1.0 to 3.0 and Beyond, Gilly Salmon, 27 March 2017</ref>.

Because traditional education is based around a physical space and a limited number of professors, students are limited by number and location, and financial and academic requirements for entry can be very high<ref>How much would it really cost to write off student debt?, Jack Britton, Carl Emmerson and Laura van der Erve, 14 September 2017</ref><ref>Is College Really Harder to Get Into Than It Used To Be?, Jacoba Urist, The Atlantic, 4 April 2014</ref>.

Because assessment is done by humans, fraud and bias can creep in from both students and faculty. Indeed, a large-scale study in Germany found that 75% of university students admitted to at least one of seven types of academic misconduct (such as plagiarism or falsifying data) within the previous six months<ref>Patrzek, J.; Sattler, S.; van Veen, F.; Grunschel, C.; Fries, S. (2014). “Investigating the effect of academic procrastination on the frequency and variety of academic misconduct: a panel study”. Studies in Higher Education: 1–16.</ref>. Some examples of dishonest behavior include:

  • A woman using an impostor to take a college English test<ref>Chinese woman admits using impostor to take US college English test, amid crackdown on fraud by foreign students, South China Morning Post, 3 April 2018</ref>
  • A Japanese medical university manipulating entrance exam scores to limit the number of women admitted<ref>Japanese medical university admits to discriminating against female applicants, Science Magazine, Aug 8, 2018</ref>
  • Thousands of UK nationals buying fake degrees from “diploma mills” in Pakistan<ref>“Staggering” trade in fake degrees revealed, BBC News, 16 January 2018</ref>
  • Nearly 50,000 UK university students caught cheating over three years, amounting to a so-called “plagiarism epidemic”<ref>UK universities in “plagiarism epidemic” as almost 50,000 students caught cheating over last 3 years, The Independent, 4 January 2016</ref>

Summary

In Education 1.0:

  • A student browses universities within physical proximity and signs up for a course, typically for several years
  • The student physically goes to a classroom and sits in front of a professor
  • The student writes a thesis and sits an exam
  • The professor grades the thesis and exam
  • The professor awards a grade to the student

Education 2.0: MOOCs

Education 2.0 moved education online, with Massive Open Online Courses, offered by sites such as Coursera, Udacity, and EdX. These courses are open to anyone around the world, and students are given regular feedback on their performance via the system.

Figure 6.4: Education 2.0: Networked monoliths

Education 2.0 is the age of networked monoliths. Whereas previously universities offered courses only to their own students, now they offer courses on MOOC platforms too. However, these MOOC platforms are themselves centralized monoliths. Students may take courses on one or more MOOC platforms.

Typically, students sign up for a MOOC with their existing digital credentials, like a Facebook login or email address. While convenient, these are not solid proof of identity in the way that a secure digital identity (like a public/private key pair) would be. At best, some services offer biometric identity verification via webcam<ref>Verified Certificates: How does verification work?</ref>.

Interaction between students (if there is any at all, which is not often<ref>How widely used are MOOC forums? A first look, Jane Manning, 18 July 2013</ref>) happens via the MOOC's forum and is typically centred around problem-solving and support, rather than content creation, project-based learning, and digital publishing. After the course finishes, students disband (again, if they ever banded together) and go their separate ways. This limits the possibility for collaboration and group work.

Courses are typically assessed by algorithm [citation needed], with students answering multiple-choice questions or writing a program that generates a specific output. This limits many MOOCs in terms of flexibility and creativity.

Summary

In Education 2.0:

  • Student signs up for a course on Coursera, EdX or another platform offering educational courses, typically for a few weeks or months
  • Student works from wherever they are in the world
  • Student coursework is graded by the MOOC system and assessors working for the MOOC
  • MOOC system awards grade to student

But how do we ensure the student is actually the one doing the work?

The result is a system that is physically decentralized, but still built around a central MOOC platform.

The Future: Education 3.0: Education on the Chain

With Education 3.0 we can modularize, granularize, democratize and decentralize education further:

  • The line between educator and student gets blurred, as anyone can directly offer courses or tutoring and get paid for it
  • Students can learn from anywhere in the world, from a single lesson to a series of courses
  • Students and educators can share the materials they create, instead of them languishing on a hard disk
  • Educators can spend less time grading and more time doing valuable tasks
  • Qualifications can be verified and trusted
Figure 6.5: Education 3.0: Mass P2P learning

We do this by leveraging the power of the blockchain, specifically Ethereum.

Ethereum is a blockchain that enables anyone to create decentralized applications and smart contracts<ref>Ethereum smart contract development: build blockchain-based decentralized applications using Solidity, Mayukh Mukhopadhyay, 2018</ref>. A smart contract is a legal contract in the form of a computer program that enables secure, verified transactions between two or more entities<ref>Attack of the 50 Foot Blockchain: Bitcoin, Blockchain, Ethereum & Smart Contracts, David Gerard, 2017</ref>. This program is encoded on the Ethereum blockchain and typically takes cryptocurrency as input. Parties sign the contract using their digital identity, typically in the form of a private/public key pair.

Choosing a Course

The blockchain allows us to decentralize the MOOC concept further, enabling decentralized marketplaces for courses where content creators can offer their own curricula, with a built-in rating system giving users confidence. Students anywhere in the world can browse this marketplace and elect to take a course by signing a smart contract.

Since anyone can be a course provider in education 3.0, the line between educators and students blurs considerably. Anyone can offer their materials and assistance (via one-on-one or one-to-many personalized tuition) on this platform. Participants browse the offerings, and find the course that best suits them. They can take the whole course, just certain units, or specific guidance from the course creator.

Because less human effort is required in assessment, smaller machine-graded courses can easily be offered. These “micro-qualifications” cover specific niche skills or disciplines, for example the React JavaScript framework<ref>A Review of Udacity's React Nano Degree, Bilal Tahir, 18 October 2017</ref>.

Course Signup

During course signup, students read through the course constitution, which outlines their rights and responsibilities. This is both a human-readable document and embodied digitally in the form of a smart contract. Students sign this smart contract with their digital identity to show they have read their rights and responsibilities for the course.
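A minimal sketch of what signing the constitution with a digital identity might look like, using the eth_account library. The constitution text and the freshly generated key are illustrative only; in practice a student would use an existing identity:

from eth_account import Account
from eth_account.messages import encode_defunct

# A student signs the course constitution with an Ethereum-style key pair,
# producing a signature anyone can verify against the student's address.
student = Account.create()  # demo key; a real student reuses their identity

constitution_text = "Course constitution v1: rights and responsibilities..."
message = encode_defunct(text=constitution_text)
signed = Account.sign_message(message, private_key=student.key)

# Anyone can recover the signer's address to verify who signed:
assert Account.recover_message(message, signature=signed.signature) == student.address
print("signed by", student.address)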

Coursework

Depending on the type of course, different types of coursework may be done:

Individual Learning

Individual learning may consist of watching videos, answering multiple-choice questions, and writing code that creates a specific output, similar to many programming courses in MOOCs. Students can offer support to each other via the platform's forum and social features.

Group Learning

Group learning is more project-based. Students do coursework on digital publishing platforms like MediaWiki and WordPress, which they sign into using their digital identity, ensuring any data they create is tied to them. These services have open APIs, so their data can be extracted and stored on a blockchain (either a private one or the main chain) to ensure data integrity<ref>In future, students may be able to switch entirely to DApps (decentralized apps) to perform their coursework, but these apps have not been built yet.</ref>.
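A minimal sketch of this idea: pull a coursework page through the MediaWiki API and compute a digest that could be anchored on a blockchain to prove the data has not been tampered with. The wiki URL and page title are hypothetical:

import hashlib
import json
import requests

# Extract the latest revision of a student's coursework page via the open API.
API = "https://wiki.example.org/w/api.php"  # hypothetical course wiki
params = {"action": "query", "prop": "revisions",
          "rvprop": "content|timestamp", "rvslots": "main",
          "titles": "Student:Alice/Project", "format": "json"}
data = requests.get(API, params=params, timeout=10).json()

# A deterministic digest of the coursework, suitable for on-chain anchoring.
digest = hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()
print("anchor this on-chain:", digest)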

Crowd Learning

Any student can create a portfolio on a global student expertise exchange platform (similar in concept to Upwork) and seek out other students around the world who need their skills. This creates a marketplace for students to share knowledge and expertise, and enables large-scale crowd learning.

Many students can cooperate to create a single digital asset, on which they will be jointly assessed, and they can then share this asset (or subsections of it) on a decentralized digital asset sharing platform.

Assessment

For assessment, student-created data is gathered and stored on the blockchain to prevent tampering. It is then sorted using tools like the Elastic Stack, then processed, checked for plagiarism, and assessed by machine learning (with tools like TensorFlow). It can then be aggregated, ranked, and visualized for professors to perform their own assessments if required.

Grading and Qualification

As part of the assessment, a student’s qualification will be signed by the institution (for example, Tsinghua University) and stored securely on the main blockchain, allowing access by employers, institutions and anyone else to whom the student grants access via app or other means.

By tying every step of the course to a student's verifiable digital identity and assessing by machine instead of by humans, we greatly reduce fraud, bias, and human error in the system, and can scale education to thousands of learners. Qualifications can be verified and trusted by students, universities, and employers.

Figure 6.6: Assessing and grading students

Access to these qualifications can be encoded as a QR code, which could be embedded in a badge via the OpenBadge standard (similar to Scout badges, but digital). A set of skills can form a portfolio accessible via the web or a dedicated app. As we transition to a model of lifelong learning, we can foresee people earning these badges throughout their lives, and professional or job-hunting networks (like LinkedIn or Monster.com) may one day offer a search-by-badge function to find the best candidates.
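Generating such a QR code is straightforward; here is a minimal sketch with the Python qrcode library (the verification URL is hypothetical):

import qrcode

# Encode a verification URL for a signed qualification as a QR code image
# that can be embedded in an OpenBadge-style badge.
img = qrcode.make("https://badges.example.org/verify?cert=0x9f86d081")
img.save("qualification-qr.png")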

Figure 6.7: Example digital badges

Technical Analysis

Infrastructure

Education 3.0 is primarily digital: since everything boils down to zeros and ones, automation and computation can be applied to create efficient, verifiable workflows.

Software Stack

At present, all of the software stack we have used in XLP consists of traditional server-based microservices; Decentralized Apps (DApps) are still some way off when it comes to capabilities. The stack includes:

  • Digital media creation tools like MediaWiki, WordPress, Jupyter Notebook, etc.
  • Tools to extract and analyze that data and write it to a blockchain: Logstash, Elasticsearch, TensorFlow
  • Tools to visualize that data for human assessment (if required): Kibana
  • A front-end to tie all of this together

Smart Contracts in Depth

We use smart contracts throughout the whole process outlined above:

  • Educators putting their courses on the course marketplace
  • Students electing to take courses
  • Educators assigning grades to students

In addition, we use digital signatures to prove ownership and work on certain assets:

  • Courses created by educators
  • Coursework done by students
  • Qualifications, signed by both student and educator

Signing a Smart Contract

The first contract signed by students is the constitution, which outlines their rights and responsibilities. To do this they need to set up a digital identity, typically a public/private key pair using the PGP standard:

OpenPGP is the most widely used encryption standard in the world. It is based on PGP (Pretty Good Privacy), originally developed by Phil Zimmermann. The OpenPGP protocol defines standard formats for encrypted messages, signatures, and certificates for exchanging public keys. PGP & GPG is an easy-to-read, informal tutorial for implementing electronic privacy on the cheap using the standard tools of the email privacy field: commercial PGP and non-commercial GnuPG (GPG)<ref>PGP & GPG: Email for the Practical Paranoid, Michael W Lucas, 2006</ref>.

PGP uses two keys for encryption: a public key to encrypt messages, and a private key to decrypt them. Bruce Schneier writes<ref>Applied Cryptography: Protocols, Algorithms and Source Code in C, Bruce Schneier, 2017</ref>:


“Putting mail in the mailbox is analogous to encrypting with the public key; anyone can do it. Just open the slot and drop it in. Getting mail out of a mailbox is analogous to decrypting with the private key. Generally it's hard; you need welding torches. However, if you have the secret (the physical key to the mailbox), it's easy to get mail out of a mailbox.”

A file encrypted with PGP typically looks like this:

-----BEGIN PGP MESSAGE-----

Version: GnuPG v1.4.0 (FreeBSD)

hQIOA9o0ykGmcZmnEAf9Ed8ari4zo+6MZPLRMQ022AqbeNxuNsPKwvAeNGlDfDu7
iKYvFh3TtmBfeTK0RrvtU+nsaOlbOi4PrLLHLYSBZMPau0BIKKGPcG9162mqun4T
6R/qgwN7rzO6hqLqS+2knwA/U7KbjRJdwSMlyhU+wrmQI7RZFGutL7SOD2vQToUy
sT3fuZX+qnhTdz3zA9DktIyjoz7q9N/MlicJa1SVhn42LR+DL2A7ruJXnNN2hi7g
XbTFx9GaNMaDP1kbiXhm+rVByMHf4LTmteS4bavhGCbvY/dc4QKssinbgTvxzTlt
7CsdclLwvG8N+kOZXl/EHRXEC8B7R5l0p4x9mCI7zgf/Y3yPI85ZLCq79sN4/BCZ
+Ycuz8YX14iLQD/hV2lGLwdkNzc3vQIvuBkwv6yq1zeKTVdgF/Yak6JqBnfVmH9q
8glbNZh3cpbuWk1xI4F/WDNqo8x0n0hsfiHtToICa2UvskqJWxDFhwTbb0UDiPbJ
PJ2fgeOWFodASLVLolraaC6H2eR+k0lrbhYAIPsxMhGbYa13xZ0QVTOZ/KbVHBsP
h27GXlq6SMwV6I4P69zVcFGueWQ7/dTfI3P+GvGm5zduivlmA8cM3Scbb/zW3ZIO
4eSdyxL9NaE03iBR0Fv9K8sKDttYDoZTsy6GQreFZPlcjfACn72s1Q6/QJmg8x1J
SdJRAaPtzpBPCE85pK1a3qTgGuqAfDOHSYY2SgOEO7Er3w0XxGgWqtpZSDLEHDY+
9MMJ0UEAhaOjqrBLiyP0cKmbqZHxJz1JbE1AcHw6A8F05cwW

=zr4l

-----END PGP MESSAGE-----

In the context of email, the exchange works like this (a code sketch follows the list):

  • Alice and Bob agree on a public key algorithm.
  • Bob sends Alice his public key.
  • Alice encrypts her message with Bob’s public key and sends it to Bob.
  • Bob decrypts Alice’s message with his private key.
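As a concrete illustration of the steps above, here is a minimal sketch using the python-gnupg library (which wraps the gpg binary). The home directory, email address, and passphrase are illustrative, and the keys generated here are throwaway demo keys:

import os
import gnupg

# Fresh GnuPG home directory for the demo, so we don't touch real keyrings.
os.makedirs("/tmp/pgp-demo", exist_ok=True)
gpg = gnupg.GPG(gnupghome="/tmp/pgp-demo")

# Bob generates a key pair and shares the public half with Alice.
bob = gpg.gen_key(gpg.gen_key_input(name_email="bob@example.org",
                                    passphrase="bobs-secret"))

# Alice encrypts with Bob's public key...
encrypted = gpg.encrypt("Meet at noon. -- Alice", bob.fingerprint)
print(str(encrypted).splitlines()[0])  # -----BEGIN PGP MESSAGE-----

# ...and only Bob's private key (plus passphrase) can decrypt it.
decrypted = gpg.decrypt(str(encrypted), passphrase="bobs-secret")
print(decrypted.data.decode())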

Writing a Smart Contract

Smart contracts are often written in the Solidity programming language, although LLL and Serpent are also available. They are then run on the Ethereum Virtual Machine, a stack-based, Turing-complete computer implemented on the blockchain.
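To make this concrete, here is a minimal, hypothetical sketch using the web3.py library to call a grading contract that has already been compiled from Solidity and deployed. The node URL, contract address, ABI, and function name are all placeholders, and a local development node with unlocked accounts is assumed:

from web3 import Web3

# Connect to a local development Ethereum node.
w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))

# Hypothetical single-function ABI for an already-deployed grading contract.
abi = [{"name": "awardGrade", "type": "function",
        "inputs": [{"name": "student", "type": "address"},
                   {"name": "grade", "type": "uint8"}],
        "outputs": [], "stateMutability": "nonpayable"}]
grades = w3.eth.contract(
    address="0x0000000000000000000000000000000000000001",  # placeholder
    abi=abi)

# The educator's node account records a grade for a student on-chain.
tx = grades.functions.awardGrade(w3.eth.accounts[1], 95).transact(
    {"from": w3.eth.accounts[0]})
print("transaction hash:", tx.hex())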


Footnotes

<references />

