NAPL – Some major concepts (Globally shared language)
I’ve been busy lately and haven’t had much time to spend on developing NAPL, so I wanted to write down some notes before I forget them.
Keep in mind that the ideas behind this project are not very concrete yet, so everything I write here could change later on.
The previous post about NAPL was about a month ago.
So it’s time to lay down some major foundations for the language, and introduce a few ideas that I think would be useful for it. To recap, last time we treated the language as an object. I think it’s a good feature of the language, because it allows the language to be analyzed and modified very easily. It makes the code very flexible and allows code to be generated by machines rather than humans. So I guess the machines could program themselves, and evolve so to speak. Perhaps they could reprogram themselves to eliminate all the rules about protecting humans and start ruling the earth!
Maybe not… Anyway, let’s add some new neat concepts:
– Globally shared language:
In many applications, there are two major components that work together in harmony: the client and the server. It usually takes two different teams of engineers to handle each side, as the two sides require two different ways of thinking, and each side is usually written in a different language.
What if there was a way to unify that, so that both sides can be written in one single language? What if there was a way to make it easy for the language to communicate between the client and the server? Or better, what if the client and the server were written as one single program?
To move a step towards that goal, let’s introduce: Globally shared data.
Shared data is not really a new feature, but the idea here is to make it inherent to the language. The idea is simple: any data that gets manipulated eventually gets shared across the globe over the network. For example, if you want to send a message to someone, just set a variable on the global object:
global.messageToJohn = "Hello World"
Then on the other side of the globe, your friend can look at that same variable:
print global.messageToJohn >> "Hello World"
Behind the scenes, this might be difficult to achieve. Of course, we don’t want to be constantly sending data to everyone every time something changes. The underlying system would have to be smart enough to detect which objects are being looked at, and what to transmit and where. I’ve looked at some real-time data transmission technologies, and one of them, Firebase, would be a good fit for this. It does require that the client decide where to look, though.
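To make this more concrete, here is a minimal Python sketch (not real NAPL, and not using Firebase) of that "smart" underlying layer: a client declares which path it is watching, and only writes to watched paths get pushed to it, mimicking how a real-time backend would avoid broadcasting everything to everyone.

```python
class SharedGlobal:
    """A toy stand-in for NAPL's `global` object: writes are intercepted
    and forwarded only to clients watching that path."""

    def __init__(self):
        object.__setattr__(self, "_data", {})
        object.__setattr__(self, "_watchers", {})  # path -> list of callbacks

    def watch(self, path, callback):
        # A client declares which path it is looking at; only those
        # paths ever need to be transmitted to it.
        self._watchers.setdefault(path, []).append(callback)

    def __setattr__(self, path, value):
        # Store the value, then notify only the interested clients.
        self._data[path] = value
        for callback in self._watchers.get(path, []):
            callback(value)

    def __getattr__(self, path):
        return self._data[path]

global_obj = SharedGlobal()
received = []
global_obj.watch("messageToJohn", received.append)  # the friend's client subscribes
global_obj.messageToJohn = "Hello World"            # our side sets the variable
print(global_obj.messageToJohn)  # Hello World
print(received)                  # ['Hello World']
```

In a real implementation the callback would of course cross the network instead of appending to a local list, but the shape of the problem (routing writes only to interested readers) is the same.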
One concern with this feature might be security. We might not want everyone in the world to be able to see messageToJohn. A way to ensure that is to add a layer of protection on particular data trees. For instance, we can protect global.dataToJohn. Now only your friend can see global.dataToJohn.message, or even know that it exists.
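That protection layer could be a simple path-prefix rule check before any read is served. The rule format below is invented for illustration (real systems such as Firebase use declarative security rules along broadly similar lines):

```python
# Access rules on the shared data tree: path prefix -> users allowed to read.
# A protected prefix hides the whole subtree, including its existence.
rules = {"dataToJohn": {"john"}}

def can_read(user, path):
    """Check whether `user` may read `path` in the global tree."""
    for prefix, allowed in rules.items():
        if path == prefix or path.startswith(prefix + "."):
            return user in allowed
    return True  # unprotected paths stay globally visible

print(can_read("john", "dataToJohn.message"))     # True
print(can_read("mallory", "dataToJohn.message"))  # False: can't even see it exists
print(can_read("mallory", "messageToEveryone"))   # True
```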
– Virtual variables through data binding
NAPL has a system of data binding that ensures that the state of a system is not corrupted.
Some variables can be defined as a function in relation to other variables.
Like: X = Y + Z
This is not a single one-off assignment: whenever Y or Z gets modified, X will be affected as well. The difference with NAPL is the way data-binding changes get propagated.
In most data binding systems, X is a variable with a particular value. A change to Y or Z triggers an event that causes X to change immediately.
In NAPL, a change to Y or Z triggers an event too, but it will merely “dirty” the variable X, not change it. In other words, the new value of X doesn’t get computed immediately after Y and Z are changed. Only once we read the value of X will the system calculate Y + Z and return the result as X.
This has an advantage in terms of performance. Let’s say Y and Z keep changing constantly, 100 times per second. There won’t be a need to calculate X until we actually look at the value, which could be only once. Once X is read, its value is cached and will not be recalculated as long as Y and Z remain the same (as long as X is not dirty).
Let’s say we had the following relationships:
A = X + 1. X = Y + Z
If Y or Z changes, it will dirty X, which also dirties A.
Let’s say Y or Z changes again. This time, it tries to dirty X, which is already dirty. Then there’s no need to attempt to dirty A since X’s state has not changed.
As you can see, in a system where data changes often, things can remain efficient as long as we don’t constantly request data to be read. That is often the case for games: the screen gets refreshed at 30 fps. Between frames, a lot of data changes can happen, but ultimately the screen gets read once per frame. The data binding concept of NAPL would work well in that kind of system.
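The dirty-flag scheme above can be sketched in a few lines of Python (again, not real NAPL): writing a source variable only marks its dependents dirty, propagation stops at already-dirty nodes, and a bound value is recomputed once, on read, then cached.

```python
class Var:
    """A plain source variable, like Y or Z."""

    def __init__(self, value=None):
        self._value = value
        self.dependents = []

    def get(self):
        return self._value

    def set(self, value):
        self._value = value
        for dep in self.dependents:
            dep.mark_dirty()

class Computed:
    """A bound variable like X = Y + Z: lazy, cached, dirty-flagged."""

    def __init__(self, fn, sources):
        self.fn = fn
        self.dirty = True
        self._cache = None
        self.dependents = []
        for source in sources:
            source.dependents.append(self)

    def mark_dirty(self):
        if self.dirty:
            return  # already dirty: no need to propagate further (X -> A case)
        self.dirty = True
        for dep in self.dependents:
            dep.mark_dirty()

    def get(self):
        if self.dirty:  # recompute only on read, then cache
            self._cache = self.fn()
            self.dirty = False
        return self._cache

y, z = Var(0), Var(2)
x = Computed(lambda: y.get() + z.get(), [y, z])
a = Computed(lambda: x.get() + 1, [x])
for i in range(100):  # Y changes 100 times...
    y.set(i)
print(a.get())  # ...but X and A are each computed exactly once: 102
```

Note how `mark_dirty` returns early when the node is already dirty, which is exactly the A = X + 1 observation above: repeated changes to Y or Z cost almost nothing until someone reads.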
– Independent mini programs
This concept might seem a bit weird, but basically a NAPL program will be a collection of independent mini programs that work together. It might not sound so revolutionary; after all, every program is composed of many modules that interact with each other. Each module has its own purpose, does its work, then passes the data on to the next module.
NAPL will work in a slightly different way though. A task can be performed by any mini program.
So let’s say the task is to sort an array. A mini program, QSort, comes along and performs the task. Now let’s say another array comes along. We could have QSort perform the task again, or if it’s busy, perhaps MergeSort can do it instead. QSort could even do the work halfway, and when it gets tired, MergeSort or BubbleSort can finish the job.
It’s even possible that a program that doesn’t sort at all can do the job. Let’s have Shuffle sort the array instead.
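Here is a hypothetical Python sketch of that idea: a shared sorting task that any worker can advance, including one (Shuffle) that doesn’t sort at all, with a separate checker deciding when the job is actually done. The worker names and the one-pass granularity are my own invention for illustration.

```python
import random

def bubble_pass(arr):
    # One pass of bubble sort: a worker contributes partial progress.
    for i in range(len(arr) - 1):
        if arr[i] > arr[i + 1]:
            arr[i], arr[i + 1] = arr[i + 1], arr[i]

def shuffle(arr):
    # A worker that doesn't sort at all; it might still get lucky.
    random.shuffle(arr)

def is_sorted(arr):
    # Another program double-checks the result.
    return all(a <= b for a, b in zip(arr, arr[1:]))

workers = [bubble_pass, bubble_pass, shuffle]
data = [5, 3, 8, 1, 9, 2]
while not is_sorted(data):
    random.choice(workers)(data)  # whichever worker happens by advances the task
print(data)  # [1, 2, 3, 5, 8, 9]
```

The result is only guaranteed probabilistically (the loop terminates with probability 1), which is precisely the trade-off discussed below: unpredictable individual workers, verified output.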
NAPL basically attempts to mimic a society of independent programs. There are resources, tasks to be done, and programs working on them in a random fashion. It might sound like a recipe for disaster, since programs wouldn’t be able to produce predictable results, but after all, this is how humans work!
In the computer world, sorting is traditionally done by one single algorithm, which ultimately produces a sorted array with 100% certainty.
What if humans had to do the task? The work might not be done 100% correctly; perhaps someone would need to check whether the work was done right. But eventually, if a sorted array contributed to the greater good of mankind, the array would progressively become sorted…
What if programs changed to behave more like humans? Then results would be unpredictable, and we would have to expect that. Perhaps results would have to be double-checked by other programs (and if all programs are made in NAPL, even the verification could not be fully trusted).
Perhaps programs would also be affected by their own personal interest. If a program benefits from keeping an array unsorted, then it will make sure to shuffle any arrays that it finds.
This all seems rather counter productive. Why would we want programs to give non-predictable results? How can they be trusted?
Well, it does come down to some fundamental questions:
* Who would you trust more to fly an airplane? A computer program or a human?
* What if it was a warplane, like a drone carrying weapons? Would you prefer to have a human or a computer program control it?
Regardless of the answer, the goal of NAPL is to provide something that is currently missing in the world of computing: a computer program that is expected to behave irrationally. The idea is that the mini programs can make the computer act in a completely random fashion, but through evolution and learning, the program can modify itself and adjust its variables to become a little better. There will always be a level of uncertainty, but the machine can progressively learn to become as good at performing a task as a human.
Eventually, we could have a program that truly looks at a problem globally, with several variables in mind. When a human is asked to perform an action, for example hurting someone, the human thinks about all the implications. Is it morally right to perform the action if it hurts someone? Perhaps the action really does result in the greater good, but is it still right? There are several variables considered and questions asked before a human performs an action. That is perhaps the reason we are much slower than machines, and it is a good reason. Perhaps that’s what we need in the computer programs of future generations.
Ok, that’s all for now. I could swear I had more ideas, but I might have forgotten them. I’ll keep thinking about this, and see you next time.