Human Operator Overloading

February 21, 2012

I believe that world power is centralized, and that this power is hidden, possibly ocean-dwelling.

The basic premise of my theory is that INFORMATION is the most sensitive thing going around. In other words, the power gradients in our world closely mirror the information gradients that must exist.

Even in this age of the Net, info gradients exist in two ways: gateway monopolies as in Google, and centralized user interfaces as in static forums (of which blogs, social networks etc. are specialized derivatives). Both command very dominant shares (85-90%), resulting in very steep gradients.

Now imagine the situation when people had literally no means of communication, let alone long-distance communication.

This logic alone predicts huge power gradients. Over thousands of years, power not only centralized, it abstracted itself completely.

Complete abstraction means that no person in the lower known layer can prove the existence of the upper hidden layer, because all applications of power are carried out via OPERATOR OVERLOADING.

Operator overloading (as it applies here) is the art of achieving exactly the same chess move (for instance) using only the smallest necessary subset of the trillions of strategies a computer has the power to model.

Operator overloading is the principal mechanism through which untraceable interference takes place, because the person needs to provide only a proxy for the brute force of the computer.
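For reference, here is what the term means in its original programming sense: a minimal Python sketch (class and names are mine, not from the post) where the caller keeps writing the same familiar `+`, while the behaviour behind it has been silently redefined.

```python
# Literal operator overloading in Python: the "+" the caller writes stays
# the same, but the behaviour behind it is redefined out of sight.
class Move:
    """A chess-like move; purely illustrative."""
    def __init__(self, square):
        self.square = square

    def __add__(self, other):
        # The familiar "+" now hides arbitrary logic the caller never sees.
        return Move(self.square + "-" + other.square)

combined = Move("e2") + Move("e4")
print(combined.square)  # e2-e4
```

The point of the analogy: the visible operation looks ordinary, and only whoever defined `__add__` knows what it actually does.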

The overloading mechanisms have been refined over time to a point where we can be completely free to do things and still remain completely overloaded.

This is because we can only project a very small set of consequences of our actions. In other words, if someone is overloading our secondary / tertiary consequences, we are unlikely to ever find out, let alone counter it.

Let’s try to figure out how operator overloading works by taking the example of Google:

It is sad that the Net has an ethos of “public access”. In other words, we welcome all (even bots) to visit our site. More sadly, bots are machines which never get bored or tired. But most sadly, bots can make an exact replica of our site.

So essentially Google IS the Net; why bother with the fragments? Of course this works only for the non-transactional stuff, but wait, don’t price comparison bots kill e-commerce? Then the only thing Google can’t do is duplicate the interactivity. You get the picture.

If we extrapolate these info and power gradients, it becomes plausible that the upper hidden layer now tracks everything, models everything, figures out points of maximum leverage, then interferes only at those points to get the maximum operator overloading for a given amount of interference (untraceable interference, mind you).

Some discussion on this topic can be found on Quora here: -prove-it-and-why/answer/Panjwani-Ajay

Search Quora for “operator overloading” to find more discussion on the same topic.


From → Conspiracy

  1. The same concept cast in a different mould

    lets take three people a, b and c

    lets say person a knows exactly where all the land mines really are and he wants person b to avoid these but person c to walk over these

    one way would be to show the full map to b and a full (but fake) map to c

    this solution is suboptimal in two ways: a has to declare full knowledge, and a has to assume that b and c would deduce the course of action a wishes they would take

    a better solution for a would be to back calculate from their decision making skills using game theory then show them only the most sketchy map that has just enough info to trip their decisions on the right options

    now look at the conditions that would make such a setup actually play out (1) a must know decision making processes of b and c (2) b and c should never discuss their maps with each other

    here it becomes a question of faith, do you believe that a exists and harbours such a bias

    i, for one, believe that a exists solely because for humans the fight for supremacy must have taken place thousands of years ago and what we see today are simply the long term effects of that unwavering supremacy

    over the course of these thousands of years, the benchmarking rate kept on increasing, i.e. each successive generation of c kept getting more and more suspicious of the validity of their maps vis-a-vis those of b

    but even today in this age of the internet where almost any person is able to benchmark against any other person from the remotest part of the world, no one really suspects that there is an a and that their maps could actually have been messed with

    even so, a would have had to really think up some pretty nifty tricks to keep his gig from crashing

    while a’s bag of tricks might be pretty diverse, there is a common theme that unites these, ie how to make 90% of b types cross the minefield in one piece while ensuring only 10% of c types achieve the same

    the immediate concern is: if c types are scoring 10 and b types are scoring 90, why dont c types revolt or fight or compete, anything to equalize with b? the answer is that all are unaware of a and a’s bias

    the trick is this: what if the buzz around the minefield is that you walk one step and make a 1000 bucks, then rest for however much time

    because then it would be a simple matter to blame c casualties on their greed

    now lets deal with the real world: we still have a, b and c but no land mines (thank god)

    here the aim of long term supremacy a is again the same, ie how to maintain a steep attribution gradient between b and c while maximizing the illusion of justice, even in these times of hyper benchmarking

    please note that attribution is a far more complex concept than achievement

    this is because each achievement is unique in its own way

    a topper in a city school has a higher achievement than a topper in a village school, but that alone cannot dismiss the village student’s attribution because his achievement was won under far direr circumstances

    now imagine how tough a’s job really is

    a has to achieve an attribution gradient (a steep one at that) in the face of all the hyper benchmarking going on

    most of my previous comments keep talking of reverse symbolism because thats the only way a can achieve attribution gradient while creating an illusion of justice

    reverse symbolism grew to its zenith in the age of mass media esp television but has now started to wane a bit as we get into decentralized media consumption

    reverse symbolism is simply about showing that b is suffering and c is enjoying so that if decentralized decision making redistributes from c to b, no one thinks that to be necessarily a wrong thing to do

    now lets come to the crux of the matter: how indebted are we to each other without realizing it

    i would say that thanks to a and his bias, b are heavily indebted to c

    the reason c dont ask for payback is because they dont know about this debt which in turn is because they dont know of a and esp of a’s bias
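The a/b/c setup in the comment above can be reduced to a toy simulation. This is a sketch under my own assumptions: the comment names no mechanics beyond “show each reader a map that trips their decision”, so the map construction and the greedy walking rule here are mine.

```python
import random

random.seed(0)

# Toy version of the a/b/c minefield: 'a' knows the true minefield and
# shows b and c only as many hints as are needed to tip their choices.
FIELD = list(range(10))          # squares 0..9
MINES = {2, 5, 7}                # only 'a' knows these

def sketchy_map(favour: bool):
    """a's trick: hint at squares depending on who is looking.
    A favoured reader (b) is steered to safe squares, an unfavoured
    one (c) to mined squares; the map never reveals the mines."""
    if favour:
        return [s for s in FIELD if s not in MINES]   # b's hints: all safe
    return [s for s in FIELD if s in MINES]           # c's hints: all mined

def walk(hints):
    """b and c use the same greedy rule: step on a hinted square."""
    step = random.choice(hints)
    return step not in MINES      # True = survived

b_survives = walk(sketchy_map(favour=True))
c_survives = walk(sketchy_map(favour=False))
print(b_survives, c_survives)    # True False
```

Note that both walkers run identical decision logic; the asymmetry lives entirely in the maps, which is the comment’s point about a’s bias being invisible to b and c.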
