When two non-linearities collide

What happens if exponential growth of productivity continues?

Most tech managers will laugh at the idea of exponential productivity growth. However, it is quite achievable. OK, OK! For now, let us assume it is possible to take steps that deliver 20%-50% productivity improvements year over year. [Later I will charge a consultancy fee if you want to know how :-)]

On four separate occasions, I have experienced that while such a streak of improvements is possible, it is not practical to sustain exponential growth for longer than 3 or 4 steps. Let me rephrase for better understanding: while it is technically possible to keep making things better, faster and cheaper, it stops happening after 3 or 4 such steps.

Typically I have seen such growth hit one of the following limits:

  • Problem domain limit: For example, when subatomic particles were discovered, they were discovered at a phenomenal rate. At one point it was predicted that there would soon be a particle for each person on earth. Two things stopped that explosion. First, human beings out-produced the particles with papers :-) and second, the Standard Model gave a very succinct representation and the race ended. Similarly, when I set out to design game boards, I invented new boards exponentially (cylindrical, helical, …) till I found the principle behind what I was doing. After that, I could predict them all, and that was the end.
  • Operational limit: For example, suppose you meta-automate tests, i.e. write libraries in such a way that test cases generate themselves. I have reached the meta-automation stage 3 times in my career so far. It found bugs by the ton. However, I soon hit operational limits in terms of the lab: I was able to generate tens of thousands of tests on demand, but I still needed physical time and setups to run them (see the sketch after this list).
  • Organizational limit: For example, if you follow “Decision-less Programming” to its extreme or design tools that crank out near-faultless code, other organizations (like your users or testers) may not be ready for such an assault. I once eliminated my own team’s jobs by doing something like this: all the foreseen projects were done when our homegrown “web-page flow to code converter” and “DB to web-page designer” worked perfectly in tandem. [The pages sucked aesthetically but hey, who cared about the UI of internal tools?]
  • Financial/market limit: For example, the market for cars can’t accept cars going faster than a certain speed. Any faster, and roads or human reflexes end up causing accidents.
  • Social limit: For example, I remember someone from Siemens complaining about selling thermal power plants in India. While the plant technology had improved exponentially, to the point that a plant could be run by fewer than half a dozen engineers, the government insisted on specifying that more than a hundred people be employed. The power problem wasn’t so much one of power generation as of employment generation, he noted.
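
To make the operational limit concrete, here is a minimal sketch of what I mean by meta-automation above: enumerate a parameter space and let the test cases generate themselves. All parameters and names are hypothetical.

#include <stdio.h>

int main(void)
{
    /* Hypothetical parameter space for a packet-sending test */
    const char *protocols[] = { "tcp", "udp" };
    const int payloads[] = { 0, 1, 1500, 65535 };
    const int flags[] = { 0, 1 };
    int id = 0;

    /* Every combination becomes a test case. Three tiny arrays already
     * yield 2 * 4 * 2 = 16 cases; real parameter spaces explode into
     * tens of thousands, which is exactly where the lab becomes the
     * bottleneck. */
    for (size_t p = 0; p < 2; p++)
        for (size_t s = 0; s < 4; s++)
            for (size_t f = 0; f < 2; f++)
                printf("test_%03d: send %d-byte %s packet, flag=%d\n",
                       ++id, payloads[s], protocols[p], flags[f]);
    return 0;
}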

What other limits cross your mind?

Why will Ostendo’s holographic display on mobiles fail?

I am generally a tech-enthusiast. It is rare to find me criticizing a technology so early in the hype cycle.

However, in my view, Ostendo’s new holographic display technology for mobiles is heading towards failure.

The reasons are fairly clear.

  • Content availability in 3D has always been a spoiler. Something as expensive as a 3D viewer, and no World Cup Football (Soccer) being streamed for it? Not worth it!
  • The counter-argument to content availability is to start with socials and chats and *generate* content. The trouble is: how would the cameras be placed geometrically during a chat? Avatars may work, but the novelty will soon fizzle out. Mobility isn’t about setting up speakers and a home theater. How do you think setting up 3D, heck, holographic 3D cameras would suit a casual chat setup? Just look at poor selfies! Getting decent angles in selfies has been a problem already.
  • 3D projection alone isn’t going to be enough. It has to be accompanied by a camera on the other side and protocols in between. There are too many players in these three spaces. Too many cooks have spoiled the broth too many times.
  • Last but not least, it appears that the projection is likely to be perpendicular to the display. That means if I can see what is being projected, so can the person sitting next to me. That kills privacy.

What do you say?

Fascination with 2-D continues

I was editing some Excel sheets. Being a manager, it seems this is the one piece of software I am good at :-) People also warn that if I become an executive, Excel will be substituted with PowerPoint.

Anyway, let us return to Excel. It lets me copy-paste in handsome ways. I can paste with/without formatting, with/without formulas, and also paste transposed (rows and columns flipped). As a user of Emacs (a manager using Emacs!) I remember pasting in inverted order as well. Excel notably misses this.

However, yesterday I realized the need for one more way of pasting: bottom-up pasting. It is easy to confuse it with Emacs’ inverted pasting. Here is the difference:

Emacs’ inverted pasting still pastes in the “forward” direction, from the cursor rightwards.

What I wanted was to paste from the cursor leftwards, or upwards.

That means:

  1. The “pasting information” direction may be straight or inverted (like that Emacs option)
  2. The pasting direction (as in my requirement) is different from, and independent of, the “pasting information” direction
  3. The pasting direction may be rightwards (as usually done) or leftwards. In 1-D it may not make a difference, but in 2-D it makes a huge difference
  4. Transposed pasting is yet another, independent operation (a small sketch follows this list)
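
Here is that sketch in C (mine, not Excel’s), in 1-D for brevity: it separates the order in which copied cells are read (straight vs. inverted) from the direction in which they are written (rightwards vs. leftwards of the “cursor”).

#include <stdio.h>

#define N 5

/* Paste src[0..n-1] into dst starting at index cursor.
 * inverted: read src back to front; leftwards: write towards index 0. */
void paste(int *dst, int cursor, const int *src, int n,
           int inverted, int leftwards)
{
    for (int i = 0; i < n; i++) {
        int value = src[inverted ? n - 1 - i : i];
        int at = leftwards ? cursor - i : cursor + i;
        if (at >= 0 && at < N)   /* stay inside the "sheet" */
            dst[at] = value;
    }
}

int main(void)
{
    int row[N] = {0};
    int clip[3] = {7, 8, 9};
    paste(row, 4, clip, 3, 0, 1);   /* leftwards ("bottom-up") paste */
    for (int i = 0; i < N; i++)
        printf("%d ", row[i]);      /* prints: 0 0 9 8 7 */
    printf("\n");
    return 0;
}

Extending the same two flags to two dimensions gives all the combinations of points 1 and 3, with transposition as a third, orthogonal flag.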

BTW, are we slowly gathering requirements for Excel++? Is 2-D programming really emerging here?

Learning Curve – Beyond Search Engines

Nobody doubts that Internet search has revolutionized the human endeavor of gathering information. I have been using the Internet for 20 years now. Over this period, I have started realizing the limits of just a “search engine”.

If you look back at the most prominent search engine, Google, it has changed in only the following ways from a black-box point of view:

  1. Predictive search, which in turn killed “I’m Feeling Lucky”
  2. Tag-based search on images etc.
  3. Infinite scrolling of search results rather than pagination
  4. More relevant search results

None of the above was a trivial achievement. However, it is still search, search and search. Wolfram Alpha has a new approach to search, and some other search engines may have some more angles on it.

However, for an intelligent being, searching for information isn’t enough. A lot of the time, another, longer intellectual activity is going on in the end user’s mind. This activity is called “learning”.

What I wonder is: can an engine be designed that takes all the random information on the web and draws a learning curve for a given concept? That is, given a concept (keyword), can the Web be rendered in an ordered manner that introduces the concept and gradually leads the reader to deeper and deeper material? I will call such an engine Learning Curve.

The effect of such an engine on human intellectual activity would be dramatic. Suddenly, searching would sound too consumerist. If we can draw such a curve (or curves) for a given concept, and associate learning sessions with a tool like the browser, the potential of human civilization can be realized like never before. If we can build such an engine (and popularize it :-) the human knowledge growth curve will be much steeper than with the current “search and get lucky” approach. If the Learning Curve approach is like settled agriculture, searching is like hunting-gathering.

The next question is: how do we order the ever-increasing knowledge on the web from easy to hard?

First, the negatives:

  • User rating of a standalone page is likely to be defrauded or misjudged
  • Asking administrators to rate pages is likely to be misjudged, or they may lack the motivation
  • Given the variety of human learning habits, a single way to learn may not exist

Now, how it may be achieved:

  • Absolute rating of pages doesn’t make sense, but relative rating does. A browser plugin may display two pages about a concept side by side, with the end user rating them easier/harder
  • An engine stores each page’s ranking for each concept along with the other ratings (now, that is computationally suicidal)
  • Upon the next request, the engine serves pages by rank. The user can adjust the rank, as with a chess rating, according to her/his level in a particular subject. The service need not serve pages ranked exactly 1, 2, 3 and so on; it may bunch pages into links for ranks 1-50, 51-100, … (a sketch of the pairwise rating follows this list)
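
Since the last bullet leans on the chess analogy, the sketch below shows how easier/harder votes could update a per-concept difficulty score using an Elo-style formula. This is my assumed mechanism, not a finished design; the URLs and numbers are made up.

#include <stdio.h>
#include <math.h>

#define K 32.0   /* update step, as in chess Elo */

typedef struct {
    const char *url;
    double difficulty;   /* per-concept difficulty score */
} Page;

/* The user judged 'harder' to be the harder of the two pages. */
void rate_pair(Page *easier, Page *harder)
{
    /* Expected probability that 'harder' wins the comparison */
    double expected = 1.0 /
        (1.0 + pow(10.0, (easier->difficulty - harder->difficulty) / 400.0));
    harder->difficulty += K * (1.0 - expected);
    easier->difficulty -= K * (1.0 - expected);
}

int main(void)
{
    Page intro = { "http://example.com/intro", 1200.0 };
    Page paper = { "http://example.com/paper", 1200.0 };
    rate_pair(&intro, &paper);   /* one side-by-side vote */
    printf("%s: %.0f\n", intro.url, intro.difficulty);   /* 1184 */
    printf("%s: %.0f\n", paper.url, paper.difficulty);   /* 1216 */
    return 0;
}

Bunching pages into rank bands (1-50, 51-100, …) then falls out of a simple sort on the scores.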

Do you think it may work? If not, what can work? Is this idea worth pursuing at all?

BTW, commercialization of this idea is easy :-)

Cisco’s “Application Centric Infrastructure” – is it “Decisionless”?

Cisco has been promoting “Application Centric Infrastructure” as an alternative to Software Defined Networking (SDN).

I need to do more homework to appreciate the difference between SDN and ACI.

However, what struck me was that ACI is about taking policy out of the forwarding path. As per my understanding of ACI, once a policy is set by the “intelligence” of the APIC, hardware takes over forwarding.

This is strikingly similar to the decision-less programming I have been advocating. Readers of this blog are aware that in decision-less programming, only I/O requires decisions; business logic, buried deep in the FSMs of derived classes, is implemented as policy look-up tables. A tiny sketch of the idea follows.
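
Here is the sketch, with hypothetical states and events: once input is parsed into a (state, event) pair, the business logic is a pure table look-up, with no if/else in the processing path.

#include <stdio.h>

enum state { IDLE, ACTIVE, NUM_STATES };
enum event { START, STOP, NUM_EVENTS };

typedef struct {
    enum state next;
    const char *action;
} Policy;

/* The entire business logic lives in this look-up table. */
static const Policy policy[NUM_STATES][NUM_EVENTS] = {
    /*            START                 STOP            */
    /* IDLE   */ { {ACTIVE, "forward"}, {IDLE, "drop"}  },
    /* ACTIVE */ { {ACTIVE, "forward"}, {IDLE, "flush"} },
};

int main(void)
{
    enum event inputs[] = { START, START, STOP };
    enum state s = IDLE;
    for (int i = 0; i < 3; i++) {   /* no decisions, only look-ups */
        const Policy *p = &policy[s][inputs[i]];
        printf("event %d -> action: %s\n", i, p->action);
        s = p->next;
    }
    return 0;
}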

If my understanding of the parallels so far is correct, I suppose ACI will have the same characteristics as decision-less programming:

  • There will be no “patchability” of policies. All the policies must be explicitly known, documented and implemented
  • The system will work predictably, barring system-level problems like running out of resources
  • The system will be extremely fast
  • The system will be memory intensive
  • The system will require sophisticated input parsing
  • Testing of such systems will be trivial, except at boundary conditions (like array-bound violations) or system-level problems (like thread lockups or disk overruns)

Is interference the next big thing?

Recently I covered spying applications like “audio zoom”. Here is an excellent article about pCell, a technology that promises 1,000 times faster WWAN coverage. Though pCell may never become a reality due to cold, hard economics, something VERY interesting is common across such developments.

The common theme here is the exploitation of interference, or superposition, through multiple, yet still few, transceivers.

As a trend, I see that physics still matters to engineering even after computers got into our pockets :-) I also see that a phenomenon of wave physics called interference is suddenly being paid attention to. So far, technology (or the technology I know) has focused on the brute-force power of signals and the sheer number of transceivers. This is the first time I see “transceivers working together” yielding a sum bigger than its parts. This exploitation of interference can be attributed to the availability of cheaper and less power-hungry processors. As time passes, we may see more and more such interference-based technologies emerge and serve us better. A back-of-the-envelope sketch of the underlying physics follows.
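
Here is the sketch, a toy model of mine rather than pCell’s actual method: two transmitters, phased so that their signals add constructively at a chosen point while adding to much less at a nearby one.

#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979323846

/* Field contribution at distance d from a transmitter with phase phi. */
double field(double d, double wavelength, double phi)
{
    return cos(2.0 * PI * d / wavelength + phi);
}

int main(void)
{
    double wl = 0.12;               /* ~2.5 GHz carrier, in meters */
    double d1 = 3.00, d2 = 3.42;    /* distances from the two transmitters */

    /* Phase the second transmitter so both arrivals align at the target. */
    double phi2 = 2.0 * PI * (d1 - d2) / wl;

    double at_target = field(d1, wl, 0.0) + field(d2, wl, phi2);
    double elsewhere = field(d1 + 0.03, wl, 0.0) + field(d2 + 0.05, wl, phi2);

    printf("combined amplitude at target : %.2f\n", at_target);   /* ~2.00 */
    printf("combined amplitude elsewhere : %.2f\n", elsewhere);   /* much smaller */
    return 0;
}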

IMHO this could be a paradigm shift in technology as big as superheterodyne transmission (or the radio), which moved information exchange much farther, faster and cheaper than earlier methods of baseband communication.

Also, any such change brings about bigger changes in the way other branches of knowledge (like philosophy) perceive the world. I can see what kind of impact the success of such technologies may have on our collective thinking. I can even predict some philosophical ideas of the future. However, this is largely a tech blog. So let us stop here :-)

2-D programming – about function prototypes, definitions and calls

Talking to people about my ideas helps me fine-tune them.

Recently I talked to a bunch of students about implementing my earlier post on 2-D programming. That got my brain ticking. I thought about function prototypes, definitions and function calls in 2-D programming.

Function prototyping and definition:

I had earlier written about improving function prototypes by adding a contract to each argument. That is, rather than writing the prototype of fwrite just as

size_t fwrite ( const void * ptr, size_t size, size_t count, FILE * stream );

we should also mention the hidden assumptions (contracts), for the better understanding of the programmer who is going to use fwrite:

size_t fwrite ( const void * ptr != NULL, size_t size > 0, size_t count >= 0, FILE * stream != NULL);

That is, human readability is also important in code maintenance, and an explicit statement of the contract helps avoid a lot of bugs.
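
C prototypes cannot carry such contracts today, but a thin wrapper can at least enforce them at run time. A minimal sketch of the fwrite contract above (the wrapper name is mine):

#include <assert.h>
#include <stdio.h>

/* Enforce the stated contract at run time before delegating to fwrite. */
size_t fwrite_checked(const void *ptr, size_t size, size_t count,
                      FILE *stream)
{
    assert(ptr != NULL);       /* const void * ptr != NULL */
    assert(size > 0);          /* size_t size > 0 */
    /* count >= 0 holds trivially, since size_t is unsigned */
    assert(stream != NULL);    /* FILE * stream != NULL */
    return fwrite(ptr, size, count, stream);
}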

However, when you add default values to the prototype as well, the 1-D arrangement becomes very difficult to read, because each argument now has four attributes associated with it: type, argument name, default value and contract. The same exercise has to be repeated when the function is defined:

size_t fwrite ( const void * ptr != NULL, size_t size > 0, size_t count >= 0, FILE * stream != NULL) {
...
}

A 2-D definition may make it very succinct and intuitive. (Here I assume fwrite defaults to stdout as the stream to write to.)

/*type*/       size_t   const void *   size_t   size_t   FILE *
/*name*/       fwrite   ptr            size     count    stream
/*default*/                                     1        stdout
/*contract*/   >= 0     != NULL        > 0      >= 0     != NULL

Now that we have the freedom to redesign the prototype (or the first line of the function) with one more dimension, the following will suit reader psychology best:

/*name*/       fwrite   ptr            size     count    stream
/*type*/       size_t   const void *   size_t   size_t   FILE *
/*contract*/   >= 0     != NULL        > 0      >= 0     != NULL
/*default*/                                     1        stdout

At first glance (row #1), this definition shows that fwrite needs a data pointer, an element size, an element count and a destination stream to write to. The reader doesn’t get lost in other details unless needed. The human mind thinks about the arguments of a function first, and only later about their types, default values or restrictions. Often, some of these attributes are understood better once the function is fully understood. By describing the argument list in a “one-row view”, the writer very quickly communicates to the reader what the function is all about.

The counterpart of the function call, i.e. statement structure, can also become interesting in 2-D. Suppose you are a pre-historic-era mathematician trying to find the area of an ellipse. You would deduce that areaOfAnEllipse has to do something with its major axis and minor axis and nothing more. We programmers all have an intuitive feel for what arguments will be needed to evaluate a function, and only later figure out the necessary functions and operations to fit them in. 2-D programming structures can be thought out to do the same.

Take, for example, the area of an ellipse with semi-axes A and B. We will not require function calls here. Operators alone will suffice:

/*variables*/                  areaOfAnEllipse =   pi     A     B
/*operations and functions*/                           *     *

[As you can see, it is kind of wobbly. Someone with a theory background may help me fine-tune the presentation.]

For function calls it is easier. For example, if you are trying to determine the value of gobbledegook as cos A * sin B - sin A * cos B, the 2-D program may look like:

/*variables*/                  gobbledegook =   A     B     A     B
/*operations and functions*/                    cos   sin   sin   cos
/*operations and functions*/                       *           *
/*operations and functions*/                             -
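
For comparison, here is the same computation in conventional 1-D C:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double A = 0.5, B = 1.2;   /* arbitrary sample inputs */
    /* Read column-wise from the 2-D layout above */
    double gobbledegook = cos(A) * sin(B) - sin(A) * cos(B);
    printf("gobbledegook = %f\n", gobbledegook);   /* equals sin(B - A) */
    return 0;
}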


It is tempting to continue thinking about what may happen to control structures and expressions (apart from the 2-D nesting we discussed earlier). However, it is late in the day and I am on allergy medication. Waiting for your comments, folks!