Pair – A new data structure?

Inspired by dances, here is a data structure called “pair”. Please let me know if a similar data structure already exists.

A pair has two elements – element [0] and element [1].

An element may “join” or “leave” the pair under some conditions:

  • The pair starts in the “empty” state
  • When an element joins, it starts in the “waiting” state. The pair goes into the “proposed” state
  • When two elements are in the “waiting” state, they enter a “bonded” state. The pair goes into the “full” state
  • Once “bonded”, if one of the elements wants to “leave”, it enters the “leave requested” state. The pair goes into the “shaky” state
  • When both elements are in the “leave requested” state, they are “debonded” from the pair and destroyed. The pair goes into the “empty” state
  • If a “waiting” element wants to “leave” [that is, while the pair is “proposed”], it is directly debonded and destroyed. The pair goes into the “empty” state
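
The rules above form a small state machine. Here is a minimal Python sketch of it; the class and method names are my own invention, not from any existing library:

```python
from enum import Enum

class PairState(Enum):
    EMPTY = "empty"
    PROPOSED = "proposed"
    FULL = "full"
    SHAKY = "shaky"

class Pair:
    """A two-slot container following the join/leave rules above."""

    def __init__(self):
        self.state = PairState.EMPTY
        self.elements = {}  # element id -> element state

    def join(self, eid):
        if self.state is PairState.EMPTY:
            self.elements[eid] = "waiting"        # first element: pair is proposed
            self.state = PairState.PROPOSED
        elif self.state is PairState.PROPOSED:
            self.elements[eid] = "waiting"
            for k in self.elements:               # two waiting elements bond
                self.elements[k] = "bonded"
            self.state = PairState.FULL
        else:
            raise ValueError("cannot join a full pair")

    def leave(self, eid):
        if self.state is PairState.PROPOSED:
            self.elements.clear()                 # waiting element leaves: destroyed
            self.state = PairState.EMPTY
        elif self.state in (PairState.FULL, PairState.SHAKY):
            self.elements[eid] = "leave requested"
            if all(s == "leave requested" for s in self.elements.values()):
                self.elements.clear()             # both debonded and destroyed
                self.state = PairState.EMPTY
            else:
                self.state = PairState.SHAKY
        else:
            raise ValueError("nothing to leave")
```

A dance of two: `join("a")` proposes the pair, `join("b")` fills it, and the pair only empties again once both have requested to leave.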

Most paired dances follow this data structure. I guess one-to-one chats must also follow this structure. What other uses can you think of?

Also, there is a possibility of a directional pair – like the inner/outer loops of a raasa dance.

And finally, it could be more than a pair – an n-tuplet.

When two non-linearities collide

What happens if exponential growth of productivity continues?

Most tech managers will laugh at exponential growth of productivity. However, it is quite achievable. OK, OK! For now, let us assume it is possible to take steps that yield 20%-50% productivity improvements year over year. [Later I will charge a consultancy fee if you want to know how :-)]

On four separate occasions, I have experienced that while such a streak of improvements is possible, it is not practical to continue exponential growth for longer than 3 or 4 steps. Let me rephrase for better understanding: while it is technically possible to keep making things better, faster and cheaper, it doesn’t happen after 3 or 4 such steps.
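
To see why even 3 or 4 steps matter, here is the toy arithmetic of compounding the assumed 20%-50% yearly steps:

```python
# Compound an assumed per-step productivity gain over a number of steps.
# The rates and step counts are the assumptions from the text, nothing more.
def compounded(gain_per_step, steps):
    return (1 + gain_per_step) ** steps

print(compounded(0.30, 4))   # four 30% steps: roughly 2.86x
print(compounded(0.50, 4))   # four 50% steps: roughly 5.06x
```

Even at the modest end, four steps nearly triple output, which is exactly when the limits below start to bite.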

Typically I have seen such growth hitting one of the following limits:

  • Problem domain limit: For example, when subatomic particles were discovered, they were discovered at a phenomenal rate. Within a few decades it was predicted that there would be a particle for each person on the earth. Two things stopped that explosion: first, human beings out-produced the papers :-) and second, the Standard Model gave a very succinct representation and the race ended. Similarly, when I set out to design game boards, I invented new boards exponentially (cylindrical, helical, …) till I found the principle behind what I was doing. After that, I could predict them all, and there was an end
  • Operational limit: For example, suppose you meta-automate tests, writing libraries in such a way that test cases generate themselves. I have reached the meta-automation stage 3 times in my career so far. It found bugs by the ton. However, I soon hit operational limits in the lab: I was able to generate tens of thousands of tests on demand, but I still needed physical time and setups to run them
  • Organizational limit: For example, if you follow “Decision-less Programming” to its extreme or design tools that crank out near-faultless code, other organizations (like your users or testers) may not be ready for such an assault. I once eliminated my own team’s jobs by doing something like this. All the foreseen projects were done when our homegrown “web-page flow to code converter” and “DB to web-page designer” worked perfectly in tandem. [The pages sucked aesthetically but hey, who cared about the UI of internal tools?]
  • Financial/market limit: For example, the market for cars can’t accept cars going faster than some speed. Any faster, and roads or human reflexes will end up causing accidents
  • Social limit: For example, I remember someone from Siemens complaining about selling thermal power plants in India. While the plant technology had improved exponentially, to the level that a plant could be run by less than half a dozen engineers, the government insisted on specifying that more than a hundred be employed. The power problem wasn’t so much one of power generation as of employment generation, he noted

What other limits cross your mind?

Why will Ostendo’s holographic display on mobiles fail?

I am generally a tech-enthusiast. It is rare to find me criticizing a technology so early in the hype cycle.

However, in my view Ostendo’s new technology of holographic displays on mobiles is heading towards failure.

The reasons are fairly clear.

  • Content availability in 3D has always been a spoiler. You buy something as expensive as a 3D viewer, and World Cup Football (Soccer) isn’t even streamed for it? Not worth it!
  • The counter-argument to content availability is starting with socials and chats and *generating* content. The trouble will be: how would the cameras be placed geometrically during a chat? Avatars may work, but the novelty will soon fizzle out. Mobility isn’t about setting up speakers and a home theater. How do you think setting up 3D (heck, holographic 3D) cameras would suit a casual chat setup? Just look at poor selfies! Getting decent angles in selfies has been a problem
  • 3D projection alone isn’t going to be enough. It has to be accompanied by a camera on the other side and protocols in between. There are too many players in these three spaces. Too many cooks have spoiled the broth too many times
  • Last but not least, it appears that the projection is likely to be perpendicular to the display. That means if I could see what is being projected, so could the person sitting next to me. That kills privacy

What do you say?

Fascination with 2-D continues

I was editing some Excel sheets. Being a manager, it seems I am just good at this piece of software :-) People also warn that if I become an executive, Excel will be replaced with Powerpoint.

Anyway, let us return to Excel. It lets me copy-paste in a handsome manner. I can paste with/without format, with/without formulas and also paste transposed (rows and columns flipped). As a user of Emacs (a manager using Emacs!) I remember pasting in inverted order also. Excel notably misses this part.

However, yesterday I realized the need for one more way of pasting – bottom-up pasting. It is easy to confuse it with Emacs’ inverted pasting. Here is the difference:

Emacs’ inverted pasting still pastes in the “forward” direction, from the cursor rightwards.

What I wanted was to paste from the cursor leftwards or upwards.

That means:

  1. The “pasting information” direction may be straight or inverted (like that Emacs option)
  2. The pasting direction (as in my requirement) is different from, and independent of, the “pasting information” direction
  3. The pasting direction may be rightwards (as usually done) or leftwards. In 1-D it may not make a difference, but in 2-D it makes a huge difference
  4. Transposed pasting is yet another, independent activity
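
To make the independent knobs concrete, here is a hypothetical sketch of such a paste operation on a row-major grid. The function name, parameters and coordinate convention are all my own invention, not Excel’s or Emacs’:

```python
def paste_cells(block, cursor, invert_info=False, leftward=False, transpose=False):
    """Return {(row, col): value} for pasting `block` at `cursor` (row, col)."""
    rows = [list(r) for r in block]
    if transpose:                                   # independent knob: transpose rows/columns
        rows = [list(col) for col in zip(*rows)]
    if invert_info:                                 # "pasting information" direction inverted,
        flat = [v for r in rows for v in r][::-1]   # still laid out forward (the Emacs option)
        it = iter(flat)
        rows = [[next(it) for _ in r] for r in rows]
    h, w = len(rows), len(rows[0])
    r0, c0 = cursor
    cells = {}
    for i, row in enumerate(rows):
        for j, v in enumerate(row):
            if leftward:                            # paste direction: grow up/left from cursor
                cells[(r0 - (h - 1) + i, c0 - (w - 1) + j)] = v
            else:                                   # usual: grow down/right from cursor
                cells[(r0 + i, c0 + j)] = v
    return cells
```

For example, pasting a 2×2 block leftward at cursor (5, 5) fills rows 4-5 and columns 4-5, with the cursor as the bottom-right corner instead of the top-left.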

BTW, are we slowly gathering requirements for Excel++ here? Is 2-D programming really emerging here?

Learning Curve – Beyond Search Engines

Nobody doubts that Internet search has revolutionized the human endeavor of gathering information. I have been using the Internet for 20 years now. Over this period, I have started realizing the limits of just a “Search Engine”.

If you look at the most prominent search engine, Google, it has changed in only the following ways from a black-box point of view:

  1. Predictive search, which in turn killed “I’m feeling lucky”
  2. Tag based search on images etc.
  3. Infinite scrolling of search results rather than pagination
  4. More relevant search

None of the above was a non-trivial achievement. However, it is still search, search and search. Wolfram Alpha has a new approach to search – and some other search engines may have some more angles on search.

However, for an intelligent being, searching for information isn’t enough. A lot of the time, another, longer intellectual activity may be going on in the end user’s mind. This activity is called “learning”.

My question is: can an engine be designed that takes all the random information on the web and draws a Learning Curve for a given concept? That is, given a concept (keyword), can the Web be rendered in an ordered manner that introduces that concept and gradually leads the reader to deeper and deeper concepts? I will call such an engine Learning Curve.

The outcome of such an engine on human intellectual activity would be dramatic. Suddenly searching would sound too consumerist. If we can draw such a curve (or curves) for a given concept – and associate learning sessions with a tool like the browser – the potential of human civilization can be realized like never before. If we can build such an engine (and popularize it :-) the human knowledge growth curve will be much steeper than with the current “search and get lucky” approach. If the Learning Curve approach is like settled agriculture, searching is like hunting-gathering.

The next question is: how do we order the ever-increasing knowledge on the web from easy to hard?

First, the negatives:

  • User rating of a standalone page is likely to be gamed or misjudged
  • Asking administrators to rate the page is likely to be misjudged, or they may lack the motivation
  • Given varied human learning habits, a single way to learn may not exist

Now, how it may be achieved:

  • Absolute rating of pages doesn’t make sense, but relative rating does. A browser plugin may display two pages about a concept side by side, with the end user rating which is easier and which is harder
  • An engine stores each page’s ranking for each concept, along with other ratings (now, that is computationally suicidal)
  • Upon the next request, the engine serves pages by rank. The user can adjust the rank, like a chess engine’s level, according to her/his level in a particular subject. The service need not serve pages strictly as rank 1, 2, 3 and so on; it may bunch pages into links for ranks 1-50, 51-100, …
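
One way to turn those pairwise easier/harder votes into a ranking is an Elo-style difficulty score per concept, as used for relative strength in chess. This is only a sketch of the idea, with invented names:

```python
def record_vote(easier, harder, difficulty, k=32):
    """A user judged page `easier` simpler than page `harder`; nudge scores."""
    da = difficulty.setdefault(easier, 1500.0)
    db = difficulty.setdefault(harder, 1500.0)
    # Expected probability that `harder` is judged harder, given current scores.
    expected = 1 / (1 + 10 ** ((da - db) / 400))
    difficulty[harder] = db + k * (1 - expected)   # harder page's score rises
    difficulty[easier] = da - k * (1 - expected)   # easier page's score falls

def learning_order(difficulty, bucket=50):
    """Serve pages easiest-first, bunched into rank buckets (1-50, 51-100, ...)."""
    ranked = sorted(difficulty, key=difficulty.get)
    return [ranked[i:i + bucket] for i in range(0, len(ranked), bucket)]
```

Relative votes avoid the fraud-prone absolute ratings, and the bucketing matches the 1-50, 51-100 serving idea above. Whether enough users would vote is, of course, the open question.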

Do you think it may work? If not, what can work? Is this idea worth pursuing at all?

BTW, commercialization of this idea is easy :-)

Cisco’s “Application Centric Infrastructure” – is it “Decisionless”?

Cisco has been promoting “Application Centric Infrastructure” as an alternative to Software Defined Networking (SDN).

I need to do more homework to appreciate the difference between SDN and ACI.

However, what struck me was that ACI is about taking policy out of the forwarding path. As per my understanding of ACI, once a policy is set by the “intelligence” of the APIC, hardware takes over forwarding.

This is strikingly similar to the decision-less programming I have been advocating. Readers of this blog are aware that in decision-less programming, only I/O requires decisions. Business logic, buried deep in the FSMs of derived classes, is implemented as policy look-up tables.

If my understanding of parallels so far is correct, I suppose ACI will have the same characteristics as decision-less programming:

  • There will be no “patchability” of policies. All the policies must be explicitly known, documented and implemented
  • The system will work predictably, barring system-level problems such as running out of resources
  • The system will be extremely fast
  • The system will be memory intensive
  • The system will require sophisticated input parsing
  • Testing of such systems will be trivial, except at boundary conditions (like array bound violations) or at system-level problems (like thread lockups or disk overruns)
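
For readers new to the idea, here is a minimal sketch of what “decisions as a look-up table” means. This is my own illustration of the pattern, not Cisco’s actual ACI model; the zones and actions are invented:

```python
# Policy is "compiled" up front into a table, the way APIC would push policy
# down to hardware. The hot path then does no branching, only table indexing.
ALLOW, DENY = "forward", "drop"

# Every (source_zone, dest_zone) combination must be explicitly known,
# documented and implemented; there is no runtime patching of policies.
policy = {
    ("web", "app"): ALLOW,
    ("app", "db"):  ALLOW,
    ("web", "db"):  DENY,
    ("db",  "web"): DENY,
}

def forward(packet):
    # The "decision" is a single table lookup, not an if/else cascade.
    return policy[(packet["src"], packet["dst"])]
```

Note how the listed characteristics fall out: fast (one lookup), memory-intensive (the full table), predictable (no hidden branches) and unpatchable (a missing entry is a hard error, not a silent default).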

Is interference the next big thing?

Recently I have covered spying applications like “audio zoom“. Here is an excellent article about pCell, a technology that promises 1,000 times faster WWAN coverage. Though pCell may never become a reality due to cold, hard economics, something VERY interesting is common across such developments.

The common theme here is the exploitation of interference, or superimposition, through multiple – yet a small number of – transceivers.

As a trend, I see that physics still matters to engineering, even after computers got into our pockets :-) I also see that a phenomenon of wave physics called interference is suddenly being paid attention to. So far, technology (or the technology I know) has focused on the brute-force power of signals and the number of transceivers. This is the first time I see “transceivers working together” yielding a “sum bigger than its parts”. This exploitation of interference can be attributed to the availability of cheaper and less power-hungry processors. As time passes, we may see more and more such interference-based technologies emerge and help us better.
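
The “sum bigger than its parts” is plain wave superposition. A toy phasor sum shows it: two in-phase unit transmitters give 2x the field amplitude at a target, hence 4x the power (amplitude squared), where two independent transmitters would naively add only 2x the power:

```python
import math

def combined_amplitude(phases):
    """Sum unit-amplitude waves (phasors) with the given phases, in radians."""
    re = sum(math.cos(p) for p in phases)
    im = sum(math.sin(p) for p in phases)
    return math.hypot(re, im)

print(combined_amplitude([0.0, 0.0]))       # in phase: amplitude 2, power 4
print(combined_amplitude([0.0, math.pi]))   # out of phase: amplitude ~0
```

Steering those phases per receiver is, roughly speaking, what beamforming systems like pCell exploit: constructive interference exactly where you want the signal, destructive elsewhere.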

IMHO this could be a paradigm shift in technology as big as superheterodyne transmission (or the radio), which moved the possibility of information exchange much farther, faster and cheaper than earlier methods of baseband communication.

Also, any such change brings bigger changes in the way other branches of knowledge (like philosophy) perceive the world. I can see what kind of impact the success of such technologies may have on our collective thinking. I can predict some philosophical ideas of the future. However, this is largely a tech blog. So let us stop here :-)