Cisco’s “Application Centric Infrastructure” – is it “Decisionless”?

Cisco has been promoting “Application Centric Infrastructure” as an alternative to Software Defined Networking (SDN).

I need to do more homework to appreciate the difference between SDN and ACI.

However, what struck me was that ACI is about taking policy out of the forwarding path. As per my understanding of ACI, once a policy is set by the “intelligence” of the APIC, the hardware takes over forwarding.

This is strikingly similar to the decision-less programming I have been advocating. Readers of this blog are aware that in decision-less programming, only I/O requires decisions. Business logic, buried deep in the FSMs of derived classes, is implemented as policy look-up tables.

If my understanding of the parallels so far is correct, I suppose ACI will have the same characteristics as decision-less programming:

  • There will be no “patchability” of policies. All the policies must be explicitly known, documented and implemented
  • The system will work predictably, barring system-level problems such as running out of resources
  • The system will be extremely fast
  • The system will be memory intensive
  • The system will require sophisticated input parsing
  • Testing of such systems will be trivial, except at boundary conditions (like array bound violations) or at system-level problems (like thread lockups or disk overruns)
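
To make the parallel concrete, here is a minimal C sketch of decision-less dispatch through a policy look-up table. The states, events and actions are hypothetical, purely for illustration; ACI’s internals may look nothing like this:

#include <stdio.h>

enum state { IDLE, ESTABLISHED, N_STATES };
enum event { PKT_IN, TIMEOUT, N_EVENTS };

typedef void (*action_fn)(void);

static void forward_pkt(void) { puts("forward"); }
static void drop_pkt(void)    { puts("drop");    }

/* The policy is filled in once, up-front (the "control plane"). */
static const action_fn policy[N_STATES][N_EVENTS] = {
    [IDLE]        = { [PKT_IN] = drop_pkt,    [TIMEOUT] = drop_pkt },
    [ESTABLISHED] = { [PKT_IN] = forward_pkt, [TIMEOUT] = drop_pkt },
};

int main(void)
{
    /* The "data plane" is a pure table look-up: no if/else in the hot path. */
    policy[ESTABLISHED][PKT_IN]();   /* prints "forward" */
    policy[IDLE][TIMEOUT]();         /* prints "drop"    */
    return 0;
}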

Is interference the next big thing?

Recently I have covered spying applications like “audio zoom”. Here is an excellent article about pCell, a technology that promises WWAN coverage 1,000 times faster. Though pCell may never become a reality because of cold, hard economics, something VERY interesting is common across such developments.

The common theme here is the exploitation of interference, or superposition, through multiple – yet a small number of – transceivers.

As a trend, I see that physics still matters to engineering even after computers got into our pockets :-) I also see that a phenomenon of wave physics called interference is suddenly being paid attention to. So far, technology (or the technology I know) has focused on the brute-force power of signals and the number of transceivers. This is the first time I see “transceivers working together” yielding a “sum bigger than the parts”. This exploitation of interference can be attributed to the availability of cheaper and less power-hungry processors. As time passes, we may see more and more such interference-based technologies emerge and serve us better.

IMHO this could be a paradigm shift in technology as big as superheterodyne transmission (or the radio), which moved the possibility of information exchange much farther, faster and cheaper than earlier methods of baseband communication.

Also, any such change brings about bigger changes in the way other branches of knowledge (like philosophy) perceive the world. I can see what kind of impact the success of such technologies may have on our collective thinking. I can even predict some philosophical ideas of the future. However, this is largely a tech blog. So let us stop here :-)


2-D programming – about function prototypes, definitions and calls

Talking to people about my ideas helps me fine-tune them.

Recently I talked to a bunch of students about implementing my earlier post about 2-D programming. That got my brain ticking. I thought of function prototypes, definitions and function calls in 2-D programming.

Function prototyping and definition:

I had earlier written about improving function prototyping by adding a contract to each argument. That is, rather than writing the prototype of fwrite just as

size_t fwrite ( const void * ptr, size_t size, size_t count, FILE * stream );

we should also mention the hidden assumptions (the contract), for the better understanding of the programmer who is going to use fwrite, as:

size_t fwrite ( const void * ptr != NULL, size_t size > 0, size_t count >= 0, FILE * stream != NULL);

That is, human readability is also important in code maintenance, and an explicit statement of the contract helps avoid a lot of bugs.

However, when default values are also mentioned in the prototype, the 1-D arrangement becomes very difficult to read, because each argument now has four attributes associated with it – type, argument name, default value and contract. The same exercise has to be repeated when the function is defined:

size_t fwrite ( const void * ptr != NULL, size_t size > 0, size_t count >= 0, FILE * stream != NULL) {
...
}
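
The annotated prototype above is of course not legal C. For comparison, here is a rough sketch of how the same contract can be approximated in today’s 1-D C with assert()s at the top of the definition – checked_fwrite is a made-up wrapper, not a language proposal:

#include <assert.h>
#include <stdio.h>

size_t checked_fwrite(const void *ptr, size_t size, size_t count, FILE *stream)
{
    assert(ptr != NULL);      /* const void * ptr != NULL */
    assert(size > 0);         /* size_t size > 0          */
    /* count >= 0 holds trivially because size_t is unsigned */
    assert(stream != NULL);   /* FILE * stream != NULL    */
    return fwrite(ptr, size, count, stream);
}

int main(void)
{
    const char msg[] = "hello\n";
    return checked_fwrite(msg, 1, sizeof msg - 1, stdout) == sizeof msg - 1 ? 0 : 1;
}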

A 2-D definition may make it very succinct and intuitive. Assuming fwrite defaults to stdout as the stream to write to:

/*type*/       size_t    const void *   size_t   size_t   FILE *
/*name*/       fwrite    ptr            size     count    stream
/*default*/                                      1        stdout
/*contract*/   >= 0      != NULL        > 0      >= 0     != NULL

Now that we have the freedom to redesign the prototype (or the first line of the function) with one more dimension, the following will suit reader psychology the best:

/*name*/       fwrite    ptr            size     count    stream
/*type*/       size_t    const void *   size_t   size_t   FILE *
/*contract*/   >= 0      != NULL        > 0      >= 0     != NULL
/*default*/                                      1        stdout

At first glance (row #1) this definition shows that fwrite will need a data pointer, the size of an element, the number of elements and a destination stream to write to. The reader doesn’t get lost in the other details unless they are needed. The human mind thinks about the arguments of a function first and only later about their types, default values or restrictions; often, these attributes are understood better once the function as a whole is understood. By describing the list of arguments in a “one row view”, the writer very quickly communicates to the reader what the function is all about.

The counterpart of the function call – or statement structure – can also become interesting in 2-D. Suppose you are a mathematician from a pre-historic era trying to find out the area of an ellipse. You would deduce that AreaOfAnEllipse has to do something with its major axis and minor axis and nothing more. We programmers all have an intuitive feel for which arguments will be needed to evaluate a function – and later figure out the necessary functions and operations to fit them in. 2-D programming structures can be thought out to do the same.

Take, for example, the area of an ellipse with its semi-axes being A and B. We will not require function calls here. Operators alone will suffice:

/*variables*/                  areaOfAnEllipse =   pi      A      B
/*operations and functions*/                           *
/*operations and functions*/                                  *

[As you can see, it is kind of wobbly. Someone with a theory background may help me fine-tune the presentation.]

For function calls it is easier. For example, if you are trying to determine the value of gobbledegook as cos A*sin B – sin A*cos B, 2-D programming may look like:

/*variables*/                 gobbledegook =   A      B      A      B
/*operations and functions*/                   cos    sin    sin    cos
/*operations and functions*/                       *             *
/*operations and functions*/                              -
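
For reference, the 1-D C that the layout above is meant to encode would read something like this:

#include <math.h>

double gobbledegook(double A, double B)
{
    /* the familiar one-line form of the 2-D arrangement above */
    return cos(A) * sin(B) - sin(A) * cos(B);
}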


It is tempting to continue thinking about what may happen to control structures and expressions (apart from the 2-D nesting we discussed earlier). However, it is late in the day and I am on allergy medication. Waiting for your comments, folks!

Smarter keyboard

Now you feel I am stretching it a bit too much.

Come on! What is there in a keyboard to get smarter? A keyboard could be sleeker, fancier, pricier – and even just projected on a screen or table top. But smarter?

Yes, why not? A keyboard can be made smarter. The bad news is that technology is making it redundant.

A keyboard with a built-in dictionary can be built. Why can’t this statement be typed with fewer (and funkier) keystrokes? Like…

User types: ‘W’ – keyboard pushes ‘h’ – user mentally accepts so moves forward to

User types ‘y’ – keyboard pushes a blank space – user mentally accepts so moves forward to

User types ‘c’ – keyboard pushes ‘h’ – user presses an “override” key + ‘a’ – keyboard pushes ‘n’ – user mentally accepts so moves forward

User types an apostrophe – keyboard pushes ‘t’ – user mentally accepts so moves forward to

User types a blank space

User types ‘t’ – keyboard pushes ‘h’ – user mentally accepts so moves forward to …

You can imagine the keyboard as a thought reader with an override key.
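
Here is a toy C sketch of that suggest/override loop. predict_next() is a stand-in of mine for the built-in dictionary, and the override key is modelled as a backspace character – both are assumptions, just to show the idea:

#include <stdio.h>

/* A stand-in for the dictionary: given the last key, guess the next one. */
static char predict_next(char last)
{
    switch (last) {
    case 'W': case 't': case 'c': return 'h';
    case 'y':                     return ' ';
    case 'a':                     return 'n';
    case '\'':                    return 't';
    default:                      return 0;   /* no suggestion */
    }
}

int main(void)
{
    /* What the user actually presses; '\b' plays the role of the override key. */
    const char *keystrokes = "Wyc\ba' t";
    char text[64] = "";
    size_t len = 0;

    for (const char *k = keystrokes; *k; ++k) {
        if (*k == '\b') {                     /* user rejects the pushed character */
            if (len) text[--len] = '\0';
            continue;
        }
        text[len++] = *k;                     /* the user's keystroke              */
        char guess = predict_next(*k);
        if (guess) text[len++] = guess;       /* the character the keyboard pushes */
        text[len] = '\0';
    }
    printf("%s\n", text);                     /* prints: Why can't th              */
    return 0;
}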

Interestingly, this problem is taken care of in application software these days.

Testing a feature in presence of many possible combinations of environment

Take, for example, the testing of a router where we list features in columns of test areas:

L1           L2      L3
10BaseT      STP     RIPv1
100BaseT     RSTP    RIPv2
1G           PVST    OSPF
10G          MVST    ISIS
                     BGP

In the beginning of a product like this, most of the interest will be in making sure each of the protocols works. Standalone testing of the cells will be more than enough.

Once most of the low-hanging (and highly damaging) bugs that can be found by testing cells of the matrix alone are weeded out of the software, QA progresses by making its testers “experts” column-wise – an L1 expert, an L2 expert, an L3 expert and so on. This leads to more experienced testers and technically great bugs (like route redistribution having memory leaks). QA also arranges its test plans by test areas and features.

At this stage, only a section of the bugs seen by a customer has been eliminated, and the complaint “QA isn’t testing the way the customer does” continues.

That is because the customer doesn’t use the product by columns. A typical customer environment is a vector selecting (at most) one member from each column. A product is likely to break at any such vector.

As you can see, overall testing of the matrix above would require testing 4*4*5 = 80 environments. In reasonably sized products the actual number may be in the 10,000s.

***

Testing a feature in presence of many possible combinations of environment is a well-known QA problem.

Various approaches have been suggested. There have been combinatorial engines and test pairs and so on to help QA optimize this multi-dimensional problem.

The approach I discuss here is yet another algorithm, to be followed by semi-automation. Just let me know your thoughts about it.

***

Let us define our terms once again:

  • Feature: A feature in the product [I know that isn't a very great definition.]
  • Area: A set of related features
  • Product: A set of Areas
  • Environment: A vector (or a sub-vector) of Product, with each component coming from an Area
  • Environment for a Feature: A maximal-length sub-vector of the Product, excluding the Area of the Feature

Please understand once again that QA expertise and test plans are structured by Area (a column in the matrix). The best way to test would be to test every test of every Feature against every “Environment for the Feature”.

This approach is simply uneconomical. So, what is the next best approach?

***

Before coming to that, we need to understand how test cases are typically structured. Within a feature, test cases typically have some overlap of steps – like configuration or authentication – or of overlapping phenomena – like exchange of Hello packets, establishment of neighbor relationships, etc.

That means there is significant redundancy among tests from a white-box point of view.

This redundancy can be exploited to assure that the product stands up reasonably well in diverse environments. As we discussed earlier, such an environment is a vector of that matrix, which in turn is a sub-vector plus a test condition.

***

This much understanding brings us to a workable solution for testing in a more “customer-like” way without incurring too much cost.

The key insight from the above discussion is that the Environment for a Feature can be changed independently of the test case (or the Feature, or the Area).

That means that if the tester can signal an “environment controller” at the end of a test case, the controller can change the DUT/SUT to another Environment for the Feature. Once that change is done, the tester simply continues with the next test case – till all test cases end.

Because it is unlikely that the number of test cases is a factor (or a multiple) of the number of sub-vectors, within a few test cycles a reasonable number of test steps will have been exercised across reasonably diverse environmental sub-vectors.
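
A minimal C sketch of this rotation, assuming a hypothetical set of environment sub-vectors and a handful of test cases (the names and the controller itself are placeholders, not a real harness):

#include <stdio.h>

#define N_ENVS  6    /* sub-vectors built from the other Areas' columns */
#define N_TESTS 4    /* test cases for the Feature under test           */

static const char *envs[N_ENVS] = {
    "10BaseT+STP", "10BaseT+RSTP", "1G+PVST", "1G+RSTP", "10G+STP", "10G+PVST"
};

static void apply_environment(const char *env) {
    printf("  [controller] reconfigure DUT/SUT to %s\n", env);
}

static void run_test_case(int t) {
    printf("  [tester] configure the Feature, run test case %d\n", t);
}

int main(void)
{
    int env = 0;
    for (int cycle = 0; cycle < 3; ++cycle) {       /* a few test cycles */
        printf("cycle %d\n", cycle);
        for (int t = 0; t < N_TESTS; ++t) {
            apply_environment(envs[env]);           /* controller's move */
            run_test_case(t);                       /* tester's move     */
            env = (env + 1) % N_ENVS;               /* rotate: since N_TESTS is
               not a multiple of N_ENVS, each test case meets a different
               environment on each cycle */
        }
    }
    return 0;
}

With these toy numbers, test case 0 meets environments 0, 4 and 2 over the three cycles; whatever the real counts are, this drift spreads the test steps across sub-vectors over time.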

As a testing strategy, QA can assure its internal customers that, over a few releases, most of the interesting bugs that can be found economically will be found.

***

What are the downsides of this approach? For sure, the Environment for a Feature must not contain any configuration of the Feature under test – or even of its Area. That means the tester will always have to configure the Feature before moving to the next test. If you are working on a Feature that takes VERY long to converge, you are out of luck. [VPLS in networking comes across as an example.]

Since most products don’t involve long signaling delays, let me focus on optimizing this testing.

How can we find the maximum number of environment-related bugs in a given feature (or the entire column) within the minimum number of steps?

The answer is obvious to the readers of this blog – by taking the sub-vectors in an anti-sorted way!

Cut-and-Paste!

Think of cutting and pasting. Can the next generation of cut-and-paste behave in a more user-friendly way? Can’t think of much, right?

I have seen the most complex paste operation in Excel sheets. However, in my humble opinion, the most complex doesn’t always mean the most convenient.

The simple cut and paste of text hasn’t changed from its early days. It is very rough, ready and useful.

To make it more intelligent, we need to look at the behavior carefully. Let us take an example.

****

Original Text: “It is very rough, ready and useful.”

Intended Text: “It is very rough, useful and ready.”

Steps to be taken:

  1. Select “useful” from “It is very rough, ready and useful.”
  2. Ctrl+X -> “It is very rough, ready and .”
  3. Go to the space near comma
  4. Ctrl+V -> “It is very rough, usefulready and .”
  5. Enter a space -> “It is very rough, useful ready and .”
  6. Select “ready” from “It is very rough, useful ready and .”
  7. Ctrl+X -> “It is very rough, useful and .”
  8. Go to the space before the period
  9. Ctrl+V -> “It is very rough, useful and ready.”

phew!

If we had a Ctrl+W that understood conjunctions (“and”/“or” at the beginning, or a comma at the end) in the clipboard, and had the intelligence to push them to the end, here is how it could work:

  1. Select “and useful” from “It is very rough, ready and useful.”
  2. Ctrl+X -> “It is very rough, ready .”
  3. Go to the space near comma
  4. Ctrl+W -> pushes the leading “and ” to the end of the clipboard and pastes -> “It is very rough, useful and ready .”
  5. Remove the space before the period

presto!
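
Here is a rough C sketch of what such a Ctrl+W could do with the clipboard contents. The conjunction list and the smart_paste() helper are my own simplifications, just to make the behavior concrete:

#include <stdio.h>
#include <string.h>

/* If the clipboard starts with a conjunction, paste the remainder first and
 * push the conjunction to the end; otherwise behave like a plain Ctrl+V.   */
static void smart_paste(const char *clip, char *out, size_t outsz)
{
    const char *conjunctions[] = { "and ", "or " };
    for (size_t i = 0; i < sizeof conjunctions / sizeof *conjunctions; ++i) {
        size_t n = strlen(conjunctions[i]);
        if (strncmp(clip, conjunctions[i], n) == 0) {
            snprintf(out, outsz, "%s %s", clip + n, conjunctions[i]);
            return;
        }
    }
    snprintf(out, outsz, "%s", clip);
}

int main(void)
{
    char pasted[64];
    smart_paste("and useful", pasted, sizeof pasted);
    printf("[%s]\n", pasted);   /* prints: [useful and ] - note the trailing space */
    return 0;
}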

I know I have not yet perfected the algorithm. Can you fine-tune it?

Can the camera revolution check corruption in India?

Until recently, cameras were toys of the rich. As with many other industries, like wrist watches, cell phones changed the camera industry too.

Recently India was awash with scams. People were on the streets protesting against corruption.

I realized that everyone has a phone – and a camera. Why can’t such a formidable presence of evidence-catching machines be turned against the corrupt?

After all, a corrupt person is going to spend all that cash somewhere. Why can’t the masses turn into paparazzi and catch every possible movement of someone on the black list?

Later, the pictures may be circulated by some means – from MMS to Facebook or Tumblr. Such images can be used to estimate the person’s (or his family’s) spending habits, and he can be questioned about consumption disproportionate to his income.