Cisco’s “Application Centric Infrastructure” – is it “Decisionless”?

Cisco has been promoting “Application Centric Infrastructure” as an alternative to Software Defined Networking (SDN).

I need to do more homework to appreciate the difference between SDN and ACI.

However, what struck me was that ACI is about taking policy out of the forwarding path. As per my understanding of ACI, once a policy is set by the “intelligence” of the APIC, the hardware takes over forwarding.

This is strikingly similar to the decision-less programming I have been advocating. Readers of this blog are aware that in decision-less programming, only I/O requires decisions. Business logic, buried deep in the FSMs of derived classes, is implemented as policy look-up tables.
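
Here is a minimal Python sketch of the idea; the states, events and actions are hypothetical placeholders. The business logic sits entirely in one policy look-up table, and the only remaining “decision” is the look-up triggered by an input event.

```python
# Decision-less style in miniature: behaviour lives in a policy
# look-up table, not in if/else chains. States, events and actions
# below are hypothetical placeholders.

def greet(ctx):
    return "sent greeting"

def store(ctx):
    return "stored payload"

def hang_up(ctx):
    return "closed session"

# (current state, input event) -> (action to run, next state)
POLICY = {
    ("idle",    "connect"):    (greet,   "talking"),
    ("talking", "data"):       (store,   "talking"),
    ("talking", "disconnect"): (hang_up, "idle"),
}

def step(state, event, ctx=None):
    # The only remaining "decision" is this table look-up on the input.
    action, next_state = POLICY[(state, event)]
    return action(ctx), next_state

print(step("idle", "connect"))   # ('sent greeting', 'talking')
```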

If my understanding of the parallels so far is correct, I suppose ACI will share the characteristics of decision-less programming:

  • There will be no “patchability” of policies. All policies must be explicitly known, documented and implemented
  • The system will work predictably, barring system-level problems such as running out of resources
  • The system will be extremely fast
  • The system will be memory intensive
  • The system will require sophisticated input parsing
  • Testing of such systems will be trivial, except at boundary conditions (like array bound violations) or at system-level problems (like thread lockups or disk overruns)

Is interference the next big thing?

Recently I covered spying applications like “audio zoom”. Here is an excellent article about pCell, a technology that promises 1,000 times faster WWAN coverage. Though pCell may never become a reality because of cold, hard economics, something VERY interesting is common across such developments.

The common theme here is the exploitation of interference, or superposition, across multiple (yet few) transceivers.

As a trend, I see that physics still matters to engineering even after computers got into our pockets 🙂 I also see that a phenomenon of wave physics called interference is suddenly receiving attention. So far, technology (or the technology I know) has focused on the brute-force power of signals and the sheer number of transceivers. This is the first time I see “transceivers working together” yielding a sum bigger than the parts. This exploitation of interference can be attributed to the availability of cheaper and less power-hungry processors. As time passes, we may see more and more such interference-based technologies emerge and serve us better.

IMHO this could be a paradigm shift in technology as big as superheterodyne transmission (or the radio), which moved the possibility of information exchange much farther, faster and cheaper than the earlier methods of baseband communication.

Also, any such change brings bigger changes in the way other branches of knowledge (like philosophy) perceive the world. I can see what kind of impact the success of such technologies may have on our collective thinking. I can even predict some philosophical ideas of the future. However, this is largely a tech blog, so let us stop here 🙂

About delay, loss, fuzz and unfortunate events

I covered the importance of testing the impact of delayed responses earlier.

It is also important to test for lost responses to see how queuing up impacts the application/device under test.

Semantically confusing fields in networking data (like http://192.168.1.255 on a 255.255.255.0 LAN) should be tested to avoid gaffes and to make the code more secure. Protocol fuzz testing should ideally cover such cases.
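
As an illustration, here is a hedged sketch of the kind of semantically confusing host values a fuzzer could feed into such fields; the /24 subnet and the value list are assumptions, not an exhaustive test plan.

```python
# A sketch of "semantically confusing" host values worth fuzzing on a
# 255.255.255.0 LAN: each is syntactically valid yet semantically
# awkward. The subnet below is an assumption for illustration.
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")

confusing_hosts = [
    "192.168.1.255",    # directed broadcast of the local subnet
    "192.168.1.0",      # network address used as a host
    "255.255.255.255",  # limited broadcast
    "0.0.0.0",          # unspecified address
    "127.0.0.1",        # loopback "arriving from the wire"
]

for host in confusing_hosts:
    addr = ipaddress.ip_address(host)
    in_subnet = addr in net
    usable = in_subnet and addr not in (net.network_address, net.broadcast_address)
    print(f"{host:17} in-subnet={in_subnet} usable-host={usable}")
```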

Finally, there is also the possibility of events related to other protocols happening at unfortunate moments. How does your system behave when route summarization from OSPF into BGP is happening AND an OSPF update arrives? Testing such scenarios is extremely difficult for want of the right set of tools and skills. It also unleashes a combinatorial nightmare for testing. However, with a careful “code baking” policy, it is possible to find strange, bad and nasty bugs before customers find them in the field.

Technological angle to 2G scam discrepancy

The 2G scam was pegged at INR 1.76 Lakh Crores. When GoI auctioned the 2G spectrum again, it fetched a mere INR 9,600 Crores. Now pro-government forces are in “We told you so!” mode.

I am not a student of scam-ology. However, I have a reason to believe in the original estimate – it is called Obsolescence.

Technology becomes obsolete at an exceptional rate. Roughly, technology halves in value every year.

For example, your cell phone for which you paid INR 25,000 two years back is now worth INR 6,000.

Spectrum for 2G is not like land. Land has no competing substitute; 2G now has 3G and even 4G.

So, if at the end of 2012 the spectrum fetched INR 9,600 Crores, in 2008 it should have fetched

INR 9,600 Crores × 2 (for 2012) × 2 (for 2011) × 2 (for 2010) × 2 (for 2009) × 2 (for 2008)

= INR 9,600 Crores × 32

= INR 307,200 Crores, in 2012 rupee terms.

Given that the INR (rupee) lost its value significantly over the last five years, let us halve that figure to roughly INR 1.53 Lakh Crores.
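
For the record, the same back-of-the-envelope arithmetic in a few lines of Python, using the assumptions above (50% depreciation per year and a rough 2x rupee adjustment):

```python
# Back-of-the-envelope version of the arithmetic above. Figures are in
# INR crores; the 50% yearly depreciation and the 2x rupee adjustment
# are the assumptions made in the text.
auction_2012   = 9_600
doublings      = 5                               # 2012 back through 2008
value_2008     = auction_2012 * 2 ** doublings   # 307,200
rupee_adjusted = value_2008 / 2                  # ~1.53 lakh crores
print(value_2008, rupee_adjusted)                # 307200 153600.0
```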

That is a close match to the original CAG estimate, isn’t it?

Cheat sheet of data networking part 1

A lot of data networking is conceptually self-similar thanks to the layered approach.

Broadly speaking (from Grandma’s point of view), if <-> denotes exchanging or swapping,

  • Physical interface1<-> Physical interface2 is repeating
  • Physical port1 <-> Physical port2 is bridging
  • MAC1 <-> MAC2 is routing
  • IP1 <-> IP2 is NATting, load balancing
  • TCP/UDP port1 <-> TCP/UDP port2 (along with IP1 <-> IP2) is PATting
  • SSL/TLS ON/OFF <-> SSL/TLS OFF/ON is a security gateway
  • Application1 <-> Application2 is an application gateway, for example, gmail

Once you look at the “data plane” as such symbol swapping, it becomes a very simple game.
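
To make the symbol-swapping view concrete, here is a toy Python sketch of a source-NAT “data plane” as a dictionary look-up; the table entries are invented for illustration, and a real device would also rewrite checksums, age out entries and so on.

```python
# Seen as symbol swapping, a source-NAT "data plane" is one dictionary
# look-up per packet. The entries below are invented.
nat_table = {
    # (inside src IP, src port)   ->  (outside src IP, src port)
    ("10.0.0.5", 40001): ("203.0.113.1", 50001),
    ("10.0.0.9", 40002): ("203.0.113.1", 50002),
}

def forward(pkt):
    """Swap the source symbols, leave everything else untouched."""
    new_ip, new_port = nat_table[(pkt["src_ip"], pkt["src_port"])]
    return {**pkt, "src_ip": new_ip, "src_port": new_port}

pkt = {"src_ip": "10.0.0.5", "src_port": 40001,
       "dst_ip": "198.51.100.7", "dst_port": 443}
print(forward(pkt))
```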

Especially if you start putting tuples together like:

Ingress {Source, Destination} * {Physical segment, MAC, IP, Port, SSL/TLS and encryption, Application} <->

Egress {Source, Destination} * {Physical segment, MAC, IP, Port, SSL/TLS and encryption, Application}

It gives a nice product matrix. Once I learn how to put tables on WordPress, I will upload some and ask for your help to fill in the rest of the details.
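
Until then, here is a rough sketch of how that product matrix could be enumerated with itertools.product; the dimensions are the ones listed above, and deciding which cells actually make sense is the interesting (manual) part.

```python
# Enumerate the ingress/egress product matrix with itertools.product.
from itertools import product

roles  = ["Source", "Destination"]
layers = ["Physical segment", "MAC", "IP", "Port",
          "SSL/TLS and encryption", "Application"]

ingress = [f"Ingress {role} {layer}" for role, layer in product(roles, layers)]
egress  = [f"Egress {role} {layer}"  for role, layer in product(roles, layers)]

matrix = list(product(ingress, egress))
print(len(matrix))   # 144 cells to reason about
print(matrix[0])     # ('Ingress Source Physical segment', 'Egress Source Physical segment')
```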

Understand that symbol swapping is the simplest thing a networking product does. Building such symbol-swapping tables is one of the hardest jobs, and that is where the concept of the “control plane” kicks in. I will leave that discussion to experts.

Interestingly, the above is not the complete picture of networking products. It has only two interfaces – Ingress and Egress. There are some products (called caches) which (may) have three interfaces. Some day I would like to write such a table for caches also.

And then, layering is not the only concept in networking. There is also encapsulation. It leads to a whole other class of products called tunnels, like MPLS, IPSec and SSL-VPN. Making a table of such technologies will also be fun.

Do you want to embark on this journey?

How a simple L4 proxy is useful in testing – boredom testing

We hear complaints like: “Why can’t QA catch this bug in the lab?”

In my line of business, a lot of such bugs arise because of loaded servers. Delayed responses from loaded servers cause the DUTs to queue up data, run short of memory, and the software starts misbehaving. In contrast with “stress” testing, this could be thought of as “boredom” testing.

QA labs are generally devised for maximum “test throughput”, that is, the maximum number of test cases executed in a given time. A typical QA manager (including me) would like to have the broadest sweep in the shortest amount of time. As you can see, slow servers are the antithesis of such thinking!

Also, QA typically has limited network and limited traffic-generation capabilities. Come on, if your QA is your biggest customer, you should be in some other business!

Naturally, QA is likely to miss bugs that arise out of delayed DHCP, DNS or authentication responses.

Then the question is, can a lab be specifically designed to test sluggishness? Yes.

The key lies in proxying.

Say you have a DUT that needs to be tested under delayed responses. Rather than forwarding its requests to a real server (which you have kept in the best shape, alas!), forward them to a Linux box. This Linux box receives those queries, waits for a configured amount of time and then forwards them to the actual server. In this way, most queries can be delayed in a controlled fashion, except where the IP address plays a key role.
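
Here is a minimal sketch of such a delay box for a UDP service like DNS, assuming a fixed per-query delay; the addresses, port and delay value are placeholders for whatever your lab uses.

```python
# Minimal UDP delay proxy: receive a query, wait, forward it to the
# real server, and relay the answer back to the client. Addresses,
# port and delay are placeholders.
import socket
import time

LISTEN      = ("0.0.0.0", 5353)     # point the DUT's DNS setting here
REAL_SERVER = ("192.0.2.53", 53)    # the actual (healthy) server
DELAY_SEC   = 2.0                   # artificial "boredom"

front = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
front.bind(LISTEN)

while True:
    query, client = front.recvfrom(4096)
    time.sleep(DELAY_SEC)           # pretend the server is loaded

    back = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    back.settimeout(5)
    try:
        back.sendto(query, REAL_SERVER)
        answer, _ = back.recvfrom(4096)
        front.sendto(answer, client)   # relay the delayed answer
    except socket.timeout:
        pass                           # looks like a lost response to the DUT
    finally:
        back.close()
```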

Has anyone tried this?

When is an n layered network stable? (pseudo-mathematical definition)

  • An n-layer network is a strictly ordered set of digraphs G_1, G_2, …, G_n such that
    • V_n ⊆ V_{n-1} ⊆ … ⊆ V_1, where V_i is the set of vertices of G_i
    • for m > 1, a path P_m(i,j) on vertices i, j in V_m exists only if a path P_{m-1}(i,j) exists on V_{m-1}
  • An n-layer network is stable if and only if
    • for all i, j in V_n, all paths P_n(i,j) are stable
  • A path P_m(i,j) is stable if and only if, for all q ≤ m, all paths P_q(i,j) are stable
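
For readers who prefer standard notation, here is roughly the same definition typeset in LaTeX (a restatement of the bullets above, nothing more):

```latex
% The n-layer stability definition above, restated in standard notation.
An $n$-layer network is a strictly ordered set of digraphs
$G_1, G_2, \dots, G_n$ whose vertex sets satisfy
$V_n \subseteq V_{n-1} \subseteq \dots \subseteq V_1$, and where, for
$m > 1$, a path $P_m(i,j)$ with $i, j \in V_m$ exists only if a path
$P_{m-1}(i,j)$ exists on $V_{m-1}$.
The network is stable if and only if every path $P_n(i,j)$ with
$i, j \in V_n$ is stable; and a path $P_m(i,j)$ is stable if and only
if, for all $q \le m$, every path $P_q(i,j)$ is stable.
```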