2-D programming – about function prototypes, definitions and calls

Talking to people about my ideas helps me fine-tune them.

Recently I talked to a group of students about implementing the ideas from my earlier post on 2-D programming. That got my brain ticking, and I started thinking about function prototypes, definitions and function calls in 2-D programming.

Function prototyping and definition:

I had earlier written about improving function prototypes by adding a contract to each argument. That is, rather than writing the prototype of fwrite just as

size_t fwrite ( const void * ptr, size_t size, size_t count, FILE * stream );

we should also mention the hidden assumptions (the contract), for the better understanding of the programmer who is going to use fwrite, as:

size_t fwrite ( const void * ptr != NULL, size_t size > 0, size_t count >= 0, FILE * stream != NULL);

That is, human readability also matters in code maintenance, and an explicit statement of the contract helps avoid a lot of bugs.

However, once default values are also mentioned in the prototype, the 1-D arrangement becomes very difficult to read, because each argument now has four attributes associated with it – type, argument name, default value and contract. The same exercise has to be repeated when the function is defined:

size_t fwrite ( const void * ptr != NULL, size_t size > 0, size_t count >= 0, FILE * stream != NULL) {
...
}
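
Until a language supports such contracts natively, a crude approximation in today's C is to assert them at the top of the definition. Here is a minimal sketch (my_fwrite is a hypothetical wrapper, not the real fwrite):

#include <assert.h>
#include <stdio.h>

/* Sketch: enforcing the hidden contract of an fwrite-like function
   with runtime assertions. */
size_t my_fwrite(const void *ptr, size_t size, size_t count, FILE *stream)
{
    assert(ptr != NULL);     /* const void * ptr != NULL */
    assert(size > 0);        /* size_t size > 0 */
    /* count >= 0 holds automatically: size_t is unsigned */
    assert(stream != NULL);  /* FILE * stream != NULL */

    return fwrite(ptr, size, count, stream);
}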

A 2-D definition may make it very succinct and intuitive. Here, assuming fwrite defaults to stdout as the stream to write to, and to a count of 1:

/*type*/      size_t    const void *   size_t   size_t   FILE *
/*name*/      fwrite    ptr            size     count    stream
/*default*/                                     1        stdout
/*contract*/  >= 0      != NULL        > 0      >= 0     != NULL

Now that we have the freedom to redesign the prototype (or the first line of the function) with one more dimension, the following arrangement suits the reader's psychology best:

/*name*/      fwrite    ptr            size     count    stream
/*type*/      size_t    const void *   size_t   size_t   FILE *
/*contract*/  >= 0      != NULL        > 0      >= 0     != NULL
/*default*/                                     1        stdout

At first glance (row #1), this definition shows that fwrite needs a data pointer, an element size, an element count and a destination stream to write to. The reader doesn't get lost in other details unless they are needed. The human mind thinks about the arguments of a function first, and only later about their types, default values or restrictions. Often, once a function is fully understood, some of these attributes are understood better. By presenting the list of arguments in a "one row view", the writer very quickly communicates to the reader what the function is all about.

The counterpart of the function call – or statement structure – can also become interesting in 2-D. Suppose you are a prehistoric mathematician trying to find the area of an ellipse. You would deduce that AreaOfAnEllipse has to do something with its major axis and minor axis and nothing more. We programmers all have an intuitive feel for which arguments will be needed to evaluate a function – and we figure out the necessary functions and operations to fit them together later. 2-D programming structures can be designed to do the same.

Take, for example, the area of an ellipse with semi-major axis A and semi-minor axis B. We will not require function calls here; operators alone will suffice:

/*variables*/                 areaOfAnEllipse =   pi    A    B
/*operations and functions*/                         *    *

[As you can see, it is kind of wobbly. Someone with a theory background may help me fine-tune the presentation.]

For function calls it is easier. For example, if you are trying to determine the value of gobbledegook as cos A * sin B – sin A * cos B, 2-D programming may look like:

/*variables*/                 gobbledegook =   A     B     A     B
/*operations and functions*/                  cos   sin   sin   cos
/*operations and functions*/                      *           *
/*operations and functions*/                            –
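
For comparison, the familiar 1-D C rendering of the same expression, wrapped as a function:

#include <math.h>

/* 1-D equivalent of the 2-D layout above. */
double gobbledegook(double A, double B)
{
    return cos(A) * sin(B) - sin(A) * cos(B);  /* equals sin(B - A) */
}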


It is tempting to continue thinking about what may happen to control structures and expressions (apart from the 2-D nesting we discussed earlier). However, it is late in the day and I am on allergy medication. Waiting for your comments, folks!


Smarter keyboard

Now you feel I am stretching it a bit too much.

Come on! What is there for a keyboard to get smarter about? A keyboard could be sleeker, fancier, pricier – or even just projected onto a screen or table top. But smarter?

Yes, why not? A keyboard can be made smarter. The bad news is that technology is making it redundant.

A keyboard with a built-in dictionary can be built. Why can't this very statement be typed with fewer (and funkier) keystrokes? Like…

User types: ‘W’ – keyboard pushes ‘h’ – user mentally accepts so moves forward to

User types ‘y’ – keyboard pushes a blank space – user mentally accepts so moves forward to

User types ‘c’ – keyboard pushes ‘h’ – user presses an “override” key + ‘a’ – keyboard pushes ‘n’ – user mentally accepts so moves forward

User types an apostrophe – keyboard pushes ‘t’ – user mentally accepts so moves forward to

User types a blank space

User types ‘t’ – keyboard pushes ‘h’ – user mentally accepts so moves forward to …

You can imagine the keyboard as a thought reader with an override key.
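
To make it concrete, here is a minimal sketch of such a prediction loop. The tiny dictionary and the "first matching word wins" policy are my own assumptions, purely for illustration:

#include <stdio.h>
#include <string.h>

/* Toy built-in dictionary of a hypothetical smart keyboard. */
static const char *dict[] = { "what", "why", "chair", "can't", "the", "this" };
#define DICT_LEN (sizeof dict / sizeof dict[0])

/* Return the predicted next character for the current word prefix,
   or 0 if no dictionary word matches. */
static char predict(const char *prefix)
{
    size_t n = strlen(prefix);
    for (size_t i = 0; i < DICT_LEN; i++)
        if (strncmp(dict[i], prefix, n) == 0 && strlen(dict[i]) > n)
            return dict[i][n];
    return 0;
}

int main(void)
{
    /* User types 'c'; the keyboard pushes 'h' (from "chair"). */
    printf("typed 'c', keyboard pushes '%c'\n", predict("c"));
    /* User hits override and types 'a'; the prefix becomes "ca"
       and the keyboard pushes 'n' (from "can't"). */
    printf("override + 'a', keyboard pushes '%c'\n", predict("ca"));
    return 0;
}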

Interestingly, this problem is taken care of in application software these days.

Testing a feature in the presence of many possible combinations of environment

Take, for example, testing of a router, where we list features in columns by test area:

L1          L2      L3
10BaseT     STP     RIPv1
100BaseT    RSTP    RIPv2
1G          PVST    OSPF
10G         MVST    ISIS
                    BGP

At the beginning of a product like this, most of the interest is in making sure each of the protocols works. Standalone testing of the cells is more than enough.

Once most of the low-hanging (and highly damaging) bugs that can be found by testing cells of the matrix alone are weeded out of the software, QA progresses by developing column-wise testing "experts" – an L1 expert, an L2 expert, an L3 expert and so on. This produces more experienced testers and technically impressive bugs (like memory leaks in route redistribution). QA also arranges its test plans by test areas and features.

At this stage, only a fraction of the bugs a customer would see has been eliminated, and the complaint that "QA isn't testing the way the customer uses the product" continues.

That is because the customer doesn't use the product by columns. A typical customer environment is a vector selecting (at most) one member from each column. A product is likely to break at any such vector.

As you can see, exhaustive testing of the matrix above would require testing 4*4*5 = 80 environments. In realistic products the actual number may run into the tens of thousands.

***

Testing a feature in the presence of many possible combinations of environment is a well-known QA problem.

Various approaches have been suggested: combinatorial engines, pairwise testing and so on, to help QA optimize this multi-dimensional problem.

The approach I discuss here is yet another algorithm, to be followed with semi-automation. Just let me know your thoughts about it.

***

Let us define our terms once again:

  • Feature: A feature in the product [I know that isn’t a very great definition.]
  • Area: A set of related features
  • Product: A set of Areas
  • Environment: A vector (or a sub-vector) of Product, with each component coming from an Area
  • Environment for a Feature: A maximal-length sub-vector of the Product, excluding the Area of the Feature

Please understand once again that QA expertise and test plans are structured by Area (a column in the matrix). The ideal would be to run every test of every Feature against every "Environment for the Feature".

This approach is simply uneconomical. So, what is the next best approach?

***

Before coming to that, we need to understand how test cases are typically structured. Within a feature, test cases typically share some overlapping steps – such as configuration or authentication – or overlapping phenomena, such as the exchange of Hello packets or the establishment of neighborships.

That means there is significant redundancy among tests from a white-box point of view.

This redundancy can be exploited to gain assurance that the product will stand up reasonably well in diverse environments. As we discussed earlier, such an environment is a vector of the matrix – which in turn is a sub-vector plus a test condition.

***

So much understanding brings us to a workable solution for testing in a more "customer-like" way without incurring too much cost.

The key insight from the above discussion is that the Environment for a Feature can be changed independently of the test case (or of the Feature or the Area).

That means that if the tester can signal an "environment controller" at the end of a test case, the controller can change the DUT/SUT over to another Environment for the Feature. Once that change is done, the tester simply continues with the next test case – till all test cases end.

Because the number of test cases is unlikely to be a factor (or multiple) of the number of sub-vectors, within a few test cycles a reasonable number of test steps will have been exercised across reasonably diverse environmental sub-vectors.
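
To make this concrete, here is a minimal sketch of such an environment controller, built on the router matrix above (the names and the simple round-robin policy are my own assumptions):

#include <stdio.h>

/* Environment sub-vectors for an L3 feature under test: one member
   each from the L1 and L2 columns of the matrix. */
static const char *l1[] = { "10BaseT", "100BaseT", "1G", "10G" };
static const char *l2[] = { "STP", "RSTP", "PVST", "MVST" };
#define NUM_L1 4
#define NUM_L2 4
#define NUM_ENVS (NUM_L1 * NUM_L2)

/* Hypothetical controller hook: a real one would reconfigure the DUT/SUT. */
static void apply_environment(int env)
{
    printf("reconfigure DUT: L1=%s, L2=%s\n",
           l1[env % NUM_L1], l2[env / NUM_L1]);
}

int main(void)
{
    int env = 0;
    /* Say the feature has 7 test cases: since 7 is not a factor of the
       16 sub-vectors, repeated cycles drift across different environments. */
    for (int cycle = 0; cycle < 2; cycle++)
        for (int tc = 1; tc <= 7; tc++) {
            apply_environment(env);
            printf("  run test case %d\n", tc);
            env = (env + 1) % NUM_ENVS;  /* signalled at end of test case */
        }
    return 0;
}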

As a product testing strategy, QA can assure its internal customers that, over a few releases, most of the interesting bugs that can be found economically will be found.

***

What are the downsides of this approach? For sure, the Environment for a Feature must not contain any configuration of the Feature under test – or even of its Area. That means the tester will always have to configure the Feature before going to the next test. If you are working on a Feature that takes VERY long to converge, you are out of luck. [VPLS in networking comes to mind as an example.]

Since most products don't involve long signaling delays, let me focus on optimizing this testing.

How can we find the maximum number of environment-related bugs in a given feature (or the entire column) in the minimum number of steps?

The answer is obvious to readers of this blog – by taking the sub-vectors in an anti-sorted way!
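
"Anti-sorted" refers to an idea from an earlier post. If I read it as ordering the sub-vectors so that consecutive environments differ in as many components as possible, a greedy sketch could look like this (entirely my own hypothetical reading):

#include <stdio.h>
#include <stdbool.h>

/* Each environment is an (L1, L2) pair encoded as one integer. */
#define NUM_L1 4
#define NUM_L2 4
#define NUM_ENVS (NUM_L1 * NUM_L2)

/* Number of components in which two environments differ. */
static int distance(int a, int b)
{
    return (a % NUM_L1 != b % NUM_L1) + (a / NUM_L1 != b / NUM_L1);
}

int main(void)
{
    bool used[NUM_ENVS] = { false };
    int prev = 0;
    used[0] = true;
    printf("env: L1=0, L2=0\n");

    /* Greedily pick the unused environment farthest from the last one. */
    for (int i = 1; i < NUM_ENVS; i++) {
        int best = -1;
        for (int e = 0; e < NUM_ENVS; e++)
            if (!used[e] && (best < 0 || distance(prev, e) > distance(prev, best)))
                best = e;
        used[best] = true;
        prev = best;
        printf("env: L1=%d, L2=%d\n", best % NUM_L1, best / NUM_L1);
    }
    return 0;
}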

Cut-and-Paste!

Think of cutting and pasting. Can next-generation cut-and-paste behave in a more user-friendly way? Can't think of much, right?

I have seen the most complex paste operations in Excel sheets. However, in my humble opinion, the most complex doesn't always mean the most convenient.

The simple cut and paste of text hasn’t changed from its early days. It is very rough, ready and useful.

To make it more intelligent, we need to look at the behavior carefully. Let us take an example.

****

Original Text: “It is very rough, ready and useful.”

Intended Text: “It is very rough, useful and ready.”

Steps to be taken:

  1. Select “useful” from “It is very rough, ready and useful.”
  2. Ctrl+X -> “It is very rough, ready and .”
  3. Go to the space near comma
  4. Ctrl+V -> “It is very rough, usefulready and .”
  5. Enter a space -> “It is very rough, useful ready and .”
  6. Select "ready " (with its trailing space) from "It is very rough, useful ready and ."
  7. Ctrl+X -> "It is very rough, useful and ."
  8. Go to the space before the period
  9. Ctrl+V -> “It is very rough, useful and ready.”

phew!

If we had a Ctrl+W that understood conjunctions in the clipboard ("and" or "or" at the beginning, or a comma at the end) and had the intelligence to push them to the end, here is how it could work:

  1. Select “and useful” from “It is very rough, ready and useful.”
  2. Ctrl+X -> “It is very rough, ready .”
  3. Go to the space near comma
  4. Ctrl+W -> pushes the leading "and " to the end of the clipboard and pastes -> "It is very rough, useful and ready ."
  5. Remove the space before the period

presto!
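
Here is a minimal sketch of that Ctrl+W transform (the function name and the fixed list of conjunctions are my own assumptions):

#include <stdio.h>
#include <string.h>

/* If the clipboard begins with a conjunction ("and " or "or "),
   push it to the end before pasting. */
static void ctrl_w_transform(char *clip, size_t cap)
{
    static const char *conj[] = { "and ", "or " };

    for (size_t i = 0; i < sizeof conj / sizeof conj[0]; i++) {
        size_t n = strlen(conj[i]);
        if (strncmp(clip, conj[i], n) == 0) {
            char out[256];
            /* "and useful" -> "useful and " (trailing space kept,
               matching step 4 above) */
            snprintf(out, sizeof out, "%s %s", clip + n, conj[i]);
            strncpy(clip, out, cap - 1);
            clip[cap - 1] = '\0';
            return;
        }
    }
}

int main(void)
{
    char clip[64] = "and useful";
    ctrl_w_transform(clip, sizeof clip);
    printf("pasting: \"%s\"\n", clip);  /* pasting: "useful and " */
    return 0;
}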

I know I have not yet perfected the algorithm. Can you fine-tune it?

Can the camera revolution check corruption in India?

Until recently, cameras were toys of the rich. As with many other industries, like wrist watches, cell phones changed the camera industry too.

Recently India has been awash with scams. People have been on the streets protesting against corruption.

I realized that everyone has a phone – and a camera. Why can’t such a formidable presence of evidence-catching machines be turned against the corrupt?

Eventually, a corrupt person is going to spend all that cash somewhere. Why can't the masses turn into paparazzi and catch every possible movement of someone on the blacklist?

Later, the pictures can be circulated by any means – from MMS to Facebook or Tumblr. Such images can be used to estimate the person's (or his family's) spending habits, and the person can be questioned about consumption disproportionate to income.

Key Idea

"The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom." – Isaac Asimov

And society includes engineers, designers, marketing executives and consumers.

Coincidence?

It is odd. I have been talking on this blog about software – and sometimes about automobiles. Here is a link from Nokia – and its plans to get big into automobiles. Coincidence? No.

Here is why.

Scene 1:

When I was in my teens, we used to get the pseudo-historical TV serial "The Sword of Tipu Sultan". The story was about a heroic Indian king and how he lost to the British East India Company. Sugar-coating aside, the director Sanjay Khan had done a decent job.

Once I was watching an episode depicting war preparations on both sides. My father interrupted me and asked, "Can you deduce from these scenes who should win the war?" I said "no". My father said it was obvious that in the long run the British would win. I asked how. He then pointed out that the British general and commanders were spreading a map on the table and planning the war – movements, provisions, backups etc. – while Tipu was addressing his crowd as a hero. [My father was teaching me a lesson in how cold blood wins over hot blood. That isn't the point of discussion here.]

Lesson: Cartography and surveying are amazing tools

Scene 2:

Science and technology didn't progress much till Galileo noticed that a pendulum swings with a fixed period. This was the birth of modern chronometry – the clock business. Later, that time measurement developed into Newton's laws, where the "rate of change" became the cornerstone of the universe – and the world exploded with technology and science.

Lesson: Chronometry is an amazing tool

Scene 3:

Albert Einstein proved that there isn't much difference between time and space.

Lesson: Surveying and chronometry are cousins

As human beings bound by space and time, measuring both is extremely important to us. It is no wonder that conquering both is a goal of every major business.

If you look at the last two decades' technological advances – GPS (surveying and chronometry), GIS (cartography), mobile computing, self-driving cars (automation of change in location) – everything promises a great future for any business that combines space and time.

While business success depends on a lot of factors and I can't say Nokia will get it 100% right, Nokia's fundamental idea is unbeatable.

Long ago, a rishi of the Upanishads could see this importance of time and distance, probably by observing how riding a horse or a bullock cart helped reduce the pains of humankind:

यदा चर्मवदावकाशम् वेष्टयिष्यन्ति मानवाः

तदा देवमविज्ञाय दुःखस्यान्तो भविष्यति.

When humans wrap up space like a hide (skin)

Then the pain associated with not knowing God will end

Moral: If you want to make money, one way is to make anything more controllable in space and time.