Jim McKeith has recently posted a webinar replay called “Get the most out of Android with 10.3 Rio”. It’s worth watching and can be found here:
Get the most out of Android with 10.3 Rio
In the webinar Jim points out that Android is now the most used operating system in the world (in terms of the number of devices it is installed on). He also points out that Samsung are much more involved in the control and steering of Android than many people realise (most folk just assume Android = Google).
Samsung have the “DeX” scheme (short for “Desktop Experience”), which allows users to quickly and effectively turn their smartphone into a desktop PC. Clearly another attempt at encouraging the world to drop Windows.
What does all this mean to software developers?
The short answer is that any new project likely to be “ongoing” (and professionally developed software that isn’t aiming to be ongoing is a contradiction in terms) should be designed so that migration from one platform to another is a genuine consideration.
One could write books on this subject (and people have done so). Of course what is good for one person may not be seen as such by someone else. This is particularly so when designing UIs to be used by people with little or no familiarity with the digital world.
One of the problems is that a good UI is also a moving target.
I used to advocate Windows applications as being easy to use if you just followed the simple rules:
1) Tell the computer what “thing” you want to do something to, by selecting it (often a single click with the mouse, or a click and drag).
2) Right click and pick what you want to do from the pop-up menu.
This scheme used to work in all good Windows applications. Sadly this uniform approach has gone, and users (particularly those in the “addicted to mobiles” category) seem to be happy with the chaotic “every application has its own way” approach.
Thinking through and planning the basic approach to your UI design is essential. I find that David Millington’s three postings on this subject are a good, succinct set of documents to read and consider before starting a new project.
David Millington’s Good UI Design – part 1
David Millington’s Good UI Design – part 2
David Millington’s Good UI Design – part 3
One of the recent Embarcadero “CodeRage 2018” webinars was about some of the features in C++ 17.
This included some talk about the “auto” keyword and associated encouragement to use it. I’m less convinced about using this in a carefree manner. When you declare a variable or (in C++17) a return value, who is the sensible party to choose what type it is: the compiler or you?
It’s a great typing saver when you are declaring an iterator for one of the standard library containers (e.g. a vector). So use it for this.
But I’m not so sure about using it in cases where a careless modification of the code might unintentionally change the type of a variable or return value. I feel (because I am used to it) that the strong, rigorous typing enforced by earlier C++ standards (before the introduction of the “auto” keyword) is an advantage of the language, not a disadvantage.
So yes, use “auto” but only WITH CARE.
Many functions in third-party libraries, APIs or your own code return some kind of error indication as to the success or otherwise of the function call.
Always retrieve the error status when a library (or other) function returns one (even if you are going to ignore it!).
Why do we make this statement? For all the following reasons.
a) It documents in your source code the fact that the function returns an error (you may decide to check this later).
b) It makes it easier to check the error status when you are debugging your code as/when it doesn’t work.
c) It makes it easier to add error checking at a later date.
d) It encourages you to think “should I really be ignoring the error code returned by this function?” at the time you write your code. In most cases the answer to this question is probably “no”. Checking errors almost always adds little to code complexity and often saves time in the long run.
When developing C++ code it is very common to declare a pointer to a variable of a specific type (either as a local variable, a class member or (hopefully more rarely) a global variable).
Consider the following two exactly equivalent lines:
TMyType *Variable; // no space between the * and the variable – *** preferred ***
TMyType * Variable; // space between the * and the variable
Similarly, when using the value pointed to (dereferencing), consider the following two exactly equivalent lines:
TMyType X = *Variable; // no space between the * and the variable – *** preferred ***
TMyType X = * Variable; // space between the * and the variable
In each case we prefer the first of the two lines. Why? Because the second form looks too much like the multiplication operator (z = x * y).
It is very important to be consistent throughout all your code. Knowing that we always use the first form allows us to use a search on “*Variable” and find ALL the instances where the pointer is dereferenced.
If I have a person sawing a log for me and it takes them three quarters of an hour to saw through three quarters of a log I can reasonably assume that it will take an hour to finish the job. Given this information I can also assume that if there are ten logs that require sawing and one is done each day (so the person sawing is fresh each time) then each day I need to allocate one hour to the sawing task.
How can we software engineers explain to our managers that software development is not like sawing a log?
Firstly it’s exceedingly hard to know how far through the software you are at a given time.
Secondly you can’t usually predict when the next time consuming challenge is going to hit you. Some parts of a design may go smoothly but then the development may uncover something that had not previously been considered and the sensible engineering solution may well be to re-write (or at least refactor) some of the earlier design that may have been considered “finished”.
With larger projects this unpredictability does tend to even out, so that you can (with experience) make estimates of the time needed to get a software system to a point where it is usable.
But I’ve just been asked how long it will take me to fix an issue with a particular type of output on a particular type of device. Once I find out how to make the change, it will be a ten-minute fix. But how can I estimate how long it will take to find (or even whether it is actually possible)? How much of the device manual will I have to read and understand in order to find the correct method to achieve the result? And how much of the code will need to be redesigned to accommodate the required change(s)? And if the documentation is poor (as it usually is), how much experimenting / testing will be required to get it working?
So what do I do? Quote a short delivery time and then look a fool (and perhaps upset the customer’s planned schedule) if it takes three times as long as I said? Or quote a long delivery time and get the reaction “How long? Just to do that simple job!”? Best is to try to get managers to read and consider this blog!
When developing code it is a frequent occurrence that you write something as a quick “get it to compile” fix whilst your mind is focused on the key part on which you are working.
Let me offer an example. Whilst developing a TCP/IP interface to a set of remote digital and analogue i/o, you come across the need for a function to extract a specific set of characters from a string. You might quickly write a function with the prototype:
String ExtractSpecificCharactersFromString(String IncomingString);
In order to keep going on the main code you are working on, you may quickly write a dummy function body:
String ExtractSpecificCharactersFromString(String IncomingString)
{
    return String("D0=0xf5c9"); // dummy sample data
}
This allows your code to compile and it returns a sample data String that allows you to start testing your code, all of which is good!
But there is a real danger that “dummy” code like this can get left in genuine code for too long. You end up creating your own bug: “I am sure that digital output byte is set to 0xf500, so why does it keep reading as 0xf5c9?”
Having wasted time as a younger programmer chasing these self-inflicted bugs, I have adopted a procedure where I reserve a comment containing three consecutive ‘!’ characters specifically for tagging code that is still to be written. For the above example, create the function body as:
String ExtractSpecificCharactersFromString(String IncomingString)
{
    return String("D0=0xf5c9"); // !!! still to be written
}
Then at regular “low concentration” moments you can go back to the code and search for the character string “!!!” in “all project files” (using the Embarcadero C++ search terminology here) and quickly find all examples of dummy code where you know further work is required.
I chose “!!!” for this task as it is a string which is exceedingly unlikely to appear in genuine C++ code. Choose something else if you like, but whatever you choose, stick with it!