Dec 24, 2013

MySQL Post Install Setup

After MySQL is installed, either as part of the Ubuntu Server installation sequence or separately through an appropriate apt-get command, it needs to be properly initialized before a client can use it.

One key concept of any standalone database engine is that it maintains its own user namespace and associated access rights. MySQL is no different, and this therefore needs to be appropriately set up before any client can create, update, or delete entries in any table.

Essentially this involves the following steps, all to be undertaken as the administrator (the root account, by default):
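In outline, the typical sequence looks something like this (a sketch only -- 'myapp', 'appuser' and 'secret' are placeholder names, not anything prescribed by MySQL):

```shell
# Set the MySQL root password and remove the anonymous/test accounts
sudo mysql_secure_installation

# Create an application database and a dedicated user for the client
# ('myapp', 'appuser' and 'secret' are placeholders)
mysql -u root -p <<'SQL'
CREATE DATABASE myapp CHARACTER SET utf8;
CREATE USER 'appuser'@'localhost' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON myapp.* TO 'appuser'@'localhost';
FLUSH PRIVILEGES;
SQL
```

After this, a client can connect as appuser and work with tables in myapp without touching the rest of the server.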

Linux Command Output Redirection Rules

Here's another thing that I learned today -- command output redirection syntax.

> redirects the program's output to the file specified after it. If > is preceded by an ampersand, the shell redirects all output (error and normal) to the file to the right of >. If you don't specify the ampersand, only normal output is redirected.

In other words:
command &>  file # redirect error and standard output to file
command >   file # redirect standard output to file
command 2>  file # redirect error output to file
This is how things work on Ubuntu and bash. Other shells could very well be different.
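The three forms can be seen in action with a command that succeeds and one that fails (the || true keeps the failing ls from aborting a script run under set -e):

```shell
# 'hello' goes to standard output; the missing path makes ls write to error output
echo hello > out.txt                   # standard output only
ls /no/such/path 2> err.txt || true    # error output only
ls /no/such/path &> both.txt || true   # both streams
```

Afterwards out.txt contains "hello", while err.txt and both.txt each contain the ls error message.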

Dec 23, 2013

VirtualBox - Sharing a host folder on Ubuntu guest

I have been using VirtualBox to run Ubuntu Server on my Windows PC over the last couple of weeks. The more time I spend with it, the more I find the need to get myself acclimatized to the good old command-line way of interacting with the OS. Though I'm a pretty fast learner and have spent quite a bit of time on Linux and its variants in the past, the sheer number of commands one has to learn to manage the large array of interfaces in a present-day OS is astounding.

For example, I wanted to share a folder from my Windows host with the Ubuntu guest. How do I go about doing it? How do I mount a USB stick on the guest OS?

Luckily, answers to most of these questions are only a few clicks away with the help of Google. However, since I expect to carry out some of these tasks quite often in the future, I'm going to try and record things that I discover and re-learn in various blog posts, so that they act as a reference for myself. This post talks about how to access a host folder, shared through VirtualBox, from the Ubuntu guest's command line.
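Assuming the VirtualBox Guest Additions are installed in the guest, the mount boils down to something like this ('winshare' stands in for whatever share name was configured in the VM's Shared Folders settings):

```shell
# create a mount point and mount the VirtualBox shared folder
sudo mkdir -p /mnt/winshare
sudo mount -t vboxsf winshare /mnt/winshare

# optional: an /etc/fstab entry makes the mount survive reboots
# winshare  /mnt/winshare  vboxsf  defaults  0  0
```

The vboxsf filesystem type only exists once the Guest Additions kernel module is loaded, so the mount fails with an unhelpful error if that step was skipped.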

Setting up Ubuntu for Python Development

This is more for my own reference, as I often forget the various steps that I followed to get an environment set up. This post lists the steps that I followed to set up an Ubuntu Server for a Python/Django/MySQL environment. The steps are drawn from various resources on the web -- official documentation from Ubuntu, some blog posts and a few pointers on Stack Exchange/Stack Overflow. All credit due to the kind souls who shared their knowledge.
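In outline, the sequence looks roughly like this (a sketch only; the package names are the ones current in Ubuntu's repositories at the time, and 'myproject' is a placeholder):

```shell
# build prerequisites for Python packages with C extensions,
# plus the MySQL server and its client development headers
sudo apt-get update
sudo apt-get install python-pip python-dev mysql-server libmysqlclient-dev

# keep the project's dependencies isolated in a virtualenv
sudo pip install virtualenv
virtualenv ~/envs/myproject          # 'myproject' is a placeholder
. ~/envs/myproject/bin/activate

# Django and the MySQL driver go into the virtualenv
pip install django mysql-python
```

With the virtualenv activated, django-admin and the MySQLdb module are available without polluting the system Python.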

Oct 2, 2013

Play YouTube videos on Raspbian using omxplayer

This is a follow-up post to the original one made by Jojen on setting up Raspbian to use omxplayer to play videos embedded in web pages. This yields much smoother playback on the Raspberry Pi, as omxplayer is built to use the Pi's GPU rather than software emulation.

BTW, some blog posts might suggest that you can install gnash and its browser plugin, but since it's not hardware accelerated, I found its performance to be simply unacceptable. So if you stumble upon those posts, you might as well save yourself some time and not go down that path.

Back to Jojen's post. Everything in the post works as it says it would, including playing the sample video linked at the bottom of his post. The trouble starts when you try to play YouTube videos. I found this was due to two reasons:

  1. Incorrect permissions on the script
  2. The Midori user-script JavaScript failing to detect video links, since YouTube has changed the relevant div tag id

The problems can be fixed thus:

  1. Change the permissions on omxplayer-youtube.sh (located in /opt/media-berry/server) through the command: chmod 755 omxplayer-youtube.sh
  2. Update replace_video_user.js (located in ~/.local/share/midori/scripts) such that "#watch7-video" is replaced with "#player-api" on line 61 (at the time of writing this post).

If it all worked, you should see the YouTube video thumbnail replaced with a rectangle with a play button in the middle.

There's a caveat. After playing the main video, if you try to visit one of the related clips through the links in the sidebar on the right, the video that you just played plays again. This probably has to do with JavaScript variable lifetimes in the user script. I haven't had the time to look into it, but when and if I do, I'll post the updated script.

Sep 21, 2013

Debugging Django Template Tags

Here's a cool bit of code that I found on the web (original post here). I'm recording it here for my own easy reference without having to google it all the time. It describes a simple technique to debug Django templates using the Python debugger, pdb.

from django import template

register = template.Library()

@register.filter
def pdb(element):
    import pdb; pdb.set_trace()  # drops into the debugger during rendering
    return element

Now, inside a template (with the tag library loaded) you can do
{{ template_var|pdb }}
and enter a pdb session (given you're running the local development server) where you can inspect element to your heart's content.

It's a very nice way to see what's happened to your object when it arrives at the template.

Sep 4, 2013

MQC Client

Being a legacy component, the Mercury Quality Center web plugin does not work very well with new versions of Internet Explorer. MQC Client is a little program that addresses this by hosting the MQC web plugin inside it, making it work with the newer IE versions that come as standard with Windows 7.

I have not tested it on Windows 8, but it ought to work.

More about the program and how to use it is here.

Jul 30, 2013

Friendly(X) UI

One would think that in this day and age of highly interactive user experiences, simple message boxes can't go too wrong as far as UX is concerned. At least that's what I thought until I came across this dialog:

Perhaps you don't notice what I'm grumbling about -- the link provided to get the latest virus definition files (erased for confidentiality reasons) is neither a hyperlink nor selectable text! That's right, you can't even select that text and copy-paste it into the browser, leaving you with the only choice of typing the link manually, a letter at a time.

Well, at least they got the grammar right and managed to convey the message correctly. So it's not as bad as it could've been!

Setting up Shared Repository in Git

This is not really a post and more of a web clipping for my own future reference.

This page contains useful instructions on how to set up a shared repository with Git. I use it to back up my work onto a thumbdrive.

You might wonder why I am not using one of the free services such as GitHub or BitBucket. The answer is simple -- the code is closed source.

From my research, most of the online Git services only provide free hosting for open source code. Maybe there is a provider out there who does offer truly free hosting for closed source, but I don't have the time to pore through their licence agreements and ensure that free is really free without any catches.

Hence the decision to store things locally. In any case Git makes it so easy to set things up.
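The gist of the setup (a sketch -- a temporary directory stands in for the thumbdrive mount point so the commands can be run anywhere; 'myproject' and the committer identity are placeholders):

```shell
# the "thumbdrive" -- in practice this would be its mount point, e.g. /media/usb0
DRIVE=$(mktemp -d)

# a bare repository has no working tree, which makes it a safe push target
git init --bare "$DRIVE/myproject.git"

# in the working repository, register the thumbdrive as a remote and push
WORK=$(mktemp -d)
cd "$WORK"
git init
git config user.email me@example.com   # placeholder identity for the demo
git config user.name "Me"
echo demo > README
git add README
git commit -m "initial commit"
git remote add thumbdrive "$DRIVE/myproject.git"
git push thumbdrive HEAD
```

Thereafter backing up is just a matter of plugging in the drive and running git push thumbdrive.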

Jun 19, 2013

Typesafe Logging


One of the common sources of runtime errors in C++ comes from using the good old vararg functions -- printf() or its variants. The problem stems from type mismatches between the type specifiers in the format string and the actual arguments supplied to match them. Though most present-day software does not employ the printf function for user interaction, a variant of it is quite heavily used for outputting log messages -- sprintf().

Often log messages are written in response to an error condition, and formal testing does not always simulate all the possible error conditions, which means some code goes untested. Also, log messages typically have an associated logging level which is usually set to a medium value by default. Testers often do not set the logging level to the lowest possible level before embarking on the testing process, meaning that messages at the lowest level are also not exercised during testing. Another real-world situation is when the released code is maintained by support engineers who add one or more arguments to a log statement for their own debugging purposes, but fail to test it thoroughly.

This post discusses a simple technique of adapting the superior C++ streams infrastructure as an extended logging framework. The mechanism is built such that it can sit on top of an existing legacy logging framework so that introducing it would not require rampant system-wide code changes. 

As the reader might be aware, C++ puts a lot of emphasis on writing type-safe code, and C++ streams provide just this. (Check out Stroustrup's tips on how to avoid unsafe code here -- one of his tips is to avoid using vararg functions.)

Design Goals and Constraints

To reiterate, this is a solution which has to fit into an existing logging framework. The framework can be thought of as providing the following features, typical of many legacy logging infrastructures:
  • Multilevel logging with a global system-wide logging level
  • Each message has an attached logging level and is only output if the global logging level is equal to or lower than this level.
  • Provides printf() like function to concatenate multiple argument values into a log statement.
Given this, the design goals and various constraints can be summarized as below:
  • Goals
    • Provide a type-safe logging framework by leveraging on the C++ streams framework
    • Maintain backward compatibility by retaining the old logging infrastructure which means that the log output should be the same for both types of logging.
    • All log messages should get output to the destination as soon as the statement is executed, before the next line is executed.
    • Maintain logging levels as in the legacy framework and integrate them as an implicit feature.
  • Constraints
    • Can only rely on the STL, which is available on most platforms and compilers
    • Don't use C++11 language features, as certain compilers do not support them yet.


To anyone familiar with the STL and its streams facility, the excellent stringstream class can be applied effectively to compose and output the log message. It allows the values of multiple variables/objects to be captured as a string through a single statement. A typical logging code using this (one without any explicit framework) could look like:

std::stringstream ss;
ss << "Error creating connection to the given address, error code: "
    << ::GetLastError() << std::endl;
g_logger->log(LOG_LEVEL_ERROR, ss.str());

Where g_logger is a global instance of a class that provides the legacy logging framework through its vararg methods. While this would definitely work, it has a couple of shortcomings:
  • Every log statement requires three lines of code: one to declare a local variable, a second to compose the log message and a third to write it to the log destination.
  • There's a necessity to declare a local variable, which means that two log statements in the same block of code have to use two separate locals. Alternatively, each log statement has to be in its own block.

An alternative implementation could be to declare a globally accessible instance of a specialized STL stream (one which implements its own stream buffer so as to redirect output to the legacy module-wide logger instance), say g_ls, to which data can be written as below:

using namespace std;
g_ls << "Error creating connection to the given address, error code: "
    << ::GetLastError() << endl;
g_ls << flush;

This has the drawback that the programmer needs to apply std::flush after every logging statement so as to achieve the stated goal of immediate output to the logger destination before the execution of the next line of code. Not to mention the additional work of specializing the STL stream with its own stream buffer. And as in the previous example, we have to rely on the programmer being disciplined enough to flush after composing every log message.

Both of the approaches presented above have the major drawback that they require conscious and disciplined action from the user to make the logging work as per the goals set forth earlier. And this involves repetitious coding practice that is best avoided if robust and resilient software is the desired result.

What we need is a mechanism which upon entering a statement provides a blank slate for users to accumulate their messages in, and upon exit dumps the content to the log destination. Which means we need entry and exit triggers where we can plug in the necessary code to interface with the legacy logging framework. Naturally, the object constructor and destructor come to mind. Upon construction we get an object which acts as a blank slate where different messages can be stored, and the destructor can then dump the slate contents to the logging medium. This can be implemented as:

class AutoLogger {
    int loglevel_;
    std::stringstream ss_;
    AutoLogger();                   // hide the default constructor!
    AutoLogger(const AutoLogger&);  // non-copyable -- a good practice
public:
    explicit AutoLogger(int level) : loglevel_(level) {}
    ~AutoLogger() {
        g_logger->log(loglevel_, ss_.str());
    }
    std::stringstream& getStream() { return ss_; }
};

Thereafter we can employ it as below:

AutoLogger al1(LOG_LEVEL_ERROR);
al1.getStream() << "Error creating connection to the given address, error code: "
    << ::GetLastError() << std::endl;

The above code meets most of the goals set forth at the beginning, except the third one: that log messages should be written as soon as the statement that composed them is executed. In the above code, the buffered message would only be written when the variable goes out of scope, which is when its destructor gets called. That means code like this fails to meet the requirement restated above:

AutoLogger al1(LOG_LEVEL_ERROR);
al1.getStream() << "Error creating connection to the given address, error code: "
    << ::GetLastError() << std::endl;
// more code...
al1.getStream() << "Error retrieving the secondary server address"
    << ::GetLastError() << std::endl;
// neither message is written until al1 goes out of scope

So we need a mechanism to prevent users from instantiating AutoLogger locally, and this can be easily accomplished by hiding its constructor. The amended code would look like:

class AutoLogger {
    int loglevel_;
    std::stringstream ss_;
    AutoLogger();                   // hide the default constructor!
    AutoLogger(const AutoLogger&);  // non-copyable -- a good practice
    explicit AutoLogger(int level) : loglevel_(level) {}  // now hidden too
public:
    ~AutoLogger() {
        g_logger->log(loglevel_, ss_.str());
    }
    std::stringstream& getStream() { return ss_; }
};

This however presents another problem: if the constructor is hidden, how is the user going to be able to instantiate an object and use it? Well, we provide a friend function to do just this. Being a friend, the function can instantiate the class and return the instance. So the code becomes:

class AutoLogger {
    int loglevel_;
    std::stringstream ss_;
    AutoLogger();                   // hide the default constructor!
    AutoLogger(const AutoLogger&);  // non-copyable -- a good practice
    explicit AutoLogger(int level) : loglevel_(level) {}
    friend AutoLogger getAutoLogger(int level);
public:
    ~AutoLogger() {
        g_logger->log(loglevel_, ss_.str());
    }
    std::stringstream& getStream() { return ss_; }
};

AutoLogger getAutoLogger(int level) {
    return AutoLogger(level);
}

Now, to use the framework, the user will have to write code like this:

getAutoLogger(LOG_LEVEL_ERROR).getStream()
    << "Error creating connection to the given address, error code: "
    << ::GetLastError() << std::endl;

Since the AutoLogger returned by getAutoLogger() is a temporary, its destructor runs at the end of the full statement, writing the message out immediately. This truly meets all the requirements that we set forth at the beginning of our discussion: you get increased type safety (and consequently fewer runtime errors), log messages are written at the completion of every statement, and a logging level has to be attached to every log message.

Note that the key trick here is hiding the constructors of the AutoLogger class. This in no way restricts its usage; only its instantiation is constrained. In fact this is a common technique adopted by the class factory pattern to force users to use the factory to instantiate classes.

Additional Benefits

Besides the runtime safety benefits extolled earlier, STL streams based logging provides yet another capability that can significantly improve programming productivity and code quality. Since we're using STL streams as the interface for logging, any C++ class can easily be extended to support dumping its state to the log medium by overloading the output operator '<<'.

The significance of this becomes relevant when we observe how various C++ classes in a software project are typically developed. Usually they are written individually, often using a console program to perform unit testing. Thereafter, when a minimum amount of class facade is written and tested, the component is integrated with the larger project, where the necessary client code is written. It's beneficial for such component classes to include a method to dump their state to the logger. If the streams based logger is employed throughout, the overloaded stream output ('<<') operator can be used to dump the object's state in both the unit-test console program (through cout or cerr) as well as the larger project (presumably through a class such as AutoLogger above).

As an example, let's consider the following Employee class:

// employee.hpp
#include <string>
#include <ostream>

class Employee {
    // data
    std::string firstname_;
    std::string lastname_;
    // needs to be declared as friend so that the function
    // can directly access the class private data members
    friend std::basic_ostream<char>& operator<<(
        std::basic_ostream<char>&, const Employee&);
public:
    Employee(std::string const& f, std::string const& l)
        : firstname_(f), lastname_(l) {}
};

// implementation
std::basic_ostream<char>& operator<<(std::basic_ostream<char>& os,
        const Employee& emp)
{
    os << "Employee: "
        << emp.lastname_ << ", " << emp.firstname_;
    return os;
}

With this in place, we can now write a unit-test program as:

#include <iostream>
#include "employee.hpp"

int main(void)
{
    Employee e("John", "Smith");
    std::cout << e;
    return 0;
}

And in the production code (with the above logging framework integrated):

Employee e = getAuthenticatedUser();
getAutoLogger(LOG_LEVEL_ERROR).getStream()
    << "Initializing document rights for user: "
    << e << std::endl;


The above example illustrates one very important aspect of the C++ language: with only the STL and clever usage of automatic object scope rules, one can develop pretty sophisticated mechanisms that yield very robust, high-quality software.

The other important aspect illustrated by this solution is the variable scope rules and how they can be used effectively to eliminate the possibility of memory leaks. Locally scoped objects and their automatic destruction semantics are among the most powerful features of the language, and effective employment of them can eliminate the need for explicit memory management in the code.

Feb 26, 2013

Explicit Overrides

In the last two days, I have been twice bitten by the loose nature of legacy C++.

I developed a base class that abstracts the core features of a module about a year back. The class itself was a great success. Since developing it for a specific module, I successfully refactored another module to use it, and was able to do so in a relatively short window of one week. This is the background.

Recently, I had an opportunity to re-use this class for another module, and having been successful twice earlier, I guess I was a little callous in using it without referring to the accompanying documentation. The design requires that I override a bunch of methods, and admittedly I was a little cocky -- having designed and implemented it earlier, it should be a snap for me to re-use it. So I wrote the code, compiled it, removed a few syntax errors and dropped it into test. All good, except that a couple of overridden methods were not being invoked!

WTF? It also so happened that the callbacks to the overridden methods were triggered by an external event. Off I went tracing this path to see what's going on. Nothing wrong there either. WTF??

With nowhere else left to investigate, I went back to the base class design. And it was then that I realized the rather embarrassing mistake that I had made. I guess the code is perhaps worth a thousand words -- so here it goes.

Base class implementation:

class Base {
public:
   virtual void onInitialize();
   virtual void onShutdown(bool fGraceful);
};

And here's the derived class:

class Derived : public Base {
public:
   virtual void onInitialize();
   virtual void onShutdown();
};

Note the mistake? I had forgotten about the bool fGraceful parameter to onShutdown(), and as a result it was declared as a new method in the derived class!

This is precisely the kind of problem that the explicit override feature in C++11 allows you to evade. If you have a conforming compiler (I didn't), you can declare the derived class as:

class Derived : public Base {
public:
   virtual void onInitialize() override;
   virtual void onShutdown() override;
};

What the trailing override does is inform the compiler that onShutdown() is a method that overrides its namesake in the base class. The compiler, seeing that the prototype for the method in the derived class doesn't match what's in the base, raises an error which allows you to spot the problem and fix it, even when you're overconfident and cocky!

As for me, I don't have the luxury of using a C++11 compiler, and have to live with the possibility that I will make these mistakes again!

Feb 6, 2013

Back to the roots

For the past couple of weeks, I have been working on something that I thought I had left behind a long time ago, a very long time ago -- a Windows device driver!

I had launched my career writing drivers, first for OS/2 and then eventually for Windows. When I say Windows, I'm referring to Windows NT 3.51. That's correct, NT 3.51! And here's a surprise: the first Windows NT driver that I worked on ran on the PowerPC platform! IBM was readying their new state-of-the-art PC platform and, to play it safe, wanted to give the user a choice of platforms to run -- OS/2 and Windows NT.

Being a new platform, the toolsets were not quite as well developed as they were for the more established and stable x86. WinDBG (a far cry from the WinDBG of these days) had issues syncing the source line information in the PDB with the actual source files. Quite often I had to resort to using the disassembly to isolate the root cause and fix it. Another challenge that I faced then was that the x86 driver had plenty of inline assembly which had to be ported to either C or PowerPC's RISC instruction set.

But what sets apart the driver development experience then and now are the advances made in virtualization technology. Gone are the days of the NULL modem cable and the rather long wait with every WinDBG command. Instead you work with multiple VMs, each with a virtual serial port mapped to a named pipe, debugging different versions of the driver simultaneously! The fact that VM states can be saved and cloned at will really makes driver development a breeze.

One thing, however, has not changed much. Much of an NT driver is still written in C with very little C++. It's a shame that MS has not managed to update their compiler such that it can generate a driver-safe PE image. Let's hope that this will change in the near future.

Anyway, it's hard not to feel like I'm going back to my roots.

Jan 30, 2013

Back to the blogging world...

I'm back online after a hiatus of almost 12 months.

Having maintained a self-hosted blog on a shared hosting service, and having gone through the pains of keeping the WordPress version current, I'm gonna give the Blogger service a try.

Since I use Gmail as my primary email service, Blogger and its tight integration with Google services ought to make this a pleasant experience.

Also, Blogger provides free mapping of custom domains to the hosted blog. That means I just need to pay for my domain name -- no more hosting charges! Not sure if there's a way by which I can get my domain mail account to work for free as well. Let's see.

May I find the drive to persevere with this effort for the long term. Wish me luck!