Tim Akinbo | TimbaObjects Technologies Ltd.

All posts by Tim Akinbo

Faster Data Input using Form Navigation

Project SwiftCount Checklist Entry Form

Data entry is at the heart of operations at Project SwiftCount. Our data clerks have to fill out forms like the one in the screenshot above over and over again. In studying how they accomplish this task, we noticed one activity that takes a lot of time – leaving the keyboard to use the mouse.

Every time a data clerk wants to navigate from one input field to the next, they have to reach for the mouse and click on the next field. The really smart ones have figured out the Tab key and use it to move between fields. You’ll agree with me that if you have to fill in fifty of those input fields, that’s a lot of work.

To help, we worked on a jQuery snippet that lets the keyboard be used to navigate from one field to the next. All the data clerk needs to do is press the “n” key to move to the next input field or “p” to move to the previous one. This works for us because numerals are the only valid input in each of these text boxes, so trapping the “n” and “p” keys for navigation suffices.

We’ve created this snippet demonstrating how it was done.
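A minimal sketch of the idea (the function and selector names here are illustrative, not the exact production snippet): the navigation logic itself reduces to computing the index of the field to focus next, which the key handler then applies.

```javascript
// Given the index of the currently focused field, the total number of
// fields, and the key pressed, compute the index of the field to focus.
// Wraps around at either end of the form.
function nextFieldIndex(current, total, key) {
  if (key === 'n') return (current + 1) % total;         // next, wrap to first
  if (key === 'p') return (current - 1 + total) % total; // previous, wrap to last
  return current;                                        // any other key: stay put
}

// Hypothetical jQuery wiring (runs in the browser; selector is illustrative):
// $('input.checklist').on('keydown', function (e) {
//   var fields = $('input.checklist');
//   var key = String.fromCharCode(e.which).toLowerCase();
//   if (key === 'n' || key === 'p') {
//     e.preventDefault(); // keep "n"/"p" out of the numeric field itself
//     fields.eq(nextFieldIndex(fields.index(this), fields.length, key)).focus();
//   }
// });
```

Because only numerals are valid input, `preventDefault()` can safely swallow the “n” and “p” keystrokes without losing data.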

Two are better than one when it comes to servers

Two (and probably more) are better than one when it comes to deploying servers for a web application. When we deploy our applications, we generally like to separate the database server from the application server. If we were deploying a PHP web application, for instance, the web server (which also doubles as the application server in the simplest cases) would be deployed separately from the MySQL database server. That costs more in terms of hardware resources, but we’ve found that it not only improves application performance but also increases application reliability. At TimbaObjects, we obsess over the visual appeal, speed, data integrity and reliability of our applications, so our approach to building them generally means separating data processing from data storage.

In the early hours of this morning, one of our application servers, used to store, manage and process data for one of our clients, went down. Our client is an accredited election observation group. They recruit and train election observers to fill out and send answers to an election checklist (as structured text messages) to a shortcode, which is then routed to an application we built that parses, validates and stores the data in each message and sends a response as a reply to the sender.

Our connectivity with the telecom operators is via an SMPP link with our content aggregators, and that’s a good thing, because if our server goes down for any reason, messages bound for our shortcode get aggregated until we re-establish the connection. This is about the second time we’ve had this problem in the close-to-one-year period this application has been live. Further investigation into the crash revealed that we ran out of memory on the box and, while the operating system was attempting to reclaim memory by killing tasks, it seg-faulted. Unfortunately, our watchdog is unable to detect this condition and reboot the server, so a reboot had to be done manually. This has only ever happened on the application server; the database server has been largely unscathed – we have an uptime of over 303 days on it (as at the time of writing).

Our deployment uses an Apache server with mod_wsgi to serve the application, preforking 10 processes to handle requests. In order to provide access to previous elections’ data, we copy this configuration over for every election. With a current deployment covering five election schedules, we have over 50 Apache processes. On a server with only 768MB of RAM, that’s a lot of memory we’re talking about. The machine hosting our web server also doubles as the SMS processing server, so we have additional Kannel bearerbox and smsbox processes routing messages over HTTP to a node.js HTTP proxy server we built to handle HTTP requests on behalf of RapidSMS (which our application is built on). Add all these processes together and you have a box that’s about to go belly up.

We solved the problem by reducing the number of processes Apache forks to handle requests for the archived applications, since those applications don’t get used much. From about 59 Apache processes, we went down to 25. We’re currently using 56% of available RAM to serve the applications, with no swap used yet. Our swap is only 256MB; with something much bigger, we probably wouldn’t have run out of memory.
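A sketch of the kind of change involved, assuming mod_wsgi daemon mode (the directive values, group names and paths here are illustrative, not our actual configuration):

```apache
# Hypothetical vhost fragment for one archived election application.
# Archives see little traffic, so they get far fewer daemon processes
# than the live election's application.
WSGIDaemonProcess election_archive processes=2 threads=5
WSGIProcessGroup election_archive
WSGIScriptAlias /archive /srv/apps/election_archive/wsgi.py
```

Multiplied across several archived elections, trimming each application's process count like this is what brought the box back under its memory budget.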

Of course, we could have just used one server with a lot of RAM to host both the database and the application; however, a lot of things can go wrong, and the fewer applications you have running on a server, the better for its stability. You simply don’t want a single point of failure in your application architecture.

Next time, we’re considering separating the SMS processing application from the application server, so that in the event the application server goes down, text messages will continue being processed.

Changing the future

You’ve probably seen a movie or two about someone having the power to predict the future or, in some cases, see into the future. It’s all too likely that if you were able to see into the future, you’d change a thing or two – and then the future changes.

In the Jan. 2, 2012 edition of The New Yorker, there’s an IBM ad with the title “Drivers now ‘see’ traffic jams before they happen”, citing a case in which IBM helps Singaporean motorists see traffic congestion one hour before it happens.

This and many other types of prediction applications use tons of historical and realtime data to accomplish this feat. The utility of such an application is undoubtedly undisputed (pardon the pun). If the removal of the fuel subsidy doesn’t discourage people from driving their own cars, and hence potentially reduce congestion in a city like Lagos, such an application will be very useful.

Someone might ask: if prediction algorithms tell us there’ll be heavy traffic on 3rd Mainland Bridge, the suggestion might be to take the Ikorodu Road route instead – but that then causes a problem on Ikorodu Road, since every motorist will want to take that route. Good question, but the algorithm also uses realtime data, so it would take into consideration the fact that traffic is building up on Ikorodu Road and make a new prediction, and hence a new suggestion. It’s like having a traffic warden who sees all the routes at once and can make better decisions as a result of that oversight.

Welcome to the new year.

NSE Data API usage report

It’s been two weeks since we launched our Nigerian Stock Exchange data API. We decided to do some analytics on the API usage yesterday and, to our amazement, we’ve received a total of 665 hits on our API since launch on July 13th, 2011. Certainly, not all of these translate into valid requests. Here’s a breakdown:

Response Type                           Response Code   Hits
OK (Successful API requests)            200             410
OK (All successful requests)            200             439
Forbidden (Unauthenticated requests)    403               9
Not Found                               404             182

Breakdown of HTTP Request Types

The large number of Not Found responses was generated by trojans making requests for paths like /PhpMyAdmin (they were probably looking for security loopholes).

Of the 23 API tokens that have been provisioned, 65% have seen use. Here’s the distribution of requests we’ve received from those keys:

Requests   Users   %
1          4       26%
2          4       26%
3          1        7%
4          2       13%
5          2       13%
8          1        7%
369        1        7%

Requests Vs API Users

In order to make the points in the chart fit within a reasonable chart size, the topmost point was scaled down.

How to pitch to VCs

Delegates at the Pitch Monday Event

Yesterday at Pitch Monday, a number of startup founders in Nigeria had the opportunity to pitch their ideas to investors and get a chance to raise funding.

While most of the ideas pitched were notable, it was quite obvious that most of the founders weren’t prepared for their pitches – or didn’t know how to prepare.

There are numerous articles online on how to prepare for a pitch, so I’ll recommend reading as much on the subject as possible.

In this blog post, I’ll share and expand on some of the tips Tomi Davies gave the delegates on pitching their ideas at the tail end of the event.

Tomi Davies summarized it by saying that pitching an idea is all about telling a story, and that your story needs certain key components to be effective.

He breaks it down into the following key components: Proposition, Organization, Economics and Technicalities (POET).


Proposition

Your proposition describes what your service is all about: what problem you are attempting to solve, what value you intend to create, and for whom. Depending on how much time you have, you also want to talk about your suppliers, your customers, how your product (or service) differs from the competition, and your strategy for maintaining your competitive edge.


Organization

In organization, you essentially describe your team: who the members are (or will be) and what roles they play (or will play). It appears to me that investors favor startup teams of two or more founders – preferably a team comprising someone who’s good at product development and marketing (the business person), a hacker (the developer), and someone responsible for aesthetics (the interface).


Economics

This is where a number of startup founders fumble. It’s important to show the investor the money. Most investors are not investing for philanthropy but for profit, and if they cannot clearly see how your startup will make them money, you just won’t get them interested enough to invest. The economics of your startup outlines your cost centers and expenditure (both operational and capital) and your revenue sources. You would do well to explain how you realistically intend to make money, and to show how long an investor is likely to wait to recoup their investment and start profiting from your startup. This is where you talk about things like return on investment and your rate of return.
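As a toy illustration of the payback arithmetic investors care about (all numbers and names here are invented for illustration, not from the talk):

```javascript
// An investor puts in `invested` for an `equity` share of a startup whose
// annual net profit is `annualProfit`. How many years until the investor
// recoups the investment from their share of the profit?
function paybackYears(invested, equity, annualProfit) {
  var investorShare = equity * annualProfit; // investor's slice of profit per year
  return invested / investorShare;
}

// e.g. $100,000 for a 20% stake in a startup netting $125,000/year:
// paybackYears(100000, 0.20, 125000) → 4 years before the investor breaks even
```

Being able to walk an investor through numbers like these, however rough, is what “showing them the money” means in practice.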


Technicalities

This has to do with how your startup actually works. Most startup founders start with this but, as Tomi Davies mentioned yesterday, it should be the last thing you talk about. If you’re discussing your startup with other geeks, sure, this can come first – but not when you’re pitching to investors.

In conclusion, I’m really excited about the startup community in Nigeria and I’m looking forward to seeing startup founders grow their businesses into world-class companies.

Private beta announcement of NSE data service

For well over a year now, we’ve been testing an API we developed at TimbaObjects that provides data on trades on the Nigerian Stock Exchange. A number of our clients already use this data in production for some of their applications, and we’re now repurposing the API for public use.

This data service is the first of a number of such services we intend to provide to developers, who will in turn use them to create valuable applications. With the NSE data API, you get daily access to data on all stocks traded on the Nigerian Stock Exchange.

We’ve yet to launch this private beta, and in the meantime we’re seeking feedback, in the form of a survey, on what kinds of services or applications will be built with this API. This will give us a sense of the data requirements and help us provide more value to the developers who are likely to use the service.

To gain access to the API at launch, please fill out our survey – and don’t forget to leave your contact information so we can sign you up when we launch.

[Survey link]

On Agile Development

Back in the day, the waterfall model was the de facto standard of software development; the times, however, have changed. There are now numerous methods and approaches to software development and, in some cases, minor dialects of the same development process. At TimbaObjects, we use an Agile development process that’s somewhat custom, and so not as prescriptive as methods like Scrum, XP or AUP. We use tools like Pivotal Tracker that help us track and respond to issues as quickly as possible.

Here’s our typical process:

  1. We sit with the client to collect requirements. These might be new features, bugs, or modifications to existing features.
  2. We create tickets on our tracking system to reflect the features or needed bug fixes.
  3. We work on the features and bug fixes.
  4. We meet with the client again for feedback.

We continue this cycle until the project is completed. Pretty simple, right? Well, not exactly: if you’re accustomed to the waterfall process, this is really going to move you out of your comfort zone. In our experience, no software project is particularly complete until the software itself or the project expires. What’s more, very few clients know what they really want at the start of a project. As the software becomes a reality, there will be modifications to the requirements – some big, some small. Without an iterative process, this becomes a nightmare for both client and developer.

In our most recently concluded project, we used the Agile development process to really speed up turnaround time for software delivery. In hindsight, we probably couldn’t have successfully executed the project without going Agile.