My name is Ismael Chang Ghalimi. I build the STOIC platform. I am a stoic, and this blog is my agora.

Since we’re now mostly fixing bugs as we’re finding them, there is no point reporting on weekly velocity anymore. But as you can see on this screenshot, we’re definitely converging. Over the past few days, François has been going through every single object one by one, finding and fixing many new bugs along the way. He also made significant improvements to our charting engine. For their part, Florian and Zhipeng have been working on Regions and choropleths, and we’re a couple of days from being able to generate these maps from addresses. In the meantime, Hugues is putting the final touches to the upgrade framework with Yves and has started to synchronize with François so that we can connect it to our new user interface. And while all this was happening, Jacques-Alexandre has made major additions to STOIC Drive and developed a connector for Google Contacts, while Jim has been working on provisioning. Busy, busy, busy…

Oh, joy! Pascal finally pushed his code for the distributed meta-data cache. We’ve been waiting for this moment for months, and it’s finally upon us. Of course, it’s likely to break pretty much everything, but since we don’t have any more features to implement, we have time to fix bugs.

The Choropleth chart is now up and running. Unfortunately, it’s not yet integrated with the Address datatype, because that requires mapping geocoded addresses to the Regions object, which is quite complex, especially if you want to support regions that are more granular than countries, such as US states or counties.

There are two problems to solve there: first, mapping elements of the geocoding information we get from the Google Maps API to our Regions object; second, feeding that information to Elasticsearch in such a way that aggregations can be done directly by the database. I’m not sure that we’ll manage to fix that bug for the MVP, but we’ll try, because I know a customer or two who will really, really need it.
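To make those two problems a little more concrete, here is a rough JavaScript sketch of what such a pipeline could look like. The `address_components` and their `types` come straight from the Google Geocoding API; the `records` index, the `region` field, and the helper functions are purely illustrative, not the way STOIC actually models things.

```javascript
// Illustrative sketch only: resolve a Google geocoding result to region
// codes at write time, then let Elasticsearch aggregate on them directly.
// Index name and field names are made up for the example.

var elasticsearch = require('elasticsearch');
var client = new elasticsearch.Client({ host: 'localhost:9200' });

// The Geocoding API tags each address component with types such as
// 'country', 'administrative_area_level_1' (a US state), and
// 'administrative_area_level_2' (a US county).
function extractRegions(geocodeResult) {
  var regions = {};
  geocodeResult.address_components.forEach(function (component) {
    if (component.types.indexOf('country') !== -1) {
      regions.country = component.short_name;   // e.g. 'US'
    }
    if (component.types.indexOf('administrative_area_level_1') !== -1) {
      regions.state = component.short_name;     // e.g. 'CA'
    }
    if (component.types.indexOf('administrative_area_level_2') !== -1) {
      regions.county = component.long_name;     // e.g. 'San Mateo County'
    }
  });
  return regions;
}

// Denormalize the region codes onto the record before indexing it.
function indexRecord(record, geocodeResult) {
  record.region = extractRegions(geocodeResult);
  return client.index({ index: 'records', type: 'record', body: record });
}

// A terms aggregation the charting engine could use to color a US state
// choropleth, with all the counting done by the database itself.
function countByState() {
  return client.search({
    index: 'records',
    body: { size: 0, aggs: { by_state: { terms: { field: 'region.state' } } } }
  });
}
```

The point of denormalizing the region codes is that the aggregation key lives in the index itself, so the choropleth never has to join against the Regions object at query time.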

Yes, I’m thinking about you Unmesh…

Last week was really quite frustrating, but necessary to get us where we want to be. Pascal had to restart from scratch the merging of his distributed meta-data cache with the rest of the codebase, which cost us one, if not two, weeks of work. That being said, things are back to normal, and he is now in a position to actually complete this integration. In the meantime, Hugues and Yves have made great progress with the upgrade framework, which is nearing completion.

On the user interface front, François has been fixing a ton of bugs with the charts generator, but will need another week to finish everything there. For his part, Florian has been polishing our wizards framework and the record view, while helping Pascal here and there. And while all this was happening, Jim has refactored the integration of CircularJS with the rest of the platform and has worked on deploying our new website.

And for my part, I’ve done very little beside preparing my move to Singapore…

Lease signed

I just signed a lease for an apartment and home office in Singapore, starting June 1st, in the Novena district. I will move there in early June, and my family will follow in late July. After that, I will commute back to the US every 4 to 6 weeks.

As you can see on this list of completed stories, most of our work is currently focused on fixing any bug we can find. It’s a painstaking piece of work, but it should be worth the effort. In the meantime, Hugues and Yves are almost done with the meta-data upgrade framework, and Pascal managed to unstick the merging of his refactored meta-data caching layer with the rest of our codebase. I’m hopeful that he’ll get something working next week, just in time for an upcoming proof-of-concept with a major customer.

François made some improvements to the way we display suggestions for charts. Whenever a field has more than 6 possible charts, we only show 5 of them and an ellipsis; clicking the ellipsis opens a modal dialog with all possible charts. Later on, we will add some smarter algorithms to prioritize the top-5 charts.
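The rule itself is simple enough to fit in a few lines. The sketch below is only an illustration of it, with made-up names rather than the charting engine’s actual API:

```javascript
// Sketch of the suggestion rule described above: more than six possible
// charts means five inline plus an ellipsis that opens a modal listing
// all of them. Names are illustrative, not STOIC's real API.

var MAX_INLINE_SUGGESTIONS = 5;

function chartSuggestions(possibleCharts) {
  if (possibleCharts.length <= MAX_INLINE_SUGGESTIONS + 1) {
    // Six or fewer: show them all, no ellipsis needed.
    return { inline: possibleCharts, overflow: [] };
  }
  // More than six: five inline, full list behind the ellipsis (modal).
  return {
    inline: possibleCharts.slice(0, MAX_INLINE_SUGGESTIONS),
    overflow: possibleCharts
  };
}
```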

After a couple of weeks of hard work, Zhipeng has managed to develop a very decent connector for the Wikipedia API. This allowed us to automatically download all the data we needed for the Countries and Currencies objects. The data wasn’t entirely clean, but we massaged it manually a little bit, and it’s now complete.

This project took a lot longer than expected, because the Wikipedia API provides semi-structured content, which needs to be parsed in a very ad-hoc fashion. We looked for existing libraries, but could not find any that worked sufficiently well and were written in JavaScript, so we had to manually port one written in Python.
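To give a feel for what “semi-structured” means here, the sketch below fetches raw wikitext from the MediaWiki API (that part of the request is real) and pulls one value out of an infobox with a naive regex. The regex and the `iso_code` parameter are only there to illustrate why ad-hoc parsing, and ultimately a ported parser, were needed.

```javascript
// Rough illustration of the problem: the MediaWiki API returns raw wikitext,
// and values such as a currency's ISO code live inside infobox templates
// with no fixed schema. The endpoint and query parameters are real; the
// regex and the infobox parameter are illustrative.

var https = require('https');

function fetchWikitext(title, callback) {
  var url = 'https://en.wikipedia.org/w/api.php' +
    '?action=query&prop=revisions&rvprop=content&rvslots=main' +
    '&format=json&formatversion=2&titles=' + encodeURIComponent(title);
  https.get(url, function (res) {
    var body = '';
    res.setEncoding('utf8');
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      var page = JSON.parse(body).query.pages[0];
      callback(null, page.revisions[0].slots.main.content);
    });
  }).on('error', callback);
}

// Pull a single infobox parameter (e.g. "| iso_code = SGD") out of the
// wikitext. Real infoboxes nest templates and links, which is why a few
// regexes are never enough and a proper wikitext parser had to be ported.
function infoboxValue(wikitext, parameter) {
  var match = wikitext.match(new RegExp('\\|\\s*' + parameter + '\\s*=\\s*([^\\n|]+)'));
  return match ? match[1].trim() : null;
}

fetchWikitext('Singapore dollar', function (err, wikitext) {
  if (err) { throw err; }
  console.log(infoboxValue(wikitext, 'iso_code')); // e.g. "SGD"
});
```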

Many thanks to everyone who helped us with the original manual data collection! We ended up not using this content, but it allowed us to test our connector, and to convince ourselves that we should build one in the first place.