Earlier today, Yves and François integrated HypercubeJS into core.stoic and connected the new Pivot perspective to it. We now need to finish our query interface, and then we’ll be able to produce our nice and shiny pivot tables. That, combined with native C++ indexes, should give us quite a bit of bang for our buck.
To measure the overhead of crossing the JavaScript/C++ boundary, we benchmarked two implementations of a trivial function hello() that would return the world string. In the first case, the string is a local variable of the JavaScript function. In the other, it’s a value returned by an external C++ library. Results? 300 ns for the former (n as in nano), and 400 ns for the latter (on average, over 10,000,000 calls).
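For reference, here is a minimal sketch of this kind of micro-benchmark in Node.js. The native binding shown in the comments is a hypothetical stand-in for the actual C++ addon; any module exposing a hello() that returns "world" from C++ would do:

```js
// Pure JavaScript version: the string is a value local to the function.
function helloJs() {
  return 'world';
}

// Hypothetical native counterpart, e.g. an addon built with node-gyp:
// const helloNative = require('./build/Release/hello').hello;

// Time many calls with process.hrtime() and report the per-call average.
function bench(fn, iterations) {
  const start = process.hrtime();
  for (let i = 0; i < iterations; i++) fn();
  const [sec, nsec] = process.hrtime(start);
  return (sec * 1e9 + nsec) / iterations; // average nanoseconds per call
}

console.log('JS :', bench(helloJs, 10e6).toFixed(0), 'ns/call');
// console.log('C++:', bench(helloNative, 10e6).toFixed(0), 'ns/call');
```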
Conclusion: externalizing a function increases the latency of its invocation by about 30% (400 ns vs. 300 ns, a roughly 33% overhead), which is very reasonable. In exchange, you get virtually unlimited memory, and you get native C speed. How much faster will our indexes be as a result? I’m willing to bet on anywhere between 2 and 5 times, and we should get a first answer for Hexastore sometime tomorrow…
Importing large datasets through Excel spreadsheets is usually not a good idea, because there is no liberally-licensed open source parser for XLSX files that supports streaming. As a result, you’re limited by the amount of heap memory a Node.js process can address, which is about 1.4GB. Therefore, XLSX imports should be limited to relatively small sample datasets that are used to bootstrap your application. From there, another importer should be used.
For this purpose, Jacques-Alexandre has been working on a high-performance CSV importer implemented as a connector. Because it is a connector, it will bypass all the complex business logic implemented in the existing importer, connect directly to our database, and support streaming. This should allow us to import very large datasets with an optimal level of performance. We hope to have a first version of this sometime later this week, or early next week.
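To illustrate why streaming sidesteps the memory ceiling, here is a minimal sketch of a line-by-line CSV import using Node’s built-in fs and readline modules. The db.insert() call is a hypothetical stand-in for the direct database connection described above, and the naive comma split ignores quoting that a real parser would handle:

```js
const fs = require('fs');
const readline = require('readline');

function importCsv(path, db) {
  // Read the file as a stream, one line at a time.
  const rl = readline.createInterface({ input: fs.createReadStream(path) });
  let header = null;
  rl.on('line', (line) => {
    const cells = line.split(','); // naive split, for illustration only
    if (!header) {
      header = cells; // the first line carries the column names
      return;
    }
    const row = {};
    header.forEach((name, i) => { row[name] = cells[i]; });
    db.insert(row); // hypothetical connector API
  });
  rl.on('close', () => console.log('import complete'));
}
```

Since only one line is held in memory at a time, the size of the dataset is bounded by disk rather than by the Node.js heap.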