My name is Ismael Chang Ghalimi. I build the STOIC platform. I am a stoic, and this blog is my agora.

Refactoring of advanced relationships

After some discussions with Hugues and Pascal, we’ve decided to refactor the way we’re implementing advanced relationships. So far, these have been used for relationships that can have multiple target objects and/or multiple target records, and have been implemented by storing a dumb JSON object directly within records, without any native support at the middleware level. In other words, it was a temporary hack…

While this was enough for a while, it is creating some performance issues for certain types of queries, and it is limiting our ability to add support for more advanced features, such as the addition of custom attributes to relations in order to support triples (RDF's atomic data entity).

In order to work around these limitations, we have decided to implement advanced relationships with dedicated indexes on Elasticsearch (one index per relationship). This will dramatically speed up the execution of complex queries on advanced relationships, while allowing us to process a brand new class of queries that cannot be handled with our current implementation. It will also allow us to add schema-driven relation attributes, thereby supporting the native import/export of RDF data structures (I know a few people who will go crazy about that one).
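
As a rough sketch of the idea (the index naming and document shape below are hypothetical, not the actual STOIC schema), each relation would be stored as its own document, with room for custom attributes that turn it into an RDF-style triple:

```javascript
// Illustrative sketch: one index per advanced relationship, where each
// document is a single relation (source record, target record) plus
// schema-driven attributes. Names and fields are invented for illustration.
function relationIndexName(relationshipName) {
  // e.g. "WorksFor" -> "rel_worksfor"
  return 'rel_' + relationshipName.toLowerCase();
}

function makeRelation(sourceRecordId, targetObject, targetRecordId, attributes) {
  return {
    source: sourceRecordId,                                   // triple subject
    target: { object: targetObject, record: targetRecordId }, // triple object
    attributes: attributes || {}                              // relation attributes
  };
}

const doc = makeRelation('contact-42', 'Companies', 'company-7', { since: '2013-01-15' });
```

Querying a relationship then becomes a query against its own index, instead of a scan over JSON blobs embedded in records.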

Hugues will work on this as soon as he is done with permission restrictions.

Nashorn integrated into Elasticsearch

Victory! Hugues and our friend Kin Wah managed to integrate Nashorn into Elasticsearch.

Why does it matter? Well, we now have a powerful JavaScript runtime deployed on top of the Java virtual machine on which Elasticsearch is running. As a result, we will be able to execute our powerful FormulaJS expressions directly within the database. This should improve the performance of our authorization engine quite dramatically, while allowing us to perform queries for analytics that were simply impossible to handle before. Think of it as stored procedures on steroids, with a fully extensible functional language.
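
To illustrate the idea with stand-in functions (not the actual FormulaJS API), an Excel-style formula library is just plain JavaScript, so any JS runtime co-located with the data can evaluate record-level expressions without shipping records back to the application server:

```javascript
// Stand-in formula functions, invented for illustration.
const formulas = {
  SUM: (...xs) => xs.reduce((a, b) => a + b, 0),
  IF: (cond, yes, no) => (cond ? yes : no),
  AVERAGE: (...xs) => xs.reduce((a, b) => a + b, 0) / xs.length
};

// A record-level expression, evaluated close to the data -- with Nashorn,
// this could run inside the JVM that hosts Elasticsearch.
function evaluate(record) {
  const { SUM, IF } = formulas;
  return IF(SUM(record.q1, record.q2) > 100, 'high', 'low');
}
```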

Great work team Singapore!

Making sense of our JS soup

If you’ve been reading this blog on a regular basis, you might be getting confused with the explosion of JS projects and repositories that we’ve been creating. There is some method to our madness though, and I actually think it all makes sense. Let’s take a look at our major initiatives:

  • FormulaJS, JavaScript implementation of all Excel formula functions
  • ExpressionJS, functional language built on top of FormulaJS
  • ProcessorJS, declarative event-driven controller built on top of ExpressionJS
  • CircularJS, server-side templating engine built on top of AngularJS and ProcessorJS
  • WorkerJS, asynchronous event processor built on top of Bull and ProcessorJS

As you can see, ProcessorJS is built on top of ExpressionJS, which itself is built on top of FormulaJS. On top of it, we build CircularJS for developing the front-end of web applications, and WorkerJS for developing their back-end.

What’s so great about this architecture is that CircularJS and WorkerJS share the same ProcessorJS engine, which means that you have only one thing to learn, and we have only one pattern to support, especially when it comes to the development of graphical tools.

And when we start letting you reuse the same directives across AngularJS and CircularJS, while using FormulaJS expressions that can be applied at any level of the stack, our overall level of genericity should be pretty phenomenal…

Distributed architecture

A year ago, Hugues and I had a passionate discussion about the benefits of a distributed architecture, whereby a collection of small applications collaborate toward a common goal. Hugues was of the opinion that it was the only way to scale in the cloud. I was of the opinion that it did not really matter as long as you had proper support for clustering.

I was wrong.

Earlier today, we deployed a first version of CircularJS on Cloud Foundry for our new website. It turns out that our core server crashed soon thereafter, but CircularJS kept running just fine, because it does not rely on the server responsible for providing our web user interface. Instead, it connects directly to our Elasticsearch database, using the exact same middleware as the one used for our web application.

Today, I’m convinced that with proper provisioning, a collection of smaller applications beats a single monolithic one.

Hugues: respect!

Integrating Bull

In order to support some critical use cases, we need to execute the FormulaJS expressions defined by the fields of imported objects. This could slow down the import process to the point where we need a proper job management system in order to keep things running smoothly.

Somehow, we managed to do everything we’re doing without such a queuing system, but I don’t think this is sustainable anymore, and this opinion is shared by Hugues as well. I did not get a chance to discuss it with Pascal or Jim, but I’d be surprised if they did not agree with our analysis.

In the spirit of keeping things as simple as possible, we decided to give the Bull Job Manager a spin. It’s powered by Redis, which we already use for managing user sessions, and it’s inspired by Kue, which we really liked when it came out.

Hugues will build a new application for it that can be used to deploy as many workers on a cluster as we need. This application will then be used to manage our batch import process for complex spreadsheets, as well as for implementing our email facet.

If it works well, we will then refactor our Batch and Jobs objects to take advantage of it.

Thoughts on Git integration

Over the past couple of days, I’ve been using STOIC Pages to develop a blog publishing application and rewrite the documentation for Formula.js. As described in this earlier post, I’m thoroughly enjoying the experience of using a database-driven application to manage pages, templates, fragments, and their inter-relationships.

By the same token, I’m also experiencing the frustration of not having direct access to resources like JavaScript libraries, CSS stylesheets, and images through a regular filesystem. Of course, something like STOIC Drive will address this issue for binary resources like images, but it won’t do much good for text-based resources that currently require a database back-end.

What I’m really experiencing is the fundamental dichotomy that traditional IT systems usually create between flat files and database records. It’s as if you always have to choose one over the other. Either you get the convenience of files but you lose the power of a database, or you get the power of a database but you lose the convenience of files. No matter which one you pick, you can never have both at the same time.

Well, I don’t take no for an answer easily, and I really want both. To me, flat files and database records are nothing more than two materialized artifacts for the same abstract entities. If I’m dealing with pages, I’d like to manipulate them as files when I’m in the mood for some serious hacking, and I’d like to visualize them as records when I’m trying to make sense of their relationships. And I want to be able to switch back and forth between the two at any time. In other words, I want files and records to be two facets of the same entity.

What this means is that we really need a file-based datastore alongside our record-based ones. And we need these datastores to be synchronized with each other at all times. Here is how it would work: First, we would define a canonical mapping between database records and flat files. Second, we would create connectors for various file-based protocols like Git. Third, we would map our commit process to the commit processes of these protocols.

The canonical mapping between database records and flat files could look like this:

  • Record name mapped to file name
  • Record parent mapped to parent folder when a hierarchy is defined for the object
  • Record owner mapped to file owner
  • Record update timestamp mapped to file update timestamp
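
The mapping above could be sketched as follows; the record fields (`name`, `owner`, `updatedAt`, `parent`) are illustrative stand-ins, not our actual schema:

```javascript
// Hypothetical sketch of the canonical record-to-file mapping: the record's
// parent chain becomes the folder path, and ownership and timestamps carry over.
function recordToFile(record, getParent) {
  const segments = [];
  for (let r = record; r; r = getParent(r)) {
    segments.unshift(r.name);    // record name -> file name, parents -> folders
  }
  return {
    path: segments.join('/'),
    owner: record.owner,         // record owner -> file owner
    mtime: record.updatedAt      // record update timestamp -> file mtime
  };
}

const records = {
  p1: { name: 'site', owner: 'ismael', updatedAt: 1 },
  p2: { name: 'index.html', owner: 'ismael', updatedAt: 2, parent: 'p1' }
};
const file = recordToFile(records.p2, r => records[r.parent]);
```

Running the sync in the other direction (file change updates the record) would use the same mapping in reverse.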

What this means is that all the pages of my website that are currently managed by our Pages object and are stored in our database would also be available as files and folders on our filesystem. Whenever I would modify a page from our user interface, its record would be updated on the database, and its corresponding file would be refreshed on the file system. Similarly, if I were to edit the page directly from its file on the filesystem, its corresponding database record would be updated automatically.

From a connector standpoint, we should try to stay as close to a generic filesystem model as possible, instead of dealing with the idiosyncrasies of source control systems. The last thing we want to do is to develop an abstraction for various source control systems like Git or CVS, because such a thing does not exist. And if it did, it would be utterly useless. Instead, we have to consider that different datastores serve different purposes, and that data does not have to be replicated in full across the different datastores that make up our hybrid back-end.

As a result, all the business logic inherent to things like versioning and branch management should be kept within the source control system, and should be hidden from the rest of the platform. If I’m interested in these considerations, I’ll use my favorite Git client to manipulate my entities as files. But when I’m done and I’m switching to the STOIC user interface, I do not want to see anything related to versioning or branching, because with the hat that I’m wearing right now, these things do not make sense to me anymore.

Clearly, these ideas are still in their infancy, and we’ll need to refine our thinking before we start implementing anything. But the more I’m thinking about them, the more I believe that we’re onto something really interesting there. From an implementation standpoint, we would certainly start with Git, using GitHub as a testing target. And if we need a Git client, we would use js-git. Or we could ignore the source control system altogether, and just provide a file interface, using the regular file system as the interface between our middleware and any source control system you like.

Simple is beautiful…

Modularizing our spreadsheets

Originally, all our meta-data was defined and stored in a single spreadsheet. Over time, we felt that we had to modularize things a bit better, and we decided to use one spreadsheet per application. After a while, some of our meta-data became too large for the Platform spreadsheet, and we started to pull some of it out and store it in a separate Resources spreadsheet. Yesterday, we decided to fully modularize our spreadsheets and to allow users to externalize any piece of content they want into individual files.

To better understand how it will work, one needs to understand the different kinds of data and meta-data we have to handle. Now that we have a better grasp of it all, we’ve started to classify our structured data into four main categories:

  • Über-data (Objects, Fields, Relationships)
  • Meta-data (44 objects of Platform)
  • Reference data (Countries, Currencies, etc.)
  • Business data (Companies, Contacts, etc.)

With our current design, all über-data and meta-data is defined in the Platform spreadsheet. When packaged into a single Excel spreadsheet, its size is 765KB, which is not tiny, but is not large either. Using the ODS Open Document format, it’s even smaller, taking only 623KB of space. And if we were to deduplicate all Bootstrap fields, its size would be a third of what it is right now.

Then, we have 32 spreadsheets stored in a Resources folder on Google Drive that contain records for reference data objects, as well as test records for business data, which we refer to as test data. Because the reference data needs to be deployed on all customer instances, we reference it from the Platform spreadsheet, by using a new field of the Objects object called Datasource.

We deal with test data in a totally different manner, because it’s not really part of the product that we ship. Unlike reference data, it’s not referenced from the Platform spreadsheet. Instead, each file used to externalize test data references its related object, currently by storing a small JSON object as a note added to the A1 cell of the single sheet contained by any test data spreadsheet.
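
For illustration, the A1 note might look something like this (the exact keys are hypothetical), and the import process would parse it to route the sheet’s records to the right object:

```javascript
// Hypothetical shape of the JSON note attached to cell A1 of a test data
// spreadsheet, linking the sheet back to its object.
const a1Note = JSON.stringify({ object: 'Contacts', type: 'testData' });

// On import, the note is parsed to decide where the sheet's records go.
function targetObject(note) {
  const meta = JSON.parse(note);
  return meta.type === 'testData' ? meta.object : null;
}
```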

This modular packaging allows us to implement a very simple import process for our structured data, starting with the Platform spreadsheet which references its required reference data, then optionally adding all test data spreadsheets to our internal testing instances.

All this should work next week.

Distributed Connector Architecture

This morning, Jim and Jacques-Alexandre have started prototyping a ground-breaking architecture for distributed connectors. The idea is that some connectors might require lots of hardware resources, and you don’t want to overload your primary cluster with them. In order to make them scale better, we defined an architecture allowing the deployment of individual connectors on external servers, either locally or remotely. And to make things even more scalable, connectors that need scheduling can run their own Cron scheduler, so that the scheduler of the main cluster does not become a bottleneck.


Live Updates Coming

Jim committed a piece of code that will allow us to push any changes made to data and meta-data onto all connected clients, instantly. Once we take advantage of this brand new feature from our user interface, it will give us a user experience similar to the one offered by Google Apps, where any change made by a user to a document is instantly visible to all other users looking at the same document. Our user interface will be refactored incrementally in order to implement this feature, and we hope to be done with it sometime in October or November.

Data vs. Meta-Data

Since we started working on the STOIC platform eighteen months ago, we’ve been very keen on making sure that meta-data behaves pretty much the same way as business data. In fact, for the longest time, there was no way to really distinguish one from the other.

As the platform matured though, their respective life-cycles started to diverge. For example, when we added support for meta-data caching, we had to explicitly indicate which objects would be included into this cache. This implicitly considered some objects as being part of the meta-data. Similarly, when we started to implement our Commit process, we had to identify a subset of these meta-data objects as special cases that require explicit commit operations.

Coming back to our original idea, treating meta-data and business data alike had clear benefits. For one, it allowed us to use the same canonical user interface for both. In other words, from the viewpoint of developers and users, meta-data and business data are the same thing. But from the viewpoint of the implementers of the platform (STOIC employees), they’re quite different, for rather good reasons. Clearly, we needed a way to reconcile both sets of requirements.

Today, we know that we want them to be both different and the same, all at once.

Then, as we started to implement our meta-data update framework to support cascading levels of meta-data custody, we realized that such a capability was required only for meta-data, not business data. The reason for it is very simple: while the platform vendor (STOIC), software vendors developing packaged applications on top of it, systems integrators customizing these applications to suit the needs of their customers, and customers configuring these applications could all make changes to meta-data, only customers (referred to as end custodians) would need to create and manage actual business data. As a result, the multi-custodian meta-data life-cycle could be applied to meta-data only, and we could pretty much ignore the concept of custodian for business data. This sudden reduction of scope opened the door to many opportunities for simplification and optimization, which we’re now taking full advantage of.

This is especially important because we’re doing all that work while finishing the implementation of our distributed meta-data cache and adding support for clustering. Taken individually, caching, clustering, and custody are hard enough to implement. Put together, they’re like rocket science, and the more you can simplify, the better a chance you have of ever making it work.

With that in mind, we’re now streamlining the end-to-end data lifecycle. Here is how it will work.

First, we’re separating data from meta-data entirely. Meta-data is defined as the records of the objects for which Cached (a field of the Objects object) is set to TRUE. All records of these objects will be part of our meta-data cache (mdCache). This cache will have two versions, one for servers containing all fields of cached objects, and one for clients only containing the fields for which Cached (a field of the Fields object) is set to TRUE.
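
Here is a minimal sketch of the two mdCache views, assuming illustrative `cached` flags on objects and fields:

```javascript
// Sketch of the two mdCache views described above (data shapes are invented).
// Server cache: all fields of objects whose Cached flag is TRUE.
// Client cache: only the fields whose own Cached flag is TRUE.
function buildCaches(objects, fields) {
  const server = {}, client = {};
  for (const o of objects.filter(obj => obj.cached)) {
    const ofields = fields.filter(f => f.object === o.name);
    server[o.name] = ofields.map(f => f.name);
    client[o.name] = ofields.filter(f => f.cached).map(f => f.name);
  }
  return { server, client };
}

const { server, client } = buildCaches(
  [{ name: 'Fields', cached: true }, { name: 'Contacts', cached: false }],
  [{ object: 'Fields', name: 'Name', cached: true },
   { object: 'Fields', name: 'Internal', cached: false }]
);
```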

Second, we’re creating one schema on PostgreSQL or one index on Elasticsearch for each and every custodian, according to the architecture described on this previous post, but these schemas or indexes are used for meta-data only. We then create a separate schema or index for business data, used by the end custodian only.

Third, we acknowledge the fact that any changes made to meta-data by upstream custodians (custodians other than the end-custodian) follow a different lifecycle than changes made by the end custodian. The former are traditionally called upgrades, while the latter are called configurations, customizations, or extensions. The former happen rather infrequently in a very controlled environment, while the latter happen on a daily basis, in a very ad hoc fashion. For this reason, they can be implemented very differently: the former is implemented by simply replacing a schema or index file by a new one, while the latter is implemented with incremental updates.

Fourth, we implement a cluster-friendly incremental update process for all updates made to meta-data by the end custodian. For these, we build an aggregated image of the meta-data by combining the meta-data schemas or indexes of all custodians, according to simple overloading rules. Usually, precedence is given to changes made by the custodian who is the most downstream in the custody chain. Then, we deploy this meta-data in memory on all servers and clients, and make sure that they remain synchronized at all times.
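
The aggregation step could be sketched like this, assuming each custodian’s meta-data is just a list of records keyed by id (the overloading rules are simplified to “downstream wins”):

```javascript
// Sketch: custodian layers are ordered from most upstream (platform vendor)
// to most downstream (end custodian); each downstream layer overrides
// records with the same id. Illustrative only.
function aggregateMetaData(layers) {
  const merged = new Map();
  for (const layer of layers) {      // upstream first
    for (const record of layer) {
      merged.set(record.id, record); // downstream wins
    }
  }
  return [...merged.values()];
}

const vendor = [{ id: 'view-1', label: 'Default' }];
const customer = [{ id: 'view-1', label: 'Customized' }];
const image = aggregateMetaData([vendor, customer]);
```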

Fifth, to keep everything synchronized, incremental updates to meta-data are first applied to a persistent copy of the aggregated meta-data stored by PostgreSQL or Elasticsearch. The meta-data is kept consistent through locking, which today is implemented in an optimistic fashion through the use of Change Identifiers (CIDs), but might be migrated to a pessimistic locking mechanism if we decide that it would improve the overall end-user experience. And we make sure that the internal structure of our meta-data cache supports incremental updates in a robust and high performance fashion, by getting rid of extraneous cross-references that were added to it.
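
A minimal sketch of CID-based optimistic locking, with a simplified in-memory store standing in for PostgreSQL or Elasticsearch:

```javascript
// Sketch of optimistic locking with Change Identifiers (CIDs): an update
// carries the CID it was based on, and is rejected if the stored record
// has moved on in the meantime. Not the actual implementation.
function applyUpdate(store, update) {
  const current = store.get(update.id);
  if (current && current.cid !== update.baseCid) {
    return { ok: false, reason: 'stale' };  // client must re-read and retry
  }
  store.set(update.id, { ...update.data, id: update.id, cid: update.baseCid + 1 });
  return { ok: true };
}

const store = new Map([['f1', { id: 'f1', cid: 3, label: 'Old' }]]);
const first = applyUpdate(store, { id: 'f1', baseCid: 3, data: { label: 'New' } });
const stale = applyUpdate(store, { id: 'f1', baseCid: 3, data: { label: 'Other' } });
```

The second update fails because it was based on CID 3 while the record has already moved to CID 4, which is exactly the conflict the mechanism is meant to catch.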

As a result of this architecture, the complete refreshing of our meta-data cache will happen a lot less frequently than it has so far. In fact, it will be limited to instances where meta-data needs to be upgraded by upstream custodians, or when clients go back online after some period of offline activity (once we add full support for offline access). This should improve performance while reducing the latency of both server operations and client interactions.

That’s pretty much all for now. If you followed me so far, good for you. If you did not, don’t worry. You don’t really have to understand any of this, unless you’re planning to deploy the STOIC platform at a very large scale. All you should know is that this stuff is what makes it work.

STOIC for Independent Software Vendors

We’re currently in discussions with a potential OEM partner. This deal is quite strategic for both parties, hence we’re engaged in a thorough due diligence process. Through our discussions, many interesting questions have been raised. Here is a summary of the most interesting ones, which could be of interest to other OEM partners, or to large customers planning to develop complex applications on top of the STOIC platform. This post is the longest ever published on this blog and repackages pieces that were published earlier.

Database Architecture
STOIC is designed to be database agnostic. Its core server is built around an advanced object data-mapper capable of supporting both SQL and NoSQL datastores. Furthermore, the server is designed to support a hybrid datastore architecture whereby multiple databases can be used at the same time, allowing different objects to be stored in different databases, or the same objects to be stored in multiple databases at the same time in order to benefit from different indexing, querying, and searching capabilities.
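
As a sketch of what such routing could look like (the configuration shape is hypothetical), each object declares which datastores it lives in, writes fan out to all of them, and reads go to a preferred one:

```javascript
// Illustrative sketch of hybrid-datastore routing, not the actual mapper.
function makeRouter(config) {
  return {
    write(object, record) {
      // Fan the write out to every datastore configured for this object.
      return config[object].stores.map(store => ({ store, record }));
    },
    readFrom(object) {
      // Reads go to the datastore best suited for this object's queries.
      return config[object].primary;
    }
  };
}

const router = makeRouter({
  Contacts: { stores: ['postgresql', 'elasticsearch'], primary: 'postgresql' },
  Events:   { stores: ['elasticsearch'], primary: 'elasticsearch' }
});
```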

At present, STOIC has been successfully deployed in two main configurations:

  1. PostgreSQL as primary and Elasticsearch as secondary
  2. Elasticsearch standalone

The first configuration brings the best of both worlds: full SQL semantics with PostgreSQL and advanced searching capabilities with Elasticsearch. The second configuration reduces complexity and overhead. Our experience is that Elasticsearch has now reached a sufficient level of maturity for it to be used as standalone database, instead of being used as a simple indexing and search engine coupled to a primary database. Today, this standalone configuration is our preferred deployment option, even though we’re maintaining support for the hybrid configuration as well, which is required by some of our customers.

In the past twelve months, we’ve also successfully deployed the full STOIC platform on top of MongoDB as a proof of concept. This helped us demonstrate the generic nature of our object data mapper. Nevertheless, we did not have an actual need for this configuration, and we are not supporting it anymore. We might revive it if some need for it arises in the future.

In the coming months, we will also add support for Cassandra as primary datastore, coupled to Elasticsearch for indexing and searching. This configuration should allow us to support deployment across multiple data-centers, in a fully distributed fashion (no single master). This configuration will require significant development efforts though, and will add a significant amount of additional complexity. As a result, it will be reserved for our largest deployments. This development is fully funded by one of our customers.

Based on past experience, adding support for a new database requires 2 to 6 person-months of development and testing effort. As our object data mapper gains more and more capabilities, the amount of effort required to support new databases might slightly increase. Adding support for a new SQL database is usually easier than adding support for a NoSQL database, because SQL provides a standard query language and because we are not using any stored procedures. That being said, we need support for advanced primitive types that are supported by Oracle and SQL Server but are not offered by MySQL. Therefore, it is highly unlikely that we will ever provide support for this database.

By the same token, replacing Elasticsearch with another indexing and searching engine would be a much more significant endeavor. This is due to the fact that we’re taking advantage of many advanced features offered by Elasticsearch, such as the ability to execute queries against multiple indexes at once, which is a core component of the architecture we implemented to manage the lifecycle of our meta-data (more on this later). Therefore, Elasticsearch should be considered as an integral and mandatory component of our architecture.

Additionally, we are using Redis as key-value store for managing user sessions.

Middleware Architecture
The STOIC server is built as a JavaScript application deployed on top of Node.js. It is written in plain JavaScript, instead of using pre-processed languages like CoffeeScript. It directly leverages over 50 third-party libraries (and many more indirect dependencies), all available under liberal open source licenses (Apache, BSD, MIT, etc.). These libraries are packaged using standard NPM modules and their dependencies are managed using RequireJS.

The core component of the server is the object data mapper (aka mapper). The mapper is responsible for exposing objects persisted in the database as plain old JavaScript objects. The mapper provides support for ACID transactions and supports all CRUD operations. Additionally, it provides support for advanced relationships (1-N, N-N, Hierarchical, Ad-hoc), pre-fetching, unlimited joins, reverse lookup, filtering, grouping, and sorting.

Communications between the server and its clients are handled over HTTP using the Bayeux protocol, which is implemented using Faye. This allows both pull and push requests, in a scalable and low-latency fashion. This protocol is also used to implement core components of our clustering architecture (Cf. Clustering below). On top of this protocol, STOIC added multiple levels of data compression with libraries like lz-string in order to reduce the payload size of messages. In many cases, a compression rate of 50x can be achieved, without meaningful performance overhead for compression and decompression.

On top of this communication layer, the mapper provides a way to remotely expose objects from the server to the client. This remote method invocation layer exposes server-side objects as plain old JavaScript objects on the client, in a fashion similar to what Meteor is doing. This provides a very generic language-native meta-API for all objects managed by the server, which dramatically reduces the amount of middleware code that needs to be written when using these objects. For scalability reasons, this API is fully asynchronous.

In order to improve performance and simplify developments, we have built a fairly sophisticated caching layer for our meta-data. This allows us to create highly-optimized caches available both server-side and client-side, through a synchronous API (instead of the standard asynchronous API mentioned above). This synchronous API dramatically reduces the amount of code that needs to be written when dealing with meta-data objects like Objects, Fields, Datatypes, Forms, Views, Controls, or Widgets, and makes this code much simpler to comprehend for junior developers who are not yet experts in asynchronous programming.

Connectors to third-party systems and applications can be developed as internal JavaScript libraries deployed on the same Node.js runtime, or external services deployed remotely. In the latter case, these connectors can be written in any language (not just JavaScript). When written in JavaScript, they can communicate with the main server using the native mapper API. When written in other languages, they can communicate with the main server using REST APIs. Moving forward, language-specific bindings will be developed in order to provide a mapper-like API for other languages such as Java, Objective C, Python, and Ruby. The ability to deploy connectors on multiple runtimes contributes to making the architecture as distributed and scalable as possible.

The STOIC server provides its own framework, built on top of control flow libraries like Async.js and Parseq, middleware libraries like Connect, dependency injection managers like rewire, functional programming libraries like Lo-Dash, and low-level frameworks like Express.

Clustering Architecture
STOIC supports deployment of both database and middleware on a cluster. Deployment of PostgreSQL on a cluster can be done using VMware vFabric Data Director (additional configuration work might be required). For its part, Elasticsearch supports clustering natively. Deployment of the Node.js server on a cluster is currently undergoing testing and certification. This effort should be completed by the end of October 2013. As mentioned above, connectors to third-party systems and applications can also be deployed on separate servers for increased levels of scalability and fault tolerance. As a result, STOIC’s end-to-end architecture is linearly scalable and does not present any single point of failure (as far as we can tell).

Multi Data Center Architecture
STOIC can be configured to be deployed across multiple data centers for disaster recovery purposes (custom configuration work required). Nevertheless, this architecture assumes that one data center is used as master, while the other data centers are used as slaves.

A fully distributed deployment with no single master could be considered by using Cassandra as primary database. Nevertheless, this would require the refactoring of certain server-side components such as the scheduling engine and the workflow engine. It would also add significant performance overhead, and would dramatically increase the complexity of the overall architecture. It is not yet clear that any realistic use cases would justify such an investment.

Multi-Tenancy Architecture
STOIC natively supports two deployment models for multi-tenancy:

  1. Multiple single-tenant instances
  2. Multiple multi-tenant instances

According to the first model, each and every tenant has its own instance made of a Node.js server (or cluster), PostgreSQL server (or cluster), Elasticsearch server (or cluster), and Redis server (or cluster). This complex architecture is packaged as a single unit deployed on top of Cloud Foundry. STOIC developed advanced provisioning tools in order to simplify the management of such deployments. As a result, over 300 instances could be managed by a single person dedicating less than one hour a day to this activity. This deployment model provides the highest level of isolation across tenants and the highest level of scalability. Nevertheless, it requires the dedication of a minimum amount of hardware resources to each and every tenant, leading to an increased marginal cost of provisioning.

According to the second model, multiple tenants can be deployed on a single instance of the STOIC platform. This instance is usually larger than the ones used for the first model and can accommodate tens to hundreds of tenants on the same instance. In order to provide isolation across tenants, each tenant is deployed on a separate schema on PostgreSQL (if used) and a separate set of indexes on Elasticsearch, while sharing the same Node.js runtime with all other tenants served by the instance. This deployment model has the benefit of having a much lower marginal cost of provisioning for new tenants, especially ones that would require very limited amounts of hardware resources (e.g. free trials). STOIC developed advanced provisioning tools to manage both multiple instances and multiple tenants deployed on the same instance. Some of these tools are limited to a command line interface, while others also offer a complete web-based user interface developed on top of the STOIC platform itself.
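
A sketch of how per-tenant scoping might work in the second model (the index naming convention is invented for illustration):

```javascript
// Illustrative per-tenant isolation: every query issued by the shared
// Node.js runtime is scoped to the tenant's own set of indexes.
function tenantIndex(tenantId, object) {
  return `tenant_${tenantId}__${object.toLowerCase()}`;
}

function scopedSearch(tenantId, object, query) {
  // The runtime never issues a query without a tenant scope, which is
  // what keeps one tenant's data invisible to all the others.
  return { index: tenantIndex(tenantId, object), body: query };
}
```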

With the first model (multiple single-tenant instances), custom JavaScript code can be deployed on the Node.js server. With the second model (multiple multi-tenant instances), this is not permitted, because it would create the risk of one tenant getting access to the data of another tenant. That being said, this limitation should not be considered a problem, for custom code can always be deployed on remote servers with minimal performance overhead (this is the architecture we’re using for our own connectors) or on clients. The latter strategy is actually used quite often, for it increases scalability, moving compute load from servers to clients. This capability is a direct benefit of using the same language (JavaScript) for both server and client.

Authentication Architecture
Authentication is implemented with OAuth 2.0 using Passport.js and offers the following:

  • 140+ authentication strategies
  • Single sign-on with OpenID and OAuth
  • Built-in handling of success and failure
  • Persistent sessions
  • Dynamic scope and permissions
  • Dynamic strategies
  • Custom strategies
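
The strategy model above follows the pluggable pattern popularized by Passport.js. Here is a minimal, dependency-free sketch of that pattern; all names are illustrative and this is not STOIC's actual code:

```javascript
// Minimal sketch of the pluggable-strategy pattern used by Passport.js.
// All names are illustrative; this is not STOIC's actual code.
class Authenticator {
  constructor() { this.strategies = {}; }
  // Register a named strategy (Passport exposes this as passport.use()).
  use(name, verify) { this.strategies[name] = verify; return this; }
  // Run one strategy against a request and report success or failure.
  authenticate(name, req) {
    const verify = this.strategies[name];
    if (!verify) return { ok: false, error: "unknown strategy" };
    const user = verify(req);
    return user ? { ok: true, user } : { ok: false, error: "unauthorized" };
  }
}

const auth = new Authenticator();
// A "local" strategy checking credentials carried by the request.
auth.use("local", (req) =>
  req.username === "ismael" && req.password === "stoa" ? { name: "ismael" } : null
);

const result = auth.authenticate("local", { username: "ismael", password: "stoa" });
```

Custom strategies plug into the same registration point, which is what makes supporting 140+ of them tractable.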

Connectors for LDAP and Active Directory are planned for the first half of 2014.

Authorization Architecture
Authorization is handled by a proprietary Authorization Engine based on the following model:

  • Application: The application from which the permission is granted (for packaging)
  • Actors: The users, groups, or roles to which the permission is granted
  • Actions: The set of actions authorized by the permission
  • Resources: The set of resources to which the permission applies
  • Scope: Record or Entity

In relation to this model, we support multi-dimensional hierarchical inheritance:

  • Upward inheritance through the Role’s Parent hierarchy
  • Downward inheritance through the Group’s Parent hierarchy
  • Downward inheritance through the Action’s implicit hierarchy
  • Downward inheritance through the Resource’s optional hierarchy
  • Downward inheritance through the Resource’s contextualization
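
A permission check against this model can be sketched as follows. The data shapes and the action hierarchy below are illustrative assumptions, not STOIC's actual schema:

```javascript
// Hedged sketch of a permission check against the model above.
// Shapes and hierarchies are illustrative, not STOIC's actual schema.
const permissions = [
  { actors: ["role:editor"], actions: ["write"], resources: ["entity:Invoices"], scope: "Entity" },
];

// Downward inheritance through the Action's implicit hierarchy:
// granting "write" implicitly grants "read".
const impliedActions = { write: ["write", "read"], read: ["read"] };

function isAllowed(actorSet, action, resource) {
  return permissions.some((p) =>
    p.actors.some((a) => actorSet.includes(a)) &&
    p.actions.some((granted) => impliedActions[granted].includes(action)) &&
    p.resources.includes(resource)
  );
}

// A user holding the editor role can read invoices via the implicit hierarchy.
const canRead = isAllowed(["user:alice", "role:editor"], "read", "entity:Invoices");
```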

This engine is powered by objects such as Groups, Roles, and Users, which themselves can be connected to third-party systems such as LDAP or Active Directory. Additionally, custom authorization schemes can be supported by connecting the engine to an external entitlement management system. That being said, such a scenario should be limited to use cases that positively require it, for it might create significant performance overhead.

The authorization model supported by STOIC is designed to handle the vast majority of entitlement scenarios usually found in business applications. This versatility comes at a cost though, for some permission settings might create significant performance overhead, especially with regard to reporting and analytics. In order to work around this challenge, STOIC is currently developing a secure data analysis sandbox allowing ad-hoc Elasticsearch indexes to be created on the fly and populated with authorized data very rapidly. During the data population phase, all permission rules are evaluated against the imported data to ensure that the data analyst who requested it has the right entitlements. During the subsequent data analysis phase, the Authorization Engine is disabled, thereby ensuring the highest level of performance. During this phase, all queries are logged on a separate audit trail, thereby enabling post-session forensics. This secure data analysis sandbox should become available in the first half of 2014.

The monitoring and logging of transactions is handled at the lowest-possible level, usually down to individual keystrokes and mouse clicks. This logging is captured in a change log managed by Elasticsearch on each and every instance. Additionally, certain logs can be centralized on a separate instance. This is particularly useful to monitor the utilization level of certain product features and to handle A/B testing scenarios. Logging is handled using logstash and Kibana, which are directly deployed on top of Elasticsearch.

Meta-Data Architecture
Technically speaking, STOIC is a semologic system, which combines semantics and logics into a single platform. It is the first platform of its kind allowing domain experts to create business applications by simply describing data models, business rules, and workflows, while letting software developers add code whenever and wherever they see fit.

What makes STOIC work is its ability to model semantics and logics through a very small set of primitives. Most business concepts and entities can be modeled with objects holding data through fields. These fields are strongly typed through datatypes, which give the system a deep understanding of the data being manipulated by objects. Logic is defined through functions and rules. Time is handled by actions, triggers, and workflows. These eight primitives can be combined in virtually infinite ways to represent the vast majority of business scenarios. Everything else is automatically and canonically generated by the platform, including user interfaces and application programming interfaces.

This level of abstraction is supported by an equally-important aspect of virtualization, which isolates the platform and its users from any technical details that might be subject to change over time. This includes communication protocols, database technologies, and external applications. All these peripheral utilities are fully virtualized through extensible application programming interfaces, allowing one to easily switch between service providers.

Overall, STOIC’s most critical design strategy is the radical simplification of the abstraction primitives described above. For example, overly complex concepts of object-oriented programming such as inheritance or polymorphism have been deliberately excluded from the platform’s design. Similarly, the workflows modeled and executed with STOIC might only offer a fraction of the patterns supported by modern BPM systems. Nevertheless, we found that by cleverly combining these simpler abstraction primitives, one can handle the vast majority of realistic scenarios that one will encounter during one’s career.

In everything we do, we try to heed Albert Einstein’s advice:

"Everything should be made as simple as possible, but not simpler."

Meta-Data Lifecycle Management
Meta-data is a critical component of the STOIC platform, for the following reasons:

  • STOIC does not really make any distinction between data and meta-data.
  • Most parts of the platform are built with meta-data.
  • Virtually every piece of meta-data is designed to be customizable and extensible.

In order to address these challenges, all data and meta-data provided by STOIC is stored in a first Elasticsearch index, while all data and meta-data created by users is stored in a second index. Whenever a record provided by STOIC is modified by a user, it is duplicated across both indexes, using the same UUID. This deceptively simple architecture allows the mapper to keep track of changes very easily. And when the time comes to update customer instances with a brand new version of the meta-data, all that is required is to create a new index with the new meta-data, then point to this new index instead of the old one. From there, duplicate records are merged according to a small set of rules that are executed either within the datastore (Elasticsearch) or within the server (Node.js), depending on their complexity. This approach brings quite a few benefits:

  • Super-simple update process (#1 Copy index file. #2 Update index pointer. Done!)
  • Instantaneous rollover (as fast as updating an index pointer)
  • Easy rollback (in case something went wrong)
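
The update process above can be sketched as a pointer swap. In Elasticsearch this could rely on index aliases; the store object below is a dependency-free stand-in for the cluster, and all names are illustrative:

```javascript
// Dependency-free simulation of the index-pointer update described above.
// In Elasticsearch the pointer could be an index alias; the store below
// stands in for the cluster, and all names are illustrative.
const store = {
  indexes: { "stoic-meta-v1": { status: "live" } },
  aliases: { "stoic-meta": "stoic-meta-v1" },
};

function upgrade(store, newIndexName, newMeta) {
  store.indexes[newIndexName] = newMeta;        // #1 copy the new index in
  const old = store.aliases["stoic-meta"];
  store.aliases["stoic-meta"] = newIndexName;   // #2 swap the pointer
  return old;                                   // kept around for easy rollback
}

const previous = upgrade(store, "stoic-meta-v2", { status: "live" });
// Rollback is just another pointer swap:
// store.aliases["stoic-meta"] = previous;
```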

Furthermore, this multi-index architecture can be extended to multiple levels of meta-data custody. By default, STOIC is configured to support two main custodians (STOIC and customers), but additional custodians could be added, such as independent software vendors developing applications on top of the STOIC platform or systems integrators customizing such applications for the specific needs of their own customers.

Additionally, the STOIC architecture is designed with advanced concepts and mechanisms that strengthen the life-cycle management of meta-data. For example, most user interface components, business rules, and workflows are externalized through fully normalized meta-data, making them customizable (changes) and extensible (additions). Also, most meta-data elements such as Objects or Fields have human-readable identifiers made of a custodian-specific namespace and a keyword. This allows the canonical generation of APIs while removing all risks of naming collisions, in a fully distributed environment. As a result, a customer could customize an application developed by an independent software vendor (ISV) on top of the STOIC platform and deployed by a third-party systems integrator (SI), without having to worry about future updates that could be made by STOIC, the ISV, or the SI.

Our meta-data is designed in such a way that customization and extension are possible for virtually every piece of meta-data. This required that most concepts be modeled as fully normalized objects, while only the most complex and domain-specific data structures were modeled as JSON objects. These rare exceptions do not support the same level of customization and extension, yet they are systematically defined with complete JSON schemas, which will facilitate their normalization should such customizations and extensions become required.

Middleware Life-Cycle Management
The STOIC platform is developed using the GitHub versioning platform and the Jenkins continuous integration platform. Software updates can be pushed from the service provider to the customer, or pulled by the customer. The aggressive externalization of most data structures and business logic components into software-independent meta-data minimizes critical interdependencies between software code and meta-data.

Nevertheless, some software updates might require changes to the meta-data provided by the platform or to the data created by customers. In such cases, data migration scripts are added to software updates and executed automatically whenever updates are applied to instances.

Additionally, the STOIC platform provides built-in database backup and recovery functions that can be invoked automatically before software updates are applied. The multi-index architecture used for managing the lifecycle of meta-data makes such backup and recovery a very simple and foolproof process. Lastly, actual backups can be stored locally or remotely.

Instance Life-Cycle Management
Instance management is handled through a native STOIC application called STOIC Provisioning made of objects such as Domains, Tenants, Agreements, Namespaces, and Settings. It allows multiple Domains to be created, with different settings and upgrade strategies. It also facilitates the management of End User License Agreements with support for non-repudiation procedures and the enablement of core functionalities based on contracts or profiles.

Through the use of domains, multiple tiers of customer instances can be created that follow different upgrade schedules. Low tiers get upgrades often and early, while higher tiers get them less often and later. This strategy enables a distributed Quality Assurance process whereby significant amounts of testing and debugging work is actually delegated to customers and partners who are willing to participate in this effort in return for getting upgrades early.

The life-cycle management of instances is handled by scripts deployed on top of the Cloud Foundry Platform as a Service layer. This allows the deployment of the STOIC platform on either public or private cloud. That being said, our actual dependency on Cloud Foundry is quite limited, and a similar architecture could be developed on top of another platform should the need for an alternative arise, even though we’re perfectly happy with Cloud Foundry as it stands today.

API Management
The STOIC platform is highly meta-data centric. As a result, it does not provide a traditional object-specific API, but rather a totally generic meta-API that is shared by all objects handled by the platform, both standard and custom. Consequently, the need for API management is dramatically reduced compared to more traditional platforms.

Furthermore, all objects that need to be exposed through programmable APIs offer three fields that can be used for identification: UUID, Name, and Identifier. The Identifier field is the one used for the generation of canonical APIs. As a result, the Name values of records can be changed at any time without any impact on APIs. This further reduces the need for API management.
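
The role of the Identifier field can be sketched as follows. The namespace:keyword format and the URL shape are assumptions made for illustration, not STOIC's documented conventions:

```javascript
// Hedged sketch of canonical API generation from Identifier fields.
// The namespace:keyword format and the URL shape are assumptions made
// for illustration, not STOIC's documented conventions.
function canonicalRoute(identifier) {
  const [namespace, keyword] = identifier.split(":");
  return `/api/${namespace}/${keyword}`;
}

// Renaming the object (its Name field) never touches this route,
// because only the stable Identifier participates in it.
const route = canonicalRoute("crm:contacts");
```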

Nevertheless, there usually comes a time when API management is needed, especially when significant changes are made to meta-data, resulting in the obsolescence of earlier API versions. In order to handle such scenarios, we are planning to integrate the Swagger API documentation engine (or develop a similar framework), and to add an API broker that would support API versioning. These developments are planned for the second half of 2014. Additionally, we are considering some integration with the Apigee platform.

Through its meta-API, STOIC can easily be integrated with external workflow and BPM engines. By the same token, the STOIC platform also offers such capabilities natively. In our quest for simplicity, we’ve taken a pretty radical approach to workflow though. Instead of orchestrating processes on top of objects, we’re building workflows directly within objects. Essentially, this inversion of control removes the need for complex data mappings, which makes workflow accessible to virtually anyone.

Not only is this model for workflow rather unusual, its implementation is entirely novel. First, it’s all built in JavaScript (by forking Stately.js). Second, the engine is architected in such a way that it can run either on the server (Node.js) or on the client (Web Browser). This allows us to make the user interface highly interactive, while reducing server load.

Our model for workflow is deceptively simple. At a minimum, workflows are defined with:

  1. A list of steps that a given object’s workflow can be in
  2. A set of transitions that are possible from any given step

Additionally, developers can specify:

  • A set of optional rules that dictate which transitions are allowed
  • A set of automated actions that are executed when reaching certain steps
  • A definition of which users, roles, or groups can perform transitions
  • A definition of due dates for every step in the workflow
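
The model above can be sketched as a small state machine: named steps, allowed transitions, and an optional guard rule per transition. The names and shapes are illustrative, not the actual API of the engine (which forks Stately.js):

```javascript
// Minimal sketch of the workflow model above: named steps, allowed
// transitions, and an optional rule per transition. Names are
// illustrative, not the engine's actual API.
const taskWorkflow = {
  initial: "Assigned",
  steps: {
    Assigned:  { Complete: "Completed", Cancel: "Canceled" },
    Completed: {},
    Canceled:  {},
  },
  // Optional rules dictating which transitions are allowed.
  rules: { Complete: (record) => record.assignee != null },
};

function transition(workflow, step, verb, record) {
  const target = workflow.steps[step] && workflow.steps[step][verb];
  if (!target) throw new Error(`"${verb}" is not allowed from "${step}"`);
  const rule = workflow.rules[verb];
  if (rule && !rule(record)) throw new Error(`rule rejected "${verb}"`);
  return target;
}

const next = transition(taskWorkflow, "Assigned", "Complete", { assignee: "alice" });
```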

All this can be done from a single form, without having to learn any graphical notation. From there, a flowchart depicting the workflow is automatically generated, thereby teaching business users about this way of describing workflows.

Granted, such a simple model won’t accommodate all possible workflow patterns, but exhaustiveness is not our goal. Functionality and simplicity are. In such a context, we will add simple extensions that make our basic Workflow engine powerful enough to address more sophisticated scenarios. For example, by attaching workflows to objects, we’re implicitly tying workflow instances to object records. As a result, we don’t have any notion of parallel flows. But this does not prevent us from doing things in parallel, either by using Workflow Checklists or by having a workflow spawn multiple records of other objects that have their own workflows and can be synchronized back with the primary workflow. Not entirely simple, but totally doable, and definitely much simpler than having to deal with multi-threaded processes.

Another important aspect of our workflow is usability. For example, we spent a lot of time thinking about how workflows should materialize themselves within our user interface. While flowcharts look cool on screenshots, they’re actually not that useful in practice. Instead, what matters to users is to know which state a workflow is in at a given point in time, and which transitions are allowed. For example, if a task has been assigned and can be either completed or canceled, the user interface should show three things: a drop-down list showing the task’s status (Assigned), and two buttons showing the two possible transitions (Complete and Cancel). Ideally, the Complete button should be green, and the Cancel button red (like Pivotal Tracker does).

Ultimately, what really matters is to make it very easy for developers to specify the different steps of a workflow, including two verbs for every step, one in the past tense being used as step name (like “Assigned”), and one in the present tense being used for buttons (like “Complete”), plus some rendering information (such as colors). That plus the set of transitions that are allowed from any step is all you need to design an effective workflow and automatically generate a great user interface for it. And it will be enough to address the vast majority of use cases.

Batch Processes and Scheduling
Through its meta-API, STOIC can easily be integrated with external batch processing and scheduling engines. By the same token, the STOIC platform also offers such capabilities natively. Through a few sets of dedicated objects (Batches, Actions, Operations, Jobs, and Schedules), the STOIC platform lets developers create and manage batch processes that can be applied to multiple records of multiple objects. The set of records that a batch applies to is simply defined as a view using the View Editor, thereby benefiting from its advanced filtering rules. Schedules are handled by a Node.js binding to CRON.
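
The batch model above can be sketched as an operation applied to every record selected by a view's filter. The object shapes below are illustrative, not STOIC's actual schema:

```javascript
// Hedged sketch of the batch model above: a Batch applies an Operation to
// every record selected by a view's filtering rule. Shapes are illustrative.
const records = [
  { id: 1, status: "open", amount: 10 },
  { id: 2, status: "closed", amount: 20 },
  { id: 3, status: "open", amount: 30 },
];

const batch = {
  view: (r) => r.status === "open",               // the view's filtering rule
  operation: (r) => ({ ...r, status: "archived" }), // the operation to apply
};

function runBatch(batch, records) {
  return records.map((r) => (batch.view(r) ? batch.operation(r) : r));
}

const result = runBatch(batch, records);
```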

Code Extension Points
The STOIC platform provides multiple code extension points:

Formula Fields
The easiest place to add custom code is within Formula Fields. Any field of any object can be turned into a formula field, and its value is dynamically calculated using Formula.js, a clone of the formula functions offered by Microsoft Excel and Google Spreadsheets. Fields of the same record for which a formula field is calculated can be referenced by the formula. Optionally, users can manually override a formula field’s computed value.
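
A formula field's evaluation can be sketched as a function over the sibling fields of the same record. The expression below is modeled as a plain function for illustration; the real platform evaluates Formula.js expressions:

```javascript
// Hedged sketch of a formula field: the field's value is computed from
// sibling fields of the same record. The formula is modeled here as a
// plain function; the real platform evaluates Formula.js expressions.
function evaluateFormulaField(record, formula) {
  return formula(record);
}

const invoice = { quantity: 4, unitPrice: 25, taxRate: 0.5 };
const total = evaluateFormulaField(
  invoice,
  (r) => r.quantity * r.unitPrice * (1 + r.taxRate)
);
```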

Formula Functions
The set of functions offered by Formula.js can be extended by writing plain JavaScript functions. And because these are just functions, there is no API to speak of. New functions can also be created by combining existing Formula.js functions, without having to write a single line of JavaScript code.
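
Both extension styles can be sketched as follows. The function names are illustrative; Formula.js provides Excel-style functions such as SUM:

```javascript
// Sketch of extending the formula function set with plain JavaScript.
// Function names are illustrative of Formula.js's Excel-style functions.
const formulas = {
  SUM: (...xs) => xs.reduce((a, b) => a + b, 0),
  COUNT: (...xs) => xs.length,
};

// A brand-new function written in plain JavaScript (no API to speak of):
formulas.SLUGIFY = (s) => s.trim().toLowerCase().replace(/\s+/g, "-");

// A new function composed purely from existing ones, with no new logic:
formulas.AVERAGE = (...xs) => formulas.SUM(...xs) / formulas.COUNT(...xs);

const avg = formulas.AVERAGE(2, 4, 6);
const slug = formulas.SLUGIFY("  Advanced Relationships ");
```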

Object Actions
Another place where custom code can be executed in relation to a record is within Object Actions. There, code is written in JavaScript and executed on the client side (Web Browser). It can use most of the client-side libraries that we’re using as foundations.

Workflow Actions
Workflows also support the execution of custom code, in the context of workflow actions, which can be executed when a workflow is transitioned to a particular step. Workflow actions are implemented in JavaScript and executed on the server-side within a secure container.

Agent Actions
Agents allow users to link actions to triggers, in a fashion similar to IFTTT or Zapier. Developers can create custom actions using JavaScript code executed on the server-side within the same secure container used for Workflow Actions.

Connectors
As described above, developers can create their own connectors, using any framework and programming language. Connectors can be deployed anywhere (including on premises) and communicate with the server through a REST API.

Form Controls
Forms displayed by the Record View can be extended with custom Form Controls. These rather simple user interface components are implemented in JavaScript using AngularJS, jQuery UI, and Twitter Bootstrap. As you would expect, they’re executed on the client-side.

Perspectives
The Object View can be extended with custom Perspectives. Much like form controls, perspectives are implemented in JavaScript using AngularJS, jQuery UI, and Twitter Bootstrap, and are executed on the client-side.

Charts
Custom Charts can be developed using JavaScript (D3.js recommended).

Widgets
Widgets are simple HTML5 components allowing web developers to create custom user interfaces using the STOIC Platform on the back-end, and virtually any web technology on the front-end. Much like Form Controls, widgets are implemented using AngularJS, jQuery UI, and Twitter Bootstrap. They’re executed on the client-side, from any HTML page.

IDE Integration
Because JavaScript is such a dynamic language, there is not much that can be offered by an Integrated Development Environment (IDE). As a result, STOIC does not recommend any particular development tool. Instead, most debugging takes place within the web browser itself (Google Chrome is the best option for that), and code can be developed with any text editor. Additionally, the following tools are highly recommended:

  • GitHub for code versioning
  • Jenkins for continuous integration
  • Chai and Mocha for testing automation
  • AngularJS Karma for user interface testing

For customers staying within the bounds of the STOIC framework and not planning to write any custom code, the STOIC user interface provides a set of tightly integrated tools that run directly within the web browser and are largely powered by the platform itself. These include:

  • The Spreadsheet Importer to automatically convert spreadsheets into applications
  • The Object Editor to develop custom objects
  • The Field Editor to configure fields
  • The Workflow Editor to design custom object workflows
  • The View Editor to configure custom object views
  • The Chart Editor to configure interactive charts
  • The Dashboard Editor to design interactive dashboards
  • The Code Editor for small snippets of custom code

CMS Integration
Through the componentization of our user interface into easily embeddable HTML5 widgets, the STOIC platform can be integrated with virtually any Content Management System (CMS). Additionally, STOIC provides its own CMS, built as a native application on top of the platform, around a simple set of objects (Pages, Pageviews). This CMS is built using AngularJS and supports advanced templating, hierarchical inheritance, remote analytics (Google Analytics, etc.), native analytics (using the Pageviews object), and advanced meta-data management.

Web User Interface
STOIC provides two separate user interfaces: a web user interface optimized for desktops, laptops, and tablets, and a mobile user interface optimized for smartphones. The web user interface is implemented using AngularJS and over 50 JavaScript libraries, plus a carefully curated collection of 3,400 icons (including all Dutch Icon icons) packaged as a set of platform-independent icon fonts using the Font Custom icon font generator.

Small amounts of jQuery code have been written but are being phased out incrementally and replaced by native AngularJS code as the AngularJS platform is gaining in functionality and maturity, especially with respect to animations. Similarly, small amounts of Twitter Bootstrap code have been written as well, but our dependency on this framework is being reduced as we are strengthening our own presentation layer.

The user interface is designed to be entirely localizable and themeable (using Bootstrap themes). That being said, these two features were not included in our Minimum Viable Product (MVP) and will be implemented immediately after our first GA release.

Mobile User Interface
The mobile user interface is implemented as an HTML5 application packaged into native application shells using PhoneGap (aka Cordova). This mobile user interface provides support for native smartphone features such as Camera and GPS. It is being regularly tested on all major mobile platforms including Android, iOS, and Windows Mobile.

The mobile user interface provides only a subset of the features offered by the web user interface and focuses on end-users versus developers. As a result, most of the advanced tools offered by the STOIC platform are available only from the web user interface. This should not be considered as a severe limitation though, for very few use cases would require a mobile user to modify an object schema or a workflow directly from a smartphone. Instead, the mobile user interface focuses on data entry, data analysis, event notification, and group collaboration.

Additionally, connectors for third-party applications that have bespoke user interfaces optimized for specific scenarios are being developed as well. This includes connectors for email servers, calendar services (like Google Calendar), and task management services. Most of these connectors will become available by the end of the year (2013).

Offline Access
From the very beginning, the STOIC platform was designed to support offline access. This feature is currently under development and should become available in the first or second quarter of 2014. It is made possible by the fact that the entire platform is written using the JavaScript programming language and that the entire server can actually be packaged as a mobile application running in standalone fashion on smartphones or tablets, using local storage.

In order to support offline access, all records of all objects have a CID field. This field is populated with a unique identifier every time a record is checked out. Whenever a client requests changes to a record, the value of the CID field provided by the client is compared to the one stored by the server. If they match, the transaction is executed and the CID for the record is refreshed by the server. If they do not, it indicates that changes were made earlier by other clients for the same record, and the request is declined. This optimistic locking algorithm is simpler to implement than more advanced ones based on vector clocks and enables a relatively straightforward data synchronization process, with minimal user intervention.
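
The CID-based check can be sketched as follows. The data structures and names are illustrative, not STOIC's actual implementation:

```javascript
// Sketch of the CID-based optimistic locking described above.
// Data structures and names are illustrative.
let nextCid = 1;
const server = { record: { id: "r1", cid: "cid-0", title: "Draft" } };

function update(server, clientCid, changes) {
  if (clientCid !== server.record.cid) {
    // Another client's changes landed first: decline the request.
    return { ok: false, reason: "stale CID" };
  }
  Object.assign(server.record, changes);
  server.record.cid = `cid-${nextCid++}`; // refresh the CID on success
  return { ok: true, cid: server.record.cid };
}

const first = update(server, "cid-0", { title: "Final" });  // accepted
const second = update(server, "cid-0", { title: "Oops" });  // declined: stale CID
```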

As part of this architecture, the user interface also needs to provide an easy way for users to select the sets of records they want to check out whenever they’re about to go offline. In order to facilitate this process, the user interface will implement various strategies, including:

  • All records owned by the user
  • All records ever accessed by the user
  • All records directly related to any included record
  • All records of a particular application, module, or object
  • Only records to which the user is entitled

STOIC is currently working with a partner planning to use the platform for mobile data capture in rural areas of India. This scenario positively mandates support for offline access and will provide large-scale deployment and testing of the architecture in the first half of 2014.

Reporting and Analytics
STOIC provides a complete set of APIs, feeds, and export tools in order to integrate with various third-party reporting and analytics tools. Additionally, STOIC provides its own set of tools for reporting and analytics, including:

  • The View Editor to create custom views on objects
  • The Chart Editor to facilitate the creation of interactive charts
  • The Dashboard Editor to create dashboards that combine multiple views and charts

These tools are fully WYSIWYG and serialize their models in fully documented and schema-driven JSON formats. Furthermore, the output generated by these tools is systematically componentized into user interface widgets (usually one line of HTML5 code) that can be embedded into any web page by adding a single JavaScript library in its header. This includes views, charts, and dashboards. Last but not least, views can be rendered through multiple perspectives, depending on the schema of their related objects. STOIC provides a dozen perspectives out of the box, with a dozen more on our mid-term roadmap (6 to 9 months).

Here is a subset of the perspectives being offered today:

  • Calendar
  • Columns
  • Dendrogram
  • Gantt
  • Grid
  • Gridtree
  • Kanban
  • List
  • Map
  • Mindmap
  • Series
  • Tiles
  • Timeline

The reporting engine is implemented on top of Elasticsearch, using its powerful JSON-based Query DSL. This DSL is extended with our proprietary Expression Engine, which allows the processing of expressions based on the Formula.js re-implementation of all 450+ Excel formula functions, extended with 150+ functions derived from the Lo-Dash library in order to support callbacks (for asynchronous calls like database lookups) and multi-dimensional data processing (vectors and matrices).
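
The combination can be sketched as an Elasticsearch-style query object carrying an embedded expression. The `expression` extension point shown below is illustrative of the proprietary Expression Engine, not its actual syntax:

```javascript
// Hedged sketch of an Elasticsearch-style query carrying an embedded
// expression. The "expression" field is illustrative of the proprietary
// Expression Engine's extension point, not its actual syntax.
function buildQuery(objectName, expression) {
  return {
    // Standard Query DSL portion: a JSON structure defining the query.
    query: { bool: { filter: [{ term: { object: objectName } }] } },
    // Proprietary extension point: a Formula.js expression evaluated
    // per document by the engine running inside the database.
    expression: expression,
  };
}

const q = buildQuery("invoices", 'IF(SUM(lines.amount) > 1000, "large", "small")');
```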

Moving forward, STOIC will also provide a sophisticated prediction API supporting over 130 statistical distributions and multiple inference algorithms. This development will be part of the STOIC Analytics application and will take place in 2014.

That should do it for now. If you have any questions, send them our way!