It’s now official: we will support Cassandra as an alternative datastore sometime next year.
Originally, all our meta-data was defined and stored in a single spreadsheet. Over time, we felt that we had to modularize things a bit better, and we decided to use one spreadsheet per application. After a while, some of our meta-data became too large for the Platform spreadsheet, and we started to pull some of it out and store it in a separate Resources spreadsheet. Yesterday, we decided to fully modularize our spreadsheets and to allow users to externalize any piece of content they want into individual files.
To better understand how it will work, one needs to understand the different kinds of data and meta-data we have to handle. Now that we have a better grasp of it all, we’ve started to classify our structured data into four main categories:
- Über-data (Objects, Fields, Relationships)
- Meta-data (44 objects of Platform)
- Reference data (Countries, Currencies, etc.)
- Business data (Companies, Contacts, etc.)
With our current design, all über-data and meta-data is defined in the Platform spreadsheet. When packaged into a single Excel spreadsheet, its size is 765KB, which is not tiny, but is not large either. Using the ODS Open Document format, it’s even smaller, taking only 623KB of space. And if we were to deduplicate all Bootstrap fields, its size would be a third of what it is right now.
Then, we have 32 spreadsheets stored in a Resources folder on Google Drive that contain records for reference data objects, as well as test records for business data, which we refer to as test data. Because the reference data needs to be deployed on all customer instances, we reference it from the Platform spreadsheet, by using a new field of the Objects object called Datasource.
We deal with test data in a totally different manner, because it’s not really part of the product that we ship. Unlike reference data, it’s not referenced from the Platform spreadsheet. Instead, each file used to externalize test data references its related object, currently by storing a small JSON object as a note added to the A1 cell of the single sheet contained in each test data spreadsheet.
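As an illustration, such a note could look like the fragment below (the exact keys and values are assumptions, not the actual schema):

```json
{
  "object": "Contacts",
  "datasource": "test"
}
```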
This modular packaging allows us to implement a very simple import process for our structured data, starting with the Platform spreadsheet which references its required reference data, then optionally adding all test data spreadsheets to our internal testing instances.
All this should work next week.
We’re going to integrate Kue into our platform for managing job queues.
This morning, Jim and Jacques-Alexandre started prototyping a ground-breaking architecture for distributed connectors. The idea is that some connectors might require lots of hardware resources, and you don’t want to overload your primary cluster with them. In order to make them scale better, we defined an architecture allowing the deployment of individual connectors on external servers, either locally or remotely. And to make things even more scalable, connectors that need scheduling can run their own Cron scheduler, so that the scheduler of the main cluster does not become a bottleneck.
Jim committed a piece of code that will allow us to push any changes made to data and meta-data onto all connected clients, instantly. Once we take advantage of this brand new feature from our user interface, it will give us a user experience similar to the one offered by Google Apps, where any change made by a user to a document is instantly visible to all other users looking at the same document. Our user interface will be refactored incrementally in order to implement this feature, and we hope to be done with it sometime in October or November.
Since we started working on the STOIC platform eighteen months ago, we’ve been very keen on making sure that meta-data behaves pretty much the same way as business data. In fact, for the longest time, there was no way to really distinguish one from the other.
As the platform matured though, their respective life-cycles started to diverge. For example, when we added support for meta-data caching, we had to explicitly indicate which objects would be included into this cache. This implicitly considered some objects as being part of the meta-data. Similarly, when we started to implement our Commit process, we had to identify a subset of these meta-data objects as special cases that require explicit commit operations.
Coming back to our original idea, treating meta-data and business data alike had clear benefits. For one, it allowed us to use the same canonical user interface for both. In other words, from the viewpoint of developers and users, meta-data and business data are the same thing. But from the viewpoint of the implementers of the platform (STOIC employees), they’re quite different, for rather good reasons. Clearly, we needed a way to reconcile both sets of requirements.
Today, we know that we want them to be both different and the same, all at once.
Then, as we started to implement our meta-data update framework to support cascading levels of meta-data custody, we realized that such a capability was required only for meta-data, not business data. The reason for it is very simple: while the platform vendor (STOIC), software vendors developing packaged applications on top of it, systems integrators customizing these applications to suit the needs of their customers, and customers configuring these applications could all make changes to meta-data, only customers (referred to as end custodians) would need to create and manage actual business data. As a result, the multi-custodian meta-data life-cycle could be applied to meta-data only, and we could pretty much ignore the concept of custodian for business data. This sudden reduction of scope opened the door to many opportunities for simplification and optimization, which we’re now taking full advantage of.
This is especially important because we’re doing all that work while finishing the implementation of our distributed meta-data cache and adding support for clustering. Taken individually, caching, clustering, and custody are hard enough to implement. Put together, they’re like rocket science, and the more you can simplify, the better a chance you have of ever making it work.
With that in mind, we’re now streamlining the end-to-end data lifecycle. Here is how it will work.
First, we’re separating data from meta-data entirely. Meta-data is defined as the records of the objects for which Cached (a field of the Objects object) is set to TRUE. All records of these objects will be part of our meta-data cache (mdCache). This cache will have two versions: one for servers, containing all fields of cached objects, and one for clients, containing only the fields for which Cached (a field of the Fields object) is set to TRUE.
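Concretely, the client-side cache can be derived from the server-side one by filtering on these two flags. A minimal sketch of that derivation (object, field, and record names are made up for illustration):

```javascript
// Hypothetical meta-data definitions, keyed by object and field name.
const objectDefs = { Fields: { cached: true }, Invoices: { cached: false } };
const fieldDefs = {
  Fields: { name: { cached: true }, internal: { cached: false } },
};

// Server-side mdCache: every record of every cached object, all fields kept.
function buildServerCache(recordsByObject) {
  const cache = {};
  for (const [object, records] of Object.entries(recordsByObject)) {
    if (objectDefs[object] && objectDefs[object].cached) cache[object] = records;
  }
  return cache;
}

// Client-side mdCache: same records, stripped down to cached fields only.
function buildClientCache(serverCache) {
  const cache = {};
  for (const [object, records] of Object.entries(serverCache)) {
    cache[object] = records.map((record) => {
      const slim = {};
      for (const [field, value] of Object.entries(record)) {
        if (fieldDefs[object][field] && fieldDefs[object][field].cached) {
          slim[field] = value;
        }
      }
      return slim;
    });
  }
  return cache;
}

const server = buildServerCache({
  Fields:   [{ name: 'Amount', internal: 'x17' }],
  Invoices: [{ number: 42 }], // business data: never cached
});
const client = buildClientCache(server);
```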
Second, we’re creating one schema on PostgreSQL or one index on Elasticsearch for each and every custodian, according to the architecture described in this previous post, but these schemas or indexes are used for meta-data only. We then create a separate schema or index for business data, used by the end custodian only.
Third, we acknowledge the fact that any changes made to meta-data by upstream custodians (custodians other than the end custodian) follow a different lifecycle than changes made by the end custodian. The former are traditionally called upgrades, while the latter are called configurations, customizations, or extensions. The former happen rather infrequently in a very controlled environment, while the latter happen on a daily basis, in a very ad hoc fashion. For this reason, they can be implemented very differently: the former is implemented by simply replacing a schema or index file with a new one, while the latter is implemented with incremental updates.
Fourth, we implement a cluster-friendly incremental update process for all updates made to meta-data by the end custodian. For these, we build an aggregated image of the meta-data by combining the meta-data schemas or indexes of all custodians, according to simple overloading rules. Usually, precedence is taken by changes made by the custodian who is the most downstream in the custody chain. Then, we deploy this meta-data in memory on all servers and clients, and make sure that they remain synchronized at all times.
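The overloading rule itself is simple: walk the custody chain from the most upstream custodian to the most downstream one, letting each layer override the previous ones record by record. A minimal sketch, assuming layers are keyed by record identifier (the identifiers and record shapes below are made up):

```javascript
// Merge the meta-data layers of all custodians, ordered from most upstream
// (e.g. the platform vendor) to most downstream (the end custodian).
function aggregateMetaData(layers) {
  const merged = {};
  for (const layer of layers) {
    for (const [id, record] of Object.entries(layer)) {
      merged[id] = record; // later (more downstream) layers take precedence
    }
  }
  return merged;
}

const stoic    = { 'objects.invoice': { label: 'Invoice', color: 'grey' } };
const isv      = { 'objects.invoice': { label: 'Invoice', color: 'blue' } };
const customer = { 'objects.invoice': { label: 'Bill',    color: 'blue' } };

const aggregated = aggregateMetaData([stoic, isv, customer]);
```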
Fifth, to keep everything synchronized, incremental updates to meta-data are first applied to a persistent copy of the aggregated meta-data stored by PostgreSQL or Elasticsearch. The meta-data is kept consistent through locking, which today is implemented in an optimistic fashion through the use of Change Identifiers (CIDs), but might be migrated to a pessimistic locking mechanism if we decide that it would improve the overall end-user experience. And we make sure that the internal structure of our meta-data cache supports incremental updates in a robust and high performance fashion, by getting rid of extraneous cross-references that were added to it.
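Optimistic locking with CIDs works along these lines: every update carries the CID it was based on, and is rejected if the stored record has moved on in the meantime. A simplified sketch (the actual implementation sits in front of PostgreSQL or Elasticsearch; the store and record names here are hypothetical):

```javascript
// In-memory stand-in for the persisted aggregated meta-data.
const store = new Map([['objects.invoice', { cid: 1, label: 'Invoice' }]]);

// Apply an incremental update only if the caller saw the latest CID.
function update(id, baseCid, changes) {
  const current = store.get(id);
  if (current.cid !== baseCid) {
    return { ok: false, reason: 'stale CID, reload and retry' };
  }
  store.set(id, { ...current, ...changes, cid: current.cid + 1 });
  return { ok: true };
}

const first  = update('objects.invoice', 1, { label: 'Bill' });    // succeeds
const second = update('objects.invoice', 1, { label: 'Facture' }); // stale: CID is now 2
```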
As a result of this architecture, the complete refreshing of our meta-data cache will happen a lot less frequently than it has so far. In fact, it will be limited to instances where meta-data needs to be upgraded by upstream custodians, or when clients go back online after some period of offline activity (once we add full support for offline access). This should improve performance while reducing the latency of both server operations and client interactions.
That’s pretty much all for now. If you followed me so far, good for you. If you did not, don’t worry. You don’t really have to understand any of this, unless you’re planning to deploy the STOIC platform at a very large scale. All you should know is that this stuff is what makes it work.
We’re currently in discussions with a potential OEM partner. This deal is quite strategic for both parties, hence we’re engaged in a thorough due diligence process. Through our discussions, many interesting questions have been raised. Here is a summary of the most interesting ones, which could be of interest to other OEM partners, or to large customers planning to develop complex applications on top of the STOIC platform. This post is the longest ever published on this blog and repackages pieces that were published earlier.
STOIC is designed to be database agnostic. Its core server is built around an advanced object data-mapper capable of supporting both SQL and NoSQL datastores. Furthermore, the server is designed to support a hybrid datastore architecture whereby multiple databases can be used at the same time, allowing different objects to be stored in different databases, or the same objects to be stored in multiple databases at the same time in order to benefit from different indexing, querying, and searching capabilities.
At the present time, STOIC has been successfully deployed in two main configurations:
- PostgreSQL as the primary database, coupled with Elasticsearch for indexing and searching
- Elasticsearch as a standalone database
The first configuration brings the best of both worlds: full SQL semantics with PostgreSQL and advanced searching capabilities with Elasticsearch. The second configuration reduces complexity and overhead. Our experience is that Elasticsearch has now reached a sufficient level of maturity for it to be used as a standalone database, instead of being used as a simple indexing and search engine coupled to a primary database. Today, this standalone configuration is our preferred deployment option, even though we’re maintaining support for the hybrid configuration as well, which is required by some of our customers.
In the past twelve months, we’ve also successfully deployed the full STOIC platform on top of MongoDB as a proof of concept. This helped us demonstrate the generic nature of our object data mapper. Nevertheless, we did not have an actual need for this configuration, and we are not supporting it anymore. We might revive it if some need for it arises in the future.
In the coming months, we will also add support for Cassandra as a primary datastore, coupled with Elasticsearch for indexing and searching. This configuration should allow us to support deployment across multiple data centers, in a fully distributed fashion (no single master). It will require significant development effort though, and will add a significant amount of complexity. As a result, it will be reserved for our largest deployments. This development is fully funded by one of our customers.
Based on past experience, adding support for a new database requires 2 to 6 person-months’ worth of development and testing effort. As our object data mapper gains more and more capabilities, the amount of effort required to support new databases might slightly increase. Adding support for a new SQL database is usually easier than adding support for a NoSQL database, because SQL provides a standard query language and because we are not using any stored procedures. That being said, we need support for advanced primitive types that are supported by Oracle and SQL Server but are not offered by MySQL. Therefore, it is highly unlikely that we will ever provide support for MySQL.
By the same token, replacing Elasticsearch with another indexing and searching engine would be a much more significant endeavor. This is due to the fact that we’re taking advantage of many advanced features offered by Elasticsearch, such as the ability to execute queries against multiple indexes at once, which is a core component of the architecture we implemented to manage the lifecycle of our meta-data (more on this later). Therefore, Elasticsearch should be considered an integral and mandatory component of our architecture.
Additionally, we are using Redis as a key-value store for managing user sessions.
Communications between the server and its clients are handled over HTTP using the Bayeux protocol, which is implemented with Faye. This allows both pull and push requests, in a scalable and low-latency fashion. This protocol is also used to implement core components of our clustering architecture (cf. Clustering below). On top of this protocol, STOIC added multiple levels of data compression with libraries like lz-string in order to reduce the payload size of messages. In many cases, a compression ratio of 50x can be achieved, without meaningful performance overhead for compression and decompression.
In order to improve performance and simplify development, we have built a fairly sophisticated caching layer for our meta-data. This allows us to create highly-optimized caches available both server-side and client-side, through a synchronous API (instead of the standard asynchronous API mentioned above). This synchronous API dramatically reduces the amount of code that needs to be written when dealing with meta-data objects like Objects, Fields, Datatypes, Forms, Views, Controls, or Widgets, and makes this code much simpler to comprehend by junior developers who are not yet experts in asynchronous programming.
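The difference is mostly ergonomic, but significant. A sketch contrasting the two styles (method and field names are assumptions, not the actual API):

```javascript
// Asynchronous datastore access: every lookup needs a callback.
function findFieldAsync(objectName, fieldName, callback) {
  // Simulate an asynchronous round-trip to the datastore.
  setImmediate(() => callback(null, { object: objectName, name: fieldName }));
}

// Synchronous cache access: meta-data is already in memory, so lookups are
// plain function calls, which keeps calling code flat and readable.
const mdCache = {
  fields: { 'Invoices.amount': { object: 'Invoices', name: 'amount' } },
  getField(objectName, fieldName) {
    return this.fields[`${objectName}.${fieldName}`];
  },
};

const field = mdCache.getField('Invoices', 'amount');
```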
The STOIC server provides its own framework, built on top of control flow libraries like Async.js and Parseq, middleware libraries like Connect, dependency injection managers like rewire, functional programming libraries like Lo-Dash, and low-level frameworks like Express.
STOIC supports deployment of both database and middleware on a cluster. Deployment of PostgreSQL on a cluster can be done using VMware vFabric Data Director (additional configuration work might be required). For its part, Elasticsearch supports clustering natively. Deployment of the Node.js server on a cluster is currently undergoing testing and certification. This effort should be completed by the end of October 2013. As mentioned above, connectors to third-party systems and applications can also be deployed on separate servers for increased levels of scalability and fault tolerance. As a result, STOIC’s end-to-end architecture is linearly scalable and does not present any single point of failure (as far as we can tell).
Multi Data Center Architecture
STOIC can be configured to be deployed across multiple data centers for disaster recovery purposes (custom configuration work required). Nevertheless, this architecture assumes that one data center is used as master, while the other data centers are used as slaves.
A fully distributed deployment with no single master could be considered by using Cassandra as primary database. Nevertheless, this would require the refactoring of certain server-side components such as the scheduling engine and the workflow engine. It would also add significant performance overhead, and would dramatically increase the complexity of the overall architecture. It is not yet clear that any realistic use cases would justify such an investment.
STOIC natively supports two deployment models for multi-tenancy:
- Multiple single-tenant instances
- Multiple multi-tenant instances
According to the first model, each and every tenant has its own instance made of a Node.js server (or cluster), PostgreSQL server (or cluster), Elasticsearch server (or cluster), and Redis server (or cluster). This complex architecture is packaged as a single unit deployed on top of Cloud Foundry. STOIC developed advanced provisioning tools in order to simplify the management of such deployments. As a result, over 300 instances could be managed by a single person dedicating less than one hour a day to this activity. This deployment model provides the highest level of isolation across tenants and the highest level of scalability. Nevertheless, it requires the dedication of a minimum amount of hardware resources to each and every tenant, leading to an increased marginal cost of provisioning.
According to the second model, multiple tenants can be deployed on a single instance of the STOIC platform. These instances are usually larger than the ones used for the first model and can accommodate tens to hundreds of tenants on the same instance. In order to provide isolation across tenants, each tenant is deployed on a separate schema on PostgreSQL (if used) and a separate set of indexes on Elasticsearch, while sharing the same Node.js runtime with all other tenants served by the instance. This deployment model has the benefit of a much lower marginal cost of provisioning for new tenants, especially ones that would require very limited amounts of hardware resources (e.g. free trials). STOIC developed advanced provisioning tools to manage both multiple instances and multiple tenants deployed on the same instance. Some of these tools are limited to a command line interface, while others also offer a complete web-based user interface developed on top of the STOIC platform itself.
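In practice, isolation in the second model boils down to routing every query to the tenant’s own schema or set of indexes. A minimal sketch of such a routing rule (the naming convention is hypothetical, not the one we actually use):

```javascript
// Map a (tenant, object) pair to the Elasticsearch index (or PostgreSQL
// schema) that holds that tenant's records. Naming convention is assumed.
function tenantIndex(tenant, object) {
  return `${tenant.toLowerCase()}-${object.toLowerCase()}`;
}

// Every query is scoped to the calling tenant; tenants share the Node.js
// runtime but never share an index or schema.
function scopedQuery(tenant, object, query) {
  return { index: tenantIndex(tenant, object), body: query };
}

const q = scopedQuery('Acme', 'Contacts', { match_all: {} });
```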
Authentication is implemented with OAuth 2.0 using Passport.js and offers the following:
- 140+ authentication strategies
- Single sign-on with OpenID and OAuth
- Built-in handling of success and failure
- Persistent sessions
- Dynamic scope and permissions
- Dynamic strategies
- Custom strategies
Connectors for LDAP and Active Directory are planned for the first half of 2014.
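The list above works because Passport.js treats each authentication method as a named, pluggable strategy that is selected at request time. A stripped-down sketch of that pattern (this is not Passport’s actual API, and the user store is made up):

```javascript
// Minimal strategy registry, mimicking the shape of the strategy pattern.
const strategies = new Map();

function use(name, verify) {
  strategies.set(name, verify);
}

function authenticate(name, credentials) {
  const verify = strategies.get(name);
  if (!verify) throw new Error(`unknown strategy: ${name}`);
  return verify(credentials);
}

// A 'local' username/password strategy against a hypothetical user store.
use('local', ({ username, password }) =>
  username === 'jane' && password === 's3cret'
    ? { ok: true, user: username }
    : { ok: false });

const good = authenticate('local', { username: 'jane', password: 's3cret' });
const bad  = authenticate('local', { username: 'jane', password: 'nope' });
```

Swapping in single sign-on, LDAP, or a custom scheme then amounts to registering another strategy under another name, without touching the calling code.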
Authorization is handled by a proprietary Authorization Engine based on the following model:
- Application: The application from which the permission is granted (for packaging)
- Actors: The users, groups, or roles to which the permission is granted
- Actions: The set of actions authorized by the permission
- Resources: The set of resources to which the permission applies
- Scope: Record or Entity
In relation to this model, we support multi-dimensional hierarchical inheritance:
- Upward inheritance through the Role’s Parent hierarchy
- Downward inheritance through the Group’s Parent hierarchy
- Downward inheritance through the Action’s implicit hierarchy
- Downward inheritance through the Resource’s optional hierarchy
- Downward inheritance through the Resource’s contextualization
This engine is powered by objects such as Groups, Roles, and Users, which themselves can be connected to third-party systems such as LDAP or Active Directory. Additionally, custom authorization schemes can be supported by connecting the engine to an external entitlement management system. That being said, such a scenario should be limited to use cases that positively require it, for it might create significant performance overhead.
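To make the model above concrete, here is a sketch of a permission check covering just one of the five inheritance dimensions, upward inheritance through the Role’s Parent hierarchy (role names, permissions, and the data layout are all hypothetical):

```javascript
// Hypothetical role hierarchy: each role points to its Parent.
const roles = {
  CEO:     { parent: null },
  Manager: { parent: 'CEO' },
  Clerk:   { parent: 'Manager' },
};

// Permissions granted to roles (Actors), over Actions and Resources.
const permissions = [
  // A permission granted to Clerk is inherited upward by Manager and CEO.
  { actor: 'Clerk', action: 'read', resource: 'Invoices' },
];

// Walk a role's Parent chain, starting from the role itself.
function roleChain(role) {
  const chain = [];
  for (let r = role; r; r = roles[r].parent) chain.push(r);
  return chain;
}

// A role may perform an action if the permission is granted to it directly,
// or to any role below it in the hierarchy (upward inheritance).
function can(role, action, resource) {
  return permissions.some((p) =>
    p.action === action &&
    p.resource === resource &&
    roleChain(p.actor).includes(role));
}
```

The other inheritance dimensions (group, action, and resource hierarchies) follow the same walk-the-hierarchy logic, each in its own direction.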
The authorization model supported by STOIC is designed to handle the vast majority of entitlement scenarios usually found in business applications. This versatility comes at a cost though, for some permission settings might create significant performance overhead, especially with regard to reporting and analytics. In order to work around this challenge, STOIC is currently developing a secure data analysis sandbox allowing ad-hoc Elasticsearch indexes to be created on the fly and populated with authorized data very rapidly. During the data population phase, all permission rules are evaluated against the imported data to ensure that the data analyst who requested it has the right entitlements. During the subsequent data analysis phase, the Authorization Engine is disabled, thereby ensuring the highest level of performance. During this phase, all queries are logged on a separate audit trail, thereby enabling post-session forensics. This secure data analysis sandbox should become available in the first half of 2014.
The monitoring and logging of transactions is handled at the lowest-possible level, usually down to individual keystrokes and mouse clicks. This logging is captured in a change log managed by Elasticsearch on each and every instance. Additionally, certain logs can be centralized on a separate instance. This is particularly useful to monitor the utilization level of certain product features and to handle A/B testing scenarios. Logging is handled using logstash and Kibana, which are directly deployed on top of Elasticsearch.
Technically speaking, STOIC is a semologic system, which combines semantics and logics into a single platform. It is the first platform of its kind allowing domain experts to create business applications by simply describing data models, business rules, and workflows, while letting software developers add code whenever and wherever they see fit.
What makes STOIC work is its ability to model semantics and logics through a very small set of primitives. Most business concepts and entities can be modeled with objects holding data through fields. These fields are strongly typed through datatypes, which give the system a deep understanding of the data being manipulated by objects. Logic is defined through functions and rules. Time is handled by actions, triggers, and workflows. These eight primitives can be combined in virtually infinite ways to represent the vast majority of business scenarios. Everything else is automatically and canonically generated by the platform, including user interfaces and application programming interfaces.
This level of abstraction is supported by an equally-important aspect of virtualization, which isolates the platform and its users from any technical details that might be subject to change over time. This includes communication protocols, database technologies, and external applications. All these peripheral utilities are fully virtualized through extensible application programming interfaces, allowing one to easily switch between service providers.
Overall, STOIC’s most critical design strategy is the radical simplification of the abstraction primitives described above. For example, overly complex concepts of object-oriented programming such as inheritance or polymorphism have been deliberately excluded from the platform’s design. Similarly, the workflows modeled and executed with STOIC might only offer a fraction of the patterns supported by modern BPM systems. Nevertheless, we found that by cleverly combining these simpler abstraction primitives, one can handle the vast majority of realistic scenarios that one will encounter during one’s career.
In everything we do, we try to heed Albert Einstein’s advice:
"Everything should be made as simple as possible, but not simpler."
Meta-Data Lifecycle Management
Meta-data is a critical component of the STOIC platform, for the following reasons:
- STOIC does not really make any difference between data and meta-data.
- Most parts of the platform are built with meta-data.
- Virtually every piece of meta-data is designed to be customizable and extensible.
In order to address these challenges, all data and meta-data provided by STOIC is stored in a first Elasticsearch index, while all data and meta-data created by users is stored in a second index. Whenever a record provided by STOIC is modified by a user, it is duplicated across both indexes, using the same UUID. This deceptively simple architecture allows the mapper to keep track of changes very easily. And when the time comes to update customer instances with a brand new version of the meta-data, all that is required is to create a new index with the new meta-data, then point to this new index instead of the old one. From there, duplicate records are merged according to a small set of rules that are executed either within the datastore (Elasticsearch) or within the server (Node.js), depending on their complexity. This approach brings quite a few benefits:
- Super-simple update process (#1 Copy index file. #2 Update index pointer. Done!)
- Instantaneous rollover (as fast as updating an index pointer)
- Easy rollback (in case something went wrong)
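The "index pointer" behaves like an alias: readers always go through the alias, so an upgrade amounts to re-pointing it at the freshly copied index, and rollback is the reverse swap. A minimal sketch of that mechanism (index names and contents are made up):

```javascript
// Two versions of the meta-data index; readers only ever see the alias.
const indexes = {
  'metadata-v1': ['old records'],
  'metadata-v2': ['new records'],
};
let alias = 'metadata-v1';

// All reads go through the alias, never through a concrete index name.
function query() {
  return indexes[alias];
}

// Upgrade: point the alias at the new index. Returns the previous target
// so that rollback is just another swap.
function swapAlias(target) {
  const previous = alias;
  alias = target;
  return previous;
}

const before = query();
const previous = swapAlias('metadata-v2');
const after = query();
```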
Furthermore, this multi-index architecture can be extended to multiple levels of meta-data custody. By default, STOIC is configured to support two main custodians (STOIC and customers), but additional custodians could be added, such as independent software vendors developing applications on top of the STOIC platform or systems integrators customizing such applications for the specific needs of their own customers.
Additionally, the STOIC architecture is designed with advanced concepts and mechanisms that strengthen the life-cycle of meta-data. For example, most user interface components, business rules, and workflows are externalized through fully normalized meta-data, making them customizable (changes) and extensible (additions). Also, most meta-data elements such as Objects or Fields have human-readable identifiers made of a custodian-specific namespace and a keyword. This allows the canonical generation of APIs while removing all risks of naming collision, in a fully distributed environment. As a result, a customer could customize an application developed by an independent software vendor (ISV) on top of the STOIC platform and deployed by a third-party systems integrator (SI), without having to worry about future updates that could be made by STOIC, the ISV, or the SI.
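A short sketch of how namespaced identifiers remove collision risks (the namespaces and keywords below are hypothetical):

```javascript
// Build a human-readable identifier from a custodian namespace and a keyword.
function identifier(namespace, keyword) {
  return `${namespace}.${keyword}`;
}

// Two custodians can each define an 'invoice' object without colliding,
// because their identifiers live in distinct namespaces.
const registry = new Map();
for (const [ns, kw] of [['stoic', 'invoice'], ['acme', 'invoice']]) {
  const id = identifier(ns, kw);
  if (registry.has(id)) throw new Error(`collision: ${id}`);
  registry.set(id, { namespace: ns, keyword: kw });
}
```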
The design of our meta-data has been made in such a way that customizations and extensions are possible for virtually every piece of meta-data. This required that most concepts be modeled as fully normalized objects, while only the most complex and domain-specific data structures were modeled as JSON objects. These rare exceptions do not support the same level of customization and extension, yet they are systematically defined with complete JSON schemas, which will facilitate their normalization should such customizations and extensions become required.
Middleware Life-Cycle Management
The STOIC platform is developed using the GitHub versioning platform and the Jenkins continuous integration platform. Software updates can be pushed from the service provider to the customer, or pulled by the customer. The aggressive externalization of most data structures and business logic components into software-independent meta-data minimizes critical interdependencies between software code and meta-data.
Nevertheless, some software updates might require changes to the meta-data provided by the platform or to the data created by customers. In such cases, data migration scripts are added to software updates and executed automatically whenever updates are applied to instances.
Additionally, the STOIC platform provides built-in database backup and recovery functions that can be invoked automatically before software updates are applied. The multi-index architecture used for managing the lifecycle of meta-data makes such backup and recovery a very simple and foolproof process. Lastly, actual backups can be stored locally or remotely.
Instance Life-Cycle Management
Instance management is handled through a native STOIC application called STOIC Provisioning made of objects such as Domains, Tenants, Agreements, Namespaces, and Settings. It allows multiple Domains to be created, with different settings and upgrade strategies. It also facilitates the management of End User License Agreements with support for non-repudiation procedures and the enablement of core functionalities based on contracts or profiles.
Through the use of domains, multiple tiers of customer instances can be created that follow different upgrade schedules. Low tiers get upgrades often and early, while higher tiers get them less often and later. This strategy enables a distributed Quality Assurance process whereby significant amounts of testing and debugging work is actually delegated to customers and partners who are willing to participate in this effort in return for getting upgrades early.
The life-cycle management of instances is handled by scripts deployed on top of the Cloud Foundry Platform as a Service layer. This allows the deployment of the STOIC platform on either public or private cloud. That being said, our actual dependency on Cloud Foundry is quite limited, and a similar architecture could be developed on top of docker.io should the need for an alternative arise, even though we’re perfectly happy with Cloud Foundry as it stands today.
The STOIC platform is highly meta-data centric. As a result, it does not provide a traditional object-specific API, but rather a totally generic meta-API that is shared by all objects handled by the platform, for both standard and custom objects. Consequently, the need for API management is dramatically reduced compared to more traditional platforms.
Furthermore, all objects that need to be exposed through programmable APIs offer three fields that can be used for identification: UUID, Name, and Identifier. The Identifier field is the one used for the generation of canonical APIs. As a result, the Name values of records can be changed at any time without any impact on APIs. This further reduces the need for API management.
Nevertheless, there usually comes a time where API management is needed, especially when significant changes are made to meta-data, resulting in the obsolescence of earlier API versions. In order to handle such scenarios, we are planning to integrate the Swagger API documentation engine (or develop a similar framework), and to add an API broker that would support API versioning. These developments are planned for the second half of 2014. Additionally, we are considering some integration with the Apigee platform.
Through its meta-API, STOIC can easily be integrated with external workflow and BPM engines. By the same token, the STOIC platform also offers such capabilities natively. In our quest for simplicity, we’ve taken a pretty radical approach to workflow though. Instead of orchestrating processes on top of objects, we’re building workflows directly within objects. Essentially, this inversion of control removes the need for complex data mappings, which makes workflow accessible to virtually anyone.
Our model for workflow is deceptively simple. At a minimum, workflows are defined with:
- A list of steps that a given object’s workflow can be in
- A set of transitions that are possible from any given step
Additionally, developers can specify:
- A set of optional rules that dictate which transitions are allowed
- A set of automated actions that are executed when reaching certain steps
- A definition of which users, roles, or groups can perform transitions
- A definition of due dates for every step in the workflow
All this can be done from a single form, without having to learn any graphical notation. From there, a flowchart depicting the workflow is automatically generated, thereby teaching business users about this way of describing workflows.
Granted, such a simple model won't accommodate all possible workflow patterns, but exhaustiveness is not our goal. Functionality and simplicity are. With that in mind, we will add simple extensions that make our basic Workflow engine powerful enough to address more sophisticated scenarios. For example, by attaching workflows to objects, we're implicitly tying workflow instances to object records. As a result, we don't have any notion of parallel flows. But this does not prevent us from doing things in parallel, either by using Workflow Checklists, or by having a workflow spawn multiple records of other objects that have their own workflows and can be synchronized back with the primary workflow. Not entirely simple, but totally doable, and definitely much simpler than having to deal with multi-threaded processes.
Another important aspect of our workflow is usability. For example, we spent a lot of time thinking about how workflows should materialize themselves within our user interface. While flowcharts look cool on screenshots, they're actually not that useful in practice. Instead, what matters to users is to know which state a workflow is in at a given point in time, and which transitions are allowed. For example, if a task has been assigned and can be either completed or canceled, the user interface should show three things: a drop-down list showing the task's status (Assigned), and two buttons showing the two possible transitions (Complete and Cancel). Ideally, the Complete button should be green and the Cancel button red (as Pivotal Tracker does).
Ultimately, what really matters is to make it very easy for developers to specify the different steps of a workflow, including two verbs for every step, one in the past tense being used as step name (like “Assigned”), and one in the present tense being used for buttons (like “Complete”), plus some rendering information (such as colors). That plus the set of transitions that are allowed from any step is all you need to design an effective workflow and automatically generate a great user interface for it. And it will be enough to address the vast majority of use cases.
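The two-verbs-plus-color idea sketched above could be captured with step descriptors like the following; the descriptor shape and the buttonsFor helper are hypothetical, purely to illustrate how a user interface could be derived from them.

```javascript
// Hypothetical step descriptors: a past-tense name used for status display,
// a present-tense verb used for buttons, and some rendering information.
var steps = [
  { name: 'Assigned',  verb: 'Assign',   color: 'blue'  },
  { name: 'Completed', verb: 'Complete', color: 'green' },
  { name: 'Canceled',  verb: 'Cancel',   color: 'red'   }
];

var transitions = { Assigned: ['Completed', 'Canceled'] };

// Given the current step, derive the buttons the UI should render
function buttonsFor(current, transitions, steps) {
  return (transitions[current] || []).map(function (target) {
    var step = steps.filter(function (s) { return s.name === target; })[0];
    return { label: step.verb, color: step.color };
  });
}
```

From a record in the Assigned step, this yields a green Complete button and a red Cancel button, exactly the presentation described above.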
Batch Processes and Scheduling
Through its meta-API, STOIC can easily be integrated with external batch processing and scheduling engines. By the same token, the STOIC platform also offers such capabilities natively. Through a few sets of dedicated objects (Batches, Actions, Operations, Jobs, and Schedules), the STOIC platform lets developers create and manage batch processes that can be applied to multiple records of multiple objects. The set of records that a batch applies to is simply defined as a view using the View Editor, thereby benefiting from its advanced filtering rules. Schedules are handled by a Node.js binding to CRON.
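The idea of a batch whose scope is a view can be sketched as follows; the runBatch function and the view's filter property are illustrative names, not the actual Batches/Actions/Operations API.

```javascript
// Sketch: a batch applies an operation to every record matched by a view
// filter. Function and property names are illustrative, not the real API.
function runBatch(records, view, operation) {
  return records.filter(view.filter).map(operation);
}

var contacts = [
  { name: 'Ada',   country: 'UK', active: true  },
  { name: 'Alan',  country: 'UK', active: false },
  { name: 'Grace', country: 'US', active: true  }
];

// A view defined with the View Editor boils down to a filtering rule
var activeUk = {
  filter: function (r) { return r.active && r.country === 'UK'; }
};

var results = runBatch(contacts, activeUk, function (r) { return r.name; });
// results → ['Ada']
```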
Code Extension Points
The STOIC platform provides multiple code extension points:
The easiest place to add custom code is within Formula Fields. Any field of any object can be turned into a formula field, and its value is dynamically calculated using Formula.js, a clone of the formula functions offered by Microsoft Excel and Google Spreadsheets. A formula can reference the other fields of the record for which it is calculated. Optionally, users can manually override a formula field's computed value.
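A minimal sketch of how a formula field could work is shown below. The SUM implementation stands in for the Formula.js function of the same name, and the evaluateField helper with its override handling is an assumption about the mechanism, not the actual implementation.

```javascript
// Stand-in for the Formula.js SUM function (Excel-compatible semantics)
var SUM = function () {
  return Array.prototype.slice.call(arguments)
    .reduce(function (a, b) { return a + b; }, 0);
};

// Sketch: a formula field is a function over the sibling fields of the
// record, with a manual override taking precedence over the computed value.
function evaluateField(record, field) {
  if (field.override !== undefined) return field.override; // manual override
  return field.formula(record);                            // computed value
}

var invoice = { subtotal: 100, shipping: 8 };
var total = {
  formula: function (r) { return SUM(r.subtotal, r.shipping); }
};
// evaluateField(invoice, total) → 108
```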
As described above, developers can create their own connectors, using any framework and programming language. Connectors can be deployed anywhere (including on premises) and communicate with the server through a REST API.
Widgets are simple HTML5 components allowing web developers to create custom user interfaces using the STOIC Platform on the back-end, and virtually any web technology on the front-end. Much like Form Control, widgets are implemented using AngularJS, jQuery UI, and Twitter Bootstrap. They’re executed on the client-side, from any HTML page.
For developers who do write custom code, we rely on a standard toolchain:
- GitHub for code versioning
- Jenkins for continuous integration
- Chai and Mocha for testing automation
- Karma (from the AngularJS team) for user interface testing
For customers staying within the bounds of the STOIC framework and not planning to write any custom code, the STOIC user interface provides a set of tightly integrated tools that run directly within the web browser and are largely powered by the platform itself. These include:
- The Spreadsheet Importer to automatically convert spreadsheets into applications
- The Object Editor to develop custom objects
- The Field Editor to configure fields
- The Workflow Editor to design custom object workflows
- The View Editor to configure custom object views
- The Chart Editor to configure interactive charts
- The Dashboard Editor to design interactive dashboards
- The Code Editor for small snippets of custom code
Through the componentization of our user interface into easily embeddable HTML5 widgets, the STOIC platform can be integrated with virtually any Content Management System (CMS). Additionally, STOIC provides its own CMS, built as a native application on top of the platform, around a simple set of objects (Pages, Pageviews). This CMS is built using AngularJS and supports advanced templating, hierarchical inheritance, remote analytics (Google Analytics, Gaug.es, etc.), native analytics (using the Pageviews object), and advanced meta-data management.
Web User Interface
Small amounts of jQuery code have been written, but they are being incrementally phased out and replaced by native AngularJS code as AngularJS gains in functionality and maturity, especially with respect to animations. Similarly, small amounts of Twitter Bootstrap code have been written as well, but our dependency on this framework is being reduced as we strengthen our own presentation layer.
The user interface is designed to be entirely localizable and themeable (using Bootstrap themes). That being said, these two features were not included in our Minimum Viable Product (MVP) and will be implemented immediately after our first GA release.
Mobile User Interface
The mobile user interface is implemented as an HTML5 application packaged into native application shells using PhoneGap (aka Cordova). This mobile user interface provides support for native smartphone features such as Camera and GPS. It is being regularly tested on all major mobile platforms including Android, iOS, and Windows Mobile.
The mobile user interface provides only a subset of the features offered by the web user interface and focuses on end-users versus developers. As a result, most of the advanced tools offered by the STOIC platform are available only from the web user interface. This should not be considered as a severe limitation though, for very few use cases would require a mobile user to modify an object schema or a workflow directly from a smartphone. Instead, the mobile user interface focuses on data entry, data analysis, event notification, and group collaboration.
Additionally, connectors for third-party applications that have bespoke user interfaces optimized for specific scenarios are being developed as well. This includes connectors for email servers, calendar services (like Google Calendar), or task management services (like Any.do). Most of these connectors will become available by the end of the year (2013).
In order to support offline access, all records of all objects have a CID field. This field is populated with a unique identifier every time a record is checked out. Whenever a client submits changes to a record, the CID value provided by the client is compared to the one stored by the server. If they match, the transaction is executed and the server refreshes the record's CID. If they do not, another client has modified the same record in the meantime, and the request is declined. This optimistic locking algorithm is simpler to implement than more advanced ones based on vector clocks, and it enables a relatively straightforward data synchronization process with minimal user intervention.
As part of this architecture, the user interface also needs to provide an easy way for users to select the sets of records they want to check out whenever they're about to go offline. In order to facilitate this process, the user interface will implement various strategies, including:
- All records owned by the user
- All records ever accessed by the user
- All records directly related to any included record
- All records of a particular application, module, or object
- Only records to which the user is entitled
STOIC is currently working with a partner planning to use the platform for mobile data capture in rural areas of India. This scenario positively mandates support for offline access and will provide large-scale deployment and testing of the architecture in the first half of 2014.
Reporting and Analytics
STOIC provides a complete set of APIs, feeds, and export tools in order to integrate with various third-party reporting and analytics tools. Additionally, STOIC provides its own set of tools for reporting and analytics, including:
- The View Editor to create custom views on objects
- The Chart Editor to facilitate the creation of interactive charts
- The Dashboard Editor to create dashboards that combine multiple views and charts
The reporting engine is implemented on top of Elasticsearch, using its powerful Query DSL, which uses JSON to define queries. This DSL is extended with our proprietary Expression Engine, which allows the processing of expressions based on the Formula.js re-implementation of all 450+ Excel formula functions, extended with 150+ functions derived from the Lo-Dash library in order to support callbacks (for asynchronous calls like database lookups) and multi-dimensional data processing (vectors and matrices).
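For readers unfamiliar with the Query DSL, here is the kind of JSON body the reporting engine might generate for a simple filtered view. The query itself uses standard Elasticsearch constructs of that era (filtered query, term filter); the field name is hypothetical, and the proprietary expression extension is not shown.

```javascript
// Sketch of an Elasticsearch Query DSL body for a filtered view.
// The "country" field is a hypothetical example.
var query = {
  query: {
    filtered: {
      query:  { match_all: {} },              // no full-text constraint
      filter: { term: { country: 'fr' } }     // view filtering rule
    }
  },
  size: 50
};
```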
Moving forward, STOIC will also provide a sophisticated prediction API supporting over 130 statistical distributions and multiple inference algorithms. This development will be part of the STOIC Analytics application and will take place in 2014.
That should do it for now. If you have any questions, send them our way!
With the introduction of multi-indexing and clustering, what might look like minor user interface bugs are now becoming harder and harder to fix. That's because multi-indexing is making the generation and refreshing of our meta-data cache a lot more complex, and clustering is dramatically increasing our reliance on the Faye pub-sub messaging system, which brings its own share of complexity. There is not much we can do about it right now, but we expect things to improve a bit once multi-indexing and clustering gain in maturity. In the meantime, François and Florian will have to dive a lot deeper into the stack than they might want to.
We’ve recently added a few libraries to our arsenal:
- Async.js (Control-flow library)
- AWS SDK for Node.js (SDK for Amazon Web Services)
- Connect Redis (Redis session store for Connect)
- faye-redis (Redis-based backend for Faye)
- Form-Data (FormData interface)
- Grunt (Task runner)
- log4js-node (Logger)
- node-cron (Cron client module)
- node-http-proxy (HTTP proxy)
- node-imap (IMAP client module)
- Request (Simplified HTTP client)
- rewire (Dependency injection)
- SuperTest (HTTP assertions library)
Hugues is making some serious progress with clustering. Nice!
Hugues is now putting the final touches on our new clustering framework, which allows our Node.js runtime to be deployed on multiple servers, and uses Faye as underlying messaging layer. If all goes as planned, we will test it in a pre-production environment within one of our larger customers before the end of September.
This will give our middleware layer linear scalability and real-time failover. Since we already have that for the database layer by using Elasticsearch, it will make our overall architecture scalable and fault tolerant. And when we add support for Cassandra, it will allow deployment across multiple data centers in order to reduce latency and provide support for disaster recovery.