
Notes on Architecture


Notes on cobudget architecture

As of May 26, 2017

Frontend design

As explained by Rob, the frontend stores data in an in-memory database, LokiJS. This database is populated by API calls to the backend.

Backend/Frontend communication

All API calls that respond with data do so by sending one or more named sets of objects, such as the users, subscription_trackers, groups, buckets, contributions and memberships sets seen in the examples below.

All objects have IDs, and links between objects are expressed through these object IDs. A sketch of a response body follows the examples.

Examples

(Group 41 is Enspiral)

  • The GET Group call (api/v1/groups/41) responds with one groups object.
  • The GET Contributions call (api/v1/contributions?group_id=41) responds with one users object, one subscription_trackers object and one contributions object.
  • The GET Buckets call (api/v1/buckets?group_id=41) responds with 43 users objects, 43 subscription_trackers objects, one groups object and 102 buckets objects.
  • The GET Memberships call (api/v1/memberships?group_id=41) responds with 243 users objects, 243 subscription_trackers objects, one groups object and 243 memberships objects.
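To make the shape of these responses concrete, here is a hedged sketch of what a GET Buckets response could look like, written as the Ruby hash the backend would serialize to JSON. The field names inside each object and the example values are illustrative assumptions, not taken from the actual serialisers.

```ruby
# Illustrative response shape for GET /api/v1/buckets?group_id=41.
# Each top-level key is a named set; objects reference each other by ID.
response = {
  groups: [
    { id: 41, name: "Enspiral" }
  ],
  users: [
    { id: 7, name: "Example User" }      # 43 of these in the real call
  ],
  subscription_trackers: [
    { id: 7, user_id: 7 }                # 43 of these as well
  ],
  buckets: [
    { id: 501, group_id: 41, user_id: 7, # 102 of these
      name: "Example bucket" }
  ]
}
# The controller would then render this, e.g. render json: response
```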

Backend database access patterns

The backend stores all data in PostgreSQL. Data is accessed solely through Rails ActiveRecord.
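For reference in the sketches below, the relevant ActiveRecord associations presumably look roughly like this. The class and association names are inferred from the API objects above, not copied from the codebase.

```ruby
# Assumed associations between the core models (illustrative only).
class Group < ActiveRecord::Base
  has_many :memberships
  has_many :buckets
end

class Membership < ActiveRecord::Base
  belongs_to :group
  has_many :allocations
end

class Bucket < ActiveRecord::Base
  belongs_to :group
  has_many :contributions
end
```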

Lazy loading of associated objects

ActiveRecord loads objects lazily by default, so an object is loaded through a SQL query only when its information is needed. This leads to a behaviour where associated objects are loaded one at a time, as needed, which in turn generates lots of small SQL queries to load each object separately rather than loading all of them at once. This is known as the N + 1 query problem.

As API responses typically contain entire sets of objects, the default lazy read behaviour is not optimal.
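A minimal sketch of the difference, using the assumed Bucket/Contribution association from above (the amount column name is also an assumption):

```ruby
# Lazy (default): one query for the buckets, then one aggregate query
# per bucket when its contributions are touched - the N + 1 pattern.
Bucket.where(group_id: 41).each do |bucket|
  puts bucket.contributions.sum(:amount)        # SELECT SUM(...) per bucket
end

# Eager: preload the contributions up front and sum them in memory,
# so the whole loop costs a fixed number of queries.
Bucket.where(group_id: 41).includes(:contributions).each do |bucket|
  puts bucket.contributions.map(&:amount).sum   # no extra query here
end
```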

Model object attributes that access other tables

Several of the model objects in the backend have attributes that return information requiring reads from other database tables. Examples:

  • total_contributions on the Bucket model. This returns the sum of contributions to this bucket, and does a query on the contributions table to get the result.
  • total_allocations on the Membership model. This returns the sum of money this user has allocated to this group, and does a query on the allocations table to get the result.

Most of these attributes are serialized when the object is used for an API response. This means all the related SQL queries are executed - one query per attribute value per object.
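A hedged sketch of why this multiplies queries, using total_allocations as described above (the method body and the amount column are assumptions):

```ruby
class Membership < ActiveRecord::Base
  has_many :allocations

  # One SELECT SUM(...) against the allocations table per call.
  def total_allocations
    allocations.sum(:amount)
  end
end

# A serializer exposing total_allocations for the 243 Enspiral
# memberships therefore issues 243 separate aggregate queries:
Membership.where(group_id: 41).map(&:total_allocations)
```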

Some API calls are quite slow

For example, the GET Memberships call on the Enspiral group generates in excess of 1500 SQL queries.

Architectural properties

  • All the Rails magic that makes this work couples the database tightly with the model objects, the model objects tightly with the serialisers, and the serialisers tightly with the frontend. The tight coupling of the entire chain makes it easy to make changes in the database and see them propagate to the frontend.
    • This is a great feature for building prototypes. It's easy to change things to experiment with new features and see the effects after a very short time.
    • On the other hand, it is hard to make changes in one part of the code without having to change everything. Optimising for faster queries, or changing some of the database schema, will therefore be quite hard and require a lot of work.
  • The default Rails behaviour coupled with the chosen design makes the application unbearably slow for even a moderate number of users and/or buckets in a group.
  • Optimising this slowness away is possible, but requires careful investigation of access patterns and adding eager load directives at specific points. It feels like Rails and ActiveRecord are more in the way than helping. This might be helped by using the frontend database more (more on this later).
  • The database in the frontend is not being used.
    • Almost all navigation in the frontend triggers API calls to fetch all the objects needed to render the displayed page. We might as well get the information directly from the API response(s), not from the frontend database.
    • Attributes that make simple computations and/or subqueries do this on the backend, which then sends the computed property to the frontend. The frontend database's capability to do these computations is not taken advantage of.
  • The current API responses seem specifically tailored to the existing frontend. Again, this is great for experimentation and prototyping of features, but not so much for other kinds of interfaces or for building other types of frontends.

Notes on the Loomio architecture

According to Rob, Loomio is built on the same basic pattern with PostgreSQL, Rails and a frontend in-memory database. However, Loomio also has what seem to be significant differences.

The Loomio backend proactively sends changes in the data to the frontend using WebSockets. This means the frontend database is a constantly updated reflection of a well defined part of the backend's stored data. The frontend database can then be used as the primary data source for displaying information in the frontend.
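I have not looked at how Loomio implements this, but in a Rails application such pushes could be sketched with ActionCable roughly as follows. The channel name, stream name and payload are made up for illustration.

```ruby
# Hypothetical sketch: broadcast changed buckets to subscribed clients,
# which merge the payload into their in-memory collections.
class BucketsChannel < ApplicationCable::Channel
  def subscribed
    stream_from "group_#{params[:group_id]}_buckets"
  end
end

class Bucket < ActiveRecord::Base
  after_commit :broadcast_change

  private

  def broadcast_change
    ActionCable.server.broadcast(
      "group_#{group_id}_buckets",
      buckets: [as_json(only: [:id, :group_id, :name])]
    )
  end
end
```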

It's also possible to use the frontend database to make the specific computations that are currently done by model attributes in the cobudget backend. Whether Loomio actually does this, I don't know.

A few more notes on the use of a frontend database:

  • It seems to minimise the necessary communication between backend and frontend, which is great, especially in a mobile context.
  • Mobile browsers have pretty restrictive memory limits. Before adopting this pattern it's worth investigating whether we would bump up against those limits - but then again, they will likely ease over time.
  • If the frontend database is used extensively for computations and validation, it becomes unclear where the business logic is actually executed. Doing this in the frontend might be beneficial from a performance standpoint, but it makes it harder to build new types of frontends while ensuring the same business rules are applied.