
Data Layer Improvements (1/DLI)

Status: draft

Data Layer Improvements (DLI) defines a process for replacing the existing Flux implementation that handles data within Troposphere, the user interface to Atmosphere(1), and for improving the overall handling of response data from the Atmosphere(1) API.

Change Process

This specification is modeled on the Consensus-Oriented Specification System (COSS) (see "Consensus-Oriented Specification System"). Changes shall be in accordance with the statements outlined in COSS.

Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 (see "Key words for use in RFCs to Indicate Requirement Levels").

Goals

DLI aims to remove the existing Flux implementation and replace it with a robust implementation.

This Specification looks to address the following challenges:

  • Fetching Conveyance
  • Optimistic Updates
  • Error Handling
  • Caching
  • Filtering & Pagination

These challenges contribute to, and describe aspects of, the issues with the existing implementation. They are explored in more detail within the "Common Problems" section.

This Specification states the following goals as the minimum criteria for a robust, new implementation:

  1. Ensure there is a distinction between a data fetch returning "no data" and a data fetch resulting in "error" (sometimes called the "Null Data Render" pattern).
  2. Provide a solution to Instance state "flop back" (where an operation is requested and subsequent polling responses put the Instance back into its prior final state).
  3. Give developers more control during debugging.
  4. Reduce the number of lines of code & complexity (e.g. "boilerplate").

The current implementation has caching, filtering, and pagination functionality. A new implementation must also have this functionality; a new implementation that lacks it is neither robust nor valid.

Quality Aspirations

The goal of DLI is to increase the "testability" of the data layer and the components that depend on values held & returned by the data layer. We define "high quality" primarily thus:

  • the code is easy to read and understand (even if you are not familiar with the libraries, frameworks, & patterns).
  • the code is easy to reuse, either in partial or constituent definitions.
  • the code is easy to write correctly, and errors are rapidly found/fixed.

Common Problems

With this Specification we minimally aim to address the problems associated with the existing data layer, while enabling agility, rapid feature development, and a better development experience. Some common problems are:

  • Fetching problems (Null Data Render, etc.)
  • Issues Managing Optimistic state ("Flop Back")
  • Handling Error States
  • Managing, Evicting Cache
  • Dealing with large result sets
  • Avoid re-fetching data

Fetching problems (Null Data Render, etc.)

The "Null Data Render" pattern implemented in the current data layer makes it so the application can be progressively rendered as data is received asynchronously. However, the "lack" is used to mean that a component might be waiting for a fetch to return, or there might not be any data to return, or there may be an error (and nothing will be returned).

A solution to this common problem could be called "Fetching Conveyance".

Fetching Conveyance:

  • MUST be able to indicate when data is being fetched
  • MUST be able to indicate when data was found
  • MUST be able to indicate when data was NOT found
  • MUST be able to indicate when data already exists, but could be refreshed
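
One way to satisfy these requirements could be to model the fetch lifecycle explicitly rather than overloading "no data". The sketch below is hypothetical (none of these type names exist in the current codebase); a discriminated union lets components distinguish "fetching", "found", "not found", and "stale but refreshable" states.

```typescript
// Hypothetical fetch-state model; names are illustrative, not part of Troposphere.
type FetchState<T> =
  | { status: "fetching" }               // request in flight, no prior data
  | { status: "found"; data: T }         // data returned by the API
  | { status: "notFound" }               // fetch succeeded, but there is no data
  | { status: "stale"; data: T }         // data already exists, but could be refreshed
  | { status: "error"; error: Error };   // fetch failed (see Error Handling)

// A component can branch on status instead of guessing from the "lack" of data.
function describe<T>(state: FetchState<T>): string {
  switch (state.status) {
    case "fetching": return "Loading…";
    case "found":    return "Ready";
    case "notFound": return "No results";
    case "stale":    return "Ready (refresh available)";
    case "error":    return `Failed: ${state.error.message}`;
  }
}
```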

Issues Managing Optimistic state ("Flop Back")

A common problem is that an Instance in a final state can "flop back" to that final state when an operation is requested on it. This happens when eager state transitions are injected into the user interface and subsequent polling responses clobber (overwrite) them.

A solution to this would be to make it easy to define "optimistic updates". Some stores in the current implementation define optimistic updates, but their definition and management have been a source of bugs and issues. Providing "optimistic updates" would also be a significant enabling feature for indicating progress, transitions, and pending actions on resources.

Optimistic Updates:

  • MUST be able to inject data into the application before server confirmation
  • MUST be able to manage when data is optimistic (e.g. being created, updated, or deleted)
  • MUST be able to reconcile success & failure of operations
    • On success, it MUST be possible to merge results with optimistic data to reflect the latest state of the system.
    • On failure, it MUST know of the failure, roll back to the pre-merge state, and offer to retain the optimistic data
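
As a minimal sketch of how this reconciliation might look, assuming a hypothetical bookkeeping structure keyed by Instance id (none of these names come from the existing stores):

```typescript
// Hypothetical optimistic-update bookkeeping; illustrative only.
interface Instance { id: string; status: string; }

interface OptimisticEntry {
  confirmed: Instance;   // last server-confirmed state
  optimistic: Instance;  // eagerly injected state shown in the UI
}

const pending = new Map<string, OptimisticEntry>();

// Inject an eager state transition before server confirmation.
function applyOptimistic(confirmed: Instance, optimistic: Instance): void {
  pending.set(confirmed.id, { confirmed, optimistic });
}

// On success, the server result supersedes the optimistic data.
function reconcileSuccess(result: Instance): Instance {
  pending.delete(result.id);
  return result;
}

// On failure, roll back to the confirmed state (optionally retaining the
// optimistic data so the user can retry).
function reconcileFailure(id: string, retainOptimistic = false): Instance | undefined {
  const entry = pending.get(id);
  if (!entry) return undefined;
  pending.delete(id);
  return retainOptimistic ? entry.optimistic : entry.confirmed;
}

// Polling responses consult `pending` so they do not clobber optimistic state
// and "flop" the Instance back to its prior final state.
function applyPollingResponse(polled: Instance): Instance {
  const entry = pending.get(polled.id);
  return entry ? entry.optimistic : polled;
}
```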

Handling Error States

A common problem is that a fetch of data has failed. The components see this as "no data" returned and enter an infinite "spinning loader" view. No "retry" re-fetches the data, so the view remains "spinning" until the community member runs out of patience and refreshes (or closes the tab).

Error Handling:

  • MUST be able to indicate when errors have occurred
  • MUST be able to communicate errors back to the community member
  • MUST be able to offer server error messages in an understandable context to assist in a corrective action
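
As a sketch, and assuming a fetch-state model like the one above, errors could carry both a display message and the raw server detail so views can present a corrective action instead of spinning forever. The field names here are illustrative assumptions, not existing API.

```typescript
// Hypothetical error shape; field names are illustrative.
interface DataLayerError {
  kind: "network" | "server" | "validation";
  message: string;        // message suitable for display to the community member
  serverDetail?: string;  // raw server error, to assist in a corrective action
  retry?: () => void;     // lets the view offer a retry instead of an endless spinner
}

// Offer the server's message in an understandable context when available.
function toDisplay(error: DataLayerError): string {
  return error.serverDetail
    ? `${error.message} (server said: ${error.serverDetail})`
    : error.message;
}
```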

Managing/Evicting Cache

A common problem is that the community member triggers a situation in which all cached data needs to be "purged". The existing stores have some semantics that attempt this, but the results can be unpredictable and clumsy. Singling out a specific value to "purge", or evict, is harder (and often forces a full "purge").

A solution for managing cached data and allowing for fine-grain control for eviction would enable a more robust data layer.

Caching:

  • MUST be able to cache data
  • MUST be able to retrieve from cache
    • (this includes filtered & paginated data)
  • MUST be able to distinguish when data was cached or fetched
    • (relates to Fetching Conveyance)
  • MUST be able to invalidate cache
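
A hypothetical cache interface satisfying these requirements might look like the sketch below; keying on resource name plus query parameters means filtered & paginated results are cacheable too, and the `fetchedAt` timestamp lets callers distinguish cached data from freshly fetched data. All names are assumptions for illustration.

```typescript
// Hypothetical cache shape; illustrative only.
interface CacheEntry<T> {
  data: T;
  fetchedAt: number;  // lets callers distinguish cached vs freshly fetched data
}

class DataCache {
  private entries = new Map<string, CacheEntry<unknown>>();

  // Key includes query parameters so filtered & paginated results are distinct entries.
  private key(resource: string, params: Record<string, string> = {}): string {
    return `${resource}?${new URLSearchParams(params).toString()}`;
  }

  get<T>(resource: string, params?: Record<string, string>): CacheEntry<T> | undefined {
    return this.entries.get(this.key(resource, params)) as CacheEntry<T> | undefined;
  }

  set<T>(resource: string, data: T, params?: Record<string, string>): void {
    this.entries.set(this.key(resource, params), { data, fetchedAt: Date.now() });
  }

  // Evict a single entry ("local"), or everything ("global").
  invalidate(resource?: string, params?: Record<string, string>): void {
    if (resource === undefined) {
      this.entries.clear();
    } else {
      this.entries.delete(this.key(resource, params));
    }
  }
}
```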

Functionality related to invalidating cache could be expanded to allow for "whole application" invalidation and "per view" invalidation. For the purposes of discussing this aspect of caching, we make the following definitions:

  • "whole application" invalidation is called global cache invalidation
  • "per view" invalidation is called local cache invaldiation

When considering global cache invalidation, a set of "rules" MAY emerge. An example could be: refetching (all) data every H hours. This rule might have variations that limit what data is refetched or change the time-duration on which the refetch happens.

When considering local cache invalidation, a set of "rules" MAY map to views that MUST be refetched. An example could be: refetching data is REQUIRED every time the community member navigates to a particular view (such as a dashboard or allocation usage view).
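
Such rules could be expressed as small predicates over cache entries and views. The following is a hypothetical sketch of a global "every H hours" rule and a local "refetch on navigation" rule; the view names and threshold are illustrative assumptions.

```typescript
// Hypothetical invalidation rules; names and thresholds are illustrative.
const HOUR_MS = 60 * 60 * 1000;

// Global rule: refetch (all) data every H hours.
function isGloballyStale(fetchedAt: number, maxAgeHours = 4): boolean {
  return Date.now() - fetchedAt > maxAgeHours * HOUR_MS;
}

// Local rule: certain views always refetch when navigated to.
const alwaysRefetchViews = new Set(["dashboard", "allocation-usage"]);

function shouldRefetchOnNavigate(view: string): boolean {
  return alwaysRefetchViews.has(view);
}
```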

Dealing with "large" result sets

A common problem is that an Image is searched for and the results are so numerous that they cannot be rendered within a single page. Many resources modeled by the API will present the user interface with result sets that are difficult to render in a single page.

It MUST be possible to narrow the count of result sets. It MUST be possible to present result sets in an "explorable" manner, such that a community member could "page through" the resources to find the one they are looking for.

Filtering & Pagination:

  • MUST be able to query the represented data (in predicate form)
  • MUST be able to get partial results, by parameter (pages)
    • this includes pageSize and start/stop indices
  • MUST provide a strategy for traditional pagination, along with infinite scrolling
  • MUST provide a strategy to inject new data into views
    • (related to Optimistic Updates)
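
A hypothetical query shape covering predicate filtering, page-based pagination, and start/stop indices could look like the sketch below; server-side equivalents would translate the same parameters into API query strings. The names are illustrative assumptions.

```typescript
// Hypothetical query parameters; illustrative only.
interface Query<T> {
  filter?: (item: T) => boolean;  // predicate-form filtering
  page?: number;                  // traditional pagination
  pageSize?: number;
  start?: number;                 // alternatively, start/stop indices (e.g. infinite scroll)
  stop?: number;
}

// Apply a query to an already-fetched collection.
function applyQuery<T>(items: T[], q: Query<T>): T[] {
  const filtered = q.filter ? items.filter(q.filter) : items;
  if (q.page !== undefined && q.pageSize !== undefined) {
    const first = (q.page - 1) * q.pageSize;
    return filtered.slice(first, first + q.pageSize);
  }
  if (q.start !== undefined || q.stop !== undefined) {
    return filtered.slice(q.start ?? 0, q.stop);
  }
  return filtered;
}
```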

Avoid re-fetching data

A common problem is fetching data that is already present within the application.

Caching:

  • MUST be able to detect when data is already present
  • MUST be able to distinguish when data was cached or fetched
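
A minimal sketch of avoiding duplicate fetches, assuming a cache like the one above plus tracking of in-flight requests (the names here are illustrative assumptions):

```typescript
// Hypothetical fetch de-duplication; illustrative only.
const inFlight = new Map<string, Promise<unknown>>();

async function fetchOnce<T>(key: string, doFetch: () => Promise<T>): Promise<T> {
  // Re-use an in-flight request instead of issuing a duplicate fetch.
  const existing = inFlight.get(key);
  if (existing) return existing as Promise<T>;

  const request = doFetch().finally(() => inFlight.delete(key));
  inFlight.set(key, request);
  return request;
}
```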

Provisional Replacement

The new data layer should resolve current issues and position the PROJECT (Troposphere) to deliver value to the supported community.

A replacement must address the Goals stated in this document. It must help resolve the stated common problems.

Reference Implementation

An implementation will be built alongside the existing data layer to test and facilitate evaluation of the "replacement process". A "cut-over" can be done when DLI is fully capable of supporting necessary operations.