Why bother with architecture?

Answer: to reduce the human-resource cost per feature.

Mobile developers evaluate the architecture in three dimensions.

  1. Balanced distribution of responsibilities among feature actors.
  2. Testability
  3. Ease of use and maintainability
| | Distribution of Responsibility | Testability | Ease of Use |
| --- | --- | --- | --- |
| Tight-coupling MVC | ❌ | ❌ | ✅ |
| Cocoa MVC | ❌ VCs are coupled | ❌ | ✅⭐ |
| MVP | ✅ Separated View life cycle | ✅ | Fair: more code |
| MVVM | ✅ | Fair: because the View depends on UIKit | Fair |
| VIPER | ✅⭐️ | ✅⭐️ | ❌ |

Tight-coupling MVC

Traditional MVC

For example, in a multi-page web application, the page completely reloads once you press a link to navigate somewhere else. The problem is that the View is tightly coupled with both the Controller and the Model.

Cocoa MVC

Apple’s MVC, in theory, decouples View from Model via Controller.

Cocoa MVC

Apple’s MVC, in reality, encourages massive view controllers: the view controller ends up doing everything.

Realistic Cocoa MVC

It is hard to test coupled, massive view controllers. However, Cocoa MVC is the best architectural pattern in terms of development speed.

MVP

In MVP, the Presenter has nothing to do with the life cycle of the view controller, and the View can be mocked easily. We can say the UIViewController is actually the View.
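
The testability claim can be sketched in a few lines. Python stands in for Swift here, and all the names (GreetingView, GreetingPresenter, MockView) are made up for illustration; the point is only that the View is a thin protocol the Presenter talks to, so a unit test can swap in a mock:

```python
class GreetingView:
    """View protocol: only knows how to display text (the UIViewController role)."""
    def show_greeting(self, text: str) -> None:
        raise NotImplementedError

class GreetingPresenter:
    """Presenter: holds UI logic, knows nothing about the view's life cycle."""
    def __init__(self, view: GreetingView, model: dict):
        self.view = view
        self.model = model

    def did_tap_show_greeting(self) -> None:
        self.view.show_greeting(f"Hello, {self.model['name']}!")

# In a unit test, the View is trivially mocked:
class MockView(GreetingView):
    def __init__(self):
        self.displayed = None
    def show_greeting(self, text: str) -> None:
        self.displayed = text

view = MockView()
GreetingPresenter(view, {"name": "Ada"}).did_tap_show_greeting()
print(view.displayed)  # Hello, Ada!
```

Because the Presenter receives the View by interface rather than owning a live view controller, the test never touches UIKit or the view life cycle.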

MVC Variant

There is another kind of MVP: the one with data bindings. As you can see, there is tight coupling between the View and the other two components.

MVP

MVVM

It is similar to MVP but binding is between View and View Model.
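
A minimal sketch of that View–View Model binding, with a hand-rolled Observable standing in for a real binding framework (all names here are illustrative, not an actual library API):

```python
class Observable:
    """Tiny one-way binding: observers are called on every value change."""
    def __init__(self, value=None):
        self._value = value
        self._observers = []

    def bind(self, observer):
        self._observers.append(observer)
        observer(self._value)          # push the current value immediately

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        self._value = new
        for obs in self._observers:
            obs(new)

class GreetingViewModel:
    """Presentation state lives here; no UIKit dependency."""
    def __init__(self, model: dict):
        self.greeting = Observable("")
        self._model = model

    def on_appear(self):
        self.greeting.value = f"Hello, {self._model['name']}!"

# The View just binds a render callback to the View Model's observable.
rendered = []
vm = GreetingViewModel({"name": "Ada"})
vm.greeting.bind(rendered.append)
vm.on_appear()
print(rendered[-1])  # Hello, Ada!
```

The View Model can be unit-tested by asserting on its observables, which is why its testability is rated "fair" rather than poor: only the binding glue in the View remains untested.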

MVVM

VIPER

There are five layers (View, Interactor, Presenter, Entity, and Routing) instead of the three in MV(X). This distributes responsibilities well, but maintainability suffers.

VIPER

When compared to MV(X), VIPER

  1. shifts model logic into the Interactor, leaving Entities as dumb data structures.
  2. places UI-related business logic in the Presenter and data-altering capabilities in the Interactor.
  3. introduces a Router to handle the navigation responsibility.

Key value cache


A KV cache is like a giant hash map, used to reduce the latency of data access, typically by

  1. Putting data from slow, cheap media onto fast, expensive ones.
  2. Replacing indexes on tree-based data structures with O(log n) access by hash-based ones with O(1) reads and writes.

There are various cache policies, such as read-through, write-through (or write-back), and cache-aside. By and large, Internet services have a read-to-write ratio of 100:1 to 1000:1, so we usually optimize for reads.

In distributed systems, we choose those policies according to the business requirements and contexts, under the guidance of CAP theorem.

Regular Patterns

  • Read
    • Read-through: the clients read data from the database via the cache layer. The cache returns the value when the read hits the cache; otherwise, it fetches the data from the database, caches it, and then returns the value.
  • Write
    • Write-through: clients write to the cache and the cache updates the database. The cache returns when it finishes the database write.
    • Write-behind / write-back: clients write to the cache, and the cache returns immediately; it then asynchronously writes to the database.
    • Write-around: clients write to the database directly, around the cache.
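
The read-through and write-through patterns above can be sketched with a toy cache sitting in front of a dict that stands in for the database (a sketch only, ignoring expiry, concurrency, and failures):

```python
class CacheLayer:
    def __init__(self, db: dict):
        self.db = db        # a dict standing in for the database
        self.cache = {}

    def read(self, key):
        # Read-through: serve from cache; on a miss, fetch from the
        # database, cache the value, then return it.
        if key in self.cache:
            return self.cache[key]
        value = self.db.get(key)
        self.cache[key] = value
        return value

    def write(self, key, value):
        # Write-through: update the cache, then synchronously update
        # the database before returning.
        self.cache[key] = value
        self.db[key] = value

db = {"a": 1}
c = CacheLayer(db)
print(c.read("a"))   # 1  (miss: loaded from db, now cached)
c.write("b", 2)      # lands in both cache and db
print(db["b"])       # 2
```

Write-behind would differ only in that `write` appends to a queue and returns immediately, with a background worker draining the queue into the database.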

Cache-aside pattern

When a cache does not natively support read-through and write-through operations, and resource demand is unpredictable, we use the cache-aside pattern.

There is still a chance of a dirty cache in this pattern. It happens when these two operations race:

  1. a read that fetches from the database and then updates the cache
  2. a write that updates the database and then deletes the cache entry
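
A minimal cache-aside sketch, with a dict standing in for the database; note that the application talks to both stores itself, and the writer deletes (invalidates) the cache entry rather than updating it:

```python
cache, db = {}, {"user:1": "Ada"}

def get(key):
    if key in cache:                 # cache hit
        return cache[key]
    value = db.get(key)              # miss: read from the database...
    cache[key] = value               # ...and populate the cache
    return value

def update(key, value):
    db[key] = value                  # 1. update the database
    cache.pop(key, None)             # 2. invalidate the cache entry

get("user:1")            # warms the cache
update("user:1", "Grace")
print(get("user:1"))     # Grace (reloaded after invalidation)
```

The race above appears when a reader runs `db.get` before a writer's step 1 but writes its stale value into the cache after the writer's step 2: the stale entry then survives until the next invalidation.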

Where to put the cache?

  • client-side
  • distinct layer
  • server-side

What if the data volume reaches the cache capacity? Use a cache replacement policy:

  • LRU (Least Recently Used): evict the least recently used entries and keep the most recently used ones.
  • LFU (Least Frequently Used): evict the least frequently used entries and keep the most frequently used ones.
  • ARC (Adaptive Replacement Cache): performs better than LRU by keeping both the most recently used and the most frequently used entries, plus a history of evictions. (Keeping MRU + MFU + eviction history.)
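
LRU fits in one screen of Python using OrderedDict (illustrative only; production caches use more elaborate, concurrent structures):

```python
from collections import OrderedDict

class LRUCache:
    """Evicts the least recently used entry once capacity is exceeded."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()   # insertion order == recency order

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

c = LRUCache(2)
c.put("a", 1)
c.put("b", 2)
c.get("a")           # "a" becomes most recently used
c.put("c", 3)        # capacity exceeded: evicts "b"
print(c.get("b"))    # None
print(c.get("a"))    # 1
```

LFU would instead track an access counter per key and evict the minimum; ARC maintains both orderings plus "ghost" lists of recently evicted keys to adapt between them.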

Who are the kings of cache usage?

Facebook TAO

Motivation & Assumptions

  • PB-level Blob storage
  • Traditional NFS-based design (each image stored as a file) has a metadata bottleneck: the large metadata size severely limits the metadata hit ratio.
    • Explain more about the metadata overhead

For the Photos application most of this metadata, such as permissions, is unused and thereby wastes storage capacity. Yet the more significant cost is that the file’s metadata must be read from disk into memory in order to find the file itself. While insignificant on a small scale, multiplied over billions of photos and petabytes of data, accessing metadata is the throughput bottleneck.

Solution

Eliminate the metadata overhead by aggregating hundreds of thousands of images into a single haystack store file.

Architecture

Facebook Photo Storage Architecture

Data Layout

An index file (for quick memory load) plus a haystack store file containing needles.

index file layout 1

index file layout 2

haystack store file

CRUD Operations

  • Create: write to the store file, then asynchronously write to the index file, because the index is not critical.
  • Read: read(offset, key, alternate_key, cookie, data_size)
  • Update: append-only. If the app encounters duplicate keys, it can pick the one with the largest offset as the latest version.
  • Delete: soft delete by marking the deleted bit in the flags field; hard deletes are executed by the compaction operation.
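
The CRUD operations above can be sketched with an in-memory toy: one append-only store plus an index mapping key to (offset, size). Field names follow the paper only loosely; this is illustrative, not the real needle format:

```python
import io

class HaystackStore:
    def __init__(self):
        self.store = io.BytesIO()   # stands in for the append-only store file
        self.index = {}             # key -> (offset, size)
        self.deleted = set()

    def create(self, key, data: bytes):
        offset = self.store.seek(0, io.SEEK_END)
        self.store.write(data)                  # append-only write
        self.index[key] = (offset, len(data))   # index written after the fact

    def read(self, key):
        if key in self.deleted or key not in self.index:
            return None
        offset, size = self.index[key]
        self.store.seek(offset)
        return self.store.read(size)

    def update(self, key, data: bytes):
        # Append a new needle; the index keeps the one with the largest offset.
        self.create(key, data)

    def delete(self, key):
        # Soft delete (stands in for the deleted bit in the flags field);
        # compaction would later reclaim the space.
        self.deleted.add(key)

s = HaystackStore()
s.create("photo1", b"JPEG bytes v1")
s.update("photo1", b"JPEG bytes v2")
print(s.read("photo1"))   # b'JPEG bytes v2'
s.delete("photo1")
print(s.read("photo1"))   # None
```

The key property is that a read costs one seek plus one sequential read, with no per-file filesystem metadata lookup.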

Use cases

Upload

Photo Storage Upload

Download

Photo Storage Download

Disclaimer: All things below are collected from public sources or purely original. No Uber-confidential stuff here.

Requirements

  • ride hailing service targeting the transportation markets around the world
  • realtime dispatch in massive scale
  • backend design

Architecture

uber architecture

Why micro services?

Conway’s law says that the structure of a software system copies the structure of the organization that built it.

| | Monolithic Service | Micro Services |
| --- | --- | --- |
| Productivity, when teams and codebases are small | ✅ High | ❌ Low |
| Productivity, when teams and codebases are large | ❌ Low | ✅ High (Conway’s law) |
| Requirements on engineering quality | ❌ High (under-qualified devs break down the system easily) | ✅ Low (runtimes are segregated) |
| Dependency bump | ✅ Fast (centrally managed) | ❌ Slow |
| Multi-tenancy support / production-staging segregation | ✅ Easy | ❌ Hard (each individual service has to either 1) build a staging env connected to others in staging or 2) support multi-tenancy across request contexts and data storage) |
| Debuggability, assuming the same modules, metrics, and logs | ❌ Low | ✅ High (with distributed tracing) |
| Latency | ✅ Low (local) | ❌ High (remote) |
| DevOps costs | ✅ Low (high on building tools) | ❌ High (capacity planning is hard) |

Combining a monolithic codebase with micro services can bring the benefits of both.

Dispatch Service

  • consistent hashing, sharded by geohash
  • the data is transient and kept in memory, so there is no need to replicate it (CAP: AP over CP)
  • single-threaded or locked matching within a shard to prevent double dispatching
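
The sharding idea can be sketched as consistent hashing over geohash prefixes; the ring construction and node names below are simplified stand-ins for a real dispatch cluster:

```python
import bisect
import hashlib

def point(s: str) -> int:
    """Hash a string onto the ring's key space."""
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

nodes = ["dispatch-0", "dispatch-1", "dispatch-2"]
ring_points = sorted((point(n), n) for n in nodes)
ring = [p for p, _ in ring_points]
ordered_nodes = [n for _, n in ring_points]

def shard_for(geohash_prefix: str) -> str:
    """Map a geohash cell onto the first node clockwise from its hash,
    so nearby riders/drivers (same cell) land on the same shard, where
    single-threaded matching prevents double dispatch."""
    i = bisect.bisect(ring, point(geohash_prefix)) % len(ring)
    return ordered_nodes[i]

# The same cell always maps to the same shard:
print(shard_for("9q8yy") == shard_for("9q8yy"))  # True
```

Consistent hashing also means that adding or removing a dispatch node only remaps the geohash cells adjacent to it on the ring, rather than reshuffling everything.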

Payment Service

The key is an async design, because payment systems usually have very long latencies for ACID transactions across multiple systems.

UserProfile Service and Trip Service

  • low latency with caching
  • UserProfile Service faces the challenge of serving a growing number of user types (driver, rider, restaurant owner, eater, etc.) and differing user schemas across regions

Push Notification Service

  • Apple Push Notification service (APNs): not quite reliable
  • Google Cloud Messaging (GCM): it can detect deliverability, or
  • SMS service, which is usually more reliable

Blockchain Technology Review


What is blockchain?

A blockchain is an incorruptible distributed ledger that is…

  1. Hosted by multiple parties
  2. Secured by crypto algorithms
  3. Append-only/immutable and thus verifiable in data storage
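
The append-only, verifiable property comes from hash chaining, which a few lines can sketch (a toy, ignoring consensus, signatures, and networking):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's canonical JSON serialization."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: str) -> None:
    # Each block commits to the previous block's hash.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def verify(chain: list) -> bool:
    # Altering any block breaks every link after it.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_block(chain, "alice pays bob 5")
append_block(chain, "bob pays carol 2")
print(verify(chain))                       # True
chain[0]["data"] = "alice pays bob 500"    # tampering with history...
print(verify(chain))                       # False: the chain no longer verifies
```

Hosting copies of this chain at multiple parties is what makes tampering detectable: a corrupted replica disagrees with the honest majority.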

How does it work?

how does blockchain work

Categorization: Public vs. Private vs. Hybrid Blockchains

  • Public: It is permission-less to join the network.
  • Private: The permission to join is centrally controlled.
  • Hybrid: Multi-parties control the permission to join.

Do you need a blockchain?

Do you need a blockchain?

Architecture

(Architecture diagram: a layered stack of Hardware, Basic Utils, Ledger, Consensus, Smart Contract, APIs, and dApps, with Dev and Ops spanning all layers.)

  1. Hardware: computer resources = computing + networking + storage

  2. Basic Utils: P2P network + crypto + data storage w/ db or filesystem

  3. Ledger: chain of data blocks + domain-specific data models

  4. Consensus: write first consensus later (PoW/PoS/DPoS) / consensus first write later (PBFT)

  5. Smart Contract: limited program running on the blockchain

  6. API: RPC + SDK

  7. dApps: 1) transfer of values 2) data certification 3) data access control

  8. DevOps: deployment, operations, metrics, logs
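
For the PoW branch of layer 4 ("write first, consensus later"), a toy mining loop shows the idea: search for a nonce whose block hash meets a difficulty target. The difficulty here is kept tiny so the sketch runs instantly; real networks tune it to the aggregate hash rate:

```python
import hashlib

def mine(block_data: str, difficulty: int = 3) -> int:
    """Find a nonce whose SHA-256 digest has `difficulty` leading
    zero hex digits. Finding it is hard work; checking it is trivial."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

data = "block #1: alice pays bob 5"
nonce = mine(data)
digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
print(digest.startswith("000"))  # True: anyone can verify the work cheaply
```

This asymmetry (expensive to produce, cheap to verify) is what lets nodes write blocks first and let the longest valid chain settle consensus later, whereas PBFT-style systems vote to agree before a block is written at all.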

Industries

  1. Financial Services

    • crypto exchange: binance, coinbase, etc.
    • international payments: ripple, stellar, etc.
    • Know Your Customer (KYC) / anti-money laundering (AML): civic
  2. Health care

    • sharing data across providers, insurers, vendors, auditors, etc
  3. Public sector

    • asset tokenization
    • transparent voting in public election
  4. Energy and resources

    • trading
    • sharing data across suppliers, shippers, contractors, and authorities
  5. Technology, media, and telecom

    • DRM and incentivizing content creators
    • securing operations and data storage of IoT devices
  6. Consumer and industrial products

    • loyalty points programs in traveling
    • document signing
    • supply-chain management

Tian Pan's Notes

Software Engineering and Startup
© 2010-2018 Tian
Built with ❤️ in San Francisco