# Guiding Principles

  1. When developing a library, writing the code is actually the easy part

    1. There is always unplanned work
    2. There are always new features to add
    3. There are always bugs to fix
  2. How people usually judge the quality of an open-source library

    1. Basic code quality: go vet, go fmt, go lint
    2. Test coverage
    3. Documentation
    4. The number of open / closed issues
    5. A large GitHub star count does not necessarily mean high quality
  3. The four key elements of an open-source library: usability / readability / flexibility / testability

# The Four Key Elements of a Library

# Usability: Think from the User's Perspective

For example, to make an HTTP GET request, you first create the request and then send it:

```go
import "net/http"

req, err := http.NewRequest(
  http.MethodGet, "https://www.google.com", nil /* no body */)
if err != nil {
  return err
}
client := &http.Client{}
res, err := client.Do(req)
```

But this kind of code gets written constantly, and repeating it everywhere is verbose. So why not provide a GET helper directly?

```go
import "net/http"

client := &http.Client{}
res, err := client.Get("https://www.google.com")
```
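Such a helper just wraps the verbose create-then-send path once, inside the library. A minimal sketch (the free function get is illustrative, since net/http already ships Client.Get):

```go
package main

import (
	"fmt"
	"net/http"
)

// get wraps the create-then-send dance once so callers don't repeat it.
// (Illustrative only; net/http's Client.Get works along these lines.)
func get(c *http.Client, url string) (*http.Response, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	return c.Do(req)
}

func main() {
	res, err := get(http.DefaultClient, "https://www.google.com")
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()
	fmt.Println(res.Status)
}
```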

# Readability

# Flexibility

Back to the GET example: what if we need to add request logging?

```go
import "net/http"

client := &http.Client{}
res, err := client.Get("https://www.google.com")
```

The Client has a replaceable RoundTripper interface:

```go
type Client struct {
    // Transport specifies the mechanism by which individual
    // HTTP requests are made.
    // If nil, DefaultTransport is used.
    Transport RoundTripper
    // ...
}

type RoundTripper interface {
    RoundTrip(*Request) (*Response, error)
}
```

Now you only need to set a loggingRoundTripper as the Transport; there is no need to wrap the Client:

```go
import "net/http"

client := &http.Client{Transport: loggingRoundTripper}
res, err := client.Get("https://www.google.com")
```
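A minimal sketch of what such a loggingRoundTripper could look like (the type name, the fallback to DefaultTransport, and the log format are assumptions):

```go
package main

import (
	"log"
	"net/http"
)

// loggingTransport logs every request and response, then delegates
// the actual work to an underlying RoundTripper.
type loggingTransport struct {
	next http.RoundTripper // nil means fall back to http.DefaultTransport
}

func (t *loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("-> %s %s", req.Method, req.URL)
	next := t.next
	if next == nil {
		next = http.DefaultTransport
	}
	res, err := next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("<- %s %s", res.Status, req.URL)
	return res, nil
}

func main() {
	client := &http.Client{Transport: &loggingTransport{}}
	res, err := client.Get("https://www.google.com")
	if err != nil {
		log.Fatal(err)
	}
	defer res.Body.Close()
	log.Println("done:", res.Status)
}
```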

# Testability

  • The library must not only be testable itself
  • It should also provide test helpers for its callers to use (see the httptest sketch below)
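net/http itself follows this advice: the standard net/http/httptest package is exactly such a caller-facing test helper. A minimal usage sketch:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

func main() {
	// httptest.NewServer spins up a real local HTTP server,
	// so callers of net/http can test client code without
	// touching the network.
	srv := httptest.NewServer(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprint(w, "hello")
		}))
	defer srv.Close()

	res, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)
	fmt.Println(string(body)) // hello
}
```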

# Backward Compatibility

  • For any exported entity, the following are risky:
    • renaming or removing it
    • changing a function's parameter types
    • adding a method to an interface

All of these can introduce breaking changes, which is where semantic versioning comes in.
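To make the last point concrete, here is a sketch (the Store interface and memStore type are hypothetical) of why adding a method to an exported interface breaks callers:

```go
package main

// Store is a hypothetical interface exported by a library in v1.
type Store interface {
	Get(key string) (string, error)
}

// A caller implements Store in their own code.
type memStore struct{ m map[string]string }

func (s *memStore) Get(key string) (string, error) { return s.m[key], nil }

// If v2 adds a method to Store:
//
//	type Store interface {
//		Get(key string) (string, error)
//		Set(key, value string) error // new in v2
//	}
//
// then *memStore no longer satisfies Store, and the caller's code stops
// compiling, a breaking change even though nothing was renamed or removed.
func main() {
	var _ Store = &memStore{m: map[string]string{}}
}
```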

# Versioning

[Semantic Versioning 2.0.0](https://semver.org/)

MAJOR.MINOR.PATCH

  • patch: a bug fix
  • minor: a new, non-breaking feature
  • major: a breaking change

Stable / Unstable?

  • Unstable: v < 1.0
    • Start at 0.1.0; increment MINOR on each release, and increment PATCH for bug fixes
  • Stable: official releases, >= 1.0

Version bumps are hard: for a core library in a large microservices system, it can take six months to a year for all dependents to finish upgrading.

  • ^1.1 is short for >= 1.1, < 2
  • ~0.2 is short for >= 0.2, < 0.3
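As a sanity check on these shorthands, a small sketch using golang.org/x/mod/semver (the helper name satisfiesCaret11 is illustrative; the package requires a leading "v" on versions):

```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// satisfiesCaret11 reports whether v falls in the range ^1.1,
// i.e. >= 1.1.0 and < 2.0.0.
func satisfiesCaret11(v string) bool {
	return semver.Compare(v, "v1.1.0") >= 0 && semver.Major(v) == "v1"
}

func main() {
	for _, v := range []string{"v1.0.9", "v1.1.0", "v1.7.3", "v2.0.0"} {
		fmt.Println(v, satisfiesCaret11(v)) // false, true, true, false
	}
}
```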

Pin to a version range vs. lock to an exact version

Two golden rules of web performance: minimize 1) latency and 2) payload.

# To Minimize Latency…

  • Reduce DNS lookups

    • Use a fast DNS provider. By average resolution time: Cloudflare < DNS Made Easy < AWS Route 53 < GoDaddy < NameCheap. NOTE: results vary by region
    • Cache DNS results. The TTL is a tradeoff between performance and up-to-dateness
    • Reduce the number of third-party domains, or use services with fast DNS (this conflicts with the domain-sharding optimization for HTTP/1.x)
    • DNS prefetching: <link rel="dns-prefetch" href="//www.example.com/">
  • Reuse TCP connections

  • Minimize the number of HTTP redirects

  • Use a CDN

    • e.g., Netflix develops its own CDN hardware and partners with local ISPs to serve content
  • Eliminate unnecessary resources

  • Cache resources on the client (see the server sketch after this list)

    1. HTTP cache headers
      • Cache-Control with max-age
        • Note, for JS files: a simple way to ensure the browser picks up changes is to use output.filename substitutions with content hashes (see webpack's caching guide)
      • Expires
        • If both Expires and max-age are set, max-age takes precedence.
    2. Last-Modified and ETag headers let the client validate whether the resource has changed since it was last loaded
      • time-based Last-Modified response header (often impractical when responses are assembled by nginx and multiple microservices)
      • content-based ETag (Entity Tag)
        • useful when the last-modified date is difficult to determine
        • typically computed by hashing the content
    3. A common mistake is to set only one of the two mechanisms above
  • Compress assets during transfer

    • Use JPEG or WebP instead of PNG where possible
    • HTTP/2 compresses headers automatically
    • Enable gzip in nginx
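A minimal sketch of the client-caching item above, as a Go handler (the path and asset contents are placeholders): Cache-Control provides freshness, and a content-hashed ETag answers revalidation with 304 Not Modified.

```go
package main

import (
	"crypto/sha1"
	"fmt"
	"net/http"
)

func serveCSS(w http.ResponseWriter, r *http.Request) {
	body := []byte("body { color: #333 }") // placeholder asset

	// Content-based ETag, computed by hashing the body.
	etag := fmt.Sprintf(`"%x"`, sha1.Sum(body))

	// Freshness: clients may reuse this response for a day
	// without revalidating.
	w.Header().Set("Cache-Control", "max-age=86400")
	w.Header().Set("ETag", etag)

	// Validation: if the client's cached copy is still current,
	// reply 304 Not Modified with an empty body.
	if r.Header.Get("If-None-Match") == etag {
		w.WriteHeader(http.StatusNotModified)
		return
	}
	w.Header().Set("Content-Type", "text/css")
	w.Write(body)
}

func main() {
	http.HandleFunc("/app.css", serveCSS)
	http.ListenAndServe(":8080", nil)
}
```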

# To Minimize Payload…

  • Eliminate unnecessary request bytes

    • especially cookies
      • the HTTP standard does not specify a size limit on headers or cookies, but browsers and servers often enforce limits:
        • ~4 KB limit on cookies
        • 8 KB ~ 16 KB limit on headers
      • cookies are attached to every request
  • Parallelize request and response processing

    • While the browser is blocked on a resource, the preload scanner looks ahead and dispatches downloads in advance: roughly a 20% improvement

# Applying protocol-specific optimizations

  • HTTP/1.x

    • Use HTTP keepalive and HTTP pipelining: dispatch multiple requests, in parallel, over the same connection, without waiting for each response in serial fashion
    • Browsers can only open a limited number of connections to a particular domain, so…
      • domain sharding = more origins × 6 connections per origin (though the extra DNS lookups may add latency)
      • bundle resources to reduce the number of HTTP requests
      • inline small resources
  • HTTP/2

    • With the binary framing layer, we get one connection per origin with multiplexing, stream prioritization, flow control, and server push, so the HTTP/1.x optimizations can be removed…
      • remove unnecessary concatenation and image splitting
      • use server push: previously inlined resources can instead be pushed and cached (see the sketch below)
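Go's net/http exposes server push through the http.Pusher interface; a minimal sketch, assuming placeholder cert/key files (push requires HTTP/2, which net/http negotiates over TLS):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func index(w http.ResponseWriter, r *http.Request) {
	// On an HTTP/2 connection the ResponseWriter implements
	// http.Pusher; push the stylesheet the page is about to need,
	// instead of inlining it as we would under HTTP/1.x.
	if pusher, ok := w.(http.Pusher); ok {
		if err := pusher.Push("/app.css", nil); err != nil {
			log.Printf("push failed: %v", err) // best effort; client fetches normally
		}
	}
	fmt.Fprint(w, `<html><head><link rel="stylesheet" href="/app.css"></head><body>hi</body></html>`)
}

func main() {
	http.HandleFunc("/", index)
	http.HandleFunc("/app.css", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/css")
		fmt.Fprint(w, "body { color: #333 }")
	})
	// net/http serves HTTP/2 automatically over TLS; cert.pem and
	// key.pem are placeholder file names.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```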



# Why Are There Economic Cycles?

The market is made up of individual transactions, and in each transaction one person's spending is another person's income.

What a person can spend consists of money and credit. As a person's income rises, their ability to repay improves, so their credit can grow and their spending can increase further. A moderate amount of credit helps raise productivity.

On the economic growth curve:

  1. Productivity rises smoothly.
  2. Borrowing, by contrast, is cyclical. A short-term debt cycle typically lasts 5-8 years and is controlled by the central bank.
  3. Because people like borrowing money and dislike paying it back, debt grows faster than income over the long run, which forms the long-term debt cycle.

(figure: Economic Cycles)

The ratio of debt to income is called the debt burden. When the debt burden grows too heavy and the long-term debt cycle peaks, the economy enters a period of deleveraging (the US Great Depression of the 1920s-30s, Japan's depression of the 1980s, the 2008 global financial crisis).

There are four levers for deleveraging: 1) cut spending, 2) reduce debt, 3) redistribute wealth, 4) print money.

(figure: Economic Cycles and Time)

# Final Advice

  1. Don't let debt grow faster than income (too heavy a debt burden will crush you);
  2. Don't let income grow faster than productivity (it will make you uncompetitive; hiring someone else becomes more cost-effective for companies);
  3. Do everything you can to raise productivity (it matters most in the long run).