Consistency between Redis Cache and SQL Database

Original post: https://yunpengn.github.io/blog/2019/05/04/consistent-redis-sql/

Nowadays, Redis has become one of the most popular cache solutions in the Internet industry. Although relational (SQL) database systems provide many desirable properties such as ACID guarantees, their performance degrades under high load because of the work needed to maintain these properties.

In order to fix this problem, many companies & websites have decided to add a cache layer between the application layer (i.e., the backend code which handles the business logic) and the storage layer (i.e., the SQL database). This cache layer is usually implemented using an in-memory cache. This is because, as stated in many textbooks, the performance bottleneck of traditional SQL databases is usually I/O to secondary storage (i.e., the hard disk). As the price of main memory (RAM) has gone down in the past decade, it is now feasible to store (at least part of) the data in main memory to improve performance. One popular choice is Redis.

Certainly, most systems would only store the so-called “hot data” in the cache layer (i.e., main memory). This follows the Pareto Principle (also known as the 80/20 rule): for many events, roughly 80% of the effects come from 20% of the causes. To be cost-efficient, we only need to store that 20% in the cache layer. To identify the “hot data”, we can specify an eviction policy (such as LFU or LRU) to determine which data to expire.
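
For illustration, a Redis node can be capped in size and told how to choose eviction victims. A minimal sketch using redis-py; the 2 GB limit and the LFU policy are illustrative assumptions, not recommendations (allkeys-lfu requires Redis 4.0 or later):

    import redis

    r = redis.Redis(host="localhost", port=6379)

    # Cap the cache at an illustrative 2 GB and evict the least-frequently-used
    # keys first, so only the hot fraction of the data tends to stay resident.
    r.config_set("maxmemory", "2gb")
    r.config_set("maxmemory-policy", "allkeys-lfu")

The same two settings can also be placed in redis.conf instead of being applied at runtime.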

Background

As mentioned earlier, part of the data from the SQL database is stored in an in-memory cache such as Redis. Even though performance is improved, this approach brings a huge headache: we no longer have a single source of truth. Now, the same piece of data is stored in two places. How can we ensure consistency between the data stored in Redis and the data stored in the SQL database?

Below, we present a few common mistakes and point out what could go wrong. We also present a few solutions to this tricky problem.

Notice: to ease the discussion here, we take Redis and a traditional SQL database as the example. However, please be aware that the solutions presented in this post can be extended to other databases, or even to the consistency between any two layers in the memory hierarchy.

Various Solutions

Below we describe a few approaches to this problem. Most of them are almost correct (but still wrong). In other words, they can guarantee consistency between the two layers 99.9% of the time. However, things can still go wrong (such as dirty data in the cache) under very high concurrency and huge traffic.

However, these almost correct solutions are heavily used in the industry, and many companies have been using them for years without major headaches. Sometimes, going from 99.9% correctness to 100% correctness is simply too challenging. For real-world businesses, a faster development lifecycle and a shorter go-to-market timeline are probably more important.

Cache Expiry

Some naive solutions try to use cache expiry or a retention policy to handle consistency between MySQL and Redis. Although it is good practice in general to carefully set expiry times and retention policies for your Redis cluster, it is a terrible way to guarantee consistency. Let’s say your cache expiry time is 30 minutes. Are you sure you can tolerate the risk of reading dirty data for up to half an hour?

What about setting the expiry time to be shorter? Let’s say we set it to 1 minute. Unfortunately, we are talking about services with huge traffic and high concurrency here, and 60 seconds of staleness could cost us millions of dollars.

Hmm, let’s set it even shorter then; what about 5 seconds? Well, you have indeed shortened the inconsistency window, but you have also defeated the original objective of using a cache! You will get a lot of cache misses, and the performance of the system will likely degrade a lot.
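
To make the trade-off concrete, here is a minimal sketch of expiry-based caching with redis-py; load_user_from_mysql is a hypothetical helper and the TTL value is purely illustrative:

    import json
    import redis

    r = redis.Redis()
    CACHE_TTL_SECONDS = 60  # shorter TTL = less staleness, but more cache misses

    def get_user(user_id):
        key = f"user:{user_id}"
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)        # may be stale for up to CACHE_TTL_SECONDS
        row = load_user_from_mysql(user_id)  # hypothetical MySQL query helper
        r.setex(key, CACHE_TTL_SECONDS, json.dumps(row))
        return row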

Cache Aside

The algorithm for cache aside pattern is:

  • For immutable operations (read):
    • Cache hit: return data from Redis directly, with no query to MySQL;
    • Cache miss: query MySQL to get the data (can use read replicas to improve performance), save the returned data to Redis, and return the result to the client.
  • For mutable operations (create, update, delete):
    • Create, update or delete the data in MySQL;
    • Delete the entry in Redis (always delete rather than update the cache; the new value will be populated on the next cache miss). See the sketch after this list.
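
A minimal sketch of the cache aside pattern with redis-py; query_mysql and update_mysql are hypothetical helpers standing in for your data-access layer:

    import json
    import redis

    r = redis.Redis()

    def read(user_id):
        key = f"user:{user_id}"
        cached = r.get(key)
        if cached is not None:          # cache hit: no MySQL query
            return json.loads(cached)
        row = query_mysql(user_id)      # cache miss: hypothetical MySQL read
        r.set(key, json.dumps(row))     # populate the cache for later readers
        return row

    def update(user_id, new_fields):
        update_mysql(user_id, new_fields)  # 1. write to MySQL first
        r.delete(f"user:{user_id}")        # 2. then invalidate (never update) the cache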

This approach mostly works for common use cases. In fact, cache aside is the de facto standard for implementing consistency between MySQL and Redis. The famous paper Scaling Memcache at Facebook also describes such an approach. However, there do exist some problems with this approach as well:

  • Under normal scenarios (let’s say we assume the process is never killed and writes to MySQL/Redis never fail), it can mostly guarantee eventual consistency. Let’s say process A tries to update an existing value. At a certain moment, A has successfully updated the value in MySQL. Before it deletes the entry in Redis, another process B tries to read the same value. B will then get a cache hit (because the entry has not been deleted from Redis yet) and will read the outdated value. However, the old entry in Redis will eventually be deleted, and other processes will eventually get the updated value.
  • Under extreme situations, it cannot guarantee eventual consistency either. Consider the same scenario: if process A is killed before it attempts to delete the entry in Redis, that old entry will never be deleted, and all other processes thereafter will keep reading the old value.
  • Even under normal scenarios, there exists a corner case, with very low probability, where eventual consistency may break. Let’s say process C tries to read a value and gets a cache miss. C then queries MySQL and gets the returned result. Suddenly, C is somehow stuck (say, paused by the OS) for a while. At this moment, another process D tries to update the same value. D updates MySQL and then deletes the entry in Redis. After that, C resumes and saves its (now stale) query result into Redis. Hence, C has saved the old value into Redis, and all subsequent processes will read dirty data. This may sound scary, but its probability is very low because:
    • If D is updating an existing value, that entry should normally still exist in Redis when C tries to read it, so C would get a cache hit and this scenario would not happen at all. For such a case to occur, the entry must have expired and been removed from Redis. However, if this entry is “very hot” (i.e., there is huge read traffic on it), it would have been saved into Redis again very soon after it expired. If it belongs to “cold data”, the concurrency on it should be low, and it is thus rare to have one read request and one update request on this entry at the same time.
    • Writing to Redis is usually much faster than writing to MySQL. In reality, C’s write to Redis should therefore happen well before D’s delete on Redis, in which case D’s delete still removes the stale value.

Cache Aside – Variant 1

The algorithm for the 1st variant of cache aside pattern is:

  • For immutable operations (read):
    • Cache hit: return data from Redis directly, with no query to MySQL;
    • Cache miss: query MySQL to get the data (can use read replicas to improve performance), save the returned data to Redis, and return the result to the client.
  • For mutable operations (create, update, delete):
    • Delete the entry in Redis;
    • Create, update or delete the data in MySQL.

This can be a very bad solution. Let’s say process A tries to update an existing value. At a certain moment, A has successfully deleted the entry in Redis. Before A updates the value in MySQL, process B attempts to read the same value and gets a cache miss. B then queries MySQL and saves the returned data to Redis. Notice that the data in MySQL has not been updated yet at this moment. Since A will not delete the Redis entry again later, the old value will remain in Redis, and all subsequent reads of this value will be wrong.

According to the analysis above, and assuming extreme conditions do not happen, both the original cache aside algorithm and its variant 1 can fail to guarantee eventual consistency in some cases (we call such cases the unhappy path). However, the probability of the unhappy path for variant 1 is much higher than that of the original algorithm.

Cache Aside – Variant 2

The algorithm for the 2nd variant of cache aside pattern is:

  • For immutable operations (read):
    • Cache hit: return data from Redis directly, with no query to MySQL;
    • Cache miss: query MySQL to get the data (can use read replicas to improve performance), save the returned data to Redis, and return the result to the client.
  • For mutable operations (create, update, delete):
    • Create, update or delete the data in MySQL;
    • Create, update or delete the entry in Redis.

This is a bad solution as well. Let’s say there are two processes A and B, both attempting to update an existing value. A updates MySQL before B; however, B updates the Redis entry before A. Eventually, the value in MySQL is the one written by B, while the value in Redis is the one written by A. This causes inconsistency.

Similarly, the probability of the unhappy path for variant 2 is much higher than that of the original approach.

Read Through

The algorithm for read through pattern is:

  • For immutable operations (read):
    • The client always simply reads from the cache. Whether it is a cache hit or a cache miss is transparent to the client. On a cache miss, the cache itself should be able to automatically fetch the data from the database.
  • For mutable operations (create, update, delete):
    • This strategy does not handle mutable operations. It should be combined with the write through (or write behind) pattern.

A key drawback of the read through pattern is that many cache layers may not support it. For example, Redis is not able to fetch data from MySQL automatically (unless you write a plugin for Redis).
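
Since Redis itself cannot load data from MySQL on a miss, applications sometimes approximate read through with a generic loader callback. A minimal sketch, where the loader and query_mysql are hypothetical:

    import json
    import redis

    r = redis.Redis()

    def read_through(key, loader):
        # Return the cached value; on a miss, invoke the loader and cache its result.
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)
        value = loader()               # e.g. a callable that queries MySQL
        r.set(key, json.dumps(value))
        return value

    # Usage: read_through("user:42", lambda: query_mysql(42))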

Write Through

The algorithm for write through pattern is:

  • For immutable operations (read):
    • This strategy does not handle immutable operations. It should be combined with the read through pattern.
  • For mutable operations (create, update, delete):
    • The client only needs to create, update or delete the entry in Redis. The cache layer has to synchronize this change to MySQL atomically.

The drawbacks of the write through pattern are obvious as well. First, many cache layers do not natively support this. Second, Redis is a cache rather than an RDBMS; it is not designed to be resilient, so changes may be lost before they are replicated to MySQL. Even though Redis now supports persistence techniques such as RDB and AOF, this approach is still not recommended.

Write Behind

The algorithm for write behind pattern is:

  • For immutable operations (read):
    • This strategy does not handle immutable operations. It should be combined with the read through pattern.
  • For mutable operations (create, update, delete):
    • The client only needs to create, update or delete the entry in Redis. The cache layer saves the change into a message queue and returns success to the client. The change is replicated to MySQL asynchronously and may happen after Redis has sent the success response to the client.

The write behind pattern differs from write through in that it replicates the changes to MySQL asynchronously. It improves throughput because the client does not have to wait for the replication to happen. A message queue with high durability could be a possible implementation; a Redis stream (supported since Redis 5.0) could be a good option. To further improve performance, it is possible to combine the changes and update MySQL in batches (to reduce the number of queries).
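
An application-level approximation of the write path and the asynchronous replicator, using a Redis stream as the queue; apply_to_mysql is a hypothetical helper and the stream/key names are made up for illustration:

    import json
    import redis

    r = redis.Redis()
    STREAM = "mysql-replication-queue"  # illustrative stream name

    def update(user_id, new_fields):
        # Synchronous part: update the cache and enqueue the change, then return.
        r.set(f"user:{user_id}", json.dumps(new_fields))
        r.xadd(STREAM, {"user_id": str(user_id), "payload": json.dumps(new_fields)})

    def replicate_forever():
        # Background worker: drain the stream in order and apply each change to MySQL.
        last_id = "0-0"
        while True:
            for _stream, messages in r.xread({STREAM: last_id}, count=100, block=5000):
                for message_id, fields in messages:
                    apply_to_mysql(fields)  # hypothetical MySQL writer
                    last_id = message_id

Because the worker consumes the stream strictly in insertion order, this also illustrates why the FIFO requirement mentioned below matters.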

The drawbacks of the write behind pattern are similar. First, many cache layers do not natively support this. Second, the message queue used must be FIFO (first in, first out). Otherwise, the updates to MySQL may be applied out of order and the eventual result may be incorrect.

Double Delete

The algorithm for double delete pattern is:

  • For immutable operations (read):
    • Cache hit: return data from Redis directly, with no query to MySQL;
    • Cache miss: query MySQL to get the data (can use read replicas to improve performance), save the returned data to Redis, and return the result to the client.
  • For mutable operations (create, update, delete):
    • Delete the entry in Redis;
    • Create, update or delete the data in MySQL;
    • Sleep for a while (say, 500ms);
    • Delete the entry in Redis again (see the sketch after this list).
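
A minimal sketch of the double delete write path; update_mysql is a hypothetical helper and the 500ms pause is the illustrative value from the list above:

    import time
    import redis

    r = redis.Redis()

    def update(user_id, new_fields):
        key = f"user:{user_id}"
        r.delete(key)                      # 1. first delete
        update_mysql(user_id, new_fields)  # 2. hypothetical MySQL write
        time.sleep(0.5)                    # 3. wait for in-flight readers to finish
        r.delete(key)                      # 4. second delete clears any stale value they cached

In practice, the second delete is often issued asynchronously (for example, from a background task) so that the caller does not have to block for the full pause.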

This approach combines the original cache aside algorithm and its 1st variant. Since it is an improvement over the original cache aside approach, we can say that it mostly guarantees eventual consistency under normal scenarios. It also attempts to fix the unhappy paths of both approaches.

By pausing the process for 500ms, the algorithm assumes that all concurrent read processes have saved their (possibly old) values into Redis by then, so the 2nd delete operation on Redis will clear any dirty data. Although there still exists a corner case where this algorithm breaks eventual consistency, its probability is negligible.

Write Behind – Variant

Finally, we present a novel approach introduced by the canal project, developed by Alibaba Group from China.

This new method can be considered a variant of the write behind algorithm, but it performs the replication in the other direction. Rather than replicating changes from Redis to MySQL, it subscribes to the binlog of MySQL and replicates the changes to Redis. This provides much better durability and consistency than the original algorithm. Since the binlog is part of the RDBMS, we can assume it is durable and resilient under disaster. Such an architecture is also quite mature, as it has long been used to replicate changes from a MySQL master to its slaves.
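
A minimal sketch of the idea; binlog_events() is a hypothetical generator standing in for a binlog subscriber such as canal (in Python, a library like python-mysql-replication plays a similar role), and the event shape is an assumption:

    import json
    import redis

    r = redis.Redis()

    def replicate_binlog_to_redis():
        # Consume committed MySQL row changes (via a hypothetical binlog subscriber)
        # and mirror every change into the cache; MySQL stays the source of truth.
        for event in binlog_events():  # hypothetical: yields committed row changes
            key = f"{event['table']}:{event['primary_key']}"
            if event["type"] == "delete":
                r.delete(key)
            else:                      # insert or update
                r.set(key, json.dumps(event["row"]))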

Conclusion

In conclusion, none of the approaches above can guarantee strong consistency. Strong consistency may not be a realistic requirement for the consistency between Redis and MySQL either: to guarantee it, we would have to implement ACID semantics on all operations, which would degrade the performance of the cache layer and defeat our objective of using a Redis cache.

However, all the approaches above attempt to achieve eventual consistency, with the last one (introduced by canal) being the best. Some of the algorithms above are improvements of others. To describe their hierarchy, the original post includes a tree diagram in which each node generally achieves better consistency than its children (if any).

We conclude that there will always be a tradeoff between 100% correctness and performance. Sometimes, 99.9% correctness is already enough for real-world use cases. For future work, remember not to defeat the original objective of the topic: for example, we cannot sacrifice performance when discussing the consistency between MySQL and Redis.

Fixing “App Is Damaged and Can’t Be Opened”

Just a reminder when installing third-party software on macOS.

Starting with macOS Sierra 10.12, run:
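
Presumably the standard Gatekeeper workaround that re-enables the “Anywhere” option:

    sudo spctl --master-disable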

Starting with macOS Catalina 10.15, run:
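
For Catalina, the usual fix is to clear the quarantine attribute on the app bundle (the path below is only an example; adjust it to your application):

    sudo xattr -r -d com.apple.quarantine /Applications/SomeApp.app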

A Spring Cloud Toy Project

Recently I played with the Spring/SpringBoot/SpringCloud stack in a toy project: https://github.com/gonwan/spring-cloud-demo. I'll just paste README.md here; any pull request is welcome:

Introduction

The demo project is initialized from https://github.com/carnellj/spmia-chapter10. Additions are:

  • Code cleanup, bug fix, and better comments.
  • Java 9+ support.
  • Spring Boot 2.0 migration.
  • Switch from Postgres to MySQL, and from Kafka to RabbitMQ.
  • Easier local debugging by switching off service discovery and remote config file lookup.
  • Kubernetes support.
  • Swagger Integration.
  • Spring Boot Admin Integration.

The project includes:

  • [eureka-server]: Service for service discovery, running on port 8761. Registered services are shown on its web frontend.
  • [config-server]: Service for config file management. Config files can be accessed via: http://${config-server}:8888/${appname}/${profile}, where ${appname} is spring.application.name and ${profile} is something like dev, prd or default.
  • [zipkin-server]: Service to aggregate distributed tracing data, working with spring-cloud-sleuth. It runs on port 9411. All cross-service requests and message bus deliveries are traced by default.
  • [zuul-server]: Gateway service to route requests, running on port 5555.
  • [authentication-service]: OAuth2-enabled authentication service running on port 8901. Redis is used for the token cache. JWT support is also included. Spring Cloud Security 2.0 saves a lot of effort when building this kind of service.
  • [organization-service]: Application service holding organization information, running on port 8085. It also acts as an OAuth2 client to authentication-service for authorization.
  • [license-service]: Application service holding license information, running on port 8080. It also acts as an OAuth2 client to authentication-service for authorization.
  • [config]: Config files hosted to be accessed by config-server.
  • [docker]: Docker Compose support.
  • [kubernetes]: Kubernetes support.

NOTE: The new OAuth2 support in Spring is actively being developed, and all functions are being merged into core Spring Security 5. As a result, the current implementation is expected to change. See:

Tested Dependencies

  • Java 8+
  • Docker 1.13+
  • Kubernetes 1.11+

Building Docker Images

If you run out of disk space, clean up unused images and volumes with:
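
Generic Docker cleanup commands along these lines should do (these are standard Docker CLI commands, not necessarily the repo's exact instructions):

    docker image prune
    docker volume prune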

Running Docker Compose

Or with separate services:

Running Kubernetes

NOTE: Kubernetes does not support environment variable substitution by default.

Use Cases

Suppose you are using the Kubernetes deployment.

Get OAuth2 token

curl is used here, and 31004 is the cluster-wide port of the Zuul gateway server:

Get organization info

Use the token returned from the previous request.

Get license info associated with organization info

Use the token returned from the previous request.

Distributed Tracing via Zipkin

Every response contains a correlation ID to help diagnose possible failures among service calls. Run with curl -v to get it:

Search for it in Zipkin to get the full trace info, including latencies if you are interested.

The license service caches organization info in Redis, prefixed with organizations:. So you may want to clear those keys to get a complete trace of a cross-service invocation.

Working with OAuth2

All OAuth2 tokens are cached in Redis, prefixed with oauth2:. There is also JWT token support. Comment/Uncomment @Configuration in AuthorizationServerConfiguration and JwtAuthorizationServerConfiguration classes to switch it on/off.

Swagger Integration

The organization service and license service have Swagger integration. Access via /swagger-ui.html.

Spring Boot Admin Integration

Spring Boot Admin is integrated into the eureka server. Access via: http://${eureka-server}:8761/admin.

Solving iBooks Not Syncing in macOS

As a note here:

Go to Menu –> Store –> Check for Available Downloads to refresh your iBooks login manually. Also make sure the iCloud option for iBooks is enabled in Settings.

Updated July 4, 2022: On macOS 12, go to Settings –> Apple ID –> iCloud Drive, then disable and re-enable iBooks sync. If this does not work, log out of the Apple ID and log back in.

Job Open: gonwan’s girlfriend

Brief
    I’m now working for ASUS Computer Inc. as a senior software engineer. My wage is about 5k – 8k per month. No house or car is available.
    In my spare time, I read books. Learning is important. I also watch anime to relax. I like singing; my favorite singer is Fish Leong. Every week, I play badminton or do other sports. I’m not very good at sports, just playing for fun. “Living with passion” is my motto.
    Lastly, I do not plan to get married within the next two years.

Title
    gonwan’s girlfriend

Requirements
    ·Height 162-172 cm is a must.
    ·Intelligent, diligent, and with a pleasant personality.
    ·Outgoing enough that I can trust you and always share my thoughts with you.
    ·No specific zodiac required.
    ·Good singing skills preferred.
    ·Good drawing skills a plus.

Responsibilities
    ·Just whatever a girlfriend should do.

    I will offer you a better life if you get the position.
    Please send your resume to “gonwan (at) gmail (dot) com”. Be sure to mark the subject with “applying for position”. You will then be informed when and where to attend an interview session.
    Contact me if you are the one.

丸's Otaku Log 2009 (Brick-Sized Books Edition)

Technical books I read from 2008/08 to 2009/07.
Listed roughly in chronological order; each was at least 50% finished. Page counts are as listed on amazon.com.

Cross-Platform Development in C++

Authors: Syd Logan
Pages: 576
Difficulty: ★★★
Recommended Degree: ★★★
Comprehensive Degree: 95%
The author is a senior engineer at Netscape, i.e., the folks now working on Firefox. The book does not have much substantial content, but it is currently the only book dedicated to cross-platform development. It starts by arguing that a cross-platform project should be developed on all target platforms in parallel from day one, without dropping any of them; in other words, attitude decides everything. It then covers general coding guidelines for cross-platform development: which language features are compiler-dependent, which are platform-dependent, and which should be avoided. After that come code organization and how to implement cross-platform interfaces yourself, followed by some cross-platform development tools. Finally, it introduces wxWidgets, an open-source cross-platform project, as well as a cross-platform library written by the author himself.

C++ GUI Programming with Qt4, 2E

Authors: Jasmin Blanchette, Mark Summerfield
Pages: 752
Difficulty: ★★★☆
Recommended Degree: ★★★★
Comprehensive Degree: 90%
Qt is a cross-platform GUI library, though nowadays it is no longer limited to GUI. I picked up this book because the previous one left me wanting more. Thanks to its commercial backing, Qt's code quality is much higher than wxWidgets' (although it seems to have declined somewhat since Qt 4.4). You can think of Qt's GUI library as Java's Swing, because its widgets are drawn with GDI, and think of wxWidgets' GUI library as Java's AWT, because its widgets are drawn by calling the Windows API. So you could call them light-weight and heavy-weight respectively.
Back to the book itself: it can be considered semi-official documentation. Apart from the first few chapters, which introduce Qt's overall framework, the rest is about how to use the library. Besides being cross-platform, Qt's strengths are concise client code and good tooling support. Of course, learning how to use a library is not hard; the hard part is understanding how it is designed. While reading this book, I often wrote a very simple program and then stepped through Qt's source code with Visual Studio. Later, applying Qt's design ideas in company projects gave me a much deeper understanding of Qt.

An Introduction to Design Patterns in C++ with Qt4

Authors: Alan Ezust, Paul Ezust
Pages: 656
Difficulty: ★★
Recommended Degree: ★
Comprehensive Degree: 100%
At first glance the title looks impressive: Qt, C++, and design patterns all in one. In reality it is entry-level, and I was fooled by the title. The book basically teaches you to write programs with Qt syntax and mentions design patterns in passing. Speechless.

Effective C++, 2E

Authors: Scott Meyers
Pages: 256
Difficulty: ★★★
Recommended Degree: ★★★
Comprehensive Degree: 95%
This one also counts as a classic, but I had not read it before. It gives 50 C++ coding guidelines to make your code more effective. While reading it I found I already knew almost all of them, and there is a lot of filler about code nobody would actually write, so I did not gain much. I would rather recommend three more challenging books: Exceptional C++, More Exceptional C++ and C++ Object Model — guaranteed to make your head spin.

Design Patterns: Elements of Reusable Object-Oriented Software

Authors: Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides
Pages: 416
Difficulty: ★★★★
Recommended Degree: ★★★★★
Comprehensive Degree: 75%
Another classic: the Gang of Four book.
I regard this book as the programmer's Nine Yang Manual: once you have it, later practicing Tai Chi or the Heaven and Earth Great Shift takes half the effort for twice the result.
The book of course covers the 23 design patterns. The prose is genuinely somewhat hard to follow, but its definitions are still the most complete available. The company training currently uses a book called Head First Design Patterns, which is also well reviewed on Amazon and indeed easier to understand, but it is clearly too simplistic, and many of its examples actually confuse the reader.

Applying UML and Patterns

Authors: Craig Larman
Pages: 736
Difficulty: ★★★★
Recommended Degree: ★★★★
Comprehensive Degree: 80%
The title is actually poorly chosen: the whole book is really about process control in software engineering, highlighting the central role of UML and patterns (not only design patterns but also architectural patterns). Even if you cannot follow it, just keep reading anyway. This book played a very important role in the group exam in July.
Also, this book reminds me of our adorable Teacher Niu from class =v=.

Essential COM

Authors: Don Box
Pages: 464
Difficulty: ★★★★☆
Recommended Degree: ★★★
Comprehensive Degree: 60%
This is truly the most exhausting read so far. I really do not know how the folks at M$ managed to design something as complicated as the COM (Component Object Model) framework. Unprecedentedly, the book has two forewords, one by a designer of COM, who says nobody explains COM better than Mr. Box. In fact, I could not really understand his explanation either, and it seriously delayed my reading plan. Unless you have a strong masochistic streak, I suggest reading the next book instead: this one leans theoretical, the next one leans practical.
One last point: why use COM at all? M$'s original design goal was cross-platform support, solving C++'s binary compatibility problem. Ironically, the platforms it ends up spanning are all Windows.

Inside COM

Authors: Dale Rogerson
Pages: 376
Difficulty: ★★★☆
Recommended Degree: ★★★★
Comprehensive Degree: 80%
This book and the previous one are both must-reads for COM. After reading the first three chapters of the previous one, I switched to this one and breezed through more than 100 pages in an evening — such a pleasant feeling. The many code examples help you understand things much better.

Pro C# 2008 and the .NET 3.5 Platform, 4E

Authors: Andrew Troelsen
Pages: 1370
Difficulty: ★★★
Recommended Degree: ★★★
Comprehensive Degree: 90%
A company training book, and it runs to more than 1000 pages. It is well reviewed on Amazon, but in my opinion it is fairly rubbish. The book treats you as a C# beginner, and the author loves opening with phrases like "simply put, ...", as if we were all idiots. So I only skimmed it, mainly for the new features in C# 3.0/3.5 such as WPF, WCF, WF and LINQ — and precisely those new features are covered in very little detail. Truly a beginner's book.
WPF really is a rather good design. I will not say much here; the number of open-source frameworks imitating WPF already tells you how good the design idea is.
WCF, WF... in my opinion these two are completely unnecessary. WCF code is indeed fairly simple, but its configuration files are basically impossible to get right without Visual Studio's help. WF feels like a half-finished product and depends on Visual Studio even more.
LINQ's design idea is to integrate SQL at the language level of .NET, which is a nice thought, but it adds complexity to the language for little benefit. Just to support LINQ, .NET 3.5 added several keywords and several language features, and these additions are rarely used by other libraries outside LINQ.
My assessment may be one-sided; judge for yourself.

Programming Windows, 5E

Authors: Charles Petzold
Pages: 1100
Difficulty: ★★★☆
Recommended Degree: ★★★★☆
Comprehensive Degree: 90%
A master's book... Charles Petzold, a resounding name.
The book focuses on Windows GUI programming. The very simplest Windows window takes roughly 70-80 lines of code if you use the raw Windows API, whereas *nix/GTK takes about 20 lines or fewer, and Qt or Java should manage it within 10. The code efficiency of the Windows API is really questionable.
Back to the book: it starts from Windows' window message mechanism — message loops, message dispatching, screen repainting, using controls, owner-draw, hooks. The other large part of the book covers GDI in great depth. Using GDI is a headache: it is not all that powerful, yet it is very easy to write code with memory leaks or GDI handle leaks. M$ later released GDI+, an extended version that is indeed much more powerful than GDI and object-oriented, but the downside is that drawing is painfully slow... and hard to debug.
I also recommend Programming Applications for Microsoft Windows, which covers the non-GUI parts of the Windows API.

Microsoft Windows Internals, 4E

Authors: Mark E. Russinovich, David A. Solomon
Pages: 976
Difficulty: ★★★★☆
Recommended Degree: ★★★★☆
Comprehensive Degree: 80%
This is another formidable book, the most authoritative one on the Windows kernel; a former project manager of the Windows division even wrote its foreword. As for the writing itself, nobody else produces sentences this convoluted — clauses nested within clauses, and sometimes a whole long paragraph is literally a single sentence. But there is no alternative; it is the only book of its kind.
If the previous book leans towards user mode, this one leans towards kernel mode. M$'s stuff is simply not open source: a concept that might be easy becomes exhausting to understand once you have to analyze it as a black box, unlike Linux, where you can read all the code. The book recommends using WinDbg to debug the kernel.
Like a typical OS book, it covers Windows' processes, threads, memory management, caching, storage management, security and so on, including their implementations and the original design ideas. It made me realize that M$ really is a creative company: many things first appeared on the Windows OS, and its design ideas are anything but conventional =v=.
Understanding this book matters a lot. When I write programs that call the Windows API, MSDN often says a certain function must be paired with another, must be given such-and-such a parameter, or must be used in such-and-such a mode. After finishing this book, many of these questions answer themselves, so there is no need to memorize them by rote.
One more point: the API compatibility of M$ Windows is indeed excellent — programs compiled last century still run on the latest Windows 7. The price is redundancy in the Windows code base. Look at Linux kernel development: unused code is typically marked deprecated, kept for a few releases, and then simply deleted. Mac OS X goes even further: pre-OS X code cannot be used at all, and version 10.5 rewrote a complete Cocoa API on top of the original Carbon API. If it were up to me, I would certainly choose the lean, efficient code, but M$ chooses otherwise — and perhaps that is exactly where M$ is smart.

2 Interesting Quizzes

1. Rate your life
http://www.monkeyquiz.com/life/rate_my_life.html

My Result:

This Is My Life, Rated
Life: 7
Mind: 6.9
Body: 6.5
Spirit: 7.3
Friends/Family: 4.4
Love: 6.2
Finance: 8.1
Take the Rate My Life Quiz

Your Life Analysis:

Life: Your life rating is a score of the sum total of your life, and accounts for how satisfied, successful, balanced, capable, valuable, and happy you are. The quiz attempts to put a number on the summation of all of these things, based on your answers. Your life score is reasonably high. This means that you are on a good path. Continue doing what is working and set about to improve in areas which continue to lag. Do this starting today and you will begin to reap the benefits immediately. (Read more on improving your life)

Mind: Your mind rating is a score of your mind’s clarity, ability, and health. Higher scores indicate an advancement in knowledge, clear and capable thinking, high mental health, and pure thought free of interference. Your mind score is not bad, but could be improved upon. Your mental health is not weak, but you are not achieving full mental clarity and function. Learn how to unclutter your mind. Keep learning, keep improving, continue moving forward. Read advice from other quiz-takers on improving the mind.

Body: Your body rating measures your body’s health, fitness, and general wellness. A healthy body contributes to a happy life, however many of us are lacking in this area. Your body score is fairly average, which means there is room for improvement. Keep a focus on your physical health. Protect your body as it is your most valuable physical asset. Nutrition, stress reduction, and exercise are key. Read advice from other quiz-takers on improving the body.

Spirit: Your spirit rating seeks to capture in a number that elusive quality which is found in your faith, your attitude, and your philosophy on life. A higher score indicates a greater sense of inner peace and balance. Your spirit score is relatively high, which means you are rewarded by your beliefs. Spirituality is clearly important to do. Never let it slip, and continue to learn and grow. Read advice from other quiz-takers on improving the spirit.

Friends/Family: Your friends and family rating measures your relationships with those around you, and is based on how large, healthy, and dependable your social network is. Your friends and family score suffers, yet it does not need to be this way. Strengthen your social network by reaffirming old bonds. Seek out new friendships, and they will provide you the reward you need. Try using MeetUp.com to find people near you who share your interests.

Love: Your love rating is a measure of your current romantic situation. Sharing your heart with another person is one of life’s most glorious, terrifying, rewarding experiences. Your love score is fairly average. Things could be worse, and thankfully they aren’t. But you must work to improve this area, turn an average score into a great score. Read advice from other quiz-takers on finding and maintaining love.

Finance: Your finance rating is a score that rates your current financial health and stability. You have a rather good financial score, which is not all that common these days. Keep doing what works. Avoid common pitfalls and save for the future. You will be glad you did. Read advice from other quiz-takers on improving your finances.

2. Personal DNA
http://www.personaldna.com/

My Complete Result:
http://www.personaldna.com/report.php?k=ikPtYodDIuissfZ-PF-ACCDD-19a7

Kalafina – Lacrimosa

Lacrimosa
Kalafina

暗闇の中で睦み合う
絶望と未来を
悲しみを暴く月灯り
冷たく照らしてた

君のくれた秘密を標に
蒼い夜の静けさを行く

Lacrimosa
遠く砕けて消えた
眩しい世界をもう一度愛したい
瞳の中に夢を隠して
汚れた心に
涙が堕ちて来るまで

幻の馬車は闇を分け
光のある方へ
夢という罠が僕たちを
焔へ誘う

空の上の無慈悲な神々には
どんな叫びも届きはしない
Lacrimosa

僕らは燃えさかる薪となり
いつかその空を焼き尽くそう

Lacrimosa
ここに生まれて落ちた
血濡れた世界を恐れずに愛したい
許されるより許し信じて
汚れた地上で
涙の日々を数えて

Lacrimosa……

Lacrimosa
Kalafina

绝望与未来
在黑暗中悄然融合
静静映照这一切的冰冷月光之中
悲伤无处藏身

你告诉我的秘密便是我的道标
我将以此为指引穿过苍白夜晚的静寂

Lacrimosa
已然破碎消逝一去不回的灿烂世界
我仍想再一次将它深深爱恋
将梦想藏于眼眸中
直到泪水滴落
失去纯洁的心中之时

幻影的马车冲破黑暗
循着光芒驶去
梦想的陷阱将我们
引向无边烈焰

天空之上冷酷无情的诸神
怎样深切的呼唤都置若罔闻
Lacrimosa

让我们化身为熊熊燃烧的薪柴
终有一天会将这片天空烧尽

Lacrimosa
我所在的这个血染的世界
我仍想不顾一切地将它深深眷恋
与其等待被宽恕我愿意选择宽容与信任
默数在这失去纯洁的大地上
度过的落泪之日

Lacrimosa……

Ending theme 2 of <<黑执事>> (Black Butler).
Lacrimosa: Latin, meaning weeping or sorrow.

深爱 – 水树奈奈

深愛
水樹奈々

雪が舞い散る夜空
二人寄り添い見上げた
繋がる手と手の温もりは とても優しかった

淡いオールドブルーの
雲間に消えて行くでしょう
永遠へと続くはずの あの約束

あなたの側にいるだけで
ただそれだけでよかった
いつの間にか膨らむ
今以上の夢に気付かずに

どんな時も どこにいる時でも
強く強く抱きしめていて
情熱が日常に染まるとしても
あなたへのこの想いはすべて
終わりなどないと信じている
あなただけずっと見つめているの

交わす言葉と時間
姿も変えて行くでしょう
白い頬に溶けたそれは 月の涙

行かないで もう少しだけ
何度も言いかけては
また会えるよねきっと
何度も自分に問いかける

突然走り出した
行く先の違う二人
もう止まらない
沈黙が想像を越え引き裂いて
一つだけ許される願いがあるなら
ごめんねと伝えたいの

いくら思っていても届かない
声にしなきゃ 動き出さなきゃ
隠したままの二人の秘密
このまま忘れられてしまうの
だからね早く 今ここに来て

あなたの側にいるだけで
ただそれだけでよかった
今度巡り会えたら
もっともっと笑いあえるかな?

どんな時も どこにいる時でも
強く強く抱きしめていて
情熱より熱い熱で溶かして
あなたへのこの想いはすべて
終わりなどないと信じている
あなただけずっと見つめているの

深愛
水樹奈奈

雪花飞舞的夜空
我们两人并肩仰望
紧系著的手和手的温暖 非常的和善

在淡薄蔚蓝的
云彩间隙裏消逝了的吧
本应该永远持续的 那个约定

只要守候在你的身边
仅仅如此就心满意足了
不知何时已膨胀起
穿越现实的梦而我却没有察觉

无论身处何时人在何方
我们都能紧紧的互相拥抱
尽管这份热情将生命映的火红
对於你的这个感情就是一切
并坚信著不会有终结的一天
仅仅只有你是我一直想要注视的

交錯的說話與時間
形貌都會改變
在雪白的臉頰上溶化的 月亮的眼淚

請不要走 多留一會吧
無數次欲言又止
一定會再相見吧
無數次自問

突然踏上路途
目的地不同的兩人
已經無法阻止
沉默超越想像
如果只能實現一個願望
想跟你說對不起

單是心裏想 怎樣也不能傳達
要說出口 要行動
再這樣下去
兩人之間隱藏的秘密就要被忘掉
所以請你現在快點來這裏

只要守候在你的身边
仅仅如此就心满意足了
下次跟你碰面的時候
可以相視而笑嗎?

無論何時 無論身處何地
都想你緊緊抱着我
溶掉比熱情更熾熱的熱度
對你的思念就是一切
至今仍相信不會終結
只想永遠注視你