SSTable Explained

Preface

A few years ago, when I read Google's BigTable paper, I did not really grasp the ideas it was trying to convey; I skimmed through it and never paid attention to the concept of an SSTable. Later, after I started following HBase's design and source code, the ideas behind BigTable gradually became clearer to me, but with too much else going on I never set aside time to reread the paper. On our project I was learning HBase and advocating for it, while a colleague was more drawn to Cassandra and focused mainly on its design; the two of us would occasionally compare notes on technology and design. One day he remarked in passing that both Cassandra and HBase store their data in the SSTable format, and I instinctively asked: what exactly is an SSTable? He did not answer. Perhaps it is not something that can be explained in a few sentences, or perhaps he had never asked himself that question. The question kept nagging at me, so now that I have some time to dig into the designs of HBase and Cassandra, I decided to settle it first.

The Definition of SSTable

The best way to explain what the term really means is to go back to its source, so let us reopen the BigTable paper. The paper first describes the SSTable as follows (end of page 3, beginning of page 4):
SSTable

The Google SSTable file format is used internally to store Bigtable data. An SSTable provides a persistent, ordered immutable map from keys to values, where both keys and values are arbitrary byte strings. Operations are provided to look up the value associated with a specified key, and to iterate over all key/value pairs in a specified key range. Internally, each SSTable contains a sequence of blocks (typically each block is 64KB in size, but this is configurable). A block index (stored at the end of the SSTable) is used to locate blocks; the index is loaded into memory when the SSTable is opened. A lookup can be performed with a single disk seek: we first find the appropriate block by performing a binary search in the in-memory index, and then reading the appropriate block from disk. Optionally, an SSTable can be completely mapped into memory, which allows us to perform lookups and scans without touching disk.

A loose, non-literal rendering:
An SSTable is the file format BigTable uses internally to store data. The file itself is a sorted, immutable, persistent map of key/value pairs, where both keys and values can be arbitrary byte strings. You can look up the value for a given key, or iterate over all key/value pairs within a given key range. Each SSTable consists of a sequence of blocks (typically 64KB each, though this is configurable), with a block index at the end of the file used to locate blocks. The index is loaded into memory when the SSTable is opened; a lookup first performs a binary search in the in-memory index to find the right block, and then reads that block with a single disk seek. Alternatively, the entire SSTable can be mapped into memory so that lookups and scans never touch disk.
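To make the lookup path concrete, here is a minimal sketch of an SSTable-style reader, assuming a toy in-memory representation: the block index keeps the first key of every block, a lookup binary-searches that index, and then only the selected block is scanned. The class, its layout, and the data are illustrations, not BigTable's or HBase's actual format.

```python
import bisect
from typing import List, Optional, Tuple

class SSTableReader:
    """Toy SSTable reader: sorted key/value blocks plus an in-memory block index.

    The on-"disk" representation here is just a list of pre-sorted blocks;
    a real SSTable would read blocks from a file at the indexed offsets.
    """

    def __init__(self, blocks: List[List[Tuple[bytes, bytes]]]):
        # Block index: first key of each block, kept in memory once the
        # SSTable is "opened" (mirrors the index stored at the end of the file).
        self._blocks = blocks
        self._index = [block[0][0] for block in blocks]

    def get(self, key: bytes) -> Optional[bytes]:
        # Binary search the in-memory index to pick the one candidate block...
        pos = bisect.bisect_right(self._index, key) - 1
        if pos < 0:
            return None
        # ...then a single "disk seek": scan just that block.
        for k, v in self._blocks[pos]:
            if k == key:
                return v
        return None

# Usage: two 2-entry "blocks" standing in for 64KB blocks.
table = SSTableReader([
    [(b"apple", b"1"), (b"banana", b"2")],
    [(b"cherry", b"3"), (b"date", b"4")],
])
print(table.get(b"cherry"))  # b'3'
print(table.get(b"fig"))     # None
```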

This looks essentially like the first version of the HFile format; here is a diagram to give a feel for it:

In practice, HBase ran into the following problems with this version of HFile (see here):
1. Parsing it used a fairly large amount of memory.
2. The Bloom filter and the block index could grow very large and hurt startup performance. Concretely, the Bloom filter could grow to 100MB per HFile and the block index to 300MB; across the 20 HRegions of a single HRegionServer they could reach 2GB and 6GB respectively. An HRegion must load all of its block indexes into memory when it opens, which slows startup, and on the first request the entire Bloom filter has to be loaded into memory before the lookup can begin, so an oversized Bloom filter increases first-request latency.
HFile version 2 addresses these problems with several optimizations, which will be covered in detail in a separate HFile analysis.

Using SSTables as Storage

Continuing through the BigTable paper, Section 5.3, Tablet Serving, says the following (page 6):
Tablet Serving

Updates are committed to a commit log that stores redo records. Of these updates, the recently committed ones are stored in memory in a sorted buffer called a memtable; the older updates are stored in a sequence of SSTables. To recover a tablet, a tablet server reads its metadata from the METADATA table. This metadata contains the list of SSTables that comprise a tablet and a set of redo points, which are pointers into any commit logs that may contain data for the tablet. The server reads the indices of the SSTables into memory and reconstructs the memtable by applying all of the updates that have committed since the redo points.

When a write operation arrives at a tablet server, the server checks that it is well-formed, and that the sender is authorized to perform the mutation. Authorization is performed by reading the list of permitted writers from a Chubby file (which is almost always a hit in the Chubby client cache). A valid mutation is written to the commit log. Group commit is used to improve the throughput of lots of small mutations [13, 16]. After the write has been committed, its contents are inserted into the memtable. 

When a read operation arrives at a tablet server, it is similarly checked for well-formedness and proper authorization. A valid read operation is executed on a merged view of the sequence of SSTables and the memtable. Since the SSTables and the memtable are lexicographically sorted data structures, the merged view can be formed efficiently.

Incoming read and write operations can continue while tablets are split and merged.

A brief summary (not a translation) of the first and third paragraphs:
When new data is written, the operation is first committed to the commit log as a redo record. The most recently committed data is kept in memory in a sorted buffer called the memtable; older data is stored in a sequence of SSTables. To recover a tablet, the tablet server reads its metadata from the METADATA table; this metadata lists all SSTables that make up the tablet (recording information about them such as their location, StartKey, EndKey, and so on) together with a set of redo points into the commit logs. The tablet server reads the SSTable indexes into memory and replays the updates committed after the redo points to reconstruct the memtable.
On a read, after the well-formedness and authorization checks, the server reads the SSTables and the memtable together (in HBase this also includes data in the BlockCache) and merges their results. Because the SSTables and the memtable are both lexicographically sorted, the merge can be done efficiently.
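As a rough illustration of that merged view (a sketch only, not HBase's or BigTable's code), the read path can be modeled as a k-way merge over several sorted sources, with newer sources taking precedence when the same key appears more than once. The newest-wins ordering and the sample data are assumptions made for the example.

```python
import heapq

def merged_view(*sources):
    """Merge several sorted (key, value) sequences into one sorted view.

    Sources are listed newest first (the memtable, then SSTables from
    newest to oldest), so for duplicate keys the first occurrence wins.
    """
    # Tag each entry with its source's priority (0 = newest) so that ties
    # on key resolve to the newest source; heapq.merge keeps output sorted.
    tagged = [
        [(key, prio, value) for key, value in src]
        for prio, src in enumerate(sources)
    ]
    last_key = object()
    for key, _prio, value in heapq.merge(*tagged):
        if key != last_key:          # older duplicates of a key are shadowed
            yield key, value
            last_key = key

# The memtable (newest) shadows the value of "b" stored in an older SSTable.
memtable = [("b", "new"), ("d", "4")]
sstable1 = [("a", "1"), ("b", "old")]
sstable2 = [("c", "3")]
print(list(merged_view(memtable, sstable1, sstable2)))
# [('a', '1'), ('b', 'new'), ('c', '3'), ('d', '4')]
```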

SSTables During Compaction

Section 5.4, Compactions, of the BigTable paper puts it this way:
Compaction

As write operations execute, the size of the memtable increases. When the memtable size reaches a threshold, the memtable is frozen, a new memtable is created, and the frozen memtable is converted to an SSTable and written to GFS. This minor compaction process has two goals: it shrinks the memory usage of the tablet server, and it reduces the amount of data that has to be read from the commit log during recovery if this server dies. Incoming read and write operations can continue while compactions occur.

Every minor compaction creates a new SSTable. If this behavior continued unchecked, read operations might need to merge updates from an arbitrary number of SSTables. Instead, we bound the number of such files by periodically executing a merging compaction in the background. A merging compaction reads the contents of a few SSTables and the memtable, and writes out a new SSTable. The input SSTables and memtable can be discarded as soon as the compaction has finished.

A merging compaction that rewrites all SSTables into exactly one SSTable is called a major compaction. SSTables produced by non-major compactions can contain special deletion entries that suppress deleted data in older SSTables that are still live. A major compaction, on the other hand, produces an SSTable that contains no deletion information or deleted data. Bigtable cycles through all of its tablets and regularly applies major compactions to them. These major compactions allow Bigtable to reclaim resources used by deleted data, and also allow it to ensure that deleted data disappears from the system in a timely fashion, which is important for services that store sensitive data.

As the memtable grows and reaches a threshold, it is frozen and a new memtable is created to take writes, while the old memtable is converted into an SSTable and written to GFS. This process is called a minor compaction. A minor compaction reduces the tablet server's memory usage and shrinks the amount of commit log that has to be replayed during recovery, since persisted data no longer needs to be recovered from the log. Read and write requests can continue to be served while a minor compaction is in progress.
Every minor compaction creates a new SSTable. If the number of SSTables keeps growing, read performance suffers, because every read has to consult all SSTables and merge their results. The number of SSTables therefore needs an upper bound, and a merging compaction is run periodically in the background: it reads the contents of a few SSTables and the memtable and writes them out as one new SSTable. Once this finishes, the input SSTables and the memtable can be discarded.
A merging compaction that rewrites all SSTables into exactly one SSTable is called a major compaction. A major compaction physically removes deletion markers and the data they suppress, whereas the other two kinds of compaction keep them (in marker form). BigTable periodically cycles through all of its tablets and applies major compactions to them, which reclaims the space used by deleted data and ensures that deleted data disappears from the system in a timely fashion.
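As a rough sketch of how a merging compaction could work (my own toy model, not BigTable's implementation): merge the sorted inputs newest-first, keep only the newest version of each key, and drop deletion entries only when performing a major compaction. Representing a tombstone as a None value is an assumption made for the example.

```python
import heapq

TOMBSTONE = None  # toy stand-in for a deletion entry

def compact(sstables, major=False):
    """Merge several sorted SSTables (newest first) into one new SSTable.

    A minor/merging compaction keeps deletion entries so they can still
    shadow data in older SSTables that are not part of this compaction;
    a major compaction rewrites everything and drops them for good.
    """
    tagged = [
        [(key, prio, value) for key, value in table]
        for prio, table in enumerate(sstables)
    ]
    out, last_key = [], object()
    for key, _prio, value in heapq.merge(*tagged):
        if key == last_key:
            continue                 # older version of the same key: discard
        last_key = key
        if value is TOMBSTONE and major:
            continue                 # major compaction: deleted data vanishes
        out.append((key, value))
    return out

newer = [("a", "a2"), ("b", TOMBSTONE)]   # "b" was deleted recently
older = [("a", "a1"), ("b", "b1"), ("c", "c1")]
print(compact([newer, older]))              # keeps the tombstone for "b"
print(compact([newer, older], major=True))  # [('a', 'a2'), ('c', 'c1')]
```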

SSTable Locality Groups and In-Memory SSTables

In BigTable, locality is defined by locality groups: multiple column families can be combined into one locality group, and within a tablet the column families of the same locality group are stored together in their own SSTable. HBase simplifies this model: each column family in each HRegion is stored in its own HFile, and HFile has no notion of a locality group, or, put another way, each column family is its own locality group.

BigTable also lets you specify, at the locality group level, whether all data of that locality group should be loaded into memory; in HBase this is set when defining a column family. The in-memory loading is lazy, and it is mainly intended for small, frequently accessed column families: it improves read performance because the data no longer has to be read from disk.

SSTable Compression

BigTable's compression is configured at the locality group level:
Compression

Clients can control whether or not the SSTables for a locality group are compressed, and if so, which compression format is used. The user-specified compression format is applied to each SSTable block (whose size is controllable via a locality group specific tuning parameter). Although we lose some space by compressing each block separately, we benefit in that small portions of an SSTable can be read without decompressing the entire file. Many clients use a two-pass custom compression scheme. The first pass uses Bentley and McIlroy’s scheme [6], which compresses long common strings across a large window. The second pass uses a fast compression algorithm that looks for repetitions in a small 16 KB window of the data. Both compression passes are very fast—they encode at 100–200 MB/s, and decode at 400–1000 MB/s on modern machines.

BigTable compresses an SSTable one block at a time. Using each block as the compression unit loses some space, but it lets us read, decompress, and inspect data block by block instead of having to read, decompress, and inspect an entire "large" SSTable each time.
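A small sketch of that trade-off, using zlib as a stand-in (BigTable's actual two-pass scheme and its block format are not reproduced here): each block is compressed independently and recorded with its offset, so a single block can be located and decompressed without touching the rest of the file.

```python
import zlib

def write_blocks(records, block_size=2):
    """Compress fixed-size groups of records independently, as a toy
    stand-in for per-block SSTable compression."""
    blob, index = b"", []
    for i in range(0, len(records), block_size):
        block = "\n".join(records[i:i + block_size]).encode()
        compressed = zlib.compress(block)
        index.append((len(blob), len(compressed)))  # (offset, length)
        blob += compressed
    return blob, index

def read_block(blob, index, block_no):
    """Decompress one block only; the rest of the 'file' stays compressed."""
    offset, length = index[block_no]
    return zlib.decompress(blob[offset:offset + length]).decode().split("\n")

blob, index = write_blocks(["row1:v1", "row2:v2", "row3:v3", "row4:v4"])
print(read_block(blob, index, 1))  # ['row3:v3', 'row4:v4']
```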

Read Caching for SSTables

To improve read performance, BigTable uses two levels of caching:
Caching for read performance

To improve read performance, tablet servers use two levels of caching. The Scan Cache is a higher-level cache that caches the key-value pairs returned by the SSTable interface to the tablet server code. The Block Cache is a lower-level cache that caches SSTables blocks that were read from GFS. The Scan Cache is most useful for applications that tend to read the same data repeatedly. The Block Cache is useful for applications that tend to read data that is close to the data they recently read (e.g., sequential reads, or random reads of different columns in the same locality group within a hot row).

The two levels of caching are (a small sketch follows the list):
1. High level: the Scan Cache, which caches the key/value pairs returned by the SSTable interface. It benefits workloads that tend to read the same data repeatedly (locality of reference).
2. Low level: the Block Cache, which caches SSTable blocks read from GFS. It benefits workloads that tend to read data close to what they recently read.
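A minimal sketch of this layering, assuming plain LRU caches and a fake block store (neither the names nor the sizes come from BigTable or HBase): a lookup is first answered by the key-level cache, then by the block cache, and only when both miss does it go to "disk".

```python
from functools import lru_cache

# Fake on-disk blocks; a real tablet server would read these from GFS.
BLOCKS = {0: {"a": "1", "b": "2"}, 1: {"c": "3", "d": "4"}}

@lru_cache(maxsize=1024)
def read_block(block_no):
    """Block-level cache ("Block Cache"): caches raw block reads."""
    print(f"disk read of block {block_no}")   # visible only on a cache miss
    return tuple(sorted(BLOCKS[block_no].items()))

@lru_cache(maxsize=4096)
def get(key):
    """Key-level cache ("Scan Cache"): caches whole key -> value lookups."""
    block_no = 0 if key < "c" else 1          # trivial stand-in for the block index
    return dict(read_block(block_no)).get(key)

get("a"); get("a")   # second call is served by the scan cache
get("b")             # scan-cache miss, but block 0 is already in the block cache
```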

Bloom Filter

As mentioned earlier, BigTable serves reads as a merge: it has to read the relevant data from every SSTable and combine the results, and consulting every SSTable on every read naturally costs performance. BigTable therefore introduces Bloom filters, which can quickly establish that a row key is definitely not in a given SSTable (note: the converse does not hold; a positive answer only means the key might be there).
Bloom Filter

As described in Section 5.3, a read operation has to read from all SSTables that make up the state of a tablet. If these SSTables are not in memory, we may end up doing many disk accesses. We reduce the number of accesses by allowing clients to specify that Bloom filters [7] should be created for SSTables in a particular locality group. A Bloom filter allows us to ask whether an SSTable might contain any data for a specified row/column pair. For certain applications, a small amount of tablet server memory used for storing Bloom filters drastically reduces the number of disk seeks required for read operations. Our use of Bloom filters also implies that most lookups for non-existent rows or columns do not need to touch disk.
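To illustrate why a negative answer from the filter is definitive while a positive one is only "maybe", here is a tiny self-contained Bloom filter; the bit-array size, the number of hashes, and the salted SHA-1 construction are arbitrary choices for the sketch, not BigTable's parameters.

```python
import hashlib

class BloomFilter:
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits, self.num_hashes = num_bits, num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, key: bytes):
        # Derive k bit positions from salted SHA-1 digests of the key.
        for i in range(self.num_hashes):
            digest = hashlib.sha1(bytes([i]) + key).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, key: bytes):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key: bytes) -> bool:
        # A False here is definitive: the key was never added.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

# One filter per SSTable: a read only touches SSTables whose filter says "maybe".
bf = BloomFilter()
for row in (b"row1", b"row2"):
    bf.add(row)
print(bf.might_contain(b"row1"))   # True
print(bf.might_contain(b"nope"))   # almost certainly False -> skip this SSTable
```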

The Benefits of Making SSTables Immutable

The SSTable definition already states that an SSTable is an immutable, ordered map; this immutability simplifies the system considerably:
Exploiting Immutability

Besides the SSTable caches, various other parts of the Bigtable system have been simplified by the fact that all of the SSTables that we generate are immutable. For example, we do not need any synchronization of accesses to the file system when reading from SSTables. As a result, concurrency control over rows can be implemented very efficiently. The only mutable data structure that is accessed by both reads and writes is the memtable. To reduce contention during reads of the memtable, we make each memtable row copy-on-write and allow reads and writes to proceed in parallel.

Since SSTables are immutable, the problem of permanently removing deleted data is transformed to garbage collecting obsolete SSTables. Each tablet’s SSTables are registered in the METADATA table. The master removes obsolete SSTables as a mark-and-sweep garbage collection [25] over the set of SSTables, where the METADATA table contains the set of roots.

Finally, the immutability of SSTables enables us to split tablets quickly. Instead of generating a new set of SSTables for each child tablet, we let the child tablets share the SSTables of the parent tablet.

The advantages of immutability include:
1. Reads of SSTables need no synchronization. Read/write synchronization only has to be handled in the memtable, and to reduce read/write contention there, BigTable makes each memtable row copy-on-write so that reads and writes can proceed in parallel (see the sketch after this list).
2. Permanently removing deleted data turns into garbage collecting obsolete SSTables. Each tablet's SSTables are registered in the METADATA table, and the master removes obsolete SSTables with a mark-and-sweep garbage collection over the set of SSTables.
3. Tablet splits become fast: instead of creating a new set of SSTables for each child tablet, the child tablets simply share the SSTables of the parent tablet.
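For point 1, here is a small sketch of what a copy-on-write memtable row could look like, assuming a single writer lock and Python dicts as rows (purely illustrative; BigTable and HBase implement this differently and at a much finer grain): a writer builds a new row version and swaps the reference, so readers never see a half-updated row and are never blocked.

```python
import threading

class CowMemtable:
    """Toy copy-on-write memtable: writers never modify a row in place; they
    build a new version and swap the reference, so concurrent readers always
    see a complete (if possibly slightly stale) row."""

    def __init__(self):
        self._rows = {}                        # row key -> dict of columns
        self._write_lock = threading.Lock()    # writers serialize here

    def read(self, row_key):
        # No lock: a reader grabs whatever complete row version is current.
        return self._rows.get(row_key, {})

    def write(self, row_key, column, value):
        with self._write_lock:
            new = dict(self._rows.get(row_key, {}))  # copy-on-write
            new[column] = value
            self._rows[row_key] = new                # atomic reference swap

mt = CowMemtable()
mt.write("row1", "cf:a", "1")
snapshot = mt.read("row1")        # readers are never blocked by writers
mt.write("row1", "cf:b", "2")
print(snapshot)                   # {'cf:a': '1'} -- old version, still intact
print(mt.read("row1"))            # {'cf:a': '1', 'cf:b': '2'}
```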
