Compare commits: b41547ddcb...main (11 commits)

| SHA1 |
|---|
| d7d5c2100d |
| 700f1d3de2 |
| dc6cfda291 |
| 1254018305 |
| 61d7680af8 |
| 83bdade363 |
| 4d90426eee |
| 3240b6a9f2 |
| 8e497ed487 |
| e87a7fba2c |
| 053a8ff77f |
README.md | 27 | Normal file
@@ -0,0 +1,27 @@
# BTRFS explorer - view the layout of your data on disk #

To better understand the on-disk format of BTRFS (one of the main file systems used in Linux) I wrote a parser for it and added a simple web interface. Maybe this can be useful to anyone else trying to understand BTRFS, by letting them look at a concrete example. I also tried to add explanations at some points, but they could be more detailed.

## Online demo ##

The demo uses a 3 GiB image containing a base installation of Arch Linux, plus a snapshot called "freshinstall". The tool offers a few different views:

* [The chunks:](https://florianstecker.net/btrfs/) this shows the layout of the physical disk. It is split into "chunks" which are a few hundred megabytes in size and can contain data or metadata. Click on one of the metadata chunks to see how the B-trees are laid out in them. To see what is actually contained in these trees, use the "root tree" view below.
* [The root tree:](https://florianstecker.net/btrfs/tree/1) this shows the metadata on a higher level: the B-trees form a key-value store which holds all kinds of data like files, directories, subvolumes, checksums, free space etc. The "root tree" is the highest-level node from which you can click through all the other trees. The most interesting one is tree 5 (the filesystem tree), which lets you browse the entire filesystem (although it currently doesn't show the contents of large files).
* [The chunk tree:](https://florianstecker.net/btrfs/tree/3) this tree with the ID 3 is a bit special in that it's not listed in the root tree (it also lives in the system chunks instead of the metadata chunks). It contains the mapping between virtual and physical addresses, providing the "device mapper" and RAID functionality of BTRFS. The "chunks" overview above is really just showing the contents of this tree.

## Using your own data ##

To install this, clone the repository, get [Rust], and run

    cargo build --release

which creates a standalone binary at the path `target/release/btrfs_explorer_bin`.

Now you can just run the `btrfs_explorer_bin` binary. It takes two arguments: the first is a disk image and the second (optional) argument is the address the web server should listen on, in the format `address:port`. The default is `localhost:8080`.

Then visit the address (e.g. `http://localhost:8080`) in a browser to see the chunk view, and `http://localhost:8080/tree/1` for the root tree view, etc.

Ideally it should be possible to use this on block devices, even mounted ones, but at the moment that has a few issues, so I would recommend only looking at images (the tool never writes anything, so it should never destroy any data, but there might be crashes).

[Rust]: https://www.rust-lang.org/
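For reference, a typical invocation following the README instructions (the image name `archlinux.img` below is just a placeholder) would be:

    ./target/release/btrfs_explorer_bin archlinux.img localhost:8080

and then point a browser at `http://localhost:8080`.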
@@ -2,7 +2,7 @@ use std::convert::identity;
|
||||
use std::rc::Rc;
|
||||
use std::ops::{Deref, RangeBounds, Bound};
|
||||
|
||||
use crate::btrfs_structs::{Leaf, Key, Item, InteriorNode, Node, ParseError, ParseBin, Value, Superblock, ItemType, ZERO_KEY, LAST_KEY};
|
||||
use crate::btrfs_structs::{InteriorNode, Item, ItemType, Key, Leaf, Node, ParseBin, ParseError, Superblock, Value, LAST_KEY, ZERO_KEY};
|
||||
use crate::nodereader::NodeReader;
|
||||
|
||||
/// Represents a B-Tree inside a filesystem image. Can be used to look up keys,
|
||||
@@ -32,7 +32,7 @@ impl<'a> Tree<'a> {
|
||||
.filter(|x| x.key.key_id == tree_id && x.key.key_type == ItemType::Root);
|
||||
|
||||
let root_addr_log = match tree_root_item {
|
||||
Some(Item { key: _, value: Value::Root(root)}) => root.bytenr,
|
||||
Some(Item { key: _, range: _, value: Value::Root(root)}) => root.bytenr,
|
||||
_ => return Err("root item not found or invalid".into())
|
||||
};
|
||||
|
||||
@@ -119,7 +119,7 @@ impl Tree<'_> {
|
||||
|
||||
/***** iterator *****/
|
||||
|
||||
pub struct RangeIter<'a, 'b> {
|
||||
pub struct RangeIterWithAddr<'a, 'b> {
|
||||
tree: &'b Tree<'a>,
|
||||
|
||||
start: Bound<Key>,
|
||||
@@ -128,14 +128,16 @@ pub struct RangeIter<'a, 'b> {
|
||||
backward_skip_fn: Box<dyn Fn(Key) -> Key>,
|
||||
}
|
||||
|
||||
pub struct RangeIter<'a, 'b> (RangeIterWithAddr<'a, 'b>);
|
||||
|
||||
impl<'a> Tree<'a> {
|
||||
/// Given a tree, a range of indices, and two "skip functions", produces a double
|
||||
/// ended iterator which iterates through the keys contained in the range, in ascending
|
||||
/// or descending order.
|
||||
///
|
||||
/// The skip functions are ignored for now, but are intended as an optimization:
|
||||
/// after a key `k` was returned by the iterator (or the reverse iterator), all keys
|
||||
/// strictly lower than `forward_skip_fn(k)` are skipped (resp. all keys strictly above
|
||||
/// The skip functions make it possible to efficiently iterate only through certain types of items.
|
||||
/// After a key `k` was returned by the iterator (or the reverse iterator), all keys
|
||||
/// lower or equal `forward_skip_fn(k)` are skipped (resp. all keys higher or equal
|
||||
/// `backward_skip_fn(k)` are skipped.
|
||||
///
|
||||
/// If `forward_skip_fn` and `backward_skip_fn` are the identity, nothing is skipped
|
||||
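As an aside (not part of this diff), a minimal sketch of the skip-function idea, using a simplified stand-in for the crate's `Key` type:

```rust
// Simplified stand-in for btrfs_structs::Key, for illustration only.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct Key { id: u64, ty: u8, offset: u64 }

// A possible forward skip function: after an item with key `k` is returned,
// all keys up to and including `skip_rest_of_id(k)` are skipped, so the
// iterator visits at most one item per key id.
fn skip_rest_of_id(k: Key) -> Key {
    Key { id: k.id, ty: u8::MAX, offset: u64::MAX }
}

fn main() {
    let k = Key { id: 256, ty: 0x54, offset: 3 };
    println!("{:?}", skip_rest_of_id(k));
}
```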
@@ -144,33 +146,36 @@ impl<'a> Tree<'a> {
|
||||
R: RangeBounds<Key>,
|
||||
F1: Fn(Key) -> Key + 'static,
|
||||
F2: Fn(Key) -> Key + 'static {
|
||||
RangeIter {
|
||||
RangeIter(RangeIterWithAddr {
|
||||
tree: self,
|
||||
start: range.start_bound().cloned(),
|
||||
end: range.end_bound().cloned(),
|
||||
forward_skip_fn: Box::new(forward_skip_fn),
|
||||
backward_skip_fn: Box::new(backward_skip_fn),
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
pub fn range<'b, R: RangeBounds<Key>>(&'b self, range: R) -> RangeIter<'a, 'b> {
|
||||
RangeIter {
|
||||
tree: self,
|
||||
start: range.start_bound().cloned(),
|
||||
end: range.end_bound().cloned(),
|
||||
forward_skip_fn: Box::new(identity),
|
||||
backward_skip_fn: Box::new(identity),
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
pub fn iter<'b>(&'b self) -> RangeIter<'a, 'b> {
|
||||
RangeIter {
|
||||
RangeIter(RangeIterWithAddr {
|
||||
tree: self,
|
||||
start: Bound::Unbounded,
|
||||
end: Bound::Unbounded,
|
||||
forward_skip_fn: Box::new(identity),
|
||||
backward_skip_fn: Box::new(identity),
|
||||
})
|
||||
}
|
||||
|
||||
pub fn range<'b, R: RangeBounds<Key>>(&'b self, range: R) -> RangeIter<'a, 'b> {
|
||||
RangeIter(self.range_with_node_addr(range))
|
||||
}
|
||||
|
||||
pub fn range_with_node_addr<'b, R: RangeBounds<Key>>(&'b self, range: R) -> RangeIterWithAddr<'a, 'b> {
|
||||
RangeIterWithAddr {
|
||||
tree: self,
|
||||
start: range.start_bound().cloned(),
|
||||
end: range.end_bound().cloned(),
|
||||
forward_skip_fn: Box::new(identity),
|
||||
backward_skip_fn: Box::new(identity),
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -178,19 +183,19 @@ impl<'a> Tree<'a> {
|
||||
|
||||
/// Get the first item under the node at logical address `addr`.
|
||||
/// This function panics if there are no items
|
||||
fn get_first_item(tree: &Tree, addr: u64) -> Result<Item, ParseError> {
|
||||
fn get_first_item(tree: &Tree, addr: u64) -> Result<(Item, u64), ParseError> {
|
||||
match tree.reader.get_node(addr)?.deref() {
|
||||
Node::Interior(intnode) => get_first_item(tree, intnode.children[0].ptr),
|
||||
Node::Leaf(leafnode) => Ok(leafnode.items[0].clone()),
|
||||
Node::Leaf(leafnode) => Ok((leafnode.items[0].clone(), addr)),
|
||||
}
|
||||
}
|
||||
|
||||
/// Get the last item under the node at logical address `addr`.
|
||||
/// This function panics if there are no items
|
||||
fn get_last_item(tree: &Tree, addr: u64) -> Result<Item, ParseError> {
|
||||
fn get_last_item(tree: &Tree, addr: u64) -> Result<(Item, u64), ParseError> {
|
||||
match tree.reader.get_node(addr)?.deref() {
|
||||
Node::Interior(intnode) => get_last_item(tree, intnode.children.last().unwrap().ptr),
|
||||
Node::Leaf(leafnode) => Ok(leafnode.items.last().unwrap().clone()),
|
||||
Node::Leaf(leafnode) => Ok((leafnode.items.last().unwrap().clone(), addr)),
|
||||
}
|
||||
}
|
||||
|
||||
@@ -201,7 +206,7 @@ enum FindKeyMode {LT, GT, GE, LE}
|
||||
/// the "closest" match. The exact meaning of "closest" is given by the `mode` argument:
|
||||
/// If `mode` is `LT`/`GT`/`GE`/`LE`, return the item with the greatest / least / least / greatest
/// key less than / greater than / greater or equal to / less or equal to `key`.
|
||||
fn find_closest_key(tree: &Tree, key: Key, mode: FindKeyMode) -> Result<Option<Item>, ParseError> {
|
||||
fn find_closest_key(tree: &Tree, key: Key, mode: FindKeyMode) -> Result<Option<(Item, u64)>, ParseError> {
|
||||
|
||||
// in some cases, this task can't be accomplished by a single traversal
|
||||
// but we might have to go back up the tree; prev/next allow us to quickly go back to the right node
|
||||
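To illustrate the four modes (this example is not from the repository; plain integers stand in for keys):

```rust
// Behaviour of the four FindKeyMode variants on a sorted slice of integers.
fn main() {
    let keys = [2, 5, 8];
    let query = 5;
    let lt = keys.iter().rev().find(|&&k| k < query);  // Some(2): greatest key strictly below query
    let le = keys.iter().rev().find(|&&k| k <= query); // Some(5): greatest key <= query
    let ge = keys.iter().find(|&&k| k >= query);       // Some(5): least key >= query
    let gt = keys.iter().find(|&&k| k > query);        // Some(8): least key strictly above query
    println!("LT={:?} LE={:?} GE={:?} GT={:?}", lt, le, ge, gt);
}
```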
@@ -244,14 +249,14 @@ fn find_closest_key(tree: &Tree, key: Key, mode: FindKeyMode) -> Result<Option<I
|
||||
match leafnode.find_key_or_previous(key) {
|
||||
Some(idx) => {
|
||||
// the standard case, we found a key `k` with the guarantee that `k <= key`
|
||||
let Item {key: k, value: v} = leafnode.items[idx].clone();
|
||||
let it = leafnode.items[idx].clone();
|
||||
|
||||
if mode == FindKeyMode::LE || mode == FindKeyMode::LT && k < key || mode == FindKeyMode::GE && k == key {
|
||||
return Ok(Some(Item {key: k, value: v}))
|
||||
} else if mode == FindKeyMode::LT && k == key {
|
||||
if mode == FindKeyMode::LE || mode == FindKeyMode::LT && it.key < key || mode == FindKeyMode::GE && it.key == key {
|
||||
return Ok(Some((it, current)))
|
||||
} else if mode == FindKeyMode::LT && it.key == key {
|
||||
// prev
|
||||
if idx > 0 {
|
||||
return Ok(Some(leafnode.items[idx-1].clone()));
|
||||
return Ok(Some((leafnode.items[idx-1].clone(), current)));
|
||||
} else {
|
||||
// use prev
|
||||
if let Some(addr) = prev {
|
||||
@@ -263,7 +268,7 @@ fn find_closest_key(tree: &Tree, key: Key, mode: FindKeyMode) -> Result<Option<I
|
||||
} else {
|
||||
// next
|
||||
if let Some(item) = leafnode.items.get(idx+1) {
|
||||
return Ok(Some(item.clone()));
|
||||
return Ok(Some((item.clone(), current)));
|
||||
} else {
|
||||
// use next
|
||||
if let Some(addr) = next {
|
||||
@@ -280,7 +285,7 @@ fn find_closest_key(tree: &Tree, key: Key, mode: FindKeyMode) -> Result<Option<I
|
||||
return Ok(None);
|
||||
} else {
|
||||
// return the first item in tree if it exists
|
||||
return Ok(leafnode.items.get(0).map(|x|x.clone()));
|
||||
return Ok(leafnode.items.get(0).map(|x|(x.clone(), current)));
|
||||
}
|
||||
},
|
||||
}
|
||||
@@ -299,10 +304,10 @@ fn range_valid<T: Ord>(start: Bound<T>, end: Bound<T>) -> bool {
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a, 'b> Iterator for RangeIter<'a, 'b> {
|
||||
type Item = Item;
|
||||
impl<'a, 'b> Iterator for RangeIterWithAddr<'a, 'b> {
|
||||
type Item = (Item, u64);
|
||||
|
||||
fn next(&mut self) -> Option<Item> {
|
||||
fn next(&mut self) -> Option<Self::Item> {
|
||||
if !range_valid(self.start.as_ref(), self.end.as_ref()) {
|
||||
return None;
|
||||
}
|
||||
@@ -317,11 +322,11 @@ impl<'a, 'b> Iterator for RangeIter<'a, 'b> {
|
||||
let result = find_closest_key(self.tree, start_key, mode)
|
||||
.expect("file system should be consistent (or this is a bug)");
|
||||
|
||||
if let Some(item) = &result {
|
||||
if let Some((item, _)) = &result {
|
||||
self.start = Bound::Excluded((self.forward_skip_fn)(item.key));
|
||||
}
|
||||
|
||||
let end_filter = |item: &Item| {
|
||||
let end_filter = |(item, _): &(Item, u64)| {
|
||||
match &self.end {
|
||||
&Bound::Included(x) => item.key <= x,
|
||||
&Bound::Excluded(x) => item.key < x,
|
||||
@@ -335,8 +340,8 @@ impl<'a, 'b> Iterator for RangeIter<'a, 'b> {
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a, 'b> DoubleEndedIterator for RangeIter<'a, 'b> {
|
||||
fn next_back(&mut self) -> Option<Item> {
|
||||
impl<'a, 'b> DoubleEndedIterator for RangeIterWithAddr<'a, 'b> {
|
||||
fn next_back(&mut self) -> Option<Self::Item> {
|
||||
if !range_valid(self.start.as_ref(), self.end.as_ref()) {
|
||||
return None;
|
||||
}
|
||||
@@ -350,11 +355,11 @@ impl<'a, 'b> DoubleEndedIterator for RangeIter<'a, 'b> {
|
||||
let result = find_closest_key(self.tree, start_key, mode)
|
||||
.expect("file system should be consistent (or this is a bug)");
|
||||
|
||||
if let Some(item) = &result {
|
||||
if let Some((item,_)) = &result {
|
||||
self.end = Bound::Excluded((self.backward_skip_fn)(item.key));
|
||||
}
|
||||
|
||||
let start_filter = |item: &Item| {
|
||||
let start_filter = |(item, _): &(Item, u64)| {
|
||||
match &self.start {
|
||||
&Bound::Included(x) => item.key >= x,
|
||||
&Bound::Excluded(x) => item.key > x,
|
||||
@@ -367,3 +372,17 @@ impl<'a, 'b> DoubleEndedIterator for RangeIter<'a, 'b> {
|
||||
.map(|item|item.clone())
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a, 'b> Iterator for RangeIter<'a, 'b> {
|
||||
type Item = Item;
|
||||
|
||||
fn next(&mut self) -> Option<Self::Item> {
|
||||
self.0.next().map(|x|x.0)
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a, 'b> DoubleEndedIterator for RangeIter<'a, 'b> {
|
||||
fn next_back(&mut self) -> Option<Self::Item> {
|
||||
self.0.next_back().map(|x|x.0)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -4,6 +4,7 @@ use std::fmt;
|
||||
use std::error;
|
||||
use std::ffi::CString;
|
||||
|
||||
|
||||
/***** BTRFS structures *****/
|
||||
|
||||
pub const NODE_SIZE: usize = 0x4000;
|
||||
@@ -29,13 +30,13 @@ pub enum ItemType {
|
||||
Root = 0x84, // implemented
|
||||
RootBackRef = 0x90, // implemented
|
||||
RootRef = 0x9c, // implemented
|
||||
Extent = 0xa8, // implemented (with only one version of extra data!!)
|
||||
Metadata = 0xa9, // implemented (with only one version of extra data!!)
|
||||
TreeBlockRef = 0xb0,
|
||||
ExtentDataRef = 0xb2,
|
||||
Extent = 0xa8, // implemented
|
||||
Metadata = 0xa9, // implemented
|
||||
TreeBlockRef = 0xb0, // implemented (inside ExtentItem)
|
||||
ExtentDataRef = 0xb2, // implemented (inside ExtentItem)
|
||||
ExtentRefV0 = 0xb4,
|
||||
SharedBlockRef = 0xb6,
|
||||
SharedDataRef = 0xb8,
|
||||
SharedBlockRef = 0xb6, // implemented (inside ExtentItem)
|
||||
SharedDataRef = 0xb8, // implemented (inside ExtentItem)
|
||||
BlockGroup = 0xc0, // implemented
|
||||
FreeSpaceInfo = 0xc6, // implemented
|
||||
FreeSpaceExtent = 0xc7, // implemented
|
||||
@@ -93,8 +94,9 @@ pub enum Value {
|
||||
Dev(DevItem),
|
||||
DevExtent(DevExtentItem),
|
||||
ExtentData(ExtentDataItem),
|
||||
Ref(RefItem),
|
||||
RootRef(RootRefItem),
|
||||
Ref(Vec<RefItem>),
|
||||
RootRef(Vec<RootRefItem>),
|
||||
Checksum(Vec<u32>),
|
||||
Unknown(Vec<u8>),
|
||||
}
|
||||
|
||||
@@ -103,6 +105,7 @@ pub enum Value {
|
||||
#[derive(Debug,Clone)]
|
||||
pub struct Item {
|
||||
pub key: Key,
|
||||
pub range: (u32, u32), // start and end offset within node
|
||||
pub value: Value,
|
||||
}
|
||||
|
||||
@@ -214,7 +217,7 @@ pub struct BlockGroupItem {
|
||||
}
|
||||
|
||||
#[allow(unused)]
|
||||
#[derive(Debug,Clone)]
|
||||
#[derive(Debug,Clone,ParseBin)]
|
||||
pub struct ExtentItem {
|
||||
pub refs: u64,
|
||||
pub generation: u64,
|
||||
@@ -222,9 +225,16 @@ pub struct ExtentItem {
|
||||
// pub data: Vec<u8>,
|
||||
|
||||
// this is only correct if flags == 2, fix later!
|
||||
pub block_refs: Vec<(ItemType, u64)>,
|
||||
// pub tree_block_key_type: ItemType,
|
||||
// pub tree_block_key_id: u64,
|
||||
pub block_refs: Vec<BlockRef>,
|
||||
}
|
||||
|
||||
#[allow(unused)]
|
||||
#[derive(Debug,Clone)]
|
||||
pub enum BlockRef {
|
||||
Tree { id: u64, },
|
||||
ExtentData { root: u64, id: u64, offset: u64, count: u32, },
|
||||
SharedData { offset: u64, count: u32, },
|
||||
SharedBlockRef { offset: u64 },
|
||||
}
|
||||
|
||||
#[allow(unused)]
|
||||
@@ -472,7 +482,7 @@ impl From<&str> for ParseError {
|
||||
}
|
||||
}
|
||||
|
||||
pub trait ParseBin where Self: Sized {
|
||||
pub trait ParseBin: Sized {
|
||||
fn parse_len(bytes: &[u8]) -> Result<(Self, usize), ParseError>;
|
||||
|
||||
fn parse(bytes: &[u8]) -> Result<Self, ParseError> {
|
||||
@@ -534,12 +544,31 @@ impl<const N: usize> ParseBin for [u8; N] {
|
||||
}
|
||||
|
||||
// we use Vec<u8> for "unknown extra data", so just eat up everything
|
||||
|
||||
impl ParseBin for Vec<u8> {
|
||||
fn parse_len(bytes: &[u8]) -> Result<(Self, usize), ParseError> {
|
||||
Ok((Vec::from(bytes), bytes.len()))
|
||||
}
|
||||
}
|
||||
|
||||
trait ParseBinVecFallback: ParseBin { }
|
||||
impl ParseBinVecFallback for BlockRef { }
|
||||
|
||||
impl<T: ParseBinVecFallback> ParseBin for Vec<T> {
|
||||
fn parse_len(bytes: &[u8]) -> Result<(Self, usize), ParseError> {
|
||||
let mut result: Vec<T> = Vec::new();
|
||||
let mut offset: usize = 0;
|
||||
|
||||
while offset < bytes.len() {
|
||||
let (item, len) = <T as ParseBin>::parse_len(&bytes[offset..])?;
|
||||
result.push(item);
|
||||
offset += len;
|
||||
}
|
||||
|
||||
Ok((result, bytes.len()))
|
||||
}
|
||||
}
|
||||
|
||||
impl ParseBin for CString {
|
||||
fn parse_len(bytes: &[u8]) -> Result<(Self, usize), ParseError> {
|
||||
let mut chars = Vec::from(bytes);
|
||||
@@ -617,14 +646,7 @@ impl ParseBin for Node {
|
||||
let value = match key.key_type {
|
||||
ItemType::BlockGroup =>
|
||||
Value::BlockGroup(parse_check_size(data_slice)?),
|
||||
ItemType::Metadata => {
|
||||
let item: ExtentItem = parse_check_size(data_slice)?;
|
||||
if item.flags != 2 || item.refs > 1 {
|
||||
println!("Metadata item with refs = {}, flags = {}, data = {:x?}", item.refs, item.flags, &data_slice[0x18..]);
|
||||
}
|
||||
Value::Extent(item)
|
||||
},
|
||||
ItemType::Extent =>
|
||||
ItemType::Extent | ItemType::Metadata =>
|
||||
Value::Extent(parse_check_size(data_slice)?),
|
||||
ItemType::Inode =>
|
||||
Value::Inode(parse_check_size(data_slice)?),
|
||||
@@ -649,17 +671,39 @@ impl ParseBin for Node {
|
||||
ItemType::ExtentData =>
|
||||
Value::ExtentData(parse_check_size(data_slice)?),
|
||||
ItemType::Ref => {
|
||||
Value::Ref(parse_check_size(data_slice)?)
|
||||
let mut result: Vec<RefItem> = vec![];
|
||||
let mut item_offset = 0;
|
||||
|
||||
while item_offset < data_slice.len() {
|
||||
let (item, len) = RefItem::parse_len(&data_slice[item_offset..])?;
|
||||
result.push(item);
|
||||
item_offset += len;
|
||||
}
|
||||
Value::Ref(result)
|
||||
}
|
||||
ItemType::RootRef =>
|
||||
Value::RootRef(parse_check_size(data_slice)?),
|
||||
ItemType::RootBackRef =>
|
||||
Value::RootRef(parse_check_size(data_slice)?),
|
||||
ItemType::RootRef | ItemType::RootBackRef => {
|
||||
let mut result: Vec<RootRefItem> = vec![];
|
||||
let mut item_offset = 0;
|
||||
|
||||
while item_offset < data_slice.len() {
|
||||
let (item, len) = RootRefItem::parse_len(&data_slice[item_offset..])?;
|
||||
result.push(item);
|
||||
item_offset += len;
|
||||
}
|
||||
Value::RootRef(result)
|
||||
},
|
||||
ItemType::ExtentCsum => {
|
||||
let mut checksums: Vec<u32> = Vec::new();
|
||||
for i in 0..data_slice.len()/4 {
|
||||
checksums.push(u32::from_le_bytes(data_slice[i*4 .. (i+1)*4].try_into().unwrap()));
|
||||
}
|
||||
Value::Checksum(checksums)
|
||||
},
|
||||
_ =>
|
||||
Value::Unknown(Vec::from(data_slice)),
|
||||
};
|
||||
|
||||
items.push(Item { key, value });
|
||||
items.push(Item { key, range: (0x65 + offset, 0x65 + offset + size), value });
|
||||
}
|
||||
|
||||
Ok((Node::Leaf(Leaf { header, items }), NODE_SIZE))
|
||||
@@ -691,18 +735,19 @@ impl ParseBin for ExtentDataItem {
|
||||
let (header, header_size) = ExtentDataHeader::parse_len(bytes)?;
|
||||
if header.extent_type == 1 { // external extent
|
||||
let (body, body_size) = ExternalExtent::parse_len(&bytes[header_size..])?;
|
||||
return Ok((ExtentDataItem { header: header, data: ExtentDataBody::External(body)},
|
||||
Ok((ExtentDataItem { header, data: ExtentDataBody::External(body)},
|
||||
header_size + body_size))
|
||||
} else { // inline extent
|
||||
let data_slice = &bytes[header_size..];
|
||||
return Ok((ExtentDataItem {
|
||||
header: header,
|
||||
Ok((ExtentDataItem {
|
||||
header,
|
||||
data: ExtentDataBody::Inline(Vec::from(data_slice))
|
||||
}, header_size + data_slice.len()))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
impl ParseBin for ExtentItem {
|
||||
fn parse_len(bytes: &[u8]) -> Result<(Self, usize), ParseError> {
|
||||
let refs = u64::parse(bytes)?;
|
||||
@@ -722,6 +767,35 @@ impl ParseBin for ExtentItem {
|
||||
Ok((ExtentItem { refs, generation, flags, block_refs }, 0x18 + refs as usize * 0x09))
|
||||
}
|
||||
}
|
||||
*/
|
||||
|
||||
impl ParseBin for BlockRef {
|
||||
fn parse_len(bytes: &[u8]) -> Result<(Self, usize), ParseError> {
|
||||
match ItemType::parse(bytes)? {
|
||||
ItemType::ExtentDataRef => {
|
||||
let root = u64::parse(&bytes[0x01..])?;
|
||||
let id = u64::parse(&bytes[0x09..])?;
|
||||
let offset = u64::parse(&bytes[0x11..])?;
|
||||
let count = u32::parse(&bytes[0x19..])?;
|
||||
Ok((BlockRef::ExtentData { root, id, offset, count }, 0x1d))
|
||||
},
|
||||
ItemType::SharedDataRef => {
|
||||
let offset = u64::parse(&bytes[0x01..])?;
|
||||
let count = u32::parse(&bytes[0x09..])?;
|
||||
Ok((BlockRef::SharedData { offset, count }, 0x0d))
|
||||
},
|
||||
ItemType::TreeBlockRef => {
|
||||
let id = u64::parse(&bytes[0x01..])?;
|
||||
Ok((BlockRef::Tree { id }, 0x09))
|
||||
},
|
||||
ItemType::SharedBlockRef => {
|
||||
let offset = u64::parse(&bytes[0x01..])?;
|
||||
Ok((BlockRef::SharedBlockRef { offset }, 0x09))
|
||||
}
|
||||
x => err!("unknown block ref type: {:?}", x)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/***** prettier debug output for UUIDs and checksums *****/
|
||||
|
||||
|
||||
btrfs_explorer/src/http_chunk.rs | 327 | Normal file
@@ -0,0 +1,327 @@
|
||||
use maud::{Markup, html};
|
||||
use rouille::{Request, Response};
|
||||
use crate::{
|
||||
btrfs_lookup::Tree, btrfs_structs::{self, BlockRef, ItemType, TreeID, Value}, key, main_error::MainError, render_common::{render_page, size_name, http_path}
|
||||
};
|
||||
|
||||
struct ChunkLineDisplay {
|
||||
logical_address: Option<u64>,
|
||||
link: bool,
|
||||
physical_address: u64,
|
||||
size: u64,
|
||||
description: String,
|
||||
}
|
||||
|
||||
struct ChunkResult {
|
||||
pub offset: u64,
|
||||
pub refs: Vec<Vec<u64>>,
|
||||
pub color_special: bool,
|
||||
}
|
||||
|
||||
pub fn http_allchunks(image: &[u8], _req: &Request) -> Result<Response, MainError> {
|
||||
let tree = Tree::chunk(image)?;
|
||||
|
||||
let mut chunks: Vec<ChunkLineDisplay> = Vec::new();
|
||||
|
||||
for item in tree.iter() {
|
||||
let Value::Chunk(chunk_item) = &item.value else { continue; };
|
||||
|
||||
for stripe in &chunk_item.stripes {
|
||||
if stripe.devid != 1 {
|
||||
println!("multiple devices not supported!");
|
||||
continue;
|
||||
}
|
||||
|
||||
let desc = match chunk_item.chunktype & 0x7 {
|
||||
1 => format!("data chunk"),
|
||||
2 => format!("system chunk"),
|
||||
4 => format!("metadata chunk"),
|
||||
_ => format!("(unknown chunk type)"),
|
||||
};
|
||||
|
||||
chunks.push(ChunkLineDisplay {
|
||||
logical_address: Some(item.key.key_offset),
|
||||
link: chunk_item.chunktype & 0x7 != 1,
|
||||
physical_address: stripe.offset,
|
||||
size: chunk_item.size,
|
||||
description: desc,
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
chunks.sort_by_key(|x|x.physical_address);
|
||||
|
||||
let mut chunks_filled: Vec<ChunkLineDisplay> = Vec::with_capacity(chunks.len());
|
||||
|
||||
let mut total_size: u64 = 0;
|
||||
for x in chunks {
|
||||
if total_size > x.physical_address {
|
||||
// not so good
|
||||
} else if total_size < x.physical_address {
|
||||
chunks_filled.push(ChunkLineDisplay {
|
||||
logical_address: None,
|
||||
link: false,
|
||||
physical_address: total_size,
|
||||
size: x.physical_address - total_size,
|
||||
description: format!("unassigned"),
|
||||
});
|
||||
}
|
||||
total_size = x.physical_address + x.size;
|
||||
|
||||
chunks_filled.push(x);
|
||||
}
|
||||
|
||||
Ok(Response::html(render_allchunks(chunks_filled)))
|
||||
}
|
||||
|
||||
pub fn http_chunk(image: &[u8], offset: &str, _req: &Request) -> Result<Response, MainError> {
|
||||
let logical_offset: u64 = u64::from_str_radix(offset, 16)?;
|
||||
|
||||
let tree = Tree::new(image, TreeID::Extent)?;
|
||||
|
||||
let start = key!(logical_offset, ItemType::BlockGroup, 0);
|
||||
let end = key!(logical_offset, ItemType::BlockGroup, u64::MAX);
|
||||
if let Some(bg) = tree.range(start..=end).next() {
|
||||
// let extent_list: Vec<(u64, u64, Vec<u64>)> = Vec::new();
|
||||
let blockgroup_size = bg.key.key_offset;
|
||||
|
||||
// we'll just assume for now this is metadata, for every node we store the referencing trees
|
||||
let nr_nodes = (blockgroup_size >> 14) as usize;
|
||||
let mut node_list: Vec<Vec<u64>> = Vec::with_capacity(nr_nodes);
|
||||
|
||||
let start = key!(logical_offset, ItemType::Invalid, 0);
|
||||
let end = key!(logical_offset + blockgroup_size, ItemType::Invalid, 0);
|
||||
|
||||
for item in tree.range(start..end) {
|
||||
let Value::Extent(extent_item) = item.value else { continue };
|
||||
|
||||
if item.key.key_type == ItemType::Metadata {
|
||||
let index = ((item.key.key_id - logical_offset) >> 14) as usize;
|
||||
|
||||
let process_ref = |rf: &BlockRef| {
|
||||
match rf {
|
||||
&BlockRef::Tree { id } => Some(id),
|
||||
_ => None,
|
||||
}
|
||||
};
|
||||
let refs: Vec<u64> = extent_item.block_refs.iter().filter_map(process_ref).collect();
|
||||
|
||||
while node_list.len() < index {
|
||||
node_list.push(Vec::new());
|
||||
}
|
||||
node_list.push(refs);
|
||||
}
|
||||
}
|
||||
|
||||
while node_list.len() < nr_nodes {
|
||||
node_list.push(Vec::new());
|
||||
}
|
||||
|
||||
let chunk_data = ChunkResult {
|
||||
offset: logical_offset,
|
||||
refs: node_list,
|
||||
color_special: true,
|
||||
};
|
||||
return Ok(Response::html(render_chunk(chunk_data)))
|
||||
}
|
||||
|
||||
err!("No block group found at logical address {:x}", logical_offset)
|
||||
}
|
||||
|
||||
// const COLORS: &[&str] = &["lightgray", "#e6194b", "#3cb44b", "#ffe119", "#4363d8", "#f58231", "#911eb4", "#46f0f0", "#f032e6", "#bcf60c", "#fabebe", "#008080", "#e6beff", "#9a6324", "#fffac8", "#800000", "#aaffc3", "#808000", "#ffd8b1", "#000075", "#808080", "#000000"];
|
||||
|
||||
fn render_chunk(data: ChunkResult) -> Markup {
|
||||
let header = format!("Metadata nodes in chunk at address {:x}", data.offset);
|
||||
|
||||
let boxes: Vec<Vec<&str>> = data.refs
|
||||
.chunks(64)
|
||||
.map(|row|row.iter()
|
||||
.map(|noderefs| {
|
||||
if noderefs.len() == 0 {
|
||||
"lightgrey"
|
||||
} else if noderefs[0] == 1 { // root
|
||||
if data.color_special {"darkgreen"} else {"black"}
|
||||
} else if noderefs[0] == 2 { // extent
|
||||
if data.color_special {"orange"} else {"black"}
|
||||
} else if noderefs[0] == 3 { // extent
|
||||
if data.color_special {"brown"} else {"black"}
|
||||
} else if noderefs[0] == 4 { // device
|
||||
if data.color_special {"cyan"} else {"black"}
|
||||
} else if noderefs[0] == 7 { // checksum
|
||||
if data.color_special {"magenta"} else {"black"}
|
||||
} else if noderefs[0] == 9 { // uuid
|
||||
if data.color_special {"yellow"} else {"black"}
|
||||
} else if noderefs[0] == 10 { // free space
|
||||
if data.color_special {"#00ff00"} else {"black"}
|
||||
} else if noderefs[0] < 0x100 && noderefs[0] != 5 || noderefs[0] > u64::MAX - 0x100 {
|
||||
if data.color_special {"red"} else {"black"}
|
||||
} else if noderefs.len() == 1 {
|
||||
if noderefs[0] == 5 {
|
||||
if data.color_special {"black"} else {"darkgreen"}
|
||||
} else {
|
||||
if data.color_special {"black"} else {"blue"}
|
||||
}
|
||||
} else {
|
||||
if data.color_special {"black"} else {
|
||||
"conic-gradient(blue 0deg 45deg, darkgreen 45deg 225deg, blue 225deg 360deg)"
|
||||
}
|
||||
}
|
||||
})
|
||||
.collect())
|
||||
.collect();
|
||||
|
||||
let content = html! {
|
||||
h1 {
|
||||
(header)
|
||||
}
|
||||
|
||||
details open {
|
||||
summary { "Explanation" }
|
||||
(explanation_chunk())
|
||||
}
|
||||
|
||||
br {}
|
||||
|
||||
table.blocks {
|
||||
@for row in boxes {
|
||||
tr {
|
||||
@for cell in row {
|
||||
td style={"background:" (cell) ";"} {}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
render_page(&header, content)
|
||||
}
|
||||
|
||||
fn render_allchunks(data: Vec<ChunkLineDisplay>) -> Markup {
|
||||
let content = html! {
|
||||
h1 {
|
||||
"Physical disk layout"
|
||||
}
|
||||
|
||||
details open {
|
||||
summary { "Explanation" }
|
||||
(explanation_allchunks())
|
||||
}
|
||||
|
||||
br {}
|
||||
|
||||
@for bg in data {
|
||||
div.item {
|
||||
span.key.key_offset {
|
||||
(format!("{:x}", bg.physical_address))
|
||||
}
|
||||
@if let Some(addr) = bg.logical_address {
|
||||
span.key.key_offset {
|
||||
(format!("{:x}", addr))
|
||||
}
|
||||
}
|
||||
span.itemvalue {
|
||||
@if bg.link {
|
||||
a href=(format!("{}/chunk/{:x}", http_path(), bg.logical_address.unwrap())) {
|
||||
(bg.description)
|
||||
}
|
||||
", "
|
||||
(size_name(bg.size))
|
||||
} @else {
|
||||
(bg.description) ", " (size_name(bg.size))
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
render_page("Physical disk layout", content)
|
||||
}
|
||||
|
||||
fn explanation_allchunks() -> Markup {
|
||||
html! {
|
||||
p {
|
||||
"This shows the on-disk format of a BTRFS file system on the most zoomed-out level. BTRFS includes a device mapper functionality where a single logical file system can be spread over multiple physical disks, and a logical block can have multiple copies of it stored on disk. Here we assume the file system has only one physical disk and there are no RAID features enabled."
|
||||
}
|
||||
p {
|
||||
"This page shows the layout of the phyiscal disk. It is organized in \"chunks\", typically a few megabytes to a gigabyte in size, with possibly some unassigned space in between. There are three types of chunks:"
|
||||
}
|
||||
|
||||
ul {
|
||||
li {
|
||||
b { "Data: " }
|
||||
"A data chunk contains actual contents of files. The contents of one file do not have to be stored in the data chunk in a contiguous way, and can be spread out over multiple data chunks. The data chunk itself also contains no information about which files the data belongs to, all of that is stored in the metadata. Very small files (e.g. under 1 KiB) do not have their contents in here, but they are entirely stored in the metadata section."
|
||||
}
|
||||
|
||||
li {
|
||||
b { "Metadata: " }
|
||||
"This contains all information about file names, directories, checksums, used and free space, devices etc. The data here is organized in the eponymous \"B-Trees\", which is essentially a type of key-value store. Click on a metadata chunk to find out what is stored inside it."
|
||||
}
|
||||
|
||||
li {
|
||||
b { "System: " }
|
||||
"The system chunks are just additional metadata chunks, except that they are reserved for special kinds of metadata. Most importantly, the system chunks contain the mapping from logical to physical addresses, which is needed to find the other metadata chunks. The physical locations of the system chunks are stored in the superblock, thereby avoiding a chicken-and-egg problem while mounting the drive."
|
||||
}
|
||||
}
|
||||
|
||||
p {
|
||||
"The first column in the following table shows the physical address of a chunk and the second column shows its logical address. You can click on the metadata or system chunks to find out how they are laid out."
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
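The "System" bullet in the explanation above mentions the mapping from logical to physical addresses. As a rough, self-contained sketch of that translation (illustrative only; this is a simplified stand-in, not the crate's actual `AddressMap` implementation):

```rust
/// One chunk mapping: logical start, length, and physical start on a single device.
struct ChunkMapping { logical: u64, size: u64, physical: u64 }

/// Translate a logical address by finding the chunk that contains it and
/// applying the same offset within the chunk on the physical side.
fn to_phys(map: &[ChunkMapping], logical: u64) -> Option<u64> {
    map.iter()
        .find(|c| logical >= c.logical && logical < c.logical + c.size)
        .map(|c| c.physical + (logical - c.logical))
}

fn main() {
    let map = [ChunkMapping { logical: 0x100000, size: 0x4000000, physical: 0x500000 }];
    assert_eq!(to_phys(&map, 0x110000), Some(0x510000));
    assert_eq!(to_phys(&map, 0x50), None); // not covered by any chunk
}
```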
fn explanation_chunk() -> Markup {
|
||||
html! {
|
||||
p { "The metadata of a BTRFS file system is organized in the form of multiple " b { "B-trees" } ", which consist of " b { "nodes" } ". Each node is 16 KiB in size." }
|
||||
|
||||
p { "This page shows the contents of a single metadata chunk. Every little box is one node of 16 KiB. Together they add up to the total size of the chunk. We show 64 nodes per row, so each row is 1 MiB." }
|
||||
|
||||
p { "The colors indicate which B-tree the node belongs to. Most of them belong to the filesystem trees. There is one filesystem tree for every subvolume, but we draw them all in the same color here. The other trees are:" }
|
||||
|
||||
table.legend {
|
||||
tr {
|
||||
td { table.blocks { tr { td style="background: darkgreen;" {} } } }
|
||||
td { "root tree" }
|
||||
}
|
||||
|
||||
tr {
|
||||
td { table.blocks { tr { td style="background: orange;" {} } } }
|
||||
td { "extent tree" }
|
||||
}
|
||||
|
||||
tr {
|
||||
td { table.blocks { tr { td style="background: brown;" {} } } }
|
||||
td { "chunk tree" }
|
||||
}
|
||||
|
||||
tr {
|
||||
td { table.blocks { tr { td style="background: cyan;" {} } } }
|
||||
td { "device tree" }
|
||||
}
|
||||
|
||||
tr {
|
||||
td { table.blocks { tr { td style="background: magenta;" {} } } }
|
||||
td { "checksum tree" }
|
||||
}
|
||||
|
||||
tr {
|
||||
td { table.blocks { tr { td style="background: yellow;" {} } } }
|
||||
td { "uuid tree" }
|
||||
}
|
||||
|
||||
tr {
|
||||
td { table.blocks { tr { td style="background: #00ff00;" {} } } }
|
||||
td { "free space tree" }
|
||||
}
|
||||
|
||||
tr {
|
||||
td { table.blocks { tr { td style="background: red;" {} } } }
|
||||
td { "other trees" }
|
||||
}
|
||||
|
||||
tr {
|
||||
td { table.blocks { tr { td style="background: black;" {} } } }
|
||||
td { "filesystem trees" }
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -1,4 +1,6 @@
|
||||
use std::str::FromStr;
|
||||
use std::{
|
||||
str::FromStr,
|
||||
};
|
||||
use rouille::{Request, Response};
|
||||
use crate::{
|
||||
btrfs_structs::{ItemType, Item, Key, ZERO_KEY, LAST_KEY},
|
||||
@@ -18,41 +20,45 @@ enum TreeDisplayMode {
|
||||
|
||||
|
||||
fn http_tree_internal(tree: &Tree, tree_id: u64, mode: TreeDisplayMode) -> Response {
|
||||
let mut items: Vec<Item>;
|
||||
let mut items: Vec<(Item, u64)>;
|
||||
let mut highlighted_key_id: Option<u64> = None;
|
||||
|
||||
match mode {
|
||||
TreeDisplayMode::Highlight(key_id, before, after) => {
|
||||
let key = Key {key_id, key_type: ItemType::Invalid, key_offset: 0 };
|
||||
items = tree.range(..key).rev().take(before).collect();
|
||||
items = tree.range_with_node_addr(..key).rev().take(before).collect();
|
||||
items.reverse();
|
||||
items.extend(tree.range(key..).take(after));
|
||||
items.extend(tree.range_with_node_addr(key..).take(after));
|
||||
highlighted_key_id = Some(key_id);
|
||||
},
|
||||
TreeDisplayMode::From(key, num_lines) => {
|
||||
items = tree.range(key..).take(num_lines).collect();
|
||||
items = tree.range_with_node_addr(key..).take(num_lines).collect();
|
||||
if items.len() < num_lines {
|
||||
items.reverse();
|
||||
items.extend(tree.range(..key).rev().take(num_lines - items.len()));
|
||||
items.extend(tree.range_with_node_addr(..key).rev().take(num_lines - items.len()));
|
||||
items.reverse();
|
||||
}
|
||||
},
|
||||
TreeDisplayMode::To(key, num_lines) => {
|
||||
items = tree.range(..key).rev().take(num_lines).collect();
|
||||
items = tree.range_with_node_addr(..key).rev().take(num_lines).collect();
|
||||
items.reverse();
|
||||
if items.len() < num_lines {
|
||||
items.extend(tree.range(key..).take(num_lines - items.len()));
|
||||
items.extend(tree.range_with_node_addr(key..).take(num_lines - items.len()));
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
let data_slice = |item: &Item, node_addr: u64| -> &[u8] {
|
||||
tree.reader.get_raw_data(node_addr, item.range.0, item.range.1).unwrap()
|
||||
};
|
||||
|
||||
let table_result = TableResult {
|
||||
tree_id,
|
||||
tree_desc: root_key_desc(tree_id).map(|x|x.to_string()),
|
||||
key_id: highlighted_key_id,
|
||||
items: items.iter().map(|it|(it,&[] as &[u8])).collect(),
|
||||
first_key: items.first().map(|it|it.key).unwrap_or(LAST_KEY),
|
||||
last_key: items.last().map(|it|it.key).unwrap_or(ZERO_KEY),
|
||||
items: items.iter().map(|(it, addr)|(it, *addr, data_slice(it, *addr))).collect(),
|
||||
first_key: items.first().map(|x|x.0.key).unwrap_or(LAST_KEY),
|
||||
last_key: items.last().map(|x|x.0.key).unwrap_or(ZERO_KEY),
|
||||
};
|
||||
|
||||
Response::html(render_table(table_result))
|
||||
|
||||
@@ -8,6 +8,7 @@ pub mod http_tree;
|
||||
pub mod render_common;
|
||||
pub mod render_tree;
|
||||
pub mod main_error;
|
||||
pub mod http_chunk;
|
||||
|
||||
#[cfg(test)]
|
||||
mod test;
|
||||
|
||||
@@ -2,6 +2,7 @@ use std::{
|
||||
collections::HashMap,
|
||||
sync::Arc,
|
||||
cell::RefCell,
|
||||
time::Instant,
|
||||
};
|
||||
|
||||
use crate::btrfs_structs::{Node, ParseError, ParseBin};
|
||||
@@ -29,16 +30,25 @@ impl<'a> NodeReader<'a> {
|
||||
return Ok(Arc::clone(node))
|
||||
}
|
||||
|
||||
println!("Reading node at {:X}", addr);
|
||||
let start_time = Instant::now();
|
||||
|
||||
let node_data = self.addr_map.node_at_log(self.image, addr)?;
|
||||
let node = Arc::new(Node::parse(node_data)?);
|
||||
|
||||
self.cache.borrow_mut().insert(addr, Arc::clone(&node));
|
||||
|
||||
let t = Instant::now().duration_since(start_time);
|
||||
|
||||
println!("Read node {:X} in {:?}", addr, t);
|
||||
|
||||
Ok(node)
|
||||
}
|
||||
|
||||
pub fn get_raw_data(&self, addr: u64, start: u32, end: u32) -> Result<&'a [u8], ParseError> {
|
||||
let node_data = self.addr_map.node_at_log(self.image, addr)?;
|
||||
Ok(&node_data[start as usize .. end as usize])
|
||||
}
|
||||
|
||||
pub fn addr_map(&self) -> &AddressMap {
|
||||
&self.addr_map
|
||||
}
|
||||
|
||||
@@ -1,5 +1,6 @@
|
||||
use maud::Render;
|
||||
use maud::{html, DOCTYPE, Markup, Render};
|
||||
use std::fmt::{Debug, UpperHex};
|
||||
use std::sync::OnceLock;
|
||||
|
||||
pub struct DebugRender<T>(pub T);
|
||||
|
||||
@@ -36,3 +37,28 @@ pub fn size_name(x: u64) -> String {
|
||||
format!("{} EiB", x / (1<<60))
|
||||
}
|
||||
}
|
||||
|
||||
pub fn render_page(title: &str, content: Markup) -> Markup {
|
||||
html! {
|
||||
(DOCTYPE)
|
||||
head {
|
||||
link rel="stylesheet" href={(http_path()) "/style.css"};
|
||||
title {
|
||||
(title)
|
||||
}
|
||||
}
|
||||
body {
|
||||
(content)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
static HTTP_PATH: OnceLock<String> = OnceLock::new();
|
||||
|
||||
pub fn http_path() -> &'static str {
|
||||
HTTP_PATH.get().expect("HTTP_PATH should have been initialized before usage.")
|
||||
}
|
||||
|
||||
pub fn http_path_set(path: String) {
|
||||
HTTP_PATH.set(path).expect("HTTP_PATH can only be set once.");
|
||||
}
|
||||
|
||||
@@ -1,20 +1,20 @@
|
||||
use crate::btrfs_structs::{Item, Key, ItemType, Value, ExtentDataBody};
|
||||
use crate::render_common::{Hex, size_name};
|
||||
use crate::render_common::{Hex, size_name, http_path};
|
||||
use maud::{Markup, html, DOCTYPE, PreEscaped};
|
||||
use std::ffi::CStr;
|
||||
|
||||
#[derive(Debug)]
|
||||
pub struct TableResult<'a> {
|
||||
pub tree_id: u64,
|
||||
pub tree_desc: Option<String>,
|
||||
pub key_id: Option<u64>,
|
||||
pub items: Vec<(&'a Item, &'a [u8])>,
|
||||
pub items: Vec<(&'a Item, u64, &'a [u8])>, // item, node addr, data
|
||||
pub first_key: Key,
|
||||
pub last_key: Key,
|
||||
}
|
||||
|
||||
pub fn render_table(table: TableResult) -> Markup {
|
||||
|
||||
let header: String = if let Some(desc) = table.tree_desc {
|
||||
let header = if let Some(desc) = table.tree_desc {
|
||||
format!("Tree {} ({})", table.tree_id, desc)
|
||||
} else {
|
||||
format!("Tree {}", table.tree_id)
|
||||
@@ -22,19 +22,22 @@ pub fn render_table(table: TableResult) -> Markup {
|
||||
|
||||
let key_input_value = table.key_id.map_or(String::new(), |x| format!("{:X}", x));
|
||||
|
||||
let first_key_url = format!("/tree/{}",
|
||||
table.tree_id);
|
||||
let prev_key_url = format!("/tree/{}/to/{:016X}-{:02X}-{:016X}",
|
||||
let first_key_url = format!("{}/tree/{}",
|
||||
http_path(), table.tree_id);
|
||||
let prev_key_url = format!("{}/tree/{}/to/{:016X}-{:02X}-{:016X}",
|
||||
http_path(),
|
||||
table.tree_id,
|
||||
table.first_key.key_id,
|
||||
u8::from(table.first_key.key_type),
|
||||
table.first_key.key_offset);
|
||||
let next_key_url = format!("/tree/{}/from/{:016X}-{:02X}-{:016X}",
|
||||
let next_key_url = format!("{}/tree/{}/from/{:016X}-{:02X}-{:016X}",
|
||||
http_path(),
|
||||
table.tree_id,
|
||||
table.last_key.key_id,
|
||||
u8::from(table.last_key.key_type),
|
||||
table.first_key.key_offset);
|
||||
let last_key_url = format!("/tree/{}/to/{:016X}-{:02X}-{:016X}",
|
||||
let last_key_url = format!("{}/tree/{}/to/{:016X}-{:02X}-{:016X}",
|
||||
http_path(),
|
||||
table.tree_id,
|
||||
u64::wrapping_sub(0,1),
|
||||
u8::wrapping_sub(0,1),
|
||||
@@ -42,12 +45,13 @@ pub fn render_table(table: TableResult) -> Markup {
|
||||
|
||||
let mut rows: Vec<Markup> = Vec::new();
|
||||
|
||||
for &(it, _it_data) in table.items.iter() {
|
||||
for &(it, node_addr, it_data) in table.items.iter() {
|
||||
let highlighted = if table.key_id.filter(|x|*x == it.key.key_id).is_some() { "highlight" } else { "" };
|
||||
let value_string = item_value_string(table.tree_id, it);
|
||||
let details_string = item_details_string(table.tree_id, it);
|
||||
let raw_string = format!("{:#?}", &it.value);
|
||||
let id_desc = row_id_desc(it.key, table.tree_id);
|
||||
let hex_data: String = it_data.iter().map(|x|format!("{:02X} ", x)).collect();
|
||||
|
||||
rows.push(html! {
|
||||
details.item.(highlighted) {
|
||||
@@ -64,6 +68,9 @@ pub fn render_table(table: TableResult) -> Markup {
|
||||
span.itemvalue.(key_type_class(it.key)) {
|
||||
(&value_string)
|
||||
}
|
||||
span.nodeaddr {
|
||||
(Hex(node_addr))
|
||||
}
|
||||
}
|
||||
|
||||
div.details {
|
||||
@@ -77,6 +84,15 @@ pub fn render_table(table: TableResult) -> Markup {
|
||||
(&raw_string)
|
||||
}
|
||||
}
|
||||
|
||||
details {
|
||||
summary {
|
||||
"show hex data"
|
||||
}
|
||||
pre {
|
||||
(&hex_data)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
});
|
||||
@@ -86,20 +102,25 @@ pub fn render_table(table: TableResult) -> Markup {
|
||||
html! {
|
||||
(DOCTYPE)
|
||||
head {
|
||||
link rel="stylesheet" href="/style.css";
|
||||
link rel="stylesheet" href={(http_path()) "/style.css"};
|
||||
}
|
||||
body {
|
||||
h1 {
|
||||
(header)
|
||||
}
|
||||
|
||||
details {
|
||||
summary { "Explanation" }
|
||||
(explanation_tree())
|
||||
}
|
||||
|
||||
@if table.tree_id != 1 {
|
||||
a href="/tree/1" {
|
||||
a href={(http_path()) "/tree/1"} {
|
||||
"go back to root tree"
|
||||
}
|
||||
}
|
||||
|
||||
form method="get" action={"/tree/" (table.tree_id)} {
|
||||
form method="get" action={(http_path()) "/tree/" (table.tree_id)} {
|
||||
input type="text" name="key" value=(key_input_value);
|
||||
input type="submit" value="Search";
|
||||
}
|
||||
@@ -115,6 +136,26 @@ pub fn render_table(table: TableResult) -> Markup {
|
||||
}
|
||||
}
|
||||
|
||||
fn explanation_tree() -> Markup {
|
||||
html! {
|
||||
p {
|
||||
"This page shows the content of a tree. It is essentially a list of items, each of which consist of a key and a value."
|
||||
}
|
||||
|
||||
p {
|
||||
"The key is shown in the boxes on the left. It is a triple of a 64-bit id, an 8-bit type, and a 64-bit offset. What each of them means depends on the tree we're in. You can search for a key id by using the search field below."
|
||||
}
|
||||
|
||||
p {
|
||||
"The value is summarized to the right of the key. To see the value in more detail, unfold the key by clicking on it."
|
||||
}
|
||||
|
||||
p {
|
||||
"Finally, to the very right, we have the logical address of the metadata node which the item is stored in."
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
fn key_type_class(key: Key) -> &'static str {
|
||||
match key.key_type {
|
||||
ItemType::Inode => "inode",
|
||||
@@ -133,7 +174,7 @@ fn row_id_desc(key: Key, tree_id: u64) -> (Markup, Markup, Markup) {
|
||||
let x = format!("{:X}", key.key_id);
|
||||
let y = format!("{:?} ({:02X})", key.key_type, u8::from(key.key_type));
|
||||
let z = if key.key_type == ItemType::RootRef || key.key_type == ItemType::Ref {
|
||||
format!("<a href=\"/tree/{}/{:X}\">{:X}</a>", tree_id, key.key_offset, key.key_offset)
|
||||
format!("<a href=\"{}/tree/{}/{:X}\">{:X}</a>", http_path(), tree_id, key.key_offset, key.key_offset)
|
||||
} else {
|
||||
format!("{:X}", key.key_offset)
|
||||
};
|
||||
@@ -143,7 +184,7 @@ fn row_id_desc(key: Key, tree_id: u64) -> (Markup, Markup, Markup) {
|
||||
fn item_value_string(tree_id: u64, item: &Item) -> Markup {
|
||||
match &item.value {
|
||||
Value::Root(_) => {
|
||||
html! { a href={"/tree/" (item.key.key_id)} { "go to tree " (item.key.key_id) } }
|
||||
html! { a href={(http_path()) "/tree/" (item.key.key_id)} { "go to tree " (item.key.key_id) } }
|
||||
},
|
||||
Value::Dir(dir_item) | Value::DirIndex(dir_item) => {
|
||||
let name = format!("{:?}", &dir_item.name);
|
||||
@@ -151,8 +192,15 @@ fn item_value_string(tree_id: u64, item: &Item) -> Markup {
|
||||
html! {
|
||||
(name)
|
||||
" @ "
|
||||
a href=(format!("/tree/{tree_id}/{id:x}")) {
|
||||
(Hex(id))
|
||||
@if dir_item.location.key_type == ItemType::Root {
|
||||
a href=(format!("{}/tree/{id}", http_path())) {
|
||||
"subvolume "
|
||||
(Hex(id))
|
||||
}
|
||||
} @else {
|
||||
a href=(format!("{}/tree/{tree_id}/{id:x}", http_path())) {
|
||||
(Hex(id))
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
@@ -180,12 +228,17 @@ fn item_value_string(tree_id: u64, item: &Item) -> Markup {
|
||||
ExtentDataBody::External(ext_extent) =>
|
||||
PreEscaped(format!("external, length {}", size_name(ext_extent.num_bytes))),
|
||||
},
|
||||
Value::Ref(ref_item) =>
|
||||
html! { (format!("{:?}", &ref_item.name)) },
|
||||
Value::RootRef(ref_item) =>
|
||||
html! { (format!("{:?}", &ref_item.name)) },
|
||||
Value::Ref(ref_item) => {
|
||||
let names: Vec<&CStr> = ref_item.iter().map(|x|x.name.as_ref()).collect();
|
||||
html! { (format!("{:?}", &names)) }
|
||||
|
||||
},
|
||||
Value::RootRef(ref_item) => {
|
||||
let names: Vec<&CStr> = ref_item.iter().map(|x|x.name.as_ref()).collect();
|
||||
html! { (format!("{:?}", &names)) }
|
||||
},
|
||||
Value::Extent(extent_item) =>
|
||||
PreEscaped(format!("flags: {}, block_refs: {:?}", extent_item.flags, extent_item.block_refs)),
|
||||
PreEscaped(format!("flags: {}, block_refs: {:X?}", extent_item.flags, extent_item.block_refs)),
|
||||
Value::BlockGroup(blockgroup_item) =>
|
||||
PreEscaped(format!("{} used", size_name(blockgroup_item.used))),
|
||||
Value::DevExtent(dev_extent_item) =>
|
||||
@@ -245,13 +298,14 @@ fn item_details_string(_tree_id: u64, item: &Item) -> Markup {
|
||||
},
|
||||
Value::Ref(ref_item) => {
|
||||
html! { table { tbody {
|
||||
tr { td { "name" } td { (format!("{:?}", ref_item.name)) } }
|
||||
tr { td { "index" } td { (ref_item.index) } }
|
||||
tr { td { "name" } td { (format!("{:?}", ref_item[0].name)) } }
|
||||
tr { td { "index" } td { (ref_item[0].index) } }
|
||||
}}}
|
||||
},
|
||||
Value::Dir(dir_item) | Value::DirIndex(dir_item) => {
|
||||
html! { table { tbody {
|
||||
tr { td { "name" } td { (format!("{:?}", dir_item.name)) } }
|
||||
tr { td { "target key" } td { (format!("{:X} {:?} {:X}", dir_item.location.key_id, dir_item.location.key_type, dir_item.location.key_offset)) } }
|
||||
}}}
|
||||
},
|
||||
Value::Root(root_item) => {
|
||||
@@ -278,9 +332,9 @@ fn item_details_string(_tree_id: u64, item: &Item) -> Markup {
|
||||
},
|
||||
Value::RootRef(root_ref_item) => {
|
||||
html! { table { tbody {
|
||||
tr { td { "name" } td { (format!("{:?}", root_ref_item.name)) } }
|
||||
tr { td { "directory" } td { (root_ref_item.directory) } }
|
||||
tr { td { "index" } td { (root_ref_item.index) } }
|
||||
tr { td { "name" } td { (format!("{:?}", root_ref_item[0].name)) } }
|
||||
tr { td { "directory" } td { (root_ref_item[0].directory) } }
|
||||
tr { td { "index" } td { (root_ref_item[0].index) } }
|
||||
}}}
|
||||
},
|
||||
_ => {
|
||||
|
||||
@@ -1,46 +1,41 @@
|
||||
use std::{
|
||||
collections::HashMap, env, fs::{File, OpenOptions}, iter,
|
||||
env, fs::OpenOptions, ops::Deref, include_str,
|
||||
};
|
||||
use memmap2::MmapOptions;
|
||||
use rouille::{Request, Response, router};
|
||||
use btrfs_explorer::{
|
||||
btrfs_structs::{TreeID, Value::Extent, Value::BlockGroup, NODE_SIZE, ItemType},
|
||||
btrfs_lookup::Tree,
|
||||
addrmap::AddressMap,
|
||||
main_error::MainError,
|
||||
};
|
||||
use rouille::{Response, router};
|
||||
use btrfs_explorer::main_error::MainError;
|
||||
use btrfs_explorer::render_common::http_path_set;
|
||||
|
||||
const COLORS: &[&str] = &["#e6194b", "#3cb44b", "#ffe119", "#4363d8", "#f58231", "#911eb4", "#46f0f0", "#f032e6", "#bcf60c", "#fabebe", "#008080", "#e6beff", "#9a6324", "#fffac8", "#800000", "#aaffc3", "#808000", "#ffd8b1", "#000075", "#808080", "#000000"];
|
||||
const CSS_FILE: &'static str = include_str!("style.css");
|
||||
|
||||
fn main() -> Result<(), MainError> {
|
||||
let filename = env::args().skip(1).next().ok_or("Argument required")?;
|
||||
let args: Vec<String> = env::args().collect();
|
||||
|
||||
/*
|
||||
let file = OpenOptions::new().read(true).open(filename)?;
|
||||
let image = unsafe { Mmap::map(&file)? };
|
||||
*/
|
||||
if args.len() < 2 {
|
||||
return Err("Argument required".into());
|
||||
}
|
||||
|
||||
let filename: &str = args[1].as_ref();
|
||||
let sockaddr: &str = args.get(2)
|
||||
.map_or("localhost:8080", Deref::deref);
|
||||
let http_path: String = args.get(3)
|
||||
.map_or(String::new(), Clone::clone);
|
||||
http_path_set(http_path);
|
||||
|
||||
let file = OpenOptions::new().read(true).open(filename)?;
|
||||
let image = unsafe { MmapOptions::new().len(493921239040usize).map(&file)? };
|
||||
let image = unsafe { MmapOptions::new().map(&file)? };
|
||||
|
||||
// return Ok(());
|
||||
rouille::start_server(sockaddr, move |request| {
|
||||
println!("Request: {}", request.url());
|
||||
|
||||
/*
|
||||
let mystery_addr = 0x2f_2251_c000;
|
||||
let addr_map = AddressMap::new(&image)?;
|
||||
let mystery_addr_phys = addr_map.to_phys(mystery_addr).unwrap() as usize;
|
||||
let mystery_node = Node::parse(&image[mystery_addr_phys .. ])?;
|
||||
|
||||
println!("{:#x?}", &mystery_node);
|
||||
*/
|
||||
|
||||
rouille::start_server("127.0.0.1:8080", move |request| {
|
||||
router!(
|
||||
request,
|
||||
(GET) ["/"] =>
|
||||
http_main_boxes(&image, request),
|
||||
btrfs_explorer::http_chunk::http_allchunks(&image, request).unwrap(),
|
||||
(GET) ["/root"] =>
|
||||
btrfs_explorer::http_tree::http_root(&image, None, request),
|
||||
(GET) ["/chunk/{offset}", offset: String] =>
|
||||
btrfs_explorer::http_chunk::http_chunk(&image, &offset, request).unwrap(),
|
||||
(GET) ["/tree/{tree}", tree: String] =>
|
||||
btrfs_explorer::http_tree::http_tree(&image, &tree, None, request.get_param("key").as_deref(), request).unwrap(),
|
||||
(GET) ["/tree/{tree}/{key}", tree: String, key: String] =>
|
||||
@@ -50,224 +45,9 @@ fn main() -> Result<(), MainError> {
|
||||
(GET) ["/tree/{tree}/{method}/{key}", tree: String, method: String, key: String] =>
|
||||
btrfs_explorer::http_tree::http_tree(&image, &tree, Some(&method), Some(&key), request).unwrap(),
|
||||
(GET) ["/favicon.ico"] => Response::empty_404(),
|
||||
(GET) ["/style.css"] => Response::from_file("text/css", File::open("style.css").unwrap()),
|
||||
(GET) ["/htmx.min.js"] => Response::from_file("text/css", File::open("htmx.min.js").unwrap()),
|
||||
(GET) ["/style.css"] => Response::from_data("text/css", CSS_FILE)
|
||||
.with_additional_header("Cache-Control", "max-age=3600"),
|
||||
_ => Response::empty_404(),
|
||||
)
|
||||
});
|
||||
}
|
||||
|
||||
static CIRCLE_IMAGE: &str =
|
||||
"data:image/png;base64,\
|
||||
iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAP0lEQVQY02NgoBn4//+//P///yf9\
|
||||
////DRRP+v//vzw2hZP+Y4JJ2BS+waLwDUyeiVinIStchkV+GfmeoRoAAJqLWnEf4UboAAAAAElF\
|
||||
TkSuQmCC";
|
||||
|
||||
static EXPLANATION_TEXT: &str = "\
|
||||
<h3>Chunks</h3>
|
||||
<p>On the highest level, btrfs splits the disk into <b>chunks</b> (also called <b>block groups</b>). They can have different sizes, with 1GiB being typical in a large file system. Each chunk can either contain data or metadata.</p>

<p>Here we look at the metadata chunks. They contain the B-trees, which btrfs gets its name from. They are key-value stores for different kinds of information. For example, the filesystem tree stores which files and directories are in the filesystem, and the extent tree stores which areas of the disk are in use. Each B-tree consists of a number of 16KiB <b>nodes</b>, here symbolized by colorful boxes, with the color indicating which tree the node belongs to. Most of the nodes are <b>leaves</b>, which contain the actual key-value pairs. The others are <b>interior nodes</b>, and we indicate them with a little white circle. They are needed to find the leaf a key is stored in.</p>";
|
||||
|
||||
fn http_main_boxes(image: &[u8], _req: &Request) -> Response {
|
||||
let mut treecolors: HashMap<u64, &str> = HashMap::new();
|
||||
|
||||
let mut result = String::new();
|
||||
|
||||
let explanation_tablerowformat = |c: &str, t: &str| format!(
|
||||
"<tr>\
|
||||
<td><table><tr><td style=\"height:10px;width:10px;padding:0;background:{};\"></td></tr></table></td>\
|
||||
<td><table><tr><td style=\"height:10px;width:10px;padding:0;background:{};\"><img src=\"{}\" /></td></tr></table></td>\
|
||||
<td>{}</td>\
|
||||
</tr>\n",
|
||||
c, c, CIRCLE_IMAGE, t);
|
||||
let explanation_tablerowformat_leafonly = |c,t| format!(
|
||||
"<tr>\
|
||||
<td><table><tr><td style=\"height:10px;width:10px;padding:0;background:{};\"></td></tr></table></td>\
|
||||
<td></td>\
|
||||
<td>{}</td>\
|
||||
</tr>\n",
|
||||
c, t);
|
||||
|
||||
let cellformat = |c| format!(
|
||||
"<td style=\"height:10px;width:10px;padding:0;background:{};\"></td>\n",
|
||||
c);
|
||||
let cellformat_higher = |c,_| format!(
|
||||
"<td style=\"height:10px;width:10px;padding:0;background:{}\"><img src=\"{}\" /></td>\n",
|
||||
c, CIRCLE_IMAGE);
|
||||
|
||||
result.push_str(&"<details>\n<summary>What am I seeing here?</summary>");
|
||||
result.push_str(EXPLANATION_TEXT);
|
||||
|
||||
// tree explanations
|
||||
result.push_str(&"<table style=\"margin: 0 auto;\">\n");
|
||||
result.push_str(&explanation_tablerowformat_leafonly("lightgrey", "unused or outdated node"));
|
||||
treecolors.insert(1, COLORS[treecolors.len() % COLORS.len()]);
|
||||
result.push_str(&explanation_tablerowformat(treecolors[&1], "root tree"));
|
||||
|
||||
treecolors.insert(3, COLORS[treecolors.len() % COLORS.len()]);
|
||||
result.push_str(&explanation_tablerowformat(treecolors[&3], "chunk tree"));
|
||||
|
||||
let roots = Tree::root(image).unwrap();
|
||||
for item in roots.iter() {
|
||||
if item.key.key_type == ItemType::Root {
|
||||
let treedesc: String = match &item.key.key_id {
|
||||
1 => format!("root tree"),
|
||||
2 => format!("extent tree"),
|
||||
3 => format!("chunk tree"),
|
||||
4 => format!("device tree"),
|
||||
5 => format!("filesystem tree"),
|
||||
6 => format!("root directory"),
|
||||
7 => format!("checksum tree"),
|
||||
8 => format!("quota tree"),
|
||||
9 => format!("UUID tree"),
|
||||
10 => format!("free space tree"),
|
||||
11 => format!("block group tree"),
|
||||
0xffff_ffff_ffff_fff7 => format!("data reloc tree"),
|
||||
x @ 0x100 ..= 0xffff_ffff_ffff_feff => format!("file tree, id = {}", x),
|
||||
x => format!("other tree, id = {}", x),
|
||||
};
|
||||
|
||||
treecolors.insert(item.key.key_id, COLORS[treecolors.len() % COLORS.len()]);
|
||||
result.push_str(&explanation_tablerowformat(
|
||||
treecolors[&item.key.key_id],
|
||||
&treedesc
|
||||
));
|
||||
}
|
||||
}
|
||||
result.push_str(&"</table>\n");
|
||||
result.push_str(&"</details>\n");
|
||||
|
||||
let extent_tree = Tree::new(&image, TreeID::Extent).unwrap();
|
||||
let mut extent_tree_iterator = extent_tree.iter();
|
||||
|
||||
// current_blockgroup == None: haven't encountered a blockgroup yet
|
||||
// metadata_items == None: current blockgroup is not metadata or system
|
||||
let mut current_blockgroup = None;
|
||||
let mut metadata_items: Option<Vec<Option<(u64, u64)>>> = None;
|
||||
|
||||
let metadata_blockgroups = iter::from_fn(|| {
|
||||
while let Some(item) = extent_tree_iterator.next() {
|
||||
// println!("Got key: {:x?}", &item.key);
|
||||
match &item.value {
|
||||
BlockGroup(bg) => {
|
||||
println!("{:x?}", item.key);
|
||||
let result = (current_blockgroup.take(), metadata_items.take());
|
||||
|
||||
let nodes_in_blockgroup = item.key.key_offset as usize / NODE_SIZE;
|
||||
if bg.flags & 0x01 == 0 {
|
||||
metadata_items = Some(vec![None; nodes_in_blockgroup]);
|
||||
} else {
|
||||
metadata_items = None;
|
||||
}
|
||||
current_blockgroup = Some(item);
|
||||
|
||||
if let (Some(bg), met) = result {
|
||||
return Some((bg, met));
|
||||
}
|
||||
},
|
||||
Extent(e) => {
|
||||
if let Some(bg_item) = &current_blockgroup {
|
||||
if let Some(met) = &mut metadata_items {
|
||||
let bg_start = bg_item.key.key_id;
|
||||
let node_addr = item.key.key_id;
|
||||
let tree_id = e.block_refs.iter().filter(|&(t,_)|t == &ItemType::TreeBlockRef).count() as u64;
|
||||
let index = (node_addr - bg_start) as usize / NODE_SIZE;
|
||||
if index < met.len() {
|
||||
met[index] = Some((tree_id, item.key.key_offset));
|
||||
} else {
|
||||
println!("Warning: extent out of block group range: {:x?}", &item.key);
|
||||
}
|
||||
}
|
||||
} else {
|
||||
println!("Warning: extent without matching block group: {:x?}", &item.key);
|
||||
}
|
||||
},
|
||||
_ => {},//panic!("Unexpected item in extent tree: {:x?}", item.key)
|
||||
}
|
||||
}
|
||||
|
||||
let result = (current_blockgroup.take(), metadata_items.take());
|
||||
if let (Some(bg), met) = result {
|
||||
return Some((bg, met));
|
||||
} else {
|
||||
return None;
|
||||
}
|
||||
});
|
||||
|
||||
let mut last_key = 0;
|
||||
|
||||
// colorful table
|
||||
for (bg, nodes) in metadata_blockgroups {
|
||||
if bg.key.key_id < last_key {
|
||||
println!("Error: going backwards!");
|
||||
break;
|
||||
} else {
|
||||
last_key = bg.key.key_id;
|
||||
}
|
||||
|
||||
let bg_value = match &bg.value {
|
||||
BlockGroup(bgv) => bgv,
|
||||
_ => panic!("Expected BlockGroup value"),
|
||||
};
|
||||
|
||||
// header
|
||||
let addr_map: &AddressMap = extent_tree.reader.as_ref().addr_map();
|
||||
result.push_str(
|
||||
&format!(
|
||||
"<h3 style=\"text-align: center;\">{:x} - {:x} ({}, {})</h3><p>Physical: {}</p>\n",
|
||||
bg.key.key_id,
|
||||
bg.key.key_id + bg.key.key_offset,
|
||||
match bg.key.key_offset {
|
||||
x if x <= (1<<11) => format!("{} B", x),
|
||||
x if x <= (1<<21) => format!("{} KiB", x as f64 / (1u64<<10) as f64),
|
||||
x if x <= (1<<31) => format!("{} MiB", x as f64 / (1u64<<20) as f64),
|
||||
x if x <= (1<<41) => format!("{} GiB", x as f64 / (1u64<<30) as f64),
|
||||
x if x <= (1<<51) => format!("{} TiB", x as f64 / (1u64<<40) as f64),
|
||||
x @ _ => format!("{} PiB", x as f64 / (1u64<<50) as f64),
|
||||
},
|
||||
match bg_value.flags & 0x07 {
|
||||
0x01 => "Data",
|
||||
0x02 => "System",
|
||||
0x04 => "Metadata",
|
||||
_ => "???",
|
||||
},
|
||||
match addr_map.0.binary_search_by_key(&bg.key.key_id, |x|x.0) {
|
||||
Ok(i) => format!("{:x?}", &addr_map.0[i].2),
|
||||
_ => String::from(""),
|
||||
}
|
||||
)
|
||||
);
|
||||
|
||||
if let Some(nodes) = nodes {
|
||||
result.push_str("<table style=\"margin: 0 auto;\">\n<tr>\n");
|
||||
|
||||
for (i, &n) in nodes.iter().enumerate() {
|
||||
if i % 64 == 0 && i != 0 {
|
||||
result.push_str("</tr>\n<tr>\n");
|
||||
}
|
||||
|
||||
if let Some((tid, level)) = n {
|
||||
let color: Option<&str> = treecolors.get(&tid).map(|x|*x);
|
||||
let color = color.unwrap_or_else(|| {
|
||||
println!("Unknown color for id: {}", &tid);
|
||||
let color: &str = COLORS[treecolors.len() % COLORS.len()];
|
||||
treecolors.insert(tid, color);
|
||||
color
|
||||
});
|
||||
if level == 0 {
|
||||
result.push_str(&cellformat(color));
|
||||
} else {
|
||||
result.push_str(&cellformat_higher(color, level));
|
||||
}
|
||||
} else {
|
||||
result.push_str(&cellformat("lightgrey"));
|
||||
}
|
||||
}
|
||||
|
||||
result.push_str("</tr>\n</table>\n");
|
||||
}
|
||||
}
|
||||
|
||||
Response::html(result)
|
||||
}
|
||||
|
||||
@@ -2,43 +2,6 @@ body {
|
||||
padding: 0.2em 2em;
|
||||
}
|
||||
|
||||
table {
|
||||
width: 100%;
|
||||
}
|
||||
|
||||
table td {
|
||||
padding: 0.1em 0.2em;
|
||||
}
|
||||
|
||||
table th {
|
||||
text-align: left;
|
||||
border-bottom: 1px solid #ccc;
|
||||
}
|
||||
|
||||
table > tbody > tr.view {
|
||||
cursor: pointer;
|
||||
}
|
||||
|
||||
table > tbody > tr.even {
|
||||
background: #eee;
|
||||
}
|
||||
|
||||
table > tbody > tr.highlight {
|
||||
background: #0cc;
|
||||
}
|
||||
|
||||
table > tbody > tr.fold {
|
||||
display: none;
|
||||
}
|
||||
|
||||
table > tbody > tr.fold > td {
|
||||
padding-left: 1em;
|
||||
}
|
||||
|
||||
table > tbody > tr.fold.open {
|
||||
display: table-row;
|
||||
}
|
||||
|
||||
div.nav {
|
||||
padding: 5px;
|
||||
background-color: #dde;
|
||||
@@ -52,7 +15,7 @@ a.nav {
|
||||
text-decoration: none;
|
||||
}
|
||||
|
||||
details.item {
|
||||
.item {
|
||||
padding: 3px;
|
||||
background-color: #dde;
|
||||
border-radius: 4px;
|
||||
@@ -76,7 +39,7 @@ details .details {
|
||||
border-radius: 4px;
|
||||
}
|
||||
|
||||
details .itemvalue {
|
||||
.itemvalue {
|
||||
color: black;
|
||||
padding: 3px;
|
||||
margin: 1px 2px;
|
||||
@@ -84,7 +47,7 @@ details .itemvalue {
|
||||
display: inline-block;
|
||||
}
|
||||
|
||||
details .key {
|
||||
.key {
|
||||
color: white;
|
||||
background-color: #999;
|
||||
border-radius: 4px;
|
||||
@@ -95,7 +58,7 @@ details .key {
|
||||
font-size: 12pt;
|
||||
}
|
||||
|
||||
details .key a {
|
||||
.key a {
|
||||
color: white;
|
||||
}
|
||||
|
||||
@@ -133,6 +96,17 @@ span.key_type.root {
|
||||
background-color: #111;
|
||||
}
|
||||
|
||||
span.nodeaddr {
|
||||
color: white;
|
||||
background-color: #999;
|
||||
text-align: right;
|
||||
border-radius: 4px;
|
||||
padding: 3px;
|
||||
float: right;
|
||||
font-family: monospace;
|
||||
font-size: 12pt;
|
||||
}
|
||||
|
||||
.details table {
|
||||
border-collapse: collapse;
|
||||
margin-bottom: 10px;
|
||||
@@ -151,3 +125,23 @@ span.key_type.root {
|
||||
padding: 0;
|
||||
margin: 5px 0;
|
||||
}
|
||||
|
||||
pre {
|
||||
white-space: pre-wrap;
|
||||
}
|
||||
|
||||
table.blocks {
|
||||
margin: 0 auto;
|
||||
border-collapse: separate;
|
||||
border-spacing: 2px;
|
||||
}
|
||||
|
||||
table.blocks td {
|
||||
height: 10px;
|
||||
width: 10px;
|
||||
padding: 0;
|
||||
}
|
||||
|
||||
table.legend {
|
||||
margin: 0 auto;
|
||||
}
|
||||