Compare commits
1 commit: main ... 372d461113
README.md (73 lines)
@@ -1,73 +0,0 @@
# iavlread - extract data from the state of a Cosmos SDK blockchain

This is a simple tool to read state data from a snapshot of a [Cosmos SDK] blockchain. The state is stored in the `application.db` leveldb database, in the form of an [IAVL tree]. This tool walks the IAVL tree to get values from the state at a desired block height.

### Installation

It's really just two Python files: `iavlread` is the CLI tool, and `iavltree.py` is the library which actually handles the data structure. The latter can also be used as a package in Python code (`store.ipynb` shows a few examples). To use `iavlread`, it's easiest to clone this repository and make a symlink to `iavlread` from `$HOME/bin` or another directory which is in `$PATH`.

### Usage

I'm just going to show a couple of examples on a snapshot of the [Allora] testnet, which can be downloaded e.g. [here](https://www.imperator.co/services/chain-services/testnets/allora). That's what I've been using this for, although I'd assume it works the same for other Cosmos SDK blockchains.

We assume that we're inside the snapshot, so the application db is at the path `data/application.db` from the current working directory. Otherwise, it can be specified with `-d path_to_database`.

To get the **maximum/minimum block height** contained in the snapshot:

    $ iavlread max_height s/k:emissions/
    5224814
    $ iavlread min_height s/k:emissions/
    5221360

Here `s/k:emissions/` is the prefix of a specific IAVL tree, the one corresponding to the emissions module. Other prefixes are `s/k:mint/`, `s/k:bank/`, `s/k:staking/` and `s/k:acc/`. They should generally produce the same min/max height, but that is not guaranteed.
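Internally, `max_height` does not scan all heights: it binary-searches over the height bits, probing whether a root node is stored at each candidate height. A self-contained sketch of that search against an abstract `has_root` predicate (the real tool probes leveldb keys via `next_key`; the function and predicate names here are mine):

```python
def find_max_height(has_root, bits: int = 63) -> int:
    """Largest h with has_root(h) True, assuming has_root is True on
    an interval [0, hmax] and False above it."""
    h = 0
    for i in range(bits - 1, -1, -1):
        if has_root(h + (1 << i)):  # can this bit be set while staying inside?
            h += 1 << i
    return h
```

With the snapshot above, this corresponds to `find_max_height(lambda h: h <= 5224814)`, which needs only 63 probes instead of millions.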

To **count** the items in a keeper, use the `count` subcommand:

    $ iavlread count s/k:emissions/
    19070982

We can also count only the items whose key starts with a specific prefix (i.e. the items belonging to one field of the keeper):

    $ iavlread count s/k:emissions/ 62
    9714538

Here 62 corresponds to the `latestOneOutInfererInfererNetworkRegrets` field, which has type `Map[Triple[uint64, string, string], TimestampedValue]`. So the keys of the individual items in the map are quadruples consisting of 62, an integer, and two strings.

To **iterate through all items** whose key starts with 62, use the `iterate` command:

    $ iavlread -kQss -vpb,2=float iterate s/k:emissions/ 62 | head -n5
    ([62, 1, 'allo1004k7wqa4spns0wlct6mvxnmfysae07w2p75xc', 'allo1004k7wqa4spns0wlct6mvxnmfysae07w2p75xc'], [(1, 1655577), (2, -2.538390795862665)])
    ([62, 1, 'allo1004k7wqa4spns0wlct6mvxnmfysae07w2p75xc', 'allo123s56yuz7dkyh54gs7gstulrfzwj3d6ucwlwk2'], [(1, 1655577), (2, -2.9735713547935196)])
    ([62, 1, 'allo1004k7wqa4spns0wlct6mvxnmfysae07w2p75xc', 'allo136dfvvuhazgyqyls2f68qzr5l3p07uhnk9mmk0'], [(1, 1655577), (2, -2.4482888033843784)])
    ([62, 1, 'allo1004k7wqa4spns0wlct6mvxnmfysae07w2p75xc', 'allo15fvg8gk63u8ydf3znaaxrrukgwulezjz4azf68'], [(1, 1655577), (2, -2.4736050386063284)])
    ([62, 1, 'allo1004k7wqa4spns0wlct6mvxnmfysae07w2p75xc', 'allo16k9af4uu7vpm5hy68t6u62jylc5frv747mz5lw'], [(1, 1655577), (2, -2.4686292734004254)])

The option `-kQss` specifies the key format (a 64-bit integer `Q` followed by two strings `s`; see below), and `-vpb,2=float` specifies the value format: a protocol buffer whose field number 2 is a `float`.
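For context, the `pb` value formats rely on the protobuf wire format: a message is a sequence of tag varints (field number plus wire type) followed by payloads. A minimal, self-contained sketch of such a parser (not the tool's actual implementation; it only handles varint and length-delimited fields):

```python
def read_uvarint(data: bytes, offset: int = 0):
    """Decode a little-endian base-128 varint; return (value, next_offset)."""
    result, shift = 0, 0
    while True:
        b = data[offset]
        result |= (b & 0x7f) << shift
        offset += 1
        if b < 0x80:
            return result, offset
        shift += 7

def parse_fields(data: bytes):
    """Return (field_number, payload) pairs for wire types 0 (varint) and 2 (bytes)."""
    fields, offset = [], 0
    while offset < len(data):
        tag, offset = read_uvarint(data, offset)
        field, wiretype = tag >> 3, tag & 7
        if wiretype == 0:            # varint
            value, offset = read_uvarint(data, offset)
        elif wiretype == 2:          # length-delimited
            length, offset = read_uvarint(data, offset)
            value = data[offset:offset + length]
            offset += length
        else:
            raise ValueError(f'unsupported wire type {wiretype}')
        fields.append((field, value))
    return fields
```

A format like `pb,2=float` then just means: after this split, convert field 2's payload with `float()`.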

If we want to restrict to keys which start with 62 followed by 60 (i.e. get one-out regrets for topic 60 only):

    $ iavlread -kQss count s/k:emissions/ 62 60
    10650
    $ iavlread -kQss -vpb,2=float iterate s/k:emissions/ 62 60 | head -n5
    ([62, 60, 'allo107zfy4xrp5plt0jmutaj9feer02v6r30amku78', 'allo107zfy4xrp5plt0jmutaj9feer02v6r30amku78'], [(1, 5070658), (2, 0.004613475335253211)])
    ([62, 60, 'allo107zfy4xrp5plt0jmutaj9feer02v6r30amku78', 'allo10q5h8afpwjh5x3vazxzwwkxfhpzfys9wxxw3q8'], [(1, 4785658), (2, 0.18922985130794248)])
    ([62, 60, 'allo107zfy4xrp5plt0jmutaj9feer02v6r30amku78', 'allo10vgaxk57dkk0fd255r3gxn5quwzxaqq95m2cz2'], [(1, 4837018), (2, 0.6616848257922094)])
    ([62, 60, 'allo107zfy4xrp5plt0jmutaj9feer02v6r30amku78', 'allo10w45atfjsh9q6vsk7mx74xh0pvuf8r42vnmt5p'], [(1, 4802038), (2, 0.010614832362991345)])
    ([62, 60, 'allo107zfy4xrp5plt0jmutaj9feer02v6r30amku78', 'allo12hnfamvwumkfm6dnc42rt8q3yevyqpuzkdtwat'], [(1, 4471438), (2, 0.2668976650081499)])

Or if we know the full key, we can use `get` instead of `iterate`. E.g. to get a balance from the bank module:

    $ iavlread -kbs -vint get s/k:bank/ 2 570DD38DC5BAF3112A7C83A420ED399A8E59C5FC uallo
    350
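The `-kbs` key format means a length-prefixed byte string (given as hex) followed by a plain string. Based on the `encode_key` and `write_uvarint` code elsewhere in this compare, the key bytes are assembled roughly like this (a sketch; treating the leading field number as a single byte is my assumption):

```python
def write_uvarint(x: int) -> bytes:
    """Little-endian base-128 varint, used as the length prefix."""
    out = bytearray()
    while True:
        out.append(0x80 | (x & 0x7f))
        x >>= 7
        if x == 0:
            out[-1] &= 0x7f  # clear the continuation bit on the last byte
            return bytes(out)

def encode_balance_key(field: int, addr_hex: str, denom: str) -> bytes:
    """Hypothetical helper: field byte + length-prefixed address bytes + denom."""
    addr = bytes.fromhex(addr_hex)
    return bytes([field]) + write_uvarint(len(addr)) + addr + denom.encode('utf-8')
```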

We can also get the value at a different block height (by default, the max block height is used):

    $ iavlread -H 5221360 -kbs -vint get s/k:bank/ 2 570DD38DC5BAF3112A7C83A420ED399A8E59C5FC uallo
    10

Or we can get all past updates to the value (that are contained in the snapshot):

    $ iavlread -kbs -vint history s/k:bank/ 2 570DD38DC5BAF3112A7C83A420ED399A8E59C5FC uallo
    5224814 150
    5224813 30
    5224812 60
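The `history` walk exploits the fact that each stored value is tagged with the version at which it was written: a lookup at height h returns the value written at the greatest version v ≤ h, and the next probe continues at v - 1. A dict-backed sketch of the idea (the real iterator reads the tree on disk):

```python
def history(writes: dict, max_height: int, min_height: int = 0):
    """Yield (version, value) pairs, newest first. `writes` maps the
    version at which a value was written to that value."""
    h = max_height
    while h >= min_height:
        stored = [v for v in writes if v <= h]
        if not stored:
            return
        v = max(stored)
        yield v, writes[v]
        h = v - 1  # continue just below the version we found
```

The three output lines above correspond to `writes = {5224812: 60, 5224813: 30, 5224814: 150}`.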

iavlread (184 lines)
@@ -3,20 +3,25 @@ import argparse
 import plyvel
 import iavltree
 import json
-import struct

 def decode_protobuf(subformats: dict, format_prefix: str, data: bytes):
     result = []
-    for (k,v) in iavltree.parse_pb(data):
+    for (k,v) in iavltree.parse_struct(data):
         idx = f'{format_prefix}.{k}'
         if idx in subformats:
             f = subformats[idx]
-            if f == 'pb':
+            if f == 'str':
+                decoded_value = v.decode('utf-8')
+            elif f == 'int':
+                decoded_value = int(v)
+            elif f == 'float':
+                decoded_value = float(v)
+            elif f == 'proto':
                 decoded_value = decode_protobuf(subformats, idx, v)
-            elif f == 'pbdict':
+            elif f == 'protodict':
                 decoded_value = dict(decode_protobuf(subformats, idx, v))
             else:
-                decoded_value = decode_output(f, v)
+                decoded_value = v
         else:
             decoded_value = v
         result.append((k, decoded_value))
@@ -29,127 +34,84 @@ def decode_output(format: str, data: bytes) -> str:
         return int(data)
     elif format == 'float':
         return float(data)
-    elif format == 'u64':
-        return struct.unpack('>Q', data[:8])[0]
-    elif format == 'u32':
-        return struct.unpack('>I', data[:4])[0]
-    elif format == 'u16':
-        return struct.unpack('>H', data[:2])[0]
-    elif format == 'u8':
-        return struct.unpack('>B', data[:1])[0]
-    elif format == 'i64':
-        return struct.unpack('>q', data[:8])[0]
-    elif format == 'i32':
-        return struct.unpack('>i', data[:4])[0]
-    elif format == 'i16':
-        return struct.unpack('>h', data[:2])[0]
-    elif format == 'i8':
-        return struct.unpack('>b', data[:1])[0]
-    elif format == 'i64ord':
-        return struct.unpack('>Q', data[:8])[0] - (1<<63)
-    elif format == 'i32ord':
-        return struct.unpack('>I', data[:4])[0] - (1<<31)
-    elif format == 'i16ord':
-        return struct.unpack('>H', data[:2])[0] - (1<<15)
-    elif format == 'i8ord':
-        return struct.unpack('>B', data[:1])[0] - (1<<7)
-    elif format.startswith('pbdict'):
+    elif format.startswith('protodict'):
         subformats = {'.' + id: subformat for x in format.split(',')[1:] for id, subformat in (x.split('='),)}
         return dict(decode_protobuf(subformats, '', data))
-    elif format.startswith('pb'):
+    elif format.startswith('proto'):
         subformats = {'.' + id: subformat for x in format.split(',')[1:] for id, subformat in (x.split('='),)}
         return decode_protobuf(subformats, '', data)
     else:
         return data

-def get_args():
-    parser = argparse.ArgumentParser(description="Read the IAVL tree in a cosmos snapshot")
-    parser.add_argument('-d', '--database', help='Path to database (application.db folder)')
-    parser.add_argument('-H', '--height', type=int, help='Block height')
-    # parser.add_argument('-j', '--json', action='store_true', help='JSON output')
-
-    def add_key_cmd(subparsers, cmd, help, optional: bool):
-        subp = subparsers.add_parser(cmd, help = help)
-        subp.add_argument('-k', '--keyformat', help='Key format for maps (e.g. Qss)')
-        subp.add_argument('-v', '--valueformat', help='Value format')
-        subp.add_argument('prefix', help = 'Prefix (e.g. "s/k:emissions/")')
-        subp.add_argument('key', nargs='*' if optional else '+', help = 'Key parts')
-        return subp
-
-    subparsers = parser.add_subparsers(required=True, dest='cmd')
-    p_max_height = subparsers.add_parser('max_height', help = 'Get the max block height in the snapshot')
-    p_max_height.add_argument('prefix', help = 'Prefix (e.g. "s/k:emissions/")')
-    p_min_height = subparsers.add_parser('min_height', help = 'Get the min block height in the snapshot')
-    p_min_height.add_argument('prefix', help = 'Prefix (e.g. "s/k:emissions/")')
-    add_key_cmd(subparsers, 'get', 'Retrieve a single item', False)
-    add_key_cmd(subparsers, 'history', 'Get all stored past values of the item', False)
-    add_key_cmd(subparsers, 'count', 'Count number of items with a prefix', True)
-    add_key_cmd(subparsers, 'iterate', 'Iterate over items with some prefix', True)
-    add_key_cmd(subparsers, 'iterate_keys', 'Iterate over items with some prefix, output keys only', True)
-    add_key_cmd(subparsers, 'iterate_values', 'Iterate over items with some prefix, output values only', True)
-
-    return parser.parse_args()
-
-def run(args):
-    dbpath = args.database if args.database is not None else 'data/application.db'
-    keyformat = args.keyformat if hasattr(args, 'keyformat') and args.keyformat is not None else ''
-    valueformat = args.valueformat if hasattr(args, 'valueformat') and args.valueformat is not None else 'b'
-
-    if args.cmd == 'max_height' or args.cmd == 'min_height' or args.key is None or len(args.key) == 0:
-        key = None
-    else:
-        if len(args.key) > len(keyformat) + 1:
-            raise Exception('Too many key elements for keyformat')
-        key = [int(args.key[0])]
-        for f, k in zip(keyformat, args.key[1:]):
-            if f in ['i', 'I', 'q', 'Q']:
-                key.append(int(k))
-            else:
-                key.append(k)
-
-    with plyvel.DB(dbpath) as db:
-        if args.height is None or args.cmd == 'max_height':
-            height = iavltree.max_height(db, args.prefix.encode('utf-8'))
-        else:
-            height = args.height
-
-        if args.cmd == 'max_height':
-            print(height)
-        elif args.cmd == 'min_height':
-            hmin, _ = iavltree.min_max_height(db, args.prefix.encode('utf-8'))
-            print(hmin)
-        elif args.cmd == 'get':
-            result = iavltree.get(db, args.prefix, height, keyformat, key)
-            if result is not None:
-                print(decode_output(valueformat, result))
-        elif args.cmd == 'history':
-            it = iavltree.history(db, args.prefix, keyformat, key, height)
-            try:
-                for h, v in it:
-                    print(f'{h} {decode_output(valueformat, v)}')
-            except BrokenPipeError:
-                pass
-        elif args.cmd == 'count':
-            result = iavltree.count(db, args.prefix, height, keyformat, key = key)
-            print(result)
-        elif args.cmd == 'iterate' or args.cmd == 'iterate_keys' or args.cmd == 'iterate_values':
-            it = iavltree.iterate(db, args.prefix, height, keyformat, key = key)
-            try:
-                for k, v in it:
-                    if args.cmd == 'iterate_keys':
-                        print(' '.join([str(x) for x in k]))
-                    elif args.cmd == 'iterate_values':
-                        print(decode_output(valueformat,v))
-                    else:
-                        print(' '.join([str(x) for x in k]), decode_output(valueformat, v))
-            except BrokenPipeError:
-                pass
-
-if __name__ == '__main__':
-    args = get_args()
-    run(args)
+parser = argparse.ArgumentParser(description="Read the IAVL tree in a cosmos snapshot")
+parser.add_argument('-d', '--database', help='Path to database (application.db folder)')
+parser.add_argument('-H', '--height', type=int, help='Block height')
+parser.add_argument('-k', '--keyformat', help='Key format for maps (e.g. Qss)')
+parser.add_argument('-v', '--valueformat', help='Value format')
+
+subparsers = parser.add_subparsers(required=True, dest='cmd')
+p_max_height = subparsers.add_parser('max_height', help = 'Get the max block height in the snapshot')
+p_get = subparsers.add_parser('get', help = 'Retrieve a single item')
+p_get.add_argument('prefix', help = 'Prefix (e.g. "s/k:emissions/")')
+p_get.add_argument('key', nargs='+', help = 'Key parts')
+p_count = subparsers.add_parser('count', help = 'Count number of items with a prefix')
+p_count.add_argument('prefix', help = 'Prefix (e.g. "s/k:emissions/")')
+p_count.add_argument('key', nargs='*', help = 'Key parts')
+p_iterate = subparsers.add_parser('iterate', help = 'Iterate over items with some prefix')
+p_iterate.add_argument('prefix', help = 'Prefix (e.g. "s/k:emissions/")')
+p_iterate.add_argument('key', nargs='*', help = 'Key parts')
+p_iterate = subparsers.add_parser('iterate_keys', help = 'Iterate over items with some prefix, output keys only')
+p_iterate.add_argument('prefix', help = 'Prefix (e.g. "s/k:emissions/")')
+p_iterate.add_argument('key', nargs='*', help = 'Key parts')
+p_iterate = subparsers.add_parser('iterate_values', help = 'Iterate over items with some prefix, output values only')
+p_iterate.add_argument('prefix', help = 'Prefix (e.g. "s/k:emissions/")')
+p_iterate.add_argument('key', nargs='*', help = 'Key parts')
+
+args = parser.parse_args()
+
+dbpath = args.database if args.database is not None else 'data/application.db'
+keyformat = args.keyformat if args.keyformat is not None else ''
+valueformat = args.valueformat if args.valueformat is not None else 'b'
+
+if args.key is None or len(args.key) == 0:
+    key = None
+else:
+    if len(args.key) > len(keyformat) + 1:
+        raise Exception('Too many key elements for keyformat')
+    key = [int(args.key[0])]
+    for f, k in zip(keyformat, args.key[1:]):
+        if f in ['i', 'I', 'q', 'Q']:
+            key.append(int(k))
+        else:
+            key.append(k)
+
+with plyvel.DB(dbpath) as db:
+    if args.height is None or args.cmd == 'max_height':
+        height = iavltree.max_height(db)
+    else:
+        height = args.height
+
+    if args.cmd == 'max_height':
+        print(height)
+    elif args.cmd == 'get':
+        result = iavltree.walk_disk(db, args.prefix, height, keyformat, key)
+
+        print(decode_output(valueformat, result))
+    elif args.cmd == 'count':
+        result = iavltree.count(db, args.prefix, height, keyformat, key = key)
+        print(result)
+    elif args.cmd == 'iterate' or args.cmd == 'iterate_keys' or args.cmd == 'iterate_values':
+        it = iavltree.iterate(db, args.prefix, height, keyformat, key = key)
+        try:
+            for k, v in it:
+                if args.cmd == 'iterate_keys':
+                    print(k)
+                elif args.cmd == 'iterate_values':
+                    print(decode_output(valueformat,v))
+                else:
+                    print((k, decode_output(valueformat, v)))
+        except BrokenPipeError:
+            pass
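The `*ord` value formats handled on one side of this compare decode signed integers stored with a half-range offset, so that unsigned big-endian byte order matches numeric order. A self-contained sketch of the scheme for 64-bit values:

```python
import struct

def encode_i64ord(x: int) -> bytes:
    # map [-2**63, 2**63) onto [0, 2**64) so byte-wise order equals numeric order
    return struct.pack('>Q', x + (1 << 63))

def decode_i64ord(data: bytes) -> int:
    return struct.unpack('>Q', data[:8])[0] - (1 << 63)
```

This is why a tree that compares raw key bytes still keeps, say, -7 sorted before 3.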

iavltree.py (154 lines)
@@ -1,8 +1,9 @@
 import plyvel
 import struct
+import numpy as np

 # functions for reading IAVL tree
-def read_varint(x: bytes, offset: int = 0) -> tuple[int, int]:
+def read_varint(x: bytes, offset: int = 0) -> int:
     result = 0
     factor = 1
@@ -14,7 +15,7 @@ def read_varint(x: bytes, offset: int = 0) -> tuple[int, int]:
             return result // 2, offset+i+1
         factor *= 128

-def read_uvarint(x: bytes, offset: int = 0) -> tuple[int, int]:
+def read_uvarint(x: bytes, offset: int = 0) -> int:
     result = 0
     factor = 1
@@ -26,20 +27,6 @@ def read_uvarint(x: bytes, offset: int = 0) -> tuple[int, int]:
             return result, offset+i+1
         factor *= 128

-def write_uvarint(x: int) -> list[int]:
-    if x < 0:
-        raise Exception('write_uvarint only supports positive integers')
-    elif x == 0:
-        return [0]
-
-    result = []
-    while x > 0:
-        result.append(128 + x % 128)
-        x //= 128
-    result[-1] -= 128
-    return result
-

 def read_key(key: bytes) -> tuple[int, int] | None:
     if not key.startswith(b's'):
         return None
@@ -86,36 +73,52 @@ def read_node(node: bytes) -> tuple[int, int, bytes, tuple[int, int], tuple[int,
     return (height, length, key, (left_version, left_nonce), (right_version, right_nonce))

-def get_raw(db: plyvel.DB, prefix: bytes, version: int, searchkey: bytes) -> None | tuple[int, int, int, int, bytes, bytes]:
-    key = db.get(prefix + write_key((version, 1)))
-    if key is None:
+def walk(tree, version, searchkey):
+    if (version, 1) not in tree:
         return None
-    node = read_node(key)
+
+    node = tree[(version, 1)]
+    if len(node) == 2: # root copy?
+        node = tree[node]
+
+    while node[0] > 0:
+        nodekey = node[2]
+        if searchkey < nodekey:
+            next = node[3]
+        else:
+            next = node[4]
+        node = tree[next]
+
+    return node[3]
+
+def walk_disk_raw(db, prefix: bytes, version: int, searchkey: bytes) -> None | bytes:
+    root = db.get(prefix + write_key((version, 1)))
+    if root is None:
+        return None
+
+    node = read_node(root)
     if len(node) == 2: # root copy?
-        key = node
-        node = read_node(db.get(prefix + write_key(key)))
+        node = read_node(db.get(prefix + write_key(node)))

     while node[0] > 0:
         # print(node)
         nodekey = node[2]
         if searchkey < nodekey:
-            key = node[3]
+            next = node[3]
         else:
-            key = node[4]
-        node = read_node(db.get(prefix + write_key(key)))
+            next = node[4]
+        node = read_node(db.get(prefix + write_key(next)))

     if node[2] == searchkey:
-        (version, nonce) = key
-        (height, length, itemkey, value) = node
-        return (version, nonce, height, length, itemkey, value)
+        return node[3]
     else:
         return None

-def get_next_key_raw(db: plyvel.DB, prefix: bytes, version: int, searchkey: bytes) -> None | bytes:
+def walk_disk_next_key_raw(db, prefix: bytes, version: int, searchkey: bytes) -> None | bytes:
     root = db.get(prefix + write_key((version, 1)))
     if root is None:
         return None
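The descent in `walk`/`walk_disk_raw` is an ordinary binary search over inner nodes: an IAVL inner node's key is the smallest key of its right subtree, so the search goes left exactly when the target is smaller. A runnable sketch on an in-memory tree using the same node shape ((height, size, key, left, right) for inner nodes, (0, 1, key, value) for leaves; unlike the on-disk version, the tuples here hold child nodes directly rather than (version, nonce) references):

```python
def walk(node, searchkey):
    """Descend to the only leaf whose key could equal searchkey; return its value or None."""
    while node[0] > 0:                       # height > 0: inner node
        _, _, split_key, left, right = node
        node = left if searchkey < split_key else right
    return node[3] if node[2] == searchkey else None

# a tiny tree with leaves 'a', 'b', 'c'
leaf_a, leaf_b, leaf_c = (0, 1, b'a', 1), (0, 1, b'b', 2), (0, 1, b'c', 3)
tree = (2, 3, b'b', leaf_a, (1, 2, b'c', leaf_b, leaf_c))
```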
@@ -141,11 +144,10 @@ def get_next_key_raw(db: plyvel.DB, prefix: bytes, version: int, searchkey: byte
     return lowest_geq_key

-def get(db: plyvel.DB, prefix: str, version: int, format: str, searchkey: list) -> None | bytes:
-    x = get_raw(db, prefix.encode('utf-8'), version, encode_key(format, searchkey))
-    return x[5] if x is not None else None
+def walk_disk(db, prefix: str, version: int, format: str, searchkey: list) -> None | bytes:
+    return walk_disk_raw(db, prefix.encode('utf-8'), version, encode_key(format, searchkey))

-def parse_pb(data):
+def parse_struct(data):
     n = 0
     results = []
@@ -176,51 +178,26 @@ def next_key(db, k: bytes) -> bytes | None:
     finally:
         it.close()

-def max_height(db: plyvel.DB, prefix: bytes | str) -> int:
-    if isinstance(prefix, str):
-        prefix = prefix.encode('utf-8')
-
+def max_height(db) -> int:
     testnr = 1<<63
     for i in range(62, -1, -1):
-        n = next_key(db, prefix + b's' + struct.pack('>Q', testnr) + struct.pack('>I', 1))
+        prefix = b's/k:emissions/s'
+        n = next_key(db, prefix + struct.pack('>Q', testnr))
         if n is not None and n.startswith(prefix):
-            # print(f'{testnr} is low')
+            # print(f'{testnr:16x} is low')
             testnr += 1 << i
         else:
-            # print(f'{testnr} is high')
+            # print(f'{testnr:16x} is high')
             testnr -= 1 << i

-    n = db.get(prefix + struct.pack('>Q', testnr))
+    n = next_key(db, prefix + struct.pack('>Q', testnr))
     if n is not None and n.startswith(prefix):
         return testnr
     else:
         return testnr - 1

-def min_max_height(db: plyvel.DB, prefix: bytes) -> tuple[int, int]:
-    if isinstance(prefix, str):
-        prefix = prefix.encode('utf-8')
-
-    hmax = max_height(db, prefix)
-
-    h = 1<<hmax.bit_length()
-    inc = h>>1
-
-    for _ in range(25):
-        if h > hmax:
-            highenough = True
-        else:
-            root = db.get(prefix + write_key((h, 1)))
-            highenough = root is not None
-        # print(h, highenough, inc)
-        (h, inc) = (h + (1 - 2*highenough) * inc, inc >> 1)
-        if not highenough:
-            h += 1
-
-    return (h, hmax)

 # encode and decode keys
 def encode_key(format: str, key: list) -> bytes:
     result_bytes = []
@@ -238,10 +215,6 @@ def encode_key(format: str, key: list) -> bytes:
             result_bytes += list(struct.pack('>Q', key[i+1]))
         elif f == 'q':
             result_bytes += list(struct.pack('>Q', key[i+1] + (1<<63)))
-        elif f == 'b':
-            data = list(bytes.fromhex(key[i+1]))
-            result_bytes += write_uvarint(len(data))
-            result_bytes += data

     return bytes(result_bytes)
@@ -269,11 +242,6 @@ def decode_key(format: str, key: bytes) -> list:
             v = struct.unpack('>Q', key[idx:idx+8])[0]
             result.append(v - (1<<63))
             idx += 8
-        elif f == 'b':
-            length, offset = read_uvarint(key[idx:])
-            data = key[idx+offset:idx+offset+length]
-            result.append(data.hex().upper())
-            idx += offset + length

     if idx < len(key):
         result.append(key[idx:])
@@ -282,7 +250,7 @@ def decode_key(format: str, key: bytes) -> list:

 # iteration
 class IAVLTreeIteratorRaw:
-    def __init__(self, db: plyvel.DB, prefix: bytes, version: int, start: bytes | None = None, end: bytes | None = None):
+    def __init__(self, db, prefix: bytes, version: int, start: bytes | None = None, end: bytes | None = None):
         self.db = db
         self.prefix = prefix
         self.version = version
@@ -359,7 +327,7 @@ class IAVLTreeIteratorRaw:
         return (node[2], node[3])

 class IAVLTreeIterator:
-    def __init__(self, db: plyvel.DB, prefix: bytes, version: int, format: str, start: bytes | None = None, end: bytes | None = None):
+    def __init__(self, db, prefix: bytes, version: int, format: str, start: bytes | None = None, end: bytes | None = None):
         self.format = format
         self.inner = IAVLTreeIteratorRaw(db, prefix, version, start, end)
@@ -382,7 +350,7 @@ def next_bs(x: bytes) -> bytes | None:
     return x_enc

-def iterate(db: plyvel.DB, prefix, version, format = '', key = None, start = None, end = None):
+def iterate(db, prefix, version, format = '', key = None, start = None, end = None):
     prefix_enc = prefix.encode('utf-8')

     if key is not None:
@@ -394,7 +362,7 @@ def iterate(db: plyvel.DB, prefix, version, format = '', key = None, start = Non
     return IAVLTreeIterator(db, prefix_enc, version, format, start = start_enc, end = end_enc)

-def count(db: plyvel.DB, prefix, version, format = '', key = None, start = None, end = None):
+def count(db, prefix, version, format = '', key = None, start = None, end = None):
     prefix_enc = prefix.encode('utf-8')

     if key is not None:
@@ -419,7 +387,7 @@ def count(db: plyvel.DB, prefix, version, format = '', key = None, start = None,
     return endidx - startidx

-def indexof_raw(db: plyvel.DB, prefix: bytes, version: int, key: bytes) -> int:
+def indexof_raw(db, prefix: bytes, version: int, key: bytes) -> int:
     """
     Find how many items come before `key` in the tree. If `key` doesn't exist, how many
     items come before the slot it would get inserted at
@@ -429,7 +397,7 @@ def indexof_raw(db: plyvel.DB, prefix: bytes, version: int, key: bytes) -> int:
         next(it)
     except StopIteration:
         # get root count
-        return it.stack[0][1][1]
+        return read_node(db.get(prefix + write_key(it.stack[0][0])))[1]

     keys = [p[1][3] for p, c in zip(it.stack, it.stack[1:]) if c[0] == p[1][4]]
     keys_encoded = [prefix + write_key(k) for k in keys]
@@ -437,31 +405,5 @@ def indexof_raw(db: plyvel.DB, prefix: bytes, version: int, key: bytes) -> int:
     return count

-def indexof(db: plyvel.DB, prefix: str, version: int, format: str, key: list) -> int:
+def indexof(db, prefix: str, version: int, format: str, key: list) -> int:
     return indexof_raw(db, prefix.encode('utf-8'), version, encode_key(format, key))
-
-class IAVLTreeHistoryIterator:
-    def __init__(self, db: plyvel.DB, prefix: bytes, key: bytes, max_height: int, min_height: int = 0):
-        self.db = db
-        self.prefix = prefix
-        self.key = key
-        self.height = max_height
-        self.min_height = min_height
-
-    def __iter__(self):
-        return self
-
-    def __next__(self) -> tuple[int, bytes]:
-        if self.height < self.min_height:
-            raise StopIteration
-        result = get_raw(self.db, self.prefix, self.height, self.key)
-        if result is None:
-            raise StopIteration
-        (h, _, _, _, _, v) = result
-        self.height = h - 1
-        return (h, v)
-
-def history(db: plyvel.DB, prefix: str, format: str, key: list, max_height: int, min_height: int = 0) -> IAVLTreeHistoryIterator:
-    prefix_enc = prefix.encode('utf-8')
-    key_enc = encode_key(format, key)
-    return IAVLTreeHistoryIterator(db, prefix_enc, key_enc, max_height, min_height)

store.ipynb (49 lines)
@@ -2,13 +2,14 @@
 "cells": [
  {
   "cell_type": "code",
-   "execution_count": 1,
+   "execution_count": 168,
   "metadata": {},
   "outputs": [],
   "source": [
    "import plyvel\n",
    "from itertools import islice\n",
-    "import iavltree"
+    "\n",
+    "%run -i read_tree.py"
   ]
  },
  {
@@ -17,8 +18,8 @@
   "metadata": {},
   "outputs": [],
   "source": [
-    "# db = plyvel.DB('../node/nodedir/data/application.db')\n",
-    "height = iavltree.max_height(db)\n",
+    "db = plyvel.DB('../node/nodedir/data/application.db')\n",
+    "height = max_height(db)\n",
    "height"
   ]
  },
@@ -28,7 +29,9 @@
   "metadata": {},
   "outputs": [],
   "source": [
-    "[k for k, v in iavltree.iterate(db, 's/k:mint/', height)]"
+    "it = iterate(db, 's/k:mint/', height)\n",
+    "[k for k, v in it]\n",
+    "it.inner.lookups"
   ]
  },
  {
@@ -37,7 +40,7 @@
   "metadata": {},
   "outputs": [],
   "source": [
-    "dict(iavltree.parse_pb(next(iavltree.iterate(db, 's/k:mint/', height, key = [138]))[1]))"
+    "dict(parse_struct(next(iterate(db, 's/k:mint/', height, key = [138]))[1]))"
   ]
  },
  {
@@ -46,10 +49,19 @@
   "metadata": {},
   "outputs": [],
   "source": [
-    "it = iavltree.iterate(db, 's/k:emissions/', height, key = [62, 64], format = 'Qss')\n",
-    "ooiiregrets = [(k[2],k[3],value[1],float(value[2])) for k,v in it for value in (dict(iavltree.parse_pb(v)),)]\n",
+    "it = iterate(db, 's/k:emissions/', height, key = [62, 64], format = 'Qss')\n",
+    "ooiiregrets = [(k[2],k[3],value[1],float(value[2])) for k,v in it for value in (dict(parse_struct(v)),)]\n",
    "\n",
-    "len(ooiiregrets), len(it.inner.lookups)"
+    "len(ooiiregrets), it.inner.lookups"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 181,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "keynames = {0: \"Params\", 1: \"TotalStake\", 2: \"TopicStake\", 3: \"Rewards\", 4: \"NextTopicId\", 5: \"Topics\", 6: \"TopicWorkers\", 7: \"TopicReputers\", 8: \"DelegatorStake\", 9: \"DelegateStakePlacement\", 10: \"TargetStake\", 11: \"Inferences\", 12: \"Forecasts\", 13: \"WorkerNodes\", 14: \"ReputerNodes\", 15: \"LatestInferencesTs\", 16: \"ActiveTopics\", 17: \"AllInferences\", 18: \"AllForecasts\", 19: \"AllLossBundles\", 20: \"StakeRemoval\", 21: \"StakeByReputerAndTopicId\", 22: \"DelegateStakeRemoval\", 23: \"AllTopicStakeSum\", 24: \"AddressTopics\", 24: \"WhitelistAdmins\", 25: \"ChurnableTopics\", 26: \"RewardableTopics\", 27: \"NetworkLossBundles\", 28: \"NetworkRegrets\", 29: \"StakeByReputerAndTopicId\", 30: \"ReputerScores\", 31: \"InferenceScores\", 32: \"ForecastScores\", 33: \"ReputerListeningCoefficient\", 34: \"InfererNetworkRegrets\", 35: \"ForecasterNetworkRegrets\", 36: \"OneInForecasterNetworkRegrets\", 37: \"OneInForecasterSelfNetworkRegrets\", 38: \"UnfulfilledWorkerNonces\", 39: \"UnfulfilledReputerNonces\", 40: \"FeeRevenueEpoch\", 41: \"TopicFeeRevenue\", 42: \"PreviousTopicWeight\", 43: \"PreviousReputerRewardFraction\", 44: \"PreviousInferenceRewardFraction\", 45: \"PreviousForecastRewardFraction\", 46: \"InfererScoreEmas\", 47: \"ForecasterScoreEmas\", 48: \"ReputerScoreEmas\", 49: \"TopicRewardNonce\", 50: \"DelegateRewardPerShare\", 51: \"PreviousPercentageRewardToStakedReputers\", 52: \"StakeRemovalsByBlock\", 53: \"DelegateStakeRemovalsByBlock\", 54: \"StakeRemovalsByActor\", 55: \"DelegateStakeRemovalsByActor\", 56: \"TopicLastWorkerCommit\", 57: \"TopicLastReputerCommit\", 58: \"TopicLastWorkerPayload\", 59: \"TopicLastReputerPayload\", 60: \"OpenWorkerWindows\", 61: \"LatestNaiveInfererNetworkRegrets\", 62: \"LatestOneOutInfererInfererNetworkRegrets\", 63: \"LatestOneOutInfererForecasterNetworkRegrets\", 64: \"LatestOneOutForecasterInfererNetworkRegrets\", 65: \"LatestOneOutForecasterForecasterNetworkRegrets\", 66: 
\"PreviousForecasterScoreRatio\", 67: \"LastDripBlock\", 68: \"TopicToNextPossibleChurningBlock\", 69: \"BlockToActiveTopics\", 70: \"BlockToLowestActiveTopicWeight\", 71: \"PreviousTopicQuantileInfererScoreEma\", 72: \"PreviousTopicQuantileForecasterScoreEma\", 73: \"PreviousTopicQuantileReputerScoreEma\", 74: \"CountInfererInclusionsInTopic\", 75: \"CountForecasterInclusionsInTopic\", 76: \"ActiveInferers\", 77: \"ActiveForecasters\", 78: \"ActiveReputers\", 79: \"LowestInfererScoreEma\", 80: \"LowestForecasterScoreEma\", 81: \"LowestReputerScoreEma\", 82: \"LossBundles\", 83: \"TotalSumPreviousTopicWeights\", 84: \"RewardCurrentBlockEmission\", 85: \"GlobalWhitelist\", 86: \"TopicCreatorWhitelist\", 87: \"TopicWorkerWhitelist\", 88: \"TopicReputerWhitelist\", 89: \"TopicWorkerWhitelistEnabled\", 90: \"TopicReputerWhitelistEnabled\", 91: \"LastMedianInferences\", 92: \"MadInferences\", 93: \"InitialInfererEmaScore\", 94: \"InitialForecasterEmaScore\", 95: \"InitialReputerEmaScore\", 96: \"GlobalWorkerWhitelist\", 97: \"GlobalReputerWhitelist\", 98: \"GlobalAdminWhitelist\", 99: \"LatestRegretStdNorm\", 100: \"LatestInfererWeights\", 101: \"LatestForecasterWeights\", 102: \"NetworkInferences\", 103: \"OutlierResistantNetworkInferences\", 104: \"MonthlyReputerRewards\", 105: \"MonthlyTopicRewards\",}"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -58,15 +70,15 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"import numpy as np\n",
|
|
||||||
"keynames = {0: \"Params\", 1: \"TotalStake\", 2: \"TopicStake\", 3: \"Rewards\", 4: \"NextTopicId\", 5: \"Topics\", 6: \"TopicWorkers\", 7: \"TopicReputers\", 8: \"DelegatorStake\", 9: \"DelegateStakePlacement\", 10: \"TargetStake\", 11: \"Inferences\", 12: \"Forecasts\", 13: \"WorkerNodes\", 14: \"ReputerNodes\", 15: \"LatestInferencesTs\", 16: \"ActiveTopics\", 17: \"AllInferences\", 18: \"AllForecasts\", 19: \"AllLossBundles\", 20: \"StakeRemoval\", 21: \"StakeByReputerAndTopicId\", 22: \"DelegateStakeRemoval\", 23: \"AllTopicStakeSum\", 24: \"AddressTopics\", 24: \"WhitelistAdmins\", 25: \"ChurnableTopics\", 26: \"RewardableTopics\", 27: \"NetworkLossBundles\", 28: \"NetworkRegrets\", 29: \"StakeByReputerAndTopicId\", 30: \"ReputerScores\", 31: \"InferenceScores\", 32: \"ForecastScores\", 33: \"ReputerListeningCoefficient\", 34: \"InfererNetworkRegrets\", 35: \"ForecasterNetworkRegrets\", 36: \"OneInForecasterNetworkRegrets\", 37: \"OneInForecasterSelfNetworkRegrets\", 38: \"UnfulfilledWorkerNonces\", 39: \"UnfulfilledReputerNonces\", 40: \"FeeRevenueEpoch\", 41: \"TopicFeeRevenue\", 42: \"PreviousTopicWeight\", 43: \"PreviousReputerRewardFraction\", 44: \"PreviousInferenceRewardFraction\", 45: \"PreviousForecastRewardFraction\", 46: \"InfererScoreEmas\", 47: \"ForecasterScoreEmas\", 48: \"ReputerScoreEmas\", 49: \"TopicRewardNonce\", 50: \"DelegateRewardPerShare\", 51: \"PreviousPercentageRewardToStakedReputers\", 52: \"StakeRemovalsByBlock\", 53: \"DelegateStakeRemovalsByBlock\", 54: \"StakeRemovalsByActor\", 55: \"DelegateStakeRemovalsByActor\", 56: \"TopicLastWorkerCommit\", 57: \"TopicLastReputerCommit\", 58: \"TopicLastWorkerPayload\", 59: \"TopicLastReputerPayload\", 60: \"OpenWorkerWindows\", 61: \"LatestNaiveInfererNetworkRegrets\", 62: \"LatestOneOutInfererInfererNetworkRegrets\", 63: \"LatestOneOutInfererForecasterNetworkRegrets\", 64: \"LatestOneOutForecasterInfererNetworkRegrets\", 65: \"LatestOneOutForecasterForecasterNetworkRegrets\", 66: 
\"PreviousForecasterScoreRatio\", 67: \"LastDripBlock\", 68: \"TopicToNextPossibleChurningBlock\", 69: \"BlockToActiveTopics\", 70: \"BlockToLowestActiveTopicWeight\", 71: \"PreviousTopicQuantileInfererScoreEma\", 72: \"PreviousTopicQuantileForecasterScoreEma\", 73: \"PreviousTopicQuantileReputerScoreEma\", 74: \"CountInfererInclusionsInTopic\", 75: \"CountForecasterInclusionsInTopic\", 76: \"ActiveInferers\", 77: \"ActiveForecasters\", 78: \"ActiveReputers\", 79: \"LowestInfererScoreEma\", 80: \"LowestForecasterScoreEma\", 81: \"LowestReputerScoreEma\", 82: \"LossBundles\", 83: \"TotalSumPreviousTopicWeights\", 84: \"RewardCurrentBlockEmission\", 85: \"GlobalWhitelist\", 86: \"TopicCreatorWhitelist\", 87: \"TopicWorkerWhitelist\", 88: \"TopicReputerWhitelist\", 89: \"TopicWorkerWhitelistEnabled\", 90: \"TopicReputerWhitelistEnabled\", 91: \"LastMedianInferences\", 92: \"MadInferences\", 93: \"InitialInfererEmaScore\", 94: \"InitialForecasterEmaScore\", 95: \"InitialReputerEmaScore\", 96: \"GlobalWorkerWhitelist\", 97: \"GlobalReputerWhitelist\", 98: \"GlobalAdminWhitelist\", 99: \"LatestRegretStdNorm\", 100: \"LatestInfererWeights\", 101: \"LatestForecasterWeights\", 102: \"NetworkInferences\", 103: \"OutlierResistantNetworkInferences\", 104: \"MonthlyReputerRewards\", 105: \"MonthlyTopicRewards\",}\n",
|
|
||||||
"lens = np.zeros(256, dtype = int)\n",
|
"lens = np.zeros(256, dtype = int)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"for field in range(255):\n",
|
"for field in range(255):\n",
|
||||||
" lens[field] = iavltree.count(db, 's/k:emissions/', height, key = [field])\n",
|
" lens[field] = count(db, 's/k:emissions/', height, key = [field])\n",
|
||||||
"\n",
|
"\n",
|
||||||
"order = np.lexsort((np.arange(256)[::-1], lens))[::-1]\n",
|
"order = np.lexsort((np.arange(256)[::-1], lens))[::-1]\n",
|
||||||
"\n",
|
"\n",
|
||||||
|
"print('Map lengths:')\n",
|
||||||
|
"\n",
|
||||||
"for i in range(len(order)):\n",
|
"for i in range(len(order)):\n",
|
||||||
" if lens[order[i]] == 0 and order[i] not in keynames:\n",
|
" if lens[order[i]] == 0 and order[i] not in keynames:\n",
|
||||||
" break\n",
|
" break\n",
|
||||||
@@ -110,19 +122,6 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"# found"
|
"# found"
|
||||||
]
|
]
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": 16,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"# allora testnet module addresses\n",
|
|
||||||
"# mod allorapendingrewards 54C6D62FF29ECFEE9A5F0366DEC0F9CB44C10BB4\n",
|
|
||||||
"# mod allorarewards F3CA54C42E5B7DC7CB2A347B21E77AC248D914D2\n",
|
|
||||||
"# mod allorastaking 3C19B4642DA1C2DBB7E44679FA48F72FD9A97E5E\n",
|
|
||||||
"# mod ecosystem 570DD38DC5BAF3112A7C83A420ED399A8E59C5FC"
|
|
||||||
]
|
|
||||||
}
|
}
|
||||||
],
|
],
|
||||||
"metadata": {
|
"metadata": {
|
||||||
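The notebook's field-survey cell sorts the 256 candidate key prefixes by entry count with `np.lexsort((np.arange(256)[::-1], lens))[::-1]`: `lexsort` sorts by its last key first (counts, ascending) and breaks ties with the reversed index array, so the final `[::-1]` flip yields descending counts with ties in ascending field-ID order. A minimal sketch of that trick (the four-element `lens` array here is made up for illustration):

```python
import numpy as np

# Hypothetical entry counts for four field IDs; IDs 1 and 2 are tied.
lens = np.array([0, 5, 5, 2])

# Primary key: counts ascending. Tie-break: reversed indices, so that after
# the final [::-1] flip, equal counts appear in ascending field-ID order.
order = np.lexsort((np.arange(len(lens))[::-1], lens))[::-1]
print(order.tolist())  # → [1, 2, 3, 0]
```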