Backend v2 #4

Open

wants to merge 134 commits into base: main
Conversation

@Aerilym Aerilym commented Nov 20, 2024

@Aerilym changed the title from Backend v2 to [WIP] Backend v2 on Nov 20, 2024
.gitmodules Outdated
@@ -0,0 +1,3 @@
```
[submodule "session-token-contracts"]
	path = session-token-contracts
	url = https://github.com/oxen-io/eth-sn-contracts.git
```

This can be updated to the session-foundation fork: https://github.com/session-foundation/session-token-contracts

README.md Outdated
```markdown
The backend is split into three parts:

- **Fetcher**: `fetcher.py` is the main server that handles all the fetching, processing, and main database writing.
- **Api**: `api.py` is the main API server that handles most requests.
```

This doesn't seem relevant anymore; the file doesn't exist.

@@ -0,0 +1,2 @@
```python
#!/usr/bin/env python3
import src.app_fetcher
```

I'd remove this file. I'm mindful of leaf files like these that are helper wrappers: `src.app_fetcher` is only ~6 lines of code, so we have a file that indirects to another file which runs the app, which then indirects to the actual code. We can reduce these layers by just dropping it:

```python
#!/usr/bin/env python3
from src import config
from src.staking.fetcher import App

app = App("fetcher")
app.run()
```

Having these leaf files is extra infrastructure that you have to ensure keeps working over time. I'd rather the README.md just tell me to run the fetcher from the root directory by calling `python3 src/app_fetcher.py`.

@@ -0,0 +1,6 @@
```python
#!/usr/bin/env python3
from src import config
```

Unused import.

```python
from src import config
from src.staking.fetcher import App

app = App("fetcher")
```

So this pattern

```python
if __name__ == "__main__":
    <run the app>
```

is meant to handle exactly what you're doing here: if you import `src/fetcher/app.py` it will not run the app, so you can use the fetcher as a library, but if you run `python3 src/fetcher/fetcher.py` directly, the branch executes.

In that sense I would encourage merging the app's entrypoint into the app file itself, for the same reason that reducing the number of leaf files cuts down on the number of files to juggle when there's no strong reason to keep them (and likewise for the other `app_<thing>.py` files you have).
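As a sketch of that suggestion (a hypothetical layout, not this PR's actual code; only the `App("fetcher")` usage is taken from the snippets above), merging the entrypoint into the app module could look like:

```python
# Hypothetical merged src/staking/fetcher.py: importable as a library,
# and also runnable directly as the fetcher entrypoint.
class App:
    def __init__(self, name: str):
        self.name = name

    def run(self) -> str:
        # Placeholder for the real fetch/process/write loop.
        return f"{self.name} running"


if __name__ == "__main__":
    # Executes only when invoked as `python3 src/staking/fetcher.py`,
    # not when another module does `from src.staking.fetcher import App`.
    App("fetcher").run()
```

With this layout the wrapper scripts disappear and the README can simply point at the module itself.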

:param db_writer: The db writer to use for interacting with the db.
:param abi_dir: The directory where ABI files are stored. Default is 'abis'.
"""
self.abi_dir = abi_dir

Hardcoding the ABI dir doesn't seem like the way to go, because changing it means inspecting internals and overwriting it, and once you overwrite it, `load_all_abis` changes behaviour.

Passing the ABI dir makes sense in the context of calling `load_all_abis`; it shouldn't be sticky and remain in memory. If you then update the DBs and call `load_all_abis`, you'd expect the class to go reload the ABIs, not cache them.

To make it explicit, I'd imagine the ABI manager has a function like `load_abi_from_path`, where you pass in an explicit path and it loads and overwrites the ABI in the manager if it exists. Then you have `load_abi_from_dir`, which loads all the `*.json` files and writes them in.

I can call them as many times as I want; they won't cache and will always override the in-memory ABIs, which is the most expected behaviour from a class like this. Locking in the ABI dir gets you into a situation where, if you want to change the directory it's looking at, you now have a misconfigured object: you need to reinstantiate it, run the constructor and load again, and then Python needs to discard the old ABI manager.

It's just very brittle and not a very flexibly designed class; it requires a lot of work, both programming-wise and CPU-wise, to do a simple operation like loading new ABIs from a given path.
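A minimal sketch of the stateless-loading design described above (the names `AbiManager`, `load_abi_from_path`, and `load_abi_from_dir` are the reviewer's hypothetical suggestions, not code from this PR):

```python
import json
from pathlib import Path


class AbiManager:
    """Holds parsed ABIs in memory; loading never caches a directory."""

    def __init__(self):
        self.abis = {}  # name (file stem) -> parsed ABI

    def load_abi_from_path(self, path):
        # Load a single ABI JSON file, overwriting any existing entry.
        path = Path(path)
        self.abis[path.stem] = json.loads(path.read_text())

    def load_abi_from_dir(self, abi_dir):
        # Load every *.json in the directory; safe to call repeatedly,
        # always overriding the in-memory copies.
        for path in sorted(Path(abi_dir).glob("*.json")):
            self.load_abi_from_path(path)
```

Because the directory is a call argument rather than constructor state, reloading from a different path never leaves a misconfigured object behind.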

```python
:return: BLS public key, in hex.
"""
pks = self.contract.functions.blsPubkey().call()
return "0x{:0128x}".format((pks[0] << 256) + pks[1])
```

Hmm... what's the shift for? Doesn't the contract just give you a tuple of two 64-byte keys, (pk0, pk1)?

```python
self.filters = filters

# Our JSON-RPC throttling parameters
# self.min_scan_chunk_size = 10  # 12 s/block = 120 seconds period
```

Duplicate line

```python
self.chunk_size_decrease = 0.5

# Factor how fast we increase chunk size if no results found
# self.chunk_size_increase = 2.0
```

Commented-out line.

```python
# Start low at 20; this value is set at the end of the scan so we can
# use the optimal chunk size in future scans
self.optimal_chunk_size = optimal_chunk_size

def scan_chunk(self, start_block, end_block) -> Tuple[int, list]:
```

Will need you to walk me through the event scanner at some point to make sure I understand this.
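For anyone else reading along, the adaptive chunking visible in the snippets above (shrink the chunk when the RPC call fails, grow it when a range has no events, and remember the final size as `optimal_chunk_size` for future scans) can be sketched roughly like this; the function and parameter names here are illustrative assumptions, not the PR's actual code:

```python
def scan(fetch_logs, start_block, end_block,
         chunk_size=20, min_chunk=10, max_chunk=10_000,
         decrease=0.5, increase=2.0):
    """Scan [start_block, end_block] for events with an adaptive chunk size.

    fetch_logs(start, end) returns a list of events, or raises on
    JSON-RPC errors (e.g. response too large / timeout).
    """
    events = []
    block = start_block
    while block <= end_block:
        end = min(block + chunk_size - 1, end_block)
        try:
            found = fetch_logs(block, end)
        except Exception:
            # Chunk too big for the RPC node: shrink and retry this range.
            chunk_size = max(min_chunk, int(chunk_size * decrease))
            continue
        events.extend(found)
        if not found:
            # Quiet range: grow the chunk to scan faster.
            chunk_size = min(max_chunk, int(chunk_size * increase))
        block = end + 1
    # The final chunk_size approximates the optimal size for the next scan.
    return events, chunk_size
```

The returned `chunk_size` plays the role of `optimal_chunk_size` in the diff: it is persisted at the end of a scan so the next scan starts near the node's sweet spot instead of at the low default.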
