"Torchrun for the World": Enabling any terminal user to mobilize global computing resources with a single command to execute local code.
Building the Next-Generation Computing Internet - EasyNet
English | 中文
EasyRemote is not just a Private Function-as-a-Service (Private FaaS) platform; it is our answer to the future of computing:
While current cloud computing models are platform-centric, requiring data and code to "go to the cloud" in exchange for computing resources, we believe:
The next-generation computing network should be terminal-centric, language-interfaced, function-granular, and trust-bounded.
We call it: "EasyNet".
EasyRemote is the first-stage implementation of EasyNet, allowing you to:
- Define task logic using familiar Python function structures
- Deploy computing nodes on any device while maintaining privacy, performance, and control
- Transform local functions into globally accessible task interfaces through lightweight VPS gateways
- Eventually launch tasks as simply as running `torchrun`, with automatic scheduling to the most suitable resources for execution
| Traditional Cloud Computing | EasyNet Mode |
|---|---|
| Platform-centric | Terminal-centric |
| Code must go to cloud | Code stays on your device |
| Pay for computing power | Contribute to earn computing power |
| Vendor lock-in | Decentralized collaboration |
| Cold start delays | Always warm |
```python
# 1. Start gateway node (any VPS)
from easyremote import Server
Server(port=8080).start()

# 2. Contribute computing node (your device)
from easyremote import ComputeNode
node = ComputeNode("your-gateway:8080")

@node.register
def ai_inference(prompt):
    return your_local_model.generate(prompt)  # Runs on your GPU

node.serve()

# 3. Global computing access (anywhere)
from easyremote import Client
result = Client("your-gateway:8080").execute("ai_inference", "Hello AI")
```

Your device has joined EasyNet!
| Feature | AWS Lambda | Google Cloud | EasyNet Node |
|---|---|---|---|
| Computing Location | Cloud servers | Cloud servers | Your device |
| Data Privacy | Upload to cloud | Upload to cloud | Never leaves local |
| Computing Cost | $200+/million calls | $200+/million calls | $5/month gateway fee |
| Hardware Limitations | Cloud specs | Cloud specs | Your GPU/CPU |
| Startup Latency | 100-1000ms | 100-1000ms | 0ms (always online) |
- English Documentation Center - Complete English documentation navigation
- 中文文档中心 - Complete Chinese documentation navigation
- 5-Minute Quick Start - Fastest way to get started | 中文
- Installation Guide - Detailed installation instructions | 中文
- API Reference - Complete API documentation | 中文
- Basic Tutorial - Detailed basic tutorial | 中文
- Advanced Scenarios - Complex application implementation | 中文
- System Architecture - Overall architecture design | 中文
- Deployment Guide - Multi-environment deployment solutions | 中文
- Technical Whitepaper - EasyNet theoretical foundation | 中文
- Research Proposal - Academic research plan | 中文
- Project Pitch - Business plan overview | 中文
```python
@node.register
def medical_diagnosis(scan_data):
    # Medical data never leaves your HIPAA-compliant device,
    # but the diagnostic service can be securely accessed globally
    return your_private_ai_model.diagnose(scan_data)
```

- Traditional Cloud Services: pay-per-use, so costs keep climbing as call volume scales
- EasyNet Model: contribute computing power to earn credits, then spend credits to call others' computing power
- Gateway Cost: roughly $5/month, versus $200+ per million calls on traditional cloud
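As a rough illustration of the cost gap claimed above, here is a back-of-the-envelope comparison. It uses only the figures quoted in this README ($200+ per million calls versus a ~$5/month gateway); the monthly call volume is a hypothetical example, not a measurement.

```python
# Back-of-the-envelope cost comparison using the figures quoted in this README.
# The call volume below is a hypothetical example, not a measurement.
CLOUD_COST_PER_MILLION_CALLS = 200.0  # USD, "traditional cloud $200+/million calls"
GATEWAY_COST_PER_MONTH = 5.0          # USD, flat EasyNet gateway fee

calls_per_month = 10_000_000          # hypothetical workload

cloud_cost = calls_per_month / 1_000_000 * CLOUD_COST_PER_MILLION_CALLS
easynet_cost = GATEWAY_COST_PER_MONTH  # compute itself comes from contributed nodes

print(f"Traditional cloud: ${cloud_cost:,.0f}/month")    # Traditional cloud: $2,000/month
print(f"EasyNet gateway:   ${easynet_cost:,.0f}/month")  # EasyNet gateway:   $5/month
```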
```python
# Your gaming PC can provide AI inference services globally
@node.register
def image_generation(prompt):
    return your_stable_diffusion.generate(prompt)

# Your MacBook can participate in distributed training
@node.register
def gradient_computation(batch_data):
    return your_local_model.compute_gradients(batch_data)
```

"Computing evolution is not a linear progression, but a series of paradigm leaps."
Core Innovation: From local calls → cross-node function calls
Technical Expression: @remote decorator for transparent distributed execution
Paradigm Analogy: RPC → gRPC → EasyRemote (spatial decoupling of function calls)
```python
# Traditional local call
def ai_inference(data):
    return model.predict(data)

# EasyRemote: function calls across global networks
@node.register
def ai_inference(data):
    return model.predict(data)

result = client.execute("global_node.ai_inference", data)
```

Breakthrough Metrics:
- API Simplicity: 25+ lines → 12 lines (-52%)
- Startup Latency: 100-1000ms → 0ms (-100%)
- Privacy Protection: Data uploaded to the cloud → Data never leaves the local device
Core Innovation: From explicit scheduling → adaptive intelligent scheduling
Technical Expression: Intent-driven multi-objective optimization scheduling
Paradigm Analogy: Kubernetes → Ray → EasyRemote ComputePool
```python
# Traditional explicit scheduling
client.execute("specific_node.specific_function", data)

# EasyRemote: intelligent intent scheduling
result = await compute_pool.execute_optimized(
    task_intent="image_classification",
    requirements=TaskRequirements(accuracy=">95%", cost="<$5")
)
# The system automatically performs: task analysis → resource matching → optimal scheduling
```

Breakthrough Metrics:
- Scheduling Efficiency: Manual configuration → Millisecond-level automatic decisions
- Resource Utilization: 60% → 85% (+42%)
- Cognitive Load: Complex configuration → Intent expression
Core Innovation: From calling functions → expressing intentions
Technical Expression: Natural language-driven expert collaboration networks
Paradigm Analogy: LangChain → AutoGPT → EasyRemote Intent Engine
```python
# Traditional function-call mindset
await compute_pool.execute_optimized(function="train_classifier", ...)

# EasyRemote: natural language intent expression
result = await easynet.fulfill_intent(
    "Train a medical imaging AI with >90% accuracy for under $10"
)
# The system automatically performs: intent understanding → task decomposition → expert discovery → collaborative execution
```

Breakthrough Metrics:
- User Barrier: Python developers → General users (10M+ user scale)
- Interaction Mode: Code calls → Natural language
- Collaboration Depth: Tool calls → Intelligent agent networks
```
┌──────────────────────────────────────────────────────────┐
│ Global Compute OS                                        │  ← Paradigm 3: Intent Layer
│ "Train medical AI" → Auto-coordinate global experts      │    (Intent-Graph)
└──────────────────────────────────────────────────────────┘
                             ▲
┌──────────────────────────────────────────────────────────┐
│ Compute Sharing Platform                                 │  ← Paradigm 2: Autonomous Layer
│ Intelligent scheduling + Multi-objective optimization    │    (Intelligence-Linked)
└──────────────────────────────────────────────────────────┘
                             ▲
┌──────────────────────────────────────────────────────────┐
│ Private Function Network                                 │  ← Paradigm 1: Function Layer
│ @remote decorator + Cross-node calls + Load balancing    │    (Function-Driven)
└──────────────────────────────────────────────────────────┘
```
Ultimate Vision: Mobilize global computing as easily as running `torchrun`.

```
$ easynet "Train a medical imaging AI with my local data, 95%+ accuracy required"
Understanding your needs, coordinating global medical AI expert nodes...
Found stanford-medical-ai and 3 other expert nodes, starting collaborative training...
```

```
Global clients
      ↓
Lightweight gateway cluster (routing only, no computing)
      ↓
Personal computing nodes (actual execution)
      ↓
Peer-to-peer collaboration network
```
- Communication Protocol: gRPC + Protocol Buffers
- Secure Transport: End-to-end encryption
- Load Balancing: Intelligent resource awareness
- Fault Tolerance: Automatic retry and recovery
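To make the fault-tolerance bullet above concrete, here is a minimal sketch of the generic retry-with-backoff pattern it refers to. This is an illustrative pattern only, not EasyRemote's actual internals; the wrapped call in the commented usage line is a hypothetical example.

```python
import random
import time

def with_retry(call, max_attempts=3, base_delay=0.2):
    """Retry a flaky callable with exponential backoff (illustrative pattern only)."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # recovery failed; surface the error to the caller
            # Exponential backoff with a little jitter before the next attempt
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))

# Hypothetical usage, assuming `client` is an easyremote Client as shown earlier:
# result = with_retry(lambda: client.execute("ai_inference", "Hello AI"))
```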
Limitations of Traditional Models:
- Cloud service costs keep climbing as usage scales
- Data must be uploaded to third-party servers
- Cold starts and network latency limit performance
- Users are locked into major cloud service providers
EasyNet's Breakthroughs:
- Computing Sharing Economy: contribute idle resources, gain global computing power
- Privacy by Design: data never leaves your device
- Edge-First: zero latency, optimal performance
- Decentralized: no single points of failure, no vendor lock-in
Redefining the future of computing: From a few cloud providers monopolizing computing power to every device being part of the computing network.
```bash
# Become an early node in EasyNet
pip install easyremote

# Contribute your computing power
python -c "
from easyremote import ComputeNode
node = ComputeNode('demo.easynet.io:8080')

@node.register
def hello_world(): return 'Hello from my device!'

node.serve()
"
```
| Role | Contribution | Benefits |
|---|---|---|
| Computing Providers | Idle GPU/CPU time | Computing credits/token rewards |
| Application Developers | Innovative algorithms and applications | Global computing resource access |
| Gateway Operators | Network infrastructure | Routing fee sharing |
| Ecosystem Builders | Tools and documentation | Community governance rights |
- Technical Discussions: GitHub Issues
- Community Chat: GitHub Discussions
- Business Collaboration: [email protected]
- Project Founder: Silan Hu - NUS PhD Candidate
Ready to join the computing revolution?

```bash
pip install easyremote
```

Don't just see it as a distributed function tool; it's a prototype running on old-world tracks but heading towards a new-world destination.

If you believe in this new worldview, please give us a star!
