
io.net

Overview

io.net is a decentralized GPU computing network optimized for AI and machine learning workloads. As a significant project in the DePIN (Decentralized Physical Infrastructure Networks) space, io.net aggregates idle GPU computing resources worldwide to provide low-cost, high-performance on-demand computing services for AI researchers, developers, and enterprises.

io.net was launched in 2023, founded by Ahmad Shadid and Tory Green, and built on the Solana blockchain. The project's core vision is to break the monopoly of traditional cloud computing giants over GPU resources by enabling anyone with GPU hardware to contribute computing power and earn rewards in a decentralized manner, while providing the AI industry with more affordable and flexible computing resources.

With the rapid development of AI technology, demand for GPU computing power has grown explosively. Traditional cloud service providers (such as AWS, Google Cloud, Microsoft Azure) offer expensive GPU resources that are often in short supply during peak periods. io.net addresses this by creating an open GPU marketplace that consolidates idle GPU power from data centers, cryptocurrency mining facilities, gamers, and even personal computers worldwide into a massive distributed computing network.

As of 2024, io.net has aggregated tens of thousands of GPU nodes, offering diverse computing resources ranging from consumer-grade graphics cards to data center-grade GPUs (such as NVIDIA H100, A100). The network supports mainstream AI frameworks (such as PyTorch, TensorFlow, JAX) and can handle large-scale model training, inference, rendering, and other compute-intensive tasks.

Core Features

Distributed GPU Network

io.net has built a globally distributed GPU computing network where any individual or organization with qualifying GPU hardware can become a computing power provider (GPU Provider). These GPU nodes are distributed worldwide, including independent data centers, cryptocurrency mining facilities, surplus resources from cloud providers, and personal computers. Through its decentralized architecture, io.net achieves geographic redundancy and high availability, with the network continuing to function normally even if some nodes go offline.

On-Demand Computing Services

Users can flexibly rent GPU computing power based on actual needs, with billing by the hour, day, or task. Unlike traditional cloud services that often require long-term contracts, io.net offers fully elastic resource scheduling, allowing users to spin up dozens or even hundreds of GPU instances for parallel computing within minutes and release resources immediately after task completion. This flexibility is particularly suitable for AI experiments, model fine-tuning, and inference services that require burst computing power.

Multi-Tier GPU Resources

GPU resources in the io.net network are divided into multiple tiers, from consumer-grade graphics cards (such as NVIDIA RTX 4090, 3090) to professional compute cards (such as A100, H100, MI300), and even dedicated AI accelerator chips. Users can choose hardware configurations that match their task requirements. For development and testing phases, lower-cost consumer GPUs can be used; for large-scale training or production environments, high-end data center GPUs can be selected. This multi-tier resource pool enables io.net to serve a broad user base from individual developers to large AI laboratories.

Native AI Framework Support

io.net deeply integrates mainstream AI and machine learning frameworks, including PyTorch, TensorFlow, JAX, Hugging Face Transformers, and more. Users can directly use familiar development tools and libraries to run training and inference tasks on the io.net network without modifying code. The platform also provides pre-configured container images and environment templates, further simplifying the deployment process.

Intelligent Task Scheduling

io.net's scheduling system automatically matches the most suitable compute nodes based on task requirements (such as GPU model, VRAM size, network bandwidth, geographic location). The scheduling algorithm considers cost, performance, reliability, and other factors to ensure users get the best value. For distributed training tasks, the system also prioritizes low-latency node combinations to reduce communication overhead.
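The multi-factor matching described above can be pictured as a weighted scoring function. The sketch below is a hypothetical illustration — the fields, weights, and formula are assumptions for exposition, not io.net's actual scheduling algorithm.

```python
# Hypothetical node-matching sketch: weights and fields are illustrative
# assumptions, not io.net's real scheduler.
from dataclasses import dataclass

@dataclass
class Node:
    gpu_model: str
    vram_gb: int
    price_per_hour: float   # in IO tokens (placeholder units)
    reliability: float      # historical uptime, 0.0-1.0
    latency_ms: float

def score(node: Node, required_vram_gb: int, max_price: float) -> float:
    """Return a match score (higher is better); -1.0 means ineligible."""
    if node.vram_gb < required_vram_gb or node.price_per_hour > max_price:
        return -1.0
    # Blend cost, reliability, and latency into one figure of merit.
    cost_term = 1.0 - node.price_per_hour / max_price
    latency_term = 1.0 / (1.0 + node.latency_ms / 100.0)
    return 0.4 * cost_term + 0.4 * node.reliability + 0.2 * latency_term

def best_node(nodes, required_vram_gb, max_price):
    eligible = [n for n in nodes if score(n, required_vram_gb, max_price) >= 0]
    return max(eligible,
               key=lambda n: score(n, required_vram_gb, max_price),
               default=None)
```

A real scheduler would also weigh geographic proximity and, for distributed training, inter-node latency between candidate groups, as the paragraph above notes.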

Decentralized Storage Integration

io.net integrates with decentralized storage networks (such as Filecoin, Arweave), allowing users to store training data, model weights, and computation results in distributed storage systems. This design avoids dependence on centralized cloud storage, improving data durability and censorship resistance.

Technical Architecture

Network Topology

io.net employs a layered network architecture. The bottom layer consists of tens of thousands of GPU nodes (Worker Nodes) running the io.net client software, registering local GPU resources with the network. The middle layer consists of Coordinator Nodes responsible for task scheduling, resource monitoring, and node management. The top layer is the user interface layer, including a web console, CLI tools, and API, through which users submit compute tasks and manage resources.

Ray Distributed Framework

io.net is built on the open-source Ray distributed computing framework, developed by UC Berkeley RISELab. Ray is a general-purpose distributed computing system widely used for large-scale AI training and inference. It provides key capabilities such as task parallelism, state sharing, and fault recovery, enabling io.net to efficiently manage GPU resources distributed globally.
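Ray's core abstraction is task parallelism: independent units of work fan out to a pool of workers and results are gathered back (in Ray itself this is done with `@ray.remote` functions and `ray.get`). As a stdlib-only analogy, the same fan-out/gather pattern looks like this — a thread pool stands in for Ray's distributed GPU workers:

```python
# Stdlib analogy for Ray-style task parallelism: a thread pool stands in
# for a pool of distributed GPU workers. In Ray, train_shard would be an
# @ray.remote task and run_parallel would use ray.get on futures.
from concurrent.futures import ThreadPoolExecutor

def train_shard(shard):
    """Stand-in for a GPU-bound task, e.g. training on one data shard."""
    return sum(x * x for x in shard)  # placeholder computation

def run_parallel(shards):
    # Fan the shards out to workers, gather results in submission order.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(train_shard, shards))
```

Ray adds what this sketch lacks — state sharing via actors, automatic retries, and scheduling across machines — which is why it suits a globally distributed GPU pool.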

Containerized Deployment

All compute tasks run in isolated container environments (based on Docker/Kubernetes), ensuring that different users' tasks do not interfere with each other. Containerization also provides environmental consistency, allowing code that passes local testing to seamlessly migrate to the io.net network. The platform offers a rich set of pre-built images covering common AI development environments and dependency libraries.

Blockchain Integration

io.net records critical network metadata on the Solana blockchain, including node registration information, task submission records, and payment receipts. The blockchain's immutability guarantees transaction transparency and traceability. The IO token serves as the network's native cryptocurrency, used to pay computing fees and incentivize computing power providers. Smart contracts automatically execute payment settlements without manual intervention.

Performance Monitoring System

io.net deploys a comprehensive monitoring system that tracks each GPU node's health status, utilization rate, temperature, failure rate, and other metrics in real time. This data is used not only for scheduling optimization but is also provided to users to help them evaluate task execution quality. Nodes that perform poorly or frequently go offline automatically have their reputation scores lowered, reducing task allocation.

GPU Network Mechanism

Node Registration and Verification

Becoming an io.net GPU provider requires a registration and verification process. First, users install the io.net client software locally and connect to the network. The system automatically detects GPU hardware specifications (model, VRAM, computing capability, etc.) and runs benchmarks to verify performance. Once verified, node information is recorded on the Solana blockchain, and the node begins receiving task assignments.

Computing Power Pricing Mechanism

GPU computing power prices are determined by market supply and demand. Computing power providers can set their own prices (IO tokens per GPU hour), but nodes priced well above market rates will attract fewer task assignments. io.net provides reference pricing suggestions based on GPU model, market conditions, and historical data. Typically, io.net GPU prices are 60-80% lower than traditional cloud service providers, enabling AI developers to significantly reduce costs.
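The cost comparison is simple arithmetic. The rates below are made-up placeholders (not live quotes from any provider) chosen to illustrate a rate roughly 70% below a traditional cloud price, within the 60-80% range cited above:

```python
# Illustrative cost comparison; hourly rates are made-up placeholders,
# not actual quotes from io.net or any cloud provider.
def training_cost(gpus: int, hours: float, rate_per_gpu_hour: float) -> float:
    return gpus * hours * rate_per_gpu_hour

cloud_cost = training_cost(8, 72, 4.00)   # hypothetical cloud A100 rate
depin_cost = training_cost(8, 72, 1.20)   # hypothetical marketplace rate
savings = 1 - depin_cost / cloud_cost     # fraction saved, here 0.70
```

At these assumed rates, an 8-GPU, 72-hour training run costs 2,304 versus 691.20 in the respective billing units, a 70% saving.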

Task Execution and Monitoring

When users submit compute tasks, the scheduling system selects the most suitable resources from available GPU nodes. Tasks are packaged into container images, distributed to target nodes, and executed. During execution, the system continuously monitors task status, resource consumption, and output logs. Users can view task progress in real time through the web console or API. If a node fails, the system automatically migrates the task to other available nodes to ensure completion.
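The fail-over behaviour described above — migrating a task when its node fails — can be sketched as a simple retry loop over candidate nodes. This is an illustrative model, not io.net's implementation; the node and executor types are hypothetical:

```python
# Hypothetical fail-over sketch: try candidate nodes in order until one
# completes the task. Not io.net's actual migration logic.
def run_with_migration(task, nodes, execute):
    """execute(node, task) returns a result, or raises RuntimeError on
    node failure; on failure the task migrates to the next node."""
    errors = []
    for node in nodes:
        try:
            return execute(node, task)
        except RuntimeError as exc:
            errors.append((node, str(exc)))  # record and migrate
    raise RuntimeError(f"task failed on all {len(nodes)} nodes: {errors}")
```

A production scheduler would additionally checkpoint partial progress so a migrated task resumes rather than restarts.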

Reputation and Penalty Mechanism

To ensure service quality, io.net has established a node reputation system. Node performance (such as uptime, task success rate, response speed) is recorded and used to calculate a reputation score. High-reputation nodes receive more task assignments and higher earnings. Conversely, nodes that frequently go offline, have high task failure rates, or provide false hardware information will have their reputation lowered and may even be expelled from the network. This mechanism incentivizes node providers to maintain high-quality service.
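A reputation score of the kind described — combining uptime, task success rate, and responsiveness — might be computed as a weighted blend. The weights and the expulsion threshold below are illustrative assumptions, not io.net's published parameters:

```python
# Hypothetical reputation formula; weights and threshold are
# illustrative assumptions, not io.net's actual parameters.
def reputation(uptime: float, success_rate: float, avg_response_s: float) -> float:
    """uptime and success_rate in [0, 1]; avg_response_s in seconds.
    Returns a score on a 0-100 scale."""
    responsiveness = 1.0 / (1.0 + avg_response_s)  # decays with latency
    return 100 * (0.5 * uptime + 0.35 * success_rate + 0.15 * responsiveness)

EXPULSION_THRESHOLD = 30.0  # illustrative: below this, a node risks removal
```

Under this toy formula, a node with 99% uptime and a 97% success rate scores around 91, while one with 20% uptime and a 30% success rate falls below the expulsion threshold.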

Earnings Settlement

Computing power provider earnings are paid in IO tokens, calculated based on actual computing time provided and resource type. Settlement periods are typically daily or weekly, with smart contracts automatically transferring earnings to the node operator's wallet address. Providers can choose to hold IO tokens for potential price appreciation or exchange them for other cryptocurrencies or fiat currency on decentralized exchanges.
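Settlement itself is hours-times-rate accounting over the period. The sketch below illustrates this; the GPU classes and per-hour rates are placeholders, not actual network rates:

```python
# Illustrative settlement: payout = sum of (hours x rate) per GPU class
# over the period. Rates are made-up placeholders, not network rates.
def settle(period_records, rates):
    """period_records: list of (gpu_class, hours) tuples.
    rates: mapping of gpu_class -> IO tokens per GPU-hour."""
    return sum(hours * rates[gpu_class] for gpu_class, hours in period_records)

rates = {"rtx4090": 0.35, "a100": 1.10}
payout_io = settle([("rtx4090", 20.0), ("a100", 5.0)], rates)  # 12.5 IO
```

In the live network this calculation would be performed by the settlement smart contract and paid directly to the operator's wallet.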

IO Token Economics

Token Functions

The IO token is the core of the io.net ecosystem, with multiple functions:

  • Payment Medium: Users use IO tokens to pay GPU computing fees
  • Earnings Distribution: Computing power providers earn IO tokens by contributing GPU resources
  • Staking and Governance: Holders can stake IO tokens to participate in network governance, voting on major decisions such as protocol upgrades and parameter adjustments
  • Incentive Mechanism: Early contributors, referrers, and community builders can receive IO token rewards

Token Distribution

The IO token's total supply and distribution plan (refer to the latest official announcements for specific numbers) typically includes:

  • Computing power provider incentives (long-term release)
  • User rewards and ecosystem development fund
  • Team and early investors (lock-up period and linear release)
  • Strategic partners
  • Community treasury (managed by DAO)

Value Capture

The IO token's value is directly related to io.net network usage. As more AI developers and enterprises use io.net services, demand for IO tokens continues to grow. Additionally, a portion of transaction fees is used to buy back and burn IO tokens, reducing circulating supply and creating deflationary pressure, which benefits long-term token value growth.
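The deflationary effect of buy-back-and-burn is mechanical: a share of fee revenue is used to retire tokens from circulation. The toy model below uses invented numbers (supply, revenue, burn share) purely to show the arithmetic; they are not protocol parameters:

```python
# Toy buy-back-and-burn model; all figures are invented for illustration
# and are not io.net protocol parameters.
def supply_after_burn(circulating, fee_revenue_io, burn_share):
    """Retire burn_share of fee revenue (in IO tokens) from supply."""
    burned = fee_revenue_io * burn_share
    return circulating - burned, burned

supply, burned = supply_after_burn(500_000_000, 2_000_000, 0.25)
```

Each settlement period thus shrinks circulating supply in proportion to network usage, which is the intended link between service demand and token value.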

Incentive Programs

io.net regularly launches various incentive programs to encourage user and provider participation. For example, new users can receive free computing credits for testing; users who refer new nodes receive referral rewards; early testnet participants can receive airdrops. These incentives help rapidly expand network scale and user base.

Node Operation Guide

Hardware Requirements

Becoming an io.net GPU provider requires meeting certain hardware requirements:

  • GPU: NVIDIA GPU (RTX 3060 or above recommended, or professional compute cards such as A100, H100). Some AMD GPUs may also be supported; check the official compatibility list
  • VRAM: At least 8GB (GPUs with more VRAM can accept larger-scale tasks and earn more)
  • CPU: Multi-core processor, at least 4 cores
  • RAM: At least 16GB
  • Storage: 100GB available disk space (for caching data and models)
  • Network: Stable internet connection, at least 10Mbps upload speed recommended
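A prospective operator can check the VRAM requirement above against the output of `nvidia-smi --query-gpu=name,memory.total --format=csv,noheader`. The sketch below parses that CSV format from a sample string (the actual client performs its own, more thorough verification):

```python
# Sketch of a pre-registration VRAM check. Parses the CSV output of
# `nvidia-smi --query-gpu=name,memory.total --format=csv,noheader`,
# shown here as a sample string rather than a live query.
MIN_VRAM_MIB = 8 * 1024  # "at least 8GB" from the requirements above

def parse_nvidia_smi(csv_text):
    gpus = []
    for line in csv_text.strip().splitlines():
        name, mem = (field.strip() for field in line.split(","))
        gpus.append((name, int(mem.split()[0])))  # "24564 MiB" -> 24564
    return gpus

def meets_vram_requirement(gpus):
    return [(name, mem >= MIN_VRAM_MIB) for name, mem in gpus]

sample = "NVIDIA GeForce RTX 4090, 24564 MiB\nNVIDIA GeForce GTX 1650, 4096 MiB"
```

Running the check on the sample flags the 4 GB card as below the minimum while the 24 GB card passes.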

Software Configuration

  1. Install the operating system (Ubuntu 20.04/22.04 or other Linux distributions recommended)
  2. Install NVIDIA drivers and CUDA toolkit
  3. Install Docker container engine
  4. Download and install io.net client software
  5. Create a Solana wallet (for receiving IO token earnings)
  6. Register the node and complete hardware verification

Starting Operations

After configuration, start the io.net client, and the software will automatically register the node on the network and begin accepting tasks. Node operators can set maximum resource usage, pricing strategies, operating time windows, and other parameters through configuration files. Keeping nodes online for extended periods is recommended to improve reputation scores and task assignment probability.

Maintenance and Optimization

Regularly check node health status, monitor GPU temperature, utilization, and fault logs. Keep io.net client software and system drivers up to date for the latest features and security patches. Optimize network configuration to reduce latency and improve task execution efficiency. For professional operators, deploying multi-GPU servers or clusters can form scalable computing power services.

Compute Task Types

AI Model Training

io.net supports large-scale deep learning model training, including image classification, object detection, natural language processing, speech recognition, and more. Users can train models ranging from small experimental models to large language models (LLMs) with billions of parameters. The platform supports distributed training strategies (such as data parallelism, model parallelism, pipeline parallelism), fully utilizing multi-GPU resources to accelerate training.

Model Inference Services

Trained AI models can be deployed on the io.net network to provide inference services. For example, deploying a real-time image recognition API, chatbot backend, or recommendation system. io.net's elastic scaling capability allows inference services to automatically adjust the number of GPU instances based on request volume to handle traffic fluctuations.
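The elastic-scaling behaviour described above reduces to sizing the GPU fleet to current request volume. A minimal autoscaling rule might look like this — the per-instance capacity and fleet bounds are illustrative assumptions:

```python
# Hypothetical autoscaling rule for an inference service: size the GPU
# fleet to request volume. Capacity and bounds are illustrative.
import math

def instances_needed(requests_per_s, per_instance_capacity, min_n=1, max_n=100):
    """Smallest fleet that covers the load, clamped to [min_n, max_n]."""
    n = math.ceil(requests_per_s / per_instance_capacity)
    return max(min_n, min(max_n, n))
```

For example, at an assumed 50 requests/s per GPU instance, a burst to 230 requests/s would call for five instances, and the fleet would shrink back to the minimum when traffic subsides.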

Rendering and Simulation

Beyond AI workloads, io.net is also suitable for 3D rendering, video encoding, physics simulation, and other GPU-intensive tasks. Game developers can use the network for batch scene rendering, and scientific researchers can run large-scale molecular dynamics simulations or climate models.

Data Processing

Large-scale data preprocessing and feature engineering can also be executed on io.net. For example, image data augmentation, video frame extraction, text vectorization, and other parallel computing tasks can fully leverage the GPU's parallel processing capabilities, significantly reducing data preparation time.

Development History

Early 2023: Project Launch

Ahmad Shadid and Tory Green founded io.net, proposing the vision of building a decentralized GPU computing network. The team completed seed round funding and began technical development.

Mid 2023: Testnet Launch

The io.net testnet was released, allowing early participants to register nodes and test network functionality. The testnet attracted thousands of GPU nodes, validating the technical approach's feasibility.

Late 2023: Mainnet Launch

The io.net mainnet officially launched, beginning to provide production-grade computing services. The first enterprise clients and AI research institutions began using the network for model training and inference.

Q1 2024: Rapid Growth

Network scale expanded rapidly, with GPU node counts exceeding tens of thousands. io.net completed a new funding round with investors including well-known crypto funds and traditional tech investment firms.

Q2 2024: IO Token Launch

The IO token was distributed to the community through public sales and airdrops. The token was listed on major decentralized exchanges, providing liquidity and value capture mechanisms for the ecosystem.

Mid-Late 2024: Ecosystem Flourishing

io.net established partnerships with multiple AI projects, open-source communities, and enterprises. The network processed millions of hours of cumulative compute tasks, becoming a benchmark project in the DePIN space.

Technical Advantages

Significant Cost Advantage

io.net GPU prices are typically 60-80% lower than traditional cloud service providers. This cost advantage comes from utilizing idle resources and the decentralized operating model, without the construction and maintenance costs of large data centers. For AI startups and research institutions, these cost savings can significantly lower R&D barriers.

Rich and Diverse Resources

io.net aggregates GPU resources ranging from consumer-grade to data center-grade, allowing users to choose flexibly based on their needs. This diversity is difficult for any single cloud provider to match: even when high-end GPUs (such as the H100) are in short supply, io.net's distributed network can still surface available capacity.

Elasticity and Scalability

Users can scale from a few GPUs to hundreds within minutes without advance reservation or long-term contracts. This elasticity is extremely valuable for scenarios requiring burst computing power (such as urgent model training, large-scale inference).

Censorship Resistance and Decentralization

Based on blockchain and distributed network architecture, io.net has inherent censorship resistance. No single entity can shut down the network or censor specific users' compute tasks. This is particularly important for users who need privacy protection or who operate in restrictive regulatory environments.

Global Distribution and Low Latency

GPU nodes are distributed worldwide, allowing users to select geographically closer nodes to reduce network latency. For real-time inference services or applications requiring fast responses, this geographic distribution provides performance advantages.

Application Scenarios

AI Research and Development

Academic institutions and independent researchers use io.net for cutting-edge AI algorithm research, training experimental models without investing in expensive local hardware or bearing high cloud service costs.

Enterprise AI Applications

Enterprises use io.net to build and deploy AI-driven products such as intelligent customer service, content moderation, recommendation systems, and fraud detection. Elastic resource scheduling allows enterprises to flexibly handle business growth.

Web3 and Crypto AI

Decentralized AI applications (such as on-chain AI agents, decentralized oracles) can use io.net as a computing backend, maintaining decentralization across the entire technology stack.

Scientific Computing

Research fields like bioinformatics, drug discovery, and climate simulation that require substantial GPU computing power find io.net provides cost-effective computing resources.

Content Creation

Creative industries including video production, game development, and animation rendering can leverage io.net's GPU resources for batch rendering and post-processing, shortening production cycles.

User Guide

Registration and Setup

  1. Visit the io.net website (https://io.net/)
  2. Create an account and connect a Solana wallet (such as Phantom, Solflare)
  3. Top up with IO tokens or use supported payment methods (some platforms support credit card purchases)
  4. Configure the computing environment in the console (select GPU type, framework, image, etc.)

Submitting Compute Tasks

Users can submit tasks through the following methods:

  • Web Console: Suitable for simple tasks; upload code and data through the graphical interface, configure parameters, and launch
  • CLI Tools: Command-line tools suitable for automation and scripted workflows
  • API: Programmatic task submission, integrating into existing MLOps workflows

Monitoring and Management

After a task launches, users can view its execution status, resource usage, and log output in real time through the console. Tasks can be paused, resumed, or terminated. Upon completion, results are automatically saved to the specified storage location (local, cloud storage, or decentralized storage).

Cost Control

io.net provides cost estimation tools, allowing users to view estimated costs before submitting tasks. After setting a budget cap, tasks that reach the limit will automatically stop, preventing overspending. The platform also offers different pricing tiers of resources, balancing performance and cost.
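The budget-cap behaviour described above amounts to comparing accrued cost against a ceiling as a job runs. A client-side sketch of the same guard, with a placeholder rate:

```python
# Client-side budget-guard sketch mirroring the platform's budget-cap
# behaviour; the hourly rate is a placeholder.
def accrued_cost(elapsed_hours, gpus, rate_per_gpu_hour):
    return elapsed_hours * gpus * rate_per_gpu_hour

def should_stop(elapsed_hours, gpus, rate_per_gpu_hour, budget_cap):
    """True once spend reaches the cap, signalling the job to halt."""
    return accrued_cost(elapsed_hours, gpus, rate_per_gpu_hour) >= budget_cap
```

With four GPUs at an assumed 1.0 token/GPU-hour and a 50-token cap, the guard trips shortly after the twelfth hour.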

Future Development

Resource Pool Expansion

io.net plans to expand support to more types of hardware resources, including TPUs (Tensor Processing Units), FPGAs (Field-Programmable Gate Arrays), and dedicated AI accelerator chips. This will further enrich the network's computing capabilities and application scenarios.

Cross-Chain Integration

While currently built on Solana, io.net plans to support multi-chain architecture, allowing users to pay with tokens from Ethereum, BSC, and other blockchains, and record network activity on multiple chains for improved interoperability.

Edge Computing

In the future, io.net may integrate GPU resources from edge devices (such as smartphones, IoT devices) to build a more distributed edge computing network supporting low-latency real-time AI inference services.

Enterprise-Grade Services

To meet enterprise user needs, io.net will launch SLA (Service Level Agreement) guarantees, dedicated resource pools, private network deployments, and other enterprise-grade features to attract more traditional enterprises to adopt decentralized computing services.

AI Model Marketplace

io.net plans to establish a decentralized AI model marketplace where developers can publish and trade pre-trained models, datasets, and AI services, forming a complete AI ecosystem.

Risk Disclosure

Node Reliability

Since GPU nodes are operated by globally distributed individuals and organizations, node stability and reliability vary. Some nodes may interrupt service due to hardware failures, network issues, or operators going offline. Although io.net's scheduling system automatically migrates tasks, migration may still delay task completion.

Performance Variability

Different nodes have varying hardware performance, network bandwidth, and configuration levels, which may result in different task execution speeds. Users should choose appropriate resource tiers based on task criticality; for production environments, high-reputation and high-performance nodes are recommended.

Data Privacy

Although io.net uses container isolation and encrypted transmission technology to protect user data, running compute tasks on a distributed network still carries some data leakage risk. When processing sensitive data, users should assess risks and take additional encryption and security measures.

Token Price Volatility

The IO token's price is affected by market supply and demand, overall cryptocurrency market sentiment, and various other factors, and may experience significant volatility. Users should be aware of price risk when topping up and holding IO tokens.

Regulatory Uncertainty

Decentralized computing networks and cryptocurrency payments may face regulatory restrictions or legal uncertainty in certain jurisdictions. Users should understand and comply with local laws and regulations.

Glossary

  • DePIN: Decentralized Physical Infrastructure Networks that organize physical hardware resources (such as storage, computing, sensors) into decentralized networks through blockchain and token incentives
  • GPU: Graphics Processing Unit, a processor designed for parallel computing, widely used in AI training and inference
  • AI Training: The process of training neural network models using large amounts of data and computing resources
  • AI Inference: The process of using trained models to make predictions or classify new data
  • Distributed Computing: Distributing compute tasks across multiple nodes for parallel execution, improving processing speed and fault tolerance
  • Ray: An open-source distributed computing framework widely used for large-scale machine learning and AI applications
  • Solana: A high-performance blockchain platform, io.net's underlying blockchain infrastructure
  • Computing Power Provider: Individuals or organizations that contribute GPU resources and earn rewards on the io.net network
  • On-Demand Computing: Flexibly renting computing resources based on actual needs, paying only for what you use
  • Containerization: Using technologies like Docker to package applications and their dependencies into independent, portable units