diff --git a/docs/cli/Guides/collaboration.md b/docs/cli/Guides/collaboration.md index 6d1bdc2f..47e5a7d8 100644 --- a/docs/cli/Guides/collaboration.md +++ b/docs/cli/Guides/collaboration.md @@ -1,17 +1,20 @@ --- id: "collaboration" -title: "Confidential Collaboration" +title: "Two-party Collaboration" slug: "/guides/collaboration" -sidebar_position: 2 +displayed_sidebar: null +unlisted: true --- -Super Protocol enables independent parties to jointly compute over their private inputs without revealing those inputs to one other. +Super Protocol enables independent parties to jointly compute over their private inputs without revealing those inputs to one another. -This guide describes a simple example of confidential collaboration between two parties, **Alice** and **Bob**. Alice owns a script she wants to use to process Bob's dataset. However, the dataset contains sensitive information and cannot be shared. +This guide describes an example of confidential collaboration between two parties, **Alice** and **Bob**. Alice owns a script she wants to use to process Bob's dataset. -The computation runs on Super Protocol within a Trusted Execution Environment that is is isolated from all external access, including by Alice, Bob, the hardware owner, and the Super Protocol team. Additionally, Super Protocol's Certification System ensures verifiability, eliminating the need for trust. +The dataset contains sensitive information and cannot be shared. At the same time, Bob must review the script to ensure it is safe to run on his data. If Alice's script is proprietary and she cannot share it with Bob, a possible alternative is to involve independent security experts who can audit the script without exposing it publicly. -Note that this is just one example; Super Protocol's architecture enables a range of more complex multiparty scenarios. 
+The computation runs on Super Protocol within a Trusted Execution Environment that is isolated from all external access, including that of Alice, Bob, the hardware owner, and the Super Protocol team. Additionally, Super Protocol's Certification System ensures verifiability, eliminating the need for trust. + +The following is just one relatively simple example of confidential collaboration. Super Protocol's architecture enables a range of more complex scenarios involving multiple parties and assets. ## General workflow @@ -39,62 +42,62 @@ sequenceDiagram note over Alice,Blockchain: Execution Alice ->>+ Super Protocol / TEE: 8. Place an order - Super Protocol / TEE ->> Storage: Download the solution - Bob ->> Super Protocol / TEE: 9. Complete the data suborder - Super Protocol / TEE ->> Storage: Download the dataset + Bob ->> Super Protocol / TEE: 9. Approve the usage of the dataset + Super Protocol / TEE ->> Storage: Download the solution and dataset Super Protocol / TEE ->> Blockchain: Publish the order report - Super Protocol / TEE ->> Super Protocol / TEE: Execute the order + Super Protocol / TEE ->> Super Protocol / TEE: Process the order Super Protocol / TEE ->>- Storage: Upload the order results Alice ->> Storage: 10. Download the order results + end + Alice ->> Blockchain: 11. Get the order report Bob ->> Blockchain: 11. Get the order report - end ```
-**Preparation**: +**Preparation** -Alice builds a solution—a Docker image containing her script (1). She uploads the solution using SPCTL (2) and grants Bob access for verification (3). +Alice builds a solution—a Docker image containing her script ([1](/cli/guides/collaboration#alice-1-build-a-solution)). She uploads the solution using SPCTL ([2](/cli/guides/collaboration#alice-2-upload-the-solution)) and grants Bob access for verification ([3](/cli/guides/collaboration#alice-3-send-the-solution-to-bob)). -Bob downloads the solution (4) and verifies it is safe to process his data (5). +Bob (or an independent auditor) downloads the solution ([4](/cli/guides/collaboration#bob-4-download-the-solution)) and verifies that it is safe to process his data ([5](/cli/guides/collaboration#bob-5-verify-the-solution)). -Bob uploads his dataset to remote storage using SPCTL (6). The dataset is automatically encrypted during upload, and only Bob holds the key. +Bob uploads his dataset to remote storage using SPCTL ([6](/cli/guides/collaboration#bob-6-upload-the-dataset)). The dataset is automatically encrypted during upload, and only Bob holds the key. -Bob creates an offer on the Marketplace (7). The offer require Bob's manual approval for use. He shares the offer's IDs with Alice. +Bob creates an offer on the Marketplace ([7](/cli/guides/collaboration#bob-7-create-an-offer)). The offer requires Bob's manual approval for use. He shares the offer's IDs with Alice. -**Execution**: +**Execution** -Alice places an order on Super Protocol using her solution and Bob's offer ID (8). The order remains **Blocked** by the data suborder. +Alice places an order on Super Protocol using her solution and Bob's offer ID ([8](/cli/guides/collaboration#alice-8-place-an-order)). The order remains **Blocked** by the data suborder. -Bob manually completes the data suborder (9). The command includes the verified solution hash. 
Completion succeeds only if this hash matches the actual solution hash, meaning the solution was not altered. +Bob manually approves the usage of his dataset for the image with a specific hash ([9](/cli/guides/collaboration#bob-9-complete-the-data-suborder)). If this hash matches the actual solution hash, the CVM begins to process the order. If the hashes do not match, the order will be terminated with an error. -Once the computation finishes, Alice can download the result (10). +Once the computation finishes, Alice can download the result ([10](/cli/guides/collaboration#alice-10-download-the-order-results)). All the data within the TEE (solution, dataset, order results, etc.) is automatically deleted. -Both Alice and Bob can retrieve the order report (11) that confirms the authenticity of the entire process. +Both Alice and Bob can retrieve the order report ([11](/cli/guides/collaboration#alice-and-bob-11-get-the-order-report)) that confirms the authenticity of the entire trusted setup. ## Prerequisites -### Alice +**Alice**: - [SPCTL](/cli) - Docker -### Bob +**Bob**: - [SPCTL](/cli) -- Provider Tools +- [Provider Tools](/cli/guides/provider-tools) ## Preparation -### Alice: 1. Build the solution +### Alice: 1. Build a solution -1.1. Prepare the solution: write a Dockerfile that creates an image with your software. Keep in mind the special file structure inside the TEE: +1.1. Write a Dockerfile that creates an image with your code. Keep in mind the special file structure inside the TEE: -| **Location** | **Purpose** | **Access** | -| :- | :- | :- | -| `/sp/inputs/input-0001/`
`/sp/inputs/input-0002/`
etc. | Possible data locations | Read-only | -| `/sp/output/` | Output directory for results | Write; read own files | -| `/sp/certs/` | Contains the order certificate | Read-only | +| **Location** | **Purpose** | **Access** | +| :- | :- | :- | +| `/sp/inputs/input-0001/`
`/sp/inputs/input-0002/`
etc. | Possible data locations | Read-only | +| `/sp/output/` | Output directory for results | Write; read own files | +| `/sp/certs/` | Contains the order certificate, private key, and workloadInfo | Read-only | Your scripts must find the data in `/sp/inputs/` and write the results to `/sp/output/`. @@ -109,7 +112,7 @@ You can find several Dockerfile examples in the [Super-Protocol/solutions](https 1.2. Build an image: ```shell -docker build -t . +docker build --platform linux/amd64 -t . ``` Replace `` with the name of your solution. @@ -131,7 +134,7 @@ docker save :latest | gzip > .tar.gz ### Alice: 3. Send the solution to Bob -Send Bob the output `solution.resource.json` file from the previous step. +Send the output `solution.resource.json` file from the previous step to Bob. ### Bob: 4. Download the solution @@ -232,16 +235,16 @@ If you are registering an offer for the first time, you will be prompted to comp Follow the dialog: -Q: `Have you already created a DATA offer?` +Q: `Have you already created a DATA offer?`
A: `n` (No) -Q: `Please specify a path to the offer info json file` +Q: `Please specify a path to the offer info json file`
A: `./offer-info.json` -Q: `Please specify a path to the slot info json file` +Q: `Please specify a path to the slot info json file`
A: `./slot-info.json` -Q: `Do you want to add another slot?` +Q: `Do you want to add another slot?`
A: `n` (No) Wait for the offer to be created and find a line in the output with the IDs of the offer and slot, for example: @@ -250,7 +253,7 @@ Wait for the offer to be created and find a line in the output with the IDs of t Slot 119654 for offer 18291 has been created successfully ``` -Provide Bob with these IDs. Ignore other instructions you see in the output. +Provide Alice with these IDs. Ignore other instructions you see in the output. ## Execution @@ -323,6 +326,8 @@ If the order ended up with an error, the results will contain execution logs tha ### Alice and Bob: 11. Get the order report +You can get the order report as soon as the CVM downloads the order components and starts the execution, without waiting for the order to complete: + ```shell ./spctl orders get-report --save-to report.json ``` @@ -345,4 +350,8 @@ Additionally, find entries in the `runtimeInfo` array that start with `"type": " }, ``` -These are hashes of the actual solution and data that were executed within a TEE. Compare them with the solution and dataset hashes from the respective resource files. +These hashes are of the actual solution and data that were executed within a TEE. Compare them with the solution and dataset hashes from the respective resource files. + +## Support + +If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new). 
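The hash comparison described in step 11 can be scripted with `grep` and `cut` alone. The JSON below is mocked and its field layout simplified for illustration; take the real values from the `runtimeInfo` entries of `report.json` and from your `*.resource.json` files:

```shell
WORK="$(mktemp -d)"
# Mock stand-ins; real values come from report.json and solution.resource.json.
printf '{"hash":"abc123"}' > "$WORK/solution.resource.json"
printf '{"type":"image","hash":"abc123"}' > "$WORK/report-entry.json"

# Extract the hash fields without jq.
expected="$(grep -o '"hash":"[^"]*"' "$WORK/solution.resource.json" | cut -d'"' -f4)"
actual="$(grep -o '"hash":"[^"]*"' "$WORK/report-entry.json" | cut -d'"' -f4)"

if [ "$expected" = "$actual" ]; then
  echo "solution hash confirmed"
else
  echo "hash mismatch: the executed solution is not the one verified" >&2
fi
```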
\ No newline at end of file diff --git a/docs/marketplace/Guides/prepare-comfyui.md b/docs/cli/Guides/comfyui.md similarity index 91% rename from docs/marketplace/Guides/prepare-comfyui.md rename to docs/cli/Guides/comfyui.md index 5b8dc8eb..ddc23efb 100644 --- a/docs/marketplace/Guides/prepare-comfyui.md +++ b/docs/cli/Guides/comfyui.md @@ -1,14 +1,14 @@ --- -id: "prepare-comfyui" -title: "Prepare a ComfyUI Workflow" -slug: "/guides/prepare-comfyui" -sidebar_position: 5 +id: "comfyui" +title: "Custom ComfyUI Workflow" +slug: "/guides/solutions/comfyui" +sidebar_position: 4 --- import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; -This guide provides step-by-step instructions for preparing a **ComfyUI** workflow with custom nodes before uploading it. For security reasons, you cannot upload custom nodes directly to a deployed ComfyUI. +This guide provides step-by-step instructions for preparing a **ComfyUI** workflow with custom nodes to run on Super Protocol. For security reasons, you cannot upload custom nodes directly to a deployed ComfyUI. :::note @@ -28,7 +28,7 @@ You can prepare your model, workflow, and custom node files manually or using Do 1. Clone the [Super-Protocol/solutions](https://github.com/Super-Protocol/solutions/) GitHub repository to the location of your choosing: - ``` + ```shell git clone https://github.com/Super-Protocol/solutions.git --depth 1 ``` @@ -54,13 +54,13 @@ You can prepare your model, workflow, and custom node files manually or using Do Access the running container with the following command: - ``` + ```shell docker exec -it comfyui bash ``` Go to the `models` directory inside the container and download the model files to the corresponding subdirectories using the `wget` command. 
For example: - ``` + ```shell wget https://huggingface.co/prompthero/openjourney/resolve/main/mdjrny-v4.safetensors ``` @@ -68,7 +68,7 @@ You can prepare your model, workflow, and custom node files manually or using Do If you have the model on your computer, copy its files to the container using the following command: - ``` + ```shell docker cp comfyui: ``` @@ -77,7 +77,7 @@ You can prepare your model, workflow, and custom node files manually or using Do For example: - ``` + ```shell docker cp ~/Downloads/openjourney/mdjrny-v4.safetensors comfyui:/opt/ComfyUI/models/checkpoints/mdjrny-v4.safetensors ``` @@ -87,7 +87,7 @@ You can prepare your model, workflow, and custom node files manually or using Do 8. Unpack the archive using the following command: - ``` + ```shell tar -xvzf snapshot.tar.gz -C ``` @@ -159,6 +159,6 @@ You can prepare your model, workflow, and custom node files manually or using Do -## Contact Super Protocol +## Support -If you face any issues, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new). \ No newline at end of file +If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new). \ No newline at end of file diff --git a/docs/cli/Guides/deploy-app/deploy-app-example.md b/docs/cli/Guides/deploy-app/deploy-app-example.md new file mode 100644 index 00000000..edb21fd1 --- /dev/null +++ b/docs/cli/Guides/deploy-app/deploy-app-example.md @@ -0,0 +1,175 @@ +--- +id: "deploy-app-example" +title: "Example: Python script" +slug: "/guides/deploy-app/example" +sidebar_position: 1 +--- + +This guide serves as an example to the more general [deployment guide](/cli/guides/deploy-app) and shows how to deploy a Python script on Super Protocol without modifying its code. 
+ + The [simple script](/files/usd_to_crypto.py) used here as an example calculates how much Bitcoin (BTC) and Ether (ETH) can be bought for a given amount in US dollars: + +1. Reads the input amount from `input.txt` located in the same directory. +2. Fetches current prices of BTC and ETH using the CoinGecko API. +3. Calculates how much BTC and ETH can be bought for this amount of USD. +4. Creates `result.txt` in the same directory and writes the result to it. + +In this deployment, the script will be the solution, and `input.txt` will be the data. + +## Prerequisites + +- Docker +- [SPCTL](/cli) + +### 0. Prepare the files + +Create a local directory `usd_to_crypto`. Download the [example script](/files/usd_to_crypto.py) and rename it to `usd_to_crypto.py`. + +Create a new file `input.txt` to serve as the data input, and add a number, the USD amount, for example, `100000`. + +Copy SPCTL and its `config.json` into this directory. + +### 1. Prepare the solution + +Keep in mind that file locations inside a CVM will differ from a local run: + +- Data (`input.txt`) must be found in one of the `/sp/inputs/input-xxxx` directories. +- `result.txt` must be placed into `/sp/output` to be available to download once the execution is finished. + +1.1. 
Create a new file named `entrypoint.sh` and add the following code: + +```sh title="entrypoint.sh" +#!/bin/sh +set -eu + +# Fixed CVM paths (overridable if needed) +: "${INPUTS_DIR:=/sp/inputs}" +: "${OUTPUT_DIR:=/sp/output}" +: "${SCRIPT_PATH:=/usr/local/bin/usd_to_crypto.py}" + +mkdir -p "${OUTPUT_DIR}" +cd "${OUTPUT_DIR}" + +# Resolve input file +INPUT_FILE="$(find "${INPUTS_DIR}" -mindepth 2 -maxdepth 3 -type f -name 'input.txt' 2>/dev/null | sort | head -n 1 || true)" + +# Make the script's expected input file available in CWD (/sp/output) +rm -f input.txt || true +if [ -n "${INPUT_FILE}" ] && [ -f "${INPUT_FILE}" ]; then + cp -f "${INPUT_FILE}" input.txt +else + # If missing, create an empty file so the Python script emits a clean error + : > input.txt +fi + +# Run the Python script; it reads ./input.txt and writes ./result.txt here (/sp/output) +exec python3 "${SCRIPT_PATH}" +``` + +Create a new file named `Dockerfile` and add the following code: + +```dockerfile title="Dockerfile" +FROM ubuntu:22.04 + +# Non-interactive tzdata install +ENV DEBIAN_FRONTEND=noninteractive + +# System deps +RUN apt-get update && apt-get install -y \ + python3 \ + python3-pip \ + ca-certificates \ + curl \ + jq \ + openssl \ + tzdata \ + sed \ + grep \ + coreutils \ + && rm -rf /var/lib/apt/lists/* + +# Python deps +RUN pip3 install --no-cache-dir requests + +# Put the scripts where your environment expects executables +COPY usd_to_crypto.py /usr/local/bin/usd_to_crypto.py +RUN chmod +x /usr/local/bin/usd_to_crypto.py + +COPY entrypoint.sh /usr/local/bin/entrypoint.sh +RUN chmod +x /usr/local/bin/entrypoint.sh + +# Set /sp as workdir (doesn't matter in this case -- entrypoint.sh uses /sp/output as workdir) +WORKDIR /sp + +# Set entrypoint +ENTRYPOINT ["/usr/local/bin/entrypoint.sh"] +``` + +1.2. Build a Docker image: + +```shell +docker build -t usd_to_crypto . +``` + +1.3. 
Save and archive the image: + +```shell +docker save usd_to_crypto:latest | gzip > usd_to_crypto.tar.gz +``` + +1.4. Upload the archive: + +```shell +./spctl files upload usd_to_crypto.tar.gz \ + --filename usd_to_crypto.tar.gz \ + --output usd_to_crypto.resource.json +``` + +### 2. Prepare data + +2.1. Archive the file: + +```shell +tar -czvf input.tar.gz ./input.txt +``` + +2.2. Upload the archive: + +```shell +./spctl files upload ./input.tar.gz \ + --filename input.tar.gz \ + --output input.resource.json +``` + +### 3. Deploy + +Place an order: + +```shell +./spctl workflows create \ + --tee 7 \ + --solution ./usd_to_crypto.resource.json \ + --data ./input.resource.json +``` + +Find the order ID in the output, for example: + +``` +Workflow was created, TEE order id: ["275510"] +``` + +### 4. Download the result + +Replace `275510` with your order ID: + +```shell +./spctl orders download-result 275510 +``` + +If there is no result for your order yet, wait a couple of minutes and try again. + +Find `output/result.txt` inside the downloaded archive `result.tar.gz`. + +## Support + +If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new). \ No newline at end of file diff --git a/docs/cli/Guides/quick-guide.md b/docs/cli/Guides/deploy-app/index.md similarity index 61% rename from docs/cli/Guides/quick-guide.md rename to docs/cli/Guides/deploy-app/index.md index b10548e1..205961f8 100644 --- a/docs/cli/Guides/quick-guide.md +++ b/docs/cli/Guides/deploy-app/index.md @@ -1,11 +1,11 @@ --- -id: "quick-guide" -title: "Quick Deployment Guide" -slug: "/guides/quick-guide" -sidebar_position: 1 +id: "deploy-app" +title: "Deploy Your App" +slug: "/guides/deploy-app" +sidebar_position: 2 --- -This quick guide provides instructions on deploying a solution and data on Super Protocol. 
Its purpose is to introduce you to the logic and sequence of the CLI commands. +This quick guide provides instructions on deploying your own solution and data on Super Protocol. Its purpose is to introduce you to the logic and sequence of the CLI commands. ## Prerequisites @@ -18,13 +18,13 @@ This quick guide provides instructions on deploying a TEE: -| **Location** | **Purpose** | **Access** | -| :- | :- | :- | -| `/sp/inputs/input-0001`
`/sp/inputs/input-0002`
etc. | Possible data locations | Read-only | -| `/sp/output` | Output directory for results | Write; read own files | -| `/sp/certs` | Contains the order certificate | Read-only | +| **Location** | **Purpose** | **Access** | +| :- | :- | :- | +| `/sp/inputs/input-0001`
`/sp/inputs/input-0002`
... | Possible data locations | Read-only | +| `/sp/output` | Output directory for results | Read and write | +| `/sp/certs` | Contains the order certificate, private key, and workloadInfo | Read-only | -So, your solution must find the data in `/sp/inputs` and write the results to `/sp/output`. +When you provide multiple data inputs, they are placed in separate directories inside the CVM: the first in `/sp/inputs/input-0001`, the second in `/sp/inputs/input-0002`, and so on. Your solution must find the data in `/sp/inputs` and write the results to `/sp/output`. :::important @@ -32,7 +32,9 @@ Always use absolute paths, such as `/sp/...`. ::: -You can find several Dockerfile examples in the [Super-Protocol/solutions](https://github.com/Super-Protocol/solutions) GitHub repository. +Check the [example](/cli/guides/deploy-app/example) at the end of this guide. + +More Dockerfile examples can be found in the [Super-Protocol/solutions](https://github.com/Super-Protocol/solutions) GitHub repository. ### 1.2. Build a Docker image @@ -124,17 +126,11 @@ Place an order using the [`workflows create`](/cli/commands/workflows/create) co --data ./more-data.resource.json ``` -:::note - -When you provide multiple data inputs, they are placed in separate directories inside the CVM: the first in `/sp/inputs/input-0001`, the second in `/sp/inputs/input-0002`, and so on. - -::: - Find the order ID in the output. ## 4. 
Download the result -Wait a few minutes and [check the order status](/cli/commands/orders/get): +Wait a few minutes and check the order status: ```shell ./spctl orders get @@ -146,7 +142,7 @@ For example: ./spctl orders get 256587 ``` -If the status is `Done`, the order is ready, and you can [download the order result](/cli/commands/orders/download-result): +If the status is `Done` or `Error`, you can [download the order result](/cli/commands/orders/download-result): ```shell ./spctl orders download-result @@ -156,4 +152,8 @@ For example: ```shell ./spctl orders download-result 256587 -``` \ No newline at end of file +``` + +## Support + +If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new). \ No newline at end of file diff --git a/docs/cli/Guides/multi-party-collab.md b/docs/cli/Guides/multi-party-collab.md new file mode 100644 index 00000000..b9176b09 --- /dev/null +++ b/docs/cli/Guides/multi-party-collab.md @@ -0,0 +1,413 @@ +--- +id: "fine-tune" +title: "Multi-Party Collaboration" +slug: "/guides/multi-party-collab" +sidebar_position: 7 +--- + +Super Protocol enables independent parties to jointly compute over their private inputs without revealing those inputs to one another. + +This guide describes a scenario of a multi-party confidential collaboration on Super Protocol. It uses fine-tuning of a pre-trained AI model as an example, but the general principle presented here may be applied to other cases. + +The scenario involves three parties: + +- **Alice** owns the AI model. +- **Bob** owns the dataset. +- **Carol** provides the training engine and scripts. + +Neither Alice nor Bob is willing to share their intellectual property with other parties. At the same time, Carol must share her training engine and scripts with both parties so they can verify that the code is safe to run on their data. 
If Carol's training engine or scripts are proprietary and she cannot share them with Alice and Bob, a possible alternative is to involve independent security experts who can audit the code without exposing it publicly. + +The computation runs on Super Protocol within a Trusted Execution Environment that is isolated from all external access, including that of Alice, Bob, Carol, the hardware owner, and the Super Protocol team. Additionally, Super Protocol's Certification System provides verifiability, eliminating the need for trust. + +The following is just one example of confidential collaboration. Super Protocol's architecture enables a range of scenarios involving multiple parties and assets. + +## General workflow + +**Prepare the solution**: + +```mermaid +sequenceDiagram + actor Alice and Bob + actor Carol + participant Storage + + note over Alice and Bob,Storage: Prepare the solution + + Carol ->> Carol: 1. Build a solution + Carol ->> Storage: 2. Upload the solution + Carol ->> Alice and Bob: 3. Send solution.resource.json + Alice and Bob ->> Storage: 4. Download the solution + Alice and Bob ->> Alice and Bob: 5. Verify the solution +``` +
+ +Carol builds a solution—a Docker image containing her training engine and script ([1](/cli/guides/multi-party-collab#carol-1-build-a-solution)). She uploads the solution using SPCTL ([2](/cli/guides/multi-party-collab#carol-2-upload-the-solution)) and grants Alice and Bob access for verification ([3](/cli/guides/multi-party-collab#carol-3-send-the-solution-to-alice-and-bob)). + +Alice and Bob download the solution ([4](/cli/guides/multi-party-collab#alice-and-bob-4-download-the-solution)) and verify that it is safe to process their data ([5](/cli/guides/multi-party-collab#alice-and-bob-5-verify-the-solution)). + +If Carol cannot share the code with Alice and Bob, and a third-party auditor is involved, the workflow will differ slightly. After uploading, Carol must also create a solution offer on Super Protocol Marketplace (similar to Bob's [Step 8](/cli/guides/multi-party-collab#bob-8-create-an-offer)). + +**Prepare the data**: + +```mermaid +sequenceDiagram + actor Alice + actor Bob + participant Storage + participant Super Protocol + + note over Alice,Super Protocol: Prepare the data + + Alice ->> Storage: 6. Upload the model + Bob ->> Storage: 7. Upload the dataset + Bob ->> Super Protocol: 8. Create an offer +``` +
+ +Alice uploads her model ([6](/cli/guides/multi-party-collab#alice-6-upload-the-model)) and Bob uploads his dataset ([7](/cli/guides/multi-party-collab#bob-7-upload-the-dataset)) to remote storage using SPCTL. Files are automatically encrypted during upload, and only the uploader holds the key. + +Bob creates an offer on the Marketplace ([8](/cli/guides/multi-party-collab#bob-8-create-an-offer)). The offer requires Bob's manual approval for use. He shares the offer's IDs with Alice. + +**Execute**: + +```mermaid +sequenceDiagram + actor Alice + actor Bob + participant Storage + participant Super Protocol / TEE + participant Blockchain + + note over Alice,Blockchain: Execute + + Alice ->>+ Super Protocol / TEE: 9. Place an order + Bob ->> Super Protocol / TEE: 10. Approve the usage of the dataset + Super Protocol / TEE ->> Storage: Download the solution, model, and dataset + Super Protocol / TEE ->> Blockchain: Publish the order report + Super Protocol / TEE ->> Super Protocol / TEE: Process the order + Super Protocol / TEE ->>- Storage: Upload the order results + Alice ->> Storage: 11. Download the order results + Alice ->> Blockchain: 12. Get the order report + Bob ->> Blockchain: 12. Get the order report +``` +
+ +Alice places an order on Super Protocol ([9](/cli/guides/multi-party-collab#alice-9-place-an-order)), adding the solution, her model, and Bob's offer. The order does not proceed automatically and remains `Blocked`. + +Bob manually approves the usage of his dataset for the image with a specific hash ([10](/cli/guides/multi-party-collab#bob-10-complete-the-data-suborder)). If this hash matches the actual solution hash, the CVM begins to process the order. If the hashes do not match, the order will be terminated with an error. + +When the order is complete, Alice downloads the result ([11](/cli/guides/multi-party-collab#alice-11-download-the-order-results)). All the data within the TEE (solution, AI model, dataset, order results, etc.) is automatically deleted. + +Both Alice and Bob can retrieve the order report ([12](/cli/guides/multi-party-collab#alice-and-bob-12-get-the-order-report)) that confirms the authenticity of the entire trusted setup. + +## Prerequisites + +**Alice**: + +- [SPCTL](/cli) + +**Bob**: + +- [SPCTL](/cli) +- [Provider Tools](/cli/guides/provider-tools) + +**Carol**: + +- [SPCTL](/cli) +- Docker + +## Prepare the solution + +### Carol: 1. Build a solution + +1.1. Write a Dockerfile that creates an image with the training engine. + +Keep in mind the special file structure inside the TEE: + +| **Location** | **Purpose** | **Access** | +| :- | :- | :- | +| `/sp/inputs/input-0001`
`/sp/inputs/input-0002`
etc. | Possible data locations
(AI model, dataset, training scripts, etc.) | Read-only | +| `/sp/output` | Output directory for results | Read and write | +| `/sp/certs` | Contains the order certificate, private key, and workloadInfo | Read-only | + +Your solution must find the data in `/sp/inputs` and write the results to `/sp/output`. + +:::important + +Always use absolute paths, such as `/sp/...`. + +::: + +You may either include the training scripts in the image or upload them separately using SPCTL. In this case, Alice will need to attach the uploaded scripts to the order as `--data` at [Step 9](/cli/guides/multi-party-collab#alice-9-place-an-order). + +You can find several Dockerfile examples in the [Super-Protocol/solutions](https://github.com/Super-Protocol/solutions) GitHub repository. + +1.2. Build an image: + +```shell +docker build --platform linux/amd64 -t . +``` + +Replace `` with the name of your solution. + +1.3. Save and archive the image: + +```shell +docker save :latest | gzip > .tar.gz +``` + +### Carol: 2. Upload the solution + +```shell +./spctl files upload .tar.gz \ + --output solution.resource.json \ + --filename .tar.gz \ + --use-addon +``` + +If you did not include training scripts in the image, upload them separately: + +```shell +./spctl files upload \ + --output scripts.resource.json \ + --use-addon +``` + +Replace `` with the path to the directory containing your training scripts. + +:::important + +The output resource files contain information needed to access and decrypt the uploaded files. Be careful with sharing resource files if the uploaded content is sensitive. + +::: + +### Carol: 3. Send the solution to Alice and Bob + +Send the output resource files from the previous step to Alice and Bob (or independent auditors). + +### Alice and Bob: 4. Download the solution + +```shell +./spctl files download solution.resource.json . --use-addon +``` + +### Alice and Bob: 5. Verify the solution + +Review the image to ensure it is safe to process your data. 
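One way to start the review without a Docker daemon: a `docker save` archive is an ordinary gzipped tar, so its manifest and layers can be listed and unpacked with `tar` alone. The archive below is a tiny mock built on the spot so the snippet is self-contained; point `tar` at the real solution archive instead:

```shell
WORK="$(mktemp -d)"
# Mock of a `docker save` archive; real ones also contain layer tarballs.
echo '[{"Config":"config.json","Layers":["layer.tar"]}]' > "$WORK/manifest.json"
tar -czf "$WORK/solution.tar.gz" -C "$WORK" manifest.json

# List the contents; manifest.json names the config and layer files to audit.
tar -tzf "$WORK/solution.tar.gz"
```

From the manifest you can then extract individual layers with `tar -xzf` and inspect their filesystems file by file.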
+ +## Prepare the data + +### Alice: 6. Upload the model + +```shell +./spctl files upload \ + --output model.resource.json \ + --use-addon +``` + +Replace `` with the path to the model directory. + +### Bob: 7. Upload the dataset + +```shell +./spctl files upload \ + --output dataset.resource.json \ + --use-addon +``` + +Replace `` with the path to the dataset directory. + +### Bob: 8. Create an offer + +8.1. In the Provider Tools directory, create a file named `offer-info.json`. Paste the following: + +```json title="offer-info.json" +{ + "name":"Offer name", + "group":"0", + "offerType":"3", + "cancelable":false, + "description":"Offer description", + "restrictions":{ + "offers":[ + ], + "types":[ + ] + }, + "input":"", + "output":"", + "allowedArgs":"", + "allowedAccounts":[ + ], + "argsPublicKey":"", + "resultResource":"", + "subType":"0", + "version":{ + "version":1, + "status":"0", + "info":{ + "metadata":{ + "groupingOffers":true + } + } + } +} +``` + +Modify the offer name and description; leave the rest intact. Save and close the file. + +8.2. In the same directory, create a file named `slot-info.json`. Paste the following: + +```json title="slot-info.json" +{ + "info": { "cpuCores": 0, "gpuCores": 0, "diskUsage": 10485760, "ram": 0, "vram": 0 }, + "usage": { + "maxTimeMinutes": 0, + "minTimeMinutes": 15000, + "price": "0", + "priceType": "1" + }, + "option": { "bandwidth": 0, "externalPort": 0, "traffic": 0 } +} +``` + +Adjust the value set to `diskUsage` so that it is larger than the size of your dataset in bytes. Save and close the file. + +8.3. Register an offer: + +```shell +./provider-tools register data --result +``` + +Replace `` with the path to the `dataset.resource.json` file. + +:::note + +If you are registering an offer for the first time, you will be prompted to complete the provider setup. Enter a provider name and then a brief description. Save the provider information to a file when prompted. 
+ +::: + +Follow the dialog: + +Q: `Have you already created a DATA offer?`
+A: `n` (No) + +Q: `Please specify a path to the offer info json file`
+A: `./offer-info.json` + +Q: `Please specify a path to the slot info json file`
+A: `./slot-info.json` + +Q: `Do you want to add another slot?`
+A: `n` (No) + +Wait for the offer to be created and find a line in the output with the IDs of the offer and slot, for example: + +```text +Slot 119654 for offer 18291 has been created successfully +``` + +Provide Alice with these IDs. Ignore other instructions you see in the output. + +## Execute + +### Alice: 9. Place an order + +9.1. Place an order with the solution, your model, and Bob's data offer. If the training scripts were uploaded separately, add them as data: + +```shell +./spctl workflows create \ + --solution ./solution.resource.json \ + --data ./model.resource.json \ + --data , \ + --tee \ + [--data ./scripts.resource.json] +``` + +Replace: + +- `` with the offer ID provided by Bob +- `` with the slot ID provided by Bob +- `` with the desired compute offer ID. + +Find the order ID in the output, for example: + +```shell +Workflow was created, TEE order id: ["260402"] +``` + +9.2. Get the suborder ID: + +```shell +./spctl orders get --suborders --suborders_fields id,type,status +``` + +Replace `` with the order ID. + +In the output, find the ID of a `Data` suborder with the `New` status. Provide Bob with the order ID and this suborder ID. + +### Bob: 10. Complete the data suborder + +Manually complete the data suborder: + +```shell +./spctl orders complete \ + --result ./dataset.resource.json \ + --status done \ + --solution-hash +``` + +Replace: + +- `` with the data suborder ID +- `` with the hash from the `solution.resource.json` file. If the solution was modified, its hash will not match the hash you enter. In this case, the suborder will not be completed, and the order will not proceed. + +### Alice: 11. Download the order results + +11.1. Check the order status: + +```shell +./spctl orders get +``` + +Replace `` with the order ID. + +11.2. 
When the status is `Done` or `Error`, download the result: + +```shell +./spctl orders download-result +``` + +If the order ended up with an error, the results will contain execution logs that may be useful for troubleshooting. + +### Alice and Bob: 12. Get the order report + +You can get the order report as soon as the CVM downloads the order components and starts the execution, without waiting for the order to complete: + +```shell +./spctl orders get-report --save-to report.json +``` + +The report contains the full certificate chain, from the Root CA to the order certificate, and workload metadata. + +Ensure you see `Order report validation successful!` in the output. + +Additionally, find entries in the `runtimeInfo` array that start with `"type": "Image"` and `"type": "Data"`. For example: + +```json +{ + "type": "Data", + "size": 12901, + "hash": { + "algo": "sha256", + "hash": "8598805cd2136a4beff17559a7228854f6a8cc0b027856ea5c196fb8d0602501", + "encoding": "hex" + } +}, +``` + +These hashes are of the actual solution and data that were executed within a TEE. Compare them with the solution and dataset hashes from the respective resource files. + +## Support + +If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new). \ No newline at end of file diff --git a/docs/cli/Guides/provider-tools.md b/docs/cli/Guides/provider-tools.md new file mode 100644 index 00000000..8c6c7f79 --- /dev/null +++ b/docs/cli/Guides/provider-tools.md @@ -0,0 +1,147 @@ +--- +id: "provider-tools" +title: "Provider Tools" +slug: "/guides/provider-tools" +sidebar_position: 1 +--- + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +Provider Tools is a Super Protocol CLI utility for registering providers and creating offers. 
+ +## Download + + + +Create a separate directory, open a terminal there, and run the following command: +``` +curl -L https://github.com/Super-Protocol/provider-tools/releases/latest/download/provider-tools-linux-x64 -o provider-tools +chmod +x ./provider-tools +``` + + +Create a separate directory, open Terminal there, and run the following command: +``` +curl -L https://github.com/Super-Protocol/provider-tools/releases/latest/download/provider-tools-macos-x64 -o provider-tools +chmod +x ./provider-tools +``` + + +Install and launch [WSL](https://learn.microsoft.com/en-us/windows/wsl). Create a separate directory, and install Provider Tools for Linux: +``` +curl -L https://github.com/Super-Protocol/provider-tools/releases/latest/download/provider-tools-linux-x64 -o provider-tools +chmod +x ./provider-tools +``` + + + +## Set up + +```shell +./provider-tools setup +``` + +Enter the Access token: + +```text +eyJhbGciOiJFUzI1NiJ9.eyJhZGRyZXNzIjoiMHhBN0E5NjQ4ZGE2QTg5QjBhNzFhNGMwRDQ2Y2FENDAwMDU3ODI3NGEyIiwiaWF0IjoxNjc5OTk4OTQyLCJleHAiOjE3NDMxMTQxNDJ9.x2lx90D733mToYYdOWhh4hhXn3YowFW4JxFjDFtI7helgp2uqekDHFgekT5yjbBWeHTzRap7SHbDC3VvMIDe0g +``` + +Follow the dialog: + +Q: `Do you need to generate a new authority account?`
+A: `y` (Yes) + +Q: `Do you need to generate a new action account?`
+A: `y` (Yes) + +Q: `Do you need to generate a new tokenReceiver account?`
A: `y` (Yes)

## Provider's SPCTL

Providers need another copy of SPCTL configured to manage their offers.

If you registered a provider using Provider Tools, you should have a configuration file created automatically in the Provider Tools directory. Its name should be similar to `spctl-config-0xB9f0b77BDbAe9fBe3E60BdC567E453f503605BAb.json`, where `0xB9f0b77BDbAe9fBe3E60BdC567E453f503605BAb` is your Authority Account wallet address.

Copy or download the SPCTL binary to the Provider Tools directory. Then rename the configuration file to `config.json` so that SPCTL can recognize it as its configuration file.

Alternatively, add the `--config` option to SPCTL commands to use the same SPCTL binary with another account. For example:

```shell
./spctl orders list --my-account --type tee --config ../provider-tools/spctl-config-0xB9f0b77BDbAe9fBe3E60BdC567E453f503605BAb.json
```

As with your User Account's configuration file, you can manually create the provider's SPCTL configuration file.

1. In the Provider Tools directory, create a file named `config.json`. Use the following template:

```json title="config.json"
{
  "backend": {
    "url": "https://bff.superprotocol.com/graphql",
    "accessToken": "eyJhbGciOiJFUzI1NiJ9.eyJhZGRyZXNzIjoiMHhBN0E5NjQ4ZGE2QTg5QjBhNzFhNGMwRDQ2Y2FENDAwMDU3ODI3NGEyIiwiaWF0IjoxNjc5OTk4OTQyLCJleHAiOjE3NDMxMTQxNDJ9.x2lx90D733mToYYdOWhh4hhXn3YowFW4JxFjDFtI7helgp2uqekDHFgekT5yjbBWeHTzRap7SHbDC3VvMIDe0g"
  },
  "blockchain": {
    "rpcUrl": "https://opbnb.superprotocol.com",
    "smartContractAddress": "0x3C69ea105Fc716C1Dcb41859281Aa817D0A0B279",
    "accountPrivateKey": "",
    "authorityAccountPrivateKey": ""
  },
  "storage": {
    "type": "STORJ",
    "bucket": "",
    "prefix": "",
    "writeAccessToken": "",
    "readAccessToken": ""
  },
  "workflow": {
    "resultEncryption": {
      "algo": "ECIES",
      "key": "",
      "encoding": "base64"
    }
  }
}
```

2. 
Keep the preconfigured values intact and provide values for the following keys:

| **Key** | **Description** |
| :- | :- |
| `"accountPrivateKey"` | Action Account private key. |
| `"authorityAccountPrivateKey"` | Authority Account private key. |
| `"bucket"` | (optional) Name of a Storj bucket. |
| `"prefix"` | (optional) Path to a directory inside the bucket. It can be empty. |
| `"writeAccessToken"` | (optional) Storj access grant with **Full** permission (**Read**, **List**, **Write**, **Delete**) for this bucket. |
| `"readAccessToken"` | (optional) Storj access grant with **Read** permission for this bucket. |

You can find the section with your Authority and Action Accounts' private keys in `provider-tools-config.json` in the Provider Tools directory. For example:

```json title="provider-tools-config.json"
"account": {
  "authority": "0x50612a8bf52cb263825e58c72361ea58c04efa7af7e5b549ea9c2ed02059c668d",
  "action": "0x0512ad96fzc01900d3ecf0987m81c7bc1fd2daf455ebb49kjce5b410c7dc6f05",
  "tokenReceiver": "0x167d93786ghbf00d19b7d58065a5a59276e55ca1e621e47330f2b64d9fcb6a38"
},
```

Save and close the file.

3. Generate a key for order result encryption using the [`workflows generate-key`](/cli/commands/workflows/generate-key) command. Open `config.json` again and set the generated key to `workflow.resultEncryption.key`. Save and close the file.

### Set up Storj access (optional)

If you already [set up Storj access](/cli/#set-up-storj-access-optional) for your regular SPCTL, you may reuse the same credentials for your provider's SPCTL.

If you skip this step, Super Protocol will automatically provide you with 20 GB of storage.

## Support

If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new).
\ No newline at end of file diff --git a/docs/cli/Guides/swarm-vllm.md b/docs/cli/Guides/swarm-vllm.md new file mode 100644 index 00000000..c9022890 --- /dev/null +++ b/docs/cli/Guides/swarm-vllm.md @@ -0,0 +1,181 @@ +--- +id: "swarm-vllm" +title: "vLLM on Super Swarm" +slug: "/guides/swarm-vllm" +sidebar_position: 20 +--- + +This guide provides step-by-step instructions for deploying MedGemma and Apertus on Super Swarm using vLLM. + +## Prerequisites + +- [kubectl](https://kubernetes.io/docs/tasks/tools/) +- [helm](https://helm.sh/docs/intro/install/) +- A domain +- For [MedGemma](https://huggingface.co/google/medgemma-1.5-4b-it): an HF_TOKEN from an account that has already accepted the model's terms + +Also, download and rename deployment scripts: + +- [`deploy_medgemma_official.sh`](/files/deploy_medgemma_official.sh) +- [`deploy_apertus_official.sh`](/files/deploy_apertus_official.sh) + +## 1. Sign in using MetaMask + + +
+ +## 2. Create a Kubernetes cluster + +2.1. Go to **Kubernetes** and press **Create Cluster**: + + +
+
+ +2.2. Add a GPU to the cluster, allocate resources, and press **Create Cluster**: + + +
+ +## 3. Download the cluster configuration file + + +
+ +## 4. Point `kubectl` to the configuration file + +Execute the following command: + +```shell +export KUBECONFIG=-kubeconfig.yaml +``` + +Replace `-kubeconfig.yaml` with the name of the downloaded configuration file. + +## 5. Update the scripts + +In both scripts (`deploy_medgemma_official.sh` and `deploy_apertus_official.sh`), find `BASE_DOMAIN="${BASE_DOMAIN:-monai-swarm.win}"` and replace `monai-swarm.win` with your domain. + +## 6. Create an API key + +Execute the following command and type a desired key: + +```shell +read -rs API_KEY && export API_KEY +``` + +## 7. Deploy the model + +Apertus: + +```shell +bash deploy_apertus_official.sh +``` + +The deployment usually takes 5-7 minutes. + +A working Apertus config is already set in the script: + +``` +dtype=bfloat16 +max-model-len=32768 +gpu-memory-utilization=0.55 +max-num-seqs=8 +max-num-batched-tokens=4096 +``` + +MedGemma: + +```shell +export HF_TOKEN=hf_xxx +bash deploy_medgemma_official.sh +``` + +Replace `hf_xxx` with an HF_TOKEN. + +Alternatively, create a `.hf_token` file next to `deploy_medgemma_official.sh`; the script will read it automatically. + +A working MedGemma config is already set in the script: + +``` +dtype=bfloat16 +max-model-len=8192 +gpu-memory-utilization=0.40 +--mm-processor-cache-gb 1 +max-num-seqs=4 +max-num-batched-tokens=2048 +``` + +## 8. Check Kubernetes + +```shell +kubectl get pods -o wide +kubectl get svc +kubectl get ingress +``` + +Expected output: + +- Two pods in `1/1 Running` +- Two services +- Two ingresses + +## 9. Confirm DNS records + +Back in the Super Swarm dashboard, go to **Ingresses** and note the two hostnames listed there. + + +
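If you prefer the terminal to the dashboard, the same hostnames can be pulled out of the `kubectl get ingress` output from Step 8. A sketch, where the sample output lines are assumptions (your ingress names and hosts will differ):

```shell
# Sample `kubectl get ingress` output; the names and hosts below are
# placeholders standing in for your real output.
INGRESS_OUTPUT='NAME            CLASS   HOSTS                       ADDRESS   PORTS
apertus-vllm    nginx   apertus-vllm.example.com              80, 443
medgemma-vllm   nginx   medgemma-vllm.example.com             80, 443'

# Skip the header row and print only the HOSTS column.
printf '%s\n' "$INGRESS_OUTPUT" | awk 'NR > 1 { print $3 }'
```

Each printed hostname is what your CNAME record should point to.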
+
+ +For each hostname, add a CNAME record pointing to it and a TXT record for domain verification at your DNS provider. + +## 10. Publish the cluster + +In the Super Swarm dashboard, go to **Kubernetes** and publish the cluster. + + +
+ +## 11. Send test requests + +In the test requests below, replace: + +- `` with your domain. +- `` with the API key you set in [Step 6](/cli/guides/swarm-vllm#6-create-an-api-key). + +Apertus: + +```shell +curl https://apertus-vllm./v1/completions \ + -H 'Authorization: Bearer ' \ + -H 'Content-Type: application/json' \ + -d '{ + "model": "swiss-ai/Apertus-8B-2509", + "prompt": "Write a concise technical summary of Kubernetes GPU scheduling.", + "temperature": 0, + "max_tokens": 200 + }' +``` + +MedGemma: + +```shell +curl https://medgemma-vllm./v1/chat/completions \ + -H 'Authorization: Bearer ' \ + -H 'Content-Type: application/json' \ + -d '{ + "model": "google/medgemma-1.5-4b-it", + "messages": [ + { + "role": "user", + "content": [ + {"type": "text", "text": "Describe this image briefly."}, + {"type": "image_url", "image_url": {"url": "data:image/png;base64,PASTE_BASE64_HERE"}} + ] + } + ], + "temperature": 0, + "max_tokens": 120 + }' +``` \ No newline at end of file diff --git a/docs/cli/Guides/tgwui.md b/docs/cli/Guides/tgwui.md new file mode 100644 index 00000000..66ac340a --- /dev/null +++ b/docs/cli/Guides/tgwui.md @@ -0,0 +1,238 @@ +--- +id: "tgwui" +title: "TGWUI and ComfyUI With Tunnels" +slug: "/guides/solutions/tgwui" +sidebar_position: 3 +--- + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +This guide provides step-by-step instructions for uploading and deploying an AI model on Super Protocol using Text Generation Web UI and ComfyUI, both already available in the Marketplace. However, the general workflow described here can be applied to any solution, whether new or existing. + +## Prerequisites + +- [SPCTL](/cli/) +- BNB and SPPI tokens (opBNB) to pay for transactions and orders + +## 1. Prepare + +Ensure your model meets the Super Protocol requirements: + +1.1. Your model must belong to a category supported by one of the engines. 
**Text Generation Web UI**:

- Text Generation
- Text Classification
- Translation
- Text2Text Generation

**ComfyUI**:

- Image Classification
- Object Detection
- Image Segmentation
- Text-to-Image
- Image-to-Text
- Image-to-Image
- Image-to-Video
- Video Classification
- Text-to-Video
- Mask Generation

If you plan to deploy a ComfyUI workflow with custom nodes, [prepare the files](/cli/guides/solutions/comfyui) before proceeding to the next step. For security reasons, you cannot upload custom nodes directly to a deployed ComfyUI.

1.2. Due to [testnet limitations](/marketplace/limitations), the total size of model files should not exceed 13 GB. Support for bigger models will be available in the future.

## 2. Upload the model

Upload the model:

```shell
./spctl files upload \
    --output model.resource.json \
    --use-addon
```

Replace `` with the path to the model directory, for example:

```shell
./spctl files upload ~/Downloads/models/SmolLM2-1.7B \
    --output model.resource.json \
    --use-addon
```

## 3. Deploy tunnels

3.1. Place an order to deploy a [confidential tunnel](/fundamentals/tunnels):

```shell
./spctl workflows create --tee 7 --solution 19
```

3.2. Wait for the order to be created, and find the tunnel order ID in the output, for example:

```text
Workflow was created, TEE order id: ["273899"]
```

3.3. Check the order status:

```shell
./spctl orders get
```

Replace `` with the tunnel order ID from the previous step.

3.4. When the status is `Done`, download the result:

```shell
./spctl orders download-result
```

3.5. Extract the downloaded `result.tar.gz`, open `output/result.json`, and find the domain. For example:

```json title="result.json"
"domain":"pret-tons-wade.superprotocol.io"
```

Your model's web UI will be available at this URL.

## 4. Prepare engine configuration files

4.1. 
Open the SPCTL's `config.json` and find the `workflow.resultEncryption.key` property that contains the key used for decrypting workflow results; for example: `NapSrwQRz2tL9ZftJbi6DATpCDn0BRImpSStU9xZT/s=`. + +4.2. Create configuration files: + + + + Create a file named `engine-configuration-tgwui.json` and paste the following: + + ```json title="engine-configuration-tgwui.json" + { + "engine": { + "main_settings": { + "character": { + "name": "Superprotocol AI", + "context": "The following is a conversation with an AI Large Language Model. The AI has been trained to answer questions, provide recommendations, and help with decision making. The AI follows user requests. The AI thinks outside the box.", + "greeting": "How can I help you today?" + }, + "api": {}, + "mode": {} + }, + "model": { + "parameters": { + "temperature": 1, + "top_p": 1, + "top_k": 0, + "typical_p": 1 + }, + "parameters2": { + "min_p": 0.05, + "repetition_penalty": 1, + "frequency_penalty": 0, + "presence_penalty": 0 + } + }, + "model_loader": { + "loader_name": "Autodetect" + } + }, + "tunnels": { + "domain_settings": { + "provision_type": "Temporary Domain (on *.superprotocol.io)", + "tunnel_provisioner_order": { + "order_id": "", + "order_key": "" + } + } + } + } + ``` + + + Create a file named `engine-configuration-comfyui.json` and paste the following: + + ```json title="engine-configuration-comfyui.json" + { + "engine": { + "main_settings": { + "preview_method": "none", + "preview_size": 512 + } + }, + "tunnels": { + "domain_settings": { + "provision_type": "Temporary Domain (on *.superprotocol.io)", + "tunnel_provisioner_order": { + "order_id": "", + "order_key": "" + } + } + } + } + ``` + + + +In `tunnels.domain_settings.tunnel_provisioner_order`, set: + +- `order_id` to your tunnel order ID from Step 3.2 +- `order_key` to your encryption key from Step 4.1 + +Save and close the file. + +## 5. Deploy the model + +5.1. 
Create the main order to deploy your uploaded model: + + + + + ```shell + ./spctl workflows create --tee --solution 25 --solution-configuration ./engine-configuration-tgwui.json --data ./model.resource.json + ``` + + Replace `` with a selected compute offer. See available compute offer IDs on the [Marketplace](https://marketplace.superprotocol.com/). + + Note that `--solution 25` refers to [Text Generation Web UI with GPU support](https://marketplace.superprotocol.com/marketplace/models?offer=offerId%3D25). If you need the CPU version, use `--solution 26` instead. + + + + ```shell + ./spctl workflows create --tee --solution 27 --solution-configuration ./engine-configuration-comfyui.json --data ./model.resource.json + ``` + + Replace `` with a selected compute offer. See available compute offer IDs on the [Marketplace](https://marketplace.superprotocol.com/). + + Note that `--solution 27` refers to [ComfyUI UI with GPU support](https://marketplace.superprotocol.com/marketplace/models?offer=offerId%3D27). If you need the CPU version, use `--solution 28` instead. + + + +5.2. Wait for the order to be created, and find the main order ID in the output, for example: + +```text +Workflow was created, TEE order id: ["273900"] +``` + +5.3. Deployment may take 15-20 minutes or more, depending on the model size and other parameters. Check the domain from Step 3.5 every few minutes until the UI is available. + +If you suspect something went wrong, check the order status: + +```shell +./spctl orders get +``` + +Replace `` with the main order ID from the previous step. + +The most important statuses (see the [full list](/fundamentals/orders#compute-order)): + +- **Processing**: The compute is executing the order inside a TEE. Your model is either already available or will be available soon. +- **In Queue**: The order is waiting for the compute to become available. This status appears only if the compute is overloaded with orders. 
If this status persists for a few minutes, place a new main order with the same tunnel order and engine configuration but a different compute offer.
- **Done**: The order is completed successfully and the model's UI is no longer available.
- **Error**: The order completed with an error. [Download the order results](/cli/commands/orders/download-result) to get more information about the error.

## Support

If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new).
\ No newline at end of file
diff --git a/docs/cli/Guides/unsloth.md b/docs/cli/Guides/unsloth.md
new file mode 100644
index 00000000..70ae7f67
--- /dev/null
+++ b/docs/cli/Guides/unsloth.md
@@ -0,0 +1,189 @@
---
id: "unsloth"
title: "Fine-Tuning With Unsloth"
slug: "/guides/solutions/unsloth"
sidebar_position: 5
---

This guide provides step-by-step instructions for fine-tuning an AI model using the Super Protocol packaging of [Unsloth](https://unsloth.ai/), an open-source framework for LLM fine-tuning and reinforcement learning.

The solution allows you to run fine-tuning within Super Protocol's Trusted Execution Environment (TEE). This provides enhanced security and privacy and enables a range of [confidential collaboration](/cli/guides/multi-party-collab) scenarios.

## Prerequisites

- [SPCTL](/cli/)
- Git
- BNB and SPPI tokens (opBNB) to pay for transactions and orders

## Repository

Clone the repository with Super Protocol solutions:

```shell
git clone https://github.com/Super-Protocol/solutions.git
```

The Unsloth solution includes a Dockerfile and a helper script `run-unsloth.sh` that facilitates workflow creation. Note that `run-unsloth.sh` does not build an image and instead uses a pre-existing solution offer.
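When `run-unsloth.sh` finishes, it prints the ID of the created order (see the sample output in step 2.8 below). A sketch of capturing that ID from a saved log so it can be reused in later `spctl orders` calls; the log text here is a placeholder:

```shell
# Placeholder for the helper's final output; a real run prints the same lines.
LOG='Unsloth order id: 259126
Done.'

# Extract the numeric order ID for reuse, e.g. in `spctl orders get`.
ORDER_ID=$(printf '%s\n' "$LOG" | sed -n 's/^Unsloth order id: \([0-9]*\)$/\1/p')
echo "$ORDER_ID"   # prints 259126
```

With the ID captured, `./spctl orders get "$ORDER_ID"` checks the order status as described in step 3.1.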
## run-unsloth.sh

Copy SPCTL's binary and its `config.json` to the `unsloth/scripts` directory inside the cloned Super-Protocol/solutions repository.

### 1. Prepare training scripts

When preparing your training scripts, keep in mind the special file structure within the TEE:

| **Location** | **Purpose** | **Access** |
| :- | :- | :- |
| `/sp/inputs/input-0001`<br/>`/sp/inputs/input-0002`<br/>etc. | Possible data locations<br/>(AI model, dataset, training scripts, etc.) | Read-only |
| `/sp/output` | Output directory for results | Read and write |
| `/sp/certs` | Contains the order certificate, private key, and `workloadInfo` | Read-only |

Your scripts must find the data in `/sp/inputs` and write the results to `/sp/output`.

### 2. Place an order

2.1. Initiate a dialog to construct and place an order:

```shell
./run-unsloth.sh
```

2.2. `Enter TEE offer id (number)`: Enter a compute offer ID. This determines the available compute resources and cost of your order. You can find the full list of available compute offers on the [Marketplace](https://marketplace.superprotocol.com/).

2.3. `Choose run mode`: `1) file`.

2.4. `Select the model option`:

- `1) Medgemma 27b (offer 15900)`: Select this option if you need an untuned MedGemma 27B.
- `2) your model`: Select this option to use another model. Further, when prompted about `Model input`, enter one of the following:
  - a path to the model's resource JSON file, if it was already uploaded with SPCTL
  - model offer ID, if the model exists on the Marketplace
  - a path to the local directory with the model to upload it using SPCTL.
- `3) no model`: No model will be used.

2.5. `Enter path to a .py/.ipynb file OR a directory`: Enter the path to your training script (file or directory). For a directory, select the file to run (entrypoint) when prompted. Note that you cannot reuse resource files in this step; scripts should be uploaded every time.

2.6. `Provide your dataset as a resource JSON path, numeric offer id, or folder path`: As with the model, enter one of the following:

- a path to the dataset's resource JSON file, if it was already uploaded with SPCTL
- dataset offer ID, if the dataset exists on the Marketplace
- a path to the local directory with the dataset to upload it using SPCTL.

2.7. 
`Upload SPCTL config file as a resource?`: Answer `N` unless you need to use SPCTL from within the TEE during the order execution. In this case, your script should run a `curl` command to download SPCTL and find the uploaded `config.json` in the `/sp/inputs/` subdirectories. + +2.8. Wait for the order to be created and find the order ID in the output, for example: + +```shell +Unsloth order id: 259126 +Done. +``` + +### 3. Check the order result + +3.1. The order will take some time to complete. Check the order status: + +```shell +./spctl orders get +``` + +Replace `` with your order ID. + +If you lost the order ID, check all your orders to find it: + +```shell +./spctl orders list --my-account --type tee +``` + +3.2. When the order status is `Done` or `Error`, download the result: + +```shell +./spctl orders download-result +``` + +The downloaded TAR.GZ archive contains the results in the `output` directory and execution logs. + +## Dry run + +```shell +./run-unsloth.sh --suggest-only +``` + +The option `--suggest-only` allows you to perform a dry run without actually uploading files and creating orders. + +Complete the dialog, as usual; only use absolute paths. + +In the output, you will see a prepared command for running the script non-interactively, allowing you to easily modify the variables and avoid re-entering the dialog. For example: + +```shell +RUN_MODE=file \ +RUN_DIR=/home/user/Downloads/yma-run \ +RUN_FILE=sft_example.py \ +DATA_RESOURCE=/home/user/unsloth/scripts/yma_data_example-data.json \ +MODEL_RESOURCE=/home/user/unsloth/scripts/medgemma-27b-ft-merged.resource.json \ +/home/user/unsloth/scripts/run-unsloth.sh \ +--tee 8 \ +--config ./config.json +``` + +## Jupyter Notebook + +You can launch and use Jupyter Notebook instead of uploading training scripts directly. + +Initiate a dialog: + +```shell +./run-unsloth.sh +``` + +When prompted: + +1. `Enter TEE offer id`: Enter a compute offer ID. 
This determines the available compute resources and cost of your order. You can find the full list of available compute offers on the [Marketplace](https://marketplace.superprotocol.com/). + +2. `Choose run mode`: `2) jupyter-server`. + +3. `Select the model option`: + +- `1) Medgemma 27b (offer 15900)`: Select this option if you need an untuned MedGemma 27B. +- `2) your model`: Select this option to use another model. Further, when prompted about `Model input`, enter one of the following: + - a path to the model's resource JSON file, if it was already uploaded with SPCTL + - model offer ID, if the model exists on the Marketplace + - a path to the local directory with the model to upload it using SPCTL. +- `3) no model`: No model will be used. + +4. `Enter Jupyter password` or press Enter to proceed without a password. + +5. `Select domain option`: + +- `1) Temporary Domain (*.superprotocol.io)` is suitable for testing and quick deployments. +- `2) Own domain` will require you to provide a domain name, TLS certificate, private key, and a tunnel server auth token. + +Wait for the Tunnels Launcher order to be created. + +6. `Provide your dataset as a resource JSON path, numeric offer id, or folder path`: As with the model, enter one of the following: +- a path to the dataset's resource JSON file, if it was already uploaded with SPCTL +- dataset offer ID, if the dataset exists on the Marketplace +- a path to the local directory with the dataset to upload it using SPCTL. + +7. `Upload SPCTL config file as a resource?`: Answer `N` unless you need to use SPCTL from within the TEE during the order execution. In this case, your script should run a `curl` command to download SPCTL and find the uploaded `config.json` in the `/sp/inputs/` subdirectories. + +8. 
Wait for the Jupyter order to be ready and find a link in the output; for example:

```shell
===================================================
Jupyter instance is available at: https://beja-bine-envy.superprotocol.io
===================================================
```

9. Open the link in your browser to access Jupyter's UI.

**Note**:

The data in `/sp/output` will not be published as the order result when running the Jupyter server. To save your fine-tuning results, upload them either:
- via Python code
- using the integrated terminal in the Jupyter server
- using SPCTL with the config uploaded at Step 7.

## Support

If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new).
\ No newline at end of file
diff --git a/docs/cli/Guides/vllm.md b/docs/cli/Guides/vllm.md
new file mode 100644
index 00000000..d701a650
--- /dev/null
+++ b/docs/cli/Guides/vllm.md
@@ -0,0 +1,99 @@
---
id: "vllm"
title: "Inference With vLLM"
slug: "/guides/solutions/vllm"
sidebar_position: 6
---

This guide provides step-by-step instructions for running AI model inference using the Super Protocol packaging of [vLLM](https://www.vllm.ai/), an inference and serving engine for LLMs. This solution allows you to run LLM inference within Super Protocol's Trusted Execution Environment (TEE).

## Prerequisites

- [SPCTL](/cli/)
- Git
- BNB and SPPI tokens (opBNB) to pay for transactions and orders

## Repository

Clone the repository with Super Protocol solutions:

```shell
git clone https://github.com/Super-Protocol/solutions.git
```

The vLLM solution includes a Dockerfile and a helper script `run-vllm.sh` that facilitates workflow creation. Note that `run-vllm.sh` does not build an image and instead uses a pre-existing solution offer.
+ +## run-vllm.sh + +Copy SPCTL’s binary and its `config.json` to the `vllm/scripts` directory inside the cloned Super-Protocol/solutions repository. + +### Place an order + +1. Initiate a dialog to construct and place an order: + +```shell +./run-vllm.sh +``` + +2. `Select domain option`: + +- `1) Temporary Domain (*.superprotocol.io)` is suitable for testing and quick deployments. +- `2) Own domain` will require you to provide a domain name, TLS certificate, private key, and a tunnel server auth token. + +3. `Enter TEE offer id`: Enter a compute offer ID. This determines the available compute resources and cost of your order. You can find the full list of available compute offers on the [Marketplace](https://marketplace.superprotocol.com/). + +4. `Provide model as resource JSON path, numeric offer id, or folder path`: Enter one of the following: + +- a path to the model's resource JSON file, if it was already uploaded with SPCTL +- model offer ID, if the model exists on the Marketplace +- a path to the local directory with the model to upload it using SPCTL. + +5. `Enter API key` or press `Enter` to generate one automatically. + +Wait for the deployment to be ready and find the information about it in the output, for example: + +```shell +=================================================== +VLLM server is available at: https://whau-trug-nail.superprotocol.io +API key: d75c577d-e538-4d09-8f59-a0f00ae961a3 +Order IDs: Launcher=269042, VLLM=269044 +=================================================== +``` + +### API + +Once deployed on Super Protocol, your model runs inside a TEE and exposes an OpenAI-compatible API. You can interact with it as you would with a local vLLM instance. 
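For example, a chat request could look like the following sketch. The domain and API key are the values printed by `run-vllm.sh` in the example above, and the model name is a placeholder (use the model you actually deployed):

```shell
VLLM_URL="https://whau-trug-nail.superprotocol.io"   # from the run-vllm.sh output
VLLM_API_KEY="d75c577d-e538-4d09-8f59-a0f00ae961a3"  # from the run-vllm.sh output

# The request body follows the OpenAI chat format; the model name is a placeholder.
REQUEST='{
  "model": "your-model-name",
  "messages": [{"role": "user", "content": "Say hello in one sentence."}],
  "temperature": 0,
  "max_tokens": 64
}'

curl -sS --connect-timeout 5 "${VLLM_URL}/v1/chat/completions" \
  -H "Authorization: Bearer ${VLLM_API_KEY}" \
  -H 'Content-Type: application/json' \
  -d "${REQUEST}" || echo "Request failed; verify the URL and API key."
```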
+ +Depending on the type of request you want to make, use the following API endpoints: + +- Chat Completions (`/v1/chat/completions`) +- Text Completions (`/v1/completions`) +- Embeddings (`/v1/embeddings`) +- Audio Transcriptions & Translations (`/v1/audio/transcriptions`, `/v1/audio/translations`) + +See the [full list of API endpoints](https://docs.vllm.ai/en/latest/serving/openai_compatible_server/). + +## Dry run + +```shell +./run-vllm.sh --suggest-only +``` + +The option `--suggest-only` allows you to perform a dry run without actually uploading files and creating orders. + +Complete the dialog, as usual; only use absolute paths. + +In the output, you will see a prepared command for running the script non-interactively, allowing you to easily modify the variables and avoid re-entering the dialog. For example: + +```shell +RUN_MODE=temporary \ +MODEL_RESOURCE=55 \ +VLLM_API_KEY=9c6dbf44-cef7-43a4-b362-43295b244446 \ +/home/user/vllm/scripts/run-vllm.sh \ +--config ./config.json \ +--tee 8 +``` + +## Support + +If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new). \ No newline at end of file diff --git a/docs/cli/commands/offers/add-slot.md b/docs/cli/commands/offers/add-slot.md index c00ce34b..dd4ce284 100644 --- a/docs/cli/commands/offers/add-slot.md +++ b/docs/cli/commands/offers/add-slot.md @@ -8,7 +8,7 @@ sidebar_position: 9 Adds a slot to an offer. -**Important:** This command requires SPCTL with a [provider configuration file](/cli/#for-providers). +**Important:** This command requires SPCTL with a [provider configuration file](/cli/guides/provider-tools#providers-spctl). 
## Syntax diff --git a/docs/cli/commands/offers/delete-slot.md b/docs/cli/commands/offers/delete-slot.md index 63e30dda..efb0d9e4 100644 --- a/docs/cli/commands/offers/delete-slot.md +++ b/docs/cli/commands/offers/delete-slot.md @@ -8,7 +8,7 @@ sidebar_position: 11 Deletes a slot in an offer. -**Important:** This command requires SPCTL with a [provider configuration file](/cli/#for-providers). +**Important:** This command requires SPCTL with a [provider configuration file](/cli/guides/provider-tools#providers-spctl). Use the [`offers get`](/cli/commands/offers/get) command to get the IDs of all slots in an offer. Use the [`offers get-slot`](/cli/commands/offers/get-slot) command to get additional information on a slot. diff --git a/docs/cli/commands/offers/disable.md b/docs/cli/commands/offers/disable.md index 055cb1be..274311a6 100644 --- a/docs/cli/commands/offers/disable.md +++ b/docs/cli/commands/offers/disable.md @@ -8,7 +8,7 @@ sidebar_position: 6 Disables an existing enabled offer. -**Important:** This command requires SPCTL with a [provider configuration file](/cli/#for-providers). +**Important:** This command requires SPCTL with a [provider configuration file](/cli/guides/provider-tools#providers-spctl). ## Syntax diff --git a/docs/cli/commands/offers/enable.md b/docs/cli/commands/offers/enable.md index c8d8edb0..88586e91 100644 --- a/docs/cli/commands/offers/enable.md +++ b/docs/cli/commands/offers/enable.md @@ -8,7 +8,7 @@ sidebar_position: 7 Enables an existing disabled offer. -**Important:** This command requires SPCTL with a [provider configuration file](/cli/#for-providers). +**Important:** This command requires SPCTL with a [provider configuration file](/cli/guides/provider-tools#providers-spctl). 
## Syntax diff --git a/docs/cli/commands/offers/update-slot.md b/docs/cli/commands/offers/update-slot.md index 1ddf8482..b9959301 100644 --- a/docs/cli/commands/offers/update-slot.md +++ b/docs/cli/commands/offers/update-slot.md @@ -8,7 +8,7 @@ sidebar_position: 10 Updates a slot in an offer. -**Important:** This command requires SPCTL with a [provider configuration file](/cli/#for-providers). +**Important:** This command requires SPCTL with a [provider configuration file](/cli/guides/provider-tools#providers-spctl). Use the [`offers get`](/cli/commands/offers/get) command to get the IDs of all slots in an offer. Use the [`offers get-slot`](/cli/commands/offers/get-slot) command to get additional information on a slot. diff --git a/docs/cli/commands/offers/update.md b/docs/cli/commands/offers/update.md index 11a20313..501a7acc 100644 --- a/docs/cli/commands/offers/update.md +++ b/docs/cli/commands/offers/update.md @@ -8,7 +8,7 @@ sidebar_position: 5 Updates information about an offer. -**Important:** This command requires SPCTL with a [provider configuration file](/cli/#for-providers). +**Important:** This command requires SPCTL with a [provider configuration file](/cli/guides/provider-tools#providers-spctl). ## Syntax diff --git a/docs/cli/commands/providers/update.md b/docs/cli/commands/providers/update.md index bc5d3f92..8b57c044 100644 --- a/docs/cli/commands/providers/update.md +++ b/docs/cli/commands/providers/update.md @@ -8,7 +8,7 @@ sidebar_position: 3 Updates information about a provider. -**Important:** This command requires SPCTL with a [provider configuration file](/cli/#for-providers). +**Important:** This command requires SPCTL with a [provider configuration file](/cli/guides/provider-tools#providers-spctl). 
## Syntax diff --git a/docs/cli/commands/syntax.md b/docs/cli/commands/syntax.md index 6d71110d..877a52d7 100644 --- a/docs/cli/commands/syntax.md +++ b/docs/cli/commands/syntax.md @@ -54,4 +54,8 @@ So, the final format of this option should be one of the following: - `--solution <id>,<id>`. For example, `--solution 26,25`. - `--solution <filepath>`. For example, `--solution ./solution.resource.json`. -Read the descriptions of arguments and options and refer to the examples for more information. If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new). \ No newline at end of file +Read the descriptions of arguments and options and refer to the examples for more information. + +## Support + +If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new). 
\ No newline at end of file diff --git a/docs/cli/images/create-kubernetes-space.png b/docs/cli/images/create-kubernetes-space.png new file mode 100644 index 00000000..c98291d6 Binary files /dev/null and b/docs/cli/images/create-kubernetes-space.png differ diff --git a/docs/cli/images/ingresses.png b/docs/cli/images/ingresses.png new file mode 100644 index 00000000..86b2a44b Binary files /dev/null and b/docs/cli/images/ingresses.png differ diff --git a/docs/cli/images/kubernetes-create-cluster.png b/docs/cli/images/kubernetes-create-cluster.png new file mode 100644 index 00000000..312d03da Binary files /dev/null and b/docs/cli/images/kubernetes-create-cluster.png differ diff --git a/docs/cli/images/kubernetes-download-kubeconfig.png b/docs/cli/images/kubernetes-download-kubeconfig.png new file mode 100644 index 00000000..cf287367 Binary files /dev/null and b/docs/cli/images/kubernetes-download-kubeconfig.png differ diff --git a/docs/cli/images/kubernetes-publish-cluster.png b/docs/cli/images/kubernetes-publish-cluster.png new file mode 100644 index 00000000..bf0c2bfb Binary files /dev/null and b/docs/cli/images/kubernetes-publish-cluster.png differ diff --git a/docs/cli/images/swarm-log-in.png b/docs/cli/images/swarm-log-in.png new file mode 100644 index 00000000..e7abee2f Binary files /dev/null and b/docs/cli/images/swarm-log-in.png differ diff --git a/docs/cli/index.md b/docs/cli/index.md index eba0c16f..c2d013d9 100644 --- a/docs/cli/index.md +++ b/docs/cli/index.md @@ -10,7 +10,7 @@ import TabItem from '@theme/TabItem'; **SPCTL**—Super Protocol Control—is a versatile tool to access the Super Protocol CLI. With this tool, you can create and manage orders, offers, providers, keys, files, and more. -## Download SPCTL +## Download @@ -41,7 +41,7 @@ import TabItem from '@theme/TabItem'; You can also download and install SPCTL manually from the Super Protocol [GitHub repository](https://github.com/Super-Protocol/ctl). 
-## For users +## Set up You can set up SPCTL using the `./spctl setup` command or by manually creating a configuration file. @@ -103,7 +103,6 @@ You can set up SPCTL using the `./spctl setup` command or by manually creating a } } } - ``` 3. Do not change the preconfigured values and set values to the following keys: @@ -146,85 +145,6 @@ If you use a free Storj account, your files will become unavailable after the en | `"writeAccessToken"` | Storj access grant with **Full** permission (**Read**, **List**, **Write**, **Delete**) for this bucket. | | `"readAccessToken"` | Storj access grant with **Read** permission for this bucket. | -## For providers - -This section is for providers only. Skip it if you are a regular user. - -Providers need another copy of SPCTL configured to manage their offers. - - - - If you registered a provider using Provider Tools, you should have a configuration file created automatically in the Provider Tools directory. Its name should be similar to `spctl-config-0xB9f0b77BDbAe9fBe3E60BdC567E453f503605BAb.json`, where `0xB9f0b77BDbAe9fBe3E60BdC567E453f503605BAb` is your Authority Account wallet address. - - Rename this file to `config.json` so SPCTL can recognize it as its configuration file. Copy or download the SPCTL binary to the Provider Tools directory. - - Alternatively, use the `--config` option with SPCTL commands to use the same SPCTL binary with a different account. For example: - - ```shell - ./spctl orders list --my-account --type tee --config ./spctl-config-0xB9f0b77BDbAe9fBe3E60BdC567E453f503605BAb.json - ``` - - - As with your User Account's configuration file, you can manually create the provider's SPCTL configuration file. - - 1. In the Provider Tools directory, create a file named `config.json`. 
Use the following template: - - ```json title="config.json" - { - "backend": { - "url": "https://bff.superprotocol.com/graphql", - "accessToken": "eyJhbGciOiJFUzI1NiJ9.eyJhZGRyZXNzIjoiMHhBN0E5NjQ4ZGE2QTg5QjBhNzFhNGMwRDQ2Y2FENDAwMDU3ODI3NGEyIiwiaWF0IjoxNjc5OTk4OTQyLCJleHAiOjE3NDMxMTQxNDJ9.x2lx90D733mToYYdOWhh4hhXn3YowFW4JxFjDFtI7helgp2uqekDHFgekT5yjbBWeHTzRap7SHbDC3VvMIDe0g" - }, - "blockchain": { - "rpcUrl": "https://opbnb.superprotocol.com", - "smartContractAddress": "0x3C69ea105Fc716C1Dcb41859281Aa817D0A0B279", - "accountPrivateKey": "", - "authorityAccountPrivateKey": "" - }, - "storage": { - "type": "STORJ", - "bucket": "", - "prefix": "", - "writeAccessToken": "", - "readAccessToken": "" - }, - "workflow": { - "resultEncryption": { - "algo": "ECIES", - "key": "", - "encoding": "base64" - } - } - } - ``` - - 2. Do not change the preconfigured values and provide values to the following keys: - - | **Key** | **Description** | - | :- | :- | - | `"accountPrivateKey"` | The provider's Action Account private key. | - | `"authorityAccountPrivateKey"` | The provider's Authority Account private key. | - | `"bucket"` | (optional) Name of a Storj bucket. | - | `"prefix"` | (optional) Path to a directory inside the bucket. It can be empty. | - | `"writeAccessToken"` | (optional) Storj access grant with **Full** permission (**Read**, **List**, **Write**, **Delete**) for this bucket. | - | `"readAccessToken"` | (optional) Storj access grant with **Read** permission for this bucket. | - - You can find the section with your Authority and Action Accounts private keys in `provider-tools-config.json` in the Provider Tools directory. 
For example: - - ```json title="provider-tools-config.json" - "account": { - "authority": "0x50612a8bf52cb263825e58c72361ea58c04efa7af7e5b549ea9c2ed02059c668d", - "action": "0x0512ad96fzc01900d3ecf0987m81c7bc1fd2daf455ebb49kjce5b410c7dc6f05", - "tokenReceiver": "0x167d93786ghbf00d19b7d58065a5a59276e55ca1e621e47330f2b64d9fcb6a38" - }, - ``` - - Save and close the file. - - 3. Generate a key for order result encryption using the [`workflows generate-key`](/cli/commands/workflows/generate-key) command. Open `config.json` again and set the generated key to `workflow.resultEncryption.key`. Save and close the file. - - - ## Support -If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol). The Community Managers will be happy to help you. \ No newline at end of file +If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new). \ No newline at end of file diff --git a/docs/data-for-ai/Overview/about.md b/docs/data-for-ai/Overview/about.md index 2ee25347..0471cf17 100644 --- a/docs/data-for-ai/Overview/about.md +++ b/docs/data-for-ai/Overview/about.md @@ -5,7 +5,7 @@ slug: "/overview/about" sidebar_position: 1 --- -The Super Protocol Data-for-AI Campaign is more than a contest—it’s a collaborative initiative to rethink how AI systems are trained in high-stakes, regulated domains. By sourcing high-quality, publicly available regulatory and clinical data, we aim to make AI development transparent, decentralized, and verifiable from the ground up. +The Super Protocol Data-for-AI Campaign is more than a contest—it's a collaborative initiative to rethink how AI systems are trained in high-stakes, regulated domains. By sourcing high-quality, publicly available regulatory and clinical data, we aim to make AI development transparent, decentralized, and verifiable from the ground up. 
This campaign is powered by Super Protocol, a decentralized cloud platform designed for privacy-preserving AI computing. It combines confidential execution, on-chain traceability, and cryptographic proof of origin, creating a secure foundation for open collaboration between AI systems and data contributors. @@ -13,13 +13,13 @@ This campaign is powered by Super Protocol, a decentralized cloud platform desig AI companies in regulated industries like healthcare face a difficult trade-off: build costly internal systems to collect and validate data, or rely on opaque, third-party pipelines with unknown provenance. Both come with serious limitations—compliance overhead, audit risks, and a lack of trust in the data itself. -This campaign explores a third path: a verifiable, decentralized pipeline for AI training. Every submitted data link is publicly auditable, cryptographically signed by the contributor, and logged to a smart contract on the opBNB network. It’s not just about finding data—it’s about proving where it came from and how it was used. +This campaign explores a third path: a verifiable, decentralized pipeline for AI training. Every submitted data link is publicly auditable, cryptographically signed by the contributor, and logged to a smart contract on the opBNB network. It's not just about finding data—it's about proving where it came from and how it was used. -We’re working with Tytonix, whose medical AI systems will be trained directly on this dataset. Their tools help medical device companies navigate regulatory approvals faster and at lower cost. Your contributions fuel a real-world application with immediate value. +We're working with Tytonix, whose medical AI systems will be trained directly on this dataset. Their tools help medical device companies navigate regulatory approvals faster and at lower cost. Your contributions fuel a real-world application with immediate value. ## Why verifiability is crucial -In healthcare AI, data integrity isn’t optional. 
It must be provable—both to regulators and the companies relying on it. +In healthcare AI, data integrity isn't optional. It must be provable—both to regulators and the companies relying on it. Super Protocol ensures every submission has a traceable origin, a clear audit trail, and immutable on-chain attribution. This builds a usable bridge between community-sourced input and production-grade AI. @@ -27,18 +27,18 @@ Super Protocol ensures every submission has a traceable origin, a clear audit tr - On-chain record → compliance-ready data - Decentralized sourcing → scalable, cost-effective pipelines -What’s submitted here isn’t just checked off—it’s accounted for. +What's submitted here isn't just checked off—it's accounted for. ## Just the beginning Super Protocol already supports confidential AI training: models run in secure environments where data remains private, even from developers. Deployments are signed, logged, and verifiable. That infrastructure is live. -What’s missing—until now—is granular, user-attributed input. The ability to train AI on individual contributions, where each data point is trackable, auditable, and tied to its source without sacrificing privacy. +What's missing—until now—is granular, user-attributed input. The ability to train AI on individual contributions, where each data point is trackable, auditable, and tied to its source without sacrificing privacy. -This campaign is the first step. In future phases, contributors will be able to control how their data is used, know when it contributes to training, and opt in or out of specific models. It’s the beginning of a long-term shift: from closed, anonymous datasets to a transparent, accountable, and privacy-respecting AI ecosystem. +This campaign is the first step. In future phases, contributors will be able to control how their data is used, know when it contributes to training, and opt in or out of specific models. 
It's the beginning of a long-term shift: from closed, anonymous datasets to a transparent, accountable, and privacy-respecting AI ecosystem. ## Where you come in Contribute real-world data. Climb the leaderboard. Earn your share of $30,000 in USDT and Super Stakes (convertible into Super Tokens at the token generation event). -This isn’t just a data campaign. It’s the foundation for an AI system that doesn’t require trust, because everything is verifiable, transparent, and owned. \ No newline at end of file +This isn't just a data campaign. It's the foundation for an AI system that doesn't require trust, because everything is verifiable, transparent, and owned. \ No newline at end of file diff --git a/docs/data-for-ai/Overview/dates.md b/docs/data-for-ai/Overview/dates.md index 3c7567b5..2f4c1d45 100644 --- a/docs/data-for-ai/Overview/dates.md +++ b/docs/data-for-ai/Overview/dates.md @@ -14,4 +14,4 @@ June 9 – June 23, 12:00 PM UTC
→ All activity counts toward leaderboard ranking and final rewards. **Daily Reset:**
-Every day at 12:00 PM UTC, submission limits are reset, and the points’ value increases by 4%. \ No newline at end of file +Every day at 12:00 PM UTC, submission limits are reset, and the points' value increases by 4%. \ No newline at end of file diff --git a/docs/data-for-ai/Overview/support.md b/docs/data-for-ai/Overview/support.md index dcd74251..13dab134 100644 --- a/docs/data-for-ai/Overview/support.md +++ b/docs/data-for-ai/Overview/support.md @@ -5,7 +5,7 @@ slug: "/overview/support" sidebar_position: 6 --- -If you have questions, encounter issues, or need assistance during the campaign, we’re here to help. +If you have questions, encounter issues, or need assistance during the campaign, we're here to help. ## Support ticket @@ -15,4 +15,4 @@ For official support via email, please [submit a request](https://superprotocol. If you prefer real-time communication, you can also get help through our [Discord server](https://discord.com/invite/superprotocol). The channel is **#data-for-ai**. -We’re committed to supporting you throughout the campaign. \ No newline at end of file +We're committed to supporting you throughout the campaign. \ No newline at end of file diff --git a/docs/data-for-ai/Rules/referrals.md b/docs/data-for-ai/Rules/referrals.md index a2187631..07958729 100644 --- a/docs/data-for-ai/Rules/referrals.md +++ b/docs/data-for-ai/Rules/referrals.md @@ -9,9 +9,9 @@ The referral system allows you to earn additional points by inviting others to j ## How it works -- After registration, you’ll receive a unique referral link. +- After registration, you'll receive a unique referral link. - When someone signs up using your link—a *referee*—and starts submitting valid data links, you earn referral points. -- There’s no limit to how many people you can refer. +- There's no limit to how many people you can refer. - Each participant can only be referred once. - If someone signs up without your link or uses another link first, they cannot be reassigned to you. 
@@ -27,13 +27,13 @@ Day 3: ~37.9 points
...
Day 14 (Final Day): ~58.8 points per link -The longer the campaign runs, the more valuable each referee’s activity becomes. +The longer the campaign runs, the more valuable each referee's activity becomes. Note: While later submissions earn more per link, inviting people early gives them time to contribute more overall, resulting in higher total rewards for you. ## Referral penalty -If your referee submits an invalid data link, you’ll lose the referral reward for one previously earned link from that referee. This only affects the bonus points earned from that specific referee and does not impact your own points or rewards from other referees. +If your referee submits an invalid data link, you'll lose the referral reward for one previously earned link from that referee. This only affects the bonus points earned from that specific referee and does not impact your own points or rewards from other referees. Referral points cannot go negative, and the same rule applies individually to each referee and each invalid link. diff --git a/docs/data-for-ai/Rules/rewards.md b/docs/data-for-ai/Rules/rewards.md index 7864d0a6..7bb9f9c4 100644 --- a/docs/data-for-ai/Rules/rewards.md +++ b/docs/data-for-ai/Rules/rewards.md @@ -7,7 +7,7 @@ sidebar_position: 3 ## Reward recipients -Only the top 1,000 participants will get prizes, and the rewards will depend on the rank. The rank is determined by the user’s total points: own points plus referral points. +Only the top 1,000 participants will get prizes, and the rewards will depend on the rank. The rank is determined by the user's total points: own points plus referral points. 
| **Rank** | **USDT** | **Super Stakes** | | :- | :- | :- | @@ -41,7 +41,7 @@ The top 50 participants might be subject to KYC checks to verify identity and pr ## Leaderboard -To check winners, participants, referrals, rewards, and more, [read the campaign’s smart contract](https://opbnb.bscscan.com/address/0x8c77ef6ed2ee514d1754fbfc2710d70e9d6ba871#readContract) on the opBNB network. +To check winners, participants, referrals, rewards, and more, [read the campaign's smart contract](https://opbnb.bscscan.com/address/0x8c77ef6ed2ee514d1754fbfc2710d70e9d6ba871#readContract) on the opBNB network. ### Check a participant @@ -65,7 +65,7 @@ Fields in the example in order of appearance: | `0` | Number of links validated today. Always `0` because the campaign has ended. | | `true` | Flag indicating if the address is registered as a campaign participant. | | `false` | Flag indicating if the address has claimed the reward. | -| `0x8da2c62C23aEBeb1Aa8b5eE96d341d26a2edec6eB` | The referrer’s address. | +| `0x8da2c62C23aEBeb1Aa8b5eE96d341d26a2edec6eB` | The referrer's address. | | `68` | Number of referees. | | `2640` | Points the participant earned for their referrer. | | `67738` | Points the participant earned from their referees. | @@ -73,7 +73,7 @@ Fields in the example in order of appearance: | `0` | Total number of duplicate links submitted. | | `237` | Total number of valid links submitted. | | `152` | Total number of invalid links submitted. | -| `0xbF4aC1b6efd5C21e5Ce93f34c8F43C8a9bCACA3F3` | The participant’s address. | +| `0xbF4aC1b6efd5C21e5Ce93f34c8F43C8a9bCACA3F3` | The participant's address. | | `813` | Current rank in the leaderboard. | | `97280` | Total points earned. | | `10000000000000000000` | USDT reward, in denominations. 10^18 = 1 USDT. 
| diff --git a/docs/developers/cli_guides/providers_offers.md b/docs/developers/cli_guides/providers_offers.md index 9e1941c3..4fd12887 100644 --- a/docs/developers/cli_guides/providers_offers.md +++ b/docs/developers/cli_guides/providers_offers.md @@ -600,4 +600,4 @@ If there is an error, check `error.log` in the Provider Tools directory and `err ## Support -If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol). The Community Managers will be happy to help you. \ No newline at end of file +If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new). \ No newline at end of file diff --git a/docs/developers/deployment_guides/tunnels/repo.md b/docs/developers/deployment_guides/tunnels/repo.md index 101b3f1b..0f63e806 100644 --- a/docs/developers/deployment_guides/tunnels/repo.md +++ b/docs/developers/deployment_guides/tunnels/repo.md @@ -15,7 +15,7 @@ These Github Actions are automating the commands outlined in the [previous step] 1. Go to [GitHub](https://github.com) and log in to your account. -2. Click the [New Repository](https://github.com/new) button in the top-right. Enter `superprotocol-test-app` as repository name. You’ll have an option there to initialize the repository with a README file. Add `Node` as `.gitignore` template. +2. Click the [New Repository](https://github.com/new) button in the top-right. Enter `superprotocol-test-app` as repository name. You'll have an option there to initialize the repository with a README file. Add `Node` as `.gitignore` template. 3. Click the “Create repository” button. 
diff --git a/docs/developers/index.md b/docs/developers/index.md index c8d5c39b..d8bcab43 100644 --- a/docs/developers/index.md +++ b/docs/developers/index.md @@ -30,7 +30,7 @@ When you know the basics, try SPCTL—the Super Protocol CLI tool: ## Create your provider and offers with CLI 1. Follow the [Providers and Offers](/developers/cli_guides/providers_offers) guide to create your provider and a first offer. 2. Follow the [Moderation Guidelines](/developers/marketplace/moderation/) to approve your offer for Marketplace GUI. -3. [Update SPCTL configuration](/cli/#for-providers) as a provider to enable management of your provider and offers. +3. [Update SPCTL configuration](/cli/guides/provider-tools#providers-spctl) as a provider to enable management of your provider and offers. 4. Use [SPCTL commands](/developers/cli_guides/providers_offers#faq) to manage your provider and offers. Join us on [Discord](https://discord.gg/superprotocol). The Super Protocol team welcomes any feedback and questions! \ No newline at end of file diff --git a/docs/developers/marketplace_gui/first-steps.md b/docs/developers/marketplace_gui/first-steps.md index 1d7a5aff..60611168 100644 --- a/docs/developers/marketplace_gui/first-steps.md +++ b/docs/developers/marketplace_gui/first-steps.md @@ -214,4 +214,4 @@ Then, click the **Connect Wallet** button in the Marketplace GUI again and sel ## Support -If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol). Community Managers will be happy to help. \ No newline at end of file +If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new). 
\ No newline at end of file diff --git a/docs/developers/marketplace_gui/walkthrough.md b/docs/developers/marketplace_gui/walkthrough.md index 60febcd1..4a25bcd8 100644 --- a/docs/developers/marketplace_gui/walkthrough.md +++ b/docs/developers/marketplace_gui/walkthrough.md @@ -7,9 +7,9 @@ sidebar_position: 2 ## 1. Introduction -To better understand how Super Protocol works, let’s take a step-by-step walkthrough through the Marketplace GUI. +To better understand how Super Protocol works, let's take a step-by-step walkthrough through the Marketplace GUI. -As an example we’ll deploy the [Super Chat](/developers/offers/superchat) app with the tunnels. Please note that for this walkthrough we'll be using [Tunnels Launcher](/developers/offers/launcher), which cuts a few corners in order to streamline the experience. For the full tunnels deployment capabilities please refer to [this guide](/developers/deployment_guides/tunnels). +As an example we'll deploy the [Super Chat](/developers/offers/superchat) app with the tunnels. Please note that for this walkthrough we'll be using [Tunnels Launcher](/developers/offers/launcher), which cuts a few corners in order to streamline the experience. For the full tunnels deployment capabilities please refer to [this guide](/developers/deployment_guides/tunnels). You might want to read up on the fundamental Super Protocol concepts - such as [offers](/fundamentals/offers), [orders](/fundamentals/orders), [requirements and configurations](/fundamentals/slots), and [tunnels](/fundamentals/tunnels) - in advance, or - just dive into it and figure it out as you go. Your choice. @@ -142,7 +142,7 @@ To create this order via CLI, click the **Copy CLI workflow** button. It will ge :::info Step 6. Set up a passphrase. -Either input your own passphrase or generate a new one. Then press the `Place Order` button. Save your passphrase! You won’t be able to access your order results without it. 
For testing it's easier to have a single passphrase for all orders. +Either input your own passphrase or generate a new one. Then press the `Place Order` button. Save your passphrase! You won't be able to access your order results without it. For testing it's easier to have a single passphrase for all orders. ::: diff --git a/docs/fundamentals/certification.md b/docs/fundamentals/certification.md index 611d5375..4ef4e184 100644 --- a/docs/fundamentals/certification.md +++ b/docs/fundamentals/certification.md @@ -5,37 +5,67 @@ slug: "/certification" sidebar_position: 6 --- -Super Protocol uses a certification system for signing data, verifying signatures, and ensuring applications operate within a trusted confidential computing environment. Verified data is published on the blockchain on behalf of confidential containers, allowing anyone to validate application integrity and ensure confidentiality. End users only interact with issued certificates and verify signatures, while the complexities of Remote Attestation are seamlessly managed in the background. +The Super Protocol Certification System is a hierarchical infrastructure for managing trust in confidential computing environments. The main purpose of the system is to create a valid chain of X.509 certificates for any applications running in Trusted Execution Environments (TEEs). The Certification System itself also operates within TEEs, ensuring the entire chain is rooted in hardware-based trust. -All the system components are open-source, ensuring transparency and verifiability. +The Certification System performs remote attestation under the hood, but exposes a familiar X.509-style certificate chain on the surface. This allows any verifier (a user, an auditor, or an automated service) to validate that: + +- The execution took place within a TEE. +- The certificate chain leading to the workload is valid and trusted. + +The Certification System can function as an independent, standalone service. 
In this capacity, it could serve external companies and users who need to establish certificate chains for their own confidential computing applications. + +Note that the system is not responsible for validating what an application does internally. Its primary role is to issue certificates to trusted confidential environments, forming a cryptographically verifiable trust chain. + +All system components are planned to be open-sourced, improving transparency and verifiability. ## Architecture -The backbone of the system is a hierarchical structure of Certification Authorities operating inside Trusted Execution Environments (TEE)—Intel SGX enclaves. +The Certification System is organized as a hierarchy of Certification Authorities (CAs) that establishes trust for TEEs through a standard certificate chain. Every CA operates within a TEE—an Intel SGX enclave.

-The Root Certification Authority (*Root CA*) is located at the highest hierarchical level. At the start, Root CA generates a self-signed certificate, embedding the SGX attestation quote. +The chain consists of three levels: + +- Root CA is the top-level certificate authority that establishes the trust anchor for the entire system. At the start, it generates a self-signed certificate that embeds the SGX attestation quote. +- SubRoot CAs are intermediate certificate authorities. They submit their quotes and public keys to the Root CA and request certificates. The Root CA verifies these incoming requests and then issues and signs certificates for the SubRoot CAs. Once a SubRoot CA is certified by the Root CA, it can certify any TEE-backed environment that proves it is actually confidential. +- End certificates are issued to specific workloads, entire Confidential Virtual Machines (CVMs) running in TEEs, and in some other cases. These certificates are not CAs and cannot be used to sign or issue other certificates. + +Each level in the hierarchy receives its certificate from the level above, creating a chain of trust that ultimately traces back to the Root CA. + +## Trusted Loader -SubRoot Certification Authorities (*SubRoot CAs*) are located at the next hierarchical level. These submit their quotes and public keys to the Root CA and request certificates. The Root CA verifies these incoming requests and then issues and signs certificates for the SubRoot CAs. +Trusted Loader is a special service that prepares and launches the workload associated with an order inside a CVM running in a TEE. Loader occupies a privileged position within the execution environment, enabling it to access the platform's underlying attestation capabilities. Workloads themselves do not have such access. -The SubRoot CAs, in turn, issue and sign certificates for orders by request. +Trusted Loader also: + +- Collects hashes of the workload and its components. 
+- Verifies workload integrity before execution starts.
+- Requests end certificates.
+
+All end certificates are requested and received by Trusted Loader. Other components do not interact directly with Certification Authorities. Trusted Loader may request certificates in several cases:
+
+- At startup. The certificate confirms that the confidential environment is correctly configured and that the attestation challenge (TDX, SEV-SNP, etc.) matches expectations.
+- When generating session keys. The certificate is included in the session key structures used during execution.
+- When forming a TEE Confirmation Block (TCB). The certificate is embedded into the TCB, which also includes system information and measurements.
+- When deploying an order. An order-specific certificate is issued and delivered, along with cryptographic keys, to the order's execution environment.

## Order certificates

-The issuing of order certificates involves [Trusted Loader](/whitepaper/tee-provider/#trusted-loader-mechanism)—a mechanism developed to load and run applications within a TEE. Trusted Loader operates inside the Confidential VM that executes the order. This Confidential VM may be deployed within a CPU-based or CPU/GPU-augmented TEE using technologies such as Intel TDX, AMD SEV-SNP, NVIDIA Confidential Computing, or others, making the system TEE-agnostic.
+Trusted Loader requests a dedicated order-specific certificate when an order is prepared for execution. This certificate includes order-specific data, such as the hash of the workload information.

-To receive an order certificate, the Trusted Loader sends a request to a SubRoot CA providing the quote and a public key. The SubRoot CA verifies the quote, issues the order certificate, and signs it with the provided public key.
+Trusted Loader places the order certificate as a file into the order's execution environment. There, it can be used by the order itself to prove that it was launched within a confidential environment.
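The three-level chain described above follows standard X.509 conventions, so its shape can be reproduced with ordinary tooling. The sketch below builds a toy Root CA, SubRoot CA, and end certificate, then verifies the chain. All file and subject names are hypothetical; real Super Protocol certificates additionally embed SGX attestation quotes, which plain openssl does not generate.

```shell
# Toy reconstruction of the three-level chain with plain X.509 tooling.
cd "$(mktemp -d)"

# 1. Root CA: generates a self-signed certificate (the trust anchor).
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key \
  -out root-ca.pem -subj "/CN=Toy Root CA" -days 1

# 2. SubRoot CA: submits a request with its public key; the Root CA
#    verifies it and signs an intermediate CA certificate.
openssl req -newkey rsa:2048 -nodes -keyout subroot.key \
  -out subroot.csr -subj "/CN=Toy SubRoot CA"
printf 'basicConstraints=critical,CA:true\nkeyUsage=keyCertSign\n' > ca-ext.cnf
openssl x509 -req -in subroot.csr -CA root-ca.pem -CAkey root.key \
  -CAcreateserial -extfile ca-ext.cnf -days 1 -out subroot-ca.pem

# 3. End certificate: issued by the SubRoot CA; it carries no CA
#    extensions and cannot sign further certificates.
openssl req -newkey rsa:2048 -nodes -keyout order.key \
  -out order.csr -subj "/CN=toy-order"
openssl x509 -req -in order.csr -CA subroot-ca.pem -CAkey subroot.key \
  -CAcreateserial -days 1 -out order-cert.pem

# Any verifying party can now trace the chain back to the Root CA.
openssl verify -CAfile root-ca.pem -untrusted subroot-ca.pem order-cert.pem
```

The same verification step is what makes hash mismatches detectable: a certificate whose embedded measurements differ still verifies cryptographically, but its contents are visible to anyone who inspects the chain.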
+
+Note that the Certification System does not determine whether a CVM is correct or compromised. If a CVM runs in a confidential environment, it can obtain certificates. However, differences in hashes are visible in the certificate chain and can be detected by any verifying party.

### Order validation

-Orders in Super Protocol are created with necessary input data. This execution environment is referred to as *Workload Info*.
+Orders in Super Protocol are created with a workload description known as *Workload Info*.

-The Workload Info includes an array called `runtimeInfo[]` with metadata about solutions and datasets used in the order. Each such order component has an entry in this array, which includes:
+Workload Info includes an array called `runtimeInfo` that contains information about solutions and data associated with the order. Each data and solution component of the order has an entry in this array, which includes:

-- Type
+- Type (solution or data)
- Hash
- Size
- Signature key hash (optional)
@@ -43,9 +73,13 @@ The Workload Info includes an array called `runtimeInfo[]` with metadata about

The hash of the Workload Info is included in the order certificate.

-Trusted Loader generates and publishes a report in the blockchain, allowing anyone to validate the order. This order report includes:
+Before order execution begins, Trusted Loader checks the integrity of the full workload composition (solutions, data, and configuration). The order proceeds only if this verification succeeds.
+
+Trusted Loader also generates and publishes a report to the blockchain, allowing any verifier to validate the order. The report includes:

- The public components of all the certificates in the chain
- Workload Info:
  - Order creation date
-  - The `runtimeInfo[]` array
\ No newline at end of file
+  - The `runtimeInfo` array
+
+The immutable nature of the blockchain prevents any further alterations to the report once it is published.
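A verifier holding the on-chain report can independently recompute a component's Hash and Size values and compare them with that component's `runtimeInfo` entry. A minimal sketch, assuming SHA-256 hex hashes; the stand-in file and the choice of hash algorithm are illustrative assumptions, not Super Protocol's actual wire format:

```shell
# Stand-in for an order component; a real verifier would use the actual
# solution or data archive referenced by the order.
cd "$(mktemp -d)"
printf 'example solution payload' > solution.bin

# Recompute the fields that the component's runtimeInfo entry carries.
hash="$(sha256sum solution.bin | awk '{print $1}')"
size="$(wc -c < solution.bin)"
echo "hash=$hash"
echo "size=$size"

# A verifying party compares these values with the Hash and Size fields
# of the matching runtimeInfo entry in the published order report.
```

Because the Workload Info hash is itself embedded in the order certificate, a match here ties the concrete files back to the certificate chain.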
The report enables verifiers to confirm exactly what was launched and that the certificate corresponds to that specific workload.
\ No newline at end of file
diff --git a/docs/fundamentals/images/certification-system-architecture.png b/docs/fundamentals/images/certification-system-architecture.png
index 4e4ffd05..32eead65 100644
Binary files a/docs/fundamentals/images/certification-system-architecture.png and b/docs/fundamentals/images/certification-system-architecture.png differ
diff --git a/docs/fundamentals/orders.md b/docs/fundamentals/orders.md
index 5bd30508..8532ed07 100644
--- a/docs/fundamentals/orders.md
+++ b/docs/fundamentals/orders.md
@@ -75,7 +75,8 @@ Statuses:
- **New**: The order is waiting for the response from the compute provider.
- **In Queue**: The order is waiting in the queue for the compute to become available. This status appears only if the compute is overloaded with orders.
- **Processing**: The compute is executing the order inside a TEE.
-- **Done**: The order is completed.
+- **Done**: The order is completed successfully.
+- **Error**: The order completed with an error.

Note that the **Processing** and **Done** statuses may have different meanings depending on the usage scenario. For one-time orders, such as executing a Python script, **Processing** means that the machine is working with the solution and data. When this is over, the main order becomes **Done**.
diff --git a/docs/guides/index.md b/docs/guides/index.md
index 69d2473a..26693316 100644
--- a/docs/guides/index.md
+++ b/docs/guides/index.md
@@ -7,18 +7,29 @@ sidebar_position: 0

## Marketplace GUI

-| **Guide** | **Description** |
+| **Guide** | **Description** |
| :- | :- |
| [Log In with MetaMask](/marketplace/guides/log-in) | How to log in to the [Marketplace](https://marketplace.superprotocol.com/) using MetaMask. |
| [Log In with Trust Wallet](/marketplace/guides/log-in-trustwallet) | How to log in to the Marketplace using Trust Wallet. |
| [Deploy Your Model](/marketplace/guides/deploy-model) | How to upload and deploy an AI model on Super Protocol. |
| [Publish an Offer](/marketplace/guides/publish-offer) | How to upload an AI model and publish it on the Marketplace. |
-| [Prepare a ComfyUI Workflow](/marketplace/guides/prepare-comfyui) | How to prepare a ComfyUI workflow with custom nodes. |
| [Set Up Personal Storage](/marketplace/guides/storage) | How to set up your personal Storj account. |
| [Troubleshooting](/marketplace/guides/troubleshooting) | Most common issues and ways to fix them. |

## CLI

-| **Guide** | **Description** |
+| **Guide** | **Description** |
+| :- | :- |
+| [Configure SPCTL](/cli) | How to set up SPCTL—a Super Protocol CLI tool. |
+| [Configure Provider Tools](/cli/guides/provider-tools) | How to set up Provider Tools—a Super Protocol CLI utility for registering providers and creating offers. |
+| [Quick Deployment Guide](/cli/guides/deploy-app) | Quick instructions on deploying a solution and data on Super Protocol. |
+| [Confidential Collaboration](/cli/guides/multi-party-collab) | A scenario of confidential collaboration on Super Protocol. |
+
+### Solutions
+
+| **Guide** | **Description** |
| :- | :- |
-| [Quick Deployment Guide](/cli/guides/quick-guide) | Quick instructions on deploying a solution and data on Super Protocol. |
\ No newline at end of file
+| [Text Generation WebUI](/cli/guides/solutions/tgwui) | How to deploy a model using Text Generation WebUI. |
+| [ComfyUI](/cli/guides/solutions/comfyui) | How to prepare a ComfyUI workflow with custom nodes. |
+| [Unsloth](/cli/guides/solutions/unsloth) | How to fine-tune an AI model using the Super Protocol packaging of Unsloth. |
+| [vLLM](/cli/guides/solutions/vllm) | How to run a model inference using the Super Protocol packaging of vLLM. |
\ No newline at end of file
diff --git a/docs/hackathon/about.md b/docs/hackathon/about.md
index 181ad43e..c1acdb32 100644
--- a/docs/hackathon/about.md
+++ b/docs/hackathon/about.md
@@ -7,7 +7,7 @@ sidebar_position: 1

## Hackathon

-The [Super Hackathon](https://hackathon.superprotocol.com/) is a global Web3 event designed to demonstrate the scalability and security of Super Protocol’s cloud under real on-chain load. Participants will migrate existing open-source dApps to opBNB, integrate confidential oracle data feeds, and generate verifiable transactions to stress-test performance. The goal is to showcase how real-world decentralized services can operate efficiently, privately, and transparently at scale.
+The [Super Hackathon](https://hackathon.superprotocol.com/) is a global Web3 event designed to demonstrate the scalability and security of Super Protocol's cloud under real on-chain load. Participants will migrate existing open-source dApps to opBNB, integrate confidential oracle data feeds, and generate verifiable transactions to stress-test performance. The goal is to showcase how real-world decentralized services can operate efficiently, privately, and transparently at scale.
## Super Protocol

@@ -15,7 +15,7 @@ The [Super Hackathon](https://hackathon.superprotocol.com/) is a global Web3 eve

## Confidential oracles

-Confidential oracles are a key showcase of Super Protocol’s architecture and the advantages of confidential execution. Built on Chainlink Data Feeds and executed inside TEEs, they keep all data and computations private while remaining verifiable on-chain.
+Confidential oracles are a key showcase of Super Protocol's architecture and the advantages of confidential execution. Built on Chainlink Data Feeds and executed inside TEEs, they keep all data and computations private while remaining verifiable on-chain.

Traditional oracles rely on thousands of untrusted nodes to reach consensus, making the whole process quite expensive. Super Protocol achieves the same trust with far fewer nodes, as each operates inside a TEE that cryptographically guarantees honest execution. A single node with 3 CPUs and 4 GB RAM can handle the workload of about 1,000 Chainlink nodes, delivering major improvements in speed and cost efficiency.

@@ -54,7 +54,7 @@ Prizes are awarded to the team as a whole, not to individual participants or pro

## Community & support

-Official discussions and support take place in Super Protocol’s [Discord server](https://discord.gg/superprotocol), channel #hackathons.
+Official discussions and support take place in Super Protocol's [Discord server](https://discord.gg/superprotocol), channel #hackathons.

Sign up for our newsletter and social media on the [Super Hackathon webpage](https://hackathon.superprotocol.com/) to keep up to date.

diff --git a/docs/hackathon/liquity.md b/docs/hackathon/liquity.md
index d51815a9..f80debef 100644
--- a/docs/hackathon/liquity.md
+++ b/docs/hackathon/liquity.md
@@ -94,7 +94,7 @@ Below is an example of partial console output:

## 6. Configure for opBNB Deployment

-Now that local deployment works, let’s prepare the project for deployment to the opBNB mainnet.
+Now that local deployment works, let's prepare the project for deployment to the opBNB mainnet.

### 6.1. Create the environment file

diff --git a/docs/hackathon/rules.md b/docs/hackathon/rules.md
index 53a72db5..6930d7c8 100644
--- a/docs/hackathon/rules.md
+++ b/docs/hackathon/rules.md
@@ -17,7 +17,7 @@ The interface methods (`latestAnswer`, `getAnswer`, `latestRound`, `getRoundData`

The dApp must be open-source and have been publicly deployed before September 1, 2025, on Ethereum, Polygon, or BNB Chain.

-The chosen dApp’s original smart contracts must be verified on a public block explorer (e.g., Etherscan, BscScan, etc.).
+The chosen dApp's original smart contracts must be verified on a public block explorer (e.g., Etherscan, BscScan, etc.).

The migration should require minimal code changes—no more than 5% of the original codebase.

@@ -25,7 +25,7 @@ Teams may submit multiple unique dApps, but each must have a different original

Frontends are welcome but optional—evaluation is based solely on the deployed smart contracts and their on-chain activity.

-The project’s license must allow forking and reuse (acceptable licenses include MIT, Apache 2.0, GPL-family, or equivalent).
+The project's license must allow forking and reuse (acceptable licenses include MIT, Apache 2.0, GPL-family, or equivalent).

## Deploying to opBNB

diff --git a/docs/hackathon/venus-protocol.md b/docs/hackathon/venus-protocol.md
index d4f828a7..c0924b9c 100644
--- a/docs/hackathon/venus-protocol.md
+++ b/docs/hackathon/venus-protocol.md
@@ -141,7 +141,7 @@ For example, [opbnb.bscscan.com/address/0x6DA2Fe3A44dc2837e1ffc450339Ae107AE1AC2

## 12. Submit the migration

-To complete the migration, you’ll need both the original and new contract addresses.
+To complete the migration, you'll need both the original and new contract addresses.

### 12.1. Locate the original deployment

diff --git a/docs/marketplace/Guides/deploy-model.md b/docs/marketplace/Guides/deploy-model.md
index f5b6a77e..d3b0b389 100644
--- a/docs/marketplace/Guides/deploy-model.md
+++ b/docs/marketplace/Guides/deploy-model.md
@@ -33,7 +33,7 @@ Ensure your model meets the Super Protocol requirements:
- Text-to-Video
- Mask Generation

-If you plan to deploy a ComfyUI workflow with custom nodes, [prepare the files](/marketplace/guides/prepare-comfyui) before uploading. For security reasons, you cannot upload custom nodes directly to a deployed ComfyUI.
+If you plan to deploy a ComfyUI workflow with custom nodes, [prepare the files](/cli/guides/solutions/comfyui) before uploading. For security reasons, you cannot upload custom nodes directly to a deployed ComfyUI.

1.2. Due to [testnet limitations](/marketplace/limitations), the total size of model files should not exceed 13 GB. Support for bigger models will be available in the future.

diff --git a/docs/marketplace/Guides/storage.md b/docs/marketplace/Guides/storage.md
index 95915151..21ebb997 100644
--- a/docs/marketplace/Guides/storage.md
+++ b/docs/marketplace/Guides/storage.md
@@ -7,14 +7,12 @@ sidebar_position: 6

This guide provides step-by-step instructions on how to set up your personal Storj account.

-The guide is intended for advanced Web3 users; feel free to skip it and continue using the default recommended option—**Super Protocol cloud**. Read about [types of storage](/marketplace/account/web3#storage).
+The guide is intended for advanced users; feel free to skip it and continue using the default recommended option—**Super Protocol cloud**. Read about [types of storage](/marketplace/account#storage).

-
+

-Web2 users must first [log in as a Web3 user](/marketplace/guides/log-in) to be able to upload to a personal Storj account instead of the Super Protocol cloud.
-
## Step 1. Register a Storj account

If you don't already have a [Storj](https://www.storj.io/) account, register one. Both free Trial and Pro accounts are suitable. Note that with a Trial account, your files will become inaccessible once the trial period ends.

@@ -36,7 +34,7 @@ As a result, you should have two pairs Access Key + Secret Key.

## Step 4. Set up your Super Protocol Web3 account

-Open the [Marketplace web app](https://marketplace.superprotocol.com/). Log in as a Web3 user and open the **Account** window.
+Open the [Marketplace web app](https://marketplace.superprotocol.com/), sign in, and open the **Account** window.
diff --git a/docs/marketplace/account.md b/docs/marketplace/account.md new file mode 100644 index 00000000..99f147c1 --- /dev/null +++ b/docs/marketplace/account.md @@ -0,0 +1,52 @@ +--- +id: "account" +title: "Enter Marketplace" +slug: "/account" +sidebar_position: 3 +--- + +Super Protocol supports two login methods: + +- Web2 requires an account on one of the supported platforms: + - Google + - Hugging Face + - GitHub + - Microsoft +- Web3 requires a software wallet installed as a browser extension: + - MetaMask + - Trust Wallet + +For instructions on how to set up software wallets and connect them to the Marketplace, read [How to Log In as a Web3 User](/marketplace/guides/log-in). + + +
+
+ +## Account window + +This window shows your user account settings. + + +
+
+ +**User ID**: your unique user ID. + +**Login**: the OAuth2 provider and your login email address. + +The **Get SPPI** button allows you to get tokens necessary to place orders. + +### Storage + +You have two options of decentralized storage to upload files: + +- **Super Protocol cloud**: + - Recommended for most users. + - Does not require additional setup. + - Uses Super Protocol's Storj account and thus relies on Super Protocol as the storage provider. +- **Your Storj account**: + - Intended for advanced users. + - Requires creating and [setting up a Storj account](/marketplace/guides/storage). + - Gives sole control over the uploaded content and storage account. + +Read [How to Set Up Storage](/marketplace/guides/storage) for step-by-step instructions. \ No newline at end of file diff --git a/docs/marketplace/account/index.md b/docs/marketplace/account/index.md deleted file mode 100644 index e2ae472f..00000000 --- a/docs/marketplace/account/index.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -id: "account" -title: "Enter Marketplace" -slug: "/account" -sidebar_position: 3 ---- - -There are two types of accounts in Super Protocol: - -- Web3 User account -- Web2 User account - - -
-
- -## Web3 User account - -_Web3 User account_ provides access to all Marketplace capabilities, including: - -- Full decentralization and sole control of user's funds, models, and datasets. -- Ability to upload models and datasets to the Super Protocol cloud or a personal Storj account. -- Placement of orders using Marketplace offers or the user's own uploaded content. -- Registration of individual providers. -- Creation and monetization of model and dataset offers on the Marketplace. -- Ability to request additional SPPI tokens. - -Read [How to Log In as a Web3 User](/marketplace/guides/log-in) for step-by-step instructions. - -## Web2 User account - -_Web2 User account_ is a quick way to start with the Marketplace. It streamlines a few steps, but this comes at the expense of full decentralization, such as using OAuth2 authentication for login instead of the decentralized MetaMask. - -To log in as a Web2 user, you need an account on one of the supported platforms: - -- Google -- Hugging Face -- GitHub -- Microsoft diff --git a/docs/marketplace/account/web2.md b/docs/marketplace/account/web2.md deleted file mode 100644 index 5e2e2140..00000000 --- a/docs/marketplace/account/web2.md +++ /dev/null @@ -1,31 +0,0 @@ ---- -id: "web2" -title: "Web2 User Account" -slug: "/account/web2" -sidebar_position: 2 ---- - -This window shows the settings of your [Web2 User account](/marketplace/account#web2-user-account). - - -
-
- -**User ID**: your unique user ID. - -**Login**: the OAuth2 provider and your login email address. - -## Storage - -Super Protocol supports two options of decentralized storage to upload files: - -- **Super Protocol cloud**: - - Does not require additional setup. - - Uses Super Protocol's Storj account and thus relies on Super Protocol as the storage provider. - - Costs SPPI tokens for additional storage beyond the basic free package. -- **Your Storj account**: - - Available to Web3 users only. - - Requires creating and setting up a Storj account. - - Gives sole control over the uploaded content and storage account. - -To enable uploading to your personal Storj account, [log in as a Web3 user](/marketplace/guides/log-in). \ No newline at end of file diff --git a/docs/marketplace/account/web3.md b/docs/marketplace/account/web3.md deleted file mode 100644 index dd18bbb3..00000000 --- a/docs/marketplace/account/web3.md +++ /dev/null @@ -1,36 +0,0 @@ ---- -id: "web3" -title: "Web3 User Account" -slug: "/account/web3" -sidebar_position: 1 ---- - -This window allows you to manage your [Web3 User account](/marketplace/account#web3-user-account). - - -
-
- -**User ID**: your unique user ID is your EVM wallet address. - -**Login**: the Web3 login method and the EVM wallet address you are using. Currently, Super Protocol only supports MetaMask as a Web3 login method. - -**Get SPPI** and **Get BNB** buttons allow you to get tokens necessary to place orders: - -- SPPI tokens are required to pay and receive payments in Super Protocol. -- BNB tokens are required to pay for opBNB blockchain transactions. - -## Storage - -You have two options of decentralized storage to upload files: - -- **Super Protocol cloud**: - - Recommended for most users. - - Does not require additional setup. - - Uses Super Protocol's Storj account and thus relies on Super Protocol as the storage provider. -- **Your Storj account**: - - Intended for advanced users. - - Requires creating and setting up a Storj account. - - Gives sole control over the uploaded content and storage account. - -Read [How to Set Up Storage](/marketplace/guides/storage) for step-by-step instructions. \ No newline at end of file diff --git a/docs/marketplace/images/web2-account.png b/docs/marketplace/images/web2-account.png index 528db2d7..80cc8bee 100644 Binary files a/docs/marketplace/images/web2-account.png and b/docs/marketplace/images/web2-account.png differ diff --git a/docs/marketplace/images/web3-account.png b/docs/marketplace/images/web3-account.png deleted file mode 100644 index 46480abe..00000000 Binary files a/docs/marketplace/images/web3-account.png and /dev/null differ diff --git a/docs/whitepaper/abstract.md b/docs/whitepaper/abstract.md index 31c74761..6e25470f 100644 --- a/docs/whitepaper/abstract.md +++ b/docs/whitepaper/abstract.md @@ -13,7 +13,7 @@ The key Web3 concept is to create open, decentralized alternatives for all the p

-But decentralization is not the only defining feature of the Web3 concept. It is expected that the following technologies will play a key role in the future: Big Data, AI, IoT, and VR/AR. These technologies require a lot of computational capacity and access to data, know-how and specific tools for creating cutting edge products. The lion’s share of the cloud services market at the moment is [controlled by only three corporations](https://www.canalys.com/newsroom/global-cloud-market-Q121). The same corporations and a number of other similar entities also control most of the data and technologies that will be fundamental for the Internet of the future. +But decentralization is not the only defining feature of the Web3 concept. It is expected that the following technologies will play a key role in the future: Big Data, AI, IoT, and VR/AR. These technologies require a lot of computational capacity and access to data, know-how and specific tools for creating cutting edge products. The lion's share of the cloud services market at the moment is [controlled by only three corporations](https://www.canalys.com/newsroom/global-cloud-market-Q121). The same corporations and a number of other similar entities also control most of the data and technologies that will be fundamental for the Internet of the future. In addition to massive centralization of computational capacity, [the year 2020 also saw a serious shortage thereof](https://en.wikipedia.org/wiki/2020%E2%80%932021_global_chip_shortage). This is happening at the time when millions of GPUs are used worldwide [to mine cryptocurrencies](https://en.wikipedia.org/wiki/Proof_of_work) even though they could be deployed to help develop and promote AI solutions in such key areas as healthcare, logistics, education, etc. 
diff --git a/docs/whitepaper/blockchain-solution.md b/docs/whitepaper/blockchain-solution.md index 262c8724..0ea16595 100644 --- a/docs/whitepaper/blockchain-solution.md +++ b/docs/whitepaper/blockchain-solution.md @@ -21,7 +21,7 @@ To ensure order processing, the blockchain environment provides an infrastructur * **TEE Offers** * **Orders** -Now let’s take a look at each smart contract in detail. +Now let's take a look at each smart contract in detail. ### Basic smart contract for the Super Protocol Config system @@ -257,7 +257,7 @@ The offer describes the cost of using it and the minimum order deposit. To maint | name | string | offer name | | description | string | offer description | | linkage | string | linkage spec, for example, for docker | -| restrictions | string | possible restrictions and requirements for various provider offers. Used when creating an order. E.g.: ‘{“TEE”: [GUID1, GUID2], “Storage”, “Solution”: [GUID3, GUID4]}’ | +| restrictions | string | possible restrictions and requirements for various provider offers. Used when creating an order. E.g.: ‘{“TEE”: [GUID1, GUID2], “Storage”, “Solution”: [GUID3, GUID4]}' | | slots | [{SlotInfo; OptionInfo; SlotUsage}] | array of provided configurations and conditions for their use | | input | string | input data format | | output | string | output data format | @@ -399,7 +399,7 @@ The results of the execution or encountered errors are later added up to the ord | **cancelOrder(guid orderId) public returns(bool)** | order.consumer | blockchain | |

Request to stop execution of the order on the consumer's side. The order status is changed to "canceling", the provider saves the end result of the order and moves the order to "canceled" status. If the offer is of cancelable type, smart contract immediately refunds the remaining deposit based on the proportion of time running or depositSpent. If the offer is of non-cancellable type, the provider sets a fee for their work after the order is complete.

This method works only when all sub-orders are stopped.

| | | | **refillOrder(guid orderId, uint256 orderAmount)** | order.consumer | blockchain | -| Replenishment of the deposit by the customer. Normally required when renewing a rental. It can also be used to obtain additional results if that is supported by the provider’s offer. | | | +| Replenishment of the deposit by the customer. Normally required when renewing a rental. It can also be used to obtain additional results if that is supported by the provider's offer. | | | | **withdrawProfit(guid orderId) public** | order.provider.tokenReceiver | SDK + blockchain | | Order profit withdrawal by the provider. Available after the order is executed. In this case, the profit is transferred to deferred payments for the number of days specified in the protocol settings (_profitWithdrawDelayDays_). | | | | **withdrawChange(guid orderId) public** | order.consumer | SDK + blockchain | diff --git a/docs/whitepaper/high-level-description.md b/docs/whitepaper/high-level-description.md index 5a660198..6f7a05fe 100644 --- a/docs/whitepaper/high-level-description.md +++ b/docs/whitepaper/high-level-description.md @@ -9,7 +9,7 @@ sidebar_position: 5

-From a bird’s eye view, Super Protocol involves the interactions shown in the above diagram. The interactions include the following entities: +From a bird's eye view, Super Protocol involves the interactions shown in the above diagram. The interactions include the following entities: - **Provider Offers.** In a form of a provider offer, the provider offers their resources or values in exchange for a certain reward. The offer can fall into one of three categories: - **Input.** Offers of this type are used for cooperative processing within a trusted execution environment (TEE). These can be data offers or solution offers. diff --git a/docs/whitepaper/target-audience.md b/docs/whitepaper/target-audience.md index 3be24d01..9e2b8787 100644 --- a/docs/whitepaper/target-audience.md +++ b/docs/whitepaper/target-audience.md @@ -24,7 +24,7 @@ Super Protocol is very unique in a way that it allows equipment owners to engage Data in the modern world is being created everywhere, while new, more advanced processing algorithms allow for an ever larger number of ways to use this data. However, there are quite a few challenges here. Obviously, large volumes of data are not originally meant to be public, which means it requires anonymization or confidential computing. -High-quality anonymization is not always possible without losing data utility. Additionally, many of today’s analytical tools enable successful [data de-anonymization](https://www.cs.utexas.edu/~shmat/shmat_oak09.pdf) under many different circumstances: “To demonstrate its effectiveness on real-world networks, we show that a third of the users who can be verified to have accounts on both Twitter, a popular microblogging service, and Flickr, an online photo-sharing site, can be re-identified in the anonymous Twitter graph with only a 12% error rate.” +High-quality anonymization is not always possible without losing data utility. 
Additionally, many of today's analytical tools enable successful [data de-anonymization](https://www.cs.utexas.edu/~shmat/shmat_oak09.pdf) under many different circumstances: “To demonstrate its effectiveness on real-world networks, we show that a third of the users who can be verified to have accounts on both Twitter, a popular microblogging service, and Flickr, an online photo-sharing site, can be re-identified in the anonymous Twitter graph with only a 12% error rate.” Almost any data owner would benefit from monetizing it—as long as it does not harm their business as a whole. This is borne out by the widespread development of technologies for analyzing big data. diff --git a/docusaurus.config.js b/docusaurus.config.js index aa49eaab..e38429ec 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -42,10 +42,18 @@ const config = { from: "/hackathon", to: "/hackathon/about", }, - /*{ - from: "/colab", - to: "/colab/jupyter", - },*/ + { + from: "/cli/guides/quick-guide", + to: "/cli/guides/deploy-app", + }, + { + from: "/marketplace/account/web2", + to: "/marketplace/account#account-window" + }, + { + from: "/marketplace/account/web3", + to: "/marketplace/account#account-window" + }, ], }, ], @@ -195,13 +203,13 @@ const config = { position: "right", label: "Developers", },*/ - { + /*{ type: "doc", docId: "index", position: "right", label: "Whitepaper", docsPluginId: "whitepaper", - }, + },*/ ], }, prism: { @@ -230,7 +238,7 @@ const config = { "@easyops-cn/docusaurus-search-local", ({ hashed: true, - docsRouteBasePath: [/*"developers", */"marketplace", "whitepaper", "fundamentals", "cli"], + docsRouteBasePath: [/*"developers", */"marketplace", /*"whitepaper", */"fundamentals", "cli"], language: ["en"], highlightSearchTermsOnTargetPage: true, explicitSearchResultPath: true, diff --git a/src/.DS_Store b/src/.DS_Store deleted file mode 100644 index bdf7d604..00000000 Binary files a/src/.DS_Store and /dev/null differ diff --git a/src/css/custom.css 
b/src/css/custom.css index 56cd9594..20183983 100644 --- a/src/css/custom.css +++ b/src/css/custom.css @@ -34,6 +34,11 @@ ol > li::marker { font-weight: 500; } +/* hide the warning on unlisted pages */ +.theme-unlisted-banner { + display: none !important; +} + /* navbar */ .navbar__logo { height: 1.12em; diff --git a/src/pages/.DS_Store b/src/pages/.DS_Store deleted file mode 100644 index d86d37c8..00000000 Binary files a/src/pages/.DS_Store and /dev/null differ diff --git a/src/theme/Layout/index.js b/src/theme/Layout/index.js index eeac7d18..980964c8 100644 --- a/src/theme/Layout/index.js +++ b/src/theme/Layout/index.js @@ -87,9 +87,21 @@ export default function Layout(props) { ᐧ A tunnel client hosts a web server; it remains hidden behind the tunnel server and protected from external threats. + + TEE Confirmation Block (TCB) contains a unique device ID, equipment benchmark results,
various hashes, a device signature, and a certificate chain for signature verification.

+ + Trusted Loader generates and publishes the TCB on the blockchain every 24 hours. +
+ + // Stabs and abbreviations + Confidential Virtual Machine + + + Trusted Loader + ); } \ No newline at end of file diff --git a/static/.DS_Store b/static/.DS_Store deleted file mode 100644 index 20fef11f..00000000 Binary files a/static/.DS_Store and /dev/null differ diff --git a/static/files/deploy_apertus_official.sh b/static/files/deploy_apertus_official.sh new file mode 100755 index 00000000..1487a1c7 --- /dev/null +++ b/static/files/deploy_apertus_official.sh @@ -0,0 +1,154 @@ +#!/usr/bin/env bash +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + +BASE_DOMAIN="${BASE_DOMAIN:-monai-swarm.win}" +API_HOST="${API_HOST:-apertus-vllm.${BASE_DOMAIN}}" +MODEL_NAME="${MODEL_NAME:-swiss-ai/Apertus-8B-2509}" +MODEL_ENTRY_NAME="${MODEL_ENTRY_NAME:-apertus}" +RELEASE_NAME="${RELEASE_NAME:-apertus-official}" +if [ -z "${API_KEY:-}" ]; then + echo "API_KEY must be set. Execute:" >&2 + echo "read -rs API_KEY && export API_KEY" >&2 + echo "And then type a desired key." 
>&2 + exit 1 +fi +IMAGE_REPOSITORY="${IMAGE_REPOSITORY:-vllm/vllm-openai}" +IMAGE_TAG="${IMAGE_TAG:-v0.18.0}" +GPU_MEMORY_UTILIZATION="${GPU_MEMORY_UTILIZATION:-0.55}" +MAX_MODEL_LEN="${MAX_MODEL_LEN:-32768}" +CPU_REQUEST="${CPU_REQUEST:-8}" +MEMORY_REQUEST="${MEMORY_REQUEST:-48Gi}" +GPU_COUNT="${GPU_COUNT:-1}" +PVC_STORAGE="${PVC_STORAGE:-80Gi}" +INGRESS_CLASS="${INGRESS_CLASS:-nginx}" + +need() { command -v "$1" >/dev/null 2>&1 || { echo "Missing dependency: $1" >&2; exit 1; }; } +need kubectl +need helm + +NAMESPACE="${NAMESPACE:-$(kubectl config view --minify -o jsonpath='{..namespace}' 2>/dev/null || true)}" +if [ -z "${NAMESPACE}" ]; then + NAMESPACE="llm" +fi + +SECRET_NAME="${RELEASE_NAME}-auth" +SERVICE_NAME="${RELEASE_NAME}-${MODEL_ENTRY_NAME}-engine-service" +DEPLOY_LABEL_MODEL="${MODEL_ENTRY_NAME}" +INGRESS_NAME="${RELEASE_NAME}-api-ingress" + +echo "==> Runtime: vLLM (official helm chart)" +echo "==> Namespace: ${NAMESPACE}" +echo "==> Release: ${RELEASE_NAME}" +echo "==> API host: ${API_HOST}" +echo "==> Model: ${MODEL_NAME}" +echo "==> Model entry name: ${MODEL_ENTRY_NAME}" +echo "==> Image: ${IMAGE_REPOSITORY}:${IMAGE_TAG}" +echo "==> Max model length: ${MAX_MODEL_LEN}" +echo "==> GPU memory utilization: ${GPU_MEMORY_UTILIZATION}" +echo + +kubectl get ns "${NAMESPACE}" >/dev/null 2>&1 || kubectl create ns "${NAMESPACE}" + +helm repo add vllm https://vllm-project.github.io/production-stack >/dev/null 2>&1 || true +helm repo update >/dev/null 2>&1 + +cat < "${VALUES_FILE}" < Pods:" +kubectl -n "${NAMESPACE}" get pods -o wide +echo +echo "==> Services:" +kubectl -n "${NAMESPACE}" get svc -o wide +echo +echo "==> Ingress:" +kubectl -n "${NAMESPACE}" get ingress -o wide +echo +echo "==> Waiting for Apertus pod readiness..." 
+kubectl -n "${NAMESPACE}" wait --for=condition=ready pod \ + -l "model=${DEPLOY_LABEL_MODEL},helm-release-name=${RELEASE_NAME}" \ + --timeout=900s +echo +echo "==> Ready" +echo "Base URL: https://${API_HOST}/v1" +echo "Model: ${MODEL_NAME}" +echo "Example:" +echo " curl https://${API_HOST}/v1/models -H 'Authorization: Bearer ${API_KEY}'" diff --git a/static/files/deploy_medgemma_official.sh b/static/files/deploy_medgemma_official.sh new file mode 100755 index 00000000..7845a04e --- /dev/null +++ b/static/files/deploy_medgemma_official.sh @@ -0,0 +1,170 @@ +#!/usr/bin/env bash +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + +BASE_DOMAIN="${BASE_DOMAIN:-monai-swarm.win}" +API_HOST="${API_HOST:-medgemma-vllm.${BASE_DOMAIN}}" +MODEL_NAME="${MODEL_NAME:-google/medgemma-1.5-4b-it}" +MODEL_ENTRY_NAME="${MODEL_ENTRY_NAME:-medgemma}" +RELEASE_NAME="${RELEASE_NAME:-medgemma-official}" +if [ -z "${API_KEY:-}" ]; then + echo "API_KEY must be set. Execute:" >&2 + echo "read -rs API_KEY && export API_KEY" >&2 + echo "And then type a desired key." >&2 + exit 1 +fi +IMAGE_REPOSITORY="${IMAGE_REPOSITORY:-vllm/vllm-openai}" +IMAGE_TAG="${IMAGE_TAG:-v0.18.0}" +GPU_MEMORY_UTILIZATION="${GPU_MEMORY_UTILIZATION:-0.40}" +MAX_MODEL_LEN="${MAX_MODEL_LEN:-8192}" +CPU_REQUEST="${CPU_REQUEST:-8}" +MEMORY_REQUEST="${MEMORY_REQUEST:-48Gi}" +GPU_COUNT="${GPU_COUNT:-1}" +PVC_STORAGE="${PVC_STORAGE:-80Gi}" +INGRESS_CLASS="${INGRESS_CLASS:-nginx}" + +if [ -z "${HF_TOKEN:-}" ] && [ -f "${SCRIPT_DIR}/.hf_token" ]; then + HF_TOKEN="$(cat "${SCRIPT_DIR}/.hf_token")" +fi + +if [ -z "${HF_TOKEN:-}" ]; then + echo "HF_TOKEN is required for ${MODEL_NAME}." >&2 + echo "Set HF_TOKEN in the environment or create ${SCRIPT_DIR}/.hf_token." 
>&2 + exit 1 +fi + +need() { command -v "$1" >/dev/null 2>&1 || { echo "Missing dependency: $1" >&2; exit 1; }; } +need kubectl +need helm + +NAMESPACE="${NAMESPACE:-$(kubectl config view --minify -o jsonpath='{..namespace}' 2>/dev/null || true)}" +if [ -z "${NAMESPACE}" ]; then + NAMESPACE="llm" +fi + +SECRET_NAME="${RELEASE_NAME}-auth" +SERVICE_NAME="${RELEASE_NAME}-${MODEL_ENTRY_NAME}-engine-service" +DEPLOY_LABEL_MODEL="${MODEL_ENTRY_NAME}" +INGRESS_NAME="${RELEASE_NAME}-api-ingress" + +echo "==> Runtime: vLLM (official helm chart)" +echo "==> Namespace: ${NAMESPACE}" +echo "==> Release: ${RELEASE_NAME}" +echo "==> API host: ${API_HOST}" +echo "==> Model: ${MODEL_NAME}" +echo "==> Model entry name: ${MODEL_ENTRY_NAME}" +echo "==> Image: ${IMAGE_REPOSITORY}:${IMAGE_TAG}" +echo "==> Max model length: ${MAX_MODEL_LEN}" +echo "==> GPU memory utilization: ${GPU_MEMORY_UTILIZATION}" +echo + +kubectl get ns "${NAMESPACE}" >/dev/null 2>&1 || kubectl create ns "${NAMESPACE}" + +helm repo add vllm https://vllm-project.github.io/production-stack >/dev/null 2>&1 || true +helm repo update >/dev/null 2>&1 + +cat < "${VALUES_FILE}" < Pods:" +kubectl -n "${NAMESPACE}" get pods -o wide +echo +echo "==> Services:" +kubectl -n "${NAMESPACE}" get svc -o wide +echo +echo "==> Ingress:" +kubectl -n "${NAMESPACE}" get ingress -o wide +echo +echo "==> Waiting for MedGemma pod readiness..." 
+kubectl -n "${NAMESPACE}" wait --for=condition=ready pod \ + -l "model=${DEPLOY_LABEL_MODEL},helm-release-name=${RELEASE_NAME}" \ + --timeout=900s +echo +echo "==> Ready" +echo "Base URL: https://${API_HOST}/v1" +echo "Model: ${MODEL_NAME}" +echo "Example:" +echo " curl https://${API_HOST}/v1/models -H 'Authorization: Bearer ${API_KEY}'" diff --git a/static/files/usd_to_crypto.py b/static/files/usd_to_crypto.py new file mode 100644 index 00000000..88da57e5 --- /dev/null +++ b/static/files/usd_to_crypto.py @@ -0,0 +1,45 @@ +import requests +import sys + +def main(): + input_file = "input.txt" + output_file = "result.txt" + + try: + # Read the input amount + with open(input_file, "r") as f: + content = f.read().strip() + if not content: + raise ValueError("Input file is empty") + + try: + usd_amount = float(content) + except ValueError: + raise ValueError("Input is not a valid number") + + # Fetch BTC and ETH prices from CoinGecko + url = "https://api.coingecko.com/api/v3/simple/price?ids=bitcoin,ethereum&vs_currencies=usd" + response = requests.get(url, timeout=10) + if response.status_code != 200: + raise RuntimeError(f"API request failed with status code {response.status_code}") + + data = response.json() + btc_price = data["bitcoin"]["usd"] + eth_price = data["ethereum"]["usd"] + + # Calculate how much BTC and ETH can be bought + btc_amount = usd_amount / btc_price + eth_amount = usd_amount / eth_price + + # Write results to output file, rounded to 6 decimals + with open(output_file, "w") as f: + f.write(f"BTC: {btc_amount:.6f}\n") + f.write(f"ETH: {eth_amount:.6f}\n") + + except Exception as e: + # Write the error message to the result file + with open(output_file, "w") as f: + f.write(f"Error: {str(e)}\n") + +if __name__ == "__main__": + main()
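The conversion step committed in `static/files/usd_to_crypto.py` can be sanity-checked without network access. The sketch below is not part of the diff; it isolates the same USD-to-asset division using hypothetical placeholder prices in place of a live CoinGecko response.

```python
# Offline sketch of the conversion arithmetic in usd_to_crypto.py.
# The prices passed in are hypothetical placeholders, not live market data.
def convert(usd_amount: float, btc_price: float, eth_price: float) -> dict:
    # Same arithmetic as the script: how much of each asset the USD buys,
    # rounded to 6 decimal places as the script does when writing result.txt
    return {
        "BTC": round(usd_amount / btc_price, 6),
        "ETH": round(usd_amount / eth_price, 6),
    }

result = convert(100.0, btc_price=50_000.0, eth_price=2_000.0)
print(result)  # {'BTC': 0.002, 'ETH': 0.05}
```

This mirrors the script's behavior of truncating output to six decimals, so very small USD amounts against a large BTC price can round to zero.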