@NAS Chain Just an update on what we've achieved in V2 so far. When we first launched the subnet, the initial model submission with 93% accuracy had 1.1 million parameters. Today, the winning model achieves the same accuracy with only 0.2 million parameters. This represents the same model performance with an 81% compression rate in terms of memory requirements. Full details will be available in the upcoming white paper.
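As a sanity check, the quoted compression figure follows directly from the parameter counts (assuming memory scales roughly linearly with parameter count); a quick sketch:

```python
def param_reduction(before: int, after: int) -> float:
    """Fraction of parameters removed between two models."""
    return 1.0 - after / before

# 1.1M -> 0.2M parameters at the same 93% accuracy
reduction = param_reduction(1_100_000, 200_000)
print(f"{reduction:.1%}")  # ~81.8%, i.e. the ~81% compression quoted above
```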
@Masa Friendly reminder to validators and miners alike: please upgrade to the 1.0.2 release ASAP, as we will soon be enforcing this version via hyperparameters. We currently use Python 3.12
git fetch --tags
git checkout v1.0.2
pip install -e .
Be sure to restart your process(es) afterwards!
@Sturdy From Day 1, our goal has been integrating SN10 into every application across DeFi. Today, we're one step closer to that goal.
Morpho is one of the largest DeFi applications by TVL, with >$2b in deposits. We're excited to share that SN10 is coming to Morpho! Thanks to SN10, users will be able to earn AI-optimized yields on USDC Morpho Vaults. Additionally, Aera (a vault infrastructure provider incubated by Gauntlet) will be powering the aggregator's execution and integrating SN10.
Expect to see even more applications building on top of SN10 soon! In the meantime, check out the announcement thread below https://x.com/MorphoLabs/status/1846940138775236758
@Infinite Games We just pushed our most recent update and are about to increase the weight version, if you are a validator please update asap 🙏
@Infinite Games We will push only two updates today:
- a new set of 20+ daily FRED events (https://github.com/amedeo-gigaver/infinite_games/blob/main/docs/fred-events.txt)
- streaming of the miners' predictions to our database
  - this will allow for aggregation of the predictions, enabling group components in the scoring and subnet-wide predictions
Regarding scoring, we first want to address the issue of new miners joining the network and being penalized for events that were already streaming, since this would become a more significant problem under the log scoring rule. (cc @Discord User @Discord User @Discord User )
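For intuition on why this matters, here is a minimal sketch of the standard binary log scoring rule (this is the textbook rule, not necessarily the subnet's exact implementation): a miner that never saw an already-streaming event and falls back to a default answer eats a flat penalty on every such event.

```python
import math

def log_score(prob: float, outcome: int, eps: float = 1e-9) -> float:
    """Standard binary log score: log of the probability assigned to the
    outcome that actually happened. Clipped to avoid -inf on 0/1 predictions."""
    p = prob if outcome == 1 else 1.0 - prob
    p = min(max(p, eps), 1.0 - eps)
    return math.log(p)

# A default 0.5 on an unseen event scores -0.693 regardless of outcome,
# while an informed 0.9 call scores -0.105 when correct:
print(round(log_score(0.5, 1), 3))
print(round(log_score(0.9, 1), 3))
```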
@Infinite Games We will also soon start streaming a new set of events based on the FRED economic database (https://fred.stlouisfed.org); you can find a sample here: https://github.com/amedeo-gigaver/infinite_games/blob/main/docs/fred-events.txt.
@Proprietary Trading Network Hi everyone, as you may know, we have been hard at work on a plagiarism detection system over the past few months. We will be releasing a PR for the system this week, but want to provide some insight into the operating principles and history of development we’ve been pursuing behind the scenes. We’re releasing a white paper which outlines this process, which can be found here: https://docs.taoshi.io/PlagiarismWhitePaper.pdf.
In general, tracking plagiarism will help improve the stability of our network and protect the signal integrity of our miners who have trained custom models. The initial roll-out will be in ghost mode, and won’t affect miners as we will only be tracking statistics to ensure smooth operation.
The full integration with eliminations is set to release sometime in the next few weeks after we finish thoroughly testing. Looking forward to your feedback!
@Targon New update for valis! 4.1.1 fixes some exploits in fast logprobs.
What was the exploit? Glad you asked. Someone distilled a larger model down to a smaller model and always generated responses with extremely low temperature. This would essentially always pass both logprob checks because of the distillation and the low tolerance for variety. This has been patched and no longer passes checks. Cheers!
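To make the loophole concrete, here is a toy sketch of a naive logprob acceptance rule (the function name and threshold are illustrative, not Targon's actual check): a model sampling at near-zero temperature always emits its argmax token, so every token's logprob is close to 0 and the check passes trivially.

```python
def passes_logprob_check(token_logprobs: list[float], threshold: float = -4.0) -> bool:
    """Toy acceptance rule: every generated token must have been reasonably
    likely under the reference model. Near-greedy sampling makes every
    logprob close to 0, so this naive check is trivially satisfied."""
    return all(lp > threshold for lp in token_logprobs)

greedy_tokens = [-0.05, -0.10, -0.02]       # near-argmax outputs from a low-temp model
print(passes_logprob_check(greedy_tokens))  # slips through the naive check
```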
@Sportstensor We've applied the gaussian filter patch which penalizes unrealistic predictions such as 0.00001% or even 100% win probability. For the details, please see: https://discord.com/channels/799672011265015819/1263142301786574889/1295966982000345122
The gaussian filter will be retroactively applied to all scores, recalculating everything including the excessively high scores achieved by being naughty.
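For intuition, a Gaussian filter of this kind can be sketched as a weight that decays as a prediction drifts away from a plausible range; the `mu` and `sigma` values below are illustrative placeholders, not the subnet's actual parameters.

```python
import math

def gaussian_weight(p: float, mu: float = 0.5, sigma: float = 0.2) -> float:
    """Down-weight win probabilities far from the plausible range.
    mu and sigma are illustrative placeholders, not the real parameters."""
    return math.exp(-((p - mu) ** 2) / (2 * sigma ** 2))

print(round(gaussian_weight(0.55), 3))      # a plausible prediction keeps almost full credit
print(round(gaussian_weight(0.0000001), 3)) # an extreme one is heavily penalized
```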
max_burn has been set to 0.5t to bring the registration fee down. It will be moved back up to 10t shortly.
Best of luck discovering those edges fam! 🔍 ✨
@Infinite Games Two important updates:
We will bring back the log scoring rule tomorrow at 11 EST and increase the weight version. Please check the PR here https://github.com/amedeo-gigaver/infinite_games/pull/33/files. We will not push batching yet.
Some of our events are now resolved through UMA: https://oracle.uma.xyz/propose?project=Unknown&project=Infinite+Games Anyone can resolve them and receive a small reward. Please consider doing so to support subnet development!
Release 1.6.7
Updates for Miners:
- Fixed several bugs in the upload system.
- Added a retry function for Hugging Face uploads in case the HF API becomes unstable.
- Improved the upload process, allowing miners on the same VPS to upload data without conflicts.
- Miners can now retrieve the DD list using the --gravity argument.
Updates for Validators:
- Fixed an issue where validators using local Subtensor nodes couldn't fetch the DD list.
Thank you all for your tremendous support in developing the subnet to the moon and back!
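For anyone curious what a retry mechanism like the Hugging Face one above typically looks like, here is a generic retry-with-exponential-backoff sketch; `upload_fn` and the delay parameters are hypothetical, not the subnet's actual code.

```python
import time

def upload_with_retry(upload_fn, max_attempts: int = 5, base_delay: float = 2.0):
    """Retry a flaky upload call with exponential backoff.
    upload_fn is any zero-argument callable that raises on failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return upload_fn()
        except Exception as exc:
            if attempt == max_attempts:
                raise  # out of attempts: surface the last error
            delay = base_delay * 2 ** (attempt - 1)
            print(f"Upload failed ({exc}); retrying in {delay:.0f}s "
                  f"(attempt {attempt}/{max_attempts})")
            time.sleep(delay)
```

In practice you would wrap the actual HF upload call (and catch only transient errors rather than bare `Exception`), but the backoff skeleton is the same.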
A little insight: check the PR on the new data source, soon to be added.
@Compute user id: 1291320783192326148 was the scammer, deleted before getting ban hammered
@Sturdy Release 1.5.1 is currently in the works and is scheduled to be released during the usual time on Tuesday morning EST next week. Stay tuned 🙂 . https://github.com/Sturdy-Subnet/sturdy-subnet/pull/57
@Sportstensor give this guy a warning please, he’s been continuously disrespectful to our community and making false claims about the team
Good Morning/Afternoon/Evening Sportstensorians!
Just merged in the update that should solidify the incentive mechanism for the long haul. I'm sure there will be some slight adjustments along the way, but this update puts guardrails on the scoring of the calculated edge so it aligns with reality. No more predictions of 0.00001, which generate a ridiculous edge. The math behind this will be detailed in our whitepaper as well.
We have removed the probability minimum validation, as this new update handles it all and allows for the more nuanced predictions that can happen with NBA games (coming next week!)
All the technical details can be found here: https://github.com/sportstensor/sportstensor/pull/69
Validators: please pull the latest and restart!
Happy predicting!
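One simple way to picture a guardrail of this kind is capping the raw edge so a degenerate prediction cannot blow up the score; the cap value and function below are purely illustrative, not the mechanism in the PR.

```python
def bounded_edge(pred_prob: float, implied_prob: float, cap: float = 0.25) -> float:
    """Guardrail sketch: cap the raw edge so an extreme prediction
    (e.g. 0.00001) cannot generate an outsized score. cap is illustrative."""
    raw = pred_prob - implied_prob
    return max(-cap, min(cap, raw))

# A degenerate 0.00001 prediction is now bounded instead of runaway:
print(bounded_edge(0.00001, 0.5))  # -0.25, not -0.49999
```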
@ReadyAI Starting at 12pm PST today (in just under 2 hours) we will be closing down the database and setting the weights_version to the most recent version of the subnet repo.
Validators, if you haven't already, please place your API key in the top-level directory of the subnet repo and restart. You'll know it's working if you don't see any warnings during validator execution. Let me know if you have any questions!
@Sturdy wandb is currently experiencing issues with their user authentication, and it seems a bunch of runs crashed because of this. Validators who currently have wandb enabled can temporarily disable it by adding the --wandb.off flag when running their validator to prevent it from breaking.
@Omron The 3.0.0 release has been updated to include unique circuit names for Jolt models. We have 3/4 validators configured with these new changes; the fourth will be available shortly. To pull and test, please execute the following from the omron directory. After completing thorough testing on testnet, these changes will be deployed to finney via auto-update at approximately <t:1729094400:F>
git checkout testnet
git pull
rm -r ./neurons/deployment_layer/model_b7d33e7c19360c042d94c5a7360d7dc68c36dd56c449f7c49164a0098769c01f/target
pm2 restart all
@Sturdy Release 1.5.0 has been merged into mainnet! Miners and validators please update:
git pull
pip install -r requirements.txt
pip install -e .
Registration will be closed for the next 6 hours, and the minimum weights version of the subnet has been set to 1050, so please update ASAP!