VERSIONING
Built for ML Experts, ML Teams, ML Companies, ML Dabblers
Bring your ML development into focus by connecting your data, code and models in XetHub.
git-xet 0.12.4 filter started
Xet: Deduplicating data blocks: 13.49 GiB | 279.16 MiB/s, done.
13.49 GiB added, stored 13.49 GiB (0.0% reduction)
[main 14a4e36] Add mistral model to repo
4 files changed, 81 insertions(+)
create mode 100644 mistral-7B-v0.1/RELEASE
create mode 100644 mistral-7B-v0.1/consolidated.00.pth
create mode 100644 mistral-7B-v0.1/params.json
create mode 100644 mistral-7B-v0.1/tokenizer.model
Xet: Uploading data blocks: 100% (921 / 921), 13.49 GiB | 2.67 MiB/s, done.
Enumerating objects: 8, done.
Counting objects: 100% (8/8), done.
Delta compression using up to 10 threads
Compressing objects: 100% (7/7), done.
Writing objects: 100% (7/7), 1.36 KiB | 1.36 MiB/s, done.
Total 7 (delta 1), reused 0 (delta 0), pack-reused 0
remote: Processing 1 reference
remote: Processed 1 reference in total
To https://xethub.com/erinys/mistral-7b.git
a7586d0..14a4e36 main -> main
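The transcript above is a standard Git add/commit/push flow with the git-xet filter layered on top. A minimal local sketch with plain Git (no git-xet, and placeholder file contents; only the file names mirror the transcript):

```shell
# Plain-Git sketch of the commit shown above; git-xet adds the
# deduplication and upload steps on top of this. Contents are
# placeholders, not the real model files.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "You"
mkdir mistral-7B-v0.1
echo '{"dim": 4096}' > mistral-7B-v0.1/params.json
echo "v0.1" > mistral-7B-v0.1/RELEASE
git add mistral-7B-v0.1
git commit -q -m "Add mistral model to repo"
git log --oneline -1
```

On a XetHub remote, the push step is where the "Deduplicating data blocks" and "Uploading data blocks" lines in the transcript come from.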
Join the teams doing cutting-edge work on XetHub
XetHub is perfect for teams of any size
We provide peace of mind and robustness when working with your ML assets.
For data scientists
For ML teams
For enterprises
Never worry about losing your work again
One lightweight extension to easily version all your files.
Store everything in one place
Develop with data, notebooks, models, and code in your Git repository.
No extra tools or servers
Version experiments, models, and datasets using Git and let XetHub manage all your large files.
Rewind and restore anything
Experiment confidently knowing that every change is preserved.
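Because every commit is preserved, rewinding a single file is ordinary Git. A minimal local sketch (plain Git and a hypothetical metrics file; XetHub extends the same mechanics to large files):

```shell
# Two commits, then restore one file from the earlier commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "You"
echo "accuracy,0.91" > metrics.csv
git add metrics.csv && git commit -q -m "baseline"
echo "accuracy,0.42" > metrics.csv      # a bad experiment
git add metrics.csv && git commit -q -m "regression"
git checkout -q HEAD~1 -- metrics.csv   # rewind just this file
cat metrics.csv                         # back to the baseline numbers
```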
The Xet Ecosystem
Integrates with ML Libraries & Platforms
Visualize Data
Workflow Orchestration
Data Access
Deployment
The Hub made for large repos
Our experience is built around accessing, understanding, and collaborating on your large repos.
Visually understand your data and how it has changed.
XetHub automatically displays visual summaries of each CSV file at each commit. At a glance, see how your metrics and distributions have changed.
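The kind of per-column roll-up involved can be sketched with awk (hypothetical metrics file; the actual summaries and their rendering are XetHub's):

```shell
# Hypothetical CSV and a tiny roll-up of its numeric "score" column:
# row count, mean, and max.
printf 'label,score\ncat,0.9\ndog,0.4\ncat,0.7\n' > metrics.csv
awk -F, 'NR > 1 { n++; s += $2; if ($2 > max) max = $2 }
         END { printf "rows=%d mean=%.2f max=%.1f\n", n, s / n, max }' metrics.csv
```

Comparing this roll-up across two commits of the same CSV is what makes a drift in metrics visible at a glance.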
Run live apps with your data
No more worrying about where to move your data or which service should host your data apps. Build Python apps with tools such as Gradio and Streamlit and run them directly in your repo.
Watch the GBs melt away
Our novel block-level deduplication algorithm saves you time and disk space. One less thing for you to manage so you can get back to solving the truly challenging problems.
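The idea behind block-level deduplication can be sketched with coreutils: split a file into blocks, hash each one, and count how many distinct blocks actually need storing. This toy uses fixed 1 KiB blocks; XetHub's production algorithm is content-defined and far more sophisticated.

```shell
# Toy fixed-size block dedup: 8 KiB of data in two repetitive runs
# splits into 8 blocks, of which only 2 are distinct.
set -e
work=$(mktemp -d)
cd "$work"
head -c 4096 /dev/zero | tr '\0' 'x' >  model.bin
head -c 4096 /dev/zero | tr '\0' 'y' >> model.bin
split -b 1024 model.bin blk_
total=$(ls blk_* | wc -l)
unique=$(sha256sum blk_* | awk '{ print $1 }' | sort -u | wc -l)
echo "blocks=$total unique=$unique"
```

Storing only the unique blocks, keyed by hash, is what turns repeated data, like near-identical model checkpoints, into near-zero incremental storage.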
“As we performed our technical evaluation of XetHub, we found that it scaled well as our repo sizes got larger. It was easy to adopt and required almost no training for the engineers on the team. The usage-based pricing model makes it easy to align our costs with system utilization, unlike some other models based on team size.”
Daniel Maturana
Co-founder and Chief ML Scientist
Simplify your ML development today
Stop juggling multiple tools and streamline your workflow with XetHub.
© 2024 XetData, Inc. All rights reserved.