You can use fastai without any installation by using Google Colab. In fact, every page of this documentation is also available as an interactive notebook - click "Open in Colab" at the top of any page to open it (be sure to change the Colab runtime to "GPU" so it runs quickly!). See the fast.ai documentation on Using Colab for more information.
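For example, a Colab cell might look like the sketch below, which follows fastai's high-level vision API; the pets dataset, the is_cat labeling rule, and the single fine-tuning epoch are illustrative choices, not requirements.

from fastai.vision.all import *

# Download and extract the Oxford-IIIT Pets sample dataset
path = untar_data(URLs.PETS)/'images'

def is_cat(x):
    # In this dataset, cat image filenames start with an uppercase letter
    return x[0].isupper()

# Build dataloaders, create a pretrained learner, and fine-tune for one epoch
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)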
# Download best-matching version of specific model for your spaCy installation
python -m spacy download en_core_web_sm
# pip install .tar.gz archive or .whl from path or URL
pip install /Users/you/en_core_web_sm-3.0.0.tar.gz
pip install /Users/you/en_core_web_sm-3.0.0-py3-none-any.whl
pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0.tar.gz
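After installing a pipeline package by any of the methods above, it can be loaded by name in Python. A minimal sketch, assuming en_core_web_sm has been installed:

import spacy

# Load the installed pipeline package and run it on a short text
nlp = spacy.load("en_core_web_sm")
doc = nlp("This is a sentence.")

# Print each token with its part-of-speech tag
print([(token.text, token.pos_) for token in doc])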
Copyright 2015 Donne Martin
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
I would like to thank everyone who contributed to this project, whether by providing useful feedback, filing issues, or submitting pull requests. Special thanks go to Haesun Park and Ian Beauregard, who reviewed every notebook and submitted many PRs, including help on some of the exercise solutions. Thanks as well to Steven Bunkley and Ziembla, who created the docker directory, and to GitHub user SuperYorio, who helped on some exercise solutions.
@article{zhang2021dive,
title={Dive into Deep Learning},
author={Zhang, Aston and Lipton, Zachary C. and Li, Mu and Smola, Alexander J.},
journal={arXiv preprint arXiv:2106.11342},
year={2021}
}
Hands-on: If you search online for "production ML" or MLOps, you'll find great blog posts and tweets. But to truly understand these concepts, you need to implement them. Unfortunately, you rarely get to see the inner workings of ML running in production because of scale, proprietary content, and expensive tools. However, Made With ML is free, open, and living, which makes it a perfect learning opportunity for the community.
# CUDA only: Add LAPACK support for the GPU if needed
conda install -c pytorch magma-cuda110 # or the magma-cuda* that matches your CUDA version from https://anaconda.org/pytorch/repo
On macOS
# Add these packages if torch.distributed is needed
conda install pkg-config libuv
On Windows
# Add these packages if torch.distributed is needed.
# Distributed package support on Windows is a prototype feature and is subject to changes.
conda install -c conda-forge libuv=1.39
Get the PyTorch source
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
# if you are updating an existing checkout
git submodule sync
git submodule update --init --recursive --jobs 0
build/temp.linux-x86_64-3.7/torch/csrc/stub.o: file not recognized: file format not recognized
collect2: error: ld returned 1 exit status
error: command 'g++' failed with exit status 1
Regressions sometimes appear in new versions of Visual Studio, so it is best to use the same Visual Studio version 16.8.5 as PyTorch CI. While PyTorch CI uses Visual Studio BuildTools, you can also use Visual Studio Enterprise, Professional, or Community.
Note on OpenMP: the desired OpenMP implementation is Intel OpenMP (iomp). In order to link against iomp, you will need to manually download the library and set up the build environment by tweaking CMAKE_INCLUDE_PATH and LIB. The instruction here is an example of setting up both MKL and Intel OpenMP. Without these CMake configurations, the Microsoft Visual C OpenMP runtime (vcomp) will be used.
:: [Optional] If you want to build with the VS 2017 generator for old CUDA and PyTorch, please change the value in the next line to `Visual Studio 15 2017`.
:: Note: This value is useless if Ninja is detected. However, you can force that by using `set USE_NINJA=OFF`.
set CMAKE_GENERATOR=Visual Studio 16 2019

:: Read the content in the previous section carefully before you proceed.
:: [Optional] If you want to override the underlying toolset used by Ninja and Visual Studio with CUDA, please run the following script block.
:: "Visual Studio 2019 Developer Command Prompt" will be run automatically.
:: Make sure you have CMake >= 3.12 before you do this when you use the Visual Studio generator.
set CMAKE_GENERATOR_TOOLSET_VERSION=14.27
set DISTUTILS_USE_SDK=1
for /f "usebackq tokens=*" %i in (`"%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -version [15^,16^) -products * -latest -property installationPath`) do call "%i\VC\Auxiliary\Build\vcvarsall.bat" x64 -vcvars_ver=%CMAKE_GENERATOR_TOOLSET_VERSION%

:: [Optional] If you want to override the CUDA host compiler
set CUDAHOSTCXX=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\bin\HostX64\x64\cl.exe
python setup.py install
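Once the build finishes, a quick sanity check (a minimal sketch; run it from outside the source tree so the freshly installed package is imported) is to print the version and CUDA availability:

import torch

# Confirm the freshly built package imports and report whether CUDA support was enabled
print(torch.__version__)
print(torch.cuda.is_available())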