from interval import interval
a = interval[1,5]
# interval([1.0, 5.0])
print(3 in a)
# True
You can also construct a multi-interval made up of several components:
from interval import interval
a = interval([0, 1], [2, 3], [10, 15])
print(2.5 in a)
# True
The interval.hull method merges several intervals into one, taking their overall minimum and maximum as the new bounds:
from interval import interval
a = interval.hull((interval[1, 3], interval[10, 15], interval[16, 2222]))
# interval([1.0, 2222.0])
print(1231 in a)
# True
To compute the union of intervals:
from interval import interval
a = interval.union([interval([1, 3], [4, 6]), interval([2, 5], 9)])
# interval([1.0, 6.0], [9.0])
print(5 in a)
# True
print(8 in a)
# False
for epoch in range(2):
    running_loss = 0.0
    for i, data in enumerate(trainloader):
        # move the batch to the device owned by this process (its local rank)
        inputs, labels = data[0].to(model_engine.local_rank), data[1].to(
            model_engine.local_rank)

        outputs = model_engine(inputs)
        loss = criterion(outputs, labels)

        # backward pass and optimizer step go through the DeepSpeed engine
        model_engine.backward(loss)
        model_engine.step()

        # print statistics
        running_loss += loss.item()
        if i % args.log_interval == (args.log_interval - 1):
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / args.log_interval))
            running_loss = 0.0
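The loop above relies on model_engine, trainloader, criterion and args being created beforehand. As a minimal sketch of that setup (the names net, trainset and the default log interval are assumptions for illustration, not part of the original), the DeepSpeed initialization typically looks like this:

import argparse
import deepspeed
import torch.nn as nn

# command-line arguments: our own log interval plus DeepSpeed's config flags
parser = argparse.ArgumentParser()
parser.add_argument('--log_interval', type=int, default=200)  # assumed default
parser = deepspeed.add_config_arguments(parser)
args = parser.parse_args()

# net is assumed to be an nn.Module and trainset a torch Dataset defined earlier;
# deepspeed.initialize wraps them and returns the engine and a distributed dataloader
model_engine, optimizer, trainloader, _ = deepspeed.initialize(
    args=args, model=net, model_parameters=net.parameters(), training_data=trainset)

criterion = nn.CrossEntropyLoss()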
2.5 Test logic
The logic for testing the model is similar to the training logic:
correct = 0
total = 0
# no gradients are needed for evaluation
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images.to(model_engine.local_rank))
        # the predicted class is the index of the largest logit
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels.to(
            model_engine.local_rank)).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' %
      (100 * correct / total))
(gs3_9) zjr@sgd-linux-1:~/cnn_test$ asciinema rec first.cast
asciinema: recording asciicast to first.cast
asciinema: press <ctrl-d> or type "exit" when you're done
This means the recording will be saved to first.cast in the current directory; to stop recording, press Ctrl+D or type "exit".
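Once recording has stopped, the .cast file can be replayed directly in the terminal with asciinema's built-in player (the prompt below simply mirrors the session shown above):

(gs3_9) zjr@sgd-linux-1:~/cnn_test$ asciinema play first.cast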