## 1. Preparation

(Optional 1) If you mainly use Python for data analysis, you can simply install Anaconda (see 《Python数据分析与挖掘好帮手—Anaconda》), which bundles Python and pip.

(Optional 2) For small Python projects, we also recommend the VSCode editor (see 《Python 编程的最好搭档—VSCode 详细指南》).

On Windows, open Cmd (Start, Run, type CMD); on macOS, open Terminal (Command + Space, then type Terminal). Then run `pip install autograd` to install the dependency (Autograd, the automatic differentiation library used throughout this article).

## 2. Computing Gradients

Autograd's `grad` takes a Python function and returns a new function that evaluates its derivative. The derivative of y = x/2 is 0.5 regardless of x (the argument 3.0 below is arbitrary):

```python
# 公众号 Python实用宝典
from autograd import grad

def oneline(x):
    y = x / 2
    return y

grad_oneline = grad(oneline)
print(grad_oneline(3.0))
```

Running the script:

```
(base) G:\push\20220724>python 1.py
0.5
```
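As a quick sanity check that needs no autodiff library at all, a central finite difference approximates the same derivative; since y = x/2 is linear, the result is 0.5 at every point (`central_diff` is a helper written here for illustration):

```python
def oneline(x):
    y = x / 2
    return y

def central_diff(f, x, h=1e-6):
    # (f(x+h) - f(x-h)) / (2h) approximates f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

print(central_diff(oneline, 3.0))  # ≈ 0.5
```

Unlike finite differences, Autograd computes this derivative exactly rather than approximately, with no step size `h` to tune.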

The same works for nonlinear functions. Here tanh is written out with `np.exp`, so we import Autograd's NumPy wrapper, which records the operations it needs to differentiate, and evaluate the derivative at x = 1.0:

```python
# 公众号 Python实用宝典
import autograd.numpy as np  # Autograd's thin wrapper around NumPy
from autograd import grad

def tanh(x):
    y = np.exp(-2.0 * x)
    return (1.0 - y) / (1.0 + y)

grad_tanh = grad(tanh)
print(grad_tanh(1.0))
```

Running the script:

```
(base) G:\push\20220724>python 1.py
0.419974341614026
```
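That number matches the closed form tanh'(x) = 1 − tanh(x)², which we can verify with the standard library alone:

```python
import math

x = 1.0
analytic = 1.0 - math.tanh(x) ** 2  # tanh'(x) = 1 - tanh(x)^2
print(analytic)  # ≈ 0.419974, matching the Autograd result above
```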

You can also differentiate as many times as you like and plot the results. Because `grad` requires a scalar-valued function, differentiating over a whole array of inputs uses `elementwise_grad` instead; here we plot tanh together with its first and second derivatives:

```python
# 公众号 Python实用宝典
import autograd.numpy as np
import matplotlib.pyplot as plt
from autograd import elementwise_grad as egrad

def tanh(x):
    y = np.exp(-2.0 * x)
    return (1.0 - y) / (1.0 + y)

x = np.linspace(-7, 7, 200)
plt.plot(x, tanh(x),
         x, egrad(tanh)(x),         # first derivative
         x, egrad(egrad(tanh))(x))  # second derivative
plt.show()
```
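Where do these exact derivatives come from? Autograd applies the chain rule to every elementary operation as your function runs (in reverse mode, by tracing the computation). To make the idea concrete, here is a minimal forward-mode sketch using dual numbers; `Dual`, `dexp`, and `derivative` are illustrative names for this sketch, not part of Autograd's API:

```python
import math

class Dual:
    """A value paired with its derivative; arithmetic applies the chain rule."""
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v + o.v, self.d + o.d)
    __radd__ = __add__

    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v - o.v, self.d - o.d)

    def __rsub__(self, o):
        return Dual(o) - self

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.v * o.v, self.d * o.v + self.v * o.d)
    __rmul__ = __mul__

    def __truediv__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # quotient rule: (u/v)' = (u'v - uv') / v^2
        return Dual(self.v / o.v, (self.d * o.v - self.v * o.d) / (o.v * o.v))

def dexp(x):
    # d/dx exp(x) = exp(x)
    return Dual(math.exp(x.v), math.exp(x.v) * x.d)

def derivative(f, x):
    # Seed the derivative slot with 1.0 and read it back after evaluation.
    return f(Dual(x, 1.0)).d

def tanh(x):
    y = dexp(-2.0 * x)
    return (1.0 - y) / (1.0 + y)

print(derivative(tanh, 1.0))  # ≈ 0.419974, same as grad(tanh)(1.0)
```

The derivative is propagated through each operation exactly, which is why there is no step-size error as with finite differences.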

## 3. Implementing a Logistic Regression Model

With `grad`, training a model reduces to writing down the loss function and letting Autograd produce its gradient. The example below builds a toy dataset, defines a logistic model and its negative log-likelihood loss, then optimizes the weights with plain gradient descent:

```python
# 公众号 Python实用宝典
import autograd.numpy as np
from autograd import grad

# Build a toy dataset.
inputs = np.array([[0.52, 1.12,  0.77],
                   [0.88, -1.08, 0.15],
                   [0.52, 0.06, -1.30],
                   [0.74, -2.49, 1.39]])
targets = np.array([True, True, False, True])

def sigmoid(x):
    return 0.5 * (np.tanh(x / 2.) + 1)

def logistic_predictions(weights, inputs):
    # Outputs probability of a label being true according to logistic model.
    return sigmoid(np.dot(inputs, weights))

def training_loss(weights):
    # Training loss is the negative log-likelihood of the training labels.
    preds = logistic_predictions(weights, inputs)
    label_probabilities = preds * targets + (1 - preds) * (1 - targets)
    return -np.sum(np.log(label_probabilities))

# Define a function that returns gradients of training loss using Autograd.
training_gradient_fun = grad(training_loss)

# Optimize weights using gradient descent.
weights = np.array([0.0, 0.0, 0.0])
print("Initial loss:", training_loss(weights))
for i in range(100):
    weights -= training_gradient_fun(weights) * 0.01

print("Trained loss:", training_loss(weights))
```

Running the script, the loss drops over 100 gradient-descent steps:

```
(base) G:\push\20220724>python regress.py
Initial loss: 2.772588722239781
Trained loss: 1.067270675787016
```
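One detail worth noting: the example computes sigmoid as `0.5 * (np.tanh(x / 2.) + 1)` rather than the textbook `1 / (1 + e^(-x))`. The two are algebraically identical, and the tanh form is commonly preferred because it avoids overflowing `exp` for large negative inputs. A standard-library check of the identity on a few sample points:

```python
import math

def sigmoid_tanh(x):
    # The form used in the example above.
    return 0.5 * (math.tanh(x / 2.0) + 1.0)

def sigmoid_direct(x):
    # Textbook definition: 1 / (1 + e^(-x)).
    return 1.0 / (1.0 + math.exp(-x))

for x in [-5.0, -1.0, 0.0, 2.5, 10.0]:
    assert abs(sigmoid_tanh(x) - sigmoid_direct(x)) < 1e-12
print("identity holds on sample points")
```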

Python实用宝典 ( pythondict.com )