I am trying to predict future profit from a CSV dataset of copper-mine enterprise data.
I read the data:
import pandas as pd
data = pd.read_csv('data.csv')
I split the data:
data_target = data[target].astype(float)  # target is presumably the 'utilidad_operativa_dolar' column dropped below
data_used = data.drop(['Periodo', 'utilidad_operativa_dolar'], axis=1)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(data_used, data_target, test_size=0.4, random_state=33)
Create an SVR predictor:
from sklearn import svm
clf_svr = svm.SVR(kernel='rbf')
Standardize the data:
import numpy as np
from sklearn.preprocessing import StandardScaler
scalerX = StandardScaler().fit(x_train)
scalery = StandardScaler().fit(y_train.values.reshape(-1, 1))  # StandardScaler expects a 2D array
x_train = scalerX.transform(x_train)
y_train = scalery.transform(y_train.values.reshape(-1, 1))
x_test = scalerX.transform(x_test)
y_test = scalery.transform(y_test.values.reshape(-1, 1))
print(np.max(x_train), np.min(x_train), np.mean(x_train), np.max(y_train), np.min(y_train), np.mean(y_train))
Then fit and predict:
clf_svr.fit(x_train, y_train.ravel())  # SVR expects a 1D target
y_pred = clf_svr.predict(x_test)
And the predictions come out standardized as well. I want the predicted data back in the original scale; how can I do that?
2 Answers
#1 (5 votes)
You would want to use the inverse_transform method of your y-scaler. Note that you can do all this more concisely using a Pipeline, as follows:
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
pipeline = Pipeline([('scaler', StandardScaler()), ('estimator', SVR(kernel='rbf'))])
y_scaler = StandardScaler()
y_train = y_scaler.fit_transform(y_train.reshape(-1, 1))  # the scaler needs a 2D array
pipeline.fit(x_train, y_train.ravel())
# predict() returns values in the scaled space; invert the transform to get original units
y_pred = y_scaler.inverse_transform(pipeline.predict(x_test).reshape(-1, 1)).ravel()
Many would just fit the scaler on the whole target at once, which leaks test-set statistics into the transform, and get away with it without much visible harm. But you are doing well not to fall for this. AFAIK, using a separate scaler for the y data, as shown in the code, is the only way to go.
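To see concretely what inverse_transform does, here is a minimal self-contained sketch with made-up toy values; it only illustrates the scaler's round trip back to original units:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy target values (synthetic, for illustration only)
y = np.array([[100.0], [250.0], [400.0]])

scaler = StandardScaler()
y_scaled = scaler.fit_transform(y)           # standardized: zero mean, unit variance
y_back = scaler.inverse_transform(y_scaled)  # back to the original units

print(np.allclose(y, y_back))  # True
```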
#2 (2 votes)
I know this question is old and the answer was correct at the time, but there is a built-in scikit-learn way of doing this now:
http://scikit-learn.org/dev/modules/compose.html#transforming-target-in-regression
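The page above describes TransformedTargetRegressor, which fits the target scaler internally and applies the inverse transform automatically on predict. A minimal sketch using synthetic data as a stand-in for the mine dataset:

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for the mine data: 100 rows, 5 features
rng = np.random.RandomState(33)
X = rng.rand(100, 5)
y = 1000.0 * X[:, 0] + 50.0  # target in original (unscaled) units

model = TransformedTargetRegressor(
    regressor=Pipeline([('scaler', StandardScaler()),
                        ('svr', SVR(kernel='rbf'))]),
    transformer=StandardScaler(),  # scales y for fitting, inverts on predict
)
model.fit(X, y)
y_pred = model.predict(X)  # already in original units, no manual inverse_transform
```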