I am trying to find the optimal solution to the following system of equations in Python:
(x-x1)^2 + (y-y1)^2 - r1^2 = 0
(x-x2)^2 + (y-y2)^2 - r2^2 = 0
(x-x3)^2 + (y-y3)^2 - r3^2 = 0
Given the following values for each circle's centre (x, y) and radius (r):
x1, y1, r1 = (0, 0, 0.88)
x2, y2, r2 = (2, 0, 1)
x3, y3, r3 = (0, 2, 0.75)
What is the best way to find the optimal solution for the point (x, y)? Using the above example, it would be approximately:
~ (1, 1)
4 Answers
#1 (14 votes)
If I understand your question correctly, I think this is what you're after:
from scipy.optimize import minimize
import numpy as np

def f(coord, x, y, r):
    # sum of the (signed) circle residuals at the candidate point
    return np.sum((coord[0] - x)**2 + (coord[1] - y)**2 - r**2)

x = np.array([0, 2, 0])
y = np.array([0, 0, 2])
r = np.array([.88, 1, .75])

# initial (bad) guess at (x,y) values
initial_guess = np.array([100, 100])

res = minimize(f, initial_guess, args=(x, y, r))
Which yields:
>>> print(res.x)
[ 0.66666666 0.66666666]
You might also try the least-squares method, which expects an objective function that returns a vector; it minimizes the sum of the squares of that vector's components. Using least squares, your objective function would look like this:
def f2(coord, x, y, r):
    # notice that we're returning a vector of dimension 3
    return (coord[0] - x)**2 + (coord[1] - y)**2 - r**2
And you'd minimize it like so:
from scipy.optimize import leastsq
res = leastsq(f2, initial_guess, args=(x, y, r))
Which yields:
>>> print(res[0])
[ 0.77961518  0.85811473]
This is basically the same as using minimize and re-writing the original objective function as:
def f(coord, x, y, r):
    vec = (coord[0] - x)**2 + (coord[1] - y)**2 - r**2
    # return the sum of the squares of the vector
    return np.sum(vec**2)
This yields:
>>> print(res.x)
[ 0.77958326  0.8580965 ]
Note that args is handled a bit differently by leastsq, and that the data structures returned by the two functions also differ. See the documentation for scipy.optimize.minimize and scipy.optimize.leastsq for more details.
See the scipy.optimize documentation for more optimization options.
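If your SciPy is recent enough, the same vector objective also works with the newer scipy.optimize.least_squares interface (a minimal sketch, assuming SciPy >= 0.17 and the f2, x, y, r, and initial_guess defined above):
from scipy.optimize import least_squares

# least_squares wants a float starting point and a residual vector
res = least_squares(f2, initial_guess.astype(float), args=(x, y, r))
print(res.x)  # should land near the leastsq answer above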
#2 (3 votes)
I noticed that the code in the accepted solution doesn't work any longer... I think maybe scipy.optimize has changed its interface since the answer was posted. I could be wrong. Regardless, I second the suggestion to use the algorithms in scipy.optimize, and the accepted answer does demonstrate how (or did at one time, if the interface has changed).
I'm adding an additional answer here, purely to suggest an alternative package that uses the scipy.optimize algorithms at its core, but is much more robust for constrained optimization. The package is mystic. One of the big improvements is that mystic provides constrained global optimization.
First, here's your example, done very similarly to the scipy.optimize.minimize way, but using a global optimizer.
from mystic import reduced

@reduced(lambda x,y: abs(x)+abs(y))  # choice changes answer
def objective(x, a, b, c):
    x,y = x
    eqns = ((x - a[0])**2 + (y - b[0])**2 - c[0]**2,
            (x - a[1])**2 + (y - b[1])**2 - c[1]**2,
            (x - a[2])**2 + (y - b[2])**2 - c[2]**2)
    return eqns

bounds = [(None,None),(None,None)]  # unnecessary
a = (0,2,0)
b = (0,0,2)
c = (.88,1,.75)
args = a,b,c

from mystic.solvers import diffev2
from mystic.monitors import VerboseMonitor

mon = VerboseMonitor(10)
result = diffev2(objective, args=args, x0=bounds, bounds=bounds, npop=40,
                 ftol=1e-8, disp=False, full_output=True, itermon=mon)

print(result[0])
print(result[1])
With results looking like this:
Generation 0 has Chi-Squared: 38868.949133
Generation 10 has Chi-Squared: 2777.470642
Generation 20 has Chi-Squared: 12.808055
Generation 30 has Chi-Squared: 3.764840
Generation 40 has Chi-Squared: 2.996441
Generation 50 has Chi-Squared: 2.996441
Generation 60 has Chi-Squared: 2.996440
Generation 70 has Chi-Squared: 2.996433
Generation 80 has Chi-Squared: 2.996433
Generation 90 has Chi-Squared: 2.996433
STOP("VTRChangeOverGeneration with {'gtol': 1e-06, 'target': 0.0, 'generations': 30, 'ftol': 1e-08}")
[ 0.66667151 0.66666422]
2.99643333334
As noted, the choice of the lambda in reduced affects which point the optimizer finds, as there is no actual solution to the equations.
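For instance, a pairwise max-of-absolute-values reducer minimizes the worst single residual (the infinity norm) instead of their sum. This is a hypothetical variation on the code above, solved with diffev2 exactly the same way:
from mystic import reduced

# hypothetical variation: a pairwise max of absolute values composes
# correctly under reduction, yielding the largest absolute residual
@reduced(lambda x,y: max(abs(x), abs(y)))
def objective_inf(x, a, b, c):
    x,y = x
    return ((x - a[0])**2 + (y - b[0])**2 - c[0]**2,
            (x - a[1])**2 + (y - b[1])**2 - c[1]**2,
            (x - a[2])**2 + (y - b[2])**2 - c[2]**2)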
mystic also provides the ability to convert symbolic equations to a function, where the resulting function can be used as an objective, or as a penalty function. Here is the same problem, but using the equations as a penalty instead of the objective.
def objective(x):
    return 0.0

equations = """
(x0 - 0)**2 + (x1 - 0)**2 - .88**2 == 0
(x0 - 2)**2 + (x1 - 0)**2 - 1**2 == 0
(x0 - 0)**2 + (x1 - 2)**2 - .75**2 == 0
"""
bounds = [(None,None),(None,None)]  # unnecessary

from mystic.symbolic import generate_penalty, generate_conditions
from mystic.solvers import diffev2

pf = generate_penalty(generate_conditions(equations), k=1e12)
result = diffev2(objective, x0=bounds, bounds=bounds, penalty=pf,
                 npop=40, gtol=50, disp=False, full_output=True)

print(result[0])
print(result[1])
With results:
[ 0.77958328 0.8580965 ]
3.6473132399e+12
The results are different than before because the penalty applied is different from the one we applied earlier in reduced. In mystic, you can select what penalty you want to apply.
The point was made that the equations have no solution. You can see from the result above that the result is heavily penalized, so that's a good indication that there is no solution. However, mystic has another way to show that there is no solution. Instead of applying a more traditional penalty, which penalizes the solution where the constraints are violated... mystic provides a constraint, which is essentially a kernel transformation, that removes all potential solutions that don't meet the constraints.
def objective(x):
    return 0.0

equations = """
(x0 - 0)**2 + (x1 - 0)**2 - .88**2 == 0
(x0 - 2)**2 + (x1 - 0)**2 - 1**2 == 0
(x0 - 0)**2 + (x1 - 2)**2 - .75**2 == 0
"""
bounds = [(None,None),(None,None)]  # unnecessary

from mystic.symbolic import generate_constraint, generate_solvers, simplify
from mystic.solvers import diffev2

cf = generate_constraint(generate_solvers(simplify(equations)))
result = diffev2(objective, x0=bounds, bounds=bounds, constraints=cf,
                 npop=40, gtol=50, disp=False, full_output=True)

print(result[0])
print(result[1])
With results:
[ nan 657.17740835]
0.0
Where the nan essentially indicates there is no valid solution.
FYI, I'm the author, so I have some bias. However, mystic has been around almost as long as scipy.optimize, is mature, and has had a more stable interface over that length of time. The point being, if you need a much more flexible and powerful constrained nonlinear optimizer, I suggest mystic.
#3 (1 vote)
These equations can be seen as describing all the points on the circumference of three circles in 2D space. The solution would be the points where the circles intersect.
For each pair of circles, the sum of their radii is smaller than the distance between their centres, so the circles don't overlap (plotting the circles to scale confirms this).
There are no points that satisfy this system of equations.
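You can check this pairwise with a few lines of NumPy (a quick verification, not part of the original answer):
import numpy as np

# circle centres and radii from the question
centres = np.array([(0., 0.), (2., 0.), (0., 2.)])
radii = np.array([0.88, 1.0, 0.75])

for i, j in [(0, 1), (0, 2), (1, 2)]:
    d = np.linalg.norm(centres[i] - centres[j])
    # two circles are disjoint when the distance between their
    # centres exceeds the sum of their radii
    print(i, j, d > radii[i] + radii[j])  # True, True, True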
#4 (0 votes)
I made an example script as follows. Note that the last line will find an optimal solution (a, b):
import numpy as np
import sympy as smp
from scipy.optimize import minimize

a, b = smp.symbols('a b')
x_ar, y_ar = np.random.random(3), np.random.random(3)
x = np.array(smp.symbols('x0:%d' % np.shape(x_ar)[0]))
y = np.array(smp.symbols('y0:%d' % np.shape(x_ar)[0]))

# symbolic objective in a, b and the data symbols x0..x2, y0..y2
func = np.sum(a**2 + b**2 - x*(a + b) + 2*y)
print(func)

my_func = smp.lambdify((x, y), func)
print(1.0/3 * my_func(x_ar, y_ar))

# substitute the numeric data, leaving a and b free
ab = smp.lambdify((a, b), my_func(x_ar, x_ar))
print(ab(1, 2))

def ab_v(x):
    return ab(*tuple(x))

print(ab_v((1, 2)))
minimize(ab_v, (0.1, 0.1))
The outputs are:
3*a**2 + 3*b**2 - x0*(a + b) - x1*(a + b) - x2*(a + b) + 2*y0 + 2*y1 + 2*y2
1.0*a**2 - 0.739792011558683*a + 1.0*b**2 - 0.739792011558683*b + 0.67394435712335
12.7806239653
12.7806239653
Out[33]:
status: 0
success: True
njev: 3
nfev: 12
hess_inv: array([[1, 0],
[0, 1]])
fun: 3.6178137388030356
x: array([ 0.36989601, 0.36989601])
message: 'Optimization terminated successfully.'
jac: array([ 5.96046448e-08, 5.96046448e-08])
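For the circle problem in the question, the same sympy-to-lambdify pattern can build a least-squares objective directly (a sketch of the idea, not the original poster's code):
import sympy as smp
from scipy.optimize import minimize

a, b = smp.symbols('a b')
centres = [(0, 0), (2, 0), (0, 2)]
radii = [0.88, 1.0, 0.75]

# sum of squared circle residuals as a symbolic expression
expr = sum(((a - cx)**2 + (b - cy)**2 - r**2)**2
           for (cx, cy), r in zip(centres, radii))
f = smp.lambdify((a, b), expr, 'numpy')

res = minimize(lambda p: f(p[0], p[1]), (0.1, 0.1))
print(res.x)  # expect roughly [0.7796, 0.8581], matching leastsq above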