We get a lot of questions regarding the differences between the original academic package FORCES and our commercial tool FORCES Pro. FORCES is now unsupported and will not be developed further. The following table explains the main differences between the two tools. If you have any other questions, please contact us at info(at)embotech.com.
| | FORCES | FORCES Pro |
| --- | --- | --- |
| Support | Unsupported, no further development | Professional support and continuous upgrades |
| Licensing | Academic | Commercial and academic |
| Interfaces | | Matlab/Python (low-level and high-level) and Simulink (graphical) |
| Applications | | Optimization Modelling and Model Predictive Control |
| Algorithms | | Nonlinear Barrier Interior-Point, Primal-Dual Interior-Point, ADMM, Accelerated Gradient |
| Supported Problem Classes | QPs, LPs, QCQPs | NLPs, binary problems, QPs, LPs, QCQPs, SOCPs |
| Solution Speed | | Up to 100x faster. See the example below this table. |
| Target Platforms | | Customised for x86, ARM, Tricore, PowerPC and MIPS based embedded platforms. You can also obtain a custom circuit described in VHDL. |
| Supported Data Types | | Floating-point and fixed-point |
We will use the simple MPC example described here to illustrate the performance improvements in FORCES Pro.
First, add the following lines to the end of the file to compute the cumulative closed-loop cost during the simulation. We can use this metric to check that the control performance does not deteriorate when the solver settings are changed.
%% closed loop cost
% cumulative cost sum_k( x_k'*x_k + u_k'*u_k ) over the kmax simulation steps
cl_cost = sum(sum(X(:,1:kmax).*X(:,1:kmax),1) + sum(U(:,1:kmax).*U(:,1:kmax),1));
sprintf('the closed-loop cost is %f', cl_cost)
Also add a line after every call to the custom solver to display the execution time.
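Assuming the generated solver is named myMPC_FORCESPro and is called as [solverout, exitflag, info] = myMPC_FORCESPro(problem), a line along these lines does the job (the solvetime field of the returned info struct is an assumption here):

% print the execution time reported by the solver (solvetime assumed to be in seconds)
sprintf('solve time: %6.3f ms', info.solvetime*1000)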
Now, run the simple MPC example to generate a controller based on the Primal-Dual Interior-Point method with no platform customizations and record the maximum solution time.
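One simple way to record the maximum solution time is to store the reported solve time at every simulation step and take the maximum afterwards; the variable names below are purely illustrative:

solvetimes = zeros(1,kmax);                                      % before the simulation loop
solvetimes(k) = info.solvetime;                                  % inside the loop, after the solver call
sprintf('maximum solve time: %6.3f ms', max(solvetimes)*1000)    % after the simulation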
For the 'Prototype' and 'Deployment' versions of FORCES Pro, one can also generate code for other optimization methods that are typically more efficient for MPC problems. To use ADMM, enter the following lines in the solver settings section, before the call that requests the generated code from the server. When you run the simulation again, you should see a large reduction in execution time but no reduction in the closed-loop control performance.
codeoptions.solvemethod = 'ADMM'; % use ADMM instead of the default primal-dual interior-point method
codeoptions.maxit = 20;           % maximum number of ADMM iterations
codeoptions.ADMMrho = 2;          % ADMM penalty parameter rho
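For orientation, here is a minimal sketch of how the solver settings section might look with the ADMM options in place, assuming the example uses the low-level getOptions/generateCode interface and that stages, params and outputs are set up earlier in the script:

% sketch of the solver settings section with ADMM selected
codeoptions = getOptions('myMPC_FORCESPro');          % default code generation options
codeoptions.solvemethod = 'ADMM';                     % select ADMM
codeoptions.maxit = 20;                               % maximum number of ADMM iterations
codeoptions.ADMMrho = 2;                              % ADMM penalty parameter rho
generateCode(stages, params, codeoptions, outputs);   % request the solver from the server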
Further speedups can be obtained by customizing the solver code to the target platform. In this case we will be running the solver on a 64-bit desktop computer, so we add the following lines in the solver settings section to obtain fast code for this platform. We can also change the data format from the default double-precision floating-point to single precision and tell FORCES Pro to make use of SIMD SSE instructions.
codeoptions.platform = 'x86_64';  % generate code tuned for 64-bit x86 platforms
codeoptions.floattype = 'float';  % single precision instead of the default double precision
codeoptions.sse = 1;              % make use of SSE (SIMD) instructions
We also need to link the target-specific object file when compiling the MEX file by inserting the following line after the code generation request:
mex -DMEXARGUMENTCHECKS myMPC_FORCESPro/obj_target/myMPC_FORCESPro.o myMPC_FORCESPro/interface/myMPC_FORCESPro_mex.c
When you run the simulation again you should see a large reduction in execution time while the closed-loop control performance remains the same. The results are summarized in the following table for an Intel Core i7 processor at 3.40 GHz clock speed with 16GB of RAM running 64-bit Windows 7. Please note that the numbers will be different on your machine.
| Solver settings | Maximum Execution Time |
| --- | --- |
| Primal-Dual Interior-Point (default) | |
| ADMM | |
| ADMM, x86_64, float, with SSE instructions | |