Examples of NDIToolbox plugins
Now that you've read the basics of developing plugins for NDIToolbox, it's time to look at some examples.
We've already seen example code for local plugins; now let's look at one way to keep code on the server. Consider a plugin designed to detect edges in a 2D image plot. SciPy has several routines of interest for edge detection, but suppose we've decided to use the excellent scikits-image image processing add-on for SciPy. This makes things slightly more complicated because scikits-image requires compiling some C code on installation, which can be problematic for Windows users who don't have Visual Studio installed. We've therefore decided to keep most of the code on a server we control, so we can operate in a well-controlled environment, and have the plugin phone in for calculations.
After looking at JSON and XML-RPC, we've decided to go with Python's SimpleXMLRPCServer. There are several different edge detection algorithms we could employ, and since they'd all be run on the server anyway we've decided to just run one edge detector server that can provide all the algorithms we implement. The idea will be to expose an API that offers several different edge detection algorithms; we could write a single NDIToolbox plugin that allows the user to select the algorithm on the fly but for the purposes of this demonstration we'll write one plugin per algorithm. In this case, we'll implement the Sobel and Canny methods.
On the server side, we simply need to tweak the example code from the SimpleXMLRPCServer documentation to provide the methods we need:
#!/usr/bin/env python
"""server.py - example of hosting NumPy functions on a server via XML-RPC

Chris R. Coughlin (TRI/Austin, Inc.)
"""

# A modified version of the Python example code for SimpleXMLRPCServer
# http://docs.python.org/library/simplexmlrpcserver.html#module-SimpleXMLRPCServer
# Demonstrates one way to move large NumPy arrays between a client and server -
# client sends NumPy array, server performs calculations and returns results

from SimpleXMLRPCServer import SimpleXMLRPCServer
from SimpleXMLRPCServer import SimpleXMLRPCRequestHandler
from skimage import filter
import numpy as np

class RequestHandler(SimpleXMLRPCRequestHandler):
    rpc_paths = ('/edge_detector',)

# Create server
server = SimpleXMLRPCServer(("", 8000), requestHandler=RequestHandler)
server.register_introspection_functions()

def sobel_edges(arr_as_list):
    """Applies the Sobel operator to the provided data, returns the edges
    detected.  Use numpy.array(returned_data) to produce NumPy array."""
    an_array = np.array(arr_as_list)
    edges = filter.sobel(an_array)
    return edges.tolist()
server.register_function(sobel_edges, 'sobel_edges')

def canny_edges(arr_as_list, std_dev="1.0", low_t="0.1", high_t="0.2"):
    """Applies the Canny algorithm to the provided data, returns the edges
    detected.  Use numpy.array(returned_data) to produce NumPy array."""
    an_array = np.array(arr_as_list)
    sigma = float(std_dev)
    low_threshold = float(low_t)
    high_threshold = float(high_t)
    edges = filter.canny(an_array, sigma, low_threshold, high_threshold)
    return edges.tolist()
server.register_function(canny_edges, 'canny_edges')

# Run the server's main loop
server.serve_forever()
The server starts up and stays running, listening for client connections on port 8000. A client sends a NumPy array (the current data set in the image plot) converted to a standard Python list; the server converts it back to a NumPy array, finds the edges, and returns them as a list. On the client side, the plugin then re-converts the Python list to a NumPy array, and NDIToolbox refreshes the image plot with the results.
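The round trip is easy to sketch end-to-end. The sketch below stands up a throwaway XML-RPC server in a background thread and calls it from the same process; the `add_one` function is a stand-in for the skimage filters (it isn't part of the real edge detector server), and the try/except import lets the same code run under Python 2 or 3:

```python
# A self-contained sketch of the list round trip: a throwaway XML-RPC
# server in a background thread, and a client call with a nested list.
import threading

try:
    from SimpleXMLRPCServer import SimpleXMLRPCServer  # Python 2
    from xmlrpclib import ServerProxy
except ImportError:
    from xmlrpc.server import SimpleXMLRPCServer       # Python 3
    from xmlrpc.client import ServerProxy

def add_one(arr_as_list):
    """Stand-in for an edge detector: adds 1.0 to every element."""
    return [[el + 1.0 for el in row] for row in arr_as_list]

# Port 0 asks the OS for any free port
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(add_one, 'add_one')
worker = threading.Thread(target=server.serve_forever)
worker.daemon = True
worker.start()

# Client side: a real plugin would send self._data.tolist() and wrap
# the reply with numpy.array()
proxy = ServerProxy("http://127.0.0.1:%d" % server.server_address[1])
result = proxy.add_one([[0.0, 1.0], [2.0, 3.0]])
server.shutdown()
print(result)
```

XML-RPC marshals the nested lists in both directions, which is exactly why the plugin converts its NumPy array to a list before the call.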
According to the scikits-image Sobel documentation, no parameters are necessary to run the filter, so a simple Sobel Edge Detection plugin for NDIToolbox might look like this:
"""sobel_edge_detection_plugin.py - simple A7117 plugin that demonstrates XML-RPC communication by returning edges found in an image Chris R. Coughlin (TRI/Austin, Inc.) """ __author__ = 'Chris R. Coughlin' from models.abstractplugin import TRIPlugin import numpy as np import xmlrpclib class SobelPlugin(TRIPlugin): """Returns absolute magnitude Sobel to find edges in an image http://en.wikipedia.org/wiki/Sobel_operator""" name = "Sobel Edge Detection" description = "Applies the Sobel operator to the current data to detect edges" authors = "Chris R. Coughlin (TRI/Austin, Inc.)" version = "1.0" url = "www.tri-austin.com" copyright = "Copyright (C) 2012 TRI/Austin, Inc. All rights reserved." def __init__(self): super(SobelPlugin, self).__init__(self.name, self.description, self.authors, self.url, self.copyright) self.config = {'server_url':'http://172.16.100.2:8000/edge_detector'} self.srvr = xmlrpclib.ServerProxy(self.config['server_url']) def run(self): """Executes the plugin - returns a new NumPy array with edges detected in original data""" if self._data is not None: self._data = self.srvr.sobel_edges(self._data.astype(np.float64).tolist())
The only configuration option we require in this case is a pointer to the edge detection server's URL; you could hard-wire this into the plugin to streamline the plugin's operation if desired. Here's an example of the Sobel plugin - before and after execution:
Looking at the Canny edge detector API, we can specify thresholds and a number of standard deviations. In this case, we'll add each of these parameters to the plugin's config dict so that the user can provide arguments to the server:
"""canny_edge_detection_plugin.py - simple A7117 plugin that demonstrates XML-RPC communication by returning edges found in an image Chris R. Coughlin (TRI/Austin, Inc.) """ __author__ = 'Chris R. Coughlin' from models.abstractplugin import TRIPlugin import numpy as np import xmlrpclib class CannyPlugin(TRIPlugin): """Detects edges in an image with the Canny algorithm http://en.wikipedia.org/wiki/Canny_edge_detector """ name = "Canny Edge Detection" description = "Applies the Canny algorithm to the current data to detect edges" authors = "Chris R. Coughlin (TRI/Austin, Inc.)" version = "1.0" url = "www.tri-austin.com" copyright = "Copyright (C) 2012 TRI/Austin, Inc. All rights reserved." def __init__(self): super(CannyPlugin, self).__init__(self.name, self.description, self.authors, self.url, self.copyright) self.config = {'sigmas':1, 'low_threshold':0.1, 'high_threshold':0.2, 'server_url':'http://172.16.100.2:8000/edge_detector'} self.srvr = xmlrpclib.ServerProxy(self.config['server_url']) def run(self): """Executes the plugin - returns a new NumPy array with edges detected in original data""" if self._data is not None: self._data = self.srvr.canny_edges(self._data.astype(np.float64).tolist(), self.config['sigmas'], self.config['low_threshold'], self.config['high_threshold'])
In this case the results of the Canny edge detection aren't as nice as the Sobel's, but of course the user can revert to their original data and experiment with the Canny parameters to improve the results.
Once we've set up an edge detection server, we'd update the source code for both plugins to change the server_url key in the config dict to point to the server. Since both plugins consist of a single file, installation is as simple as asking the user to copy both .py files to their plugins folder. Assuming you'd prefer to make things easier for your users, however, you'd probably want to make a proper NDIToolbox plugin archive so that NDIToolbox can perform the plugin installation automatically. To do that for the Sobel plugin, for example, create a new ZIP sobel_edge_detection_plugin.zip and add sobel_edge_detection_plugin.py and a README such as the following to the archive.
# README Contents

Sobel Edge Detection - returns detected edges in image plot data
Chris R. Coughlin (TRI/Austin, Inc.)
www.tri-austin.com

Uses the Sobel operation (http://en.wikipedia.org/wiki/Sobel_operator) to
detect edges in 2D data.
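Creating the ZIP can itself be scripted with Python's standard zipfile module. A minimal sketch is shown below; the file contents are placeholders for the real plugin code and README, and a temporary directory is used just to keep the sketch self-contained:

```python
# Build sobel_edge_detection_plugin.zip from the plugin file and README.
# The written file contents are placeholders for the real code and text.
import os
import tempfile
import zipfile

workdir = tempfile.mkdtemp()
plugin_path = os.path.join(workdir, "sobel_edge_detection_plugin.py")
readme_path = os.path.join(workdir, "readme.txt")
with open(plugin_path, "w") as plugin_file:
    plugin_file.write("# plugin code goes here\n")
with open(readme_path, "w") as readme_file:
    readme_file.write("Sobel Edge Detection - returns detected edges\n")

archive_path = os.path.join(workdir, "sobel_edge_detection_plugin.zip")
with zipfile.ZipFile(archive_path, "w") as archive:
    # arcname keeps the paths inside the archive flat
    archive.write(plugin_path, arcname="sobel_edge_detection_plugin.py")
    archive.write(readme_path, arcname="readme.txt")

contents = zipfile.ZipFile(archive_path).namelist()
print(contents)
```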
Once we've created the sobel_edge_detection_plugin.zip file, we can provide the archive directly to the user and have them perform a local installation. The other option is to host the archive on a server and allow the user to perform a remote installation, as the Linux user shown below is doing.
As a first example of using other programming languages in NDIToolbox plugins, suppose you have pre-existing data analysis code written in Java and, rather than attempting a port to Python, you'd like a relatively easy way to reuse it.
Suppose that we have written a standard Java library ToolboxDemoLib.jar with a BasicStats class that we'd like to call from Python:
package stats;

import java.util.ArrayList;
import java.util.List;

/**
 *
 * @author ccoughlin
 */
public class BasicStats {

    public static double calcMin(List<Double> dataList) {
        double minVal = dataList.get(0);
        for (double el : dataList) {
            if (el < minVal) {
                minVal = el;
            }
        }
        return minVal;
    }

    public static double calcMax(List<Double> dataList) {
        double maxVal = dataList.get(0);
        for (double el : dataList) {
            if (el > maxVal) {
                maxVal = el;
            }
        }
        return maxVal;
    }

    public static List<Double> normalizedData(List<Double> dataList) {
        double maxValue = calcMax(dataList);
        List<Double> normData = new ArrayList<Double>(dataList.size());
        for (double el : dataList) {
            normData.add(el / maxValue);
        }
        return normData;
    }
}
The primary difficulty in using other programming languages with NDIToolbox is in dealing with NumPy arrays. Although numerous bridges between Python and other languages exist, for the most part NumPy arrays are not directly supported. The most straightforward way to deal with the data arrays your plugin will receive and send is to convert to and from standard Python lists, which are well supported.
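As a concrete illustration of that round trip (assuming NumPy is available), the sending side flattens the array to nested lists and the receiving side rebuilds it:

```python
# Convert a NumPy array to nested lists for transport, then rebuild it.
import numpy as np

original = np.array([[0.0, 1.5], [3.0, 4.5]])
as_list = original.tolist()    # plain Python floats in nested lists
restored = np.array(as_list)   # the receiving side rebuilds the array

print(as_list)
print(np.array_equal(original, restored))
```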
Going back to our scenario, there are several ways to connect Python and Java code. We've decided to go with Jython in this case as it is an active and fairly well-supported project.
Jython also allows us to make more use of existing NDIToolbox plugin code: since it is a nearly-compatible Python implementation running on the JVM, we can reuse large portions of the existing .py code we've used elsewhere in this document.
There are two basic ways to use Jython in this scenario: embed Python code in an existing Java application, or load Java code into a Python application. If your Java code is a full-blown application, you would probably want to embed Jython in the application to have it interact with the plugin front-end. In this case, however, our Java code is simple enough that it's more convenient to call it from a Jython instance. To do this, build the project as usual to get ToolboxDemoLib.jar. Following the Jython installation instructions, build a self-contained jython.jar and copy ToolboxDemoLib.jar to the same folder. The Jython interpreter will then be able to import your Java library with the following:
import sys
sys.path.append("ToolboxDemoLib.jar")
from stats import BasicStats
As it stands, running the above code in Jython would make our Java code available to any subsequent code, but we still need to connect Jython with Python. Again, there are several ways to accomplish this, but one way is to recycle the XML-RPC code we used earlier: if we create a server.py file to run in Jython, it can then respond to XML-RPC requests from a client.py running in Python. Since Jython is mostly compatible with Python, this code will look very similar to the code we used previously to demonstrate plugin communications with a server backend.
"""server.py - example of hosting Java functions on a server via XML-RPC Chris R. Coughlin (TRI/Austin, Inc.) """ import sys sys.path.append("ToolboxDemoLib.jar") from stats import BasicStats from SimpleXMLRPCServer import SimpleXMLRPCServer from SimpleXMLRPCServer import SimpleXMLRPCRequestHandler class RequestHandler(SimpleXMLRPCRequestHandler): rpc_paths = ('/basic_stats',) server = SimpleXMLRPCServer(("", 8000), requestHandler=RequestHandler) server.register_introspection_functions() # Not strictly necessary here since all the methods are static, # provided to demonstrate instantiation of Java classes calculator = BasicStats() def min(arr): """Returns the minimum of an array as a double""" converted_arr = [float(el) for el in arr] return BasicStats.calcMin(converted_arr) server.register_function(min, 'min') def min2d(arr): """Returns the minimum of a 2D array as a double""" min_val = sys.float_info.max for row in arr: row_min = min(row) if row_min < min_val: min_val = row_min return min_val server.register_function(min2d, 'min2d') def max(arr): """Returns the maximum of an array as a double""" converted_arr = [float(el) for el in arr] return BasicStats.calcMax(converted_arr) server.register_function(max, 'max') def max2d(arr): """Returns the maximum of a 2D array as a double""" max_val = sys.float_info.min for row in arr: row_min = max(row) if row_min > max_val: max_val = row_min return max_val server.register_function(max2d, 'max2d') def normalize(arr): """Returns the normalized input array. Elements are converted to floating point if required.""" converted_arr = [float(el) for el in arr] return list(calculator.normalizedData(converted_arr)) server.register_function(normalize, 'normalize') def normalize2d(arr): """Returns the normalized 2D input array. 
Elements are converted to floating point if required.""" normalized_data = [] for row in arr: normalized_row = normalize(row) normalized_data.append(normalized_row) return normalized_data server.register_function(normalize2d, 'normalize2d') # Run the server's main loop server.serve_forever()
To run this code we simply run jython.jar as a standard Java JAR and provide it with the name of the script, e.g. java -jar jython.jar server.py. Note that Jython does a lot of the conversion work for us, e.g. automatically translating between Python's lists and our Java code's ArrayList. We have had to make a few adjustments to our original server code since NumPy does not run on Jython, most notably that we now have to convert to and from NumPy arrays on a row-by-row basis.
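The row-by-row pattern is easy to see in plain Python (a sketch only, with no Jython or Java required). Note that, as in the server code above, each row ends up scaled against its own maximum:

```python
def normalize(row):
    """Scale a 1D list so its largest element becomes 1.0."""
    max_value = max(row)
    return [el / max_value for el in row]

def normalize2d(arr):
    """Apply normalize() to each row of a 2D nested list.

    Each row is scaled by its own maximum; scaling by the global
    maximum would require a first pass over all rows to find it."""
    return [normalize(row) for row in arr]

print(normalize2d([[1.0, 2.0], [5.0, 10.0]]))
```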
Similarly, the plugin's frontend code will look familiar:
"""jython_normalize_plugin.py - simple A7117 plugin that demonstrates XML-RPC communication with Java code on the server Chris R. Coughlin (TRI/Austin, Inc.) """ __author__ = 'Chris R. Coughlin' from models.abstractplugin import TRIPlugin import numpy as np import xmlrpclib class JythonNormalizePlugin(TRIPlugin): """Normalizes the data""" name = "Jython Normalize" # Name in the Plugin menu description = "Demonstrates communication with a Java backend" authors = "Chris R. Coughlin (TRI/Austin, Inc.)" version = "1.0" url = "www.tri-austin.com" copyright = "Copyright (C) 2012 TRI/Austin, Inc. All rights reserved." def __init__(self): super(JythonNormalizePlugin, self).__init__(self.name, self.description, self.authors, self.url, self.copyright) self.config = {'server_url':'http://127.0.0.1:8000/basic_stats'} self.srvr = xmlrpclib.ServerProxy(self.config['server_url']) def run(self): """Executes the plugin - if data are not None they are normalized against the largest single element in the array.""" if self._data is not None: if self._data.ndim == 1: self._data = np.array(self.srvr.normalize(self._data.tolist())) elif self._data.ndim == 2: self._data = np.array(self.srvr.normalize2d(self._data.tolist()))
The only major difference in this plugin compared to the previous XML-RPC example is that in this case the plugin defaults to assuming the Jython XML-RPC server is running on the local machine. You can of course run the Jython server on a different machine, and in fact this may be preferable if you would prefer to work in a known good environment and/or don't want to require users to have Java installed on their machine.
As an example of how to write extensions in C++ for Python, let's suppose that the Java library shown above was instead a C++ library, and we would like to use it from Python. Rather than going the XML-RPC route, we'd like to compile a C++ extension that Python can call directly. Extending Python with C or C++ code is relatively straightforward on POSIX (OS X, Linux, FreeBSD, etc.) platforms, but can be tricky on Windows machines. Quoting from the official Python documentation on the process:
"In general, you should use the same compiler and version of compiler to build your extension as was used to build Python itself."

If you are using a different compiler, you may need to build Python from source in order for your extension to function properly. In this example, we will demonstrate using Boost.Python to build a C++ extension for Windows machines using Visual Studio 2008 and Python 2.7. If you expect your C++ extension to be used on other operating systems, you will need to become familiar with the C/C++ compiler used on that OS (e.g. GCC or LLVM). For an example of how to use SWIG to build a C++ extension on Linux machines with GCC, NDIToolbox's developer has provided an example on his website that isn't specific to NDIToolbox but may still prove useful.
basicstats.h:

#pragma once
#include <vector>

class BasicStats
{
public:
    BasicStats(void) {};
    virtual ~BasicStats(void) {};
    double calcMin(const std::vector<double>& dataList);
    double calcMax(const std::vector<double>& dataList);
    std::vector<double> normalizedData(const std::vector<double>& dataList);
    std::vector<std::vector<double>> normalizedData(const std::vector<std::vector<double>>& dataList);
};

basicstats.cpp:
#include "StdAfx.h"
#include "BasicStats.h"

double BasicStats::calcMin(const std::vector<double>& dataList)
{
    double minVal = dataList[0];
    for (std::vector<double>::const_iterator iter = dataList.begin(); iter != dataList.end(); ++iter) {
        if (*iter < minVal) {
            minVal = *iter;
        }
    }
    return minVal;
}

double BasicStats::calcMax(const std::vector<double>& dataList)
{
    double maxVal = dataList[0];
    for (std::vector<double>::const_iterator iter = dataList.begin(); iter != dataList.end(); ++iter) {
        if (*iter > maxVal) {
            maxVal = *iter;
        }
    }
    return maxVal;
}

std::vector<double> BasicStats::normalizedData(const std::vector<double>& dataList)
{
    double maxValue = calcMax(dataList);
    std::vector<double> normData;
    for (std::vector<double>::const_iterator iter = dataList.begin(); iter != dataList.end(); ++iter) {
        normData.push_back(*iter / maxValue);
    }
    return normData;
}

std::vector<std::vector<double>> BasicStats::normalizedData(const std::vector<std::vector<double>>& dataList)
{
    std::vector<std::vector<double>> normData;
    for (std::vector<std::vector<double>>::const_iterator iter = dataList.begin(); iter != dataList.end(); ++iter) {
        std::vector<double> normalizedRow = normalizedData(*iter);
        normData.push_back(normalizedRow);
    }
    return normData;
}
One slight difference here from the previous Java code: we've included a method that handles 2D data. In the Java example we handled this in the Python code; you can use either approach based on personal preference, benchmarking, etc.
Although you can use Boost.Python from within the Visual Studio IDE, it's actually easier to compile the extension using Boost.Build from the command line, so that's the approach we'll use here.
Begin by downloading the latest version of Boost and extracting it to a folder of your choice. Following the Getting Started On Windows guide, build the Boost libraries by opening a command prompt in the folder you've extracted the Boost source to and issuing the following commands:
bootstrap
.\b2
Strictly speaking this step isn't necessary since it builds all the components of Boost that require compilation (i.e., not just Boost.Python), but it doesn't take very long and it's a good test to ensure that your compiler and Boost download are set up properly.
Looking at the Boost.Python documentation, the next step is creating a config file that tailors the Boost bjam tool's compilation to your setup. This file is a simple text file, and in our case we really only need it to a) configure our compiler and b) set the version of Python we're using. In our $HOME folder (e.g. c:\users\chris in Windows 7 or /home/chris under Linux), we'd create a file named user-config.jam that looks like the following:
# MSVC configuration
using msvc : 9.0 ;
# Python configuration
using python : 2.7 : C:/Python27 ;
Although the Boost.Python documentation recommends putting the bjam tool somewhere in your $PATH so it can be found on the command line, it's just as easy to create a batch file bjam.bat that points to the bjam executable. A batch file is also a good choice because we need to set an environment variable BOOST_BUILD_PATH prior to compilation; putting it in the batch file saves trying to remember to set it every time we want to recompile. Set BOOST_BUILD_PATH to the folder that you've extracted Boost to; the bjam tool is normally in the root of this folder. For example, if you have extracted the Boost source code to C:\Users\CRC\src\cxx\boost_1_50_0, your bjam.bat file would look like this:
@echo off
set BOOST_BUILD_PATH=C:\Users\CRC\src\cxx\boost_1_50_0\
C:\Users\CRC\src\cxx\boost_1_50_0\bjam
Save the bjam.bat file to the folder you're using for your toolbox source code (i.e., where your C++ extension files are located). Next, we create a Jamroot file (similar to a Makefile) that details how our project is built. This file is also stored in the toolbox source code root folder; an example for this particular project, basicstats, is shown below.
# Copyright David Abrahams 2006. Distributed under the Boost
# Software License, Version 1.0. (See accompanying
# file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)

import python ;

if ! [ python.configured ]
{
    ECHO "notice: no Python configured in user-config.jam" ;
    ECHO "notice: will use default configuration" ;
    using python ;
}

# Specify the path to the Boost project.  If you move this project,
# adjust this path to refer to the Boost root directory.
use-project boost
  : C:/Users/CRC/src/cxx/boost_1_50_0 ;

# Set up the project-wide requirements that everything uses the
# boost_python library from the project whose global ID is
# /boost/python.
project
  : requirements <library>/boost/python//boost_python ;

# Declare the extension module.  You can specify multiple
# source files after the colon separated by spaces.
python-extension basicstats_ext : basicstats.cpp ;

# Put the extension and Boost.Python DLL in the current directory, so
# that running script by hand works.
install convenient_copy
  : basicstats_ext
  : <install-dependencies>on <install-type>SHARED_LIB <install-type>PYTHON_EXTENSION
    <location>. ;

# A little "rule" (function) to clean up the syntax of declaring tests
# of these extension modules.
local rule run-test ( test-name : sources + )
{
    import testing ;
    testing.make-test run-pyd : $(sources) : : $(test-name) ;
}

# Declare test targets
run-test basicstats : basicstats_ext basicstats.py ;
If you're just building a basic C++ extension, you can copy-paste this example as-is with a few modifications. In particular, we're calling our module basicstats, so you'll want to edit your Jamroot if you're using a different name (e.g. replace "basicstats" in the above with your project name).
That's it for defining the basic project; you can find another example of setting up a project in Boost.Python's Hello, World! example. Next, our original C++ code needs to be modified slightly: as it stands right now it expects to be dealing with conventional C++ std::vector<double> containers, but our code will need to handle Python lists instead. The most straightforward way of dealing with this change is by changing the C++ code to use Boost.Python's boost::python::list container. Here's the updated version.
#pragma once
#include <vector>
#include <boost/python.hpp>
#include <boost/python/module.hpp>
#include <boost/python/def.hpp>

class BasicStats
{
public:
    BasicStats(void) {};
    virtual ~BasicStats(void) {};
    double calcMin(boost::python::list& dataList);
    double calcMax(boost::python::list& dataList);
    boost::python::list normalizedData(boost::python::list& dataList);
    boost::python::list normalized2DData(boost::python::list& dataList);
};

basicstats.cpp:
#include "BasicStats.h"

double BasicStats::calcMin(boost::python::list& dataList)
{
    double minVal = boost::python::extract<double>(dataList[0]);
    for (int iter = 0; iter < len(dataList); ++iter) {
        double el_value = boost::python::extract<double>(dataList[iter]);
        if (el_value < minVal) {
            minVal = el_value;
        }
    }
    return minVal;
}

double BasicStats::calcMax(boost::python::list& dataList)
{
    double maxVal = boost::python::extract<double>(dataList[0]);
    for (int iter = 0; iter < len(dataList); ++iter) {
        double el_value = boost::python::extract<double>(dataList[iter]);
        if (el_value > maxVal) {
            maxVal = el_value;
        }
    }
    return maxVal;
}

boost::python::list BasicStats::normalizedData(boost::python::list& dataList)
{
    double maxValue = calcMax(dataList);
    boost::python::list normData;
    for (int iter = 0; iter < len(dataList); ++iter) {
        double el_value = boost::python::extract<double>(dataList[iter]);
        normData.append(el_value / maxValue);
    }
    return normData;
}

boost::python::list BasicStats::normalized2DData(boost::python::list& dataList)
{
    boost::python::list normData;
    for (int iter = 0; iter < len(dataList); ++iter) {
        boost::python::list normalizedRow = normalizedData(boost::python::list(dataList[iter]));
        normData.append(normalizedRow);
    }
    return normData;
}
One thing to note in this code is the normalized2DData method we're using to normalize 2D arrays, in particular the creation of a boost::python::list to send to the 1D normalizedData method. To handle Python's dynamic nature, Boost.Python treats everything as a generic Python object, so our code needs to create a Python list to pass to normalizedData.
If you'd prefer not to convert your code to use lists (e.g. if you expect to continue using it in other applications), you can of course write an adapter class to convert to and from your code. Consult the Boost.Python documentation for details on converting between C++ and Python.
Now that the C++ data analysis library has been modified to work with Python, the only thing left to do is define our library's Python API. Add the following to basicstats.cpp:
// Demo of exposed function
char const* version()
{
    return "BasicStats v0.1.1";
}

BOOST_PYTHON_MODULE(basicstats_ext)
{
    using namespace boost::python;
    def("version", version);
    class_<BasicStats>("BasicStats")
        .def("calcMin", &BasicStats::calcMin)
        .def("calcMax", &BasicStats::calcMax)
        .def("normalize", &BasicStats::normalizedData)
        .def("normalize2d", &BasicStats::normalized2DData)
    ;
}
This addition creates the API that Python will see from our extension library: a class BasicStats with methods calcMin, calcMax, normalize, and normalize2d (the last two exposing the C++ normalizedData and normalized2DData methods), plus a function version that we've added to illustrate how to expose standalone C++ functions in the API. This completes the project, so the only thing left to do is run bjam in our toolkit's source code folder to compile our new Python module basicstats_ext.pyd (basically a renamed .DLL). You should see output similar to the following if all goes well.
C:\Users\CRC\src\cxx\toolbox_demo>bjam
...patience...
...patience...
...found 1715 targets...
...updating 7 targets...
compile-c-c++ bin\msvc-9.0\debug\threading-multi\basicstats.obj
basicstats.cpp
msvc.link.dll bin\msvc-9.0\debug\threading-multi\basicstats_ext.pyd
   Creating library bin\msvc-9.0\debug\threading-multi\basicstats_ext.lib and object bin\msvc-9.0\debug\threading-multi\basicstats_ext.exp
msvc.manifest.dll bin\msvc-9.0\debug\threading-multi\basicstats_ext.pyd
common.copy basicstats_ext.pyd bin\msvc-9.0\debug\threading-multi\basicstats_ext.pyd
        1 file(s) copied.
capture-output bin\basicstats.test\msvc-9.0\debug\threading-multi\basicstats
        1 file(s) copied.
**passed** bin\basicstats.test\msvc-9.0\debug\threading-multi\basicstats.test
...updated 7 targets...
To see the new C++ module in action, you can create a simple Python script to call it:
import basicstats_ext

# Making calls to standalone functions
print("Requesting version info...{0}".format(basicstats_ext.version()))

# Create sample data to test-drive the calculations
demo_data = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

# Instantiating our C++ BasicStats class
calculator = basicstats_ext.BasicStats()
print("Min value={0}".format(calculator.calcMin(demo_data)))
print("Max value={0}".format(calculator.calcMax(demo_data)))

# Receiving a list
print("Normalized data:\n")
normalized_data = calculator.normalize(demo_data)
print(normalized_data)
To create an NDIToolbox plugin from this new C++ extension, copy basicstats_ext.pyd and the newly-created Boost.Python support DLL boost_python-vc90-mt-gd-1_50.dll (remembering that the names of your extension files may be different) to the NDIToolbox plugins folder along with your plugin's Python front-end. Here's a sample plugin that uses the BasicStats extension to normalize NumPy arrays.
"""cpp_normalize_plugin.py - simple A7117 plugin that demonstrates using C++ to extend Python Chris R. Coughlin (TRI/Austin, Inc.) """ __author__ = 'Chris R. Coughlin' from models.abstractplugin import TRIPlugin import numpy as np import xmlrpclib import basicstats_ext class CPPNormalizePlugin(TRIPlugin): """Normalizes the data""" name = "C++-Based Normalize" # Name in the Plugin menu description = "Demonstrates writing C++ code to extend NDIToolbox plugins" authors = "Chris R. Coughlin (TRI/Austin, Inc.)" version = "1.0" url = "www.tri-austin.com" copyright = "Copyright (C) 2012 TRI/Austin, Inc. All rights reserved." def __init__(self): super(CPPNormalizePlugin, self).__init__(self.name, self.description, self.authors, self.url, self.copyright) def run(self): """Executes the plugin - if data are not None they are normalized against the largest single element in the array.""" if self._data is not None: calculator = basicstats_ext.BasicStats() raw_data = self._data.astype(np.float).tolist() if self._data.ndim == 1: self._data = np.array(calculator.normalize(raw_data)) elif self._data.ndim == 2: self._data = np.array(calculator.normalize2d(raw_data))
If you plan on distributing your plugin as an archive, you'll need to create a sub-folder to hold the DLL and .PYD file. Note that you may then also have to adjust your paths to compensate. Remember also that since you are distributing compiled code, you may need to consider your users' operating system, Python version, etc. and ship multiple versions of your extension. You may also wish to collect some basic information about the system your plugin is running on (the operating system and version of Python in particular) and use this information to tailor your plugin's imports. NDIToolbox's mainmodel.py module also contains some convenience functions that help to narrow down which version of Windows is being used:
is_win7()
is_winvista()
is_winxp()
is_winxp64()
is_win2k()
The platform module in the Python Standard Library can provide additional details, e.g. platform.python_compiler() will identify the compiler used to build the Python interpreter. If you need to ship multiple compiled extensions, you can then use this information to determine which extension to import, for example:
import platform

platform_name = platform.system()
if platform_name == "Windows":
    import basicstats_win32_ext
elif platform_name == "Linux":
    import basicstats_linux_ext
# etc.