Software developer's kit

From edgertronic slow-motion video camera

For those developing software to control the edgertronic camera, or developing software that runs on the camera, be sure to check out the edgertronic developer tricks.



The edgertronic is a mostly open software platform which allows those with software skills to customize the operation of the camera. The version of the software license that comes with the camera requires you to provide back, unencumbered, any software you develop that modifies edgertronic software or uses CAMAPI. The intent of having you provide back the software is so we can share the code with other users so everyone can benefit. If your contribution is well received, the feature may be added in a future software release. If we use your implementation of the feature, you will be credited as a community contributor to the camera.

In some instances, a customer who modifies the software may not want to share the changes outside their organization. In this case a special camera software license is required. Contact edgertronic for pricing information.

There is no warranty for any of the software in the edgertronic camera, to the extent permitted by applicable law. Except when otherwise stated in writing, the copyright holders and/or other parties provide the program "as is" without warranty of any kind, either expressed or implied, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. The entire risk as to the quality and performance of the program is with you. Should the program prove defective, you assume the cost of all necessary servicing, repair, or correction.

Further, there is no warranty that you will be able to modify or enhance the software to change the capabilities of the camera. No free support is provided to assist in modifying the software. If you want professional support, contact edgertronic for pricing information.

You can control the camera by running code directly on the camera or by controlling the camera over the network. Of course, you can also control the camera trigger with an external signal. Code running on the camera needs to be written in python and controls the camera using CAMAPI (documented below). If you control the camera over the network from an external computer, you can use any programming language/environment you choose that supports HTTP/JSON, since CAMAPI is exposed over the network with an HTTP/JSON wrapping (also documented below). If you like python, you can use the hcamapi module to access CAMAPI from your host computer; see the Python examples below for details.

If you require custom engineering services to modify or extend the edgertronic camera feature set, please let us know. Generally any changes or enhancements made will be licensed back to you so that we can include the feature in a future camera release.

Closed software

The high speed sensor control and its coupling to the main processor is not documented and the source is not available. The functionality is exposed by CAMAPI. The goal is to keep the external interfaces to CAMAPI stable, but changes may occur. Refer to the change log before updating your camera software if you have modified any software on the camera. Of course a camera update will overwrite your changes so practice good software engineering and use a revision control system outside the camera.

In addition, Texas Instruments libraries are used to access the hardware accelerators for H.264 and JPEG encoding. The source code for these libraries is not available.

Hardware overview

For this discussion, the hardware of interest is an FPGA, high speed CMOS sensor, large SDRAM for holding unencoded (RAW) video frames, and DM368 System-on-Chip (SoC) that supports hardware accelerated JPEG and H.264 encoding.

During operation, the FPGA controls the CMOS sensor, and continuously stores frames into a large SDRAM ring buffer.

Periodically, a frame from this buffer is sent to the DM368, JPEG encoded, and displayed as a live preview frame.

After a trigger occurs, the FPGA will continue storing a programmable number of post-trigger frames into the buffer. Once this number is reached, the FPGA stops storing further frames and the buffer holds a combination of pre- and post-trigger frames. The capture is now complete, and the FPGA is reconfigured to transfer the stored frames from the large SDRAM buffer to the DM368, where they will be encoded and saved to the user's SD card.
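The capture flow described above amounts to a ring buffer plus a post-trigger frame count. The following is a simplified Python model of that bookkeeping. It is an illustration only: the real logic runs in the FPGA, and the class and method names here are invented for the sketch.

```python
# Simplified model of the FPGA's ring-buffer capture logic (illustration only;
# the real implementation is in the FPGA and is not public).

class CaptureBuffer:
    def __init__(self, total_frames, post_trigger_frames):
        self.total = total_frames            # size of the SDRAM ring buffer, in frames
        self.post_needed = post_trigger_frames
        self.frames = [None] * total_frames
        self.write_idx = 0
        self.stored = 0                      # frames stored so far (saturates at total)
        self.triggered = False
        self.post_stored = 0
        self.complete = False

    def store_frame(self, frame):
        if self.complete:
            return                           # capture done; further frames are ignored
        # Once the ring buffer is full, the oldest frame is overwritten.
        self.frames[self.write_idx] = frame
        self.write_idx = (self.write_idx + 1) % self.total
        self.stored = min(self.stored + 1, self.total)
        if self.triggered:
            self.post_stored += 1
            if self.post_stored >= self.post_needed:
                # Buffer now holds a mix of pre- and post-trigger frames.
                self.complete = True

    def trigger(self):
        self.triggered = True
```

With, say, a 10-frame buffer and 4 post-trigger frames, the buffer keeps overwriting until a trigger, then stops 4 frames later, leaving 6 pre-trigger and 4 post-trigger frames.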

Storage devices

The edgertronic camera supports several storage devices:

Big SD card: Main device for holding user-captured videos. It can also contain the default settings used by the camera when it powers on, and a file whose name contains the current IP address.

micro SD card: Partitioned into three file systems.
  • The first file system is VFAT, containing the Linux kernel image file and the u-boot environment values. This file system is not mounted when Linux is running.
  • The second file system is ext3, containing the Linux root file system. It is mounted at '/' and is a read-only file system.
  • The third file system is VFAT, containing user settings (like network settings) and critical software error logs. This file system is mounted at /mnt/rw.

USB storage device (user supplied): If you would rather store video to a USB storage device, the camera will automatically switch to the USB storage device if one is attached (and if there is no big SD card installed). USB storage is an experimental feature and thus is not supported.

I2C EEPROM: Small device containing the make, model, serial number, Ethernet MAC address, etc. You can read the decoded values held in the I2C EEPROM using the get_caminfo() CAMAPI method.

The I2C EEPROM is locked during the manufacturing process and is read-only thereafter for the life of the camera.

SPI EEPROM: Holds the FPGA algorithm. It may be reprogrammed when you do a software update. Only digitally signed bits can be used.

Since the camera has two potential locations to store videos, namely the big SD card and a USB storage device, either the camera has to decide where to store the video or the user has to explicitly configure the camera. The design choice was for the camera to favor USB storage if available and to use the SD card if the USB storage device is unusable.

Software overview

The software in the edgertronic camera is 99% Open Source software (Linux, busybox, lighttpd, GStreamer, python, etc.), with the specific high speed camera application written as a series of python modules. Once the camera has booted (described later), the lighttpd web server is started and initializes the camera using the settings from the last saved video. When the user browses to the camera, the user can adjust the camera settings and see a live preview. If you are familiar with HTML, JavaScript, and python, you should be able to modify the camera's behavior to meet your particular needs.

The Open Source software in the edgertronic camera is made available under the terms of each package's specific Open Source license. If you need to rebuild any of the non-python code, you will need to set up a cross-compilation environment. The DM368 Linux SDK used in the camera is from RidgeRun. You can use their professional services if you need any assistance.

Boot sequence

The camera boots using the DM368 ROM Bootloader (RBL), which loads the User Bootloader (UBL) from the micro SD card. UBL loads U-boot and U-boot's environment, checks whether you were holding the multi-purpose and/or trigger button(s) down, and then boots Linux. At the end of the Linux boot process, the init script executes the entries in /etc/rc.d, which handle device update, networking, and of course starting the main python high speed camera application.

If U-boot detects the multi-purpose button was held down during power-on, a factory reset is performed.

If U-boot detects that both the external trigger and multi-purpose buttons were held down during power-on, the serial console is enabled, U-boot's automatic boot is interrupted, and the camera waits with the serial console active for the user to enter a U-boot command.

The lighttpd configuration file uses /home/root/ss-web/app.fcgi as the default home page. The fast CGI script starts the web application, which supports the dynamic interaction between the JavaScript running in the web browser and the rest of the software on the camera. The web application uses CAMAPI and can be reviewed as an example python application by those considering writing their own software that uses CAMAPI. The easiest way to think about the web application is as an application that exposes the python CAMAPI over HTTP.

Simplified camera control flow

After booting, the camera is providing live preview JPEG images, filling the pre-trigger buffer, and ready to process a trigger event. The starting set of capture parameter values used by the camera is the last set of values the user specified. Once connected, the typical workflow interaction is shown below.

External world (a web browser or a user-developed application) on the left; the camera's responses on the right.

Camera: Waiting for trigger. The pre-trigger buffer is 100% full and the oldest video frame is being overwritten with the newest video frame in a circular buffer fashion.

External world: I want you to take a big video quickly (for 10 seconds). [sends suggested camera configuration settings]
Camera: Ok, if I do that you can set the frame rate to 494. [returns the camera configuration settings that will be used]
External world: Hmm, that's not what I wanted, try these values. [sends suggested camera configuration settings]
Camera: Ok, if I do that you can set the frame rate to 1000. [returns the camera configuration settings that will be used]
External world: Are you busy?
Camera: I am filling the pre-trigger buffer and it is 32% full.
External world: Now what are you doing?
Camera: I am still filling the pre-trigger buffer and it is 85% full.
External world: Now what are you doing?
Camera: I have filled the pre-trigger buffer, I am overwriting the oldest frame in the pre-trigger buffer as new frames are received, and am waiting for a trigger event.
External world: Capture video.
Camera: Filling the post-trigger buffer.
External world: Now what are you doing?
Camera: I am filling the post-trigger buffer and it is 43% full.
External world: Now what are you doing?
Camera: I am saving the video to a file and am 16% done.
External world: Now what are you doing?
Camera: I am filling the pre-trigger buffer and it is 8% full.
(External trigger occurs)
External world: Now what are you doing?
Camera: I am filling the post-trigger buffer and it is 24% full.

Model View Controller

The camera software is designed to support multiple simultaneous displays and different entities controlling the camera at the same time, all the while with the same model making sure all the details of the camera are attended to. For example, you can use a laptop to configure the camera and capture a video. Then you can use the remote trigger to capture additional videos. You can even interleave triggering from the remote trigger, the multi-function button, and the web interface. If you are ambitious, you can write an Android app so that you can view and trigger from any Android device. So, who is in control? Anyone and everyone. Who can view what is going on? Anyone and everyone. Who exposes the camera model that allows multiple controllers and multiple viewers? Just CAMAPI.

To support multiple controllers and multiple displays, each entity external to the camera (or, for that matter, a custom application running on the camera) needs an interface to tell the camera what to do and the ability to read the current device status. All external entities need to check the status before changing the camera state (since another external entity may have changed it) and to expect their requests to fail (for example, if another entity changed the device state after you requested status but before you issued the request to change the device state).
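This check-then-act pattern can be sketched as follows. The state and return-code constants mirror the tables later in this document; try_trigger and the injected callables are hypothetical application-side helpers, not part of CAMAPI.

```python
# Sketch of the "check status first, and expect requests to fail" pattern.
# get_status and trigger are callables standing in for the CAMAPI methods.
# Constants mirror the CAMAPI state and return-status tables in this document.

CAMAPI_STATE_RUNNING = 3
CAMAPI_STATE_RUNNING_PRETRIGGER_FULL = 6
CAMAPI_STATUS_OKAY = 1
CAMAPI_STATUS_INVALID_STATE = 2

def try_trigger(get_status, trigger):
    """Trigger the camera only if it is in a triggerable state.

    Returns True if the trigger was accepted. Because another controller
    may change the camera state between the status check and the trigger
    call, a non-OKAY result is treated as a normal outcome, not an error.
    """
    state, level, flags = get_status()
    if state not in (CAMAPI_STATE_RUNNING, CAMAPI_STATE_RUNNING_PRETRIGGER_FULL):
        return False                  # some other entity changed the state
    ret = trigger()
    return ret == CAMAPI_STATUS_OKAY  # may still lose the race; that is fine
```

The important point is the second check: even after a successful status read, the trigger call itself must be allowed to fail gracefully.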

The camera software model is exposed by CAMAPI. In addition, the web application supports a series of web URLs that expose CAMAPI via HTTP.

Trigger sources

There are four sources that can trigger the camera.

  • CAMAPI trigger() method, which is used by the web user interface.
  • Multi-purpose button
  • External trigger jack
  • Genlock daisychain cable when camera is configured as a genlock slave

A camera can be triggered only if usable storage is installed and the camera is not calibrating or saving.

Unattended operation

When video is captured, the last allowed camera configuration settings are saved. When the camera is turned on, the saved values are read once, and those camera configuration settings are preloaded and used as the power-on camera settings. If the multi-function button or remote trigger is pressed (or any other trigger event occurs), then the video that is captured uses the power-on camera settings. This allows you to configure the camera, power down, set the camera up in the wild, and capture videos (hope you are a good aim!).

What-if style capture parameter negotiation

There are dependencies between capture parameters, so you can specify some parameters and the camera returns the maximum value available for the other parameters. You can play "what if" by providing your desired set of parameters (where some can be unspecified), and the camera will return the actual set that will be used. You can repeat this, requesting various parameters and getting back a consistent set of numbers, until the returned set of camera parameters is acceptable.

The negotiation uses a priority scheme to resolve setting conflicts. The priority order is as follows:

  • iso - not dependent on other arguments.
  • exposure - not dependent on other arguments.
  • subsample - not dependent on other arguments.
  • pretrigger - not dependent on other arguments.
  • overclock - not dependent on other arguments.
  • genlock - not dependent on other arguments.
  • frame_rate - always dependent on exposure, conditionally dependent on horizontal, vertical if specified, and overclock, if specified. The genlock setting has a special effect if the device is configured as genlock slave.
  • horizontal, vertical - conditionally dependent on frame_rate if specified and overclock, if specified.
  • duration - always dependent on frame_rate, horizontal and vertical.

If the exposure value is such that the requested frame_rate can't be met, then a lower frame_rate will be used, which may allow a larger horizontal and vertical size to be valid. If the requested frame_rate is valid for the requested exposure, then the horizontal and vertical size may be reduced to meet the requested frame_rate.

If the camera is specified as genlock slave, and is connected to a genlock master which is outputting compatible timing, then the genlock slave will capture frames with the same rate and phase as the genlock master.

If the camera is specified as genlock slave, and is not connected to a genlock master, then the genlock slave will capture frames at the frame rate specified in the slave's menu settings.

If a parameter is not specified, or if a parameter is in conflict with a higher priority parameter, then the largest possible value for the lower priority parameter, consistent with the other values, is used.

You supply the requested values by passing a dictionary of values (whose key names begin with requested_, like requested_exposure) to the configure_camera() CAMAPI method. The function returns a dictionary containing both the requested values and the actual values used, along with some additional key/value pairs that need to be passed back into the CAMAPI run() method.

To ensure your application continues to work as expected as new features are added, you should first call get_current_settings(), then override the requested_* values you are setting, then pass that dictionary to configure_camera(). For example, in version 2.1, requested_multishot_count was added with a default value of 1. If your version 2.0 application first calls get_current_settings(), the returned dictionary will have that default value present. When you pass the dictionary, with your modified requested settings, back to the camera via configure_camera(), the camera will take one video and save that video as expected - just like in version 2.0. However, if you simply created a new dictionary based on the version 2.0 feature set, leaving out requested_multishot_count (since it didn't exist in version 2.0), then when you run your application on version 2.1 the allowed multishot_count might be bigger than 1, changing the camera's behavior.
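A minimal sketch of this forward-compatible pattern. Here build_config is a hypothetical application-side helper (not part of CAMAPI); the input dictionary stands in for what get_current_settings() returns, and the result is what you would pass to configure_camera().

```python
# Sketch of the forward-compatible configuration pattern described above.
# build_config is an application-side helper, not a CAMAPI method.

def build_config(current_settings, overrides):
    """Start from the camera's full current settings, so keys added in newer
    firmware (e.g. requested_multishot_count) keep their defaults, then apply
    only the requested_* values this application cares about."""
    settings = dict(current_settings)   # copy; never mutate the original dict
    for key, value in overrides.items():
        if not key.startswith("requested_"):
            raise ValueError("only requested_* keys should be overridden: %s" % key)
        settings[key] = value
    return settings
```

Typical usage would be something like allowed = configure_camera(build_config(get_current_settings(), {"requested_frame_rate": 1000})), leaving all other keys (including opaque ones) untouched.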

Camera configuration

The edgertronic camera supports two ways of being configured.

Video capture configuration is supported by CAMAPI configure_camera() and run() APIs. Specifically, the run() API updates the default video capture settings the camera uses the next time the camera is powered on. You can also save video settings via the CAMAPI favorites API save_settings().

Camera configuration that typically is set once and then doesn't change is done by storing a configuration file on the removable storage device and then powering on the camera. You can configure ntp, network settings, video encode settings, and other camera features using Configuration Files.

Web look and feel

Basically there is one webpage that includes a few other HTML files. sanstreak.html includes the rest of the HTML files in /home/root/ss-web/templates. The layout is a viewport for the video (video_image.html) with buttons below it (button_row.html). There are two modal dialog windows: settings (settings_modal.html) and replay (replay_modal.html). Stylesheets, located in /home/root/ss-web/static, control the overall look.

The actions (update live preview, trigger, etc.) are supported by JavaScript. The JavaScript code accesses CAMAPI (described below) using JSON / HTTP calls to the camera. The JSON / HTTP encoding is done by a JavaScript wrapper whose source code is provided as a reference in /home/root/ss-web/static/sdk/camera/.

If you want to enhance the look-and-feel, your ability to control the camera is supported by CAMAPI. First understand CAMAPI and then change the code in /home/root/ss-web to your heart's content.


The Camera Application Programming Interface (CAMAPI) consists of a handful of functions, summarized in the table below and explained in detail in the code-based documentation and the sections below.

Each CAMAPI method is listed with its URL, parameters, return value, and description:

  • get_camstatus() (URL: /get_camstatus). Parameters: none. Returns: status dictionary. Returns all the camera status information, including the camera state, completion level (for states CAMAPI_STATE_RUNNING, CAMAPI_STATE_TRIGGERED, and CAMAPI_STATE_SAVING), bit-field flags, image sensor temperature, and FPGA temperature indicating current hardware status.
  • get_caminfo() (URL: /get_caminfo). Parameters: none. Returns: dictionary. Returns all the static camera information including serial number, model, firmware version, etc. The dictionary contents will change after a device update.
  • get_pretrigger_fill_level() (URL: /pretrigger_buffer_fill_level). Parameters: none. Returns: percent of the pretrigger buffer filled before the trigger event. Returns an accurate value of the actual pretrigger buffer fill level.
  • get_storage_info() (URL: /get_storage_info). Parameters: mount point. Returns: dictionary with keys available_space (number of available bytes), storage_size (size of device in bytes), and mount_point (storage device mount point). Returns a dictionary of storage device information.
  • get_saved_settings() (URL: /get_saved_settings). Parameters: none. Returns: saved camera settings. Returns a dictionary of saved settings. These are the settings from the last time the run() method was invoked.
  • get_storage_dir() (URL: /get_storage_dir). Parameters: none. Returns: directory path or None. Returns the mount point of the active storage device; this changes depending on what storage devices are installed. Returns None if no storage device is available.
  • format() (deprecated, do not use; URL: /format). Parameters: mount point*. Returns: ret code. Formats the active storage device, or the storage at mount point if specified.
  • erase_all_files() (URL: /erase_all_files). Parameters: mount point*. Returns: ret code. Erases all files on the active storage device, or the storage at mount point if specified. At this point the code blocks erasing files from USB.
  • get_current_settings() (URL: /get_current_settings). Parameters: none. Returns: current camera settings. Returns the dictionary that was passed into the last call to run() if the camera is active, otherwise None.
  • configure_camera() (URL: /configure_camera). Parameters: requested settings dictionary. Returns: allowed settings dictionary. Calculates a camera configuration based on the requested values, camera limitations, and a prioritized scheme to eliminate inconsistencies.
  • run() (URL: /run). Parameters: allowed settings dictionary. Returns: ret code. Reconfigures the camera to use the allowed values, calibrates the camera using those values, and starts capturing the pre-trigger video frames.
  • trigger() (URL: /trigger). Parameters: base_filename (optional). Returns: ret code. Stops filling the pre-trigger buffer and starts filling the post-trigger buffer. When the post-trigger fill is complete, the camera will create the video file and metadata file based on the base_filename by adding ".mov" and ".txt" extensions. If base_filename is not supplied, the camera will create a base_filename that includes a value based on the current time of day.
  • save_stop() (URL: /save_stop). Parameters: none. Returns: ret code. Stops a save that is in process, truncating the video to the portion saved so far. The rest of the captured video data is discarded.
  • cancel() (URL: /cancel). Parameters: none. Returns: ret code. Stops filling the post-trigger buffer.
  • get_last_saved_filename() (URL: /get_last_saved_filename). Parameters: none. Returns: filename (with ".mov" appended). Filename used by the last successful video capture. Note: this API is deprecated; it is better to query the file system directly.

Device CAMAPI methods only:

  • stop(). Parameters: none. Returns: ret code. Stops capturing and streaming video frames.

Camera URLs only (not part of CAMAPI):

  • / : Camera home page. Intended for use with a browser that supports JavaScript.
  • /sync_time : Returns the camera's current time, or allows the camera's time to be set.
  • /reboot : Reboots the camera.
  • /download : Downloads the last saved video, marking the data with MIME type video/quicktime.
  • /image2 : Returns a URL redirect to the actual JPEG image.
  • /dir_listing?path= : Returns a list of files in the specified path, or in the active storage video directory if no path is specified.

Note *: For the URL form, use /format?device=USB or /format?device=SD to format a device other than the currently active storage device.

The following CAMAPI return status values are defined:

Value Symbolic Name Meaning
1 CAMAPI_STATUS_OKAY Method completed without error.
2 CAMAPI_STATUS_INVALID_STATE Attempted action which is not allowed in the current camera state. No action was taken.
3 CAMAPI_STATUS_STORAGE_ERROR The storage subsystem returned an error.
4 CAMAPI_STATUS_CODE_OUT_OF_DATE The FPGA code needs to be updated. Typically power cycling the camera will resolve this issue.
5 CAMAPI_STATUS_INVALID_PARAMETER A parameter passed to the method is invalid. No action was taken.

Python examples

There is a python module, hcamapi, which provides HTTP access to CAMAPI. You can browse to your camera to download the python module and example programs. HCAMAPI supports exactly the same API as CAMAPI, with the exception that you need to pass the camera's IP address or DNS name to the HCAMAPI module.

Here is a simple example:

import logging
import hcamapi

camera_address = ""   # set to the camera's IP address or DNS name
logger = logging.getLogger("hcamapi-example")

cam = hcamapi.HCamapi(camera_address, logger)
(state, level, flags) = cam.get_status()
print "Camera state %d, level %d, flags %d" % (state, level, flags)

saved_settings = cam.get_saved_settings()

You can browse to the camera to retrieve the hcamapi module and related example programs written in Python.

Shell script command line examples

You can create an application on your host computer to control the camera using HTTP. Here are some simple command line examples:


# CAMIP must be set to the camera's IP address before using these commands

# simple shell function to print out the camera status information once a second
# on entry - number of seconds to monitor the camera
monitor_cam () {
    SEC=$1
    for A in `seq 1 $SEC` ; do
        curl $CAMIP/get_status
        sleep 1
    done
}

# example requested settings; any requested_* keys may be used
REQ='{"requested_frame_rate": 1000}'
ALLOWED=`curl -H "Content-Type: application/json; charset=utf-8" --data "$REQ" $CAMIP/configure_camera`

curl -H "Content-Type: application/json; charset=utf-8" --data "$ALLOWED" $CAMIP/run
monitor_cam 4

curl $CAMIP/trigger
monitor_cam 6

curl $CAMIP/save_stop
monitor_cam 5


CAMAPI uses a state machine model to move from pre-trigger to triggered to saving and then back to pre-trigger again. While performing these three activities, there is an amount of progress that has occurred, such as how much of the save has completed. In addition, there is device status information in the form of flags represented as a bit-field.

get_status() returns a dictionary containing

Key Value type Value range Meaning
state int 1 .. 8 Enumeration of current state
level int 0 .. 100 Percentage full or complete. Meaning depends on the state.
flags int bits 0 .. 19 Bit-field encoded device status.

The defined camera states include:

Value Symbolic Name (level) Meaning
1 CAMAPI_STATE_UNCONFIGURED (level: N/A) The CAMAPI run() method hasn't been invoked.
2 CAMAPI_STATE_CALIBRATING (level: N/A) The camera is capturing a black frame to be able to subtract the image sensor pixel bias from each video frame. When calibration is done, the camera will automatically transition to filling the pre-trigger buffer with video frames.
3 CAMAPI_STATE_RUNNING (level: percentage the pre-trigger buffer has filled) The camera is capturing video frames and storing them in the pre-trigger buffer.
4 CAMAPI_STATE_TRIGGERED (level: percentage the post-trigger buffer has filled) The camera is capturing video frames and storing them in the post-trigger buffer. When the post-trigger buffer is full, the camera will automatically transition to saving the captured video.
5 CAMAPI_STATE_SAVING (level: percentage of captured video frames that have been saved to the file) The camera is reading frames from the pre- and post-trigger buffers, encoding the frames, and saving them to a file.
6 CAMAPI_STATE_RUNNING_PRETRIGGER_FULL (level: 100, meaning the pre-trigger buffer is full) The camera is capturing video frames and the new frames are overwriting the oldest frames in the pre-trigger buffer.
7 CAMAPI_STATE_TRIGGER_CANCELED (level: N/A) The user canceled the video capture after the camera was triggered. The camera is resetting and will automatically transition to again filling the pre-trigger buffer with video frames.
8 CAMAPI_STATE_SAVE_CANCELED (level: N/A) The user canceled the save of video to file. The camera is resetting and will automatically transition to again filling the pre-trigger buffer with video frames. This feature has been disabled, so CAMAPI_STATE_SAVE_CANCELED can not be reached.

The defined camera status bit-field values are:

Bit Name Meaning
0 CAMAPI_FLAG_STORAGE_FULL Current storage device does not have room to hold another video file.
1 CAMAPI_FLAG_STORAGE_MISSING No usable storage device is installed.
2 CAMAPI_FLAG_USB_STORAGE_INSTALLED USB storage device is installed.
4 CAMAPI_FLAG_USB_STORAGE_FULL USB storage device does not have room to hold another video file.
5 CAMAPI_FLAG_SD_CARD_STORAGE_FULL SD card does not have room to hold another video file.
6 CAMAPI_FLAG_STORAGE_BAD An installed storage device is returning an error.
7 CAMAPI_FLAG_NET_CONFIGURED Settings that allow the camera to connect to a network file system are stored in the camera.
8 CAMAPI_FLAG_NET_UNMOUNTABLE Camera attempted to use the network file system settings to mount the shared resource but the mount failed. Likely the network file system settings are incorrect.
9 CAMAPI_FLAG_NET_FULL The network file system shared storage does not have room to hold more captured videos.
18 CAMAPI_FLAG_GENLOCK_NO_SIGNAL The camera is configured as a genlock slave device and no periodic genlock start-of-exposure signal is detected.
19 CAMAPI_FLAG_GENLOCK_CONFIG_ERROR The camera is configured as a genlock slave device and at least one frame did not receive the start-of-exposure signal when expected.
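Given the bit positions above, a controller can decode a flags value into readable names. Here is a small Python sketch; the dictionary and function are illustrative helpers, not part of CAMAPI.

```python
# Decode the CAMAPI status flags bit-field into a list of flag names.
# Bit positions are taken from the table above.

CAMAPI_FLAGS = {
    0: "CAMAPI_FLAG_STORAGE_FULL",
    1: "CAMAPI_FLAG_STORAGE_MISSING",
    2: "CAMAPI_FLAG_USB_STORAGE_INSTALLED",
    4: "CAMAPI_FLAG_USB_STORAGE_FULL",
    5: "CAMAPI_FLAG_SD_CARD_STORAGE_FULL",
    6: "CAMAPI_FLAG_STORAGE_BAD",
    7: "CAMAPI_FLAG_NET_CONFIGURED",
    8: "CAMAPI_FLAG_NET_UNMOUNTABLE",
    9: "CAMAPI_FLAG_NET_FULL",
    18: "CAMAPI_FLAG_GENLOCK_NO_SIGNAL",
    19: "CAMAPI_FLAG_GENLOCK_CONFIG_ERROR",
}

def decode_flags(flags):
    """Return the names of the set bits in a CAMAPI flags value."""
    return [name for bit, name in sorted(CAMAPI_FLAGS.items())
            if flags & (1 << bit)]
```

For example, a flags value with bits 2 and 7 set decodes to USB storage installed plus network settings configured.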
CAMAPI state transition diagram


The configure_camera() function accepts user-supplied requested camera settings, in the form of a python dictionary, and returns a dictionary containing the allowed values. Details of the negotiation are described above.

Note: In addition to the allowed camera settings, configure_camera() also returns additional values that are expected by other CAMAPI functions. Only the values intended to be used external to CAMAPI are documented. The rest should be treated as opaque and not modified by the application using CAMAPI. The opaque values may change at any point in the future.

User settings
(Dict key)
Units Supported values Description
requested_horizontal pixels 192 .. 1280 Width of image to capture.
requested_vertical pixels 96 .. 1024 Height of image to capture.
requested_frame_rate frames per second 0.1 .. 25000 Number of frames to capture in every second.
requested_exposure seconds 0.000004 .. 0.1 Amount of time the CMOS sensor is integrating the received light.
requested_duration seconds Limited by memory Amount of the video to capture.
requested_pretrigger percentage 0 .. 100 Amount of the video being captured is captured before the trigger event.
requested_iso unitless color: 100 .. 400
mono: 400 .. 1600
Sensitivity to light.
requested_subsample int enum 0 - off
1 - on
Controls skipping every other row and column.
requested_force_mono int enum 0 - off
1 - on
Force color sensor video to be processed as monochrome. This is useful for high frame rates where the de-Bayering algorithm might add noise to the image. Note: this doesn't change the ISO range. This feature is not implemented.
requested_genlock int enum 0 - off
1 - master
2 - slave
Camera genlock configuration. Master means the camera is generating the trigger and start-of-exposure timing signals. Slave means the camera is receiving those signals, via the external trigger connector, from the genlock master camera.
requested_overclock int enum 0 - off
1 - A level
2 - B level
3 - C level
4 - D level
Camera overclock configuration. The amount of overclocking is not specified beyond saying A is the least overclocking and D is the most overclocking.


Once the camera has returned a set of acceptable capture parameters, you can run the camera to activate those values. The camera will take a dark frame to maximize image quality for those particular capture parameters, then it will start filling the pre-trigger buffer (if configured).

In addition to filling the pre-trigger buffer, live preview is also started, meaning the JPEG file is being updated around five times a second. Other live preview options are possible.

Once the pre-trigger buffer is full, the camera will overwrite the oldest frame in the buffer with the newly captured frame until the camera is triggered.

Trigger and save

The camera can be triggered anytime it is filling the pre-trigger buffer, even if the pre-trigger buffer is not full. Any future trigger requests are ignored until the camera is again filling the pre-trigger buffer (which happens automatically after the save is complete).

Once the trigger occurs, the camera switches from filling the pre-trigger buffer to filling the post-trigger buffer. Once the post-trigger buffer is full, the camera automatically switches from capturing video to saving the captured video. Once the video has been saved, the camera automatically switches from saving video to filling the pre-trigger buffer and waiting for a trigger event.

This simple flow allows the camera to be configured once and then controlled by an external trigger without requiring an external computer.
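The trigger / save / re-arm cycle described above can also be driven programmatically by polling get_status(). The sketch below assumes a camera object exposing CAMAPI-style trigger() and get_status() methods (as the hcamapi module does); capture_one_video and its parameters are hypothetical helpers for illustration.

```python
# Sketch of the trigger-and-wait workflow described above, written against a
# CAMAPI-like object (e.g. an hcamapi.HCamapi instance). The helper name and
# polling interval are assumptions for this illustration.
import time

CAMAPI_STATE_RUNNING = 3
CAMAPI_STATE_RUNNING_PRETRIGGER_FULL = 6
RUNNING_STATES = (CAMAPI_STATE_RUNNING, CAMAPI_STATE_RUNNING_PRETRIGGER_FULL)

def capture_one_video(cam, poll_interval=0.5, sleep=time.sleep):
    """Trigger the camera, then poll until the post-trigger fill and the save
    both finish and the camera is again filling the pre-trigger buffer."""
    cam.trigger()
    while True:
        state, level, flags = cam.get_status()
        if state in RUNNING_STATES:
            return state        # save complete; camera is re-armed
        sleep(poll_interval)
```

Injecting the sleep function keeps the helper testable; in real use you would just call capture_one_video(cam).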

Get status

The status (really the camera state) can be obtained at any time. The following table shows the camera's state transition diagram.

Current state (state value), events, resulting new states, and a description of the current state:

  • CAMAPI_STATE_UNCONFIGURED (1): on run, go to CAMAPI_STATE_CALIBRATING. The camera has been powered on and is ready to have the camera capture parameters configured.
  • CAMAPI_STATE_CALIBRATING (2): when calibration completes, go to CAMAPI_STATE_RUNNING. The camera is optimizing image quality by calibrating the sensor.
  • CAMAPI_STATE_RUNNING (3): on trigger, go to CAMAPI_STATE_TRIGGERED; when the pre-trigger buffer is full, go to CAMAPI_STATE_RUNNING_PRETRIGGER_FULL. The camera is filling the pre-trigger buffer.
  • CAMAPI_STATE_TRIGGERED (4): when the post-trigger buffer is full, go to CAMAPI_STATE_SAVING; on cancel, go to CAMAPI_STATE_TRIGGER_CANCELED (the trigger has been canceled before saving video has started). The camera is filling the post-trigger buffer.
  • CAMAPI_STATE_SAVING (5): when saving completes, go to CAMAPI_STATE_CALIBRATING; on cancel, go to CAMAPI_STATE_SAVE_CANCELED (no files are created; this feature has been disabled so the state can not be reached); on stop save, go to CAMAPI_STATE_CALIBRATING, with the encoded video frames saved to the file and unprocessed video frames discarded. The camera is saving the video frames in the pre-trigger and post-trigger buffers to a file.
  • CAMAPI_STATE_RUNNING_PRETRIGGER_FULL (6): on trigger, go to CAMAPI_STATE_TRIGGERED. The camera is overwriting the oldest frames in the pre-trigger buffer.
  • CAMAPI_STATE_SAVE_CANCELED (8): this feature has been disabled, so the CAMAPI_STATE_SAVE_CANCELED state can not be reached.

In addition to the events above, configure_camera can be called in any state.




While in state CAMAPI_STATE_TRIGGERED, cancel() will cancel filling the post-trigger buffer.
Note: In normal operation, once the post-trigger buffer has filled, the camera will automatically transition from CAMAPI_STATE_TRIGGERED to CAMAPI_STATE_SAVING. It is possible that the cancel() may be received after this automatic transition has occurred. In that case, the cancel() will have no effect.
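A host-side program polling the camera state can map the numeric state value (as seen in the example output below, e.g. "Camera state 6") back to a readable name. The mapping below follows the state table above; the values for CAMAPI_STATE_CALIBRATING (2) and CAMAPI_STATE_TRIGGER_CANCELED (7) are assumptions inferred from the gaps in the documented numbering.

```python
# State-value-to-name lookup based on the CAMAPI state table above.
# Values 2 and 7 are assumed from the numbering gaps, not documented here.
CAMAPI_STATES = {
    1: "CAMAPI_STATE_UNCONFIGURED",
    2: "CAMAPI_STATE_CALIBRATING",              # assumed value
    3: "CAMAPI_STATE_RUNNING",
    4: "CAMAPI_STATE_TRIGGERED",
    5: "CAMAPI_STATE_SAVING",
    6: "CAMAPI_STATE_RUNNING_PRETRIGGER_FULL",
    7: "CAMAPI_STATE_TRIGGER_CANCELED",         # assumed value
    8: "CAMAPI_STATE_SAVE_CANCELED",
}

def state_name(value):
    """Return the symbolic name for a numeric camera state value."""
    return CAMAPI_STATES.get(value, "unknown state %d" % value)
```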

Mac OSX SDK example usage

The camera includes some example python code and a handy host CAMAPI library that you can use to automate camera control. You can see the python module and applications by browsing to your camera, substituting your camera's IP address in the URL.

The following are some cut-and-paste commands that should work on a Mac OS X computer with no additional software installed. All commands are run in the Terminal application. To start Terminal, either press COMMAND-SPACE and search for Terminal, or use Finder to navigate to Applications -> Utilities -> Terminal. Once Terminal is running, try out these commands:

CAMIP=                                                                  # set to the IP address of the edgertronic camera

mkdir edgetronic
cd edgetronic
curl http://$CAMIP/static/host/ >                             # fetch the handy host CAMAPI library
curl http://$CAMIP/static/host/ > # fetch example program

python -a $CAMIP

Here is some of the output from running those commands:

lyre-osx:~ tfischer$ CAMIP=
lyre-osx:~ tfischer$ mkdir edgetronic
lyre-osx:~ tfischer$ cd edgetronic

lyre-osx:edgetronic tfischer$ curl http://$CAMIP/static/host/ >
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 21827  100 21827    0     0   533k      0 --:--:-- --:--:-- --:--:--  710k

lyre-osx:edgetronic tfischer$ curl http://$CAMIP/static/host/ >
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 12469  100 12469    0     0   366k      0 --:--:-- --:--:-- --:--:--  468k

lyre-osx:edgetronic tfischer$ python -a $CAMIP
Camera state 6, level 100, flags 8
Camera extended status - state 6, level 100, flags 8, IS temp (C) 31, FPGA temp (C) 33
Status string: State: Running pretrigger buffer full; Level: 100; Flags: SD card installed; Empty: 776.8 MB, Storage directory: /mnt/sdcard
Directory path to active storage device:  /mnt/sdcard
Storage information: 814514176 / 1977286656 bytes, mount point: /mnt/sdcard
Camera information:
    Software build date: 20140824092106
    Hardware build date: 20130520
    FPGA version: 70
    Model Number: 1
    Serial Number: 3
    Hardware Revision: 3
    Hardware Configuration: 0
    IR Filter: installed
    Ethernet MAC Address: 00:1B:C5:09:60:03

Saved camera settings:
    Sensitivity: None
    Shutter: 1/500
    Frame Rate: 60
    Horizontal: None
    Vertical: None
    Sub-sampling: On
    Duration: 10
    Pre-trigger: 75

There is a lot more output; the example program exercises every exposed edgertronic camera API (CAMAPI) call.

Media server

GStreamer is used to get video frames from Video4Linux2 (V4L2) and encode them as JPEG for live preview or H.264 when saving to a file. Showing the progress of a save operation by occasionally displaying a JPEG-encoded frame while the save is in progress is tricky, so a media server is used to handle this capability.
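The two encode paths described above can be illustrated with GStreamer launch strings. This is only a sketch: v4l2src, jpegenc, and x264enc are standard GStreamer plugins, but the camera's actual pipelines are internal and may differ.

```python
# Illustrative GStreamer pipeline descriptions for the two paths above.
# The element choices are assumptions; the camera's real pipelines may differ.
def pipeline_description(mode):
    """Build a gst-launch style pipeline string for 'preview' (JPEG
    frames for the live view) or 'save' (H.264 into a file container)."""
    source = "v4l2src"
    if mode == "preview":
        encode = "jpegenc"            # one JPEG per frame for live preview
    elif mode == "save":
        encode = "x264enc ! mp4mux"   # H.264 video muxed into MP4
    else:
        raise ValueError("mode must be 'preview' or 'save'")
    return "%s ! %s" % (source, encode)
```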
