This directory contains static files for the client dashboard app that allows users to stream a webcam feed and upload videos for ingestion by Amazon Kinesis Video Streams and Rekognition Video. Raw and analyzed face motion metrics from Rekognition Video are then rendered back to the dashboard in near real-time.
Built using front-end libraries that are locally hosted in the js/lib directory.
Single-page, responsive app -- index.html is the root view.

- views/ contains client-side partials.
- css/ contains custom styling.
- js/app contains AngularJS controllers and configuration.
- js/lib contains locally-hosted 3rd-party libraries.
bootstrapper.js is an AWS Lambda function provisioned by a CloudFormation Custom Resource. It is responsible for syncing the files in this directory with the S3 bucket hosting the web app when the stack is deployed. When the stack is destroyed, the script deletes the web app bucket along with all video uploads/fragments that were streamed using the web app.
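The lifecycle handling described above can be pictured roughly as follows. This is an illustrative sketch only (the function name and return values are made up for this example, not taken from bootstrapper.js); the real handler also performs the S3 operations and signals success or failure back to CloudFormation:

```javascript
// Illustrative sketch: map a CloudFormation Custom Resource RequestType
// to the action the bootstrapper performs. Names are hypothetical.
function resolveAction(requestType) {
  switch (requestType) {
    case "Create":
    case "Update":
      return "sync-web-app-files";      // copy this directory to the S3 bucket
    case "Delete":
      return "empty-and-delete-bucket"; // remove uploads/fragments, then the bucket
    default:
      throw new Error("Unknown RequestType: " + requestType);
  }
}
```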
The WebcamJS library is used to provide cross-browser, cross-platform support for accessing the user's webcam via the WebRTC getUserMedia API.
For this project, we include a demo FrameBuffer class (js/custom/frame-buffer.js) that handles buffering image frames acquired from the webcam. Below is basic sample code for acquiring the webcam feed, buffering frames at a regular interval, and posting them to a data endpoint. For a full example as used in the demo app, see js/app/webcam-stream-controller.js.
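For reference, the FrameBuffer API assumed by the sample below (a size option, plus add, getSize, bufferSize, getData, and clear) could look roughly like this minimal sketch; the actual implementation in js/custom/frame-buffer.js may differ:

```javascript
// Minimal illustrative FrameBuffer: holds base64 frame strings until the
// caller drains it. Not the actual js/custom/frame-buffer.js implementation.
function FrameBuffer(options) {
  this.bufferSize = (options && options.size) || 30; // frames per batch
  this.frames = [];
}
FrameBuffer.prototype.add = function (frame) {
  this.frames.push(frame);
};
FrameBuffer.prototype.getSize = function () {
  return this.frames.length;
};
FrameBuffer.prototype.getData = function () {
  return this.frames.slice(); // shallow copy of the buffered frames
};
FrameBuffer.prototype.clear = function () {
  this.frames = [];
};
```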
HTML

```html
<head>
  ...
  <!-- Import required dependencies -->
  <script src="js/lib/webcamjs/webcam.js"></script>
  <script src="js/custom/frame-buffer.js"></script>
  ...
</head>
<body>
  <div id="webcam-feed-container"></div>
</body>
...
```
JavaScript

```javascript
var frameBuffer;

function startStreaming() {
  // Initialize frame buffer.
  frameBuffer = new FrameBuffer({ size: 40 });

  // Attempt to stream webcam feed to canvas element.
  Webcam.attach("#webcam-feed-container");

  // When the webcam feed is acquired, execute callback.
  Webcam.on('live', startStreamLoop);
}

var looperPromise;

function startStreamLoop() {
  var TARGET_FPS = 10;
  var looper = function() {
    // Pass current frame image data to handler.
    Webcam.snap(frameCallback);
    looperPromise = setTimeout(looper, 1000 / TARGET_FPS);
  };
  looper();
}

function frameCallback(imgData) {
  // imgData is a base64-encoded data URL of the current frame,
  // e.g. "data:image/jpeg;base64,..."; the WebcamJS library generates it
  // by calling canvas.toDataURL('image/jpeg').
  frameBuffer.add(imgData);

  if (frameBuffer.getSize() >= frameBuffer.bufferSize) {
    // Clear buffer, and post frames to data endpoint.
    var data = frameBuffer.getData();
    frameBuffer.clear();

    // DATA_ENDPOINT is the API endpoint that handles conversion of the
    // image frame sequence to a streamable MKV fragment.
    postFrameData(data, DATA_ENDPOINT, ...);
  }
}

function postFrameData(data, endpoint, callback) {
  var $http = new XMLHttpRequest();
  $http.open("POST", endpoint);
  $http.setRequestHeader("Content-Type", "application/json");
  // Invoke the callback (if provided) once the request completes.
  $http.onload = function() {
    if (callback) {
      callback($http);
    }
  };
  $http.send(JSON.stringify(data));
}

function stopStreaming() {
  try {
    Webcam.reset();
    Webcam.off('live');
    clearTimeout(looperPromise);
    frameBuffer.clear();
  } catch (e) {
    console.error(e);
  }
}
```
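Each buffered frame is a data URL rather than raw base64, so whichever side unpacks the frames needs to split off the "data:image/...;base64," prefix before decoding. A minimal helper for that split might look like this (illustrative only; the demo may perform this step server-side in the WebApi):

```javascript
// Split a data URL such as "data:image/jpeg;base64,/9j/4AAQ..." into its
// MIME type and raw base64 payload. Hypothetical helper, not part of the demo.
function parseDataUrl(dataUrl) {
  var match = /^data:([^;]+);base64,(.*)$/.exec(dataUrl);
  if (!match) {
    throw new Error("Not a base64 data URL");
  }
  return { mimeType: match[1], base64: match[2] };
}
```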
For details on how the frame sequence data sent by the client should be processed server-side, refer to the documentation in lambda/WebApi.