Computers can now look at an image or video and recognize what's going on and, sometimes, who's in it. Amazon Web Services Rekognition gives your applications the eyes they need to label visual content. In this post, you'll see how to use Rekognition along with MongoDB Stitch to supplement new content with labels as it is inserted into the database.
You can easily detect labels or faces in images and videos in your MongoDB Stitch application using the built-in AWS service. Just add the AWS service, then either use the Stitch client to execute the Rekognition request right from your React.js application, or create a Stitch function and trigger. In a recent Stitchcraft live-coding session on my Twitch channel, I wanted to tag an image using label detection, so I set up a trigger that executed a function after an image was uploaded to my S3 bucket and its metadata was inserted into a collection.
exports = function(changeEvent) {
  // Service clients configured in the Stitch app
  const aws = context.services.get('AWS');
  const mongodb = context.services.get('mongodb-atlas');

  // The newly inserted image-metadata document
  const insertedPic = changeEvent.fullDocument;

  // Point Rekognition at the uploaded S3 object and ask for
  // up to 10 labels with at least 75% confidence
  const args = {
    Image: {
      S3Object: {
        Bucket: insertedPic.s3.bucket,
        Name: insertedPic.s3.key
      }
    },
    MaxLabels: 10,
    MinConfidence: 75.0
  };

  // Run label detection, then store the labels on the same document
  return aws.rekognition()
    .DetectLabels(args)
    .then(result => {
      return mongodb
        .db('data')
        .collection('picstream')
        .updateOne({_id: insertedPic._id}, {$set: {tags: result.Labels}});
    });
};
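DetectLabels returns a Labels array of objects with Name and Confidence fields, which the trigger stores verbatim under tags. Here is a minimal sketch of what the updated document might look like; the label names, confidence values, bucket, and key are all hypothetical placeholders:

```javascript
// A sample DetectLabels result (label names and confidences are made up)
const result = {
  Labels: [
    { Name: 'Dog', Confidence: 97.3 },
    { Name: 'Animal', Confidence: 97.3 },
    { Name: 'Golden Retriever', Confidence: 88.1 }
  ]
};

// After the trigger's updateOne, the metadata document looks like this
// (_id and s3 values are placeholders):
const picDocument = {
  _id: 'placeholder-id',
  s3: { bucket: 'my-upload-bucket', key: 'IMG_0001.jpg' },
  tags: result.Labels
};

// Pull out just the label names, e.g. for display in the React app
const tagNames = picDocument.tags.map(label => label.Name);
console.log(tagNames.join(', '));
```

Storing the full label objects, rather than just the names, keeps the confidence scores around in case you later want to filter or sort tags by how sure Rekognition was.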
With just a couple of service calls, I was able to take an image stored in S3, analyse it with Rekognition, and add the tags to its document. Want to see how it all came together? Watch the recording on YouTube with the GitHub repo in the description. Follow me on Twitch to join me and ask questions live.
-Aydrian Howard
Developer Advocate
NYC
@aydrianh