
MongoDB’s BI Connector: the Smart Connector for Business Intelligence


In today's world, data is being produced and stored all around us. Businesses leverage this data to gain insight into what their users and devices are doing. MongoDB is a great way to store that data: its flexible data model and dynamic schema allow data to be stored in rich, multi-dimensional documents. But most Business Intelligence tools, such as Tableau, Qlik, and Microsoft Excel, need data in a tabular format. This is where MongoDB's Connector for BI (BI Connector) shines.

MongoDB BI Connector

The BI Connector allows MongoDB to be used as a data source for SQL-based business intelligence and analytics platforms. These tools let you create dashboards and data visualization reports on your data, and leveraging them helps you extract hidden insights, such as how your customers are using your products.

The MongoDB Connector for BI is a tool for your data toolbox that acts as a translation layer between the database and the reporting tool. The BI Connector itself stores no data; it simply serves as a bridge between your MongoDB data and your business intelligence tools.

MongoDB BI Connector Flow

The BI Connector bridges the tooling gap for local, on-premises, or hosted instances of MongoDB. If you are using MongoDB Atlas on an M10 or larger cluster, there's a built-in option.

Why Use The BI Connector

Without the BI Connector, you often need to perform an Extract, Transform, and Load (ETL) process to move your data from its "source of truth" in your database to a data lake. With MongoDB and the BI Connector, this costly step can be avoided, and you can perform analysis on your most current data, in real time.

There are four components in a business intelligence system: the database itself, the BI Connector, an Open Database Connectivity (ODBC) data source name (DSN), and finally the business intelligence tool itself. Let's take a look at how to connect all these pieces.
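To give a feel for the moving parts, here is a rough sketch (hostnames, ports, and flags are placeholders; check the docs for your version). You start the BI Connector's mongosqld process pointing at your MongoDB deployment, and any MySQL-protocol client, which is what the ODBC driver speaks, can then query it:

$ # start the BI Connector against a local MongoDB instance
$ mongosqld --mongo-uri "mongodb://localhost:27017"
$ # in another shell, test with any MySQL-compatible client
$ mysql --host 127.0.0.1 --port 3307 --protocol tcp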

I'll be doing this example on macOS, but other systems should be similar. Before I dive in, there are some system requirements you'll need:

  • A MongoDB Atlas account
  • Administrative access to your system
  • ODBC Manager
  • The MongoDB ODBC Driver

Instructions for loading the dataset used in the video into your Atlas cluster can be found here.

    Feel free to leave a comment below if you have questions.

    Get started with MongoDB Atlas today to start using the MongoDB Connector for BI to examine and visualize your data.

Millions of Users and a Developer-Led Culture: How Blinkist Powers its Berlin Startup on MongoDB Atlas


Not unlike other startups, Blinkist grew its roots in a college dorm. Only, its creators didn’t know it at the time. It took years before the founders decided to build a business on their college study tricks. Blinkist condenses nonfiction books into pithy, but accessible 15-minute summaries which you can read or listen to via its app.

“It all started with four friends,” says Sebastian Schleicher, Director of Engineering at Blinkist. “After leaving university, they found jobs and built lifestyles that kept them fully occupied—but they were pretty frustrated because their packed schedules left them no time for reading and learning new things.”

Rather than resign themselves to a life without learning, they racked their brains as to how they could find a way to satisfy their craving for knowledge. They decided to revive their old study habits from university where they would write up key ideas from material that they’d read and then share it with each other. It didn’t take long for them to realise that they could build a business on this model of creating valuable easily accessible content to inspire people to keep learning. In 2012, Blinkist was born.

Six years later, the Berlin-based outfit has nearly 100 employees, but instead of writers and editors, they have Tea Masters and Content Ninjas. Blinkist has no formal hierarchical management structure, having replaced bosses with BOS, the Blinkist Operating System. The app has over five million users and, at its foundation, it has MongoDB Atlas, the fully managed service for MongoDB, running on AWS. But it didn’t always.

“In four years, we had a million users and 2,500 books,” says Schleicher. “We’d introduced audiobooks and seen them become the most important delivery channel. We tripled our revenue, doubled our team, moved into a larger, open-plan office, and even got a dog. Things were good.”

Running into Trouble with Third-Party MongoDB as a Service

Then came an unwelcome plot twist. Blinkist had built its service on Compose, a third-party database as a service, based on MongoDB. MongoDB had been an obvious choice, as the document model provided Blinkist with the flexibility needed to iterate quickly, but the team was too lean to spend time on infrastructure management.

In 2016, Compose unexpectedly decided to change the architecture of its database, creating major obstacles for Blinkist as they would become locked in to an old version of MongoDB. “They left us alone,” says Schleicher. “They said, ‘Here’s a tool, migrate your data.’ I asked if they’d help. No dice. I offered them money. Not interested, no support. After being a customer for all those years? I said goodbye.”

After years of issues, it became clear last year that Blinkist would need to leave Compose, which meant choosing a new database provider. “We looked at migrating to MySQL, we were that desperate. That would have meant freezing development and concentrating on the move ourselves. On a live service. It was bleak.”

Discovering MongoDB Atlas

By this time, MongoDB’s managed cloud Atlas service was well established and seemed to be the logical solution. “We downloaded MongoDB’s free mongomirror tool to make the transition,” says Schleicher, “but we hit a brick wall. Compose had locked us into a very old version of the database and who knows what else, and we couldn’t work it out.”

At that point, Schleicher made a call to MongoDB. MongoDB didn’t say, ‘Do it yourself.’ Instead, they sent their own data ninja—or, in more conventional, business-card wording, a principal consulting engineer. “It was the easiest thing in the world,” Schleicher remembers. “In one day, he implemented four feature requests, got the migration done and our databases were in live sync. Such a great experience.”

Now that Blinkist is on Atlas, Schleicher feels like they have a very solid base for the future. “Performance is terrific. Our mobile app developers accidentally coded in a distributed denial of service attack on our own systems. Every day at midnight, in each time zone, our mobile apps all simultaneously sync. This pushes the requests load up from a normal peak of 7,500 requests a minute to 40,000 continuous. That would have slaughtered the old system, with real business impacts—killing sign-ups and user interactions. This time, nobody noticed anything was wrong."

Blinkist

Right now it feels like we have a big tech advantage. With MongoDB Atlas and AWS, we’re on the shoulders of people who can scale the world. I know for the foreseeable future I have partners I can really rely on.

Sebastian Schleicher, Director of Engineering, Blinkist

Schleicher adds: “We’re building our future through microarchitecture with all the frills. Developers know they don’t have to worry about what’s going on behind the API in MongoDB. It just works. We’re free to look at data analytics and AI—whatever techniques and tools we believe will help us grow—and not spend all our time maintaining a monolithic slab of code.”

With Blinkist’s global ambitions, scaling isn’t just a technical challenge; it tests company culture—no matter how modern—to the limits. MongoDB’s own customer-focused culture, it turns out, is proving as compatible as MongoDB’s data platform.

“Talking to MongoDB isn’t like being exposed to relentless sales pressure. It’s cooperative, it’s reassuring. There are lots of good technical people on tap. It’s holistic, no silos, whatever it takes to help us.”

This partnership is helping make Blinkist a great place to be a developer.

“A new colleague we hired last year told me we’ve created an island of happiness for engineers. Once you have an understanding of the business needs and vision, you get to drive your own projects. We believe in super transparency. Everyone is empowered.”

“Oh, and did I mention we have a dog?”

Atlas is the easiest and fastest way to get started with MongoDB. Deploy a free cluster in minutes.

Testing & Debugging MongoDB Stitch Functions

Testing and debugging serverless functions can be tricky – not so with MongoDB Stitch functions. This post shows how quick and easy it is through the Stitch UI.

How To Pause and Resume Atlas Clusters


Last week we showed you how to list the resources associated with your MongoDB Atlas environment via a simple Python program. This week, let’s extend that program with a more useful feature: the ability to pause and resume clusters. In the Atlas UI this is done via the “Pause Cluster” menu entry, but we can automate it with the Atlas Management API.

Pause a Cluster in MongoDB Atlas

However, when we pause a cluster, the Atlas environment will restart it after seven days. Also, both pausing and resuming via the UI require a login, navigation, etc. Basically, it’s a drag to do this regularly. Yet if you are running clusters for development, they are rarely required late at night or on weekends.

It would be great to have a simple script that pauses and resumes these clusters given a project ID and cluster name. Then we could run it from crontab or our own favorite scheduling program and pause and resume clusters on a defined schedule. We have rewritten the py-atlas-list.py script as py-atlas-cluster.py to do exactly that.
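Under the hood, pausing or resuming is a single call to the Atlas Management API: an HTTP PATCH of the cluster’s paused field. Here is a minimal sketch using the requests library (the project ID and cluster name are placeholders, and the credentials are the same ATLAS_USERNAME and ATLAS_APIKEY used below):

import os
import requests
from requests.auth import HTTPDigestAuth

base = "https://cloud.mongodb.com/api/atlas/v1.0"
project_id = "XXXXXXXXXXXXXXXXXXXX9bab"  # placeholder project ID
cluster_name = "MUGAlyser"

# PATCH the cluster document; set "paused" to False to resume
response = requests.patch(
    "{}/groups/{}/clusters/{}".format(base, project_id, cluster_name),
    auth=HTTPDigestAuth(os.environ["ATLAS_USERNAME"], os.environ["ATLAS_APIKEY"]),
    json={"paused": True},
)
response.raise_for_status()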

The py-atlas-cluster.py script allows you both to list resources and to pause and/or resume clusters using their project ID and cluster name.

$ python py-atlas-cluster.py -h
usage: py-atlas-cluster.py [-h] [--username USERNAME] [--apikey APIKEY]
                           [--project_id PROJECT_ID] [--org_id ORG_ID]
                           [--pause PAUSE_CLUSTER_NAME]
                           [--resume RESUME_CLUSTER_NAME] [--list]

optional arguments:
  -h, --help            show this help message and exit
  --username USERNAME   MongoDB Atlas username
  --apikey APIKEY       MongoDB Atlas API key
  --project_id PROJECT_ID
                        specify project for cluster that is to be paused
  --org_id ORG_ID       specify an organisation to limit what is listed
  --pause PAUSE_CLUSTER_NAME
                        pause named cluster in project specified by
                        --project_id
  --resume RESUME_CLUSTER_NAME
                        resume named cluster in project specified by
                        --project_id
  --list                List of the complete org hierarchy
$

First, list your resources to find the project ID and cluster name you need:

$ python py-atlas-cluster.py --list --org_id XXXXXXXXXXXXXXXXXXXX175c
 1. Org  : 'Open Data at MongoDB',   id=XXXXXXXXXXXXXXXXXXXX175c
  1. Proj : 'JD Stitch Demos',        id=XXXXXXXXXXXXXXXXXXXXcb08
   1. cluster: 'stitch',                 id=XXXXXXXXXXXXXXXXXXXX5697 paused=True
  2. Proj : 'MUGAlyser',              id=XXXXXXXXXXXXXXXXXXXX9bab
   1. cluster: 'MUGAlyser',              id=XXXXXXXXXXXXXXXXXXXXbfba paused=False
  3. Proj : 'Open Data',              id=XXXXXXXXXXXXXXXXXXXX8010
   1. cluster: 'Utility',                id=XXXXXXXXXXXXXXXXXXXX1a03 paused=True
   2. cluster: 'MOT',                    id=XXXXXXXXXXXXXXXXXXXX94dd paused=False
   3. cluster: 'Foodapedia',             id=XXXXXXXXXXXXXXXXXXXX9fbf paused=False
   4. cluster: 'UKPropertyPrices',       id=XXXXXXXXXXXXXXXXXXXX7ac5 paused=False
   5. cluster: 'New-York-Taxi',          id=XXXXXXXXXXXXXXXXXXXXa18a paused=False
   6. cluster: 'demodata',               id=XXXXXXXXXXXXXXXXXXXX2cf8 paused=False
(We have hidden the real resource IDs behind X’s).

To get the project ID, look for the id field of the Proj entry. To get the cluster name, look for the string in quotes after the cluster identifier. In this example, we will use the 'MUGAlyser' project ID and cluster name.

Now to pause the cluster just run:

$ python py-atlas-cluster.py --project_id XXXXXXXXXXXXXXXXXXXX9bab --pause MUGAlyser
Pausing cluster: 'MUGAlyser'
$

To resume a cluster just use the --resume argument instead of the --pause argument. Want to pause or resume more than one cluster in a single project? You can, just by adding multiple --pause or --resume arguments.

Now you just need to add this script to your favourite scheduler. Note that for this example I have already set the environment variables ATLAS_USERNAME and ATLAS_APIKEY, so we don’t need to pass them in on the command line.
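For example, a pair of crontab entries along these lines (schedule and paths are illustrative) would pause the cluster each weekday evening and resume it each morning:

# pause at 20:00 and resume at 07:00, Monday to Friday
0 20 * * 1-5 python /path/to/py-atlas-cluster.py --project_id XXXXXXXXXXXXXXXXXXXX9bab --pause MUGAlyser
0 7 * * 1-5 python /path/to/py-atlas-cluster.py --project_id XXXXXXXXXXXXXXXXXXXX9bab --resume MUGAlyser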

Now go save some money on your development clusters. Your boss will thank you!

Reacting to Auth Events using Stitch Triggers


MongoDB Stitch makes it easy to add authentication to your application. Several authentication providers are available to be configured using the Stitch Admin Console. Recently, authentication triggers were added to Stitch. Functions can now be executed based on authentication events such as user creation, deletion, and login.

During my Stitchcraft live coding sessions, I’ve been creating an Instagram-like application that uses Google Authentication. The Google authentication provider can be configured to return metadata with the authenticated user. I set up my provider to retrieve the user’s email, name, and profile picture. This works well as long as only the authenticated user needs to see it. If you want other users to be able to access this data, you’re going to have to write it to a collection. Before authentication triggers, this could have been an arduous task.

Now it’s as simple as executing a function that performs an upsert on the CREATE operation. Because I also wanted to ensure that the data in this collection stays up to date, I created authentication triggers for CREATE and LOGIN and pointed them both at the single upsert function seen below.

exports = function(authEvent) {
  const mongodb = context.services.get("mongodb-atlas");
  const users = mongodb.db('data').collection('users');

  // The auth event carries the user document and the event time
  const { user, time } = authEvent;

  const newUser = {
    user_id: user.id,
    last_login: time,
    full_name: user.data.name,
    first_name: user.data.first_name,
    last_name: user.data.last_name,
    email: user.data.email,
    picture: user.data.picture
  };

  // Upsert with $set so a LOGIN updates the existing document and a
  // CREATE inserts a new one (updateOne requires update operators)
  return users.updateOne({user_id: user.id}, {$set: newUser}, {upsert: true});
};

During the last Stitchcraft session, I set up this authentication trigger along with a database trigger that watched for changes to the full_name field. Check out the recording, with the GitHub repo linked in the description. Follow me on Twitch to be notified of future Stitchcraft live coding sessions.

-Aydrian Howard
Developer Advocate
NYC
@aydrianh

Creating your first Stitch app? Start with one of the Stitch tutorials.

Want to learn more about MongoDB Stitch? Read the white paper.

Sending Text Messages with MongoDB Stitch & Twilio

How to send text messages from your app using MongoDB Stitch and Twilio.

How Planable Uses MongoDB Atlas to Help Social Media Teams Move Their Creative Processes Forward


In 2015, Nicolae Gudumac was working at a social media agency managing hundreds of campaigns for clients. However, with each campaign requiring multiple rounds of review, he needed a way to streamline the disjointed feedback loop and bring everyone onto the same page. Along with his co-founders, Xenia Muntean and Vlad Calus, he began building a platform that would streamline this time-consuming process for social media managers, agencies, and their clients.

The team created Planable, a platform that simplifies planning, visualizing, and approving social media posts. The tool feels like a live mock-up of the social feed, making it easy for teams to collaborate and give real-time feedback in a familiar format.

Prioritizing Developer Velocity From the Beginning

As the newly minted CTO, Nicolae decided to use MongoDB’s document data model to help his team innovate quickly to stay ahead of the ever-changing social media landscape. “With social media content, requirements are constantly changing as platforms evolve and introduce new formats. Having a flexible data model has been a boon to our productivity.”

He also needed to build a foundation that could scale with them as the business grew. Since Planable is a collaboration tool that needs to keep users synced in real time, Nicolae built the tool on top of Node.js and WebSockets. The team harnessed MongoDB’s oplog tailing functionality to send real-time updates to all connected users when relevant data changed, and recently started to leverage MongoDB change streams to make this process simpler and more scalable. To easily scale their app servers at peak times, Nicolae chose to run Planable on AWS’s container service.

Migrating to MongoDB Atlas

For managing their MongoDB deployment, the team started off using Compose.com but were having issues with restoring backups and only had access to a very limited set of configuration options. Compose also charged a premium for upgrading their storage engine to WiredTiger. MongoDB Atlas, with its queryable backup snapshots, automated upgrades, and configuration flexibility — especially the ease at which clusters can be scaled horizontally — looked very appealing.

Planable

We needed to be able to provide our clients with a platform that is as reliable as we are. Atlas’ native cloud first and scale-out architecture aligned well with our increasing performance demands and usage growth.

Nicolae Gudumac, CTO, Planable

When Planable was accepted into the MongoDB Startup Accelerator, the timing was right to make the move to MongoDB Atlas. The team had customers all over the world and couldn’t afford any downtime with the migration. They used the Atlas Live Migration Service to move over their data from Compose with no downtime.

Currently, the team of 6 is split between engineering and business. With a small engineering team, they’re hyper-focused on product improvements and new features that will drive the business forward. Atlas features such as the Real-Time Performance panel and the Performance Advisor, which monitors clusters for slow queries and automatically suggests indexes to improve performance, allow the team to dedicate more of their attention to application improvements. According to Nicolae, “The Performance Advisor has made indexing the database and optimizing queries a no-brainer."

Queryable backups have also helped the team quickly address customer questions and as Nicolae recalls, “It’s saved us numerous times when someone accidentally dropped/updated a few documents and they needed to be restored. We’ve managed to quickly inspect and restore data in just a few clicks.”

The team is looking forward to the upcoming release of MongoDB Charts, which is currently available in beta. “Charts will enable our marketing and business team to gain insights from our database without resorting to sophisticated and expensive BI tools,” says Nicolae.

Planable is bringing thousands of users onto its platform each month and is well on its way to becoming the default tool for social media collaboration.

MongoDB Atlas has helped this fast-growing startup focus on helping social media teams move their process forward and allows Nicolae to give valuable time back to his team. “There’s no doubt in my mind that moving to MongoDB Atlas has increased our team’s productivity.”

Atlas is the easiest and fastest way to get started with MongoDB. Deploy a free cluster in minutes.

Fraud Detection at FICO with MongoDB and Microservices


FICO is more than just the FICO credit score. Founded in 1956, FICO also offers analytics applications for customer acquisition, service, and security, plus tools for decision management.

One of those applications is the Falcon Assurance Navigator (FAN), a fraud detection system that monitors purchasing and expenses through the full procure-to-pay cycle. Consider an expense report: the entities involved include the reporter, the approver, the vendor, the department or business unit, the expense line items, and more. A single report has multiple line items, where each line may be broken into different expense codes, different budget sources, and so on. This translates into a complicated data model that can be nested 6 or 7 layers deep – a great match for MongoDB’s document model, but quite hard to represent in the tabular model of relational databases.

FAN Architecture Overview

The fraud detection engine consists of a series of microservices that operate on transactions in queues that are persisted in MongoDB:

  • Each transaction arrives in a receiver service, which places it into a queue.
  • An attachment processor service checks for an attachment; if one exists, it sends it to an OCR service and stores the transaction enriched with the OCR data.
  • A context creator service analyzes it and associates it with any past transactions that are related to it.
  • A decision execution engine runs the rules that have been set up by the client and identifies violations.
  • One or more analytics engines review transactions and flag outliers.
  • Now decorated with a score, the transaction goes to a case manager service, which decides whether to create a case for human follow-up based on any identified issues.
  • At the same time, a notification manager passes updates on the processing of each transaction back to the client’s expense/procurement system.

To learn more, watch FICO’s presentation at MongoDB World 2018.


Hacking for Resilience with MongoDB Stitch at PennApps XVIII


Hosted and run by students at the University of Pennsylvania, PennApps is billed as “The original hackathon.” The eighteenth iteration of the nation's first college hackathon kicked off on Friday, September 7th at 7:30 pm, with participants hacking away until Sunday, September 9th at 8:00 am.

MongoDB was a technology choice for many of the hackathon teams, and as the weekend progressed, participants leveraging MongoDB stopped by to share details of their projects.

One application that stood out immediately was pitched by its team as a “100% offline communication app” called Babble. The trio from Carnegie Mellon University spoke enthusiastically about the app they were developing.

“Babble will be the world’s first chat platform that can be installed, set up, and used 100% offline,” said Manny Eppinger, a junior studying CS at CMU.

The Babble development team
From left to right: Manny Eppinger, Michael Lynn (MongoDB), Conlon Novak, and Aneek Mukerjee

In keeping with the PennApps XVIII theme of “HACK-FOR-RESILIENCE”, a critical design goal of Babble is to be able to support 100% offline utilization including application installation via near-field communication (NFC).

Imagine you’re in the midst of a disaster scenario where the internet infrastructure is damaged or severely degraded. Communication into and out of these areas is absolutely critical. Babble asks the questions:

  • What if you didn’t have to rely on that infrastructure to communicate?
  • What if you could rely on what you do have -- people, cell phones, and physical proximity?

Working in a peer-to-peer model, each Babble user’s device keeps a localized ledger of all messages it has sent and received, as well as the ledgers of each device that its instance of Babble has connected to directly via Android Nearby Connections.

The team leveraged MongoDB Stitch and MongoDB Mobile, now in beta, to ensure that the app captures and stores chats and communication from its users and, when a connection becomes available, automatically syncs with the online version of the database.

Babble Stitch Diagram

As hackathon mentors and judges for the event, my team and I were so impressed with the team’s vision and innovation that we chose them as recipients of the Best Use of MongoDB Stitch award, which includes a prize package valued at $500.

Whether you’re a student hacker or an engineer simply looking to get your brilliant app idea off the ground, I’d strongly encourage you to take a look at MongoDB Atlas, MongoDB Stitch, and MongoDB Mobile to help you accelerate your innovation cycle and reduce the time you spend building and managing servers and replicating boilerplate code.

Check out project Babble on Devpost.
Are you a developer, advocate or similar with a combination of excellent coding and communication skills and a passion for helping other developers be awesome? We’re hiring at MongoDB and we’d love to talk with you.

MongoDB Connector for Apache Spark now Officially Certified by Cloudera


We are delighted to announce that the MongoDB Connector for Apache Spark is officially certified by Cloudera. MongoDB users may already integrate Spark and MongoDB using the MongoDB Connector for Apache Spark, a fully supported package maintained by MongoDB. This connector allows you to perform advanced analytics and machine learning against the data sets that reside in MongoDB. Users of Cloudera may use this same connector to run Spark jobs from their managed clusters against both MongoDB Atlas and self-managed MongoDB instances.

Apache Spark and MongoDB are a potent analytics combination. MongoDB’s flexible schema, secondary indexing, aggregation pipelines, and workload isolation make it easy for users to efficiently process data drawn from multiple sources into a single database with zero impact on other business-critical database operations. Running Spark jobs directly on MongoDB also reduces operational overhead: it eliminates the need to ETL duplicate data to a separate cluster of HDFS servers, greatly simplifying your architecture and increasing the speed at which analytics can be executed.
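As an illustration, with the connector package on your Spark classpath, reading a MongoDB collection into a DataFrame looks roughly like this in PySpark (the URI, database, collection, and field names are placeholders):

from pyspark.sql import SparkSession

# Point Spark at a MongoDB collection through the connector
spark = SparkSession.builder \
    .appName("mongodb-spark-example") \
    .config("spark.mongodb.input.uri",
            "mongodb+srv://user:password@example-cluster.mongodb.net/sales.orders") \
    .getOrCreate()

# Load the collection as a DataFrame and run a simple aggregation
df = spark.read.format("com.mongodb.spark.sql.DefaultSource").load()
df.groupBy("status").count().show()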

MongoDB Atlas, our on-demand, fully managed cloud database service for MongoDB, makes it even easier to run sophisticated analytics processing by eliminating the operational overhead of managing database clusters directly. By combining Cloudera and MongoDB, Atlas users can benefit from a fully managed analytics platform, freeing engineering resources to focus on their core business domain and deliver actionable insights quickly.


Creating a Data Enabled API in 10 Minutes with MongoDB Stitch


Creating an API that exposes data doesn’t have to be complicated. With MongoDB Stitch, you can create a data-enabled endpoint in about 10 minutes or less.

At the heart of the entire process are MongoDB Stitch’s Services. There are several to choose from; to create a data-enabled endpoint, you’ll choose the HTTP Service with a Webhook.

Adding a Stitch Service

When you create an HTTP Service, you’re enabling access to this service from Stitch’s serverless functions in the form of an object called context.services. More on that later when we create a serverless function attached to this service.

Name and add the service, and you’ll then get to create an “Incoming Webhook”. This is the process that will be contacted when your clients request data from your API.

Call the webhook whatever you like, and set the parameters as you see below:

We’ll create this API to respond with results to GET requests. Next up, you’ll get to create the logic in a function that will be executed whenever your API is contacted with a GET request.

Defining the Function

Before we modify this script to return data, let’s take a look at the Settings tab — this is where you’ll find the URL where your clients will reach your API.

That’s it — you’ve configured your API. It’s not going to do anything interesting yet; in fact, the default function responds to requests with “Hello World”. Let’s add some data.

Assuming we have a database called mydatabase and a collection of contact data called mycollection, let’s write a function for our service:

Creating a function to return data from a collection in MongoDB Stitch

And here’s the source:

exports = function(payload) {
  // Get a handle to the Atlas service and the collection to expose
  const mongodb = context.services.get("mongodb-atlas");
  const mycollection = mongodb.db("mydatabase").collection("mycollection");
  // Return every document in the collection as an array
  return mycollection.find({}).toArray();
};

This exposes all documents in the database whenever a client calls the webhook URL associated with our HTTP Service. That’s it.

Let’s use Postman to show how this works. Grab your API Endpoint URL from the service settings screen. Mine is as follows — yours will differ.


https://webhooks.mongodb-stitch.com/api/client/v2.0/app/devrel-mrmrq/service/api/incoming_webhook/webhook0


Paste that into the GET URL field and hit Send. You should see the documents from your collection returned as a JSON array.
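If you would rather test from the command line than Postman, the same request with curl (substituting your own webhook URL for the placeholder) returns the same JSON array:

$ curl https://webhooks.mongodb-stitch.com/api/client/v2.0/app/<your-app-id>/service/api/incoming_webhook/webhook0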

Check out the GitHub repository to review the code and try it yourself, and watch the screencast where I create a data-enabled API in 10 minutes with MongoDB Stitch.

Want to try this for yourself? Sign up for a free MongoDB Atlas account. Looking to leverage an API for integration with MongoDB? Read Andrew Morgan’s article on Building a REST API with MongoDB Stitch.


The MongoDB Summer ‘18 Intern Series: From Learning How a Computer Works to Helping Build MongoDB Stitch


Most interns joining us for the MongoDB summer engineering program are in the process of pursuing a degree in computer science and come looking for a hands-on, impactful work experience.

As a sophomore in high school, Julia Ruddy was introduced to computer science in a basic CS class that received so much positive feedback that her school added an Advanced Topics in CS course to the curriculum for her senior year. When Julia started her freshman year at Princeton University, she decided to pursue a degree in electrical engineering, and she joined us this summer as one of three interns on our Stitch team.

Andrea Dooley: Why did you decide to declare electrical engineering as your major?
Julia Ruddy: The Advanced Topics course in high school went very low level. We started the course working with transistors and proceeded to build up to the level of writing a Tetris game. It was interesting to see how it all fit together from transistors and binary to code you can write a program on. It was at that point I considered electrical engineering because as interested as I was in CS, electrical engineering gives you the opportunity to see what’s under the hood. I wanted to understand how a computer worked from 1s and 0s to building something like MongoDB.

AD: So you’ll graduate with a degree in Electrical Engineering. What experience have you had with Computer Science?
JR: One of my priorities was to keep up with the computer science schedule, so I took a lot of upper-level CS classes. Last summer during an internship at an early stage startup, my work consisted of half hardware and half software. Through that internship, I learned I didn’t want to work in the hardware space. I liked the hardware aspect of my work but found I enjoyed my days doing software more. Although I love learning about it, I find building hardware from scratch to be very tedious, and it’s hard to debug. Also, I find the software industry as a whole more intriguing.

AD: How did you first learn about MongoDB?
JR: A good friend of mine from Princeton interned here last summer and encouraged me to attend the open house. It seemed like a cool place to work and piqued my interest. When I got deeper into the search, I found MongoDB especially attractive because it’s an established company, but smaller in size. I know sometimes at larger organizations your work can get lost, but my friend vouched for the work he did here and for the impact it had on the business.

AD: As someone not fully immersed in CS as other intern candidates might be, did you find the interview to be particularly difficult?
JR: From my experience, software interviews in general follow a similar pattern regarding data structures and algorithms. I did a ton of prep in those areas, ensuring I was very familiar with them. During my interviews here, everyone was very approachable and easy to talk to, and the conversations flowed naturally. If my interviewers had any hesitations regarding my experience, I didn’t notice them.

AD: What team are you working on this summer?
JR: I was given my first choice, which was to work on the Stitch team. Stitch is a serverless platform designed to help people focus on the interesting and exciting pieces of their applications, rather than get bogged down with boilerplate and tedious back end code. It’s a newer team for a relatively new product, as well as something people are talking a lot about. I wanted to be on a team that was on the forefront of the upcoming MongoDB release.

AD: What project are you working on for Stitch?
JR: There is no one concrete project within Stitch. I’ve been picking up tickets, some bigger than others, working on both the front end and back end. Going into my internship, I had zero front end experience and I felt that if I wanted to become a respected full stack engineer, I needed to change that. So, my goal for the summer was to pick up as many UI tickets as I could. I actually really enjoyed front end development and learned a lot. Overall, this summer I’ve been able to work on many different things, which I find to be more similar to what a day in the life of a full-time engineer would be like. I now know what a career in software engineering entails, and it’s fun!

AD: What’s one of the bigger tickets you were able to work on?
JR: One of the bigger tickets was to create a generic AWS service in the UI, which adds an extra layer of ease for our users. They used to have to edit code themselves to do more specific actions, but now they can choose from a drop-down. I’ve also been working through a series of UI tickets for Stitch usage metrics, a real-time visual representation for users to see how much data they’ve used and how many transactions they’ve done, which will help with transparency in billing.

AD: Aside from project work, what has been one of the most memorable aspects of your internship?
JR: I worked with a group of interns on a project for Skunkworks, MongoDB’s internal hackathon. We built a computer game for people with minimal technical skills, to help them get familiar with MongoDB query language. The user plays a detective, and the goal is to solve the mystery of the missing emerald leaf in the MongoDB museum by querying databases. When the detective is completing one of the tasks to help solve the mystery, there is a prompt to drag and drop the proper argument into the query. We made it to the final round to present the game to the entire office and won the award for “Most Fun.”

AD: Is there anything you learned during your time at MongoDB that surprised you?
JR: I found the Speaker Series with our CPO [Chief People Officer] Dan Heasman to be really interesting mostly because it’s the side of a company I don’t ever think about. The idea that there is someone dedicated to fostering a great culture, managing how people interact, and maintaining the vibe is new to me, but he was so clear and concise in his approach I learned that there actually is a science to making people feel comfortable and welcome at work.

AD: What’s one key takeaway from your experience as a MongoDB intern?
JR: After this internship, I can confidently say software engineering is what I want to pursue after I graduate. One of my worries beforehand was that it was an isolated career, where you code all day and don’t have much interaction with other people, but my experience at MongoDB has shown me that there are always people willing to help, and asking for help, and there is a lot of collaboration in between. It’s been really rewarding to be able to write code that fits into a massive code base like MongoDB, as opposed to working on an isolated project as I would perhaps at another internship, or at school.

MongoDB On The Road - DevSharp in Gdansk, Poland


DevSharp 2018 was held on September 21st, 2018 at the Stary Maneż cultural center in Gdansk. It’s only 15 minutes from the airport, so it’s very easy to get there, and of course all the conference talks were in English.

This free conference was such a victim of its own success that the organizers had to increase the number of places. Initially planned for 250 people, it drew about 400 passionate developers.

The conference was sponsored by IHS Markit, automotiveMastermind, Carfax and, of course, MongoDB. Seven talks were scheduled during the day, from companies such as Microsoft, 8x8, and JetBrains.

I also happened to have a slot to speak about MongoDB Atlas & MongoDB Stitch, and I explained how you can benefit from our platforms to accelerate and simplify your interactions with your data.

I shared my presentation here, so feel free to have a look. But I did a lot of live demos leveraging MongoDB Compass, MongoDB Charts, MongoDB Atlas, and MongoDB Stitch, so make sure to come and see me on stage next time :-).

I would definitely recommend this conference, especially if you are a C# developer so please feel free to join us next year.

I received a lot of questions at the end of my presentation about the MongoDB Drivers and the new Multi-Document ACID Transactions we introduced in MongoDB 4.0.

It was also the largest audience I have spoken to so far. I am really proud, and I can’t wait to go again next year :-). Special thanks to the team for the warm welcome!

PyMongo Monday - Episode 3 - Read



Previously we covered getting set up with PyMongo (episode 1) and the Create part of CRUD (episode 2).

In this episode (episode 3) we are going to cover the Read part of CRUD. MongoDB provides a query interface through the find function. We are going to demonstrate Read by running find queries on a collection hosted in MongoDB Atlas. The MongoDB connection string is:

mongodb+srv://demo:demo@demodata-rgl39.mongodb.net/test?retryWrites=true

This is a cluster running a database called demo with a single collection called zipcodes. Every ZIP code in the US is in this database.

To connect to this cluster we are going to use the Python shell.

$ cd ep003
$ pipenv shell
Launching subshell in virtual environment…
JD10Gen:ep003 jdrumgoole$  . /Users/jdrumgoole/.local/share/virtualenvs/ep003-blzuFbED/bin/activate
(ep003-blzuFbED) JD10Gen:ep003 jdrumgoole$ python
Python 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 03:03:55)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> from pymongo import MongoClient
>>> client = MongoClient(host="mongodb+srv://demo:demo@demodata-rgl39.mongodb.net/test?retryWrites=true")
>>> db = client["demo"]
>>> zipcodes=db["zipcodes"]
>>> zipcodes.find_one()
{'_id': '01069', 'city': 'PALMER', 'loc': [-72.328785, 42.176233], 'pop': 9778, 'state': 'MA'}
>>>

The find_one query will get the first document in the collection. You can see the structure of the fields in the returned document. The _id is the ZIP code. The city is the city name. The loc is the GPS coordinates of each ZIP code. The pop is the population size, and the state is the two-letter state code. We are connecting with the default user demo with the password demo. This user has read-only access to this database and collection.

So what if we want to select all the ZIP codes for a particular city?

Querying in MongoDB consists of constructing a partial JSON document that matches the fields you want to select on. So to get all the ZIP codes in the city of PALMER we use the following query:

>>> zipcodes.find({'city': 'PALMER'})
<pymongo.cursor.Cursor object at 0x104c155c0>
>>>

Note we are using find() rather than find_one() as we want to return all the matching documents. In this case find() will return a cursor.

To print the cursor contents just keep calling .next() on the cursor as follows:

>>> cursor=zipcodes.find({'city': 'PALMER'})
>>> cursor.next()
{'_id': '01069', 'city': 'PALMER', 'loc': [-72.328785, 42.176233], 'pop': 9778, 'state': 'MA'}
>>> cursor.next()
{'_id': '37365', 'city': 'PALMER', 'loc': [-85.564272, 35.374062], 'pop': 1685, 'state': 'TN'}
>>> cursor.next()
{'_id': '50571', 'city': 'PALMER', 'loc': [-94.543155, 42.641871], 'pop': 1119, 'state': 'IA'}
>>> cursor.next()
{'_id': '66962', 'city': 'PALMER', 'loc': [-97.112214, 39.619165], 'pop': 276, 'state': 'KS'}
>>> cursor.next()
{'_id': '68864', 'city': 'PALMER', 'loc': [-98.241146, 41.178757], 'pop': 1142, 'state': 'NE'}
>>> cursor.next()
{'_id': '75152', 'city': 'PALMER', 'loc': [-96.679429, 32.438714], 'pop': 2605, 'state': 'TX'}
>>> cursor.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/jdrumgoole/.local/share/virtualenvs/ep003-blzuFbED/lib/python3.6/site-packages/pymongo/cursor.py", line 1197, in next
    raise StopIteration
StopIteration

As you can see cursors follow the Python iterator protocol and will raise a StopIteration exception when the cursor is exhausted.

However, calling .next() continuously is a bit of a drag. Instead, you can import the pymongo_shell package and call the print_cursor() function. It will print out twenty records at a time.

>>> from pymongo_shell import print_cursor
>>> print_cursor(zipcodes.find({'city': 'PALMER'}))
{'_id': '01069', 'city': 'PALMER', 'loc': [-72.328785, 42.176233], 'pop': 9778, 'state': 'MA'}
{'_id': '37365', 'city': 'PALMER', 'loc': [-85.564272, 35.374062], 'pop': 1685, 'state': 'TN'}
{'_id': '50571', 'city': 'PALMER', 'loc': [-94.543155, 42.641871], 'pop': 1119, 'state': 'IA'}
{'_id': '66962', 'city': 'PALMER', 'loc': [-97.112214, 39.619165], 'pop': 276, 'state': 'KS'}
{'_id': '68864', 'city': 'PALMER', 'loc': [-98.241146, 41.178757], 'pop': 1142, 'state': 'NE'}
{'_id': '75152', 'city': 'PALMER', 'loc': [-96.679429, 32.438714], 'pop': 2605, 'state': 'TX'}
>>>

If we don't need all the fields in the document, we can use a projection to remove some. A projection is a second document argument to the find() function, and it explicitly specifies the fields to return.

>>> print_cursor(zipcodes.find({'city': 'PALMER'}, {'city':1,'pop':1}))
{'_id': '01069', 'city': 'PALMER', 'pop': 9778}
{'_id': '37365', 'city': 'PALMER', 'pop': 1685}
{'_id': '50571', 'city': 'PALMER', 'pop': 1119}
{'_id': '66962', 'city': 'PALMER', 'pop': 276}
{'_id': '68864', 'city': 'PALMER', 'pop': 1142}
{'_id': '75152', 'city': 'PALMER', 'pop': 2605}
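Projections can also exclude fields. Note that _id is returned by default unless you exclude it explicitly. A quick sketch against the same collection (output derived from the PALMER documents above):

>>> print_cursor(zipcodes.find({'city': 'PALMER'}, {'loc': 0, '_id': 0}))
{'city': 'PALMER', 'pop': 9778, 'state': 'MA'}
{'city': 'PALMER', 'pop': 1685, 'state': 'TN'}
{'city': 'PALMER', 'pop': 1119, 'state': 'IA'}
{'city': 'PALMER', 'pop': 276, 'state': 'KS'}
{'city': 'PALMER', 'pop': 1142, 'state': 'NE'}
{'city': 'PALMER', 'pop': 2605, 'state': 'TX'}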

To include multiple fields in a query, just add them to the query document. The fields are combined with a boolean AND to select the documents that will be returned.

>>> print_cursor(zipcodes.find({'city': 'PALMER', 'state': 'MA'}, {'city':1,'pop':1}))
{'_id': '01069', 'city': 'PALMER', 'pop': 9778}
>>>

To pick documents that match one condition or another, we can use the $or operator.

>>> print_cursor(zipcodes.find({ '$or' : [ {'city': 'PALMER' }, {'state': 'MA'}]}))
{'_id': '01069', 'city': 'PALMER', 'loc': [-72.328785, 42.176233], 'pop': 9778, 'state': 'MA'}
{'_id': '01002', 'city': 'CUSHMAN', 'loc': [-72.51565, 42.377017], 'pop': 36963, 'state': 'MA'}
{'_id': '01012', 'city': 'CHESTERFIELD', 'loc': [-72.833309, 42.38167], 'pop': 177, 'state': 'MA'}
{'_id': '01073', 'city': 'SOUTHAMPTON', 'loc': [-72.719381, 42.224697], 'pop': 4478, 'state': 'MA'}
{'_id': '01096', 'city': 'WILLIAMSBURG', 'loc': [-72.777989, 42.408522], 'pop': 2295, 'state': 'MA'}
{'_id': '01262', 'city': 'STOCKBRIDGE', 'loc': [-73.322263, 42.30104], 'pop': 2200, 'state': 'MA'}
{'_id': '01240', 'city': 'LENOX', 'loc': [-73.271322, 42.364241], 'pop': 5001, 'state': 'MA'}
{'_id': '01370', 'city': 'SHELBURNE FALLS', 'loc': [-72.739059, 42.602203], 'pop': 4525, 'state': 'MA'}
{'_id': '01340', 'city': 'COLRAIN', 'loc': [-72.726508, 42.67905], 'pop': 2050, 'state': 'MA'}
{'_id': '01462', 'city': 'LUNENBURG', 'loc': [-71.726642, 42.58843], 'pop': 9117, 'state': 'MA'}
{'_id': '01473', 'city': 'WESTMINSTER', 'loc': [-71.909599, 42.548319], 'pop': 6191, 'state': 'MA'}
{'_id': '01510', 'city': 'CLINTON', 'loc': [-71.682847, 42.418147], 'pop': 13269, 'state': 'MA'}
{'_id': '01569', 'city': 'UXBRIDGE', 'loc': [-71.632869, 42.074426], 'pop': 10364, 'state': 'MA'}
{'_id': '01775', 'city': 'STOW', 'loc': [-71.515019, 42.430785], 'pop': 5328, 'state': 'MA'}
{'_id': '01835', 'city': 'BRADFORD', 'loc': [-71.08549, 42.758597], 'pop': 12078, 'state': 'MA'}
{'_id': '01845', 'city': 'NORTH ANDOVER', 'loc': [-71.109004, 42.682583], 'pop': 22792, 'state': 'MA'}
{'_id': '01851', 'city': 'LOWELL', 'loc': [-71.332882, 42.631548], 'pop': 28154, 'state': 'MA'}
{'_id': '01867', 'city': 'READING', 'loc': [-71.109021, 42.527986], 'pop': 22539, 'state': 'MA'}
{'_id': '01906', 'city': 'SAUGUS', 'loc': [-71.011093, 42.463344], 'pop': 25487, 'state': 'MA'}
{'_id': '01929', 'city': 'ESSEX', 'loc': [-70.782794, 42.628629], 'pop': 3260, 'state': 'MA'}
Hit Return to continue

We can do range selections by using the $lt and $gt operators.

>>> print_cursor(zipcodes.find({'pop' : { '$lt':8, '$gt':5}}))
{'_id': '05901', 'city': 'AVERILL', 'loc': [-71.700268, 44.992304], 'pop': 7, 'state': 'VT'}
{'_id': '12874', 'city': 'SILVER BAY', 'loc': [-73.507062, 43.697804], 'pop': 7, 'state': 'NY'}
{'_id': '32830', 'city': 'LAKE BUENA VISTA', 'loc': [-81.519034, 28.369378], 'pop': 6, 'state': 'FL'}
{'_id': '59058', 'city': 'MOSBY', 'loc': [-107.789149, 46.900453], 'pop': 7, 'state': 'MT'}
{'_id': '59242', 'city': 'HOMESTEAD', 'loc': [-104.591805, 48.429616], 'pop': 7, 'state': 'MT'}
{'_id': '71630', 'city': 'ARKANSAS CITY', 'loc': [-91.232529, 33.614328], 'pop': 7, 'state': 'AR'}
{'_id': '82224', 'city': 'LOST SPRINGS', 'loc': [-104.920901, 42.729835], 'pop': 6, 'state': 'WY'}
{'_id': '88412', 'city': 'BUEYEROS', 'loc': [-103.666894, 36.013541], 'pop': 7, 'state': 'NM'}
{'_id': '95552', 'city': 'MAD RIVER', 'loc': [-123.413994, 40.352352], 'pop': 6, 'state': 'CA'}
{'_id': '99653', 'city': 'PORT ALSWORTH', 'loc': [-154.433803, 60.636416], 'pop': 7, 'state': 'AK'}
>>>

Again, sets of $lt and $gt are combined with a boolean AND. If you need different logic, you can use the boolean operators $and, $or, $not, and $nor.
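For example, the same range can be expressed with an explicit $and and combined with another clause (the output here follows from the Montana documents shown above):

>>> print_cursor(zipcodes.find({'$and': [{'pop': {'$lt': 8, '$gt': 5}}, {'state': 'MT'}]}))
{'_id': '59058', 'city': 'MOSBY', 'loc': [-107.789149, 46.900453], 'pop': 7, 'state': 'MT'}
{'_id': '59242', 'city': 'HOMESTEAD', 'loc': [-104.591805, 48.429616], 'pop': 7, 'state': 'MT'}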

Conclusion

Today we have seen how to query documents using a query template, how to reduce the output using projections, and how to create more complex queries using boolean operators and $lt and $gt.

Next time we will talk about the Update portion of CRUD.

MongoDB has a very rich and full-featured query language, including support for querying using full-text search, geospatial coordinates, and nested documents. Give the query language a spin with the Python shell using the tools we outlined above. The complete ZIP codes dataset is publicly available for read queries at the MongoDB URI:

mongodb+srv://demo:demo@demodata-rgl39.mongodb.net/test?retryWrites=true

Try MongoDB Atlas via the free-tier today. A free MongoDB cluster for your own personal use forever!

Welcome to Hacktoberfest 2018!

Hacktoberfest is a month-long celebration of open source software, started originally by our friends at DigitalOcean, and held in partnership with GitHub and Twilio.

MongoDB Stitch Authentication Triggers

See how Stitch authentication triggers let you use third-party user-authentication services such as Facebook or Google without losing the ability to perform custom actions when users register, sign in, or leave.

The MongoDB Summer ‘18 Intern Series: the Node Driver, Inside and Out


Rebecca Weinberger joined us this year as a Summer ‘18 intern, but this was not her first encounter with MongoDB. During her sophomore year at MIT, Rebecca participated in a web development competition. She and her team used the MongoDB Node.js Driver in their application and won their division for first time participants, so it’s no surprise Rebecca spent her time at MongoDB on the Node Driver Team.

Andrea Dooley: How did you first become interested in computer science?
Rebecca Weinberger: My older brother studied computer science in college, while I was still in high school. I always liked computers but never did any real coding. He would show me snippets here and there of what he was working on, and I thought it was really interesting, even though I didn’t totally understand it. When I went to MIT, I had a hunch that I would enjoy computer science, and it definitely was the right choice for me.

AD: Can you talk more about the web development competition? What did you create using the Node Driver?
RW: We made an app for trading products within the MIT community. For example, if you are moving out of your dorm and wanted to get rid of a lamp, you could upload a photo and an asking price, and someone could purchase it from you. It was login-secured and had a built-in messaging system. It was really easy to integrate MongoDB into the project. We had to store information about the products and the users, and because of MongoDB’s flexible schema, we were able to develop quickly and make changes later if we needed to. The experience really kickstarted my passion for building web apps.

AD: Did your experience early on working with MongoDB have anything to do with your decision to apply for an internship?
RW: I ultimately ended up coming to MongoDB because I got the sense the internship program was exceptionally good here. Just from talking to recruiters and doing research online, I learned great things about the program. I liked how fast the company is growing and the kind of work people are doing. I knew about the product but wanted to know more about the company. I remember the interview process being the most positive interview experience I’ve had. All the interviewers were very supportive. They didn’t make me feel stressed or stuck. They asked questions that were fun to solve, like how to determine if a binary tree is symmetrical, and how to reverse an integer without making it into a string.

AD: Was it a coincidence that you wound up on the Node Driver team, or did you request it?
RW: Even though I had used the Node Driver before, I didn’t necessarily know much about it. I knew it provided the API for developers to use MongoDB, but I didn’t know what it looked like on the inside. It was my first choice of team, not only because I had some familiarity with it, but because I knew it was a highly regarded product with high impact on the millions of people who use it.

AD: Can you speak more to the level of impact?
RW: Impact was one of the main things I was looking for in a company. I remember working on a ticket that got some attention from the community. It was a high priority bug fix, and when I put up a pull request with the fix, people from the community around the world started commenting on it on GitHub. It helped reiterate that MongoDB is open source. Every user can see what is going on internally in the driver, including fixes for bugs significant to their projects. The immediate feedback from the community is proof that the work is incredibly significant. One thing I really wanted was to ensure any work I did would not get lost or not see the light of day, and this is something that proved true throughout my experience. This was super important to me.

AD: What are some of the specific projects you got to work on?
RW: At the beginning of the internship, the codebase had a problem with the way deprecation warnings were emitted. Methods were inconsistent, hard to read, and as a result hard to change. I worked with another intern on the team, and we created a uniform deprecation function and integrated it into the codebase. It was a good starting project and efficiently introduced us to the codebase. We also worked together on a framework for running tests on BSON libraries in the browser, or client-side. These libraries were starting to be used client-side, for things like MongoDB Stitch, so we wanted to make sure they behaved correctly client-side as well as server-side. It was not intended to be a huge project at first, but it had a lot of kinks and niche bugs to work out. At some point, we had to pull in people from other teams to help. I ended up learning a ton, especially about some specific packages.

AD: Did your previous experience with Node Driver help you ramp more quickly?
RW: I remembered how the API looked on the outside when I first worked with it because I was simply calling those methods. But once on the team, I was seeing and working on the driver from the inside. For example, when I call a method and pass it an option, I can see exactly how that option is passed through the code and eventually applied to the command to achieve the desired effect. Our mentors did a really good job of getting me quickly comfortable with a huge codebase. They assigned projects very thoughtfully, so there was a natural progression through the summer.

AD: What was one of the most interesting things you learned during your internship at MongoDB?
RW: I didn’t realize how far-reaching MongoDB is until I came here. It’s used in so many places, from the sports industry to airline travel. It makes sense when you think about it, but as a consumer, you usually don’t think about the database behind the applications you’re using. Almost every website needs a database. It was so cool to see how widely used MongoDB is. As far as academic learning, I’m a lot more comfortable with writing JavaScript in the Node environment. There’s a lot of features Node has that I was not aware of, and they came into play in various ways throughout these projects.

Building Intelligent Apps with MongoDB and Google Cloud - Part 1


Data analytics is a perpetual underachiever. Every generation of tools promises us better insight and never quite delivers. So we get stuck re-platforming and re-designing, hoping the next iteration will finally get us to the intelligence utopia. Yet modern applications must provide rich experiences, offer decision support, and continuously learn and adapt to win their users. Analytics and AI are at the heart of these Intelligent Apps.

We decided to build an Intelligent App to demonstrate how easy it is to take advantage of ML and AI cloud services without hiring a team of data scientists. First, we built a simple e-commerce application - MongoDB SwagStore - using React and MongoDB Stitch with MongoDB Atlas on GCP. Stitch saved us hundreds of lines of code and our app was ready in days. But aside from implementing stock replenishment notifications with Stitch Triggers and Twilio, it wasn’t very intelligent... yet.

We equipped SwagStore with a product recommendation engine. Rather than implementing a recommendation engine from scratch, we used Google Cloud ML to train and tune a TensorFlow model that implements a WALS collaborative filtering algorithm. We then used Google Cloud Endpoints to serve up these personalized recommendations.

When a user authenticates, MongoDB Stitch sends an HTTP GET request to the Google Cloud Endpoint to obtain a list of recommended products.

A Stitch Function updates the recommendations array in the user document with the returned result.

exports = function() {
  //services
  const gcp = context.services.get("GoogleCloudRec");
  const mongodb = context.services.get("mongodb-atlas");
  //my swagstore collection
  const users = mongodb.db("swagstore").collection("users");
  const products = mongodb.db("swagstore").collection("products");

  return users.findOne({user_id: context.user.id})
    .then(user => {
      if(!user.gcpId) {
        return [];
      }
      //URL to GCP cloud endpoint
      const url = `https://jfmlrecengine.appspot.com/recommendation?userId=${user.gcpId}`;
      return gcp.get({ url }).then(response => { 
        console.log("Retrieved Recommendations");
        return EJSON.parse(response.body.text());
      })
      .then(result => {
        // Get the product info for the array of product ids
        return products.find({id: {"$in": result.articles}}, {_id:0, id:1, name:1, image:1}).toArray();
      })
      .then(products => {
        console.log(JSON.stringify(products));
        // Write the products to the user document
        return users.updateOne({"gcpId": user.gcpId}, { $set: { "personalized_recs" : products}})
          .then(() => { return products });
      });
    });
};

So when Jane logs into SwagStore she will see these product recommendations:

And Jasper - different ones:

By using MongoDB Stitch combined with powerful cloud services and APIs, you can build a recommendation system like this very quickly and plug it right into your operational app, getting your developers and data scientists to work together, operationalizing insight, and delivering intelligence to your customers. Give it a try!

Stay tuned for Part 2 where SwagStore becomes even more intelligent with an AI chatbot.

Sending Email From Your Frontend Application Using MongoDB Stitch and AWS SES


Email is king of communication. Most grandparents have an email account, so if you are looking for the communication channel with the most coverage, it’s going to be email. But sending email from an application is not fun. Many transactional email services, for security reasons, will not allow you to make requests to their APIs from a front-end application, which forces you to maintain a backend application to handle these transactions. If you are just looking to host a simple contact form or share content from your website, setting up a Node.js application, configuring REST routes with Express.js, and deploying it somewhere would be overkill.

The built-in AWS service from MongoDB Stitch makes it easy to send a transactional email using the Simple Email Service (SES). Just add the AWS service, configure SES, and use the Stitch client to execute the AWS SES request right from your React.js application. I created the following function in a recent Stitchcraft live coding session on my Twitch channel.

share = async (entry, email) => {
  // Build the SES SendEmail arguments: recipient, HTML body, and subject
  const args = {
    Destination: {
      ToAddresses: [email]
    },
    Message: {
      Body: {
        Html: {
          Charset: 'UTF-8',
          Data: `
              <h1>Enjoy this pic!</h1>
              <img src="${entry.url}" />
              `
        }
      },
      Subject: {
        Charset: 'UTF-8',
        Data: `Picture shared by ${entry.owner_name}`
      }
    },
    Source: 'picstream@ses.aydrian.me'
  }

  // Wrap the arguments in a Stitch AWS request for the SES SendEmail action
  const request = new AwsRequest.Builder()
    .withService('ses')
    .withAction('SendEmail')
    .withRegion('us-east-1')
    .withArgs(args)
    .build()

  // Execute the request through the Stitch AWS service client
  return this.aws.execute(request)
}
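For context, this.aws is assumed to be a Stitch AWS service client created when the component initializes. A rough sketch with the 2018-era browser SDK (the service name 'aws' and the app ID are placeholders that must match your Stitch configuration):

import { Stitch } from 'mongodb-stitch-browser-sdk'
import { AwsServiceClient, AwsRequest } from 'mongodb-stitch-browser-services-aws'

// e.g. in the React component's constructor
const client = Stitch.initializeDefaultAppClient('<your-app-id>')
this.aws = client.getServiceClient(AwsServiceClient.factory, 'aws')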

With these few lines of code, I was able to take an image stored in S3 and email it to the specified address. Weren’t able to see it live? Watch the recording on YouTube with the GitHub repo linked in the description. Follow me on Twitch to join me and ask questions live.

-Aydrian Howard
Developer Advocate
NYC
@aydrianh

Why You Need to Be at MongoDB Europe 2018


MongoDB Europe 2018 is just around the corner. On the 8th of November, our premier European event will bring together over 1,000 members of the MongoDB developer community to learn about our existing technology, find out what’s around the corner, and hear from our CTO, Eliot Horowitz. It is also a chance to celebrate the satisfaction of working with the world’s most developer-focused data platform.

This year we are back at Old Billingsgate which is a fabulous venue for a tech event. There will be three technical tracks (or Shards as we call them) and, of course, this year we see the return of Shard N.

Shard N is our high-end technical tutorial sessions where members of MongoDB technical staff get more time to cover more material in depth. These sessions are designed for our most seasoned developers to get new insights into how our products and offerings can be used to solve the most challenging business problems.

This year’s sessions include John Page comparing RDBMS and MongoDB performance, and the real skinny on workload isolation from everyone’s favourite MongoDB Ninja, Asya Kamsky.

In the main Shards we have Keith Bostic talking about how we built the new transactions engine and lots of sessions on our new serverless platform MongoDB Stitch. Remember, regardless of whether you are a veteran of MongoDB or coming to the database for the first time, the four parallel tracks will ensure that there is always something on for everybody.

The people in white coats will be back again this year. Who are they?

They are members of our MongoDB Consulting and Solution Architecture teams, and nobody knows more about MongoDB than these folks. You can book a slot with them via a calendaring system; the link will be sent out after registration.

All attendees will receive:

  • A MongoDB Europe 2018 hoodie and other exclusive swag such as MongoDB Europe stickers, buttons, and pins
  • 3 months of free on-demand access to MongoDB University (courses in Java, Python, and Node.js are included)
  • 50% off MongoDB Certification exams
  • Future discounts on MongoDB events as Alumni

We will have the top-of-the-line London street food initiative, Kerb, catering the day, and other fun stuff like a nitro ice cream parlour and all-day table tennis tournaments.

The day will finish off with a drinks reception on us!

Register today for your tickets.

Get a 25% discount per person for groups of 3 or more.

And just for reading this far you get another 20% off by using the code JOED20.

What’s not to like?

See you all on the 8th of November at Old Billingsgate.
