AWS has Lambda; Azure has Azure Functions, WebJobs, and App Service. Whether you want to script pay-by-the-drink web endpoints or build more traditional microservices, come join us for this code-focused look at architecting, coding, and deploying serverless resources in C# and .NET Core.
With the Microsoft Bot Framework, an elegant solution exists to drive natural human-machine interaction. Furthermore, Microsoft Cognitive Services such as the Language Understanding Intelligent Service (LUIS) give us a comfortable way of taking input from a user and transforming it into actions. Finally, the framework allows us to connect a large number of channels (such as Skype, Telegram, email, text, …) to our own backend for further processing. In this talk we start with the basics of the Microsoft Bot Framework and walk through the most important steps and decisions in creating an intelligent chatbot. Our ultimate goal is to begin with an idea and end with a rudimentary solution that exposes an existing service in a new way - completely accessible from popular messaging applications. Along the way we tackle questions such as persistence, authentication, and scalability.
When was the last time you ran a security or operating-system update on your production server? Yep, me too -- far too long ago. This is one of the reasons serverless solutions are awesome. In this session we will go through why you should use serverless solutions; more importantly, we will discuss the challenges the serverless architectural style raises and how to solve them.
Azure Cognitive Services are SaaS AI services offered by Microsoft, which provide rich API sets to analyze text, video, photos, and more. In this session we will explore APIs for speech recognition, natural language analysis, face detection, and other services designed to save you valuable time and effort integrating cognitive solutions.
This talk introduces the Service Fabric Actor model and the framework in general, while surveying the Service Fabric cluster environment and how the Actor programming model fits into it. We will also talk about best practices in the Actor model, what kind of services it can replace, and its general pros and cons. Learn how to apply the next generation of Azure cloud services, today.
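The core idea behind the Actor programming model is easy to sketch. Below is a toy, single-process Python illustration of the guarantee actors give you: each actor owns private state and processes messages from its mailbox one at a time, so no locks are needed. Service Fabric's Reliable Actors provide this same contract plus distribution, persistence, and turn-based concurrency; the class and message names here are made up for illustration, not Service Fabric APIs.

```python
import queue, threading

# Toy actor: private state, a mailbox, and a loop that handles exactly
# one message at a time. This is the core guarantee of the Actor model;
# Service Fabric layers distribution and persistence on top of it.

class CounterActor:
    def __init__(self):
        self.count = 0                      # private state, never shared
        self.mailbox = queue.Queue()
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        while True:
            message = self.mailbox.get()    # one message at a time
            if message == "stop":
                break
            self.count += 1

    def tell(self, message):
        self.mailbox.put(message)           # fire-and-forget send

actor = CounterActor()
for _ in range(100):
    actor.tell("increment")
actor.tell("stop")
actor.thread.join()
print(actor.count)  # 100
```

Because only the actor's own loop touches `count`, a hundred concurrent senders would still never race on the state -- that is the property that makes the model attractive for stateful services.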
Google is one of the first companies that developed a cloud platform to run their applications at massive scale. The Google Cloud Platform runs Google Search, Gmail, YouTube, Maps, and many other services and applications. In this talk, we will explore the architectures and services that enable Google and other companies running on GCP to run their applications at scale and perform data analysis and machine learning, without downtime and with very high performance.
Today many projects are becoming open source -- even the C# and F# compilers are! My dream was always to work on a real compiler project, and after a few years of learning F# and compilers I had an opportunity to participate in a mentorship program, where my mentor was a developer on the F# compiler team. Together we worked on cool features, and since then I have been actively contributing to the F# compiler.
The .NET garbage collector can be your best friend or your worst enemy, and it isn’t friendly with everyone. The GC has left more than a few production systems up in smoke after developers failed to anticipate the effects of real production loads on the memory subsystem. In this talk, we will methodically measure and improve the .NET garbage collector’s performance. We will begin with a quick refresher on dynamic performance tools that can identify GC issues -- CLR performance counters, ETW GC events, and ETW object allocation events -- as well as static analysis tools, such as the Roslyn-based heap allocations analyzer. Then, we will inspect multiple issues at the source code level: excessive boxing, unintended effects of lambdas closing over local variables, await-generated state machines, intermediate objects in LINQ queries, and many others. We will also discuss higher-level memory problems: how to get rid of large object allocations, how to avoid finalization, and how to convert heap-based designs to local objects. Some of these ideas are now being applied at the language and framework level in C# 7 and .NET Core. At the end of the talk, you will be equipped to reduce memory traffic and GC overhead in your own applications, often by a factor of 10 or more!
Your .NET application is up and running? Great. Now count the minutes before your first problem. What do you do now? Check logs? Debug? Stare at the screen and hope for the best? In my line of work, this is when I get the call. And just like your plumber, I too come equipped, carrying my tool belt filled with debugging tools, from the smallest tracing wrench to the memory dump crowbars. During this session, we will hear real-world customer debugging stories, the tools we’ve used for troubleshooting, and the stuff we discovered (bugs / features / “who put that code there?”).
Are you confused by all the options for compilation of .NET code? Are you wondering how to write assemblies that are cross platform? Are you confused with how to create assemblies that can be referenced from UWP, .NET Core, Xamarin and .NET 4.6? Join this session to explore the .NET platform offerings available in 2017 and the impact of .NET Standard 2.0.
In this session, we will delve into all that makes up C# 7.0, introducing new features such as pattern matching, out-variable updates, anonymous yet strongly typed tuple returns, deconstruction support, local functions, variable declaration improvements, and more. Don’t miss this session -- update your C# programming skillset to the latest C# language capabilities.
Since ASP.NET came out 16 years ago, many developers have used it to write their web applications. ASP.NET Core, formerly known as ASP.NET 5, is significantly different from previous versions of ASP.NET: it has been completely rewritten to provide an optimized development framework for web applications. In this session we will introduce ASP.NET Core MVC.
System Architecture with NoSQL will explore how to build and architect enterprise systems using NoSQL -- in particular, the RavenDB document database. Oren Eini, founder of RavenDB, will talk about modeling concerns, high availability, and scale-out, and how to manage a system with polyglot persistence, a complex domain, and a high rate of change, all while maintaining a speedy development pace and keeping the ops team happy.
This talk will cover everything you need to know about geospatial data in Elasticsearch! We will learn how to store and index geographic data in Elasticsearch, the ways to search it, and how to use geo aggregations and visualizations.
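As a small taste of what the talk covers, here is a sketch in Python of the JSON body for an Elasticsearch `geo_distance` query, which finds documents whose geo_point lies within a given radius. The index and field names are hypothetical; with the official Elasticsearch client you would pass a dict like this to `search()`.

```python
# Build the body of an Elasticsearch geo_distance query: match documents
# whose "location" geo_point field lies within `distance` of a coordinate.
# Field name and coordinates are illustrative, not from the talk itself.

def geo_distance_query(lat, lon, distance="10km", field="location"):
    """Return a query-body dict for a geo_distance filter."""
    return {
        "query": {
            "bool": {
                "filter": {
                    "geo_distance": {
                        "distance": distance,
                        field: {"lat": lat, "lon": lon},
                    }
                }
            }
        }
    }

body = geo_distance_query(32.0853, 34.7818)  # around Tel Aviv
```

With the official Python client this would be used roughly as `es.search(index="places", body=body)` (index name hypothetical).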
Apache Beam is an open source framework that provides a uniform API over data processing engines like Spark, Flink, and Storm. Beam makes it possible to write your code once and run it on different platforms without any changes. Google Cloud Dataflow is a service that supports data processing with Apache Beam without deploying any servers of your own. In this talk, we will explore the Apache Beam model and API, and run an example on Google Cloud Dataflow.
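To give a flavor of the model before the talk, here is a toy sketch in plain Python of Beam's central idea: chaining transforms over an immutable collection with the `|` operator. This is not the real `apache_beam` API (which uses the same pipe style but adds pipelines, windowing, and pluggable runners); it only illustrates the shape of the programming model.

```python
# Toy illustration of the Beam programming model. NOT the apache_beam API:
# real Beam pipelines look similar but run distributed on a chosen runner.

class PCollection:
    """An immutable-ish collection that transforms chain onto with `|`."""
    def __init__(self, elements):
        self.elements = list(elements)

    def __or__(self, transform):
        return PCollection(transform(self.elements))

def Map(fn):
    return lambda elements: [fn(e) for e in elements]

def Filter(pred):
    return lambda elements: [e for e in elements if pred(e)]

result = (
    PCollection([1, 2, 3, 4, 5])
    | Map(lambda x: x * x)          # square every element
    | Filter(lambda x: x % 2 == 1)  # keep odd squares
)
print(result.elements)  # [1, 9, 25]
```

The appeal of the real framework is that the same chain of transforms runs unchanged on Spark, Flink, or Cloud Dataflow.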
Are you working on IoT or web-scale clickstream processing solutions? Got megabytes, tens of megabytes, or even hundreds of megabytes of small data coming at you? Per second? If you answered yes to any of these -- awesome, this session is for you. We will introduce the Lambda Architecture for Big Data, and walk through an Azure reference architecture that answers these questions: How best should you ingest all that data? What can you do with the data in near real-time now that you have it (the hot path), and how should you go about keeping it for future analysis (the cold path)? While you are on your way to building the next Twitter or solving the world’s energy crisis with a massively successful IoT platform, understanding how and when to leverage Data Lake, IoT Hub, Event Hubs, Blob Storage, Stream Analytics, HDInsight (Spark and Storm), and Azure Machine Learning -- and how to position those pipelines next to your operational SQL Server in a VM or Azure SQL Data Warehouse -- is a mission-critical decision. Choose poorly and your solution will cost too much, be a burden on your developers, or ultimately collapse underneath the volume of data. Choose wisely, and you are well on your way to stream-processing nirvana. Choose wisely and attend this session.
Big Data has three dimensions: volume, variety, and velocity. IoT is unique in that it tends towards time-series data that is individually small but arrives predominantly as high-speed streams. Time-series data brings with it a special set of challenges: How can you build a solution that can ingest data arriving at this speed? How do you manage the devices that are allowed to send data to your solution? Once you have received the data, how do you process it so you can support detailed analytics, yet not miss important events that need to generate real-time alerts? Beyond supporting analytics, how can you apply intelligence to the stream of data flowing through your solution? How do you make this solution scale? Come to this fast-paced session to understand how you can implement a scalable IoT analytics solution using tuple-at-a-time and micro-batch processing approaches atop IoT Hub, Event Hubs, Kafka, Stream Analytics, Web Jobs with Event Processor Host, HDInsight Storm, HDInsight Spark, Machine Learning, and SQL Database.
In the Big Data era, stream processing is essential to performing many tasks - responding to sensor data to optimize operation, deriving insights from data in real-time, executing machine-learning algorithms on the fly, and much more. Stream processing frameworks are everywhere - Apache Storm, Flink, Samza, and Spark Streaming are just a few of many battle-tested technologies helping us deal with streams of data or events at scale. But that's just the tip of the iceberg, as the real work in this area has just begun. This talk covers the main technologies in the field, as well as patterns of use and complementing technologies. We will also discuss how stream processing fits in a larger system, and how it can be complemented (or can complement) batch processing methodologies.
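As a taste of the kind of work these frameworks perform, here is a minimal single-process sketch of a tumbling-window count over timestamped events. The window size and event shape are made up for illustration; real frameworks do the same aggregation distributed, with fault tolerance and out-of-order handling.

```python
from collections import defaultdict

# Minimal tumbling-window aggregation: count events per fixed-size window.
# Storm, Flink, and Spark Streaming provide distributed, fault-tolerant
# versions of exactly this kind of operator; this only shows the idea.

def tumbling_window_counts(events, window_size):
    """events: iterable of (timestamp, payload). Returns {window_start: count}."""
    counts = defaultdict(int)
    for timestamp, _payload in events:
        window_start = timestamp - (timestamp % window_size)
        counts[window_start] += 1
    return dict(counts)

events = [(1, "a"), (3, "b"), (12, "c"), (14, "d"), (27, "e")]
print(tumbling_window_counts(events, window_size=10))  # {0: 2, 10: 2, 20: 1}
```

The hard parts the talk discusses -- scale, late events, exactly-once semantics -- are precisely what this toy version leaves out.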
Are you unfamiliar with NodeJS, and are curious to learn about this fascinating environment? Do you have the NodeJS basics, but would like to tighten your understanding and learn a little more? If so, then this workshop is for you. During the workshop you will learn the basics of Node. You will learn about the unique asynchronous nature of Node, learn how to use callbacks, how to use APIs like the filesystem API, how to create node modules, and how to build web applications using express. Since we are in 2017, you will also learn how to leverage ES2015 to make your code nicer and more concise, and to leverage Promises to make asynchronous programming bearable. Given that I am a firm believer in testing your code, you will also learn about how to write tests in Node using Mocha - unit tests, integration tests, and end to end tests.
Docker is making history in the software world. Containers integrate into every part of the application lifecycle, from dev and test all the way to staging and deployment. In this workshop, we will see how containers can do much more than what you came to expect from a VM, how containers help manage application infrastructures, and how to use Docker management tools to manage application scale from a single instance to large clusters. We will discuss Docker history, understand how containers are different from VMs and talk about the Docker file system, install Docker on Linux, macOS, and Windows, build our own container and talk about Docker file definitions, and towards the end of the day discuss some advanced topics such as the Docker Hub, Docker Compose, and others -- all accompanied by hands-on labs.
So you’ve built your HTTP API, and now that it’s live, you’re suddenly dealing with a whole new set of problems. Do you really need to PUT the entire Customer just to change someone’s email address? Why does it take you 25 API calls just to render a shopping cart? How do you find the bottlenecks when just drawing a web page requires fifty HTTP requests? What happens when one of your API consumers accidentally tries to GET your entire customer database? The architectural style known as REST can answer all these questions - and more - but even experienced developers often find it difficult to apply RESTful principles when building real-world applications. In this workshop, we’ll explore the elements of REST related to hypermedia and the principle of “hypermedia as the engine of application state” (HATEOAS) - we’ll talk about why they matter, what problems they solve, and when you might want to implement them in your own systems. During the workshop, we’ll implement a RESTful HTTP API using C# and ASP.NET. Starting with the most basic “hello world” service, we’ll cover patterns like content negotiation and resource expansion. We’ll compare several popular formats for representing hypermedia in modern web APIs, we’ll cover the semantics of PUT, POST and DELETE (and implement support for HTTP PATCH), and we’ll look at some of the tools and frameworks that are available to help you design, build and monitor your HTTP APIs. Finally, we’ll cover topics like monitoring and security, and we’ll discuss the strategic value of building HTTP APIs in modern organisations - so you’ll go home not just knowing how to build great RESTful APIs, but also how to persuade your boss that it’s a good idea.
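One technique the workshop implements, HTTP PATCH, can be previewed with a short sketch: instead of PUTting the whole Customer, the client sends only the changed fields. Below is an illustrative Python implementation of the JSON Merge Patch algorithm (RFC 7386); the customer fields are hypothetical, and a real API would add validation and concurrency control.

```python
# JSON Merge Patch (RFC 7386): apply a partial document to a target.
# A null value removes a member; nested objects are merged recursively;
# any non-object patch value replaces the target value wholesale.

def merge_patch(target, patch):
    if not isinstance(patch, dict):
        return patch                    # scalar/array patch replaces target
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)       # null means "remove this member"
        else:
            result[key] = merge_patch(result.get(key, {}), value)
    return result

customer = {"name": "Ada", "email": "old@example.com", "phone": "555-1234"}
patched = merge_patch(customer, {"email": "new@example.com", "phone": None})
print(patched)  # {'name': 'Ada', 'email': 'new@example.com'}
```

This is exactly the payload shape a `PATCH /customers/42` request body would carry: two fields, not the whole resource.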
All kinds of applications run on Linux, from web servers to distributed database engines and embedded applications. Troubleshooting performance in the field, especially when invasive profilers can't be used, is a delicate art that requires a solid understanding of the system and low-overhead tools. In this workshop, we will visit a spectrum of Linux performance monitoring tools. We will start with a simple performance checklist based on the USE method, including tools like top, iostat, vmstat, mpstat, sar, and others. Then, once we identify the overloaded resource, we will dig in deeper using perf: tracepoints, hardware events, dynamic probes, and USDT. We will also collect stack traces of heavy events (CPU usage, disk accesses, network) and visualize them using flame graphs. Finally, we will discuss the emerging superpower for Linux performance monitoring: BPF and BCC. This is a new kernel technology that enables low-overhead, super-efficient monitoring and tracing tools, which perform aggregation closer to the source where the events occur and provide useful information at a fraction of the cost. We will review a performance checklist based on BCC tools, and explore one-liners from the general-purpose trace and argdist tools.
In this workshop, members of the DevOps and ALM department at Sela will share their experience building modern DevOps solutions with various tools and technologies. The workshop consists of six sessions. The first session will cover the new features in Team Foundation Server 2017, including test and build, release management, Git integration and source control improvements, and more. The second session will cover new features in Visual Studio 2017, including debugger enhancements, profiling tools, Docker container support, live unit testing, and others. The third session will discuss the Microsoft approach to open source tools, where we will explore the ways to integrate open source technologies like Jenkins and Docker into your DevOps process. The fourth session will focus on Git, and cover Git distribution models, branching, typical workflows, and tailoring Git to your own needs. The fifth session will showcase a practical study of how to perform continuous delivery of microservices using Jenkins slaves over AWS spot instances and Docker containers in ECS. Finally, the last session is a client success story from one of the DevOps teams at Intel Corporation.
Building mobile apps with Visual Studio has always been easy and fast. And now, with the new project system, debugging and profiling tools, and unit test generation features, you can deliver higher-quality, smarter mobile apps more easily and quickly. In this session we'll learn how to use the new Visual Studio 2017 to build five-star mobile apps.
WPF was, and for some still is, the best solution for building desktop applications on Windows. Unfortunately, it is becoming less popular and is not actively maintained -- although still far from being deprecated. Microsoft has now shifted to recommending UWP (Universal Windows Platform) as the framework for building Windows applications going forward, specifically for Windows 10. In this session, we will compare the features and abilities of WPF and UWP, from performance to developer productivity, concepts, patterns, target devices, and more. We will also address the big question: is it possible to migrate from WPF to UWP? For experienced WPF developers, this is the talk that will help you face the challenges of joining the community of UWP developers.
When building a mobile app, you often have to decide first which platform to target. But with Xamarin.Forms, it is now possible to build an application once and run it on all platforms. In this talk, we will see how to build native apps for iOS, Android, and Windows from a single C# codebase, and discuss code-sharing best practices.
Progressive Web Applications (PWA) are web applications that have the look and feel of mobile apps -- taking the best of the two worlds (mobile and web) to create the next generation of applications. We will talk about PWAs, the problems of mobile apps and web apps today, how to create a small PWA with Ionic 2, service workers, and manifest files, and how to use Lighthouse to check whether your app is indeed progressive -- so you can start using PWAs today.
So you’ve built your HTTP API, and now that it’s live, you’re suddenly dealing with a whole new set of problems. Do you really need to PUT the entire Customer just to change someone’s email address? Why does it take you 25 API calls just to render a shopping cart? How do you find the bottlenecks when just drawing a web page requires fifty HTTP requests? What happens when one of your API consumers accidentally tries to GET your entire customer database? The architectural style known as REST can answer all these questions - and more - but even experienced developers often find it difficult to apply RESTful principles when building real-world applications. In this workshop, we’ll explore the elements of REST related to hypermedia and the principle of “hypermedia as the engine of application state” (HATEOAS) - we’ll talk about why they matter, what problems they solve, and when you might want to implement them in your own systems. We’ll look at architectural patterns like resource expansion, OAuth2, HTTP PATCH and API versioning. Finally, we’ll discuss the strategic value of building HTTP APIs in modern organisations - so you’ll go home not just knowing how to build great RESTful APIs, but also how to persuade your boss that it’s a good idea.
In recent times, React has attracted quite a lot of attention despite its poor performance on mobile. The cure for this problem did not come from a big corporation; it originated within the open-source community. The personal project Inferno flourished into a whole community, which created a serious competitor to React. This library promises improved performance, a nearly fully compatible API, and a more focused solution than React. Why would we want to migrate away from React? What obstacles have to be faced in the migration? Is the migration worth the effort? In this talk we will hear the story of a large cross-platform mobile application that was migrated from React to Inferno.
With the evolution of front-end frameworks and the huge change in how we build web applications nowadays, the preferred approach to authenticate users is to use a signed token, as this token is sent to the server with each request. In this session we will learn how to use OWIN and ASP.NET Web API to create an authentication pipeline with an Angular client.
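The signed-token idea at the heart of this session can be sketched with nothing but a standard library. The snippet below shows a simplified HMAC-signed token in Python (the spirit of JWT's HS256, minus the header and expiry handling); the secret and claims are made up, and the session itself builds the real thing with OWIN and ASP.NET Web API.

```python
import base64, hashlib, hmac, json

# Simplified HMAC-signed token: the server signs the claims with a secret,
# the client echoes the token back with each request, and the server
# verifies the signature without any session state. Illustrative only.

def b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(claims: dict, secret: bytes) -> str:
    payload = b64(json.dumps(claims, sort_keys=True).encode())
    sig = b64(hmac.new(secret, payload.encode(), hashlib.sha256).digest())
    return f"{payload}.{sig}"

def verify(token: str, secret: bytes) -> bool:
    payload, sig = token.rsplit(".", 1)
    expected = b64(hmac.new(secret, payload.encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)     # constant-time compare

token = sign({"sub": "alice", "role": "admin"}, b"server-secret")
print(verify(token, b"server-secret"))        # True
print(verify(token + "x", b"server-secret"))  # False: tampered token
```

Statelessness is the payoff: any server holding the secret can verify the token, which is what makes this approach a natural fit for modern front-end frameworks.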
Continuous delivery is all about reducing risk and delivering value faster by producing reliable software in short iterations. Containerization of software allows us to further improve on this process. The biggest improvements are in speed and the level of abstraction. In this session, we will see how to set up a continuous delivery pipeline using Docker, in order to easily build and run Docker containers as part of our continuous delivery pipeline.
All right, you have your services up and running in the cloud. Now what? In this session, we will review the services available for monitoring your cloud infrastructure and the applications running on top of it, and go over the why and how of setting up integrated monitoring for your cloud application, with examples from the leading cloud platforms: Google Cloud Platform, Amazon Web Services, and Microsoft Azure.
When you need an environment that best suits your needs, you need the cloud. And when you need the cloud for DevOps, the cloud is more than just virtual machines and PaaS -- you need the right architecture for your system. In this session, we will learn how to use Azure Resource Manager to create our own highly available and self-maintained environment for better use of the Azure cloud, and see how to integrate it with your own infrastructure.
Docker is the leading container technology nowadays, and it integrates into every application lifecycle. Running Docker stand-alone is great for development and testing, but staging and production is a different story. In this talk we will review some of the leading tools for Docker cluster management, and discuss scaling, monitoring, CI/CD, high availability and other concerns.
Modern Linux systems come with a wealth of built-in instrumentation that can be used for safe, easy, low-overhead production-time monitoring. In this session we will review the available Linux tracing and monitoring tools: a simple checklist for system performance with tools like vmstat, pidstat, iostat, and sar; the venerable perf multi-tool for CPU sampling and capturing kernel tracepoints; and the upcoming BPF kernel runtime for developing tracing programs that run in the kernel and perform aggregations close to the source. At the end of this session you will be equipped with a variety of tools for monitoring all kinds of Linux systems and applications, including web servers, Java applications, PHP/Node.js/Python, and more.
Git is undoubtedly one of the most important version control systems, and learning how to use it is practically indispensable. After getting acquainted with Git, the first thing you notice is that it's a powerful but terrifying tool. Thanks to its great flexibility, it becomes very useful when used properly -- but used the wrong way, it turns into a painful nightmare. In this session we will talk about how to use Git properly by analyzing distribution models, branching models, and common constraints, explaining the most popular Git workflows, and showing how to create a custom workflow that suits your needs.
In a constantly changing and evolving world, we are required to adapt ourselves to the changes in the organization. In this session, we will learn how to lead our employees through change, and how to empower ourselves and influence our employees. We will understand the true value of the employee and the manager in a complex business reality, and how to lead employees to improve their performance through positive influence. In addition, we will emphasize how to become an influential employee in an organization by doing the job in the most meaningful way.
In recent years, the study of the brain and its impact on human behavior has been accelerating. This session will expose you to the way science maps employees' capabilities, patterns, and behaviors. Participants will receive tools to help them create new ideas and explore their potential and strengths to resolve issues and conflicts within the organization.
In this session we will talk about the use of icons in ancient Egyptian writing (hieroglyphics), and discuss the importance of communicating visually using icons (logos, slogans, emoji) and how these symbols affect thought processes, decision making, and emotions.
The "Advanced Employee" is an employee with the ability to look at a situation or organizational dilemma from a variety of perspectives. Through the appropriate choice of perspective, they can solve it in a creative way while improving performance. In this session, we will acquire tools for defining a perspective, weighing its advantages and disadvantages, and learning new and innovative perspectives.
It's all too easy to think software is magic - but it's not. Most of the time, it's not even sufficiently advanced. Like everything else in our world, the people you work with and the products they build are subject to the fundamental laws of nature. Based on an original idea by Pieter Hintjens, this talk explores the laws of our universe - from the fundamental laws of physics to the eponymous laws found in the IT industry. Dylan Beattie shows you how Newton's Laws of Motion can explain why big organizations struggle with agile development; how the Equivalence Principle explains why so many startups fail, and why Heisenberg's Uncertainty Principle makes it so hard to estimate and report on your software projects. Finally, we'll look at three of the oldest laws of software engineering - Moore's Law, Amdahl's Law and Conway's Law - and how they can prove that if you don't stop having meetings, the internet will stop working.
Most materials about machine learning focus on the details around model building. While that is important, as a developer what is really important to you is that you understand both model creation and model operationalization. Succinctly, this workshop is really about the end game of delivering a successful solution in Azure - how you operationalize the model and integrate intelligence into your solution architecture. For those unfamiliar with machine learning concepts, we will provide a backgrounder so that you understand the key tools in the toolbox (data transformation, supervised learning modules, unsupervised learning modules) and the value that Azure ML and R Server bring to the larger solution (such as classification, clustering, and predictive analytics). As a developer, you will leave with a good sense of how models are programmed across multiple languages including R, Scala, and Python, as well as how they can be composed visually using the designer provided by Azure Machine Learning. We will cover the pipeline of how you need to prepare your data, how you perform model training, and how to architect your solution so that the model training can be done with data at scale (e.g., exceeding 10 GB and reaching terabyte scale) in the situations that require it. We cover the gamut of model creation, from local training on your dev box to remote training with Azure ML, HDInsight, and SQL R Services. You will learn how models are trained in Azure Machine Learning via training experiments, as well as via programs that leverage the parallel algorithms in HDInsight with Spark and the RxSpark context, and how you train a model without ever leaving your SQL database by using T-SQL to execute embedded R scripts against tabular data managed by SQL Server. With your model in hand, we’ll tackle an issue that surprises developers new to machine learning - how to leverage the trained model from their programs.
Operationalizing your model is the critical last mile in getting value out of your effort. Here we’ll cover the gamut of operationalization options, from local prediction to remote prediction with web services and stored procedures. Not knowing your options for operationalization, or not having a plan for operationalizing and integrating the model into your application, can mean significant delays in bringing its value to your application: time wasted transcoding models to different production languages (like converting from R to C#), questions about the parity of the converted model with the original, discovering that your operationalization solution does not scale well, or money wasted on approaches that are simply inefficient for hosting operationalized models. After integrating our model into our solution, our journey continues. Here we will pause to reflect on the build-versus-buy decision - in some cases we may not have to go through the trouble of building and operationalizing our own model at all, because Microsoft Cognitive Services already provides what our solution needs, in a convenient REST API form. With an end-to-end, intelligent solution in place, we turn our attention to the subtle follow-on problems we need to address, such as how to ensure our model continues to perform and how to re-train it with new information over time to keep it current and accurate. Come to this workshop and leave equipped to develop intelligent solutions end-to-end using Microsoft Azure.
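Stripped to its essence, the train-then-operationalize flow this workshop covers looks like the toy Python sketch below: fit a model, serialize the artifact, then load it on the serving side and predict from it. This stands in for the Azure ML and R Server flows discussed above; the data and the least-squares model are purely illustrative.

```python
import pickle

# Toy "train then operationalize" flow: fit a one-variable linear model,
# serialize it as an artifact, then load and serve predictions from the
# stored artifact. A stand-in for web-service or stored-procedure serving.

def train(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return {"slope": slope, "intercept": mean_y - slope * mean_x}

model = train([1, 2, 3, 4], [2, 4, 6, 8])   # data follows y = 2x
artifact = pickle.dumps(model)               # "deploy" the trained model

def predict(artifact, x):                    # serving side loads the artifact
    m = pickle.loads(artifact)
    return m["slope"] * x + m["intercept"]

print(predict(artifact, 10))  # 20.0
```

The point of the sketch is the separation: training produces an artifact, and serving consumes it -- which is exactly the boundary where transcoding, parity, and scaling problems appear when operationalization is an afterthought.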
This is a full-day workshop focusing on async and await theory and practice. You will gain deep understanding of async methods, how to use them right, best practices, and useful patterns. Although they seem easy on the surface, async methods and the await keyword are full of tricks and pitfalls that you need to remember in order to write efficient asynchronous code. This day is designed for developers currently using async and await for their systems, and also for developers who have experience with asynchronous programming using the TPL or other threading libraries.
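The workshop targets C#'s async/await, but the central pattern it teaches -- starting asynchronous operations and awaiting them together instead of serializing them -- looks much the same in any async-capable language. A Python illustration (the delays stand in for real I/O):

```python
import asyncio, time

# Classic async pitfall: awaiting operations one by one serializes them.
# Starting them and awaiting them together lets the waits overlap.
# (The workshop covers the C# equivalents with Task and Task.WhenAll.)

async def fetch(delay):
    await asyncio.sleep(delay)   # stand-in for a real I/O operation
    return delay

async def main():
    start = time.monotonic()
    # All three "fetches" run concurrently; total time ~0.1s, not ~0.3s.
    results = await asyncio.gather(fetch(0.1), fetch(0.1), fetch(0.1))
    elapsed = time.monotonic() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results)   # [0.1, 0.1, 0.1]
print(elapsed)   # well under the 0.3s a sequential version would take
```

Awaiting each `fetch` individually would triple the elapsed time -- the kind of subtle efficiency trap the workshop day is built around.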
The Angular team made the wise but difficult choice of starting over with Angular 2, rather than just patching the mature but increasingly fragile framework. From now on, it's Angular -- not Angular 1, 2, or 4. The new version is based on a modern approach, which prefers a more composable component model, reactivity, predictable data flows, type safety, embracing the wider ecosystem, and other current conventions and best practices. These changes represent a significant paradigm shift that is taking over the whole ecosystem. This workshop introduces this new paradigm that underlies Angular, helps you set up a productive development environment, and teaches you how to build a well-structured app with Angular modules, components, services, and routing.
Right-click Publish is easy, but it’s evil. FTP it up is fragile and error prone. You know you need to run tests and deploy in a consistent way. Bring an empty Windows VM, and we’ll install and configure TeamCity, Octopus Deploy, Microsoft build tools, and Red Gate database management tools. Set a sample project or your solution into place, and watch it build and deploy on your system. You’ll leave with a fully working CI/CD pipeline you can extend for your organization and the skills to set this up where it counts.
What’s up with the new ASP.NET Core? (Who moved my cheese?) The latest release of Visual Studio -- Visual Studio 2017 -- is no minor upgrade. It is the culmination of many changes, even to areas that have remained the same since the first days of Visual Studio .NET. This session will delve into the details of those changes as they relate to ASP.NET Core, including the solution structure, the addition of the wwwroot folder, unit test tool updates, config file changes, project schema changes (with or without JSON), and much more. Attend this session to jumpstart your ASP.NET Core development and catapult yourself to immediate productivity instead of wallowing in the surprise of change.
Getting value from your data has become a major source of income for software companies. Data science is the art of producing business value from data, and it includes gathering data from a vast number of sources, analyzing it, and presenting it in a way that provides business value. Python is a great language for the job. In this one-day workshop we will explore the basic tools in Python that help you store, analyze, and present data to reflect business value. We will learn IPython, Jupyter, numpy, pandas, matplotlib, and scipy.
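The kind of analysis the workshop covers often starts with a simple pandas aggregation: turning raw records into a business-facing summary. A minimal sketch, with hypothetical sales data and illustrative column names:

```python
import pandas as pd

# Hypothetical sales records; the column names are illustrative only.
sales = pd.DataFrame({
    "region": ["North", "South", "North", "South"],
    "revenue": [120.0, 80.0, 150.0, 90.0],
})

# Aggregate revenue per region -- the kind of one-line summary
# that turns raw rows into a number the business cares about.
summary = sales.groupby("region")["revenue"].sum()
print(summary)
```

From here, a call such as `summary.plot.bar()` hands the result to matplotlib for presentation, which is the remaining leg of the store/analyze/present pipeline the abstract describes.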
React has long been one of the most popular front-end frameworks, and it powers many web applications including Facebook itself, but what tools and features does it provide us to write robust and performant apps? From React’s advanced propTypes, to FlowType and the Jest testing platform, let’s dive deep into the React ecosystem’s set of production-grade tools. These make development easier and our code safer and more dependable! Working through a React project, we’ll put these tools to use and see how to incrementally improve our code, including many small tips you can apply to your work or personal projects right away!
Building cloud-based applications is a challenging task. One of the most painful parts is deploying, configuring, and maintaining the hosts that run your applications: you need to upgrade your infrastructure, secure your applications, and handle server failures that can occur for any number of reasons. Google Cloud Platform offers to do this job for you and to save you money by reducing the TCO of your applications, through Google App Engine and all kinds of serverless services such as BigQuery, DataFlow, DataStore, and many more. In this workshop we will learn how you can run your applications and use many types of services without any servers at all! Just write your code and load it to the cloud, no need for DevOps! In addition, we will learn about the powerful tools that Google App Engine provides us for serverless applications.
React Native is quickly becoming most web developers’ technology of choice when building mobile apps. Unlike Phonegap or Cordova, React Native doesn’t package web pages, but provides a robust cross-platform API to build native applications! In this workshop we’ll build, step by step, a fully functional React Native application for iOS and Android. We’ll see React Native’s power in building slick, powerful apps, the details and pitfalls of trying to build a one-size-fits-all app, and how to avoid them. If you’re familiar with React, React Native will open a world of mobile possibilities!
Git is undoubtedly one of the most important version control systems of recent years, and in this workshop you will learn how to use this tool that has become practically indispensable. In this session we will talk about what Git really is, get to know "the four areas", and see how the different commands affect each one of them. Contrary to what many people think, learning Git is not about learning to run the commands, but rather about understanding how each command affects each area. The overall objective of the session is to change the way you see and understand Git. Don't learn to use Git, learn to think in Git. NOTE: This talk is suitable for both new and intermediate users. New users will be able to understand the basics of Git and gain the ability to continue learning on their own, while intermediate users will be able to consolidate their knowledge and advance to a new level. Topics covered: configuring a Git repository, the Git four areas, working locally, working with remotes, working with branches, advanced topics.
Your web application is up and running? Great. Now count the minutes before your first problem. What do you do then? Check logs? Debug? Stare at the screen and hope for the best? In this workshop, we will go over the list of tools every web developer should have in their toolset, from the smallest tracing wrench to the memory-dump crowbars. We will learn which tools to use to diagnose slow web sites, random exceptions, memory-hungry apps, and hung w3wp processes.