Tag Archives: big data
Updated December 10, 2015
Science at Esri continues to be an exciting initiative. We are concerned with supporting both basic and applied science, while recognizing that many major themes of compelling interest to society will drive scientific research for the next two decades. Thus we view science as helping us to understand not only how the Earth works, but also how the Earth should look (e.g., by way of geodesign) and how we should look at the Earth (i.e., by way of Earth observation in its varying forms, along with the accompanying data science issues of analysis, modeling, developing and documenting useful datasets for science, and making those datasets and approaches interoperable). In addition to supporting the science community, we seek to do good science at Esri ourselves, as it underpins much of what we do as an organization. This is helping us evolve ArcGIS into a comprehensive geospatial platform for science: a platform that supports research project management and collaboration, spatial analysis, visualization, open data, and the communication of science, all at multiple scales (from individual researcher, to lab workgroup, to multi-department, multi-university, and university-to-agency collaboration, to citizen engagement).
You can always track the totality of the Esri science initiative at esriurl.com/scicomm, but in this post I'll share some highlights from 2014 and, as we near the end of 2015's first quarter, talk about the immediate road ahead. Continue reading
Turning big data into knowledge is all about relevance and context
Big data may be all the rage these days, but it isn’t exactly new. In fact, Esri has been dealing with big data since the inception of digital mapping more than three decades ago. When every contour, stream, street, rail line, park, building, or shoreline for the entire world is stored in an intelligent database, data doesn’t get much bigger than that.
Data as Big, Beautiful, and Living as the Earth
Back in 1992, Esri embarked on an ambitious campaign to create the very first seamless digital map and database of the whole world. This project—aptly named the Digital Chart of the World—converted paper maps of political boundaries, transportation lines, utilities, cultural landmarks, and more into a digital map product that could be viewed for the first time as something other than a pretty picture. In a world where CDs were still considered new and expensive storage media, and hard drives came in hundreds of megabytes, the 1.7 gigabyte database was not only huge, but it also strained the computer specifications and storage architectures of the day. Continue reading
Finding a balance between consumers and companies when sharing geolocation information in the age of big data analytics
We recently returned from a retail conference where we showed attendees how differently they perceive and treat location data depending on whether they are using it in their personal or professional lives.
This was the type of conference where the big-box and household-name retailers you see every day send their decision makers. They meet to discuss ways of sorting through the massive amounts of data they capture from today's digital world. Their main purpose? To turn that data into hard results. Continue reading
Updated April 17, 2016
With all the recent excitement and good hopes over the White House Climate Data Initiative, and the ongoing progress of the Global Earth Observation System of Systems (GEOSS), there is another huge data initiative that bears mention: EarthCube.
I have used the word "initiative" for EarthCube, but it has also been described as a vision, as a multi-faceted, multi-layered partnership, and as a "virtual organization." As such, it bears quite a bit of resemblance to the international GEOSS but is much more US-based, having been conceived and currently funded by the US National Science Foundation (NSF). Continue reading
Last Update: December 10, 2015
In early January of 2014, we heard quite a bit about the polar vortex (not a new term, by the way) as North America struggled with some of the most frigid and dangerous temperatures seen … Continue reading
In an earlier post, I had mentioned Esri’s involvement in the large National Science Foundation-funded project known as CyberGIS, which aims to establish a fundamentally new software framework via a seamless integration of cyberinfrastructure, GIS, and spatial analysis/modeling capabilities, particularly … Continue reading
Updated: May 15, 2016
At Esri we are concerned with supporting basic … Continue reading
Researchers today must deal with an avalanche of data: from environmental sensor networks (both on land and at sea), social media feeds, and LiDAR, to the outputs of global- and regional-scale atmospheric circulation models and climate simulations. Because of this, "big data" is emerging as a major research theme for the academic community.
I recently had the opportunity to attend GIScience 2012, which is convened every two years and brings together leading researchers from around the world to reflect on a wide spectrum of geographic information science research areas. Attendees are normally university academics and graduate students working in the areas of geography, computer science, information science, cognitive science, mathematics, philosophy, psychology, social science, environmental sciences, and spatial statistics.
The technology tides have shifted again and, as the notion of cloud computing is becoming mainstream across most industries, a new buzzword is emerging: Big Data. Never heard of it? Simply put, the term refers to the ever-growing mountain of data, generated from myriad sources, that organizations must effectively address.
For instance, according to a recent MeriTalk survey, 96 percent of Federal IT professionals expect their agency's stored data to grow by an average of 64 percent over the next two years.
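To make that projection concrete, here is a minimal back-of-the-envelope sketch. The 100 TB starting point is a hypothetical assumption, not a figure from the survey:

```python
# Hypothetical illustration of the MeriTalk projection:
# an agency's stored data growing 64% over two years.
current_tb = 100.0   # assumed starting point: 100 TB of stored agency data
growth = 0.64        # 64% total growth projected over two years

projected_tb = current_tb * (1 + growth)
annual_rate = (1 + growth) ** 0.5 - 1  # equivalent compound annual growth rate

print(f"Projected storage in two years: {projected_tb:.0f} TB")  # 164 TB
print(f"Equivalent annual growth rate: {annual_rate:.1%}")       # ~28.1% per year
```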
Big Data is often described in terms of the three Vs: Velocity, Volume, and Variety. As examples, let's take a few of the real-world case studies gathered by IBM and provided by Mike Rhodin, Senior Vice President at IBM Software Solutions.
Business data is growing at such a rate that many organizations risk being overwhelmed by the big data problem. A recent analysis of data in business by McKinsey, IDC, and the US Bureau of Labor Statistics [PDF] found that financial/securities organizations store 3.8 petabytes per firm. That's 3.8 million gigabytes, or roughly 120,000 32 GB iPads, per company! Banking comes in a distant second at 1.9 PB. This puts the big data found in financial services companies into perspective, since it exceeds even the 1.8 PB average for communications and media companies. Continue reading
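As a quick sanity check on those conversions, here is a minimal arithmetic sketch. The decimal units (1 PB = 1,000,000 GB) and the 32 GB iPad capacity are assumptions for illustration, not figures from the report:

```python
# Back-of-the-envelope conversion of the storage figures cited above.
# Assumptions: decimal units (1 PB = 1,000,000 GB) and a 32 GB iPad.
PB_IN_GB = 1_000_000
IPAD_GB = 32

financial_pb = 3.8  # petabytes stored per financial/securities firm
banking_pb = 1.9    # petabytes stored per banking firm

financial_gb = financial_pb * PB_IN_GB
print(f"Financial/securities: {financial_gb:,.0f} GB per firm")  # 3,800,000 GB
print(f"That is about {financial_gb / IPAD_GB:,.0f} iPads")      # ~118,750 iPads
print(f"Banking: {banking_pb * PB_IN_GB:,.0f} GB per firm")      # 1,900,000 GB
```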