id     int64    0 to 17.2k
year   int64    2k to 2.02k
title  string   lengths 7 to 208
url    string   lengths 20 to 263
text   string   lengths 852 to 324k
14,167
2,022
"Mobile security firm Zimperium to be acquired for $525M, eyes IPO | VentureBeat"
"https://venturebeat.com/security/mobile-security-firm-zimperium-to-be-acquired-for-525m-eyes-ipo"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Mobile security firm Zimperium to be acquired for $525M, eyes IPO Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Zimperium , a provider of security for mobile devices and apps, today announced an agreement for Steven Mnuchin’s investment firm Liberty Strategic Capital to acquire a controlling stake in the company for $525 million. The company is now aiming to accelerate its growth and has discussed pursuing an initial public offering (IPO) in the future, CEO Shridhar Mittal told VentureBeat. Mnuchin — formerly the U.S. secretary of the treasury and now the founder and head of Liberty Strategic Capital — will become chair of the board of directors at Zimperium in connection with the deal. The acquisition is expected to close at some point during the second quarter. According to Proofpoint, 74% of organizations faced phishing attacks over SMS text message, aka “ smishing ” attacks, in 2021. That’s compared to 61% in 2020. Zimperium, meanwhile, has reported that zero-day vulnerabilities actively exploited against mobile devices surged 466% in 2021. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! As threats such as mobile phishing and zero-day attacks targeting mobile devices have grown, “if there’s access to corporate data on these devices, they need to be protected just like PCs and Macs,” Mittal said. Protecting mobile devices In the realm of mobile threat defense (MTD), used for securing mobile devices such as smartphones and tablets, Zimperium stands out by offering on-device threat detection, according to Mittal. The company’s machine learning (ML) technology detects attacks on the device itself without needing to send data out to the cloud. This has benefits including better user data privacy and reduced latency, Mittal said. Most of Zimperium’s competitors in MTD, on the other hand, depend on the cloud, he said. “They have started trying to do things on-device. But that’s not an easy thing to do,” Mittal said. “We’ve spent years of looking at data and creating those machine learning models, and using all kinds of different techniques to make this happen. This would be a very difficult problem for any of our competitors to solve very easily.” Other key vendors in MTD include Check Point, Lookout, Symantec, Palo Alto Networks, Better Mobile Security, Jamf (which acquired Wandera) and Pradeo, among others, according to Gartner. 
Meanwhile, in the realm of app security, Zimperium provides app development teams with a scanning engine that can automatically detect issues such as vulnerabilities and policy violations, and enable teams to fix those issues early in the process, he said. Other capabilities include code protection (through obfuscation of code) and runtime defense with anti-malware functionality, Mittal said. Customer traction Zimperium now has 500 enterprise customers, and more than 7,000 customers in all. The Dallas-based company reports that it generated 53% growth in annual recurring revenue (ARR) in 2021, and is aiming to grow its ARR at an even faster rate for 2022, Mittal said. The company didn’t disclose its total revenue or ARR figures for last year. In MTD alone, “between us and our competition, we’ve barely penetrated the U.S. market,” Mittal said. “So there’s huge potential.” The acquisition deal with private equity firm Liberty Strategic Capital came about as Zimperium assessed its options for funding to drive the company’s next phase of growth, he said. Zimperium currently has 250 employees. In addition to hiring, the company plans to look at making acquisitions in its core areas of MTD and app security, Mittal said. IPO in the cards? The company may also look at going public, given the massive opportunity for growth ahead and the company’s differentiated solutions, he said. While no timeframes have been discussed for an IPO, “we’ve talked about the possibility,” he said. Down the road, “we would definitely consider that,” Mittal said. Liberty Strategic Capital will not own 100% of the equity in Zimperium under the agreement. One investor, SoftBank Corp., is hanging on to its stake in Zimperium, as are members of the company’s management team and employees. Zimperium didn’t disclose what percentage of its equity is set to be acquired by Liberty Strategic Capital under this agreement, or what the company’s total valuation would be in connection with the deal. Past investors in Zimperium have included Samsung, Warburg Pincus, Sierra Ventures and Telstra Ventures. Since its founding in 2021, Liberty Strategic Capital has made investments in several cybersecurity firms, including Cybereason , Contrast Security and BlueVoyant. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
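The on-device detection approach described in the article above can be illustrated, in spirit only, with a short sketch: a pre-trained model is evaluated against locally collected signals, so nothing has to be sent to a cloud service. The feature names, weights and alert threshold below are hypothetical and are not Zimperium's actual model.

```python
import math

# Toy on-device threat scoring: a pre-trained model is evaluated locally, so
# raw device signals never leave the handset. Feature names, weights and the
# alert threshold are invented for illustration only.

FEATURE_WEIGHTS = {
    "untrusted_profile_installed": 2.5,
    "os_integrity_check_failed": 3.0,
    "arp_table_anomalies": 1.5,
    "sideloaded_app_count": 0.4,
}
BIAS = -4.0
ALERT_THRESHOLD = 0.8


def score_device(signals: dict[str, float]) -> float:
    """Return a threat probability computed entirely on the device."""
    z = BIAS + sum(w * float(signals.get(name, 0)) for name, w in FEATURE_WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))


if __name__ == "__main__":
    observed = {"os_integrity_check_failed": 1, "sideloaded_app_count": 3}
    p = score_device(observed)
    print(f"threat probability: {p:.2f}", "-> alert" if p >= ALERT_THRESHOLD else "-> ok")
```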
14,168
2,022
"How automating vulnerability management reduces risk of cyberattacks | VentureBeat"
"https://venturebeat.com/security/how-automating-vulnerability-management-reduces-risk-of-cyberattacks"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How automating vulnerability management reduces risk of cyberattacks Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Cybercriminals are growing ever more relentless and deft with their attacks, with data breaches and system disruptions due to cyberattacks rising every year. Therefore, finding and strengthening cybersecurity weak spots, or vulnerabilities, is key to thwarting these attacks. A key vulnerability is apps. Many organizations rely on productivity software and apps built in-house or from IT service providers to be competitive in today’s market. However, while these solutions boost productivity and employee and customer experiences, many of them have weak security measures that can expose the organization to cyberattackers. Implementing a successful vulnerability management program is necessary for your overall IT risk management plan to protect your business from these threats. According to a report by Mordor Intelligence , the security and vulnerability management market is expected to reach $11.72 billion by 2026. Dealing with cybersecurity vulnerabilities, exploits and attacks is difficult since they are continuously evolving. New vulnerabilities and exploits are found daily, leading attackers to build innovative cyberthreats to exploit them. As a result, automated vulnerability management techniques like vulnerability testing and patch management are critical for mitigating emerging cybersecurity risks. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! If an organization doesn’t currently engage in vulnerability management, it’s essential to understand the potential consequences and how to develop a successful vulnerability management solution as part of your overall cybersecurity strategy. How does vulnerability management work? Vulnerability management can help identify security vulnerabilities in unpatched systems that, if exploited by adversaries, can put an entire enterprise environment at risk. Typically, vulnerability management is a foundational practice and an integral part of any standard cybersecurity initiative. However, constantly changing device demographics and increasing sophistication in cyberattack techniques, including an increase in recent multipronged attacks, are challenging existing vulnerability management practices. “Vulnerabilities open doors for attackers that are hidden from an organization. 
Even if attackers and organizations learn at the same time of a vulnerability, the attackers are faster to exploit than the organizations are to find and fix it,” said Kevin Haley, director of security response at Symantec. According to Haley, robust vulnerability management is the only way for businesses to have a fair chance against attackers and mitigating such cyberthreats. A vulnerability management program’s goal is to keep networks safe against known exploitations while ensuring compliance with regulatory obligations. This protects a business network from being breached through well-known vulnerabilities, making it much harder for cybercriminals to target the company. It can also help protect the business from penalties associated with regulatory noncompliance, saving money and your company’s reputation. Steve Benton, vice president of Anomali Threat Research, said he believes that as much as vulnerability management programs are absolutely critical for data-driven businesses to mitigate cyberthreats, they also need to be intelligence-led. “Organizations give themselves away too cheaply to attackers by not prioritizing mitigating vulnerabilities from their attack surface. Given the resource constraints all organizations face, you must have the means to determine and act on the vulnerabilities most likely to be exploited in attacks on your organization,” Benton told VentureBeat. Talking about how data-driven organizations can achieve best-in-class status for a vulnerability management program, Benton says that the vulnerability management cycle needs to be empowered and enabled by threat-relevant intelligence correlated to the organization’s attack surface and key assets. “Such precise and laser-focused assessment must be further translated into a verifiable patch/mitigation execution. Intelligence is the steel thread that will pump-prime best-in-class status,” said Benton. Key processes A vulnerability management program may be built internally or by utilizing a vulnerability management service from a managed security service provider (MSSP). When developing a program internally, several factors must be taken into account: Identification: A vulnerability assessment is an essential first step in developing a vulnerability management strategy. Without a method for identifying weaknesses, your management approach will be a shot in the dark rather than an intelligent strategy. As a result, conduct an initial evaluation to discover vulnerabilities and be receptive to employee input if they uncover other problems. For a thorough assessment, it is critical to scan systems and programs that have network access and track the services that run on the network, including remote access portals, during this stage. Analysis: The next step is to assess the risk of a vulnerability and estimate how much time, money or other resources would be required to rectify it. To determine these features, a team must discuss a few critical questions: How difficult would it be for an attacker to exploit this vulnerability? What danger does this vulnerability represent to our network or digital assets? Since each vulnerability is unique, it is critical to identify vital facts to make educated decisions with your vulnerability management team moving forward. Treatment: The next step is to address any vulnerabilities discovered within the network, hardware or software. 
The following action plans should be used to prioritize vulnerabilities based on their severity: Remediate: The ideal action plan for any possible risks discovered within a network is to completely resolve the vulnerability. If it is not feasible to resolve every vulnerability discovered, this should at least be the expectation when dealing with weaknesses that might cause significant damage to the organization. Mitigate: If the full resolution isn’t possible for the vulnerability, a solution is to mitigate its potential impact on the enterprise. This action plan ultimately buys you time until a solution is found and helps your cybersecurity posture tremendously. Acceptance: When the cost of fixing a vulnerability surpasses the potential harm of the exposure, it’s best to merely be aware of it. To address vulnerabilities more effectively, it is critical to collaborate with an internal IT team to evaluate which vulnerabilities require immediate attention and remedy, which may simply be mitigated for the time being and which don’t warrant any action at all. Continued reporting and monitoring: For continually developing cyberthreats, it’s critical not to stagnate in the vulnerability management program — something that may be avoided by periodically monitoring current vulnerabilities and scanning for new ones. Establish a simple approach to report potential vulnerabilities across all teams within your business by compiling reports of existing vulnerabilities and their plans of action. This will assist the internal IT staff in staying informed of current and prospective dangers. According to Pete Chestna, CISO North America at Checkmarx, when designing a vulnerability management program, firms frequently spend too much time “managing” the vulnerabilities rather than addressing them. “We need realistic goals based on the team’s maturity and the application’s importance. Any vulnerabilities that get to production by exception process or ‘management’ are probably there for good. So it’s important to be clear-eyed on that and refer from your data to confirm,” Chestna told VentureBeat. The role of automation Since current threats need constant moderation, vulnerability management software can assist in automating this process. A vulnerability management program employs a vulnerability scanner and, in some cases, endpoint agents to inventory and identify vulnerabilities in multiple systems on a network. Vulnerability scanning uses an automated program to scan an organization’s IT networks, apps, devices, and other internal or external assets for potential security flaws and vulnerabilities. Users receive a report at the end of each vulnerability scan that records the vulnerabilities discovered, as well as risk rankings for each vulnerability and security advice. Furthermore, the discovered vulnerability threats are evaluated in various contexts so that decisions regarding how to effectively handle them can be made. “The idea behind automated vulnerability management programs (AVMPs) is to reduce the time it takes organizations to roll out patches,” Alon Nachmany, field CISO at AppViewX, told VentureBeat. Nachmany says that the remediation process where patches must be tested and deployed is time-consuming and could increasingly benefit from automation. “[AVMPs] can help automate and ultimately reduce this process, rolling out patches much faster and plugging security holes that expose the company. 
In addition, automating the QA process for testing and the implementation factor would reduce the time it takes to secure the organization,” he said. The impact and exploitability of a vulnerability are estimated by taking into consideration a variety of parameters such as ease of access, authentication, the diffusion of the vulnerability, the availability of mitigation, and others. The exploitability and impact are then combined to assign each vulnerability a severity score between 0.0 and 10.0. This is known as the CVSS score (common vulnerability scoring system). The vulnerabilities are further categorized as high, medium or low severity based on their CVSS score. Vulnerabilities with a score of 7 to 10 are regarded as extremely serious, while a score of 4 to 6.9 are classified as medium and those with a value of 0 to 3.9 are classified as low. These scores enable developers and security professionals to prioritize vulnerabilities based on severity, ensuring that the most significant ones are handled first. Forrester senior analyst, Erik Nost, said that many security teams today deal with staffing and skill shortages, and automating critical processes such as vulnerability management can aid such use cases. “Anything that removes manual effort is always helpful. However, dealing with today’s threat volume is almost impossible without automation. Scanning for assets, and vulnerabilities on them, is the most common process that is fully automated today,” Nost told VentureBeat. Future vulnerability management challenges One of the critical future challenges for vulnerability management frameworks is the need for an integrated solution for supply chain attacks, said Rohit Dhamankar, VP of threat intelligence at Alert Logic. Dhamankar believes that supply chain attacks are a critical vulnerability that organizations need to address, as evidenced by the infamous Log4j critical vulnerability in December of 2021. “As organizations get more and more code-shared for development, it is necessary to know what software and packages are being used in the network directly or indirectly. It also highlights the boundary lines of shared responsibilities in this aspect,” he said. While automation can bring various benefits to the vulnerability management process for most medium- to enterprise-sized firms, it can also add potentially significant expenses, according to Jerrod Piker, competitive intelligence analyst at Deep Instinct. “An organization must know what assets are the most important to protect so they can balance the cost of automation, whether it be through in-house or third-party solutions. This can only be achieved through the process of categorization and prioritization,” Piker explained. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
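A minimal sketch of the prioritization step the article describes: bucket scanner findings by the CVSS thresholds quoted above (7.0 to 10.0 high, 4.0 to 6.9 medium, 0.0 to 3.9 low) and sort them so the most severe are handled first. The findings list is made-up sample data; only the Log4Shell entry and its 10.0 score are real, the other IDs are placeholders.

```python
from dataclasses import dataclass

# Prioritize scanner findings using the CVSS severity buckets described above.


@dataclass
class Finding:
    cve_id: str
    asset: str
    cvss: float  # 0.0 to 10.0


def severity(cvss: float) -> str:
    if cvss >= 7.0:
        return "high"
    if cvss >= 4.0:
        return "medium"
    return "low"


def triage(findings: list[Finding]) -> list[tuple[Finding, str]]:
    """Sort findings so the highest-scoring vulnerabilities are handled first."""
    ranked = sorted(findings, key=lambda f: f.cvss, reverse=True)
    return [(f, severity(f.cvss)) for f in ranked]


if __name__ == "__main__":
    sample = [
        Finding("CVE-2021-44228", "payments-api", 10.0),  # Log4Shell, mentioned in the article
        Finding("CVE-0000-0002", "intranet-wiki", 5.3),   # hypothetical entry
        Finding("CVE-0000-0003", "build-server", 2.1),    # hypothetical entry
    ]
    for finding, sev in triage(sample):
        print(f"{sev:6} {finding.cvss:4.1f} {finding.cve_id} on {finding.asset}")
```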
14,169
2,022
"The software supply chain: New threats call for new security measures | VentureBeat"
"https://venturebeat.com/security/the-software-supply-chain-new-threats-call-for-new-security-measures"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The software supply chain: New threats call for new security measures Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The contemporary software supply chain is made up of the many components that go into developing it: People, processes, dependencies and tools. This goes far beyond application code — typically the main focus of existing DevSecOps tools. Thus, today’s increasingly complex software supply chain requires a whole new security method. The quandary, though, is that many organizations struggle to not only secure their software supply chains — but to identify them. “The challenge of securing the software supply chain is significant and complex for virtually every organization,” said Katie Norton, IDC senior research analyst for devops and DevSecOps. “And, the many entry points into the software supply chain constitute a significant risk that has gone unaccounted for in many organizations.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! A new approach To address the growing issue, Chainguard today announced Wolfi, a new community Linux (un)distribution. It combines aspects of existing container base images with default security measures that will include software signatures powered by Sigstore, provenance and software bills of material (SBOMs). The company is also announcing Chainguard Academy, the first free, open source and interactive educational platform designed for software supply chain security. Additionally, its Chainguard Enforce platform is now generally available. “One of the biggest threats to securing the software supply chain is the way that we build software today,” said Dan Lorenc, Chainguard founder and CEO. “The tools we use to build software were not designed for the speed and scale of its use, which results in clunky architecture that is easy for bad actors to exploit or tamper with.” Governments around the world are asking questions and demanding guarantees in software. And while vendors — both existing and new — are providing tools, they fail to address the deeper problem: “The need for a fundamental shift in the way software is built,” said Lorenc. But first: Identifying the software supply chain The latest IBM 2022 Cost of a Data Breach Report provided one of the first analyses of supply chain security, revealing that nearly one-fifth of organizations were breached due to a software supply chain compromise. 
One of the biggest hurdles: Simply recognizing and identifying all the different ways bad actors can exploit the software supply chain, said Norton. When people say “software supply chain security,” they often think of exploiting open-source software vulnerabilities such as Log4Shell. But this is only part of the attack surface. A few supply chain attack vectors Norton identified include misconfigurations and hard-coded secrets in infrastructure-as-code (IaC) and misconfiguration in the CI/CD pipeline that can expose sensitive information or can be used as an entry point for malicious activity. Another threat is compromised developer credentials, often the result of poor governance or failure to apply least-privilege principles. Then there are hacking tools and techniques that are readily available on the web. “Advanced skills are not requisite for someone to breach your company’s software supply chain,” said Norton. The good news is that, with increased instances of exploits — and, with them, growing awareness — the software supply chain market is “an evolving domain” with new competitors constantly entering the space, she said. Building in security from the start As Lorenc explained, most of today’s workloads run on containers and distros were designed for an earlier era. This, coupled with new supply chain security risks, has exposed major gaps when running containers. For example, container images tend to lag behind upstream updates, meaning users are installing packages manually or outside package managers and running images with known vulnerabilities, he said. Many container images have no provenance information, making it difficult to verify where they came from or if someone has tampered with them. Naturally, this increases the attack surface. “The only way to solve these problems is to build a distribution designed for container/cloud native environments,” said Lorenc. Wolfi is a container-specific distribution that can “vastly simplify” the process by dropping support for traditional — and often irrelevant — distribution features, he said. It also allows developers to grasp the immutable nature of containers and avoid package updates altogether, instead rebuilding from scratch with new versions. “The reality is that software has vulnerabilities and that will never change,” said Lorenc. “And to begin to improve software supply chain security, we must begin where development begins — with developers — and provide tools that make the development lifecycle secure by default, from build to production.” The requirements of a modern software supply chain Wolfi enables purpose-built Chainguard images that are designed with minimal components to help reduce an enterprise’s attack surface and generate SBOMs at the time of development, said Lorenc. It is completely reproducible by default, meaning every package can be rebuilt from Chainguard’s source code. “This means a user will get the same package,” he said. It also allows developers to build images that are, “tamper-proof and trusted.” The company is producing an SBOM at the start of building software — not after the fact, he pointed out. The base is secure by default, scales to support organizations running massive environments, and provides the control needed to fix most modern supply chain threats. “Reverse engineering SBOMs isn’t going to work and will defeat the purpose of them before they can even be used effectively,” said Lorenc. “Wolfi helps to address this problem.” Chainguard Enforce is also now generally available. 
The supply chain risk management platform was launched as an early access program in April. It now includes new features such as “agentless” mode, a re-designed user interface with security metrics, SOC2 Type 1 certification, curated security policies and alerting and integrations with CloudEvents, OPA Gatekeeper and Styra, Terraform provider and Vault. A more holistic view All told, organizations should “look more holistically” at software supply chain security, said Norton. “Focusing only one dimension of the software supply chain is both unscalable and inadequate,” she said. “All the software supply chain attack vectors are interrelated and interdependent.” So, in addition to securing independent components of their applications, organizations should lock and guard all digital entry points into their software factories. “Securing only one attack entry point is the equivalent of locking the front door of your house while leaving the back door open,” said Norton. Organizations must find comprehensive tools that provide protection across the software development lifecycle. Established DevSecOps and application security testing vendors are increasingly incorporating software supply chain security into their larger platforms, so organizations should look to their current partners to understand their capabilities, she said. At the same time, the rapidly growing number of startups attacking this challenge should not be overlooked. Going forward, guidance and regulations from the U.S. government — such as Biden’s Executive Order on Improving the Nation’s Cybersecurity, guidance from the National Institute of Standards and Technology ( NIST ) and the Office of Management and Budget memos — will continue to be incredibly powerful forces. She credits these as a “significant contributor to how rapidly software supply chain security has become top of mind.” “It’s not only software suppliers that sell to the government that are going to be impacted — there will be downstream impacts,” said Norton. “As more software suppliers adopt these standards, non-governmental organizations will expect the same due diligence.” Education is critical Further exacerbating the supply chain security issue is a lack of comprehensive education, said Lisa Tagliaferri, Chainguard’s head of developer education. This is a barrier to wider adoption of software supply chain security recommendations, and is due to an “ever-changing technical landscape” and a lack of open-source tooling like Sigstore. This prompted Chainguard Academy, which provides free educational resources and recommended practices for software supply chain security tooling. “A driving force behind our effort was to provide software engineers and technology leaders the resources they need to be able to identify, mitigate and fix software vulnerabilities through tools and solutions that allow them to address security early and often across their development lifecycle,” said Tagliaferri. The Academy builds on the company’s previous educational efforts, including Securing Your Software Supply Chain with Sigstore course in partnership with the Linux Foundation and edX. Developers using Chainguard Academy will also be able to work with Sigstore and distroless container images directly from their browsers through an interactive sandbox terminal. “We believe that a key part of making the software supply chain secure by default is to help close this skills gap,” said Tagliaferri. 
“To achieve this goal, it was important that we kept critical educational resources open to everyone because we all have to do our part to help solve the software supply chain security problem.” "
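A minimal sketch of consuming the kind of SBOM discussed above: read an SPDX 2.x JSON document and list the packages it declares. The file path is hypothetical, and real SBOMs carry far more detail (licenses, checksums, relationships) than this prints.

```python
import json
from pathlib import Path

# Read an SPDX-format SBOM (JSON) and list the packages it declares -- the kind
# of build-time inventory the article argues should be produced up front rather
# than reverse engineered later. Assumes an SPDX 2.x JSON file at the
# hypothetical path "sbom.spdx.json".


def list_packages(sbom_path: str) -> list[tuple[str, str]]:
    doc = json.loads(Path(sbom_path).read_text())
    packages = doc.get("packages", [])
    # Each SPDX package entry carries at least a name; the version may be absent.
    return [(p.get("name", "<unnamed>"), p.get("versionInfo", "<unknown>")) for p in packages]


if __name__ == "__main__":
    for name, version in list_packages("sbom.spdx.json"):
        print(f"{name}=={version}")
```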
14,170
2,022
"Report: 85% of SREs say that automation is 'imperative' for innovation | VentureBeat"
"https://venturebeat.com/automation/report-85-of-sres-say-that-automation-is-imperative-for-innovation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 85% of SREs say that automation is ‘imperative’ for innovation Share on Facebook Share on X Share on LinkedIn Semiconductors are heart of digital electronics. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A new report by Dynatrace indicates that as organizations continue to evolve their digital transformation roadmaps, the proliferation of new applications, services and software products being launched and updated introduce mounting complexities in cloud ecosystems and IT infrastructures. Enter site reliability engineering (SRE). Businesses today are increasingly turning to SRE teams to drive greater alignment between development teams and to define the best practices that will enable various teams within an organization to automate processes at scale and ensure goals for business, security, quality, and performance, are all met equally. Despite this growing reliance, SREs are still bogged down by manual labor and large amounts of time spent addressing the security vulnerabilities and application failures that come with the multiplying number of applications, microservices and software products that expand existing cloud ecosystems. That’s why 85% of SREs report that the scalability of SRE practices will be extremely dependent on the availability of automation and AIops capabilities. Both are needed to accelerate innovation and transformation while alleviating some of the processes that typically require time-consuming manual effort. Minimizing time spent on these efforts will help SREs evolve into the more critical role businesses are giving them when it comes to digital transformation strategy. In fact, 88% of SREs surveyed claim to have a better understanding and recognition of the strategic responsibility that comes with their role in comparison to three years ago, especially as organizations tackle new challenges including the growth of new technologies, languages, platforms and tools in cloud-native delivery that’ve created an explosion of complexity. The findings are based on a global survey commissioned by Dynatrace and conducted by Coleman Parkes, which gathered responses from 450 SREs in large enterprises across various regions, including 150 in the U.S., 150 in EMEA, and 150 in Asia Pacific. Read the full report by Dynatrace. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
14,171
2,023
"New Relic launches Grok, a generative AI assistant to monitor software performance | VentureBeat"
"https://venturebeat.com/ai/new-relic-launches-grok-generative-ai-assistant"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages New Relic launches Grok, a generative AI assistant to monitor software performance Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today, all-in-one observability provider New Relic announced the launch of Grok – a generative AI assistant to help engineering teams monitor, debug, secure and improve their software stacks using natural language prompts. Grok comes embedded in the entire New Relic platform, which includes over 30 correlated monitoring services. Using a simple chat interface it can be triggered to keep an eye on and fix software issues, among other things. For example, it can save engineers from the tedious task of manually sifting through telemetry data, the company noted. How exactly does the Grok generative AI assistant help? Observability is critical to running digital businesses and making sure software applications continue to deliver expected performance and results. But the current approach to observability largely relies on engineering teams sifting through mountains of siloed telemetry data. This takes time and effort. Plus, a lot of engineers are not familiar with the complex systems and hard-to-use troubleshooting interfaces many observability platforms provide. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! With Grok, which uses OpenAI’s large language models (LLMs), New Relic is looking to address this gap. The new solution, as the company explains, provides enterprises with a simple chat interface. Engineers, regardless of experience, can type in their queries in natural language and get answers to help them isolate and fix the issue at hand. “Users can simply ask for root causes. Any question on their mind is fair game, however complex, such as ‘Why is my shopping cart service slow?’ or ‘How did the latest server update impact my app?’ New Relic Grok can analyze your telemetry and context (including recent changes introduced) across your entire software stack to suggest underlying causes and resolution steps,” Manav Khurana, chief product officer at New Relic, told VentureBeat. “Before New Relic Grok, a user would have had to either know exactly where to find some insight or data in New Relic’s platform, or think of how to phrase their question as a custom query, and then iterate to resolve errors in the query and make it return what they need. With New Relic Grok, they can just ask the question and see an answer,” Khurana explained. 
Along with isolating and fixing issues, the generative AI assistant can help with other aspects of using New Relic’s observability platform — setting up instrumentation and monitoring, building reports and dashboards, debugging code-level errors, managing accounts. Overall, it enables new teams to adopt observability in their workflow. “The combination of New Relic’s unified database and OpenAI’s large language models (LLMs) allows users to extend this kind of plain-language questioning into new territory,” Khurana said. “New Relic Grok can actually take questions, translate them into queries, run those queries and then return results as charts, tables, forms, reports and more.” Notably, a user who receives an answer that’s not as detailed as expected can iterate on the initial question by simply asking follow-up questions in plain language. Wherever I go, I see LLMs This move from New Relic marks the latest effort from an enterprise technology vendor to integrate large language models into its product. In recent weeks, we have seen Salesforce integrating Einstein GPT with its Flow automation suite, Microsoft’s Copilot proliferation , conversational querying from Kinetica, and a generative AI-powered manager from Pathlight for giving teams feedback on their performance. Many business intelligence solutions, including Domo , ThoughtSpot and SiSense , have also started offering generative AI capabilities. For New Relic, Khurana said, the focus is on creating value by pairing the latest and greatest in multimodal LLM models with the company’s own APIs and product capabilities. “That means leveraging OpenAI’s GPT-4 for its NLP prowess, establishing a feedback loop with our own APIs, such as NerdGraph, and enriching the experience with additional context from vector DBs,” he said. “We’re also experimenting with training our own models, which will be especially effective at addressing our customers’ most pressing problems in the observability space.” In that space, the company competes with players like Dynatrace , Datadog and Splunk. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
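The pattern Khurana describes, turning a plain-language question into a telemetry query and running it, can be sketched in a few lines against OpenAI's chat API. This is an illustration of the general pattern, not New Relic Grok's implementation; the prompt, the model choice and the NRQL target are assumptions, and it requires the `openai` package (1.x) with an API key in the environment.

```python
from openai import OpenAI

# Sketch of the "plain language in, query out" step described above. Not New
# Relic's implementation; the prompt and model name are illustrative only.

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You translate questions about application telemetry into NRQL queries. "
    "Respond with a single NRQL statement and nothing else."
)


def question_to_query(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model; the article only says OpenAI LLMs are used
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()


if __name__ == "__main__":
    print(question_to_query("Why is my shopping cart service slow?"))
```

In the full pattern the returned query would then be executed against the telemetry store and the results rendered as a chart or table, as the article describes.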
14,172
2,022
"Report: Only 27% of orgs have observability over their full stack | VentureBeat"
"https://venturebeat.com/data-infrastructure/report-only-27-of-orgs-have-observability-over-their-full-stack"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: Only 27% of orgs have observability over their full stack Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. New Relic’s 2022 Observability Forecast is the industry’s largest survey of IT practitioners and decision-makers to understand the current state of observability. Their definition of observability is the ability to measure how a system is performing and identify issues and errors based on its external outputs. The 2022 Observability Forecast offers a detailed view of how this practice is shaping engineering and the technologies of the future. Of those who had mature observability practices, 100% indicated that observability improves revenue retention by deepening their understanding of customer behaviors compared to the 34% whose practices were less mature. Based on the report’s definition of full-stack observability, only 27% of survey respondents’ organizations have achieved it. And an even smaller percentage — 3% — said that their organization has already prioritized/achieved full-stack observability. Full-stack observability, as used in this report, is achieved by organizations that deploy specific combinations of observability capabilities, including customer experience monitoring/DEM (front-end), services monitoring, log management and environment monitoring (back-end). Observability and outages The data supports a strong correlation between achieving or prioritizing full-stack observability and experiencing fewer outages, improved outage detection rates, and improved resolution. For example, 34% of respondents who indicated that they had already prioritized or achieved full-stack observability were also less likely to experience the most frequent high-business-impact outages (once per week or more), compared to the 52% who had not. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In addition, 68% of respondents who said they had already prioritized or achieved full-stack observability also said it takes less than 30 minutes to detect high-business-impact outages, compared to the 44% that had not. The ideal The research implies that the ideal state of observability is one where engineering teams monitor the entire tech stack in all stages of the software development life cycle, employ mature observability practice characteristics, and have unified telemetry data and a unified dashboard or visualization of that data — ideally in a single, consolidated platform. 
Nearly half of all respondents (47%) said they prefer a single, consolidated platform, yet just 2% said they use one tool for observability. More than half (52%) of respondents, including 57% of C-suite executives, expected observability budgets to increase over the next year. Respondents foresee their organizations needing observability for a variety of trending technologies, including artificial intelligence (AI), 5G, and Web3. Looking out to 2025, the report estimates that nearly all expect to deploy observability capabilities like network monitoring, security monitoring, log management and more. The report is also notable in that its raw data is open and available for public download. Methodology For the 2022 Observability Forecast, New Relic surveyed 1,614 IT professionals across 14 countries in North America, Europe and the Asia Pacific region between March and April 2022. Read the full report from New Relic. "
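A small sketch of the report's working definition of full-stack observability as a required set of capabilities (front-end customer experience/DEM, services monitoring, log management and back-end environment monitoring). The capability labels below are simplified stand-ins, not the report's exact taxonomy.

```python
# Check whether an organization's deployed capabilities satisfy the report's
# definition of full-stack observability. Labels are simplified placeholders.

REQUIRED_CAPABILITIES = {
    "customer_experience_monitoring",  # front-end / DEM
    "services_monitoring",
    "log_management",
    "environment_monitoring",          # back-end
}


def has_full_stack_observability(deployed: set[str]) -> bool:
    return REQUIRED_CAPABILITIES.issubset(deployed)


if __name__ == "__main__":
    org = {"services_monitoring", "log_management"}
    missing = REQUIRED_CAPABILITIES - org
    print(has_full_stack_observability(org), "missing:", sorted(missing))
```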
14,173
2,021
"Save time and money with this full-stack web application generator | VentureBeat"
"https://venturebeat.com/uncategorized/save-time-and-money-with-this-full-stack-web-application-generator"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Save time and money with this full-stack web application generator Share on Facebook Share on X Share on LinkedIn In this high-tech age, we are all about completing our tasks faster, more efficiently. Of course, this was one of the original intents of computers, and now, a couple of decades into the 21 st century, coding those computers has become integral to our everyday lives. Every piece of technology relies upon the coding that has made our lives easier. But how can we make the coding easier? If you are a Javascript/Typescript developer, then ScaffoldHub can make help. JavaScript continues to be a popular language among programmers. According to Statista , in 2021 almost 65% of survey respondents stated it was their language of choice. It’s relatively simple to learn (but a little harder to master), it can be run anywhere, and it features modern frameworks that are very developer-friendly with good community support. But if you have done any work in developing web applications, you know that there is still a great deal of time and effort that goes into each project. What if you could save not only time but money as well, by using a full-stack web application generator? What used to take about two months to complete can now be done in as little as 15 minutes. With ScaffoldHub you pick and choose your entities, relations, fields, and validations. You select your front-end framework and create your back end with NodeJS. And just like that, you have a functioning web app. You can preview it online and edit the source code, subject to the Scaffold Hub License. It comes with API documentation, authentication, security, audit logs, forms, lists, filters, and so much more. Secure and mobile-friendly, the app is constantly being updated to keep up with the latest versions of the technologies used. Normally valued at $187, lifetime access to this feature-packed full-stack web application generator can be yours for only $129. When you consider that the average developer earns $60 per hour, this package will pay for itself in the development of your first application. VentureBeat Deals is a partnership between VentureBeat and StackCommerce. This post does not constitute editorial endorsement. If you have any questions about the products you see here or previous purchases, please contact StackCommerce support here. Prices subject to change. 
"
14,174
2,018
"Are microservices right for you? | VentureBeat"
"https://venturebeat.com/business/are-microservices-right-for-you"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Are microservices right for you? Share on Facebook Share on X Share on LinkedIn Presented by ThoughtWorks Microservices play a major role in many organizations today. It’s an architectural approach that’s gained attention at companies such as Netflix, Google, and indeed, my own, ThoughtWorks. We’ve seen more and more enterprises adopt this style and in many cases, with great success, helping them overcome agility and scalability challenges that they routinely experience in traditional monolithic deployments. But a word to the wise: microservices aren’t for everyone. For starters, there’s a minimum level of maturity needed in a number of things before you consider microservices; things like continuous delivery and infrastructure automation practices. If you don’t think your business has a strong handle on these, you need to wait before you think about microservices. And this level of maturity is still a stretch for many organizations. Microservices place an increased burden on your operations because it means more things to monitor, more alerts being generated — and more things to deploy. It’s only those organizations with comprehensive automation and continuous delivery practices that can hope to succeed. In some instances, organizations aren’t set up to cope with the complexity microservices bring. For example, where organizations have monolithic applications, business processes execute most often within the same process boundary, allowing for traditional transactions and ensuring all or nothing execution. Microservices systems are inherently distributed, and business processes are most often completed through the interaction of multiple microservices. Thus, there are failure modes for microservices that simply don’t exist in a monolith, and those failures must be handled. There’s a trade-off (a common one) between the increased flexibility of the microservices approach and the simplicity of the monolithic approach, particularly if it is a well-structured monolith. Applications that won’t benefit as much from the flexibility are poor candidates for a microservices architecture. A crucial design decision for a microservices architecture is the placement of the boundaries between services. While bounded contexts certainly provide strong guidance for where the appropriate boundaries are, choices still exist and the wrong choice complicates the system. It might not actually be clear for a new domain where the proper boundaries are, so there’s some justification for not starting with a microservices architecture until the domain and the proper contexts are more clear. 
Even in a well-understood domain, the technical complexities associated with microservices, like the previously mentioned failure scenarios, introduce additional considerations for the service boundaries that again complicate the resolution of this crucial design decision. You’re probably now wondering why or if ThoughtWorks even still recommends a microservices architecture. The flexibility, the independent scalability, the evolutionary characteristics, and the strong encapsulation are still very real benefits to a microservices approach. We are still firmly committed to using microservice architectures, extending our understanding of those architectures, and continuing to explore tools and approaches that address the issues articulated here. We’re also extremely aware of the subtleties and complexities of deciding which technologies are a good fit — and when they’re right. That’s why we came up with the idea to create the Technology Radar , which we publish biannually. It allows us to track emerging and innovative ideas in tech, and how ready they are for prime time. Microservices were rapidly put in our Trial ring on the Radar. To us, that signifies that we’ve successfully used the tech in production environments. But it’s never made it to our Adopt ring — that’s where we put things we think you should be using now, pretty much as a default. For the reasons I’ve outlined above, we have many reasons why we’re enthusiastic about microservices, but for us, it’s not a recommendation without caveats. Rebecca Parsons is Chief Technology Officer at Thought Works. Sponsored posts are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected]. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
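One concrete instance of the failure modes mentioned above: a call that is an in-process function inside a monolith becomes a network request between microservices, so it can time out or fail partway through and must be handled explicitly. A minimal sketch with a timeout and bounded retries follows; the service URL and endpoint are hypothetical, and it assumes the `requests` package.

```python
import time

import requests

# A monolith's in-process call becomes a network request between services, so
# timeouts, retries and an explicit failure path are now the caller's problem.
# The endpoint below is hypothetical.


def get_order_status(order_id: str, retries: int = 3, timeout_s: float = 2.0) -> dict:
    url = f"https://orders.internal.example/api/orders/{order_id}"  # hypothetical endpoint
    last_error = None
    for attempt in range(retries):
        try:
            response = requests.get(url, timeout=timeout_s)
            response.raise_for_status()
            return response.json()
        except requests.RequestException as exc:
            last_error = exc
            time.sleep(2 ** attempt)  # simple exponential backoff between attempts
    # After retries are exhausted the caller still has to decide what failure
    # means for the business process -- a decision a monolith never had to make.
    raise RuntimeError(f"order service unavailable after {retries} attempts") from last_error


if __name__ == "__main__":
    try:
        print(get_order_status("demo-123"))
    except RuntimeError as err:
        print("fallback path:", err)
```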
14,175
2,021
"Why siloed SaaS tools do not play well with others | VentureBeat"
"https://venturebeat.com/2021/11/24/why-siloed-saas-tools-do-not-play-well-with-others"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Why siloed SaaS tools do not play well with others Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Buddy Brewer, GVP &GM, Partnerships, New Relic In Slack channels and Zoom calls across Silicon Valley, no business buzzword has more cache at the moment than “customer-centricity.” Tech companies are obsessed with delivering better experiences for the customer , reasoning that personalized, reliable services will deliver exceptional business returns over time. Rather than developing products and working to build excitement for those products among customers, this new approach relies on the voice of the customer to drive development. But while these large tech companies pay lip service to the idea of the exceptional customer experience, they have so far refused to address one of the biggest pain points for enterprise tech customers: a vast number of siloed SaaS tools that don’t play well with one another. While each solution plays an important role in the customer’s tech stack, the majority of developer teams are left to build cumbersome, clunky workarounds to accommodate tools that aren’t interoperable. Siloed systems and proprietary software cause headaches for the end user both during everyday use and in moments of crisis. When you can’t connect your systems, it’s more challenging to understand what is happening throughout your tech stack and how one function affects another. In particular, it’s much more difficult to fix things when they break — imagine trying to fix a car without having visibility into how fluids flow from one part to another. The end result of this situation is the exact opposite of the stated goal of customer centricity: the user experience is slow and frustrating, with everyday tasks demanding an outsized amount of time and attention. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Collaboration’s rising tide What is the logic behind building proprietary software and siloed services? For individual businesses, the idea is that the best technologies will rise to the top ; customers will have no choice but to use the most effective solution, regardless of how it interacts with the rest of the tech stack. But over time, this strategy may deliver diminishing returns. 
If given the choice between the best software and a solution that is nearly as good and much more interoperable, you can expect customers to eventually choose the more flexible product. In the short term, companies that invest in interoperability and collaboration may be sacrificing a bit of turf, but they’ll benefit in the long term from a rising tide of customer satisfaction. If software providers truly believe in customer centricity, they’ll consider how to solve the biggest pain point for enterprise software users. Shifting towards collaborative standards and interoperability for SaaS tools will have a game-changing effect on the holistic customer experience — and you can expect customers to reward the companies that make this happen. SaaS tool solutions in standards How can our industry move towards more interoperable software? The answer may be found in a number of existing technology standards. A valuable example of observability is the OpenTelemetry standard , an open source standard for service instrumentation that makes it significantly easier to collect and manage performance data throughout an enterprise’s systems. OpenTelemetry is just one example: today’s leading SaaS tool providers could very easily use open standards to develop solutions that work well together and enhance performance for the end-user. If we look at specific industries, there are even more impressive examples of the way established software standards can transform the user experience. In the construction industry, buildingSMART is an open standard that helps to support application development; this has dramatically simplified the process of generating and managing building information models, the digital representations of physical buildings. For government organizations, the National Information Exchange Model provides a vital framework for information exchange among all levels of government and the private sector. When companies and industries commit to common standards, the result is consistently a more efficient and enjoyable experience for the end-user. At the end of the day, SaaS providers need to ask if they’re truly centering the customer in their decision-making. From the end user’s vantage point, it looks instead like a sea of silos with proprietary software at the center. Buddy leads the partnerships team at New Relic. He has nearly two decades of experience building SaaS products for DevOps and has expertise in web performance optimization, frequently speaking at leading tech conferences on the subject. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
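As a rough illustration of the kind of vendor-neutral instrumentation an open standard such as OpenTelemetry makes possible, the sketch below records a traced operation with the OpenTelemetry JavaScript API. It assumes an OpenTelemetry SDK and exporter have been configured elsewhere in the application; the tracer name, span name, and attribute keys are illustrative assumptions.

```typescript
// Minimal sketch of vendor-neutral instrumentation with the OpenTelemetry JS API.
// Assumes an OpenTelemetry SDK and exporter are configured elsewhere in the app;
// the tracer name and attribute keys below are illustrative.
import { trace, SpanStatusCode } from "@opentelemetry/api";

const tracer = trace.getTracer("billing-service");

export async function chargeCustomer(customerId: string, cents: number) {
  return tracer.startActiveSpan("chargeCustomer", async (span) => {
    span.setAttribute("customer.id", customerId);
    span.setAttribute("charge.amount_cents", cents);
    try {
      // ... call the payment provider here ...
      span.setStatus({ code: SpanStatusCode.OK });
    } catch (err) {
      span.recordException(err as Error);
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}
```

Because the instrumentation targets the standard API rather than any one vendor's agent, the telemetry backend can be swapped without touching application code, which is the interoperability argument in a nutshell.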
14,176
2,015
"Organizational debt is like technical debt -- but worse | VentureBeat"
"https://venturebeat.com/2015/05/19/organizational-debt-is-like-technical-debt-but-worse"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Organizational debt is like technical debt — but worse Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Startups focus on speed since they are burning cash every day as they search for product/market fit. But over time, code/hardware written/built to validate hypotheses and find early customers can become unwieldy, difficult to maintain, and incapable of scaling. These shortcuts add up and become what is called technical debt. And the size of the problem increases with the success of the company. You fix technical debt by refactoring , going into the existing code and “cleaning it up” by restructuring it. This work adds no features visible to a user but makes the code stable and understandable. While technical debt is an understood problem, it turns out startups also accrue another kind of debt – one that can kill the company even quicker – organizational debt. Organizational debt is all the people/culture compromises made to “just get it done” in the early stages of a startup. Just when things should be going great, organizational debt can turn a growing company into a chaotic nightmare. Growing companies need to understand how to recognize and “refactor” organizational debt. I had lunch last week with Tom, the CEO of a startup that was quickly becoming a large company – last year’s revenue was $40 million, this year likely to be $80 million maybe even $100 million in ad revenue. They had reinvented a traditional print media category onto web and mobile devices for a new generation of users who were no longer buying magazines but reading online. Their content was topical, targeted, and refreshed daily. Equally important, their VP of marketing had brilliantly executed a stream of social media campaigns (Facebook likes and partnerships, email campaigns, etc.) to drive traffic to their site, which they then turned into ad revenue. Tom was excited about their next big round of funding that valued them at almost half a billion dollars. He talked about how they were trying to maintain their exponential growth and told me how many people they were adding and the issues of scaling that rapidly. (They had doubled headcount from 100 to 200 in the last year and were planning to double again.) 
While he kept bringing the conversation back to their big valuation, I tried to steer the conversation back to how they were going to deal with: training the influx of new hires – in both culture and job-specific tasks retaining their existing hires who were working for intern-like salaries with little equity. His answer centered on the great location of the new building, what great furniture they were getting, and the compensation plans for the key members of the executive staff. This didn’t feel good. Since the meeting had been a courtesy to Phillipe, one of their VC board members, I grabbed coffee and asked him what scaling challenges he saw for the company. I was taken aback when I got a reply that sounded like VC buzzword bingo – phrases like, “They’re a platform not a product” and the ever popular “They’re a potential Unicorn. ” While the strategy sounded like a great long-term plan, I poked a bit and asked, “So what’s the training and onboarding plan for the new hires? What are you doing about the pay scales at the bottom of the organization? Aren’t you concerned about losing qualified people that the company spent the last few years training but never compensated adequately?” I got answers that sounded like Tom’s – new stock grants for the executive staff, great new building, and oh, by the way, Tom and his cofounder got to sell some stock in the new round. And let me tell you about the vision and strategy again. As Phillipe kept talking, I listened, but not really, because I started realizing that while he was a genius in finding and nurturing great early-stage deals and had a vision that sounded great for the new investors, he didn’t have a clue about how to actually scale a company. He had never run one, and worse, had never been on a board of a startup making the transition from searching for a business model and product/market fit to the next phase of “building” the infrastructure to support scale. Unless they were planning to flip this company, organizational debt was going to hit faster than they could imagine. They needed a plan to “refactor” organizational debt. And Tom wasn’t going to get it from his board. While the company had a great plan for keeping the top executives and had all the startup perks like free food and dogs at work, they had spent little time thinking about the organizational debt accruing with the first 100 employees who had built the company underneath them. These were the employees that had the institutional knowledge and hard-earned skills. Originally they had been attracted by the lure of being part of a new media company that was disrupting the old and were working for low salaries with minimal stock. And while that had been enough to keep them focused on their jobs, the new funding round and onslaught of new employees at much higher salaries had them looking around and updating their resumes. Surprisingly, given the tidal wave of new hires, formal training and job descriptions were still stuck in the early-stage “we’re too small to need that” mindset. The reality was that, with hundreds of new employees coming on board, the company desperately needed a formal onboarding process for new employees; first, to get them assimilated into the company culture and second, to train them how to do their specific jobs. Unfortunately, the people who could best train them were the underpaid employees who were now out looking for new jobs. Organizational debt was coming due. I had promised Tom the CEO we’d grab coffee again. 
When we did, I asked him about his head of HR and heard all about what great medical and insurance benefits, stock vesting, automated expense account forms, movie night, company picnics, etc., the company had. I offered that those were great for an early-stage company, but it was time to move to a new phase (and perhaps a new head of HR). Since Tom was an engineer, I explained the “organizational debt” metaphor. He got it instantly, and before I could even suggest it, he asked, “So how do I refactor organizational debt?” I suggested that were seven things he could do – some quickly, some over time: 1. Put together a simple plan for managing this next wave of hiring. Tell each hiring manager: No new hires until you write/update your own job description Next write your new hire job description Next write how you will train new hire(s) in their functional job Next write how their job fits into each level upward and downward And how it supports the mission of each level upward and downward 2. Realize his expense plan is too low. I offered that it appeared he had put together an expense budget using current employee salaries. If so, he was in danger of losing the people he most cared about keeping. He should stop thinking about 10 percent raises and start thinking about what he’d have to pay to replace employees who hold critical knowledge and train new ones. It felt to me more like 50 percent raises in quite a few cases. 3. He needed to have his head of HR: Do a salary survey of existing employees and industry comparables Identify the employees they wanted to keep Upgrade their salaries and equity ASAP Some of the harder suggestions had to do with the organization as whole: 4. He needed to consider refactoring some of the original hires and their roles. Some employees don’t scale from “Search” to this new phase of “Build.” Some because they have performance problems or don’t fit a bigger organization, attitude etc. Some of these may be friends. Leaving them in the same role destroys a sense of what’s acceptable performance among new employees. This is hard. In addition to refactoring the people, it’s time to relook at the company culture. Do the cultural values today take into account the new size and stage of the organization? What are the key elements that have “made it great” so far? Are they the same? different? how? why? It may be time to revisit what the company stands for. Now that the company no longer fits in a conference room or even the cafeteria, it needs a way to disseminate information that grows with the organization. At times, this requires the same messages being repeated 4 or 5 times to make up for the fact the CEO isn’t always delivering them personally. Emphasize in the corporate messaging that while it is a period of rapid change, the company culture will be an anchor that we can rely upon for orientation and stability. Does customer communication need to change? In the past, any customer could talk to Tom or expected Tom to talk to them. Is that feasible? Desirable? Finally, since this is new territory for Tom and board, create an advisory board of other CEOs who’ve been through the “build” stage from a startup to growing company. Lessons Learned Companies lucky enough to get to the “build” phase have a new set of challenges. They’re not just about strategy. They’re about fixing all the organizational debt that has collected. Onboarding, training, culture, and compensation for employees at the “build” phase all require a fresh look and new approaches. 
Failing to refactor organizational debt can kill a growing company. Steve Blank is a retired serial entrepreneur-turned-educator who has changed how startups are built and how entrepreneurship is taught. He created the Customer Development methodology that launched the lean startup movement, and wrote about the process in his first book, The Four Steps to the Epiphany. His second book, The Startup Owner’s Manual , is a step-by-step guide to building a successful company. Blank teaches the Customer Development methodology in his Lean LaunchPad classes at Stanford University, U.C. Berkeley, Columbia University, UCSF, NYU, the National Science Foundation and the I-Corps @NIH. He writes regularly about entrepreneurship at www.steveblank.com. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,177
2,017
"Developers spend 20% of their time fixing problems -- and it’s killing your company | VentureBeat"
"https://venturebeat.com/2017/07/13/developers-spend-20-of-their-time-fixing-problems-and-its-killing-your-company"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Developers spend 20% of their time fixing problems — and it’s killing your company Share on Facebook Share on X Share on LinkedIn Presented by Raygun I’d assume, like me, you grew up in a very different world than we live in today when it comes to technology. Today software touches our lives constantly from the moment we wake up, go to work, and then hit the hay and the end of the day. But as software eats the world, it also eats into the time of software development teams. They need to fix any problems in their codebase that inevitably crop up, regardless of how much testing they put in before releasing into production. That comes with a cost. The Systems Sciences Institute at IBM has reported that “the cost to fix an error found after product release was four to five times as much as one uncovered during design, and up to 100 times more than one identified in the maintenance phase.” in the earlier days of windows and home computing, when things went wrong and didn’t work as expected, we’d be prompted with the question: “Do you want to send this error report to the developer?” This clunky and somewhat backwards way of gaining feedback from users is still present in some software applications today. Why do companies make their users do all the work? Users are well aware they are sending such reports down a black hole, never knowing if anyone received it, looked at the information and applied a fix. Only 1 percent of users will actively report the issues they encounter, so developers who put the onus on the users to report bugs are likely unaware of the large majority of issues their users are facing. You can’t fix what you can’t see. Teams want to deploy code more quickly Teams with traditional set-up who rely on users to report issues before any action is taken can spend hundreds of hours a week in lost development time — much of it just looking for root causes of bugs and performance issues. This cost is not only financial though. The opportunity cost of not shipping code quickly and efficiently means features take longer to ship, customers don’t see any rapid innovations, and managers get impatient. According to a recent survey, more teams are deploying faster, with 14 percent reporting they deploy hourly, up from only 10 percent last year. At the other end of the spectrum, few teams are deploying weekly (21 percent, down from 23 percent last year) or even less frequently (31 percent, down from 33 percent last year). However, this isn’t the end of the story. On the flip side, we also see an increase in the desires of development teams to deploy even more quickly. 
When asked a similar question about their ideal deployment times, 28 percent reported that they would like to deploy hourly, a sizable increase over the 18 percent that reported the same in the past year. Boosting the ROI of software development teams The days of the developer saying “I can’t replicate it” are quickly coming to an end now that they have dedicated software intelligence tools available. Dedicated error and crash reporting software can pull in full stacktraces, environment information, and even automatically identify which specific users experienced the problem, making error replication and debugging far quicker than traditional methods of digging through log files. Companies slow to adopt such innovations stand to lose considerably when users run into issues that development teams aren’t aware of. And shipping of new features and updates is hampered by time-intensive exploration into what happened and why. Source: Raygun ROI Calculator A developer could spend half their time fixing problems with code that’s already written. This is the maintenance aspect of any software development lifecycle and fixing problems in colleagues code can only lead to increased detection and resolution time. So what’s next? The way we build, maintain, and deploy software is much different in today’s environment from the one of ten years ago. So what will the next stage of software development look like? Continuous deployment and integration is being adopted by more and more teams and is necessary for developers to deploy features, alterations, and bug fixes at a fast pace. Tools now exist to give developers real-time feedback on how their code is being received by end users in production. With insights into why users are having poor user experiences delivered to the team directly there is little need for users to be the ones finding and reporting problems themselves. Software development teams who don’t adopt modern-day release cycles and real-time diagnostics on production code stand to lose significantly when their faster-paced competitors overtake them. Sponsored posts are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected]. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
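As a rough sketch of what automatic, client-side error reporting involves, the snippet below captures uncaught errors and unhandled promise rejections along with a stack trace, environment details, and the affected user, then ships them to a collector endpoint. The endpoint path and user identifier are illustrative assumptions; this is a generic illustration, not any specific vendor's API.

```typescript
// Minimal sketch of client-side error reporting that captures a stack trace,
// environment details, and the affected user, then posts them to a collector.
// The "/error-collector" endpoint and userId wiring are illustrative assumptions.
interface ErrorReport {
  message: string;
  stack?: string;
  url: string;
  userAgent: string;
  userId?: string;
  occurredAt: string;
}

function reportError(error: Error, userId?: string): void {
  const report: ErrorReport = {
    message: error.message,
    stack: error.stack,
    url: window.location.href,
    userAgent: navigator.userAgent,
    userId, // wire in the signed-in user's ID here if available
    occurredAt: new Date().toISOString(),
  };
  // Fire-and-forget so reporting never blocks the user.
  navigator.sendBeacon("/error-collector", JSON.stringify(report));
}

// Capture uncaught errors and unhandled promise rejections automatically,
// instead of asking users to file reports themselves.
window.addEventListener("error", (e) => reportError(e.error ?? new Error(e.message)));
window.addEventListener("unhandledrejection", (e) =>
  reportError(e.reason instanceof Error ? e.reason : new Error(String(e.reason))),
);
```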
14,178
2,018
"Implementing micro frontends to overcome technical debt | VentureBeat"
"https://venturebeat.com/2018/10/23/implementing-micro-frontends-to-overcome-technical-debt"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Implementing micro frontends to overcome technical debt Share on Facebook Share on X Share on LinkedIn Any company that is constantly innovating, upgrading and pushing out new products is likely to rack up technical debt. This debt consists of the time and resources it takes to modify or enhance systems. While some level of technical debt is healthy, too much can throttle innovation. And the longer you take to address it, the bigger the bill will be when you finally decide to pay it off. My team at Trulia recently tackled our own technical debt by re-architecting the company’s codebase and the team structure. Our ‘aha’ moment Our company supports tens of millions of users each month, fulfilling thousands of requests per second. But we realized there was a problem: As we continued to grow, we were not moving fast enough or delivering the product at the pace consumers needed. Our aha moment came at a gathering of engineers from across the organization. We realized that 13 years of technical debt — a byproduct of our rapid growth — was the problem. This was when it became clear we needed to significantly modernize our front-end architecture to continue improving the product. So we got to work. Our solution: Project islands We decided to restructure our team using micro frontends (what we call “project islands”) to streamline development and ensure consistency across the organization. These separately deployed “islands” of end-user experience allow us to streamline future development by empowering each team to own its destiny all the way to production. By breaking our large projects into manageable, autonomous components, our teams can work smaller and faster. We created two new roles, architect and performance manager, to streamline work and ensure no one’s efforts were duplicated. Our architect and I intentionally sat next to each other so we could continuously interact to ensure consistency and communication among our teams. In order to help prevent future technical debt, we also created two new working groups, Engineering Principles and Microservices Strategy, made up of engineers from across the organization. They are responsible for guiding engineers to better weigh the tradeoffs when making technical decisions and to outline what the technical architecture we’re working towards should look like. Why islands work Project islands have made a significant difference in how our engineering teams work. But what’s the secret? What it really comes down to is allowing our teams to work independently with flexible technology that gives them more freedom to innovate. 
The biggest win is time to deployment — we’re more efficient on islands because they involve less team coordination. Now, instead of a couple of releases a week, we can make daily, or even hourly, releases with the goal of a continuous delivery model. In order to achieve this, we looked at the typical user’s journey through our site and packaged pages into natural groupings of code, or islands. Organizations looking to replicate this should start by developing a high-level model that shows where each microservice fits and allow developers to engage with the model to build and leverage a common platform. Once you have a solid micro frontends model, you can develop application shells to implement changes without impacting another team’s code. Application shells insulate apps from frequent frontend changes to provide breathing room between islands and the underlying technology. Some of our biggest takeaways from this project involved the technology we used to aid in our micro frontend deployment. We cut down complexity by utilizing GraphQL to allow developers to access only the data they need and in the proper format by integrating with backend services. We use Kubernetes and Istio to standardize metrics, monitoring, and traffic between microservices. Other organizations can use these basics to implement a micro frontend strategy to work more strategically and efficiently while managing technical debt. This architecture enables structured and intentional planning and independent teams — and it lets engineers focus on innovation. Deep Varma is VP of Engineering at Trulia. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,179
2,021
"CircleCI: Making life easier for software engineers speeds up innovation | VentureBeat"
"https://venturebeat.com/2021/07/06/circleci-making-life-easier-for-software-engineers-speeds-up-innovation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages CircleCI: Making life easier for software engineers speeds up innovation Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Making life easier for software engineers can improve the organization’s bottom line and speed up innovation, a recent report from continuous integration and delivery platform provider CircleCI found. Above: Developer advocates are already at work in various roles. In May 2021, CircleCI searched LinkedIn and found 117,151 results for “developer experience” in the United States. In a world that increasingly relies on digital products, the role of the developer is growing in importance within the business matrix. Engineering teams need a leader — a Developer Experience Engineer (DXE) — who ensures developers have the right tools, processes, and environment to maximize productivity and create the greatest business value possible, CircleCI said in its report. There is growing awareness developer velocity, productivity, and happiness are cornerstones of successful businesses, and that DXEs play an important role on development teams. Without DXE expertise, engineers spend time on maintenance and workflow optimization instead of building, which is less efficient than having a person with centralized authority handle maintenance. Recent McKinsey research found that businesses that prioritize developer velocity have four to five times the revenue growth of their counterparts. CircleCI identified six valuable ways a DXE can enhance developer experience — from ensuring developer flow to bringing leadership closer to engineering teams — which ultimately improves business success. Gain meaningful value from talent. The average cost of a developer minute in Silicon Valley is about $1.42. That’s every minute a developer is in a seat and the meter is running, and yet somehow organizations are rife with productivity killers. Developers in flow. Distractions can make or break a developer’s productivity. Everything from email and Slack to the tools developers use to build and test can take a developer out of the flow state — reducing productivity and increasing costs and toil. Solving interesting problems. Developers want to work on interesting problems but often the work doesn’t meet this standard. Some of the less cutting-edge work developers are tasked with — updating plugins or investigating and fixing flaky tests — can be reduced or resolved by leveraging the right automation tools — with the expertise and direction of a DXE. 
Ensuring work has meaning. Getting developers closer to the end customer and the challenges their product helps to solve is what connects them to the company mission. Too often, teams can lose sight of their organization’s mission and the value they deliver to their customers. Lifting developers out of daily toil by solving real and difficult challenges, helping them ship quality products faster, helps bring the team closer to the end customer, and highlights how they are helping improve the experiences and lives of their users. Everyone benefits and team satisfaction is boosted. Bring buying decisions closer to the engineering team. At many organizations, tools and engineering solutions are largely decided upon by managers, are far removed from the core needs of the developers, and focused on cost rather than value. Tool decisions are made at levels removed from the engineers who use them, at the same time that an abundance of new tooling options are available. Decision paralysis may happen either way but DXEs with their experience and focus can overcome this risk. A DXE can bridge the gap between the top of the organization and developers that are doing the work, offering holistic benefits. Bring leadership closer to the engineering team. Measuring and optimizing engineering velocity is the primary goal, as well as the need to capture and report on engineering success and how that maps onto business value. The leadership level benefits from having a context-switching DXE in the engineering department who will translate engineering success into business value. CircleCI recommends that developer experience managers have these qualifications: Experience managing software development teams A deep understanding of modern development practices and tools Ability to establish team objectives aligned to business goals Is a process expert that can organize and disseminate information Ability to make decisions The report also suggests that a DXE should focus on these business outcomes: Revenue growth Improved end-user experience Increased quality of releases Engineering team efficiency The emergence of the DXE as a standard role will unleash the power of developers across every type of organization and in every industry, promising to increase productivity, efficiency, and product quality. For organizations looking to create resilient teams, tools, and infrastructure to combat the next inevitable disruption to the industry: it starts with the DXE. Read the full Why Developer Experience Engineers are the key to accelerating your business report from CircleCI. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,180
2,012
"How to develop open-source software within any kind of company | VentureBeat"
"https://venturebeat.com/2012/03/19/how-to-develop-open-source-software-within-any-kind-of-company"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How to develop open-source software within any kind of company Wayne Jackson Share on Facebook Share on X Share on LinkedIn For businesses and other organizations today, open-source software (OSS) is transformative in terms of its ability to allow organizations to write software very quickly and to leverage innovation very aggressively. OSS component-based development has reached a strategic tipping point, having moved from a cost-effective solution to a competitive advantage capable of delivering rapid and substantial return on investment for organizations that use it. And use it they do. More than 80 percent of modern software includes open sourcecomponents. Typical organizations, including Global 2000 enterprises, use thousands of OSS components, often in mission-critical software portfolios. Startups can quickly bring applications to market by focusing creative development on their core competency and relying on OSS for everything else. My company’s Central Repository , containing nearly 90 percent of open source Java projects, serves more than four billion requests per year to more than 61,000 organizations per year, including more than half of the Global 2000. A vibrant ecosystem with a fundamental flaw The same core advantage of OSS –its free availability, rapid innovation, and highly interdependent projects — introduces risks that can sabotage the IT or business value of key applications. The issue really boils down to two interrelated concerns: Complex dependencies: The open source ecosystem is comprised of hundreds of thousands of components, each of which may depend on tens or hundreds of other components. The whole ecosystem is interdependent. As a result, the properties (good or bad) of any one component are inherited across many others. A simple, but potent example might help. Version 2.5.6 of the Spring-beans contained a severe, remotely exploitable security flaw. Spring-beans is a commonly used component, and 1,447 others depend on it. So the security vulnerability was inherited by all 1,447 other components, and untold thousands of applications that rely Spring-beans directly or indirectly. Intellectual property issues add another dimension. Every component and dependency added to an application has specific and enforceable licensing and copyright requirements. This is true even if those dependencies are added unwittingly. This is troubling for software and embedded systems vendors who might inadvertently include a copyleft license such as the GPL in their shipping products. 
This exact issue has resulted in numerous and expensive lawsuits including the well-publicized instance of Cisco’s unknowing inclusion of GPL code in their Linksys routers. In this case, the Free Software Foundation sued Cisco and forced the company, among other things, to make their source code publicly available. Lack of update notification infrastructure: Components are updated frequently; the average component in Central is updated four times year. And yet, with all this change, there is no automated mechanism for update notification. Take the Spring-beans example. Once the security vulnerability was fixed, there was no automated mechanism for the projects that depend on the old version to be updated to the new, fixed version. Taking it one step further, absent automated update notification, none of the direct or indirect users of the flawed Spring-beans components would have any idea that their applications were at risk. In this wild West sort of lawlessness, many organizations are clearly taking chances and hoping for the best. A 2012 survey we conducted among 2,550 developers, architects, and managers found that only 20 percent of organizations have put effective open source management policies in place. Order out of chaos: A strategy for optimization Strategizing to yield the greatest ROI in using OSS demands a high-level awareness of how, why, and where OSS is used, along with consistent knowledge of OSS benefits, risks, and policies. To this end, several vendors offer software composition analysis tools that apply data mining technology for use in inspecting OSS components for security and functionality issues, known fixes, IP ownership, and versioning. The best of these tools enable organizations to govern development processes, continuously monitor the health of their repositories, and retrieve real-time alerts when critical applications are affected by newly discovered threats. To maximize the business value of OSS while minimizing risks: Assess your current usage of OSS components to grasp where you’re starting from, as an aid to setting realistic goals. Establish an open source governance program to filter, audit, track and manage open-source assets in the organization, and deploy mechanisms to monitor the effectiveness of your governance program. Build open source management into your entire software development process, evaluating OSS components before and while using them in development . Analyze and continuously monitor all deployed applications for newly discovered security vulnerabilities and stability issues. Establish well-defined channels of acquisition (such as the Central Repository) for each OSS component you leverage. Engage with the OSS community and establish routes to service and support for key components and frameworks. Properly managing the use of OSS in development will let you focus not merely on the cost savings it can bring you, but also on the wealth of innovation ongoing in the open source domain. It will help make OSS a catalyst for change in your organization. Wayne Jackson is CEO of Sonatype , a company that is transforming software development with tools, information, and services that enable organizations to build better software faster using open-source components. Contact him at [email protected]. Image courtesy of olly , Shutterstock VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
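To ground the recommendation to continuously monitor components, here is a minimal sketch of the same idea in the npm ecosystem (the examples above are Java/Maven, but the inheritance problem is identical): scan the lockfile for any copy of a component flagged by an advisory, including copies pulled in transitively. The advisory entry below is a hypothetical stand-in for a real vulnerability feed.

```typescript
// Minimal sketch (npm rather than Maven): scan a lockfile for a component known
// to be vulnerable, including copies pulled in transitively.
// The advisory list is a hypothetical stand-in for a real vulnerability feed.
import { readFileSync } from "node:fs";

const advisories: Record<string, string[]> = {
  // package name -> versions known to be vulnerable (illustrative entry)
  "example-beans": ["2.5.6"],
};

const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
// npm lockfile v2/v3 lists every installed copy under "packages", keyed by its
// node_modules path, so transitive dependencies appear here too.
for (const [path, info] of Object.entries<{ version?: string }>(lock.packages ?? {})) {
  const name = path.split("node_modules/").pop();
  if (name && advisories[name]?.includes(info.version ?? "")) {
    console.warn(`Flagged ${name}@${info.version} at ${path}`);
  }
}
```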
"
14,181
2,018
"The tech supply chain is more vulnerable than ever | VentureBeat"
"https://venturebeat.com/2018/10/11/the-tech-supply-chain-is-more-vulnerable-than-ever"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest The tech supply chain is more vulnerable than ever Share on Facebook Share on X Share on LinkedIn Cybersecurity hacking Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A shot heard around the world was fired last week when Bloomberg published its article “ The Big Hack: How China Used a Tiny Chip to Infiltrate U.S. Companies. ” In it, Jordan Robertson and Michael Riley, explain how Chinese spies infiltrated nearly 30 U.S. companies by including compromised microchips in Supermicro motherboards, which those companies then used across data centers. Once installed in the data centers, those microchips could be accessed by the bad actors who could then control the motherboards from afar. As the article states, this was “the most significant supply chain attack known to have been carried out against American companies.” To give even more context to the potential scale of this, Robertson and Riley quote a former U.S. intelligence official who said, “Think of Supermicro as the Microsoft of the hardware world.” He then continued, “Attacking Supermicro motherboards is like attacking Windows. It’s like attacking the whole world.” As the dust began to settle from the initial shock of what Bloomberg was claiming, most of the companies mentioned in the article vehemently denied its claims. Apple even wrote a letter to congress , saying the story was “simply wrong.” Both the U.K. National Cyber Security Center and U.S. Homeland Security have said they believe Apple and Amazon are telling the truth — and that the alleged Supermicro hack never happened. Regardless of whether the Bloomberg story is valid, supply chain attacks are already happening in the wild, and this should be a wake-up call for all of us. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Software is even easier to pollute than hardware While the Supermicro story pertains to an alleged attack on a hardware supply chain, the scary truth is that it’s much easier for bad actors to infiltrate and hack a software supply chain. With hardware, you need to physically access something in order to conduct a hack. With software, you can do it from anywhere. To this end, I’ve witnessed 10 events during the past 2 years that triangulate a serious escalation of software supply chain attacks. Specifically, adversaries have directly injected vulnerabilities into open source ecosystems and projects. 
In some cases, these compromised components have been subsequently and unwittingly used by software developers to assemble applications. These compromised applications, which are assumed to be safe, are then made available for use by consumers and businesses alike. The risk is significant — and it’s unknown to everyone except the person that intentionally planted the compromised component inside of the software supply chain. Historically, software hacks have occurred after a new vulnerability has been publicly disclosed, not before. Effectively, “bad guys” have paid close attention to public disclosures — and any time a new vulnerability has been announced, they move quickly to exploit it before “good guys” can patch it. It’s a great business model — especially when you consider that only 38 percent of companies are actively monitoring and managing their software supply chain hygiene. Today, the game has changed. Organizations now must contend with the fact that hackers are intentionally planting vulnerabilities directly into the supply of open source components. In one such example from February 2018, a core contributor to the conventional-changelog ecosystem (a common JavaScript code package) had his commit credentials compromised. A bad actor, using these credentials, published a malicious version of conventional-changelog (version 1.2.0) to npmjs.com. While the intentionally compromised component was only available in the supply chain for 35 hours, estimates are that it was downloaded and installed more than 28,000 times. Some percentage of these vulnerable components were then assembled into applications that were then released into production. The result is that these organizations then unwittingly released a Monero cryptocurrency miner into the wild — and the perpetrators of the supply chain hack profited handsomely. So, here’s the point: Whether the Bloomberg report on Supermicro is valid or not, attacks are already happening on our technology supply chains — both software and hardware. Now more than ever, it’s time to talk about ways to secure our supply chains. Brian Fox is SVP and Chief Technology Officer of Sonatype. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
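One concrete layer of defense is to pin exact versions and verify downloaded artifacts against the integrity hashes recorded in the lockfile before anything is installed or executed. The sketch below shows that check for npm; the file paths are illustrative assumptions. Note that an integrity check alone would not have caught the conventional-changelog incident, since the malicious version was published through legitimate credentials, which is why monitoring the supply chain itself still matters.

```typescript
// Minimal sketch: verify a downloaded package tarball against the integrity
// hash recorded in the lockfile before it is installed or executed.
// File paths are illustrative assumptions. This guards against tampering after
// publication; a maliciously published version needs supply chain monitoring.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

function verifyTarball(tarballPath: string, lockfileIntegrity: string): boolean {
  // npm records Subresource Integrity strings such as "sha512-<base64 digest>".
  const sep = lockfileIntegrity.indexOf("-");
  const algorithm = lockfileIntegrity.slice(0, sep); // e.g. "sha512"
  const expected = lockfileIntegrity.slice(sep + 1); // base64 digest
  const actual = createHash(algorithm)
    .update(readFileSync(tarballPath))
    .digest("base64");
  return actual === expected;
}

const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
const integrity = lock.packages?.["node_modules/conventional-changelog"]?.integrity;
if (integrity && !verifyTarball("conventional-changelog-1.2.0.tgz", integrity)) {
  throw new Error("Tarball does not match the lockfile integrity hash");
}
```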
14,182
2,021
"Reflect brings automated no-code web testing to the cloud | VentureBeat"
"https://venturebeat.com/2021/01/22/reflect-brings-automated-no-code-web-testing-to-the-cloud"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Reflect brings automated no-code web testing to the cloud Share on Facebook Share on X Share on LinkedIn Reflect: Viewing a test result replay Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Every company is now a software company, or so we’re told , meaning they have to employ designers and developers capable of building websites and apps. In tandem, the much-reported software developer shortage means companies across the spectrum are in a constant battle for top talent. This is opening the doors to more automated tools that democratize some of the processes involved in shipping software, while freeing developers to work on other mission-critical tasks. It’s against this backdrop that Reflect has come to market, serving as an automated, end-to-end testing platform that allows businesses to test web apps from an end user’s perspective, identifying glitches before they go live. Founded out of Philadelphia in 2019, the Y Combinator (YC) alum today announced a $1.8 million seed round of funding led by Battery Ventures and Craft Ventures, as it looks to take on incumbents with a slightly different proposition. Similar to others in the space, Reflect hooks into the various elements of a browser so it can capture actions the user is taking, including scrolls, taps, clicks, hovers, field entry, and so on. This can be replicated later as part of an automated test to monitor the new user signup flow for a SaaS app, for example. If the test later throws up an error, perhaps due to a change made to the user interface, the quality assurance (QA) team can be notified instantly with a full video reproducing the bug, along with relevant logs. Above: Reflect: Viewing a replay There are a number of notable players in the automated web testing space, including open source testing framework Selenium and Cypress , which raised a $40 million funding round just last month. And the low-to-no code space has the likes of Testim , which also covers native mobile apps , and GV-backed Mabl , which was launched by two former Googlers back in 2018. But Reflect is setting out to differentiate its offering in a number of ways. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! First up, rather than using browser extensions that record actions locally, Reflect records actions via a virtual machine (VM) in its cloud and “screen-shares” it back to the user through the Reflect web app. 
This helps eliminate the causes of common recording errors, like cookies, VPNs, or extensions — such as ad blockers — that may impact the state of the browser. In short, Reflect standardizes the testing environment and neutralizes potential inferences, all without requiring any installations. “This approach lets us completely control the test environment, which means we more accurately capture each action you take when testing your site, even for complex actions like drag-and-drops or file uploads,” Reflect cofounder Todd McNeal told VentureBeat. Above: Reflect: Recording a test McNeal said the company has already amassed more than 80 paying customers who subscribe through a SaaS model that starts at free for three users and 30 minutes of execution time per month. Starter, standard, and enterprise plans offer more features and flexibility. There are potential downsides to handing full control to a third-party’s cloud. Many businesses, particularly larger enterprises, would be more comfortable with an on-premises Reflect installation, something that offers them more control, which would be pertinent if Reflect ever went bust. An open source route might also make some sense for this reason, affording companies greater freedom in terms of how they deploy Reflect. But that would come with major trade-offs in terms of Reflect’s “no-code” aspirations. “On-premise installation is something we may add in the future. It has come up with larger enterprises, for sure,” McNeal said. “We’re not considering the open source route though — our goal, and what we think the market is looking for, is something that hides away the complexities, and we think the best way to do that is via the no-code approach.” Being “truly no-code”, as McNeal puts it — versus “low-code,” which may require some form of coding expertise to script specific actions — could also help it become the go-to tool for non-developers. “It means that you can truly give our product to anyone in the organization — it doesn’t have to be just developers,” McNeal said. “Also, since we don’t have the crutch of code to fall back to, it ensures that our recorder needs to be accurate in order to allow customers to test these complex actions.” It’s worth noting that Reflect also offers an API and direct CI/CD integrations , enabling its customers to integrate Reflect deeper into their DevOps processes and schedule tests after every deployment, for example, or even after every pull request. Going no-code The broader no-code movement has emerged as a major trend in recent years, with Gartner predicting in a 2019 report that by 2023 “citizen developers” within large enterprises will outnumber professional developers by at least 4 times. This shift is evidenced by a flurry of activity across the space over the past year, with the likes of Amazon’s AWS launching a no-code app development platform called Honeycode , while Google last year snapped up enterprise-focused AppSheet. Earlier this month, no-code development platform Webflow raised $140 million at a $2.1 billion valuation. It’s clear what benefits automated, no-code platforms could bring to smaller businesses, but why would larger enterprises with plenty of resources be drawn to such tools? “It comes down to what we consider the biggest problems with automated end-to-end testing tools today — tests take too long to create and they’re too difficult to maintain,” McNeal said. “At an enterprise, you have the resources to make this work. 
You can afford to have developers working full-time on this, who have expertise in the tool necessary to build and maintain your own custom test framework and a suite of code-based tests. But if you can get the same result — the same peace of mind that your application works — with a lot less time and effort, we think that’s a pretty compelling value proposition.” Moreover, even the largest companies have to battle to hire — and retain — their top technical talent and ensure their time is optimized. By going “no-code,” they can delegate more QA work to less technically skilled personnel. “It lets enterprises take full advantage of testers in their organization that aren’t developers,” McNeal added. “Whereas today those testers are doing primarily manual testing, Reflect actually lets a tester with no coding experience build and maintain entire test suites without any developer intervention.” It is still early days for Reflect. Although it’s showing some promise, it lacks some of the smarts of its rivals, such as AI or machine learning that can adapt and self-improve over time. However, this is on its roadmap. “Our approach thus far has been to really get the underpinnings of the product correct, and that’s rooted in accurately capturing and replicating the actions the user takes in the browser,” McNeal said. “We’ll be augmenting this with ML in the future.” "
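To make the contrast concrete, below is a minimal sketch of the kind of hand-coded, end-to-end browser test that teams typically maintain today with a framework such as Selenium, the sort of script Reflect aims to let non-developers replace by clicking through their site. The URL, selectors, and success check are hypothetical, not taken from Reflect or any customer.

```python
# Minimal Selenium sketch of a coded signup test (hypothetical site and selectors).
# Reflect's pitch is that a recorded, no-code test replaces scripts like this.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://app.example.com/signup")                      # assumed URL
    driver.find_element(By.ID, "email").send_keys("qa@example.com")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
    assert "Welcome" in driver.page_source                            # assumed success marker
finally:
    driver.quit()
```

Every change to the signup flow can break selectors like these, which is exactly the maintenance burden the no-code recorders are trying to remove.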
14,183
2,021
"Salesforce taps AWS to bring 'intelligent document automation' to Health Cloud | VentureBeat"
"https://venturebeat.com/2021/02/10/salesforce-brings-intelligent-document-automation-to-health-cloud"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Salesforce taps AWS to bring ‘intelligent document automation’ to Health Cloud Share on Facebook Share on X Share on LinkedIn Salesforce Tower, September 22, 2020 in New York City. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Salesforce today announced a new product to help health care and life sciences companies digitize their document management processes. Intelligent document automation (IDA), as it’s called, is designed for use with another new Salesforce tool called intelligent form reader. With IDA, Salesforce is promising its customers reduced manual data entry while enabling them to manage all patients or members from a single place. So any incoming documents, including typed or handwritten forms such as patient referrals that may have arrived as a digital or hard copy (e.g. by fax or post), can now be automatically analyzed and routed to the right queue for review and processing in Salesforce’s Health Cloud. Above: Using Salesforce’s intelligent document automation in Health Cloud The intelligent form reader, which leans on optical character recognition (OCR) technology, is powered by Amazon Web Services’ (AWS) Textract. AWS launched Textract in 2019 , leveraging machine learning smarts to enable any business to automatically extract content from tables, forms, pages, and more. A few months back, AWS introduced added support for handwriting recognition and a host of new languages. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Any business wishing to use Salesforce’s intelligent form reader must also have a separate Textract license, which is available through Salesforce. * Article updated to clarify that the Textract license is available through Salesforce. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,184
2,013
"VMware to launch public cloud to fight Amazon & Nicira-based software-defined data centers | VentureBeat"
"https://venturebeat.com/2013/03/13/vmware-to-launch-public-cloud-to-fight-amazon-nicira-based-software-defined-data-centers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VMware to launch public cloud to fight Amazon & Nicira-based software-defined data centers Share on Facebook Share on X Share on LinkedIn VMware drops a boat load of cloud news on an already busy day, including that it will launch its own public cloud contender to challenge Amazon and Rackspace. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. VMware picked an unusually busy day ( new pope , new Android head , new Kickstarter record ) to dump a lot of high-level cloud announcements in the laps of tech reporters everywhere. The company has some big things in the works, including an upcoming public cloud that will challenge Amazon and a new focus on software-defined data centers. Let’s dig into the news dump. Public cloud to take on Amazon and Rackspace First up, VMware vaguely announced that it will be offering a stand-alone public cloud from its newly formed “Hybrid Cloud Services” business unit. This public cloud will challenge all the big dogs offering infrastrcture-as-a-service — Amazon, Rackspace, IBM, HP, SoftLayer, Google, Microsoft, and more — and it will launch in the second quarter of 2013. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Forrester analyst James Staten, who was pre-briefed on the announcement, writes about the decision in a blog post: Sometimes you can only coax a reluctant partner and I&O customer community for so long before you feel you have to take matters into your own hands. That is exactly what VMware has decided to do, to become relevant in the cloud platforms space. The hypervisor pioneer unveiled vCloud Hybrid Service to investors today in what is more a statement of intention than a true unveiling. VMware’s public cloud service – yep, a full public IaaS cloud meant to compete with Amazon Web Service, IBM SmartCloud Enterprise, HP Cloud, Rackspace and others – won’t be fully unveiled until Q2 2013, so much of the details about the service remain under wraps. VMware hired the former president for Savvis Cloud, Bill Fathers, to run this new offering and said it was a top three initiative for the company and thus would be getting, “the level of investment appropriate to that priority and to capitalize on a $14B market opportunity,” according to Matthew Lodge, VP of Cloud Services Product Marketing and Management for VMware who spoke to us Tuesday about the pending announcement. 
VMware said its public cloud will be aimed at its existing customer base and sold through its existing VAR and SI channel. This explains CEO Gelsinger’s strong comments from last month’s Partner Exchange — it wasn’t public clouds he was worried about but non-VMware public clouds. But for this channel fulfillment strategy to come true, its partners will have to get with the cloud program too and, like the I&O clients they serve, many don’t see more revenue at the end of the public cloud rainbow. And most channel partners don’t have the skills or the trust level to help their I&O clients transition from static virtualization to cloud – that’s a culture and career path change more than a product they can sell them. This requires consulting skills and real cloud experience, and most VMware partners don’t have either. Software-defined data centers VMware also said today that it will merge its “VMware vCloud Networking and Security” product line with the “Nicira Network Virtualization Platform (NVP)” into one product family called “VMware NSX.” VMware paid more than $1 billion to buy Nicira in July 2012 for its software-defined networking potential, so here we are seeing that move in action. There is some skepticism that software-defined data centers are overhyped, but we’re willing to let VMware put itself out there and see what it is able to do on this front. The company explained the decision in this excerpt from a lengthy blog post: VMware NSX will be the world’s leading network and security virtualization platform providing a full-service, programmatic, and mobile virtual network for virtual machines, deployed on top of any general purpose IP network hardware. The VMware NSX platform brings together the best of Nicira NVP and VMware vCloud Network and Security (vCNS) into one unified platform. VMware NSX exposes a complete suite of simplified logical networking elements and services including logical switches, routers, firewalls, load balancers, VPN, QoS, monitoring, and security; arranged in any topology with isolation and multi-tenancy through programmable APIs – deployed on top of any physical IP network fabric, resident with any compute hypervisor, connecting to any external network, and consumed by any cloud management platform (e.g. vCloud, OpenStack, CloudStack). The Pivotal Initiative is a go Finally, the VMware and EMC spin-off The Pivotal Initiative has become official. The Pivotal Initiative is led by former VMware CEO Paul Maritz. It is 69 percent owned by EMC and 31 percent owned by VMware and focuses on “big data” and data processing initiatives. The new company has employees and technology from EMC’s Pivotal Labs and Greenplum units and from VMware’s Cloud Foundry, Spring, and Cetas. EMC CEO and Chairman Joe Tucci believes The Pivotal Initiative will go public in the future, and Maritz has stated it will likely be a $1 billion business in five years. 
"
14,185
2,013
"eBay Now same day delivery service gets a website and new cities | VentureBeat"
"https://venturebeat.com/2013/07/22/ebay-now-same-day-delivery-service-gets-a-website-and-new-cities"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages eBay Now same day delivery service gets a website and new cities Share on Facebook Share on X Share on LinkedIn eBay Now car Looks like eBay Now , eBay’s same-day delivery service, is ready to take on orders from the Web. The company had been taking orders for same-day delivery via its iOS and Android apps since August of last year , but it opened up an actual website today. In so doing, it’s taking on Amazon, which also offers same-day delivery via its Local Express Service. But Amazon has poured hundreds of millions of dollars into building shipping and fulfillment centers around the country in order to deliver the goods, while eBay will be relying on local retailers to supply its same-day products. Both companies are chasing an estimated $2 trillion market : Products that you’d ordinarily buy at a local grocery store, drugstore, or other retail shop within a few miles of your home. While we’ve seen a huge uptick in mobile shopping in the last few years, websites still provide prime real estate for retailers to display their products. Ebay Now promises to deliver anything listed on its apps, and now its website, within an hour, for only $5. That, plus the fact that it is available via a desktop website, might make it appealing to office workers. The company thus far has only served the San Francisco and New York areas, though with this expansion, eBay Now is available to Brooklyn, Queens, and new cities around the Bay Area. Dane Glasgow, eBay’s vice president of mobile and local, explained at VentureBeat’s MobileBeat conference that the company has relied a lot on the context that the mobile phone gives you about a customer. Ebay looks at a bunch of different elements such as the a person’s location and information about their device to come up with the best user experience. People seem to input this kind of information on the website, identifying where they are and what they’re looking for through search terms. We’ve contact eBay for more on what the website means for Now and will update upon hearing back. We’ll soon see whether the website drives more traffic to eBay Now and whether the team is scaled out enough to handle it. Until then, happy instant-gratification online shopping! hat tip Engadget VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
14,186
2,013
"MongoDB to open the gap on Cassandra with $150M fundraise | VentureBeat"
"https://venturebeat.com/2013/10/04/mongodb-to-close-the-gap-on-cassandra-with-150m-fundraise"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages MongoDB to open the gap on Cassandra with $150M fundraise Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. MongoDB raised $150 million this morning from T. Rowe Price Associates, Salesforce, and others, making it the top startup in New York, and the most well-funded company in the big data market. MongoDB is a noSQL database provider with some 600 customers, including hot tech startups and Wall Street’s elite firms, like Goldman Sachs. Developers use it to create a system that can store and retrieve information at lightning speeds. “This is great for powering applications [but] not so great for complex analysis,” said Kent Bennett of Silicon Valley- and Boston-based venture firm Bessemer Venture Partners. Shortly after the announcement hit the headlines, I reached out to Bennett, who specializes in data and infrastructure, and New York-based big data investor Matt Turck for their perspective on MongoDB’s aggressive growth strategy. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Editor’s note: Our upcoming DataBeat conference , Dec. 4-Dec. 5 in Redwood City, will focus on the most compelling opportunities for businesses in the area of big data analytics and beyond. Register today! According to Turck, MongoDB, formerly known as 10Gen, rose to prominence by targeting startups and building strong relationships with developers. Many of these developers remained loyal, and MongoDB was able to leverage this initial traction to expand their footprint into larger enterprises. MongoDB stresses its “market leadership” in its marketing materials. This may well be true, but the company still competes with Cassandra (commercialized by DataStax ). According to Bennett, Cassandra is still perceived by many large customers as “more enterprise ready” — enterprise is an area where Cassandra has done well. Bennett isn’t convinced that this means that MongoDB is an undisputed market leader just yet. With this huge infusion of capital, MongoDB will be looking to “close that gap,” said Bennett. Moreover, MongoDB will occasionally butt heads with Cloudera , the Silicon Valley company that is has promoted the mainstream adoption of Hadoop, although these technologies are used by developers for vastly different purposes. “MongoDB can be used by any startup that needs a database, although it can be scaled up to be used for big data as well,” said Turck. 
Meanwhile, Cloudera is an option for companies who are managing “very large amounts of data,” he explained, and some companies use both concurrently. Although rumors of a public offering are already rampant, Bennett doesn’t believe it’s an inevitable outcome for MongoDB just because the company raised a boatload of cash. As with Cloudera, an acquisition should not be ruled out completely. “The open-source model can be capital intensive,” he said. This latest round includes new investors EMC, Salesforce, T. Rowe Price, and Altimeter as well as previous investors Intel, Red Hat, New Enterprise Associates, and Sequoia Capital. "
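To ground the "great for powering applications" point, here is a minimal sketch of the document-style storage and retrieval developers typically use MongoDB for. The server address, database, collection, and field names are all hypothetical.

```python
# Minimal PyMongo sketch: store and fetch a document (all names are hypothetical).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

orders.insert_one({"user": "alice", "items": ["book", "lamp"], "total": 41.50})
print(orders.find_one({"user": "alice"}))  # fast key-based lookup for an application
```

Per Bennett's caveat, heavy analytical joins and aggregations across collections like this are where teams tend to reach for warehouse-style tools instead.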
14,187
2,013
"Facebook tops itself with an even faster tool for querying big data in Hadoop | VentureBeat"
"https://venturebeat.com/2013/11/06/facebook-tops-itself-with-an-even-faster-tool-for-querying-big-data-in-hadoop"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook tops itself with an even faster tool for querying big data in Hadoop Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Updated at 1 p.m. Pacific time on Nov. 7. Just when big data vendors got used to Hive, the Facebook-created open-source tool for querying big datasets on Hadoop, here comes an even faster alternative. Called Presto, the new tool also comes from Facebook — and, like Hive, it too has now been released under an open-source license, a few months after it was publicly disclosed at a Facebook conference. You’d think that a new, faster, open-source tool would be cause for celebration, right? Or at least you’d think companies commercializing Hive and similar tools to stop what they were doing and immediately start supporting Presto, or even building on what they have. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Not exactly. Existing open-source interactive querying engines are plenty fast, said executives from two companies offering support for parts of the Hadoop ecosystem. Still, there could be a few things the companies could glean from Presto. After all, Facebook is a heavy-duty user of Hadoop, a family of open-source technologies that includes a file system well suited for large data sets and several analytical tools. Hive is among the most popular of those tools, enabling users to ask questions of data in the Hadoop Distributed File System with a modified version of the well-established SQL query language. Facebook pioneered Hive and open-sourced it in 2008. But Hive, relying as it does on the powerful but generally slow batch-processing system MapReduce, is not the ideal program for multiple users to scan across an ever-growing data warehouse. It’s not fast enough. So Facebook engineers began developing Presto, even as Cloudera was building a new query engine from the ground up called Impala. (An earlier version of this story called Impala “a sped-up version of Hive,” but a Cloudera spokesperson informed us that the technology is a Hive substitute with differences that provide it with certain “performance, security, and ANSI SQL capabilities.”) A few months later, Hortonworks said it would accelerate Hive in new versions. It turns out Presto isn’t just something Facebook analysts have been using. 
The new Presto website shows use of the technology by two well-known companies that have taken on plenty of venture capital and could conceivably pay money to get support for similar products: Airbnb and Dropbox. Editor’s note: Our upcoming DataBeat/Data Science Summit , Dec. 4-Dec. 5 in Redwood City, will focus on the most compelling opportunities for businesses in the area of big data analytics and data science. Register today! Christopher Gutierrez, Airbnb’s manager of online analytics, provides a quote suggesting certain advantages over Amazon Web Services’ Redshift data warehouse service. And Fred Wulff, a Dropbox software engineer, is quoted as saying Presto has been “rock solid and extremely fast when applied to some of our most important ad hoc use cases.” One would think rhetoric like that might make Hadoop distribution vendors tremble out of fear that companies would just bypass the open-source-with-support option and go directly for Presto. But Dave McJannet, the vice president of marketing at Hortonworks, didn’t sound nervous about the early interest in Presto. Now, if staid enterprises start clamoring for it, that would be a different story. “Our whole approach is about ensuring 100 percent open-source Hadoop is enterprise-grade for everybody,” McJannet said. “If and when, you know, commercial enterprises show interest in these new and emerging technologies, we’ll absolutely investigate the potential and include them in our distribution, because that is very consistent with our approach.” In the latest version of its Hortonworks Data Platform, or HDP, Hortonworks supports version 12 of Hive. Hortonworks has been working on vastly speeding up Hive from where it was in version 10, as part of the company’s Stinger project. In the future, Hortonworks could integrate Presto with other pieces of the Hadoop ecosystem and offer support in the next HDP release. A similar evolution happened with Storm for stream processing, and with Hive for SQL querying, and Pig for scripting, he said. Then again, Presto could turn out to be something only webscale companies will want to use, in which case Hortonworks could leave it alone. As for Cloudera, its engineers made a lot of the same decisions as Presto’s architects when they were designing the Impala interactive query engine, said Cloudera’s vice president of products, Charles Zedlewski. Like Impala, Presto doesn’t use MapReduce, supports queries in good-old SQL style, and aims to be flexible in terms of how others store data, Zedlewski said. Plus, both engines enable lots of queries to run at the same time. Cloudera could bring certain elements of Presto into Impala as it continues to get new features all the time, Zedlewski said. One aspect of Impala that’s important to enterprise customers is compatibility with most of the business-intelligence tools on the market today, Zedlewski said, and that could be one strength Impala has over Presto in its current form. It’s unclear how well Presto can tie in with such software at this point. And with more users querying data in Hadoop, Cloudera found questions about security — such as who gets to access what data — cropping up more often, prompting the company to release features that grant administrators fine-grained control. “The Presto team is going to run into the same issue,” he said. 
But even with those potential shortcomings, Presto could have an advantage over existing SQL-on-Hadoop tools out there, and it’s something only a company with as much analytical data as Facebook could have managed: the ability to run simultaneous queries on many, many petabytes of data. The Presto site states that “Facebook uses Presto for interactive queries against several internal data stores, including their 300PB data warehouse.” Without a doubt, that’s big data territory. “Presto is, I think, different in some ways, because right from the outset it’s claiming that it’s (capable of querying) petabytes,” said Ben Lorica, chief data scientist at O’Reilly Media. Hadoop distribution vendors might say their products can handle petabytes, but really the optimal use case might be querying hundreds of terabytes. Perhaps that’s what Cloudera, Hortonworks, and other vendors will want to add to their offerings. “Maybe if that proves to be Presto’s winning feature, I think they’ll try to figure it out,” he said. The vendors could go still further if they want to. In a Sunday blog post , Lorica wrote that Facebook will incorporate into Presto a query engine under development called BlinkDB. That query engine “will tell you, ‘How fast do you want this back? If you want it fast, I’ll give you an approximate answer,'” Lorica said. Such functionality could make some Presto users even more productive, assuming they’re OK with rough answers. If the idea appeals to lots of enterprises, Hadoop vendors might end up supporting Presto after all. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
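As a concrete illustration of the interactive, SQL-style workload these engines compete over, here is a minimal sketch of submitting an ad hoc aggregation to a Presto coordinator from Python. The PyHive client, host, catalog, and table names are assumptions for illustration, not details of Facebook's deployment.

```python
# Minimal sketch of an ad hoc Presto query (cluster, catalog, and table are hypothetical).
# Assumes the PyHive client library: pip install 'pyhive[presto]'
from pyhive import presto

conn = presto.connect(host="presto-coordinator.example.com", port=8080, username="analyst")
cur = conn.cursor()
cur.execute("""
    SELECT country, COUNT(*) AS views
    FROM hive.web.page_views
    WHERE dt = '2013-11-01'
    GROUP BY country
    ORDER BY views DESC
    LIMIT 10
""")
for row in cur.fetchall():
    print(row)
```

The appeal described in the article is that a query like this returns interactively, even over a very large warehouse, instead of waiting on a MapReduce batch job.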
14,188
2,014
"Heroku founder: Y Combinator developer tools request is a very big deal | VentureBeat"
"https://venturebeat.com/2014/10/13/heroku-founder-y-combinator-developer-tools-request-is-a-very-big-deal"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Heroku founder: Y Combinator developer tools request is a very big deal James Lindenbaum, Heavybit Share on Facebook Share on X Share on LinkedIn We need more tools. While it might not seem so to outsiders, Y Combinator’s new Developer Tools Request for Startups (RFS) is a big deal. (Editor’s note: The deadline to apply is tomorrow.) This is a request from one of the biggest tech incubators for startups that make developer tools, and it’s an important sign of the increasing value and prominence of developer products and the companies behind them. And I’m thrilled about it. When Adam, Orion, and I founded Heroku, we were one of the earliest developer product companies to go through YC. It was critical for us: We wouldn’t be where we are without it, and I can’t recommend it enough. Since then, many great developer companies have come out of YC, including Stripe , PagerDuty , Docker , Meteor , CoreOS , Firebase , Parse , Cloudkick , Zencoder , Light Table , Circle CI , and Rainforest , just to name a few. Founders of developer-focused startups often ask if YC is a good fit for them. The answer is yes: At that early stage, the value YC provides is huge and is very similar across company types. It’s after they get past their seed funding that things become different for these companies as they realize they face a unique set of challenges that other startups don’t. This is why I created Heavybit as a sort of grad school to YC’s undergrad — it’s narrowly focused and later stage. From Heroku to Heavybit At Heroku, we had to transition developers to a new model of agile and continuous development, get people sold on the idea of the cloud, warm people up to a new on-demand SaaS business model, figure out how to translate happy developers into enterprise sales, and convince investors there was serious money to be made in the dev-tools business. It was tough. Having also been an investor, advisor, and board member of many of these companies, it was clear that not only do they face unique challenges, but that there’s a right way to build a developer-focused company. Heavybit’s nine-month program is designed to help post-seed companies like Rainforest (YC Winter 2013), Apiary , and Librato and their older counterparts like Stripe (YC Spring 2010), PagerDuty (YC Spring 2010), and Meteor (YC Spring 2011), get traction with developers, grow their team, harden their technology, create their go-to-market strategy, and attract meaningful customers. 
Even though there’s a generation of mentors and advisors who’ve already seen significant wins with developer companies, there’s a lack of community and surprisingly poor transmission of learnings. We’re aiming to address this with Heavybit’s library and curriculum aimed squarely at founders of developer-focused companies. The rise of developer products Developer productivity has never been more valuable. Building quality software quickly has become a competitive advantage for every business, and this move toward continuous development and delivery of software is creating opportunities for new solutions throughout the developer’s toolchain. Products like Apiary allow developers at Microsoft to quickly design and build new APIs and keep code and documentation always up to date. Rainforest provides sophisticated quality assurance to prevent bugs from being introduced into AirBnB or Zenefits. And PagerDuty is relied on 24/7 for incident response by the DevOps teams at Pinterest , Wikipedia , and Evernote. Developer products can be anything from programming, workflow, and collaboration tools to platforms and infrastructure for building, testing, deploying, and running software. These products are high leverage, high scale, and super valuable. They’re the foundations on which everything else is built — the heavy industries of the software supply chain. Developer tools will shape the world YC’s new RFS is both evidence of how important these companies are, and that YC wants to work with them. And it’s no mystery why: several of YC’s top 10 exits to date have been developer companies (Zencoder, Cloudkick, Parse, and Heroku which until 6 weeks ago was YC’s largest exit). This is the era of designing “by developers for developers:” There has never been a better time to start one of these companies. If you’re a developer with an entrepreneurial itch and you’ve got an idea for a product you wish you had, start a startup and build it. The future is made of software, and the products developers use have a dramatic impact on the kinds of software being built. Developer tools will shape the world. YC’s application deadline is October 14. James Lindenbaum is the founder of Heavybit — a 9-month program for post-seed funded companies building developer-facing products. Prior to founding Heavybit, Lindenbaum co-founded Heroku , a YC company which was acquired by Salesforce.com in 2010. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,189
2,017
"HashiCorp launches Sentinel, its tool for building compliance as code | VentureBeat"
"https://venturebeat.com/2017/09/19/hashicorp-launches-sentinel-its-tool-for-building-compliance-as-code"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Exclusive HashiCorp launches Sentinel, its tool for building compliance as code Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. HashiCorp debuted a new framework today that’s designed to let compliance and security teams keep their environments protected while letting engineers rapidly deploy code. Called Sentinel, it enables users to lay out policies using a specialized language and then have those policies automatically enforced through HashiCorp products. That means it’s possible for a compliance team to write Sentinel code and ensure that all of the infrastructure managed by HashiCorp’s Terraform software will run in accordance with the new code. Sentinel was created in response to feedback from the company’s enterprise customers, who wanted this sort of capability. Sentinel is similar in intention to the compliance features that Chef added to its Automate product earlier this year. Those features let companies create compliance code that is checked when a piece of software is built. Chief technology officer Armon Dadgar said in an interview with VenturBeat that Sentinel is different because it’s possible for the system to watch the active path of code execution persistently and ensure compliance on an ongoing basis, rather than just during the initial build of an application. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! On top of the Sentinel news, HashiCorp also unveiled a new Terraform Module Registry that’s designed to provide developers and engineers with a centralized location to find pre-built infrastructure code. Terraform provides an automated system to set up infrastructure, and the new modules (provided by HashiCorp and partner companies like Microsoft, Google, and CoreOS) will help jump-start deployments with common patterns. The Terraform Registry is designed to make it easier for people to get started with the popular infrastructure management software so that engineers can get Terraform-based systems up and running with a minimal amount of fuss and without having to worry about whether they’re following best practices. At launch, the registry will contain about 32 modules, with more on the way through community contributions and partnerships. Terraform Enterprise also gained a new user interface, as well as an API that lets developers integrate with the software’s management functions programmatically. 
HashiCorp announced updates to some of its other products, as well. Vault, its secrets management product, now integrates natively with Kubernetes. Consul, its service discovery and configuration product, reached version 1.0. And the paid enterprise version of that application gained support for segmented LAN environments. Nomad, the company’s service and batch scheduler, gained a new web-based interface and an access control system. Enterprise users can now get access to a beta version of that product, which includes support for namespaces to help isolate different teams’ workloads. "
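Sentinel policies are written in HashiCorp's own policy language, which the article does not reproduce. Purely as a conceptual sketch of what a "compliance as code" check does, here is the same idea expressed in Python against a hypothetical Terraform-style plan; the resource shape, rule, and instance types are illustrative assumptions, not Sentinel syntax or HashiCorp's API.

```python
# Conceptual sketch of a policy-as-code check (not Sentinel syntax).
# A hypothetical planned infrastructure change is rejected if it violates the rule.
ALLOWED_INSTANCE_TYPES = {"t3.micro", "t3.small"}  # example policy, assumed

def violations(planned_resources):
    """Return a list of policy violations for a planned infrastructure change."""
    problems = []
    for res in planned_resources:
        if res["type"] == "aws_instance":
            size = res["values"].get("instance_type")
            if size not in ALLOWED_INSTANCE_TYPES:
                problems.append(f"{res['name']}: instance_type {size} not allowed")
    return problems

plan = [{"type": "aws_instance", "name": "web", "values": {"instance_type": "m5.24xlarge"}}]
print(violations(plan))  # -> ['web: instance_type m5.24xlarge not allowed']
```

In HashiCorp's framing, the equivalent Sentinel rule would be evaluated automatically by its products on every run rather than by a script someone remembers to call.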
14,190
2,021
"PagerDuty expands enterprise incident management with remediation tools | VentureBeat"
"https://venturebeat.com/2021/06/25/pagerduty-expands-enterprise-incident-management-with-remediation-tools"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages PagerDuty expands enterprise incident management with remediation tools Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As part of an effort to expand the reach of its incident management platform beyond IT teams, PagerDuty this week added a service graph to its portfolio through which users can discover the source of digital disruptions that can then be automatically remediated. Announced during its Summit21 conference, PagerDuty this summer will make available a service graph tool through which the company can discover, map, and visualize both business and technical service dependencies spanning multiple digital business processes. Whenever there is a disruption, that same service graph will enable anyone in the organization to immediately discern the scope of any given issue, said Sean Scott, chief product officer at PagerDuty. At the same time, PagerDuty is creating PagerDuty Runbook Actions, an add-on offering that will become available in the fall that enables front-line teams to diagnose and resolve urgent incidents in real time. PagerDuty Runbook Actions makes it simpler to delegate prescriptive diagnostic and remedial workflows based on, for example, scripts and system commands that will automatically execute. PagerDuty also announced a Customer Service Ops Business level plan that will provide customer support teams with access to real-time status updates of issues impacting specific customers. The goal is to enable organizations that are primarily responsible for managing those customers to stay informed as issues get addressed, said Scott. Finally, PagerDuty is also enhancing the AI capabilities embedded within its platform to surface the root cause of any issue faster in addition to adding a change correlation capability that identifies potential root causes based on the most recent changes made to the IT environment. There is also an outlier incident tool that also identifies incident types that are anomalies and rare occurrences or, conversely, frequent causes or repeat issues. Digital transformation In general, PagerDuty is trying to change the relationship between IT and the rest of the business as more organizations embrace digital business transformation, Scott said. In the immediate impact of the COVID-19 pandemic, it became apparent the relationship between IT and the rest of the business was finally changing, said Scott. “COVID was really an inflection point,” he said. 
The new capabilities should enable IT teams to proactively address more issues before they become a major problem. However, once an issue becomes apparent, all the stakeholders affected by that event should be able to understand the scope. Armed with that insight, it then becomes possible to make more informed decisions at both the IT and business level, said Scott. Today most organizations still need to convene a “war room” meeting where specialists from various parts of the IT organization will spend hours, sometimes even weeks, trying to determine the root cause of the issue. The onus is then on that IT team to keep the rest of the business informed about an issue. The challenge is IT teams don’t often know what impact an incident might be having on the business, which can make it challenging to prioritize remediation efforts. While employing graphs to map the relationships between entities is starting to be employed more widely, Scott said the PagerDuty approach will enable IT and business users to launch an automated set of workflows to remediate issues identified in the service graph. As application owners become more technical, they are increasingly capable of launching workflows to fix application issues that are ultimately their responsibility, noted Scott. One way or another, business users across the organization who understand how dependent they are on applications to succeed are demanding more visibility into IT processes. The days when IT teams could obfuscate those processes behind a wall of tools only they could decipher are coming to an end. The challenge now is redefining the cultural relationship between IT and the rest of the business as the historic divide between the two camps increasingly disappears. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
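The article describes remediation workflows kicking in once an incident exists; for context on how incidents typically enter PagerDuty programmatically in the first place, here is a minimal sketch against PagerDuty's public Events API v2. The routing key and payload values are placeholders, and this shows the generic ingestion API, not the new Runbook Actions or service graph features.

```python
# Minimal sketch: a monitoring script raising an incident via PagerDuty's Events API v2.
# The routing key and payload details are placeholders.
import requests

event = {
    "routing_key": "YOUR_INTEGRATION_ROUTING_KEY",  # placeholder
    "event_action": "trigger",
    "payload": {
        "summary": "Checkout latency above 2s for 5 minutes",
        "source": "checkout-service",
        "severity": "critical",
    },
}
resp = requests.post("https://events.pagerduty.com/v2/enqueue", json=event, timeout=10)
print(resp.status_code, resp.json())
```

Events like this are what the service graph then maps to affected business services and, in PagerDuty's pitch, what runbook automation acts on.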
14,191
2,021
"Netlify, a platform for building web content, raises $105M | VentureBeat"
"https://venturebeat.com/2021/11/17/netlify-a-platform-for-building-static-web-content-raises-105m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Netlify, a platform for building web content, raises $105M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open source journey! Sign up here. Netlify , a platform as a service that builds, deploys, and hosts websites and apps, today announced that it raised $105 million in a series D round led by Bessemer Venture Partners, valuing the company at $2 billion post-money. Alongside the funding, Netlify announced the acquisition of OneGraph, a tool for building integrations with third-party services, for an undisclosed sum. Founded in 2014 by Christian Bach and Mathias Biilmann Christensen, San Francisco, California-based Netlify offers hosting and backend services for web apps and websites. The company provides hosting for websites whose files are stored in the version control system Git and then generated into web content files served via a content delivery network. “Netlify [fundamentally moves] away from monolithic web apps to a decoupled architecture that separates the frontend user interface from the backend business logic [and is] more open and accessible for developers,” Biilmann said. “If you’re a company and you want to reach users, your main channel is online. 2020 and the pandemic accelerated that a lot as companies realized they needed to build differentiated user experiences on the web. That’s where Netlify has seen a lot of interest from businesses who are competing to deliver the most performant and personalized web experiences, whether that’s an ecommerce site, software-as-a-service application, or marketing campaign.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Web hosting With Netlify’s technology, developers connect their code repository of choice (e.g., GitHub) and add build settings, after which Netlify deploys the website or app across an app delivery network. This eliminates the need to find a host and upload app support files, as well as configure domain settings. Among other solutions, Netlify promotes Jamstack, a web development architecture based on client-side JavaScript code, reusable APIs, and markup content. In its purest form, the idea of Jamstack is that a web app is prebuilt into webpages, using content and code to generate the output. 
GitHub founder and former CEO Tom Preston-Werner predicted in 2018 that “within 5 years, you’ll build your next large scale, fully featured web app with Jamstack and deploy on Netlify.” True to this prediction, 16% of the internet population visits a Netlify-powered website monthly; the startup’s client base has grown to include Google, Facebook, Verizon, NBC, Samsung, Nike, Cisco, Atlassian, Citrix, Peloton, and other large enterprises. “The whole world is moving to a decoupled approach to building web applications, so we’re seeing a lot of validation in this market,” Bach said. “Netlify’s biggest differentiator is its platform — which is composable, pluggable and not tied to a singular framework or technology, and therefore is accessible to all developers.” OneGraph acquisition and fund So where does the aforementioned OneGraph fit into Netlify’s portfolio? OneGraph — launched in 2018 — offers a GraphQL service that wraps and connects software-as-a-service APIs. GraphQL is an open source data query and language for APIs combined with a runtime for fulfilling queries with existing data. GraphQL allows clients such as web browsers to define both the structure of data required for an app or website and the same structure of data returned from the server, preventing large amounts of data from being returned. Netlify sees the service — packaged through OneGraph — as complementing its content management system, Netlify CMS, by making static content delivery both simpler and more efficient. Netlify — which has raised nearly $212 million in capital to date and has 200 employees — also announced today a $1 million investment in open source project sponsorship, promotion, and contributions. Bach says that the goal is to “advance the modern web” by investing and promoting innovation through the Jamstack ecosystem. “We’re creating the Netlify Jamstack Innovation Fund with the goal of investing $10 million in emerging companies to fuel the next decade of innovation for the web. [We’re also] committing to invest $1 million of work in open source technologies driving the modern web, through upstream code contributions and sponsorships,” Bach said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
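To make the GraphQL point concrete, here is a minimal sketch of the kind of query a Jamstack front end might send at build time or from the browser: the client names exactly the fields it wants and gets back only those, which is the over-fetching problem the article describes GraphQL solving. The endpoint and schema below are hypothetical, not OneGraph's or Netlify's actual API.

```python
# Minimal GraphQL-over-HTTP sketch (endpoint and schema are hypothetical).
import requests

query = """
query {
  product(slug: "trail-shoe") {
    name
    price
    inventory { inStock }
  }
}
"""
resp = requests.post("https://api.example.com/graphql", json={"query": query}, timeout=10)
print(resp.json())  # only the requested fields come back
```

A service like OneGraph sits in front of many third-party APIs and exposes them through a single schema queried this way.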
14,192
2,021
"What are low-code databases? | VentureBeat"
"https://venturebeat.com/2021/02/15/what-are-low-code-databases"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What are low-code databases? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Low-code databases are tools designed with simple user interfaces that can be used successfully even by those without any background in programming. They’re in strong demand because of the boom in low-code development. More and more new products are opening up opportunities for non-programmers through well-designed and simplified interfaces. The term “low code” means that they expect it won’t take much programming to finish the job. Sometimes all it takes is dragging and dropping some icons, followed by some earnest clicking and, perhaps, filling out a few forms. The term is being used across a wide variety of enterprise products, and databases are just one corner. Indeed, many products are offering specialized services wrapped around a core database. Consider the following scenario: Chris in receiving wants to track incoming packages from states with high COVID-19 rates. Pat in the PR department needs to keep a running list of all requests from reporters, a list that must be followed and updated by six other people on the team. The events team needs to build out databases for tracking attendees for each of the ten new conferences next year. No one is a skilled coder, and the development staff hides when everyone bangs on their door with a request for a new tool. As workforces and workflows get more automated, this sort of scenario is playing out more frequently and is driving companies to adopt low-code databases. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The term “low code” is not just for the front-line products. It’s also finding a home in all corners of the IT stack, including some of the lower and normally more arcane levels, like the database. The endless command line invocations are being replaced, slowly but surely, with set-up wizards and prettier user interfaces. There is plenty of debate about whether the products can fulfill the buzzword’s promise for anyone. While the beautiful screens and graphical user interfaces are real, some of the trickiest details are hidden beneath the facade. Sometimes it takes a skilled coder to understand the best way to store data so it can be sorted and retrieved efficiently. What are some use cases for low-code databases? Both traditional developers and novices are able to create basic applications that connect users to databases. 
Some common use cases include: Record keeping — Office workers can create new database tables to track office functions without relying on full developers. Customer interaction — Businesses that need to gather requests or send updates to customers can create outward-facing apps filled with data-rich forms. Partnerships — It’s not just customer-facing applications; low-code tools can speed the development of new business relationships by reducing the iterations needed to support a new contract. Experimentation — Low-code databases are popular for building prototypes and testing workflow. They don’t require the investment of large teams working over multiple months. Who are the main providers of low-code databases? Microsoft was one of the original companies to market a low-code database. Its original version of Access , first shipped in 1992, was aimed at average computer users and eventually bundled with Office products like a word processor. People could create and fill a database with almost as much ease as writing a memo. The latest set of tools is now marketed under the “ Power App ” banner, which supports sophisticated apps wrapped around a database. The tools are tightly integrated with basic Office applications and marketed to the same group of users. Users might not spend much time worrying about where the data goes, but many may be using a connection to Microsoft’s high-end flagship SQL Server. Oracle’s database may have been one of the hardest to install at one time, but today the company markets some versions as “autonomous.” That is, the tool includes automated routines that handle many of the chores that were originally performed by humans. It is said to be “auto-scaling,” “auto-tuning,” “auto-repairing,” and “auto-provisioning.” There are also “automatic backups” and “auto-failover.” Many of these features are making life easier for the database administrators and making it simpler for the rest of the developers to handle the job on the side. While there are web interfaces for creating the databases, they may still be too complicated for ordinary users. SAP calls its process “ Rapid Application Development ” and offers several tools for accessing the data in their cloud. Ruum , for instance, will thread together icons to channel data into SAP processes. Its Robotic Process Automation tool includes AI features like text recognition to convert data automatically before storing it in the database. Who are the upstart providers? It’s difficult to draw the line between a low-code database and any generic application. Many apps are just thin front ends wrapped around a database, so users may be storing their information in traditional databases without even realizing it. A layer of automation eases the flow, at least for common applications. Some open source toolkits are designed to make this simple. Drupal and Joomla , for instance, are content management systems designed to create databases filled with pages and articles. Drupal’s Webform module adds the ability to create elaborate surveys so users can input their own data. Other content management systems like WordPress can do much of the same thing, but they’re often more focused on building out blogs and other text documents. The major cloud services are adding tools and offering multiple ways to create an app that stores data in the cloud’s data services. Google’s AppSheet offers a quick way to thread together an app that is tightly integrated with the office products in G Suite. 
It is one replacement for App Maker, an earlier effort that recently shut down. The Google G Suite also includes Google Forms , one of the simplest ways to gather data from users into spreadsheets. To make things a bit more confusing, Google also supports AppEngine and AppScript , two other tools that simplify the process of creating apps but use enough of a programming language that they might not be considered “low code” even though they’re pretty easy to use. Amazon is also pushing out new options. Its Honeycode offers pure drag-and-drop simplicity as a front end. Any data can be routed to any of the various AWS storage services and databases using Lambda functions. It also offers AppFlow , a tool for connecting different AWS services and also external ones like Salesforce. Other cloud services are specializing in bringing computation close to the users with their distributed endpoints. Cloudflare’s Workers will respond quickly from the closest Cloudflare CDN node after executing snippets of traditional languages like JavaScript. Airtable is concentrating on improving the user interface by adding an elegant presentation layer for the browser that turns its cloud-hosted database into a prettier, more sophisticated app. There are several major ways of presenting the data tables, from spreadsheet grids to calendars to kanban boards. They also begin with a number of templates for common use cases. At some point, the products become so elaborate that they aren’t considered or marketed as merely databases. There are several dozen good examples that are packaged as “robotic process automation” or “hyperautomation.” Some of these include Appian , Kissflow , or Outsystems. All use many of the same techniques for enabling average users to write code in an easy way. All end up storing the data in a database. But at some point, the database is buried so deeply in the code that they stop fitting into the “low-code database” box. Is there anything a low-code database can’t do? The sophistication and polish of low-code tools is substantial, and many simple tasks can be accomplished by developing an app that acts as a basic front-end to the database. If the job involves creating, updating, or deleting rows in a database, it can be the quickest way to deliver a tool to users. Most of the time, low-code tools offer a backdoor for installing larger chunks of code to handle cases that might not be accomplished with the standard features. Skilled developers can make use of the low-code features to move quickly and then resort to more traditional code. AWS Lambda functions, for instance, can execute a fairly big block of code when triggered by Honeycode. Some people are writing elaborate simulations and computational jobs that take advantage of Lambda’s low cost. But low-code solutions, and especially low-code databases, are often tripped up by small but important caveats in the workflow — ones that, for example, might involve someone in the back office explaining that an entry is valid on all days except on the second Tuesday of the month. Or perhaps when supplies run low, and orders from better customers are processed first. These kinds of details need programmers to write code. This article is part of a series on enterprise database technology trends. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
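To make the "second Tuesday of the month" caveat above concrete, here is a rough sketch of the kind of rule that tends to fall outside a low-code tool's built-in validators and ends up in a custom code escape hatch instead. The function names and payload shape are hypothetical and not tied to any specific vendor's API.

```python
# A hypothetical custom validator of the sort a developer might drop into a
# low-code tool's code escape hatch when the built-in rules run out.
from datetime import date

def is_second_tuesday(d: date) -> bool:
    """True if d is the second Tuesday of its month."""
    # Tuesdays have weekday() == 1; the second one always falls on day 8..14.
    return d.weekday() == 1 and 8 <= d.day <= 14

def validate_entry(entry: dict) -> bool:
    """Reject entries dated on the second Tuesday of the month."""
    entry_date = date.fromisoformat(entry["date"])
    return not is_second_tuesday(entry_date)

print(validate_entry({"date": "2021-02-09"}))  # second Tuesday of Feb 2021, so False
print(validate_entry({"date": "2021-02-16"}))  # an ordinary Tuesday, so True
```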
"
14,193
2,021
"Creatio raises $68 million for low-code enterprise process automation | VentureBeat"
"https://venturebeat.com/2021/02/22/creatio-raises-68-million-for-low-code-enterprise-process-automation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Creatio raises $68 million for low-code enterprise process automation Share on Facebook Share on X Share on LinkedIn Creatio Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Creatio , a low-code process automation and customer relationship management (CRM) platform, has raised $68 million in a round of funding led by growth equity firm Volition Capital. The raise, which is the first external investment in Creatio’s eight-year history, comes during a boom period for low-code and no-code platforms , spanning everything from enterprise app building and marketing analytics to web development , game production , and process automation. The global low-code development market is pegged at roughly $10 billion today , a figure that’s projected to rise to $187 billion within a decade as businesses battle to hire and retain top technical talent. Low-code platforms promise to help businesses improve many of their internal development and operational processes by enabling less technically-able workers to lean on automated tools such as Creatio. Founded in 2013, Boston-based Creatio was initially known as BPM’Online before a rebrand in 2019. The company offers a low-code studio that enables businesses to automate any of their internal processes in minutes through a drag-and-drop rules-based interface. For example, the myriad steps involved in a repetitive employee onboarding process, such as sending back-and-forth emails and uploading documents, can be fully automated with Creatio. Above: Creatio: Studio interface Creatio also offers enterprises integrations with external tools such as Excel, Microsoft Exchange, and Google, while the Creatio marketplace opens up a wide array of connections to social networks, messaging services, and productivity tools, as well as out-the-box process templates. Elsewhere, Creatio offers an open API for businesses to develop their own custom integrations with third-party apps. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Creatio’s raise follows a flurry of activity in the process automation space. This movement has been accelerated by the pandemic, which has required businesses to optimize operations. Germany-based no-code process automation startup Next Matter announced a $4 million seed investment earlier this month to expand in the U.S. This came shortly after Boston-based Indico locked down $22 million in funding. 
And in the closely related robotic process automation (RPA) sphere, UiPath this month secured a $750 million investment at a $2 billion valuation. Elsewhere, Microsoft recently announced a new process advisor tool that identifies processes for automation and builds on the platform’s existing RPA toolset. That Creatio has grown organically without any external funding is an impressive feat, particularly as it claims 600 employees globally and has amassed a roster of clients that include BNP Paribas, Hershey’s, and DB Schenker. With $68 million in the bank, the company said it intends to “build aggressively” on this momentum, with plans to invest in its R&D, marketing, and sales. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,194
2,021
"How low-code platforms can aid intelligent business process management | VentureBeat"
"https://venturebeat.com/2021/04/25/how-low-code-platforms-can-aid-intelligent-business-process-management"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How low-code platforms can aid intelligent business process management Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The potential for low-code/no-code platforms is enormous. Low-code increases the productivity of IT developers — sometimes by several orders of magnitude. And no-code empowers experts and subject matter experts primarily on the business or operations side (as opposed to IT) to become “ citizen developers. ” But as I explained in a previous article , low-code and no-code platforms are not a panacea; they face challenges. Given the broad spectrum of low-code and no-code platforms, how should enterprises find the best options for their specific needs? And what are the use cases for using multiple low-code/no-code platforms? I will address these questions in a series of articles to help you navigate this transformational landscape while avoiding the pitfalls. Specifically, I will be looking at low-code/no-code related to intelligent business process management (BPM), intelligent databases, automated integration, and a number of other areas. In this first installment, I will be focusing on low-code/no-code in the context of intelligent BPM , or iBPM. What is iBPM? iBPM’s core value proposition is the collaboration and orchestration of people, applications, connected devices, and trading partners to achieve and continuously improve business objectives. Intelligence and automation are two essential conjuncts for BPM. Intelligence for BPM comes in many forms: digitizing business rules, intelligent virtual assistants, and increasingly process mining. A BPM solution will involve fully automated robotic process automation sub-processes for repetitive tasks that do not need human intervention and automated tasks assigned to human participants. Thus, increasingly RPA is becoming part of the complete intelligent BPM platform. Here is a simple order-to-cash process example: Some tasks will be performed by humans — for instance, approving the orders. Others could involve automation with RPA — for example, receiving the goods. There will also be tasks accessing systems of record — for instance, preparing and paying the invoice. An iBPM platform will model, execute, monitor, and improve the end-to-end process. Other terms are also often used to denote end-to-end processes. These include “workflow” and “case management.” Intelligent BPM is much more than technology. 
At its core, it is a transformational management discipline that helps organizations achieve their strategic goals. Automation is a crucial component of iBPM solutions. As a discipline, BPM drives the operations of enterprises. It includes several iterative phases from design to execution to monitoring and continuous improvement. There is a remarkably close affinity between low-code/no-code and BPM. As far as back in 2005 or earlier, BPM suites were touted as platforms for model-driven development, which is akin to what we now call low-code/no-code. What were the “models?” Well, check the next section on how low-code/no-code manifests itself in iBPM platforms. Low-code/no-code in iBPM Low-code/no-code iBPM platforms handle: Modeling the workflow or the process The user experience or screens for the human participants, and The process analytics dashboards — for continuous improvement. There are other components of a complete iBPM low-code/no-code platform — such as the decisioning (aka business rules), integration, and data model — but I won’t be getting to those in this post. The following is a simple purchase request process model from Bizagi , using shapes from the BPMN graphical notation for business processes (the de-facto standard): The swim lanes represent the participants in the process. The rectangular shapes are tasks or activities. The diamond shape is for a decision, and the circles represent the start and end of the process. There are many other shapes in the BPMN standard, but these three are the most common. If human participants, such as Boss, Requester, etc, are involved in a particular workflow, the low-code/no-code BPM modeling also supports the creation of UI forms to enable that interaction, and these are pretty easy to model. This “drag and drop” paradigm of building user experience is common and similar across all low-code/no-code platforms that support Web or mobile applications. The following figure illustrates a simple user experience designer from Kissflow. There will be elements such as buttons, input fields, drop-downs, images, etc., that a non-technical developer can use to create the user experience. The elements are then connected to the properties or fields in the business process being modeled and automated. The interface builder of the iBPM platform is robust enough to allow the designer to build a user experience — preferably without any code. Once the application is deployed, the various participants can then monitor the performance of the activities through interactive analytics. These are actionable analytics dashboards, which means that if there is a bottleneck or issue, the stakeholder can take action, such as escalating or re-assigning a task. The analytics dashboards will typically have pre-built analytics that also support low-code/no-code customization. Here is an example of an actionable business process analytics dashboard from Nintex : iBPM low-code/no-code recommendations Why is it important for organizations to be able to model, automate, monitor, and improve their business processes without coding? An organization is a collection of business processes for production, marketing, sales, service, and support functions. So any optimizations and improvements of the most critical processes will enhance the bottom line: cost savings, revenue generation, and compliance. These are called operational excellence (OE) improvements. iBPM low-code/no-code platforms are an enabling technology for OE. Here are my recommendations for iBPM low-code/no-code. 
Prioritize your improvements: There will typically be many mission-critical and support processes that need improvement. By balancing the complexity of implementation with business value, you will identify the low-hanging fruit. (For more details, check out this explanation of four intelligent automation methodologies ). The result will be a list of automation and OE business processes that you can optimize through an iBPM low-code/no-code platform. Make sure you start with process mining : To find the top priority low-hanging fruit, you need to know the most common process paths, the bottlenecks, the variations, and improvement opportunities. In other words, you need to understand what processes your transactional data is subject to and then improve them. That is precisely the domain of process mining. Do not automate bad processes. The figure below illustrates the OE reference architecture with iBPM low-code/no-code. At the bottom, you have the systems of record that generate the transactions for specific processes. After aggregating and cleaning the transactional data, a process mining tool — such as Celonis — can then identify the most common process path and the variations and the root causes for the issues. Like data mining, process mining algorithmically “mines” and discovers the processes from the transactional data, including the variations and bottlenecks. Based on these, a iBPM low-code/no-code platform is used to improve, implement, and automate the processes, leveraging workflows with human participants and robotic process automation. Create and fund an operational excellence competency center: iBPM low-code/no-code — and all other low-code/no-code, for that matter — is technology. As noted above, it is also a management discipline for operational excellence. For organizations that use this approach, it is a good idea to have a competency center that does three things at a minimum: balances innovation through iBPM low-code/no-code with best practices for security and reliability, enables non-technical subject matter experts to leverage iBPM low-code/no-code and become participants in development, and governs the continuous improvement from process mining to automation. Understand the landscape and leverage experts: There is quite a bit of confusion when it comes to classifying what solutions are BPM solutions. Some analysts classify these platforms as “workflow,” “business process,” or “case management” solutions. For example, see these classification schemes: From Capterra: Workflow Management and Automation Software From DPM: Business Process Management Suites or Systems From Forrester: Digital Process Automation Software Also from Forrester: Dynamic Case Management From Gartner: Intelligent Business Process Management Suites There are also low-code/no-code development platforms that are closely affiliated with the BPM space but that might be classed into other low-code/no-code categories: From Gartner: Enterprise Low Code From Forrester: Low Code Platforms for Business Developers Also from Gartner: Enterprise High Productivity Application Platform As A Service (hpaPaaS) — seriously! The low-code/no-code ecosystem is constantly evolving. There are hundreds of platforms — and new ones are entering the market all the time. Sometimes inexpensive and straightforward low-code/no-code tools will be sufficient for your needs. Do not pay for what you will rarely use. Also, avoid vendor lock-in. 
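To make the process-mining recommendation above more concrete, here is a toy sketch of the discovery step: grouping an event log by case and counting how often each path (variant) occurs. The log entries are invented, and real tools such as Celonis work on far larger transactional datasets and add conformance checking and root-cause analysis.

```python
# Toy illustration of process discovery: rebuild each case's path from an
# event log, then count path variants to find the happy path and deviations.
from collections import defaultdict, Counter

event_log = [
    # (case_id, activity, timestamp) with timestamps as sortable ISO strings
    ("PO-1", "create order",  "2021-03-01T09:00"),
    ("PO-1", "approve order", "2021-03-01T10:30"),
    ("PO-1", "receive goods", "2021-03-03T14:00"),
    ("PO-1", "pay invoice",   "2021-03-05T11:00"),
    ("PO-2", "create order",  "2021-03-02T08:15"),
    ("PO-2", "receive goods", "2021-03-04T16:45"),  # approval was skipped
    ("PO-2", "pay invoice",   "2021-03-06T09:30"),
]

# Rebuild each case's path in time order.
cases = defaultdict(list)
for case_id, activity, ts in sorted(event_log, key=lambda e: (e[0], e[2])):
    cases[case_id].append(activity)

# Count variants: the most frequent path is the happy path; the rest are
# deviations worth understanding before automating anything.
variants = Counter(tuple(path) for path in cases.values())
for path, count in variants.most_common():
    print(count, " -> ".join(path))
```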
There are emerging new and innovative low-code/no-code platforms that support plug-ins and add-ons, including those that address process mapping. Dr. Setrag Khoshafian is a cofounder at Startup Assistant and Principal and Chief Scientist at Khosh Consulting. He was previously VP of BPM Technology at Pega, Senior VP of Technology at Savvion, and CTO at Portfolio Technologies and is a member of the Cognitive World Think Tank on enterprise AI. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,195
2,021
"No-code/low-code: Why you should be paying attention | VentureBeat"
"https://venturebeat.com/business/no-code-low-code-why-you-should-be-paying-attention"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest No-code/low-code: Why you should be paying attention Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. We’ve all been hearing the hype lately about low-code and no-code platforms. The promise of no-code platforms is that they’ll make software development just as easy as using Word or PowerPoint so that the average business user can move projects forward without the extra cost (in money and time) of an engineering team. Unlike no-code platforms, low-code platforms still require coding skills but promise to accelerate software development by letting developers work with pre-written code components. According to Gartner , 65% of application development will be low code by 2024. I was involved in an early comparative productivity benchmark test between traditional development (using Java) and a model-driven low-code/no-code development project back in 2017. The results were impressive: 5X to 7X productivity improvement with low-code/no-code development. A survey by No-Code Census in 2020 showed a 4.6X productivity gain over traditional programming. Low-code/no-code: A fragmented market The low-code/no-code landscape is complex, with numerous solutions, platforms, and submarkets. For example, there are submarkets targeting large enterprises, medium-sized businesses, and small businesses. Enterprise low-code/no-code platforms provide high scalability, performance, security, and integration with enterprise applications. They tend to be more expensive. Here’s Gartner’s Magic Quadrant for enterprise low-code platforms: VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Gartner defines a low-code application platform (LCAP) as “an application platform that supports rapid application development, one-step deployment, execution and management using declarative, high-level programming abstractions, such as model-driven and metadata-based programming languages.” G2 offers a similar landscape overview for small-sized businesses. There is not much of an intersection between the small-business and the enterprise low-code platforms. Some of the small-business platform vendors will not be known or recognized in enterprises. Similarly, small and midsized businesses usually do not tend to purchase the enterprise platforms – primarily due to their pricing and complexity. Not surprisingly, many low-code platforms are business process management platforms. 
BPM has long supported model-driven development (MDD) — where you first diagram how software should work before building it. This diagramming is similar to the BPM process-map approach, where, to specify a business process, you drag and drop shapes representing subprocesses into the correct order. (The most popular process mapping standard supported by most BPM platforms is BPMN. ) So process-centric low-code solutions are quite popular. Examples of BPM low-code/no-code platforms include Appian , Pega , and Outsystems. (Disclosure: I previously worked as VP of BPM technology at Pega.) But there are other paradigms under the low-code/no-code umbrella: Web site low-code/no-code platforms: Enterprises of all sizes can leverage these platforms. The leading contenders are WordPress , Wix , Squarespace , and WebFlow. Database management low-code/no-code platforms: On the high end (enterprise), you have platforms such as Mendix. On the lower end, you have Airtable. There are also NoSQL database low-code/no-code platforms such as KgBase for knowledge graphs. Automated integration low-code/no-code platforms: There are several exciting and emerging platforms in this domain: Zapier , Parabola , and Integromat are in this category. You can develop powerful and complex integration flows relatively quickly through these tools. Here is an example of a Parabola workflow that pulls from an API, does some data manipulations, and then sends it to another API. The automated workflow can be run on-demand, scheduled, or invoked via a webhook. Mobile application development: Most low-code/no-code platforms, such as Bubble , provide responsive UI capabilities for mobile applications. Others offer native support for the leading mobile operations systems (iOS and Android). Thunkable is perhaps the ultimate example for low-code/no-code mobile application development. Many of these platforms provide rich collections of plug-ins and templates for certain types of applications. Other categories of low-code/no-code platforms target specific application areas or niches: E-commerce and online stores: A leading example in this category is Shopify. Work management: A good example in this category is Monday.com. ERP applications: An interesting example here – also listed in Gartner’s MQ – is Zoho. Another significant and impactful platform for ERP and CRM is Salesforce. Blockchain and IoT: Atra is an example in this category – for blockchain. Artificial intelligence: A fascinating area for low-code/no-code is AI, and we are now starting to see the emergence of tools in this area. An example here is C3 AI Ex Machina. Low-code/no-code challenges Low-code/no-code platforms have many benefits, but they also present some challenges and involve a learning curve. Many best practices are just emerging and are relatively immature. This is a critical liability. With traditional programming, there is an enormous body of experience, robust communities, and documented best practices. In many ways, low-code/no-code is at its infancy – even though MDD has been around for a long time: especially with BPM platforms. Here are some of the more critical challenges for low-code/no-code: 1. It involves a culture change: Low-code/no-code requires a change in an organization’s culture , whether that organization be an enterprises or a startup. Changing the culture to obliterate silos is not easy. It requires executive vision and endorsement. 
It also requires the allocation of budget and empowerment to a low-code/no-code digital transformation competency center. 2. It takes time and effort to learn the platforms: Low-code/no-code increases speed and productivity. But it is not easy. The tools and platforms are not trivial, and developing a level of expertise takes time. This is one of the most misunderstood aspects of low-code/no-code. Complex programming constructs such as nested loops are not that easy on any platform. 3. You may need multiple platforms: Some platforms are more complete than others. Unqork and Bubble , for example, are designed to be used across any use case and so offer many options for integration with enterprise systems. However, they can benefit greatly from other components that specialize in specific areas; for instance, Bubble together with, say, Parabola or the Zapier plugin for automated integration. The data manipulation and integration capabilities in Parabola or Zapier are easier to work with than the native ones in Bubble. There are other plugins or technology components that complement low code/no code platforms with additional technologies: Check out, for instance, the technology partnerships for Unqork or the comprehensive list of plugins for Bubble. 4. Resources and community support are scarce: Many low-code/no-code platforms are relatively immature. There are millions of developers – sometimes tens of millions – for conventional programming languages. Many online and on-site courses and books and materials are readily available for languages such as Java or C#. There are multiple communities and resources for outsourcing. It is an entirely different scenario for low-code/no-code – especially for the more recent platforms. 5. Pricing can be confusing: Enterprise low-code/no-code platforms tend to be unnecessarily expensive. The mid- and small-market platforms are less costly but are typically less scalable. The involvement of multiple platforms for an end-to-end solution complicates pricing issues more. Those are just some of the key challenges. They make it clear that low-code/no-code is no panacea. However, it remains a formidable trend for developing innovative solutions both for incumbent enterprises and startups. We should expect to hear about more challenges from this space as it continues to mature. And there will be failed projects. But the advantages – especially in accelerating speed of development and productivity – will win the day. Are you ready? Dr. Setrag Khoshafian is a cofounder at Startup Assistant and Principal and Chief Scientist at Khosh Consulting. He was previously VP of BPM Technology at Pega, Senior VP of Technology at Savvion, and CTO at Portfolio Technologies and is a member of the Cognitive World Think Tank on enterprise AI. VentureBeat regularly publishes guest posts from expert data and AI practioners. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,196
2,019
"CircleCI raises $56 million to continuously test software builds for bugs | VentureBeat"
"https://venturebeat.com/2019/07/23/circleci-raises-56-million-to-continuously-test-software-builds-for-bugs"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages CircleCI raises $56 million to continuously test software builds for bugs Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. San Francisco-based CircleCI , which develops continuous delivery automation software, today revealed that it closed a $56 million series D funding round led by Owl Rock Partners and Next Equity, with participation from existing investors Scale Venture Partners, Top Tier Capital, DFJ, Baseline Ventures, Industry Ventures, Heavybit, and Harrison Metal Capital. The capital infusion brings the startup’s total raised to $115.5 million following a $31 million series C round in January 2018, and CEO Jim Rose says the funding will further CircleCI’s “extensibility” and “agnosticism” capabilities and help to expand the company’s global presence. “Our rapid pace of change over the last 18 months reflects our industry as a whole: software development is increasingly complex, fragmented, and difficult to map. CircleCI is in the unique position to help engineering organizations write better software and deliver value faster,” said Rose. “Our product strategy combines an agnostic, build anything, anywhere roadmap with a deep focus on lifting business value delivery of software everywhere.” CircleCI — which was cofounded in 2011 by Allen Rohner and Paul Biggar and which counts among its customers Facebook, Coinbase, Sony, Kickstarter, GoPro, Spotify, Segment, and Percolate — develops a tool set that integrates with Bitbucket and GitHub to create builds the minute code is committed. It automatically tests those builds in containers or virtual machine and notifies teams if issues arise, and it deploys passing builds to target environments such as Docker images or virtual Linux, Android, or macOS machines. Dev teams can define and orchestrate how execution runs and take advantage of Docker support to build images from registries. Moreover, they’re able to specify required compute and memory resources and tap caching options to speed up builds with save and restore points. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! CircleCI’s SSH support lets managers or dev team members run jobs in local environments, and its security tools — which include LDAP for user management, audit logging, and full-level virtual machine isolation — shields code from illicit tampering. 
On the analytics side of things, an interactive visual dashboard collates metrics in one place, including the number of failed builds and the slowest tests. CircleCI’s platform can be installed on a private server, and the company offers a managed cloud service that includes instant access to feature releases and automatic upgrades. No matter where it’s installed, CircleCI supports any language that builds on Linux or macOS, including C++, Javascript, .NET, PHP, Python, and Ruby. “Every company today relies on software to continuously improve its products and business in order to keep up with consumers’ evolving needs and expectations,” said CircleCI chief technical officer Rob Zuber. “We believe humans should never have to wait on machines. This new round of funding will better allow us to provide developers flexibility and control to power their workflows seamlessly.” It’s been a banner few years for CircleCI, which recorded an increase in the number of monthly jobs on its platform from 7 million to more than 30 million. CircleCI recently expanded its technology partner program (CircleCI Orbs) to 800 integrations across 45 partners (including Azure, Fastlane, and Slack) ahead of the launch of its v2 API, and it opened its first international office in Tokyo along with hubs in Boston and Denver. (Orbs, which launched eight months ago, allows teams and devs to share preferred configurations by packaging commands, executors, and jobs into lines of code.) In other news, CircleCI brought aboard 75 new employees in the first six months of 2019, with projections to have a total of 300 by 2020. And it added to its C-suite with the promotions of Chitra Balasubramanian to chief financial officer, Jane Kim to chief revenue officer, and Erich Ziegler to chief marketing officer. CircleCI says it’s already running Windows jobs for customers in addition to Linux, Docker, and macOS, and says that it intends to support additional git providers in the future (such as GitLab). It’s also testing a new user interface and improving its onboarding process, and it says it plans to preview a new usage-based pricing model that’ll gives teams per-job control over resource classes. “When evaluating companies for investment, one quality I look for is how quickly a company can deliver value to the market,” said Scale Venture Partners partner Andy Vitus. “Scale led CircleCI’s Series B in 2016, and since then I’ve seen the company prove how effectively it improves developer throughput and velocity. We believe CircleCI is the DevOps standard for companies looking to accelerate their delivery pipeline while increasing quality.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,197
2,021
"Continuous software integration/delivery platform CircleCI nabs $100M | VentureBeat"
"https://venturebeat.com/2021/05/11/continuous-integration-and-delivery-platform-circleci-raises-100-million"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Continuous software integration/delivery platform CircleCI nabs $100M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. CircleCI , a continuous integration and delivery (CI/CD) platform for developer teams, today announced it has raised $100 million at a $1.7 billion valuation. Alongside its funding, CircleCI announced it has acquired Vamp , a cloud-native release orchestration platform that automates facets of the software release and rollback process. Continuous integration is concerned with allowing multiple developers to frequently push out small changes to a shared code repository and test for product-readiness. Then there’s continuous deployment, which is all about automatically releasing the quality-checked code to the final product in small batches. Companies can adhere to a continuous integration ethos but not continuous deployment if they prefer a manual approach to shipping final code changes, which falls into a category known as continuous delivery. That, essentially, is what CircleCI is all about — it integrates with various tools and platforms such as GitHub, Bitbucket, and Slack to enable developers to automate many of their software engineering processes, monitor the quality of their code, and enable swift rollbacks if flaws are found. It helps developers move fast without breaking anything major. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We build for the builders of the digital age — developers,” CircleCI CEO Jim Rose told VentureBeat. “Our goal is to help developers deliver high-quality code quickly.” Above: CircleCI CircleCI’s latest raise comes at a time of heightened activity across the developer space, particularly in the CI/CD sphere. Indeed, Jfrog went public on the Nasdaq in September , while CircleCI rival Harness raised $115 million at a $1.7 billion valuation (the same as CircleCI) back in January. Other notable players in the space include CloudBees , which has created a commercial, enterprise-focused product built on top of the open source Jenkins project. Insights Founded in 2011, San Francisco-based CircleCI claims a number of notable customers, including Facebook, Spotify, and GoPro. 
The company had previously raised $215 million, and with another $100 million in the bank it’s going to double down on its existing product and growth — specifically across three key areas, including “managing software complexity, continuous validation, and data and insights.” This builds on other new features CircleCI has recently introduced, including an insights dashboard that gives its cloud-based customers more data to help track the status of their projects and see which jobs are failing, which workflows take the longest, and more. Above: CircleCI insights dashboard Ecosystem CircleCI launched an ecosystem product called Orbs in 2018 that allowed developers to share reusable snippets of code that automate repeatable software development processes. It’s basically an open source approach to sharing solutions to common problems, freeing businesses to dedicate more resources to solving unique business-critical issues. In February, CircleCI launched private orbs , which allows developers to privately share configuration code internally across projects — this might be particularly useful in industries with high privacy or compliance standards, such as finance or health care. For the future, Rose laid out a vision whereby CircleCI further capitalizes on its “network effect and shared ecosystem of builders” to generate insights and data for developers. This is part of what its Vamp acquisition will enable. “The way we build today is more interconnected than ever,” Rose said. “Sources of change no longer exist solely in a repository, making it impossible for a single developer to understand the entire process. The acquisition of Vamp will allow us to go farther into production and bring feedback and data from users into the CI/CD feedback cycle.” CircleCI already releases research based on its vast banks of data via its annual State of Software Delivery report , which establishes benchmarks for engineering team performances. This sheds light on some of the areas CircleCI may venture into in the future. One example Rose provided is a situation in which a business sets key performance indicators (KPIs) around software tweaks and changes — “Does this change decrease shopping cart abandonment rates?” — and can automatically roll back these changes if they don’t meet KPI stipulations. “Over time, we aim to capitalize on our knowledge of how the best teams build so that we can proactively help teams manage complexity and avoid pitfalls other teams have seen,” Rose said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,198
2,021
"Microsoft's new research lab studies developer productivity and well-being | VentureBeat"
"https://venturebeat.com/2021/05/25/microsofts-new-research-lab-studies-developer-productivity-and-well-being"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft’s new research lab studies developer productivity and well-being Share on Facebook Share on X Share on LinkedIn Man and 2 laptop screen with program code. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Microsoft has unveiled a new research initiative designed to “discover, improve, and amplify” developer work and well-being. Announced at Microsoft’s annual developer-focused Build conference, the new Developer Velocity Lab (DVL) will conduct “socio-technical investigations” spanning productivity, community, and general well-being. The lab, which will seek input from across Microsoft’s various units, including GitHub, Visual Studio, and Microsoft Research — as well as external contributors — aims to look at new ways to measure and enhance developer productivity; highlight how developers collaborate and share knowledge around software projects; and “investigate the intersections of happiness, satisfaction, and personal value” in relation to software development. Software company With software pretty much devouring the world — every company is now a software company , as Microsoft CEO Satya Nadella has often said — improving efficiency within developer teams has become a key focus. GitHub, for example, recently rolled out a bunch of new mobile notification controls to boost developer productivity. Elsewhere, a slew of startups, such as Jellyfish , a platform that aligns engineering work with business objectives, and Tines , which automates repetitive workflows for non-developers, have secured significant funds to enhance the output of their developer teams. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The first paper to emerge from DVL was published in March and titled: “The Space of Developer Productivity: There’s more to it than you think.” Here, researchers from GitHub, Microsoft Research, and the University of Victoria posit that developer productivity can’t be measured by a single metric and extends beyond an “individual’s activity levels or the efficiency of the engineering systems relied on to ship software.” Microsoft and its GitHub subsidiary have a track record of producing research papers focused on developers, something DVL aims to build on. “DVL will extend this model by making the research available and accessible to broad audiences — including developers, leaders, enterprises, and OSS (open source software) communities — through additional content, such as short papers, assessments, and videos,” Dr. 
Nicole Forsgren, GitHub’s VP of research and strategy, told VentureBeat. “We want to get creative so that our research can be easily and quickly usable by many people.” Open source Microsoft’s new lab aligns with a number of other recent trends, such as the myriad low-code/no-code platforms that have emerged to democratize software development, as well as the continued growth of open source tech in enterprises and beyond. Indeed, Microsoft has previously noted that open source is now the accepted model for cross-company collaboration, allowing the tech giants of the world to bypass much of the traditional lawyering in favor of quickly joining forces on projects. DVL’s most recent paper , published this month, looks at the motivations and challenges of contributing to open source software projects for social good. “In addition to including open source developers and communities as part of our research agenda, DVL embraces an open source ethos,” Forsgren added. “For example, in most cases we will publish in open access journals or open-source our papers, allowing for input from other researchers and organizations to build off of. Additionally, we plan to support and share anonymized, curated datasets in the future as we work toward more open models.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,199
2,014
"Community Health Systems' data breach will likely be the first of many in health care | VentureBeat"
"https://venturebeat.com/2014/08/18/chinese-hackers-pull-largest-healthcare-cyber-attack-on-record"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Health Systems' data breach will likely be the first of many in health care Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Data breaches at health care systems are on the rise, experts say, and these will become more common in the coming years as more patient data goes digital. Community Health Systems , a large health care group that has 206 hospitals in 29 states, said Monday that a cyberattack originating in China resulted in the theft of Social Security numbers and other personal data belonging to 4.5 million patients. The scale of the attack makes it the largest in the U.S. since the Department of Health and Human Services began tracking such events in 2009. Hospitals and health insurance companies are accustomed to protecting data against privacy breaches, but outright cybertheft may be a threat they’re less prepared for. In Community Health’s case, the data stolen didn’t contain any clinical data or credit card data. But the thieves did manage to grab Social Security numbers and other personal information, which crooks can cross-referenced with other data to form a composite picture of a would-be victim. It’s by using these composites that bad actors can steal identity and assets. Specifically, the data stolen from Community Health included patient names, addresses, birth dates, and telephone numbers of patients who had seen Community Health Systems doctors in the past five years. The firm says it’s now talking to patients and regulatory agencies about what happened, and the possible implications. The Chinese group that staged the attack appears to be the same people who have targeted databases of companies in other U.S. industries, said a representative from FireEye Inc.’s Mandiant forensics unit , which led the investigation of the attack in April and June. The FBI, which is now investigating the case, said in April that health care providers typically do not use the same high levels of security technology as companies in other industries. Because of this, the bureau warned, health care providers and payers could be targeted. The health care industry includes more than just hospitals and insurance companies. Health Information Exchanges, which store health data from multiple hospital systems in a given region, may be a particularly tempting target for hackers. Also, a quickly growing class of digital health data companies stores or manages more digital patient data in order to provide services to providers or on their behalf. 
These companies almost always sign a "business associate" agreement with the health care organization, linking the two legally. So if a digital health company ends up suffering a data breach, the hospital could, by extension, be held responsible. "
14,200
2,014
"Rock Health backs eight new startups, signs three new corporate partners | VentureBeat"
"https://venturebeat.com/2014/08/18/rock-health-backs-nine-new-startups-signs-three-new-corporate-partners"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Rock Health backs eight new startups, signs three new corporate partners Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Digital health accelerator and venture investor Rock Health is busier than ever as more startups and large industry players turn to it for funding, advice, and partnership. San Francisco-based Rock Health announced today that it has added eight new startups to its portfolio and has landed deals with three large corporate partners. The new corporate partners include pharmaceuticals and medical devices giant Abbott, insurer Blue Shield of California, and the health practice of the Deloitte consulting firm. The new startups are addressing a variety of healthcare pain points, including chronic disease, neurological disorders, biotech issues, and health data privacy. Health startups have little trouble finding areas in the U.S.’s inefficient and costly healthcare delivery system to improve upon, and with the Affordable Care Act as a catalyst, many startups are finding the funding they need to work toward solutions. Rock Health managing director Malay Gandhi says that he and his colleagues expect to evaluate more than a thousand digital health startups as potential investments this year. Rock Health’s latest startups: Accountable is an online platform that helps companies handle personal health data securely and in compliance with federal HIPAA laws. Acumen is in the process of building a telemedicine platform that will let doctors evaluate, assess, and manage neurological disorders via patient video. Aptible provides a development platform in which digital health companies can build apps and services that are HIPAA privacy-compliant. Aptible, which is also backed by Y Combinator, launched this month. Benchling provides a cloud-based data management and collaboration platform for life science research and development. TelePharm provides a web and mobile service that allows pharmacists to verify and approve prescriptions, and consults with patients in locations where no pharmacist is physically present. Telepharm recently raised a $2.5 million investment round led by medical tech investor John Pappajohn and Iowa state Board of Regents president Bruce Rastetter. Welkin Health provides daily coaching to diabetes patients via a mobile app. The company has recently contracted with two healthcare providers that plan to prescribe the app to diabetes patients. 
An additional two startups have joined the accelerator's portfolio, but the companies are still in "stealth" and will be announced later. "
14,201
2,010
"EMC acquires database startup Greenplum | VentureBeat"
"https://venturebeat.com/2010/07/06/emc-greenplum-acquisition"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages EMC acquires database startup Greenplum Anthony Ha Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Infrastructure giant EMC just announced that it plans to acquire Greenplum , a company offering database software. San Mateo, Calif.-based Greenplum was founded in 2003 with the goal of tackling the growing demands of storing and analyzing business data. It says that its customers include divisions of NASDAQ and the New York Stock Exchange, Skype, Equifax, T-Mobile, and Fox Interactive Media. [ Update: Greenplum, or rather its previous incarnation Metapa, goes back at least as far as 2001. ] Greenplum’s investors include Dawntreader Ventures, EDF Ventures, Hudson Ventures, Meritech Capital Partners, Mission Ventures, SAP Ventures, Sierra Ventures, and Sun Microsystems. The acquisition price was not disclosed, but it was reportedly all-cash. EMC says the company will form a new data computing product division within the larger firm. In an open letter to customers , Greenplum executives write: EMC and Greenplum bring extraordinary potential to customers at the intersection of big data and sophisticated analytics. As technology and business partners, EMC and Greenplum witness daily the enthusiasm with which customers embrace how together, we impact their businesses in very tangible, positive, and meaningful ways. The technology speaks for itself. What energizes us most about this acquisition is EMC’s ability to open new doors of opportunity and invest in the future we all so passionately believe in: The power of data. [ image via Virtual Geek ] VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,202
2,013
"Data scientists needed: Why this career is exploding right now | VentureBeat"
"https://venturebeat.com/2013/11/11/data-scientists-needed"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Data scientists needed: Why this career is exploding right now Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. If you’re looking into a career as a data scientist, you may soon become one of the most sought-after people in your industry. Given that job posting for data scientists increased 15,000 percent between 2011 and 2012 alone, according to FICO, if you’re not looking for a career as a data scientist, then maybe you should be. Editor’s note: Our upcoming DataBeat/Data Science Summit , Dec. 4-Dec. 5 in Redwood City, will focus on the most compelling opportunities for businesses in the area of big data analytics and data science. Register today! Data scientists are the people who can understand and provide meaning to the piles and piles of data that companies collect and keep nowadays. “Big data” is the buzzword that represents those piles — tons of information about customers, products, and habits, that may one day help people sell advertising, build better merchandise, or even save lives. But they can’t do any of that if there isn’t a data scientist who can look at the data and say, “This is important. Check out this trend.” Between 2010 and 2020, the data scientist career path is projected to increase by 18.7 percent, beat only by video game designers. The big data industry is expected to be a 53.4 billion industry by 2016, as per the infographic below. And if you’re only just deciding on a college education, Stanford, Northwestern University, and the University of California San Diego offer data science degrees. UC Berkeley just added a data science course to its virtual classroom. Check out the infographic below for more on why we need more data scientists and why the market is so worth getting into: VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,203
2,013
"Data, cloud tech go better together, Pivotal exec says | VentureBeat"
"https://venturebeat.com/2013/11/15/data-cloud-pivotal-qa"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Data, cloud tech go better together, Pivotal exec says Share on Facebook Share on X Share on LinkedIn Todd Paoletti, vice president of product marketing at Pivotal. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Earlier this week, Pivotal made headlines when it released a commercially supported version of the open-source Cloud Foundry platform-as-a-service (or PaaS) for building and running applications. In the same breath, the company announced supported services to store and analyze large quantities of data. Vendors don’t typically launch major cloud-computing and big data products on the same day; it’s usually one area or the other. But Pivotal executives see data analytics as a valuable resource for running applications, and they think querying data should be just a few steps away from the management of a cloud, not in a whole other universe. And that mindset helps Pivotal stand out in the PaaS market as well as the data market. To learn more about the merger of cloud tech and data science, we talked to Todd Paoletti, Pivotal’s vice president of product marketing. He discussed how far along data science is inside enterprises, how easy it is to create a repository of data that you can process, and how developers can use analytics to turn out better applications. Here’s an edited transcript of our conversation. VentureBeat: What made you guys want to roll out big data services plus commercially supported PaaS together? Why are they coupled like this? Todd Paoletti: Well, the PaaS is extremely valuable in its own right, because Pivotal still is very much about choice, being able to run your application, your environment, your workloads in the cloud of your choice, being able to leverage the development environments of your choice, etc. So Pivotal CF is extremely valuable in its own right for those purposes — collapsing the time to value for deploying applications on a variety of clouds or systems. But the Pivotal One data services are accessible and integrated within that environment. We are getting customers closer to the time-to-value notion, effectively within the construct of Pivotal CF. A handful of clicks will enable a user to set up a big data cluster. A few more clicks will enable the same customer to begin running analytics and leveraging data in that cluster or analytics that are overseeing applications that are stood up on that cluster, to get value out of it. 
We see those services as really important value-adds for the dev-ops team in the enterprise, but these are also really important from the spirit of helping the enterprise get to value more rapidly through services that are right there on the screen for them.

VentureBeat: Is it really possible to set up a Hadoop cluster with a few clicks?

Paoletti: It is, I would say, in spirit, a few clicks. In a demo, you can see a Pivotal interface. Presented in the interface are service tiles that represent these Pivotal One data services as optional service tiles, and from there the installation and deployment of those services essentially act as any other application that's being installed and deployed by the PaaS.

VentureBeat: How easy is it from there?

Paoletti: Well, that's the value of the PaaS in and of itself. I mean, effectively, we are creating an environment, a framework, for a dev-ops team or an operations team to manage installations, manage workloads, provision applications, deploy, scale up, and scale down entirely through a controlled environment, and we take what could take weeks down to days or hours, depending on the application.

Editor's note: Our upcoming DataBeat/Data Science Summit, Dec. 4-Dec. 5 in Redwood City, will focus on the most compelling opportunities for businesses in the area of big data analytics and data science. Register today!

VentureBeat: Are companies actually showing interest in paying for both the PaaS and the data services?

Paoletti: Three days ago was the first time we actually had available or announced a commercial version of the on-premise PaaS system. Time will tell if we got it right. I will say that our customers have been asking for this kind of capability for a while, so we think we've got it right.

VentureBeat: What if companies can't think of good questions to ask even if they want to analyze data?

Paoletti: Well, then we wouldn't recommend that they buy the service, honestly. It doesn't help us if someone doesn't get use and value out of a service out of the gate. If the customer knows what they want out of big data, they're going to be down a maturity curve with the software we're providing them. We'll absolutely help if they are in a phase where they are trying to get value out of big data. That's where our elite-level data science labs organization comes into play. That team is one of the most sought-after teams within the company, and effectively, they go into an organization that says, "We know we need to get value out of big data," or "We're capturing it, but we're trying to understand how to convert that valuable data into insight and analysis." That's what the team is designed to do, and that's what they do very, very effectively.

VentureBeat: Do you find that companies already employ data scientists?

Paoletti: The notion of data science, if you look at the history of that world, is relatively new, and so there are not that many data scientists out there. There are not enough data scientists to go around, which is why our team is in high demand. Not every enterprise has a data science team. They are more times than not small. But they are growing.

VentureBeat: Are you seeing that developers of the applications that can run on the PaaS want to ask data science questions?

Paoletti: We are, and I think it's a combination of algorithmic data and application development, what we call data-driven applications. It's what Google does really, really well and what Facebook does really, really well.
They see algorithmic updates in real-time data, so that the app behaves with input from big data. One use case is to help protect against fraud. Another use case is making purchase recommendations through pattern recognition on buying patterns. That's what Amazon does really well.

VentureBeat: How confident are you that other PaaS sellers will stick their own data services on top of PaaSes, instead of separate from or next to them?

Paoletti: If they do it right, they will, in part because we're seeing that customers want these things together.

VentureBeat: What will Pivotal come up with next on the data side?

Paoletti: Well, we're going to continue to invest in advanced systems and tools that run across our overall stack, and we will continue to invest in integrating those components of our data stack together, so that companies can use everything more effectively. From massively parallel database technology to Hadoop subsystems to predictive analytics, and Hadoop for the purpose of real-time applications; those advances we've made already. You can imagine we're going to continue to invest in the most cost-effective, flexible, and advanced data architecture to help support the rapid development of big data apps and big data analytics.

VentureBeat: Which do you think are more in demand: data services like those introduced this week, or the commercially supported Pivotal CF?

Paoletti: I think the trend is that this stuff is going to be really, really important. I can't give you a concrete answer. We'll see how the uptick goes. What I will say is: look at the enterprise adoption of cloud services, infrastructure services, and platform services in an IDC report or a Gartner report. There's massive acceleration in that space. Cloud Foundry and Pivotal CF will participate in those growth cycles. Separately, if you were to look at an IDC or Gartner report, our data services are going to participate and hopefully exceed growth from those markets independently. So the big growth in cloud utilization and cloud platform adoption absolutely will be big growth for Pivotal CF. Big growth in big data will be big growth for Pivotal One. We believe the combination and the intersection of those two will help us accelerate both even faster than those markets are accelerating, if that makes sense. To be pithy about it, you can eat peanut butter without chocolate, and you can eat chocolate without peanut butter. There's a market for each of them separately, but some people would argue they're better together. But the value of the PaaS system improves with the number of data clusters that are accessed by it and managed by it. The ease of managing applications and managing Hadoop clusters on top of that through the PaaS will perpetuate the adoption of Hadoop in the enterprise. So we think that they are symbiotic, but they do not necessarily have to be sold together or consumed together. "
14,204
2,021
"AWS outage shines a light on hybrid cloud | VentureBeat"
"https://venturebeat.com/2021/12/08/aws-outage-shines-a-light-on-hybrid-cloud"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AWS outage shines a light on hybrid cloud Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As the dust begins to settle on yet another cloud outage , chatter will once again center on the wisdom of companies putting all their digital eggs in a single cloud provider’s basket. Amazon’s AWS “US-East-1” cloud region went down in North Virginia yesterday, disrupting some of Amazon’s own applications and a slew of third-party services that rely on AWS. The cause? An “impairment of several network devices” led to multiple API errors, which in turn impacted myriad AWS services including Amazon Elastic Compute Cloud (EC2), Connect, DynamoDB, Athena, Chime, and more. This isn’t the first time AWS and its customers have suffered at the hand of technical glitches — a similar event occurred just last November that impacted the very same AWS region. And while all the major cloud providers including Microsoft and Google have suffered similar fates at various junctures in the past, as the world’s largest public cloud provider, AWS outages often have the farthest-reaching impact. For several hours yesterday, services such as Disney+, Netflix, Instacart, and McDonald’s were impacted, often to humorous (and somewhat inconvenient) effect, as one McDonald’s visitor demonstrated: VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! I’m almost certain the images on the McDonald’s kiosk are hosted in an S3 bucket in us-east pic.twitter.com/kdQdvfejX0 — shh (@worldwise001) December 7, 2021 Disaster recovery and mitigation With more and more business dollars going toward cloud computing infrastructure , incidents such as this serve to highlight why companies need to adopt robust disaster recovery and mitigation plans. While this might include using third-party data backup services , major cloud outages also support those that argue in favor of hybrid or multi-region cloud strategies — particularly for mission-critical services. With hybrid, companies can use their own on-premises infrastructure, leaning on the public cloud only to ensure that their in-house systems don’t crumble under peak traffic. Chris Gladwin, founder, and CEO of “ exabyte-scale ” database technology company Ocient , says that despite all the hype around cloud migration, the risks posed by major outages mean that “hybrid” will likely be the best approach for many bigger companies. 
"This is not the first time AWS has experienced these issues," Gladwin said. "For mission-critical applications, we see organizations turning to on-premise and hybrid cloud deployments that ensure they have greater line-of-sight and control over their deployments, uptime, and ultimately, business results."

Service level agreements (SLAs) also play an important part in companies' cloud strategies. While any amount of downtime (even minutes) can cost businesses a lot of money, this needs to be balanced against the cost of using public cloud platforms. For example, a company that requires 100% uptime for its application will likely want to host that application across multiple regions, even though this will cost more, but a company that can live with a few hours of downtime once or twice a year might want to hedge its bets and pay less for a single cloud region or zone with a 99% uptime guarantee.

"A cloud service level agreement of 99% uptime still allows almost eight hours per month of downtime," said John Pescatore, director of emerging security trends at cybersecurity training and certification company SANS Institute. "Businesses need to invest in redundant or backup capabilities, or pay for higher levels of guaranteed availability to preserve critical business services when running in the cloud."

Pescatore also highlighted the potential "concentration risk" that large companies face if too many parties in their supply chain use the same single cloud service provider. "Larger businesses need to look at their suppliers and see if they are subject to concentration risk — too high a percentage of suppliers on one cloud service, and even a short outage can be disastrous to business," he said. "
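To make the SLA arithmetic quoted above concrete, here is a minimal sketch in TypeScript that converts an uptime guarantee into the downtime it still allows each month. The 730-hour average month is an assumption (8,760 hours in a year divided by 12), not a figure from the article.

```typescript
// Convert an SLA uptime percentage into the downtime it still allows per month.
// Assumes an average month of 730 hours (8,760 hours in a year / 12 months).
const HOURS_PER_MONTH = 730;

function allowedDowntimeHoursPerMonth(uptimePercent: number): number {
  return HOURS_PER_MONTH * (1 - uptimePercent / 100);
}

console.log(allowedDowntimeHoursPerMonth(99));    // ~7.3 hours, roughly the "almost eight hours" cited above
console.log(allowedDowntimeHoursPerMonth(99.99)); // ~0.073 hours, or about 4.4 minutes
```

Even a 99.9% SLA still leaves roughly 44 minutes of permissible downtime per month, which is why the sources quoted above stress redundancy for mission-critical services.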
14,205
2,022
"How Altogic simplifies app development for enterprises | VentureBeat"
"https://venturebeat.com/2022/04/11/how-altogic-simplifies-app-development-for-enterprises"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Altogic simplifies app development for enterprises Share on Facebook Share on X Share on LinkedIn Instanbul headquartered Altogic has raised $1 million in seed funding to help enterprises build and deploy mobile/web applications faster. Every company today has the ability to build applications for a range of business-to-business (B2B) and business-to-consumer (B2C) use cases. The technologies have evolved continuously, but even with all the novel capabilities in hand, the task of building a production-ready and scalable app remains long, complex, and expensive. Developers have to deal with many complexities on the backend and frontend and often end up delivering their projects late or half-baked — much to the dissatisfaction of their company and its customers. Altogic automates backend development To tackle this challenge and improve enterprises’ time to market, Altogic provides a backend-as-a-service platform that handles key tasks associated with the backend infrastructure of an application. “With Altogic, we provide the set of pre-integrated tools and cloud infrastructure that remove a considerable amount of mundane and repetitive tasks from developers, help them start building products in minutes and deploy them in seconds. Plus, with its no-code capabilities, the solution allows people without a background in programming to develop backend aspects,” Umit Cakmak, the founder and CEO of the platform, told VentureBeat. The backend of an application includes several elements, starting from the app server, database and cache to business logic, job execution and session management. Altogic handles a majority of these through a three-step process. “You first define the data model of your application. The data model defines what will be your key data entities in your app database, what kind of data fields will each entity hold, how these data entities will be related to each other, and finally, what will be the validation rules to run on input data before committing them to your app database. Then, you create your application endpoints (e.g., RESTful API endpoint) and link each endpoint with a cloud function (aka service),” Cakmak explained. In Altogic, endpoints are the communication channels to access the cloud functions of applications and are responsible for exposing application services and data to the outside world. Cloud functions, meanwhile, are defined graphically using nodes, which are the basic service execution units that perform actions on input data and create output. Once these steps are completed, the developer just has to create their execution environment and deploy the app. 
"An environment is a space where your application data is stored and managed and your application's RESTful endpoints are called. In Altogic, the application designs that you have created in steps one and two are all versioned through snapshots. After creating an environment, you deploy a snapshot of your app to the execution environment. You can have several execution environments (e.g., development, test, production) and deploy different snapshots of your app design to these environments. At this stage, you can integrate your Altogic backend to your frontend app using Altogic's client API or using any HTTP client library (e.g., axios, fetch)," the CEO added.

Competition

Since launching the platform in beta, Altogic has roped in about 500 developers, both enterprises and freelancers, to build apps using the platform and guide its development. However, they are not the only ones in this space. Google's Firebase, Amazon's Amplify and open source alternatives such as Supabase, AppWrite, and Nhost are looking to simplify app development for enterprises. Cakmak, however, says their product stands out from the crowd as it makes coding optional for developers and gives them a way to develop apps graphically. "Developers can use built-in or marketplace nodes or even create their own custom nodes and connect these nodes with connectors to define their cloud functions through simple drag & drop operations. This approach brings the best of both worlds, the speed of no-code to quickly develop business logic and integrations and the flexibility of coding to solve complex problems," he said.

With the fresh round of funding, led by ScaleX Ventures, Altogic plans to grow its engineering team and accelerate product development to make its solution generally available to developers worldwide. "We will soon release two new products to further enhance the developer experience and add real-time capabilities to the platform so that our users can develop near real-time apps using WebSockets. We will also expand our cloud infrastructure to new regions," Cakmak said. Globally, the backend-as-a-service space is expected to grow from $1.6 billion in 2020 to nearly $8 billion by 2027. "
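As a rough illustration of the final step Cakmak describes (calling a deployed backend's RESTful endpoint from a frontend with an ordinary HTTP client), here is a hedged TypeScript sketch using fetch. The base URL, the /tasks path, and the env-api-key header are hypothetical placeholders, not Altogic's documented API.

```typescript
// Hypothetical sketch of a frontend calling a deployed backend endpoint with
// fetch, as the quote above suggests. The URL, path, and auth header are
// placeholders for illustration only, not Altogic's actual API surface.
async function createTask(title: string): Promise<unknown> {
  const response = await fetch("https://my-env.example.com/tasks", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "env-api-key": "<environment-api-key>", // placeholder auth header
    },
    body: JSON.stringify({ title, completed: false }),
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}
```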
14,206
2,022
"Last9 emerges from stealth to tackle software reliability challenges | VentureBeat"
"https://venturebeat.com/2022/04/12/last9-emerges-from-stealth-to-tackle-software-reliability-challenges"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Last9 emerges from stealth to tackle software reliability challenges Share on Facebook Share on X Share on LinkedIn Last9's team Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. California headquartered Last9 , a startup helping enterprises navigate software reliability challenges, today emerged from stealth with $11 million series A funding led by Sequoia Capital India. In the past decade or so, the software development space has seen a lot of improvements. The growing adoption of cloud and of microservices architecture has given enterprises the ability to build and push dozens of system updates every week, gaining a major competitive advantage. But, this speed of advancement also brings its set of challenges. “The number of microservices is constantly growing and each of them is being deployed several times a day or week, all hosted on ephemeral servers. A typical request depends on at least three internal and one external service. It’s a densely connected web of systems,” Nishant Modak, cofounder and CEO of Last9, told VentureBeat. In this web, even a tiny anomaly or change event could cascade into a broader system-wide outage that most companies aren’t ready to deal with. They do have site reliability engineers(SREs), but the current scheme of things requires them to manually scour changelogs and charts across tens of dashboards (Grafana or Datadog , for instance) or build some internal system-specific tooling to determine what went wrong and how to fix it. This takes a lot of time and does not help when the business bleeds money. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Last9 simplifies the challenge with ‘aerial view’ Founded in 2020, Last9’s reliability platform solves this challenge by giving SREs an aerial view of their software architecture. The solution uses company data to build a ‘knowledge graph’ of all essential systems (no matter how complex), allowing engineers to scour quickly and zoom in to see the smallest, metric-level data. “Users (enterprises) provide read-only access to their cloud environment and metrics storage. After this, Last9’s platform maps out microservices call graphs, including both internal and external systems. In addition, it baselines key metrics and helps identify, measure and track these indicators to provide insights into how customers are getting impacted,” Modak explained. 
In a nutshell, Last9 provides engineering and devops teams a simpler and faster way to assess and determine what went wrong, whether it was due to a change that was introduced, and how to prevent it, along with the associated business losses. It can also act as a monitoring system, providing early warnings of impending failures and doing significantly faster root cause analysis than traditional methods. The company has seen significant demand for its product and has already handled software reliability for Indian streaming platform Disney+ HotStar, YieldStreet Inc, Skit.ai, HomeLane, DailyRounds, OwlInsights Inc and many other companies.

"Users love the fact that now they have a map of all dependencies in addition to out-of-the-box tracking of their key SLIs. Being able to trace which tenant got affected instantly, and the fact that all this happens without an agent, makes it extremely easy to adopt across all systems," the CEO said.

While a number of companies continue to operate in the software development and monitoring space, including Firebase, Amplify, LogicMonitor and AppDynamics, Modak claims no one is looking at the reliability aspect in the way Last9 does. "Last9's knowledge graph, accessible visually on the screen, enables immediate identification of root cause for any incident and in fact enables early warning of upcoming downtimes since the knowledge graph knows how failures cascade across the system. We can confidently say that no other player at scale in the space today has this capability," he said.

Plan ahead

Now, with the fresh capital in hand, the company plans to build out go-to-market and engineering teams to expand the reach of its product and improve its capabilities. "Improving adoption and integrations for open-telemetry and other open source standards is another key focus area. Change intelligence and insights on failures in a distributed systems environment are key to solving this problem and we will continue to invest heavily in that," Modak noted. "
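The idea that a knowledge graph "knows how failures cascade across the system" can be illustrated with a toy model: given a map of which services call which, a breadth-first walk from a failing node lists everything downstream that could be affected. This is only an illustrative sketch of the concept, not Last9's implementation, and the service names are invented.

```typescript
// Toy model of failure cascading through a service-dependency graph.
// "callers" maps each service to the services that depend on it directly.
// Illustrative only; service names are invented and this is not Last9's code.
const callers: Record<string, string[]> = {
  payments: ["checkout"],
  checkout: ["web-frontend", "mobile-api"],
  "web-frontend": [],
  "mobile-api": [],
};

// Breadth-first walk: everything that directly or transitively calls the
// failing service may be impacted by its outage.
function impactedBy(failingService: string): string[] {
  const seen = new Set<string>();
  const queue = [failingService];
  while (queue.length > 0) {
    const current = queue.shift()!;
    for (const caller of callers[current] ?? []) {
      if (!seen.has(caller)) {
        seen.add(caller);
        queue.push(caller);
      }
    }
  }
  return [...seen];
}

console.log(impactedBy("payments")); // ["checkout", "web-frontend", "mobile-api"]
```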
14,207
2,022
"Zipy.ai, which accelerates software debugging for enterprises, raises $2.8M | VentureBeat"
"https://venturebeat.com/2022/04/13/zipy-ai-which-accelerates-software-debugging-for-enterprises-raises-2-8m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Zipy.ai, which accelerates software debugging for enterprises, raises $2.8M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. San Francisco-based Zipy.ai has raised $2.8 million in seed funding to accelerate software debugging for enterprises worldwide. Today, software development cycles have skyrocketed. Companies are pushing updates occasionally to improve the experience of their customers. But, on some days, these updates also break the product, affecting not just the experience of end-users but also bleeding plenty of money for the organization. In response, support, product and development teams rush to figure out what went wrong on the customers’ end and why — a task that involves stitching data across multiple tools and logs and often takes up hours. Plus, if the customer does not notice or report the bug, the issue may go undetected in the code and affect others. Zipy.ai brings customer observability Founded in 2020 by Vaishalini Paliwal, Zipy.ai solves this challenge by providing enterprises with a unified platform to react to customer issues just as they take place. When something breaks in a workflow, the solution captures the data points and notifies the concerned team on Slack. Then, when a team member interacts with the notification, they are directed to Zipy dashboard which showcases everything associated with the bug , starting from what exactly the customer did to the problem they faced and the exact line of code failure. Using this interface, developer and product teams can easily replay the user session, trace down the problem, and work on a fix, without wasting time on scouring through the data or waiting for the customer to file the bug report first. “Most of us have lived this pain point for a very long time as software engineers. We are very excited about scaling and building on the larger vision of bringing intelligent customer observability to help software teams solve code issues smartly, quickly and proactively. Behind Zipy is a very passionate product and engineering team who has experienced this problem first hand,” Paliwal said. The company launched the product less than a month ago and has already roped in over 100 customers. Zipy.ai was also ranked number one on ProductHunt. Growing need for debugging and observability Owing to rapid digitization and the need to stay ahead of the competition, companies have become bullish on frequent release cycles. 
This has spurred demand for tools tackling various aspects of the app development ecosystem. Recently, software reliability platform Last9 raised $11 million in series A funding, and Turkey-based Altogic, which automates backend development for apps, secured $1 million. Other players in the space include Firebase, Amplify, LogicMonitor and AppDynamics. As the demand grows, Paliwal plans to scale Zipy.ai, taking the product's intelligent customer observability capabilities to more enterprise software teams. The company will deploy the latest round, which was led by Blume Ventures and operator-led venture capital firm Together Fund, to strengthen the platform's technology and make bug solving more proactive, intelligent and tech stack agnostic. "
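The capture-and-notify flow described above (an error occurs, the team gets a Slack message with context) can be sketched generically with a browser error handler and a Slack incoming webhook. This is a hedged illustration of the pattern, not Zipy.ai's code, and the webhook URL is a placeholder.

```typescript
// Generic sketch of the capture-and-notify pattern: catch an unhandled browser
// error and post a short summary to a Slack incoming webhook. Placeholder URL;
// a real tool like the one described above would also capture session context.
const SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ";

window.addEventListener("error", (event: ErrorEvent) => {
  const payload = {
    text: `Frontend error: ${event.message} at ${event.filename}:${event.lineno}`,
  };
  // Fire-and-forget notification; failures sending the alert are ignored here.
  void fetch(SLACK_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  }).catch(() => undefined);
});
```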
14,208
2,021
"What is Firebase? | VentureBeat"
"https://venturebeat.com/2021/08/02/what-is-firebase"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What is Firebase? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open source journey! Sign up here. Firebase is a development platform known originally for its realtime database that’s still at its core a multi-node, key-value database optimized for synchronizing data, often between user machines or smartphones and centralized storage in the cloud. It’s designed to make life easier for developers by handling much of the pushing and pulling of data. That relieves app developers of the programming burdens associated with managing versions or locations. They can write the new bits to Firebase and the data will be consistent throughout the system. Firebase is valued largely because it can constantly propagate and synchronize changes between local copies of information stored on users’ machines with versions kept in the cloud. Firebase eliminates many of the challenges of mixing authentication, synchronization, and segregation by juggling multiple versions and ensuring the right bits are the same throughout the system. Today, Firebase is a central part of the Google cloud development tool kit. The product, a culmination of years of evolution, was positioned at the center of a mobile backend-as-a-service offering from the company known as Firebase, which Google acquired in 2014. Firebase is available through Google. Meanwhile, open source libraries and tools are available that interface with Firebase. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Firebase was early to take the form of a database that is not limited to one physical computer. Its modern form allows it to split workloads between multiple machines by either splitting the datasets, creating copies of their bits, or both. Firebase extends the algorithms used for datacenter consistency to the entire network by treating the data stored on users’ phones or desktops as local versions of the big database. In essence, your phone or your laptop is now part of the cloud. While Firebase began as a separate company, Google has tightly integrated the software with its other cloud software products. Firebase ML , for instance, is a collection of libraries that leverage the power of Google’s other tools, like AutoML or TensorFlow. Adding features like those used to find text in an image or find appropriate labels is relatively easy. 
Developers don't need to worry about the dangers of inconsistencies between the data on, say, a user's phone and the central database. Once the data is stored locally, Firebase ships copies to cloud servers so both versions are consistent. The transfer also works in the other direction because changes made on the cloud are replicated locally. Server-side developers can communicate with client software running on users' devices by simply writing data to the Firebase cloud.

Google's Cloud Functions, a serverless option, can also be integrated with Firebase so new data can trigger functions. When a user first logs in, or each time the database changes, a function will be invoked that can then trigger other events or functions in the Google cloud or elsewhere. These functions can be used to post-process images, clean up text, or ensure data consistency. Firebase Cloud Messaging adds an additional layer of organization to the process of sending messages by grouping users together by either name or topic. Once initialized, Firebase can ship event notifications as messages to predefined groups or users who have subscribed to certain topics. Google has also built out a number of standard Firebase use cases, like resizing images or triggering email messages, that simplify some common tasks.

Flutter is another higher-level tool, also built by Google, that integrates sophisticated user interface widgets with the database underneath. It works with a number of databases that range from the simple, like SQLite, to the full-featured, like PostgreSQL. It can also rely upon Firebase itself.

How are established database companies approaching the problem?

When it comes to web development, most older databases placed too many programmatic chores on the backs of developers. These databases began life by storing data on just one machine. While the major databases have long expanded to replicate and share data across multiple databases and machines, they have generally avoided integration with the small, local copies of data cached on users' machines. The cloud vendors are extending their own databases, often with extra layers. AWS created Amplify by linking together several of its lower-level tools to handle authentication and data storage. The DataStore layer will store information locally and push it to the AWS cloud when a connection is available. The tool also bundles a number of other services, including hosting and a set of server apps for editing data structures and content.

What about the upstarts?

The problems Firebase addresses are common in modern development. So it is not surprising that upstart players are working to build on, or even outright replace, Firebase. Supabase and NHost are building backend alternatives to Firebase by adding layers for authentication and replication to PostgreSQL. They've married more modern standards like GraphQL with a server-side core built on a trusted SQL-based engine. Much of the competition is coming from full development platforms that are also adding layers to simplify interacting with the database. Parse, for instance, is a full platform for building client-server apps that integrate with a central database. It adds features like a GraphQL interface, a file system, and a notifications framework to a core that rests upon either PostgreSQL or MongoDB. Back4App is another layer built on top of Parse that simplifies the coding even further. Some other competition comes from non-database companies that offer many of the features as part of mobile app development frameworks.
Products like Xamarin are now more tightly integrated with clouds like Azure. GameSparks is designed to simplify building backends for networked games, a job that requires doing much of the same synchronization as Firebase. Other tools, like PubNub, have approached the problem of streaming messages for tracking virtual groups and spaces, another challenge that requires much of the same support Firebase provides. In some cases, such focused products may deliver exactly what is needed without building on top of Firebase.

Is there anything Firebase can't do?

Firebase is an ideal tool for helping developers get started quickly because it handles much of the work of replicating data and pushing event notifications. It abstracts away the challenges of storing data simultaneously in a user's phone and a central database. The main data model is limited to NoSQL, although some developers have created FireSQL, a tool that adds SQL-like syntax. Firebase pricing is set according to each read or write, a model that appeals to developers early on but can sometimes surprise them if the price jumps rapidly with a product's growing popularity. "
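The write-once, sync-everywhere behavior described above looks roughly like the following with the modular Firebase JavaScript/TypeScript SDK for the Realtime Database. The project config and the scores/alice path are illustrative placeholders, and exact setup details vary by project.

```typescript
// Sketch of writing data and listening for synchronized updates with the
// modular Firebase JS SDK (Realtime Database). Config values and the
// "scores/alice" path are placeholders for illustration.
import { initializeApp } from "firebase/app";
import { getDatabase, ref, set, onValue } from "firebase/database";

const app = initializeApp({
  databaseURL: "https://<your-project>-default-rtdb.firebaseio.com",
});
const db = getDatabase(app);

async function main() {
  // Write locally; Firebase propagates the change to the cloud copy and to
  // every other client subscribed to the same path.
  await set(ref(db, "scores/alice"), { points: 42 });

  // Any client (including this one) sees the value as it is synchronized.
  onValue(ref(db, "scores/alice"), (snapshot) => {
    console.log("current value:", snapshot.val());
  });
}

main().catch(console.error);
```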
14,209
2,021
"Open source backend-as-a-service provider Appwrite raises $10M | VentureBeat"
"https://venturebeat.com/2021/09/28/open-source-backend-as-a-service-provider-appwrite-raises-10m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Open source backend-as-a-service provider Appwrite raises $10M Share on Facebook Share on X Share on LinkedIn Appwrite: Functions monitoring Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open source journey! Sign up here. The burgeoning backend-as-a-service (BaaS) market has ushered in a handful of young upstarts that are leveraging open source to disrupt incumbents like Google’s Firebase. And investors have taken note — just a few weeks ago, Supabase secured $30 million , while Nhost landed a $3 million investment in April. The latest open source “Firebase alternative” to attract investors’ attention is Appwrite , a two-year-old Israeli BaaS company that today announced it has raised $10 million across two recent seed rounds. Flexible BaaS allows companies and developers to forget about infrastructure and put their spadework into the front end. The appeal of open source in the BaaS space is that businesses aren’t locked into any specific ecosystem and can move the service to any production or development infrastructure host. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Appwrite is an entirely self-hosted solution, packaged as Docker microservices for web or mobile app development, and includes user authentication, file storage, a database for storing and querying data, API management, security and privacy, and more. Above: Appwrite Founder and CEO Eldad Fux built Appwrite from scratch and open-sourced the project back in October 2019. A year later, he formally founded the company and recruited the project’s core maintainers from the open source community as full-time employees. For now, Appwrite remains an entirely free, self-hosted solution available on GitHub , although a commercial cloud offering is planned for later this year to allow developers to “onboard Appwrite” more quickly. “Our team is currently focused on making our open source solution the best possible and ensuring we continue to build a vibrant and exciting OSS community,” Fux told VentureBeat. “We believe that backend-as-a-service is one of the most interesting markets out there. 
Together with our OSS community, we believe we can build new innovative business models without hindering our open source offering like other OSS companies are sometimes forced to do.” Given that Appwrite is still technically in beta, its focus has been on individual developers, though Fux notes that these include members from major companies like Amazon, Apple, Microsoft, Alibaba, IBM, Cisco, and more. While Firebase has emerged as one of the leading players in the BaaS space since Google acquired the company back in 2014 , the startup had previously raised some $7 million from investors that include Flybridge Capital Partners. So it seems fitting that Flybridge is now investing in Appwrite eight years later. Other Appwrite investors include Bessemer Venture Partners, Ibex Investors, and Seed Camp, alongside angel backers such as Elastic cofounder Uri Boness and Heroku cofounder James Lindenbaum. Growth market The global BaaS industry was pegged at $1.6 billion in 2020 , a figure that’s predicted to grow to nearly $8 billion within six years. So what’s driving demand, exactly? According to Fux, it all boils down to removing complexity from developers’ everyday lives — the very same developers who are increasingly driving purchasing decisions in companies. “There are just too many tools, frameworks, and services that developers need to master,” Fux said. “For example, cloud providers have done a great job abstracting the hassle that was required in managing infrastructure. Still, they created a new level of complexity for developers who need to master endless services that don’t play well with each other. This problem is becoming worse every day. Therefore, abstraction layers like backend-as-a-service are a must if we want to enable developers to focus on innovation rather than on boilerplate and puzzling solutions together.” Although Appwrite has clear competition in the open source BaaS space, Fux touts the company’s 40,000-strong community of developers building, using, and engaging with other members as an indicator of its strength among rivals. “Growing as an open source project has been a huge factor in our success — building out in the open with a community of maintainers, advocates, and users who believe in our vision acts as a growth multiplier for us,” Fux noted. “The Appwrite community acts as a ‘secret sauce’ that always points our team and product in the right direction.” But is there room for so many open source Firebase alternatives? “As both a developer myself and an insider looking at the recent growth in developer interest in BaaS solutions, it’s very clear that this won’t be a single-winner market,” Fux said. “But there will be a developer favorite, and we believe Appwrite is already in a strong position there and poised to grow even more.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
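As a rough illustration of what a self-hosted BaaS looks like from code, the sketch below calls a local Appwrite instance through its Python SDK. The endpoint, project ID, API key, and the exact create() signature are assumptions; the SDK was still in beta when this piece ran, so the current documentation is the authority.

```python
# Rough sketch of server-side access to a self-hosted Appwrite instance
# using the Appwrite Python SDK. The endpoint, project ID, API key, and
# the exact create() signature are assumptions and have changed across
# releases; check the current docs before relying on this shape.
from appwrite.client import Client
from appwrite.services.users import Users

client = (
    Client()
    .set_endpoint("http://localhost/v1")   # your self-hosted Appwrite endpoint
    .set_project("my-project-id")          # hypothetical project ID
    .set_key("server-api-key")             # hypothetical server API key
)

users = Users(client)
new_user = users.create(
    user_id="unique()",                    # assumed helper value letting Appwrite pick an ID
    email="dev@example.com",
    password="a-strong-password",
)
print(new_user)
```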
14,210
2,020
"Researchers find cutting-edge language models fall short in basic reasoning | VentureBeat"
"https://venturebeat.com/2020/09/09/researchers-find-cutting-edge-language-models-fall-short-in-basic-reasoning"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Researchers find cutting-edge language models fall short in basic reasoning Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Even sophisticated language models such as OpenAI’s GPT-3 struggle with socially important topics like morality, history, and law. That’s the top-line finding from a new paper coauthored by Columbia, University of Chicago, and University of California, Berkeley researchers that proposes a 57-task test to measure models’ ability to reason. Models must possess problem-solving abilities and extensive knowledge about the world to perform well on the test. But in experiments, the coauthors found that the models they benchmarked — including GPT-3 — frequently didn’t know when they were wrong. The goal of the novel test set is to bridge the gap between the knowledge that models see during training and existing measures of success in natural language processing. Like all machine learning models, language models learn patterns from vast data sets often sourced from Wikipedia, Reddit, ebooks, and other web sources. Some recently introduced benchmarks attempt to capture the linguistic skills of models, but so far, there’s little evidence to suggest a correlation between benchmark performance and a model’s grasp of commonsense reasoning. The researchers claim their test is different in that it assesses models across subjects humans commonly learn, like mathematics, history, and ethics. To craft it, graduate and undergraduate students collected 15,908 questions from freely available sources online, including practice exams for undergraduate courses, quizzes for readers of Oxford University Press publications, and tests like the Graduate Record Examination, U.S. Medical Licensing Examination, and Examination for Professional Practice in Psychology. The tasks range in difficulty from an elementary level to an “advanced professional level,” a sampling the coauthors argue is sufficient for identifying a model’s blind spots. Above: Example questions from the researchers’ test set. “We measure arbitrary real-world text understanding,” they wrote, noting that each subject contains at least 100 test examples. “Since models are pretrained on the internet, this enables us to test how well they can extract useful knowledge from massive corpora.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
In addition to GPT-3, the researchers benchmarked Google’s T5 and the Allen Institute for AI’s UnifiedQA question-answering model against their test set. The results show that meaningful progress has only become possible in recent months, with models containing up to 13 billion parameters achieving 25% accuracy and 175-billion-parameter models like GPT-3 reaching 43.9% accuracy. (Parameters are parts of the model learned from historical training data.) But that being the case, GPT-3 failed to excel at any single subject; its performance was on the test set was lopsided, with almost 70% accuracy for its best subject (U.S. foreign policy) but “near-random” performance for several other subjects (e.g., college chemistry). “Overall, GPT-3 does poorly on highly procedural problems,” the researchers explained. “It is notably poor at modeling human (dis)approval, as evident by the low performance on the professional law and moral scenarios tasks, [and it] also has difficulty performing calculations, so much so that it exhibits poor performance on elementary mathematics and many other STEM subjects with ‘plug and chug’ problems … We speculate that is in part because GPT-3 acquires declarative knowledge more readily than procedural knowledge.” The findings imply that current models have room for improvement, but it’s unclear whether existing techniques will suffice. As the researchers point out, previous research indicates that a 10 times increase in model size must be accompanied by an approximately 5 times increase in data, which might be logistically prohibitive. “Aside from the tremendous expense in creating multi-trillion parameter language models, data may also become a bottleneck,” the researchers continued. “There is far less written about esoteric branches of knowledge than about everyday text.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
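The paper's 57-task test ultimately reduces to multiple-choice accuracy. The sketch below shows the general shape of such an evaluation loop; ask_model is a hypothetical placeholder for whatever model is being tested, and this is not the researchers' actual harness.

```python
# Illustrative sketch of scoring a model on multiple-choice questions,
# in the spirit of the 57-task test described above. `ask_model` is a
# hypothetical placeholder for the model under test.
CHOICES = "ABCD"

def format_prompt(question, options):
    lines = [question]
    lines += [f"{CHOICES[i]}. {opt}" for i, opt in enumerate(options)]
    lines.append("Answer:")
    return "\n".join(lines)

def ask_model(prompt):
    # Placeholder: call your model here and return a single letter A-D.
    raise NotImplementedError

def accuracy(examples):
    # Each example: {"question": str, "options": [str, str, str, str], "answer": "A".."D"}
    correct = 0
    for ex in examples:
        prediction = ask_model(format_prompt(ex["question"], ex["options"])).strip()[:1]
        correct += prediction == ex["answer"]
    return correct / len(examples)
```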
14,211
2,021
"Microsoft uses GPT-3 to add AI features to Power Apps | VentureBeat"
"https://venturebeat.com/2021/05/25/microsoft-uses-gpt-3-to-add-ai-features-to-power-apps"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft uses GPT-3 to add AI features to Power Apps Share on Facebook Share on X Share on LinkedIn View of a Microsoft logo on March 10, 2021, in New York. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. During its Build developer conference, Microsoft unveiled the first features in one of its products — Power Apps — powered by GPT-3 , the natural language model developed by OpenAI. Microsoft says this will make it easier for users to build apps without needing to know how to write computer code or formulas, and the features are set to launch for North America customers in preview in English by the end of June. Learning how to create complex data queries can involve a steep learning curve, particularly for data practitioners who don’t know to program. A study from Mendix found that 24% of customers have no previous experience using low-code platforms like Power Apps and that 40% come from a mostly business background. Still, even non-developers have to understand the logic of formulas like “FirstN(Sort(Search(‘BC Orders’, “stroller”, “aib_productname”), ‘Purchase Date’, Descending), 10).” The idea behind the new GPT-3-powered features in Power Apps is to assist people in choosing the right formulas to get the result they need. GPT-3 in Power Apps Roughly a year ago, Microsoft announced it would invest $1 billion in San Francisco-based OpenAI to jointly develop new technologies for Microsoft’s Azure cloud platform and to “further extend” large-scale AI capabilities that “deliver on the promise” of artificial general intelligence. In exchange, OpenAI agreed to license some of its intellectual property to Microsoft, which the company would then package and sell to partners, and to train and run AI models on Azure as OpenAI worked to develop next-generation computing hardware. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In the months that followed, OpenAI released a Microsoft Azure-powered API that allows developers to explore GPT-3’s capabilities.(OpenAI said recently that GPT-3 is now being used in more than 300 different apps by “tens of thousands” of developers and producing 4.5 billion words per day.) Toward the end of 2020, Microsoft announced that it would exclusively license GPT-3 to develop and deliver AI solutions for customers, as well as creating new products that harness the power of natural language generation. 
Microsoft says GPT-3 will be integrated “deeply” with Power Apps, its low-code app development platform — specifically for formula generation. The AI-powered features will allow a user building an ecommerce app, for example, to describe a programming goal using conversational language like “find products where the name starts with ‘kids.'” With the new capabilities, a person can get a formula by typing a plainspoken sentence like “Show 10 orders that have stroller in the product name and sort by purchase date with newest on the top.” Power Apps’ tailored GPT-3 model will offer choices for translating the command into Power Fx , Power Platform’s programming language, like “Filter(‘BC Orders’ Left(‘Product Name’,4)=”Kids”). “The goal of Power FX is to enable people to build apps more quickly by using simple language. With these new features, customers can speak in natural language, and GPT-3 can understand that and put that into the syntax that they’re using,” Eric Boyd, CVP of AI Platform at Microsoft, told VentureBeat in an interview. “What our team did is took the GPT-3 model and produced a specialized model that is particularly appropriate for this particular use case.” Microsoft says the Power Platform team worked closely with its Azure AI division to fine-tune a GPT-3 model that could translate between natural language and Power Fx expressions. Microsoft engineers used Azure Machine Learning-managed endpoints, a new capability announced in preview at Build, to deploy and manage the GPT-3 model used to deliver the new capabilities in Power Apps. “Using an advanced AI model like this can help our low-code tools become even more widely available to an even bigger audience by truly becoming what we call ‘no code,'” Charles Lamanna, CVP for Microsoft’s low-code app platform, said in a blog post. “This will allow people to query and explore data in ways they literally couldn’t do before, and that will be the magical moment. In all cases, there is a human in the loop. This isn’t at all about replacing developers — it’s about finding the next 100 million developers in the world.” Going forward Despite the potential of natural language models like GPT-3, many blockers exist. The models can’t always answer math problems correctly or respond to questions without paraphrasing training data , and it’s well-established that they amplify the biases in data on which they were trained. That’s problematic in the language domain, because a portion of the data is often sourced from communities with pervasive gender, race, and religious prejudices. AI research firm OpenAI notes that this can lead to placing words like “naughty” or “sucked” near female pronouns and “Islam” near words like “terrorism.” A separate paper by Stanford University Ph.D. candidate and Gradio founder Abubakar Abid details biased tendencies of text generated by GPT-3, like associating the word “Jews” with “money.” To address these concerns, Microsoft says it has added filters to help detect sensitive or inappropriate content in any results that might get returned in Power Apps. In Power Apps, GPT-3 offers multiple suggestions for Power Fx formulas, and users can choose which to apply. And Microsoft argues that because the model in this circumstance is generating prescribed formulas, unintended outcomes are less likely than if GPT-3 were asked to answer open-ended questions. 
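Microsoft has not published its fine-tuned model, but the general pattern of translating natural language into a formula can be sketched with a few-shot prompt against the OpenAI completions API as it existed at the time. The engine name, example pairs, and generated Power Fx below are illustrative assumptions, not Microsoft's implementation.

```python
# Generic few-shot sketch of natural-language-to-formula translation.
# This is NOT Microsoft's fine-tuned Power Apps model; the engine name,
# example pairs, and expected output are illustrative, and the call uses
# the OpenAI completions API of that era (openai.Completion).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

FEW_SHOT = """English: show the 5 newest orders
Power Fx: FirstN(Sort('Orders', 'Created On', Descending), 5)

English: find products where the name starts with "kids"
Power Fx: Filter('Products', StartsWith('Product Name', "kids"))

English: {request}
Power Fx:"""

def to_power_fx(request: str) -> str:
    response = openai.Completion.create(
        engine="davinci",            # illustrative engine name
        prompt=FEW_SHOT.format(request=request),
        max_tokens=64,
        temperature=0,
        stop=["\n"],
    )
    return response["choices"][0]["text"].strip()

print(to_power_fx("show 10 orders with stroller in the product name, newest first"))
```

Setting temperature to 0 keeps the completion deterministic, which matters when the output is meant to be an executable formula rather than prose.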
“GPT-3 is the most powerful natural language processing model that we have in the market, so for us to be able to use it to help our customers is tremendous,” Power Apps product marketing manager Bryony Wolf says. “This is really the first time you’re seeing in a mainstream consumer product the ability for customers to have their natural language transformed into code.” Going forward, Microsoft plans to infuse Power Fx into other tools within Power Platform, at which time the natural language features powered by GPT-3 will expand to those products as well. “We’re finding ways to bring it into Azure and our mainstream products,” Boyd said. “We think there are a whole bunch more things that GPT-3 is capable of doing. It’s a foundational new technology that lights up a ton of new possibilities, and this is sort of that first light coming into production.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,212
2,021
"Microsoft, GPT-3, and the future of OpenAI | VentureBeat"
"https://venturebeat.com/2021/06/01/microsoft-gpt-3-and-the-future-of-openai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft, GPT-3, and the future of OpenAI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. One of the biggest highlights of Build, Microsoft’s annual software development conference, was the presentation of a tool that uses deep learning to generate source code for office applications. The tool uses GPT-3, a massive language model developed by OpenAI last year and made available to select developers, researchers, and startups in a paid application programming interface. Many have touted GPT-3 as the next-generation artificial intelligence technology that will usher in a new breed of applications and startups. Since GPT-3’s release, many developers have found interesting and innovative uses for the language model. And several startups have declared that they will be using GPT-3 to build new or augment existing products. But creating a profitable and sustainable business around GPT-3 remains a challenge. Microsoft’s first GPT-3-powered product provides important hints about the business of large language models and the future of the tech giant’s deepening relation with OpenAI. A few-shot learning model that must be fine-tuned? Above: Microsoft uses GPT-3 to translate natural language commands to data queries According to the Microsoft Blog , “For instance, the new AI-powered features will allow an employee building an e-commerce app to describe a programming goal using conversational language like ‘find products where the name starts with “kids.”’ A fine-tuned GPT-3 model [emphasis mine] then offers choices for transforming the command into a Microsoft Power Fx formula, the open source programming language of the Power Platform.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! I didn’t find technical details on the fine-tuned version of GPT-3 Microsoft used. But there are generally two reasons you would fine-tune a deep learning model. In the first case, the model doesn’t perform the target task with the desired precision, so you need to fine-tune it by training it on examples for that specific task. In the second case, your model can perform the intended task, but it is computationally inefficient. GPT-3 is a very large deep learning model with 175 billion parameters, and the costs of running it are huge. Therefore, a smaller version of the model can be optimized to perform the code-generation task with the same accuracy at a fraction of the computational cost. 
A possible tradeoff will be that the model will perform poorly on other tasks (such as question-answering). But in Microsoft’s case, the penalty will be irrelevant. In either case, a fine-tuned version of the deep learning model seems to be at odds with the original idea discussed in the GPT-3 paper , aptly titled, “Language Models are Few-Shot Learners.” Here’s a quote from the paper’s abstract: “Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art finetuning approaches.” This basically means that, if you build a large enough language model , you will be able to perform many tasks without the need to reconfigure or modify your neural network. So, what’s the point of the few-shot machine learning model that must be fine-tuned for new tasks? This is where the worlds of scientific research and applied AI collide. Academic research vs commercial AI There’s a clear line between academic research and commercial product development. In academic AI research, the goal is to push the boundaries of science. This is exactly what GPT-3 did. OpenAI’s researchers showed that with enough parameters and training data, a single deep learning model could perform several tasks without the need for retraining. And they have tested the model on several popular natural language processing benchmarks. But in commercial product development, you’re not running against benchmarks such as GLUE and SQuAD. You must solve a specific problem, solve it ten times better than the incumbents, and be able to run it at scale and in a cost-effective manner. Therefore, if you have a large and expensive deep learning model that can perform ten different tasks at 90 percent accuracy, it’s a great scientific achievement. But when there are already ten lighter neural networks that perform each of those tasks at 99 percent accuracy and a fraction of the price, then your jack-of-all-trades model will not be able to compete in a profit-driven market. Here’s an interesting quote from Microsoft’s blog that confirms the challenges of applying GPT-3 to real business problems: “This discovery of GPT-3’s vast capabilities exploded the boundaries of what’s possible in natural language learning, said Eric Boyd, Microsoft corporate vice president for Azure AI. But there were still open questions about whether such a large and complex model could be deployed cost-effectively at scale to meet real-world business needs [emphasis mine].” And those questions were answered with the optimization of the model for that specific task. Since Microsoft wanted to solve a very specific problem, the full GPT-3 model would be an overkill that would waste expensive resources. Therefore, the plain vanilla GPT-3 is more of a scientific achievement than a reliable platform for product development. But with the right resources and configuration, it can become a valuable tool for market differentiation, which is what Microsoft is doing. Microsoft’s advantage In an ideal world, OpenAI would have released its own products and generated revenue to fund its own research. But the truth is, developing a profitable product is much more difficult than releasing a paid API service, even if your company’s CEO is Sam Altman, the former President of Y Combinator and a product development legend. And this is why OpenAI enrolled the help of Microsoft, a decision that will have long-term implications for the AI research lab. 
In July 2019, Microsoft made a $1 billion investment in OpenAI—with some strings attached. From the OpenAI blog post that declared the Microsoft investment: “OpenAI is producing a sequence of increasingly powerful AI technologies, which requires a lot of capital for computational power. The most obvious way to cover costs is to build a product, but that would mean changing our focus [ emphasis mine ]. Instead, we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them.” Alone, OpenAI would have a hard time finding a way to enter an existing market or create a new market for GPT-3. On the other hand, Microsoft already has the pieces required to shortcut OpenAI’s path to profitability. Microsoft owns Azure, the second-largest cloud infrastructure, and it is in a suitable position to subsidize the costs of training and running OpenAI’s deep learning models. But more importantly—and this is why I think OpenAI chose Microsoft over Amazon—is Microsoft’s reach across different industries. Thousands of organizations and millions of users are using Microsoft’s paid applications such as Office, Teams, Dynamics, and Power Apps. These applications provide perfect platforms to integrate GPT-3. Microsoft’s market advantage is fully evident in its first application for GPT-3. It is a very simple use case targeted at a non-technical audience. It’s not supposed to do complicated programming logic. It just converts natural language queries into data formulas in Power Fx. This trivial application is irrelevant to most seasoned developers, who will find it much easier to directly type their queries than describe them in prose. But Microsoft has plenty of customers in non-tech industries, and its Power Apps are built for users who don’t have any coding experience or are learning to code. For them, GPT-3 can make a huge difference and help lower the barrier to developing simple applications that solve business problems. Microsoft has another factor working to its advantage. It has secured exclusive access to the code and architecture of GPT-3. While other companies can only interact with GPT-3 through the paid API, Microsoft can customize it and integrate it directly into its applications to make it efficient and scalable. By making the GPT-3 API available to startups and developers, OpenAI created an environment to discover all sorts of applications with large language models. Meanwhile, Microsoft was sitting back, observing all the different experiments with growing interest. The GPT-3 API basically served as a product research project for Microsoft. Whatever use case any company finds for GPT-3, Microsoft will be able to do it faster, cheaper, and with better accuracy thanks to its exclusive access to the language model. This gives Microsoft a unique advantage to dominate most markets that take shape around GPT-3. And this is why I think most companies that are building products on top of the GPT-3 API are doomed to fail. The OpenAI Startup Fund Above: Microsoft CEO Satya Nadella (left) and OpenAI CEO Sam Altman (right) at Microsoft Build 2021 And now, Microsoft and OpenAI are taking their partnership to the next level. At the Build Conference, Altman declared a $100 million fund, the OpenAI Startup Fund , through which it will invest in early-stage AI companies. “We plan to make big early bets on a relatively small number of companies, probably not more than 10,” Altman said in a prerecorded video played at the conference. 
What kind of companies will the fund invest in? “We’re looking for startups in fields where AI can have the most profound positive impact, like healthcare, climate change, and education,” Altman said, to which he added, “We’re also excited about markets where AI can drive big leaps in productivity like personal assistance and semantic search.” The first part seems to be in line with OpenAI’s mission to use AI for the betterment of humanity. But the second part seems to be the type of profit-generating applications that Microsoft is exploring. Also from the fund’s page : “The fund is managed by OpenAI, with investment from Microsoft and other OpenAI partners. In addition to capital, companies in the OpenAI Startup Fund will get early access to future OpenAI systems, support from our team, and credits on Azure.” So, basically, it seems like OpenAI is becoming a marketing proxy for Microsoft’s Azure cloud and will help spot AI startups that might qualify for acquisition by Microsoft in the future. This will deepen OpenAI’s partnership with Microsoft and make sure the lab continues to get funding from the tech giant. But it will also take OpenAI a step closer toward becoming a commercial entity and eventually a subsidiary of Microsoft. How this will affect the research lab’s long-term goal of scientific research on artificial general intelligence remains an open question. Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics. This story originally appeared on Bdtechtalks.com. Copyright 2021 VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
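Returning to the fine-tuning question raised earlier in this piece: in practice, specializing a model for one narrow task starts with task-specific training pairs. The sketch below prepares such data in the prompt/completion JSONL layout that OpenAI's fine-tuning tooling used at the time; the examples and file name are assumptions.

```python
# Sketch of preparing task-specific training data for fine-tuning, the
# kind of step implied by "a fine-tuned GPT-3 model" above. The
# prompt/completion JSONL layout matches what OpenAI's fine-tuning API
# expected at the time; the examples and file name are assumptions.
import json

examples = [
    {
        "prompt": "English: show the 5 newest orders\nPower Fx:",
        "completion": " FirstN(Sort('Orders', 'Created On', Descending), 5)\n",
    },
    {
        "prompt": "English: find products where the name starts with \"kids\"\nPower Fx:",
        "completion": " Filter('Products', StartsWith('Product Name', \"kids\"))\n",
    },
]

with open("powerfx_finetune.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# The resulting file would then be uploaded to a fine-tuning job (via the
# OpenAI tooling of that era); the smaller, specialized model trades
# general-purpose breadth for cheaper, faster formula generation.
```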
14,213
2,021
"Researchers open-source benchmarks measuring quality of AI-generated code | VentureBeat"
"https://venturebeat.com/2021/06/03/researchers-open-source-benchmarks-measuring-quality-of-ai-generated-code"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Researchers open-source benchmarks measuring quality of AI-generated code Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The applications of computer programming are vast in scope. And as computers become ubiquitous, the demand for quality code draws an ever-growing number of aspiring programmers to the profession. After years of study to become proficient at coding, experts learn to convert abstracts into concrete, executable programs. But what if AI could do the same? In recent years, large-scale AI language models have shown promise in generalizing to tasks including writing code, implying that humans’ work may be one day supplemented by AI systems. But while some studies show that language models can translate code and fix compilation issues, there’s been little work on rigorously testing the coding ability of models given general coding problems. That’s why a team of researchers at the University of California at Berkeley, Cornell, the University of Chicago, and the University of Illinois at Urbana-Champaign created APPS , a benchmark for code generation from natural language specifications. Unlike prior work on code generation, which mostly focuses on code translation and pseudocode-to-code, the researchers tested models on their ability to take specifications and write code that meets these specifications. Their work comes on the heels of the release of IBM’s Project CodeNet, one of the largest open source dataset for benchmarking around AI for code. But CodeNet centers around the problems of code translation, code similarity, and code constraints. APPS is broader in scope, evaluating models not only on their ability to understand coding syntax but on their ability to comprehend task descriptions and create algorithms to solve these tasks. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “APPS enables robust evaluation of models along several dimensions, providing a precise and comprehensive view of code generation ability,” the coauthors wrote in a paper detailing their work. 
“If a model were to perform well on APPS, this would indicate an ability to flexibly use data structures and programming techniques, as well as an ability to correctly interpret diverse task specifications, follow instructions, and understand human intent.” APPS contains 10,000 programming problems in Python, Java, and C++ ranging in difficulty from introductory to coding competition challenges, as well as a bank of over 130,000 test cases and more than 230,000 human-written solutions for evaluation. The test cases were chosen to create a gold-standard metric for model performance, including correct functionality across edge cases. And most were taken from open access coding websites including Codeforces and Kattis. The introductory problems in APPS, which include counting the number of appearances of a substring and finding if a string is a palindrome, can be solved by programmers with 1-2 years of experience without requiring algorithms. The intermediate, interview-level problems are more difficult in nature and at the level of questions asked in typical technical interviews. As for the competition-level problems, they’re even more challenging and representative of those in high school and collegiate programming competitions like the United States of America Computing Olympiad (USACO). Results The researchers tested several types of models on APPS, including OpenAI’s GPT-2, GPT-3, and an open source version of GPT-3 called GPT-Neo. In experiments, they discovered that the models could learn to generate code that solves easier problems but not without syntax errors. Approximately 59% of GPT-3’s solutions for introductory problems had errors, while GPT-Neo averaged 3%. Moreover, the best-performing model — GPT-Neo — attained only 10.15% accuracy (excluding edge cases) and 1.12% strict accuracy (including edge cases) across introductory-, interview-, and competitive-level problems, indicating that there’s substantial room for improvement. “These results position code generation as a challenging but tractable testbed for large-scale language models … Writing code to meet specifications in natural language is an economically valuable task with widespread social implications should it be solved, in that it could eventually facilitate malicious code generation and one day result in job automation. As large-scale language models have the potential to make significant progress on code generation, it is essential that we begin to track advancements on this task,” the researchers wrote. Several efforts are underway to create viable AI-powered coding tools, including Intel’s ControlFlag , which can autonomously detect errors in code. Codota is developing a platform that suggests and autocompletes scripts in Python, C, HTML, Java, Scala, Kotlin, and JavaScript. Ponicode taps AI to check the accuracy of code, and DeepCode offers a machine learning-powered system for whole-app code reviews ( as does Amazon ). Perhaps one of the most impressive projects to date is TransCoder , an AI system Facebook researchers developed that converts code from one programming language into another. Another contender is a model from OpenAI that was trained on GitHub repositories to generate entire functions from English-language comments. According to a study from the University of Cambridge’s Judge Business School, programmers spend 50.1% of their work time not programming; half of the rest of their time is spent debugging. And the total estimated cost of debugging is $312 billion per year. 
AI-powered code suggestion and review tools, then, promise to cut development costs substantially while enabling coders to focus on more creative, less repetitive tasks. "
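APPS scores models on functional correctness rather than text similarity. The toy sketch below shows that style of evaluation: run each generated solution against stdin/stdout test cases and count the problems where every case passes. The solutions and cases here are placeholders, not benchmark data.

```python
# Toy sketch of functional-correctness scoring in the style of APPS:
# run a generated Python solution against stdin/stdout test cases and
# count how many problems pass every case ("strict accuracy").
import subprocess
import sys

def passes_all_cases(solution_code: str, test_cases) -> bool:
    for stdin_text, expected_stdout in test_cases:
        result = subprocess.run(
            [sys.executable, "-c", solution_code],
            input=stdin_text,
            capture_output=True,
            text=True,
            timeout=5,
        )
        if result.stdout.strip() != expected_stdout.strip():
            return False
    return True

problems = [
    # (generated solution, [(stdin, expected stdout), ...]) -- placeholder data
    ("print(input()[::-1])", [("abc", "cba"), ("racecar", "racecar")]),
    ("print(sum(map(int, input().split())))", [("1 2 3", "6")]),
]

solved = sum(passes_all_cases(code, cases) for code, cases in problems)
print(f"strict accuracy: {solved / len(problems):.2%}")
```

Strict accuracy is deliberately unforgiving: a solution that fails a single edge case counts as a miss, which is part of why the headline numbers reported in the paper are so low.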
14,214
2,021
"AI Weekly: The promise and limitations of machine programming tools | VentureBeat"
"https://venturebeat.com/2021/06/18/ai-weekly-the-promise-and-limitations-of-machine-programming-tools"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: The promise and limitations of machine programming tools Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Machine programming, which automates the development and maintenance of software, is becoming supercharged by AI. During its Build developer conference in May, Microsoft detailed a new feature in Power Apps that taps OpenAI’s GPT-3 language model to assist people in choosing formulas. Intel’s ControlFlag can autonomously detect errors in code. And Facebook’s TransCoder converts code from one programming language into another. The applications of computer programming are vast in scope. And as computers become ubiquitous, the demand for quality code draws an ever-growing number of aspiring programmers to the profession. After years of study to become proficient at coding, experts learn to convert abstracts into concrete, executable programs. But they spend the majority of their work hours not programming. According to a study from the University of Cambridge, at least half of developers’ efforts are spent debugging, which costs the software industry an estimated $312 billion per year. AI-powered code suggestion and review tools promise to cut development costs substantially while allowing coders to focus on more creative, less repetitive tasks, according to Justin Gottschlich, principal AI scientist and director of Intel’s machine programming division. Gottschlich is spearheading the work on ControlFlag, which fuses machine learning, formal methods, programming languages, and compilers to detect normal coding patterns, identifying abnormalities in code that are likely to cause a bug. “Prior to machine learning- or AI-based programming systems, programmers had dozens — perhaps hundreds — of tools to help them be more productive, produce code with fewer logic errors, improve the software’s performance, and so on. However, nearly all of these systems were ‘rules-based,'” Gottschlich told VentureBeat via email. “While useful, rules-based systems are inherently limited in scope by the rules that they have been programmed into them. As such, if new kinds of things occur, the systems would need to be updated by humans. Moreover, these rules-based systems have always been prone to human error in creating the rules encoded in them. For example, programmers may accidentally create a rule to find a certain type of bug, but incorrectly define the rules to find it. 
This hidden bug in the rules system could go undetected forever.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Gottschlich asserts that AI-based systems offer benefits over the rules-based systems of yesteryear because AI can learn on its own in an unsupervised fashion, enabling it to draw on massive code databases. With unsupervised learning, an algorithm is fed “unknown” data for which no previously defined labels exist. The system must teach itself to classify the data by processing it to learn from its structure. For example, ControlFlag was trained on over 1 billion unlabeled lines of code to identify stylistic variations in programming language. As for TransCoder, it learned to translate between C++, Java, and Python by analyzing a GitHub corpus containing over 2.8 million repositories. Microsoft trained a bug-spotting program on a dataset of 13 million work items and bugs from 47,000 developers across AzureDevOps and GitHub repositories. And code review platform DeepCode’s algorithms were taught using billions of lines of code captured from public open source projects. Code generation versus augmentation There’s a difference between AI-powered coding tools that can generate code from whole cloth versus augment a programmer’s workflow. The latter is more common. Startups such as Tabrine (formerly Codota) are developing platforms that suggest and autocomplete scripts in Python, C, HTML, Java, Scala, Kotlin, and JavaScript. Ponicode taps AI to check the accuracy of code. Intel’s Machine Inferred Code Similarity engine can determine when two pieces of code perform similar tasks, even when they use different structures and algorithms. And DeepCode offers a machine learning-powered system for whole-app code reviews — as does Amazon. “Currently, we see a lot of AI-powered assistants, enabling software engineers to gain velocity and accuracy in their work. And the reason for the availability of more assistant tools than automation tools is that AI-powered automation has simply not yet reached the level of accuracy required,” Ponicode CEO Patrick Joubert told VentureBeat. “Our industry is still young, and even though we can already see the potential of automation with AI based code generators, we have to acknowledge that automatically generated code is still pretty unmaintainable and the overall quality is not meeting the right standards yet. While some engineers are working on the future of AI powered automation, my team and I, along with many other stakeholders, are dedicated to creating tools that can be used today. Within a few years I believe there will be enough tools to cover all steps of the development lifecycle.” For Joubert, the most intriguing categories of machine programming tools today are autocompletion and code analysis. Autocompletion systems like Tabnine and Kite employ AI to analyze semantics and make sense of code, autocompleting functions with a sense of the code’s semantic content and purpose. As for code analysis tools like Snyk and DeepCode, they’re dedicated to finding vulnerabilities in the code and suggesting actions to resolve them. “When we see the numerous leaks and bugs from any software, including the ones built by leading multinationals, we can agree that [the software] industry has not yet matured. AI-powered coding tools are mostly meant to enhance the developer experience and empower them, thanks to greater velocity and greater efficiency,” Joubert added. 
“Behind these developer-focused benefits, I believe we are on the way to allowing software engineers to build industrial-grade software, where quality, innovation, and speed are reached systematically … Autocompletion [in particular is] enabling software engineers to focus on the most complex part of their codebase and removing the burden of manually writing long strings of code.” Limitations Both AI-powered code generators and coding assistance tools have their limitations. For example, while GitHub has over 250 million code repositories alone, most of the data is unannotated. There’s only a few examples that describe precisely what the code does — posing a challenge for systems that can’t learn from unlabeled data. In an effort to address this, IBM recently released CodeNet , a 14-million-sample labeled dataset with 500 million lines of code written in 55 programming languages. The company claims that the rich annotations added to CodeNet make it suitable for a diverse set of tasks as opposed to other datasets specialized for specific programming tasks. Already, researchers at IBM have conducted several experiments with CodeNet, including code classification, code similarity evaluation, and code completion. “It is my speculation that in the next decade, code semantics understanding systems are likely to be one of the most important areas of machine programming in the coming decade,” Gottschlich said. “It depends on the domain the machine programming system is being applied to. For small programs, such as unit tests or regression tests, full program synthesizers are a reality today. Yet, for larger programs, it’s currently computationally intractable for machine programming systems to generate the potential thousands or millions of lines of code without the assistance of a programmer.” Boris Paskalev, the cofounder and CEO of DeepCode, calls creating a couple of lines of code with AI “more of a toy than a productivity breakthrough.” While techniques like natural language processing work well with text because there’s fixed limits on the words and syntax that need to be understood, code isn’t the same, he argues. “Since there are no formal rules for software development, [programming] is an art that requires a complete understanding of code and a developer’s intentions to produce something that works as expected without bugs,” Paskalev told VentureBeat. “As far as we’ve come in using machine learning and neural networks for code, we’re still only in the ‘invention of the wheel’ phase … machine learning is already proving to be very useful for code, but only after it goes through a semantic machine learning-representation of the code: making sure all semantic facts, variables, transitions, and logical interrelations are clearly represented and considered by the learning model.” To Paskalev’s point, recent studies suggest that AI has a ways to go before it can reliably generate code. In June, a team of researchers at the University of California at Berkeley, Cornell, the University of Chicago, and the University of Illinois at Urbana-Champaign released APPS , a benchmark for code generation from natural language specifications. The team tested several types of models on APPS, including OpenAI’s GPT-2, GPT-3, and an open source version of GPT-3 called GPT-Neo. In experiments, they discovered that the models could learn to generate code that solves easier problems — but not without syntax errors. 
Approximately 59% of GPT-3’s solutions for introductory problems had errors, while the best-performing model — GPT-Neo — attained only 10.15% accuracy. “When generating code from whole cloth, there are typically challenges around both specifying the intent and consuming the results,” Tabrine CEO Dror Weiss told VentureBeat. “User intent can be specified in natural language by providing examples, writing code in a higher-level language, or in other means. But in most cases, this intent does not provide a full specification of the desired behavior. Also, the generated code may be following different route than what the developer had in mind. As such, it may be challenging for the developer to judge whether the code performs the desired operation exactly.” Facebook AI researchers Baptiste Rozière and Marie-Anne Lachaux, who worked on TransCoder, agree with Tabrine’s assessment. “It is inherently difficult to generate correct code from unspecific natural language problem descriptions that could correspond to several different code snippets. An easier task would be to generate code from an input that is more specific and closer to the output code, like pseudo-code or code written in a different language,” they told VentureBeat. “A huge obstacle to the adoption of … methods generating large amounts of code without human supervision is that they would need to be extremely reliable to be used easily. Even a tool that could generate methods with 99% accuracy would fail to generate a working codebase of hundreds of functions. It could speedup the code generation process but would still require human testing and intervention.” Rozière and Lachaux also point out that tasks around code generation are generally much harder than classification tasks because the model has a lot of freedom and can create many different outputs, making it hard to control the correctness of the generation. Moreover, compared with natural languages, programming languages are very sensitive to small errors. A one-character difference can change the semantics of the code and make the output faulty. “Current machine learning algorithms may not be able to generalize well enough to different problems to match human performance for coding interviews without larger datasets or much better unsupervised pre-training methods,” Rozière and Lachaux said. Potential benefits Paskalev thinks it’ll be at least five to ten years until natural language processing enables developers to create “meaningful components” or even entire apps from a simple description. But Gottschlich is more optimistic. He notes that AI-powered coding tools aren’t just valuable in writing code, but also when it comes to lower-hanging fruit like upgrading existing code. Migrating an existing codebase to a modern or more efficient language like Java or C++, for example, requires expertise in both the source and target languages — and it’s often costly. The Commonwealth Bank of Australia spent around $750 million over the course of five years to convert its platform from COBOL to Java. “Deep learning already enables us to cover the smaller tasks, the repetitive and redundant ones which clutter a software engineers’ routine. Today, AI can free software engineers from tedious tasks slowing them down and decreasing their creativity,” Gottschlich said. “The human mind remains far superior when it comes to creation, innovation, and designing the most complex parts of our softwares. 
Enabling them to increase velocity in these exciting, high added value parts of their work is, I believe, the most interesting way to leverage the power of machine learning today.” Joubert and Weiss say that the potential business value of machine programming also can’t be ignored. An estimated 19% to 23% of software development projects fail, with that statistic holding steady for the past couple of decades. Standish Group found that “challenged” projects — i.e., those that fail to meet scope, time, or budget expectations — account for about 52% of software projects. Often, a lack of user involvement and clear requirements are to blame for missed benchmarks. “We see a great number of new tools using AI to enhance legacy code and help existing assets reach industrial-grade standards. We can elevate developer legacy code management workflows and be part of reducing the hefty level of technical debt built up over the past 50 years in the software industry,” Joubert said. “The days when developers had to write and read code line by line are gone. I’m excited to see how the other steps in the software development lifecycle are going to be transformed and how tools will reach the same level that Kite or Snyk have attained. Leveraging AI to build efficient, one-purpose, tested, secure, and documented code effortlessly is going to profoundly change the way software companies can create incremental value and innovation.” From Weiss’ perspective, AI-powered coding tools can reduce “costly” interactions between developers like Q&A sessions and repetitive code review feedback while shortening the project onboarding process. “[These] tools make all developers in the enterprise better. They take the collective code intelligence of the organization and make it available, during development time, to all developers. This allows any developer on the team to punch above their weight,” he said. For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine. Thanks for reading, Kyle Wiggers AI Staff Writer VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
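The sketch below is a deliberately tiny toy illustrating the idea behind anomaly-based checkers such as ControlFlag: learn which code patterns are common in a corpus and flag the rare ones for review. It is not Intel's implementation; the corpus, the normalization, and the threshold are all simplifications.

```python
# Toy illustration of anomaly-based bug finding: count normalized
# if-condition patterns in a corpus and flag patterns that occur rarely.
import re
from collections import Counter

def condition_patterns(source: str):
    # Extract `if (...)` conditions and collapse identifiers/literals to X.
    for cond in re.findall(r"if\s*\((.*?)\)", source):
        yield re.sub(r"\b\w+\b", "X", cond.replace(" ", ""))

corpus = [
    "if (x == 1) {}", "if (y == 2) {}", "if (flag == true) {}",
    "if (count == 0) {}", "if (z = 3) {}",   # assignment in a condition: rare
]

counts = Counter(p for snippet in corpus for p in condition_patterns(snippet))
RARE_THRESHOLD = 2  # assumption: patterns seen fewer than 2 times are suspicious

for snippet in corpus:
    for pattern in condition_patterns(snippet):
        if counts[pattern] < RARE_THRESHOLD:
            print(f"possible anomaly in {snippet!r}: pattern {pattern!r}")
```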
14,215
2,023
"Arize launches Phoenix, an open-source library to monitor LLM hallucinations | VentureBeat"
"https://venturebeat.com/ai/arize-launches-phoenix-an-open-source-library-to-monitor-llm-hallucinations"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Arize launches Phoenix, an open-source library to monitor LLM hallucinations Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Arize AI , a California-headquartered company providing machine learning (ML) observability capabilities, today announced Phoenix, an open-source library to monitor large language models (LLMs ) for hallucinations. The software solution comes as the industry re-tools around LLMs and data scientists apply large foundational models to new use cases, including those involving medical and legal data — where even the slightest level of hallucination or bias can create a major problem in the real world. It is designed to be a standalone offering delivering ML observability in a data science notebook environment where data scientists build models, the company said. How exactly does Phoenix help with LLMs? Large language models like OpenAI’s GPT-4 and Google’s Bard are all the rage today, with data scientists and ML engineers racing to build applications on top of them. These could be anything from virtual lawyer products providing legal advice to healthcare chatbots designed to summarize doctor-patient meetings or provide information about existing insurance coverage. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Now, while these applications can be very effective, the models running them remain susceptible to hallucination — in other words, producing false or misleading results. Phoenix, announced today at Arize AI’s Observe 2023 summit, targets this exact problem by visualizing complex LLM decision-making and flagging when and where models fail, go wrong, give poor responses or incorrectly generalize. “Phoenix runs locally, in an environment that interfaces with Notebook cells on the Notebook server. Its library uses embeddings (vectors representing meaning and context of data points that the model processes and generates) and clustering of those embeddings as a method for data visualization and debugging,” Jason Lopatecki, CEO and cofounder of Arize, tells VentureBeat. In the real world, this means a user just has to upload the chatbot conversation — complete with prompts and responses — and start the software. 
The library will automatically use the foundational embeddings (mapping out how they connect, how they are related and how they progress as sentences are generated) and LLM-assisted evaluation to generate scores for responses and visualize them to show where the bot gave a good response and where it failed. As the visualization is produced, the user can investigate, grab groups of responses representing a problem (like questions from Spanish-speaking end users where the LLM responded incorrectly) and troubleshoot for fine-tuning the model and improving its outcomes. “Once in a notebook environment, the downloaded data can power observability workflows that are highly interactive. Phoenix can be used to find clusters of data problems and export those clusters back to the observability platform for use in monitoring and active learning workflows,” Lopatecki added. It can also help surface issues like data drift for generative AI , LLMs, computer vision and tabular models, the company noted. Rapidly evolving space While Arize AI claims that Phoenix, which is available starting today, is the first software library designed to help with LLM evaluation and risk management, enterprises should keep in mind that this is a rapidly evolving space with new players cropping up almost every day. “The current generation of AI models is a black box to almost everyone. Almost no one understands how they do what they do. Phoenix is the first step to building software that helps map out the internals of how these models think and what decisions they are making, designed for the users of LLMs,” the CEO said. He added that over 100 users and researchers at different companies and organizations advised on the development of Phoenix, with initial feedback being quite positive. Christopher Brown, CEO and co-founder of Decision Patterns and a former UC Berkeley lecturer, called the solution a “much-appreciated” advancement in model observability and production. He said the integration of observability utilities directly into the development process not only saves time but encourages model development — and production teams to actively think about model use and ongoing improvements before releasing to production. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,216
2,023
"Datadog brings OpenAI model monitoring into the fold, launches new integration | VentureBeat"
"https://venturebeat.com/ai/datadog-brings-openai-model-monitoring-into-the-fold"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Datadog brings OpenAI model monitoring into the fold, launches new integration Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. New York-based Datadog , which provides a cloud observability platform for enterprise applications and infrastructure, today announced an integration to monitor OpenAI models such as GPT-4. The offering, Datadog says, will help enterprise teams understand user interactions with their GPT-powered applications, ultimately enabling them to fine-tune the models for better performance and economies. The announcement comes as OpenAI’s large language models continue to see adoption across a variety of enterprise-specific use cases, including business-critical areas such as customer service and data querying. How does the OpenAI integration help? Once up and running, the Datadog-OpenAI integration automatically tracks GPT usage patterns, providing teams with actionable insights into model performance and costs via dashboards and alerts. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! For performance, the plugin looks at OpenAI API error rates, rate limits and response times, allowing users to identify and isolate issues within their applications. It also offers the ability to view OpenAI request volumes — along with metrics, traces, and logs containing prompts and corresponding completions — to understand how end customers are interacting with the applications, and to gauge quality of the output generated by their OpenAI models. “Customers can install the integration by instrumenting the OpenAI Python library to emit metrics, traces and logs for requests made to the completions, chat completions and embedded endpoints. Once instrumented, the metrics, traces and logs will be automatically available in the out-of-the-box dashboard provided by Datadog,” Yrieix Garnier, VP of product at Datadog, told VentureBeat. These dashboards can then be customized to drill down further into performance issues and optimize the models for improved user experience, the VP added. On the costs front, Datadog says, the integration allows users to review token allocation by model or service and analyze the associated costs of OpenAI API calls. This can then be used to manage expenses more effectively and avoid unexpected bills for using the service. 
While Garnier did confirm that customers of both companies are testing the integration, he did not share specific results they have witnessed so far. The connector currently works for multiple AI models from OpenAI, including the GPT family of LLMs, Ada, Babbage, Curie and Davinci. New Relic offers something similar New Relic , another player in the observability space, offers a similar OpenAI integration that tracks API response time, average tokens per request and the associated cost. However, Garnier claims Datadog’s offering covers additional elements, like response-time-to-prompt token ratio, as well as metrics providing contextual insights into individual user queries. “Furthermore, for API response times, API requests and other metrics, we allow users to break this down by model, service and API keys. This is critical in order to understand the primary drivers of usage, token consumption and cost,” he noted. Moving ahead, monitoring solutions like these, including those specifically tracking hallucinations , are expected to see an increase in demand, given the meteoric rise of large language models within enterprises. Companies are either using or planning to use LLMs (most prominently those from OpenAI) to accelerate key business functions, from querying their data stack to optimizing customer service. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,217
2,022
"How to choose the right NLP solution | VentureBeat"
"https://venturebeat.com/ai/how-to-choose-the-right-nlp-solution"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How to choose the right NLP solution Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. For decades, enterprises have jury-rigged software designed for structured data when trying to solve unstructured , text-based data problems. Although these solutions performed poorly, there was nothing else. Recently, though, machine learning (ML) has improved significantly at understanding natural language. Unsurprisingly, Silicon Valley is in a mad dash to build market-leading offerings for this new opportunity. Khosla Ventures thinks natural language processing (NLP) is the most important technology trend of the next five years. If the 2000s were about becoming a big data-enabled enterprise, and the 2010s were about becoming a data science-enabled enterprise — then the 2020s are about becoming a natural language-enabled enterprise. To fast-track its transformation to such an enterprise, an organization must establish a viable strategy that aligns with its business objectives and generates business impact. While it may sound like a complex decision that requires an expensive management consulting firm, it’s not. It starts with how you answer two questions: First, who employs the data scientists and machine learning engineers (MLEs)? Second, who builds and operates the underlying ML stack that houses the relevant models and tools? Strategy Option 1 Strategy Option 2 Company employs MLEs and Data Scientists Vendor employs MLEs and Data Scientists Vendor manages ML stack Low-code ML platform and Pre-trained models APIs Company manages ML stack Build your own using open-source elements NLP solutions: Building mature AI A “build your own” strategy allows companies to construct custom ML models on their data. It also minimizes security risks because companies don’t have to share data with external vendors to label or process. If you can pull it off and afford it, “build your own” leads to substantial competitive advantages because you now have a world-class artificial intelligence (AI) team, amplifying productivity in every aspect of the business. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! However, this strategy is by far the most expensive. Building and operating an ML stack is complicated and requires specialized expertise. 
KPMG estimates that to build mature AI capabilities, a company needs to employ at least 500 to 600 full-time AI employees — including a majority who build and operate the ML stack — and pay them a cumulative $100 million to $120 million per year. On top of that, there is no guarantee for success, since productionizing AI is challenging for even the best teams. The “low-code ML platform and pre-trained models” strategy reduces the cost of building mature AI capabilities because the vendor handles the majority of the development and operation of the ML stack. Instead of spending more than $100 million per year, organizations can likely reduce that to $25 million to $50 million annually. This strategy also still allows companies to build custom ML and NLP models. Though, like the previous strategy, there is no guarantee of success because it does not eliminate one of the most complex parts of the full AI process. That is — the handoff of models from the AI team to the business team to actually implement them into production and derive business value. An application programming interface (API) strategy minimizes the hand-off problem, increasing the probability of success in productionizing AI. ML models can be seamlessly integrated into applications because the vendor abstracts the complexity of creating and training these models, and guides the users into the best way of using them. It also reduces the cost of achieving the benefits of NLP since the vendor employs the data scientists and MLEs, and builds and operates the ML stack. Models that are accessible via APIs are built on public datasets and must still be trained and tuned to work on domain and company-specific data. However, if the vendor has implemented the tool properly, this work can be done directly by domain experts without technical skills. Unfortunately, most vendors have not solved this problem, so there is limited feasibility of re-training their large language models to work on customer data without hiring a full staff of MLEs and data scientists to train and maintain over time; it either works or it doesn’t. Where does this leave us? For most enterprises, the best approach to leveraging NLP and becoming a natural language-enabled enterprise would be a strategy that includes APIs. That is — provided that the vendor has enabled the capability for the customer to easily tune and optimize its general-purpose model so it can work on customer data. This would save enterprises tens of millions of dollars every year and accelerate time-to-value. To the extent that the use case calls for a model that can’t be accessed via API and easily tuned, then the next best strategy for most enterprises is the “low-code ML platform and pre-trained models” strategy. While the build-your-own strategy is the least practical strategy for most enterprises, there are, of course, a few companies for which this is the best path to action. After all, according to Gartner : “Enterprises sit on unexploited unstructured data, with opportunities to extract differentiating insights. Data and analytics technical professionals must uncover such insights by applying natural language technology solutions: intelligent document processing, conversational AI and insight engines.” Ryan Welsh is the founder and CEO of Kyndi. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. 
"
14,218
2,022
"Open-source NLP company Deepset nabs $14M to power 'plain-English' enterprise search | VentureBeat"
"https://venturebeat.com/ai/open-source-nlp-company-deepset-nabs-14m-to-power-plain-english-enterprise-search"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Open-source NLP company Deepset nabs $14M to power ‘plain-English’ enterprise search Share on Facebook Share on X Share on LinkedIn Deepset founders, from left to right: Malte Pietsch (CTO), Timo Möller (head of ML) and Milos Rusic (CEO) Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open-source journey! Sign up here. If there’s one thing the enterprise world doesn’t have a shortage of, it’s data. But access to data doesn’t necessarily equate to useful, contextualized information that’s easy to search and derive insights from. The holy grail of information retrieval, arguably, is the ability to search vast data repositories using simple, plain-English (or whatever your mother tongue is) queries — natural language processing (NLP) is the name of the game. And this is something that German company Deepset is setting out to solve, with an open-source NLP framework called Haystack that enables developers to build pipelines for myriad search use-cases. Founded in 2018, Deepset started work on Haystack in 2019, and released the first incarnation of the open-source project the following May. In the near two years since, Haystack has attracted nearly 100 contributing developers from around the world, with thousands of organizations such as Alcatel Lucent using the open-source product, and many companies such as aerospace giant Airbus paying Deepset to provide professional support and services on top of Haystack. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! It was these initial revenues that enabled Deepset to bootstrap its growth over the past few years, and today the Berlin-based company is unveiling a new cloud-based product that ushers Haystack into the modern enterprise software-as-a-service (SaaS) realm. Deepset is also announcing a $14 million series A round of funding led by Alphabet’s venture capital arm GV, with participation from a slew of institutional and angel investors including founders of esteemed companies such as Cockroach Labs, Cloudera, Deepmind, Neo4J, and NGINX. NLP for all So, what kinds of things can developers use Haystack for? Well, anything that involves retrieving information using natural language. 
A company that has built a library of technical documentation for staff to search through, as Alcatel Lucent Enterprise did, can create a chatbot to let technicians ask questions or describe an issue that they’re having, and serve up the best answers from the digital documents. Alternatively, a government could create an NLP-powered search system to make it easier to find information across different internal websites, while a financial services company can automate aspects of their risk-management workflow by allowing auditors to ask questions such as “ How did revenues evolve in the past year ” during a credit approval application. But in truth, Haystack can be used for just about anything that involves a knowledge-base search, such as internal wikis that plug into an extensive arsenal of documents and databases to deliver insights on whatever subject matter is important to an organization. In terms of how developers and companies deploy the technology within their stack, Haystack basically offers a more convenient way of serving NLP models, making it easy to try out models from Hugging Face , and figure out what works for a specific NLP use-case — Haystack presents a more developer-friendly way of building an API-driven backend application, using existing building blocks from the broader NLP realm. “Haystack is built for the modern world of NLP — it is part of an extremely rich and completely open NLP environment that has flourished in the past few years,” Deepset cofounder and CEO Milos Rusic told VentureBeat. “It is very hard to maintain the required level of sophistication with any proprietary solution, there’s so much happening and new [NLP] models, algorithms, and workflows appear practically every day. Haystack allows developers to access the latest outcomes of this open NLP world, and leverage the top-notch building blocks in a practical, rapid, and safe manner.” The Haystack-based NLP is usually deployed atop a text database such as Elasticsearch or Amazon’s OpenSearch fork, and then integrates directly with the end-user application (e.g. in a search bar or chatbot) via a REST API. So, while something like Elasticsearch is a well-established keyword-based search engine for enterprises, Haystack allows developers to add NLP-powered semantic search on top of it, one that understands the actual meaning of the query. For comparison, in a keyword search, the user will likely start with a single word or set of words to narrow down their search to find their desired results — but even then they might not find what they’re looking for, and may have to sift through various tenuously related sources. In Haystack’s neural search domain, results are automatically adjusted based on a deeper understanding of what the person is actually asking. It’s worth noting that in its current guise, Haystack is mostly designed for text-based NLP searches, though users are able to build a custom node for voice-based searches so they can tap into any number of third-party speech-to-text models from Hugging Face or other commercial APIs. But in the coming months, Deepset will be rolling out native support for voice-based searches, according to Rusic. “We will have a dedicated, native node for it [voice search], which will make it easier for developers to do all the other workflows in Haystack and Deepset Cloud, that helps them to build successful voice-based search pipelines,” Rusic said. 
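As a concrete illustration of that pattern, semantic question answering layered on top of an existing Elasticsearch index, the following sketch follows Haystack's documented 1.x-style API. Class names and module paths have moved between Haystack releases, and the index name and question are invented for the example, so treat it as an outline rather than a drop-in snippet.

    # Minimal extractive QA pipeline over an existing Elasticsearch index (Haystack 1.x style).
    # pip install farm-haystack   (assumes an Elasticsearch instance is already running)
    from haystack.document_stores import ElasticsearchDocumentStore
    from haystack.nodes import BM25Retriever, FARMReader
    from haystack.pipelines import ExtractiveQAPipeline

    # Point Haystack at the documents already indexed in Elasticsearch
    document_store = ElasticsearchDocumentStore(host="localhost", index="technical_docs")

    # The retriever narrows the index down to candidate passages;
    # the reader model then extracts the answer span from those passages
    retriever = BM25Retriever(document_store=document_store)
    reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")

    pipeline = ExtractiveQAPipeline(reader=reader, retriever=retriever)
    result = pipeline.run(
        query="How do I reset the controller to factory settings?",
        params={"Retriever": {"top_k": 10}, "Reader": {"top_k": 3}},
    )
    for answer in result["answers"]:
        print(answer.answer, answer.score)

Swapping the keyword-based BM25Retriever for Haystack's embedding-based retriever is what turns this from keyword matching into the meaning-aware semantic search described above.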
Landscape Haystack inhabits a world that includes notable open-source NLP toolkits and frameworks like Spacy and the aforementioned Hugging Face , while it also jives with the likes of semantic search and information retrieval entities such as Vespa , Weaviate , Jina AI , Zilliz. However, Rusic is quick to stress that they are not really like-for-like comparisons. “Due to the design of Haystack, we are not really in competition with those companies but are partnering with them, are often integrated with each other, and also create joint content — like with Huggingface , Weaviate or Zilliz. On the proprietary side, Haystack can perhaps be compared to the likes of Amazon’s AWS Kendra , Microsoft’s Azure Cognitive Search , or Sinequa , but this is where Haystack’s open-source foundations set it apart. Indeed, open source has played a pivotal role not only in the advancement of the internet as we know it, but in the burgeoning AI sphere where trust and transparency is key. “In order to reach mainstream adoption, AI needs to be more approachable,” Rusic explained. “Vendors who claim to have unique AI, models and so on, struggle with large(-scale) adoption due to a lack of trust and transparency. With an open source approach, the core tech is open, benchmarks exist that give an idea about the true performance, as well as research and content is created around the projects that educate the market. All of this is essential to bring AI and NLP to the mainstream.” This also helps companies attain a higher level of independence, as they have greater control over the technologies and systems that make up their stack. “For all disruptive technologies, but especially for AI and NLP, being locked-in is what most enterprises fear,” Rusic continued. “With an open-source technology, this allows [them] to move between vendors or even consider self-hosting systems — this lock-in is way lower, and drives not only the confidence to adopt a technology but is also becoming a requirement.” On top of all that, open-source technology is far easier to customize and tailor to specific applications and use-cases — companies can adapt it to their own unique needs, while developers can tinker with things and really dive under the hood to see what makes it tick. “Many engineers are ‘kinesthetic’ learners — they like to see the code, ‘touch’ it, try things out fast, learn by example, and so on,” Rusic added. “They also like to share their findings, and this is what drives so many open-source communities. Only an open-source approach brings the most of the above, as compared to anything ‘proprietary.'” Deepset Cloud With a fresh $14 million in the bank, Deepset is better positioned to build on top of the open-source foundation it has created with Haystack over the past few years, which is where it’s new enterprise-focused SaaS product enters the mix. Deepset Cloud, available in beta from today, removes many of the practical and technical headaches that companies may otherwise face using Haystack as a standalone open-source project — it’s all about giving developers the tools to build production-ready NLP systems faster. The new SaaS product includes a user interface for designing, deploying, and monitoring NLP pipelines, with support for collaboration and garnering feedback within developer teams, while it packs Kubernetes, databases, and other crucial services “needed to run NLP pipelines at scale” in production environments, according to Rusic. 
“Deepset has offered professional services, support, and hosting of Haystack-based systems before — these revenues allowed the company to bootstrap for three years,” Rusic explained. “Deepset Cloud is born out of the lessons, know-how’s and rich expertise from the early bootstrapping. We learned from the community that not every team has the time to build and manage all the infrastructure around it.” So what’s next for Deepset? “Deepset Cloud will be the sole focus for the next few years, but there are big plans to build the platform out, support more and more workflows, richer NLP use cases, flexible integrations — and make it a unified platform for enterprise to develop any NLP-powered application,” Rusic said. In addition to lead investor GV, Deepset’s series A round included participation from System.One, Harpoon Ventures, Acequia Capital, Spencer Kimball, Alex Ratner , Emil Eifrem , and Mustafa Suleyman. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,219
2,022
"4 steps to closing the cybersecurity skills gap in your organization | VentureBeat"
"https://venturebeat.com/datadecisionmakers/4-steps-to-closing-the-cybersecurity-skills-gap-in-your-organization"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community 4 steps to closing the cybersecurity skills gap in your organization Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The existence of a cybersecurity skills gap is universally accepted throughout business, industry and every other sector. All you have to do is look at the job numbers. The CyberSeek Global Security Heat Map identifies more than 600,000 total cybersecurity job openings just in the United States. Considering that the same tool only identifies a little over one million total employees currently working in cybersecurity, the workforce needs to grow by at least 50% to even come close to filling the demand. Recognizing the shortage of cybersecurity pros is one thing. However, identifying which skills technical teams within your organization are missing is another. And trying to address those gaps is equally hard. Understanding what skills your teams need is the first step toward ensuring they can prevent, detect and respond effectively to threats. It can ensure that development teams bring security controls to the design phase. And it can reduce the impact of cyberattacks , both on your organization and those that use your software. Here are four key steps you can take to identify the skills that are missing in your organization. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 1. Build a cybersecurity competency model Organizations can start by defining the cybersecurity competencies needed for each job within your technical teams, describing the knowledge, skills and abilities (KSA) required to excel in a given position. A well-designed model will identify the KSAs and associated behaviors necessary to establish proficiency, and prioritizes them according to beginner, intermediate or advanced levels. Building a competency model is a careful process. The skill requirements it identifies should be aligned with your organization’s strategic plan, as well as with the National Initiative for Cybersecurity Education (NICE) Cybersecurity Workforce Framework. Before establishing the skills needed for each job role, you should review existing job descriptions, and solicit input from technical team members for their insights. You also could make use of outside sources, such as the Department of Labor’s Occupational Information Network ( O*Net Online ). 
Creating a competency model, evaluating each team member and creating a training plan to increase their cybersecurity skills takes time, but it is well worth the effort. 2. Evaluate and measure cybersecurity competency With a cybersecurity competency model in place, the next step is to see how your technical teams stack up against that model. A thorough assessment of the skills you have on hand will provide a clear view of the organization’s skill gap. It can help determine where training is needed, where resources should be allocated and how to prepare proactively for future threats. You can identify skill sets using a combination of several types of evaluations. Employee self-assessments. Have employees use the model to rate their own proficiency. Surveys or interviews. Asking employees about the skills they have and want to attain can provide some valuable insights. Cybersecurity skills assessments. Use a skills checklist or a hands-on assessment to determine skills that are needed. Performance reviews. Include questions about professional development goals and what employees consider to be their strengths. Work products. Collecting work samples from each team member can help assess their skills. Assess and measure with a scoring rubric. Having knowledgeable managers score employee skills according to a rubric can identify skills gaps. 3. Identify areas of strengths and weaknesses at the team level, as well as skills silos Just as important as assessing individual skills is identifying skills gaps at the team level. A strong team should have a diverse mix of technical, cybersecurity and professional strengths. Assessing the team as a whole can identify a key missing skill — such as a familiarity with penetration testing — that could put the organization at risk. It’s likewise important to identify skills silos, where, for example, only one team member has any knowledge of a priority topic, such as PCI standards. Team evaluations can help you make informed decisions about training and development, prioritizing the skills they need most. 4. Track the effectiveness of your efforts to close the skills gap Once skills needs are identified, organizations can close the gap either by hiring new team members or training existing members. Training can be accomplished through several methods, including instructor training, online courses, mentoring, peer learning, webinars and job shadowing/job sharing. An essential step at this point is to measure the success of your skills strategy. Tracking the number of team members who have acquired new skills is one key metric. Other critical indicators include the overall skill levels of teams and the number of threats averted because of improved skills. Conclusion Closing the cybersecurity skills gap starts with identifying the skills that are missing in your technical teams, then prioritizing the skills your organization most needs and acquiring them through training or hiring. It’s a fairly painstaking process, but necessary to improving your organization’s security posture. Rather than talking about the skills gap, you’ll be doing something about it. Dr. Heather Monthie is the head of cybersecurity training and education at Offensive Security. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. 
"
14,220
2,022
"10 ways analytics improves endpoint security and asset management | VentureBeat"
"https://venturebeat.com/security/10-ways-analytics-improves-endpoint-security-and-asset-management"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 10 ways analytics improves endpoint security and asset management Share on Facebook Share on X Share on LinkedIn This article is part of a VB special issue. Read the full series here: Intelligent Security Achieving greater visibility and control over endpoints is table stakes for any organization pursuing zero-trust security. Human and machine identities are the new security perimeter in any network, and protecting those identities with data-driven insights and intelligence is one of the highest priorities for CISOs today. Knowing the current configuration and condition of every endpoint asset helps to keep patches current and endpoints safe. To underscore how essential endpoint security is to zero trust strategies, the White House published the Federal Zero Trust architecture (ZTA) strategy last month. The strategy states that federal agencies need to ensure that Endpoint Detection and Response (EDR) tools will meet Cybersecurity and Infrastructure Security Agency (CISA) technical requirements and are deployed government-wide. The strategy provides practical, pragmatic advice for securing endpoints that are applicable to any organization, also identifying the need for greater analytics-based visibility across networks. Analytics improve endpoint visibility and control Analytics are proving effective in helping enterprises take on these challenges, becoming a growth catalyst for Endpoint Protection Platform (EPP) and Endpoint Detection and Response (EDR) platform. Enterprises spent $13.3 billion on EPP in 2021, predicted to reach $26.4 billion by 2025 , attaining a compound annual growth rate of 18.7%. By the end of 2025, more than 60% of enterprises will have replaced older antivirus products with combined Endpoint Protection Platforms (EPP) and EDR solutions that supplement prevention with detection and response capabilities according to Gartner. Overall enterprise spending on information security and risk management market is projected to reach $233 billion by 2025 , attaining an 11.2% compound annual growth rate between 2020 and 2025. The following are ten ways analytics improves endpoint security, contributing to more effective zero trust architectures and strategies in the process: Predictive analytics and AI show the potential to become the primary detection method for identifying and stopping malware attacks. AI-based techniques such as algorithms have long contributed to improving endpoint security by identifying potential malware attack patterns. More cybersecurity vendors are designing AI into EPP and EDR platforms as the primary detection method and technology for malware. 
AI-based algorithms can detect file-based malware and learn which files are harmful or not based on the file’s metadata and content. Broadcom’s Content & Malware Analysis illustrates how machine learning is being used to detect and block malware. Their approach combines advanced AI and static code file analysis to detect and analyze threats and stop breach attempts before they can spread. Analytics and AI-based techniques for deriving risk scores based on previous behavioral patterns, time of login, location, and many other quantifiable factors are proving to be effective at securing and controlling access to endpoints. Using AI- and machine learning-based techniques to fine-tune risk scores in milliseconds is proving effective in stopping breach attempts using privileged access credentials. By combining supervised machine learning models that mine historical data to find patterns and unsupervised machine learning to find new anomalies and interrelationships, cybersecurity vendors integrating AI into their platforms are helping to stop breaches. There’s a broad spectrum of cybersecurity vendors either working on or delivering solutions with these technologies, with Microsoft Defender for Endpoint being noteworthy. Microsoft has integrated AI into the Defender platform so its customers can initiate threat hunting across networks, provide real-time threat-monitoring and analysis, detect and respond to advanced attacks with AI-based monitoring, and reduce attack surfaces. Additional vendors providing AI-based endpoint protection include CrowdStrike, Trend Micro, SentinelOne, McAfee, Sophos, VMWare Carbon Black, Broadcom, Cybereason, Ivanti, Kaspersky and others. Integrating predictive analytics, AI and SIEM (Security Information and Event Management) into a single platform enables enterprises to predict, detect and respond to anomalous behaviors and events. Predictive analytics are a core part of SIEM platforms today as they provide automated, real-time correlation and ongoing analysis of all activity observed within a given IT complex. Capturing and analyzing endpoint data in real-time using predictive analytics and AI is providing new insights into asset management and endpoint security. LogRhythm continues to be a leading provider of SIEM platforms for enterprises. The LogRhythm NextGen SIEM Platform relies on predictive analytics and AI-based algorithms to provide automated, real-time analysis and correlation of all activities across an IT environment. Predictive analytics are also helping to keep every endpoint in compliance with regulatory and internal standards. In highly regulated industries including financial services, healthcare and insurance, predictive analytics is increasingly being relied on to discover, classify and protect sensitive data. This is especially the case with HIPAA (Health Insurance Portability and Accountability Act) compliance in healthcare. Amazon Macie is representative of the latest generation of cloud security services. Amazon Macie is often used in workflows aimed at recognizing sensitive data such as personally identifiable information (PII) or intellectual property and provides enterprises with contextual insights that give visibility into how data is being accessed or moved. Amazon Macie monitors data access for any anomalies and creates alerts when it detects the risk of unauthorized access or inadvertent data leaks. 
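The risk-scoring and anomaly-monitoring approaches described above can be illustrated with a small sketch: an unsupervised model is fitted on historical login behavior and then scores new events, with low scores flagged for review. The features, data and threshold are invented for the example; production systems blend far richer signals and supervised models as well.

    # Minimal anomaly-scoring sketch for login events (illustrative features and threshold).
    # pip install scikit-learn numpy
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Historical events: [hour_of_login, failed_attempts, km_from_usual_location]
    history = np.array([
        [9, 0, 2], [10, 1, 5], [8, 0, 1], [18, 0, 3], [9, 0, 4], [11, 0, 2],
    ])
    model = IsolationForest(random_state=0).fit(history)

    # New events: a routine morning login and a 3 a.m. login from 4,200 km away
    new_events = np.array([[10, 0, 3], [3, 6, 4200]])
    scores = model.decision_function(new_events)  # lower scores are more anomalous

    for event, score in zip(new_events, scores):
        print(event.tolist(), round(float(score), 3), "review" if score < 0 else "ok")

In an EPP or EDR product this kind of score is recomputed in milliseconds for each access request and blended with other signals before deciding whether to step up authentication or block the attempt.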
Predictive analytics and AI combined are enabling threat analytics to drive greater precision regarding the risk contexts of privileged users’ behavior, creating notifications of risky activity. Combining predictive analytics and AI is the foundation of the most effective threat analytics engines on the market today. High-risk events are immediately flagged, alerted, notified and elevated to IT’s attention. Machine learning-based threat analytics also provide new insights into privileged user access activity based on real-time data related to unusual recent privilege change, the command runs, target accessed and privilege elevation. Leaders in the area include Broadcom , CrowdStrike , Cybereason , Ivanti , Kaspersky SentinelOne , Microsoft , McAfee , Sophos , VMWare Carbon Black and others. Performing real-time endpoint scans and using predictive analytics to identify potential threats in real-time. CISOs are looking for more effective approaches to achieving Hunt and Respond across diverse device networks with a large number of endpoints. Predictive analytics combined with supervised and unsupervised machine learning algorithms are becoming more ingrained in EPP and EDR platforms, helping to identify and resolve potential threats and breach attempts. Predictive analytics are also being used to discover patterns in known or stable processes where anomalous behavior generates an alert then pauses a given process in real-time. Predictive analytics are table stakes in Unified Endpoint Management (UEM) platforms today. The goal CISOs want to accomplish when they acquire and install a UEM often centers on consolidating the many diverse, often conflicting security apps and tools across their organizations. Today UEM platforms rely on predictive analytics, and in some cases, AI-based systems to deliver greater identity, security and remote access reliability and accuracy. The goal of streamlining UEM apps is to better pursue a zero trust security strategy for the long-term. UEM vendors are concentrating on making the connection between predictive analytics, AI and zero trust, showing how they can support an everywhere workplace. Leading UEM platforms are relying on analytics, AI and machine learning to deliver intelligence-driven experience automation to reduce IT overhead and improve employee experience. Leading UEM vendors include Microsoft , VMWare, Ivanti , IBM , ManageEngine , BlackBerry , Matrix42 and Citrix. Privileged access controls to the API level on endpoints need more analytics-driven adaptive intelligence. Endpoints could benefit from having privileged access controls be more adaptively intelligent. That’s the goal many EPP and EDR vendors are pursuing by replacing their static-based approaches to securing machines with session-based API calls from a vault. Knowing the access patterns of machine-based endpoints and identities relative to human ones reduces false-positives and better secures endpoints from API-based attacks. Using predictive analytics, AI and machine learning to define privileged access control levels and identify potential breach attempts to the API level is the fastest-growing area of R&D in endpoint security today. Predictive analytics combined with AI and machine learning is proving effective battling ransomware, starting with patch management. CISOs see the potential of using predictive analytics to gain pre-emptive insights into how they can best identify the start of a potential ransomware attack across any threat surface. 
As attacks are multifaceted and becoming more complex, the greatest weaknesses enterprises have today is a lack of solid data on patch management progress. Cybersecurity vendors need to concentrate on the long-standing CVEs that cybercriminals keep coming back to and exploiting, using analytics to better understand how CVE gaps can be closed. As ransomware becomes more weaponized , it’s becoming more urgent for EEP and EDR vendors to improve the depth of analytics insight and predictive accuracy of CVD-based attack scenarios. Analytics are proving invaluable for asset management including track and trace of endpoints on or off the network. Every endpoint is another threat surface that needs to be protected. Real-time analytics and a reliable, resilient connection to every endpoint make track-and-trace possible, giving CISOs the visibility and control they need. By combining real-time track-and-trace information with device data, CISOs can find gaps in endpoint security that need to be closed. Having analytics on asset’s health, current patch levels to the OS level and hardware configurations is also invaluable. One of the more interesting vendors is Absolute Software , who provides real-time analytics on the current condition of every endpoint on a network. Absolute’s approach of collaborating with 28 different hardware partners to have their endpoint client integrated at the BIOs level in a wide variety of endpoint devices provides asset management data in real time. Endpoint asset management is an area that private equity and venture capitalists show high interest in, given the increased reliance enterprises have on endpoints that’s driven by rapid growth of virtual workforces and cloud-first business initiatives. Analytics in 2022 and beyond Analytics is defining the future of endpoint protection platforms and is the differentiator from a technology standpoint all vendors are looking to strengthen today. It’s feasible in 2022 there’s going to be heavy merger, acquisition and private equity activity on the part of leaders in the EPP and EDR to address the areas in their product strategies most needing more data-driven insights to remain competitive for the long-term. As the cybersecurity arms race continues to escalate, improving contextual intelligence with analytics, AI and machine learning is key. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,221
2,022
"Introduction to observability: What it is and why it's important | VentureBeat"
"https://venturebeat.com/data-infrastructure/introduction-to-observability-what-is-observability-and-why-is-it-important"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Introduction to observability: What it is and why it’s important Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Observability is critical to the success of any application. However, defining observability is tricky. Some people confuse it with monitoring or logging, and others think it’s essentially about analytics , which is only a part of observability. Observability, when done correctly, gives you incredible insights into the deep internal parts of your system and allows you to ask complex, improvement-focused questions, such as: Where is your system fragile? What are you doing well? What are you doing poorly? What should come next in your product roadmap? Does any code need to be reworked/rewritten? Where are your common points of failure? All these are important questions to ask and can be answered with data-driven information created by implementing good observability practices. In this article, you’ll learn what observability is, why it’s important and what kinds of problems observability helps solve. You’ll also learn about some best practices for observability and how to implement it so that you can start improving your application today. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! What is observability? Observability is how well you know what’s happening inside of your software system without writing new code. If you were asked which of your microservices are experiencing the most errors, what the worst-performing part of your system is, or what the most common frontend error your customers are experiencing, would you be able to answer those questions? If your team has to go away and write code to answer them, it’s fair to say your system isn’t observable. This means that your system constantly becomes a game of whack-a-mole whenever new questions get asked. Why is observability important? Good observability allows you to make data-driven, positive business outcomes. Knowing what to work on, what to improve, and what to ignore can propel your company from success to success and save you time on things your customers don’t care about or aren’t even real issues, such as offering a language on your site that your customers most likely aren’t using. Observability is also vitally important for new software practices. 
In the last few decades, software systems have become increasingly complex; however, monitoring best practices haven’t developed at the same speed. Traditionally, web development was done using something like the LAMP ( Linux , Apache , MySQL , PHP/Perl/Python ) stack, which is one big database with some middleware, a web layer and a caching layer. The LAMP stack is very simple and fairly trivial to debug. All you have to do is load balance all the above to scale, and any issues can be quickly triaged, fixed and released due to the monolithic nature of the application. However, now, software offerings, frameworks, paradigms and libraries have hugely increased the complexity of their systems due to things like cloud infrastructure, distributed microservices , multiple geo locations, multiple languages, multiple software offerings, and container orchestration technology. Observability can help you ask and answer important questions about your software system and all the different states it can go through by observing it. According to Stripe’s The Developer Coefficient report , good observability saves around 42% of a company’s developer time, including debugging and refactoring. What problems does observability help solve? There are numerous benefits when you follow good observability practices and bake them directly into your software system, including the following: Releases are faster When you know more about your system, you can iterate quicker. You save your developers days of debugging vague, random issues. For instance, I have experience working at a multibillion-dollar company with millions of concurrent users. One of the tasks of the whole software team was to look through the logs of the support queue and try to resolve them. However, this was an incredibly difficult task. All the team ever got in the ticket was a stack trace and a count of the error logs. This left the developers essentially looking through the code for hours, trying to track down the most likely reason for the error. There were many cases when the (suspected) reason was fixed, passed QA, and released, but the developer was wrong, and the process had to start all over again. Good observability takes the guesswork out of this process and can offer far more context, data and assistance to resolve issues in your system. Incidents become easier to fix When you have clear insights and data for key parts of your code and business, you provide your developers with the context and information they need to fix things. A company can never fix something they don’t measure. This applies to incidents, too. Having key information, such as the following, allows you to significantly reduce your mean time to recover from an incident: How do you replicate the incident? When does it happen? Is there a workaround? Does a service error occur when you replicate the incident? It helps you decide what to work on As previously stated, with the extra information you gain from good observability practices, you’re able to decide what you need to work on. For instance, if a certain bug affects only 0.001 percent of the customer base, occurs in a rarely used language, and is easily fixed by a refresh, it makes sense to focus on more severe system bugs. This will give you the most bang for your buck regarding the time developers spend on your system, and it allows you to focus on resolving customer issues, ultimately focusing on the user experience. 
With good observability, you’ll know what your customers’ biggest frustrations are, and this information can help drive your product roadmap or bug backlog. Observability best practices There are a few best practices that you should follow when implementing observability, including the following: Three pillars of observability Remember the three pillars of observability : logs, metrics, and traces. These are all different types of time-series data and can help improve your system’s observability. Using a time-series database, like InfluxDB, makes it easier to work with and effectively use these types of data. Each of these serves as a useful and important part of the observability of your system. For instance, logs are time-stamped records of events that occurred in your system. Metrics are numeric representations of data measured over time (i.e., 100 customers used your site over a one-hour period). Traces are a representation of flow-related events through your system (i.e., a customer hitting your landing page, adding a T-shirt to their cart, and then purchasing that shirt). Each of these offers unique and powerful insights into your system and can help you improve it. Conduct A/B testing A/B testing is an important tool to drive improvements in your product and your code. By observing your system, you can make changes to your system/refactoring and directly measure the customer impact. An example would be to move the navigation of your site from the footer to the header, where most sites normally place it. From here, you could measure the time people take to navigate to where they need to go, session duration, or time-to-purchase as a direct result of moving your navigation breadcrumb to the header. You can get rid of the poorly performing version of your test and use your A/B test to drive your positive key performance indicator (KPI) metrics. Don’t throw away context For your system to truly be observable, you need to maintain as much context as possible. Everything happens within the context of time, and time-series data preserves that context. It is also metadata around the events you are observing. Context helps you to better understand the whole picture of an issue you’re facing and leads to speedier resolutions. For instance, if your system starts to get an error at a certain time, context could be the key to truly observing and deciphering the cause. So if your system starts to get an error only on Fridays, you may realize that the errors are being caused by an automated database backup script that also takes place at that time. However, if you haven’t been capturing all the context and information around that specific log, the log in isolation is useless. A solution like InfluxDB can help with storing, managing and using this type of data. Context includes things like the following: The time of your event. The count of your event. The user associated with your event. The day of the event. Maintain unique IDs throughout the system In systems where multiple parts of the system need to communicate, one single event may commonly be aliased. For example, if your frontend page sends a customer to a payment page, you may have a unique ID for the customer that is hard to correlate to the payment they just made. This is considered an anti-pattern. You need to ensure that all the different parts of your system are speaking one unified language. If you don’t, you’ll only ever achieve observability in a portion of your system. 
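A minimal sketch, assuming Python's standard logging module and invented service names, of what speaking one unified language can look like in practice: every service attaches the same request ID and the surrounding context to each event it emits, so the pieces of a single customer action can be correlated later.

    # Minimal sketch: one request ID and its context travel with every event
    import json
    import logging
    import time
    import uuid

    logging.basicConfig(level=logging.INFO, format="%(message)s")

    def log_event(service: str, event: str, **context) -> None:
        # Each record carries which service, which request and when, as structured data
        payload = {"service": service, "event": event, "ts": time.time(), **context}
        logging.getLogger(service).info(json.dumps(payload))

    # One ID minted at the edge and passed to every downstream service
    request_id = str(uuid.uuid4())

    log_event("frontend", "checkout_clicked", request_id=request_id, user_id="u-123")
    log_event("payments", "charge_attempted", request_id=request_id, user_id="u-123", amount=49.99)
    log_event("payments", "charge_failed", request_id=request_id, user_id="u-123", error="card_declined")
    # Filtering all logs on request_id later reconstructs the full story of this one action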
Once it becomes hard to correlate one error between two different systems, you're back to having an unobservable system. Observability vs. monitoring Monitoring and observability are often confused; however, it's important to understand their differences so that you can implement both accurately. Monitoring Monitoring deals with known unknowns. For example, if you know you don't have a lot of information in the API that deals with your payments backend, you can add logs to it in order to monitor that system. Monitoring is generally more reactive and is used to track a particular part of your system. Monitoring is important, but it is different from observability. Observability Observability generally deals with unknown unknowns. For example, you may not even know that you don't have much information in your payments backend system, and this is where observability comes into play. You begin to understand your system more deeply, and when you gain a deep, intricate view of your system, you can identify the holes and where you need to improve. This is less reactive and is normally broadly termed discovery work. Conclusion In this article, you learned about the importance of observability and the questions that commonly come up around it, such as why it's important and what problems it solves. You also learned how observability and monitoring differ. Kealan Parr is a senior software engineer at Amber Labs. "
14,222
2,022
"How CISOs can drive revenue gains and advance their careers | VentureBeat"
"https://venturebeat.com/security/how-cisos-can-drive-revenue-gains-and-advance-their-careers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How CISOs can drive revenue gains and advance their careers Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. One of the quickest ways for a CISO to earn a promotion is to prove that their security team can deliver revenue gains by protecting customers and strengthening their trust. Any organization’s security posture is core to the customer experiences it delivers. Protecting customers’ identities and data can mean the difference between being in business next year and being gone. Forrester Research’s Security and Risk Forum 2022 session provided practical, pragmatic advice and insights to security and risk professionals. It challenged them to take control of cybersecurity initiatives, which is a core competency of their businesses. Two presentations provided insights into how CISOs can deliver more value and advance their careers. One was “Cybersecurity Drives Revenue: How to Win Every Budget Battle” from Jeff Pollard, VP and principal analyst at Forrester. The other was “Communicating Value: A CISO’s Business Acumen Primer” from Chris Gilchrist, also a principal analyst at Forrester. CISOs need to flex their growing influence How trusted and proven a given enterprise’s security posture is affects its revenue and deal pipeline. How close is an enterprise to achieving its zero-trust initiatives, including Multi-Factor Authentication (MFA), Identity Access Management (IAM) and Privileged Access Management (PAM)? The answer will determine if it will qualify for cyber insurance and what the premiums will be. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! And a company must show enterprise buyers that cyber insurance is in place before it will qualify for larger sales opportunities and deals, and before buyers will sign a purchase contract and issue their first purchase orders. “When something touches as much revenue as cybersecurity does, it is a core competency. And you can’t argue that it isn’t,” Pollard said during his presentation on how cybersecurity drives revenue. >>Don’t miss our special issue: Zero trust: The new security paradigm. << CISOs need to flex their growing influence and prove they and their teams can be counted on to help drive revenue. A great way to do that is by focusing their teams on how investments in cybersecurity protect and grow customer trust. 
“This means that security is now a driver of corporate strategy rather than buried as an operational line item only to be managed and measured as a cost. In other words, security now has the latitude to defend and drive growth,” said Gilchrist. “I’m seeing more and more CISOs joining boards. I think this is a great opportunity for everyone here [at Fal.Con] to understand what impact they can have on a company. From a career perspective, it’s great to be part of that boardroom and help them on the journey — to keep business resilient and secure,” George Kurtz, co-founder and CEO of CrowdStrike , said during his keynote at his company’s annual event. He continued, “Adding security should be a business enabler. It should be something that adds to your business resiliency, and it should be something that helps protect the productivity gains of digital transformation.” As cybersecurity is a cost of doing business, CISOs’ roles are now strategic and can turn into board-level positions. CISOs who excel at leading their teams in delivering revenue gains are key to helping boards of directors understand how technology reduces enterprise-wide risk. “While CISOs need to continue working on translating technology and technical risk into business risk, and be able to better deliver that risk story to their board, on the other side of the aisle, we need the board to be able to understand the true implication of cyber risk on the ultimate shareholder value and business goals,” said Lucia Milica, global resident CISO at Proofpoint. Proofpoint’s recent report, Cybersecurity: The 2022 Board Perspective , found that 73% of boards have at least one member with cybersecurity experience. In addition, most board members (77%) believe cybersecurity is a top priority for their board itself. Thus, “the role of the CISO is evolving from technical specialist to the business executive who can understand where business value is coming from and articulate to the board how to protect it,” said Betsy Wille, director of The Cybersecurity Studio and former CISO at Abbott. How CISOs can drive revenue gains A few critical areas CISOs and their teams need to concentrate on to drive revenue include: identifying how cybersecurity practices affect deal flows; reducing barriers to entry into new markets by meeting regulatory requirements; and reducing breach costs. Jeff Pollard’s presentation proposed a four-step approach to identifying the revenue impact of security spending. Identify requirements for security controls. Quantify the overall current contract value and lifetime customer value. Link spending allocations for all controls that satisfy those requirements. Then, total each of those items separately as reasons for security spending allocations. One major benefit of following this framework is that it quantifies the value of reducing customer risks. In addition, CISOs attending board meetings with quantified risk assessments are speaking board members’ language. That’s a great career strategy for earning visibility and promotion. The Forrester methodology’s goal is to determine how much a specific security investment costs per customer, and how much revenue that specific customer segment generates. In essence, the methodology looks at the return on security investment while also quantifying what is at stake if the customer base is unprotected. 
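The arithmetic behind that framing is straightforward to sketch. The snippet below is illustrative only; the control cost and customer count mirror the hypothetical example that follows, and the revenue figure is a placeholder rather than anything from Forrester's model:

# Illustrative only: substitute your own contract, control and revenue figures.
annual_control_cost = 250_000          # yearly spend on one security control
customers_requiring_control = 330      # customers whose contracts require it
annual_revenue_at_stake = 40_000_000   # revenue those customers contribute

cost_per_customer = annual_control_cost / customers_requiring_control
cost_per_revenue_dollar = annual_control_cost / annual_revenue_at_stake

print(f"Cost per protected customer: ${cost_per_customer:,.2f}")
print(f"Security cost per $1 of protected revenue: ${cost_per_revenue_dollar:.4f}")

Framed this way, the question for the CFO becomes whether cutting the control cost is worth exposing the revenue it protects.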
Knowing how many customers rely on an organization to protect their identities by using privileged identity management (PIM), and how much revenue those customers contribute, helps determine what percentage of the security budget needs to be spent on PIM. “We spend Z; they’re responsible for Y revenue. You can also tabulate the revenue that’s at stake if you got rid of that control … if you didn’t have the budget to renew that control, to renew licensing … to support it,” Pollard explained during his presentation. For example, assume 330 customers require enterprise-grade PIM to protect their identities, at an annual cost of $250,000. The cost per customer is $757.58. The analysis then takes the total annual revenue of the customers needing PIM and divides it by the costs of implementing a PIM system, resulting in the costs per revenue of security coverage for the customer base. Thus Forrester’s analysis also delivers value to CISOs by helping them quantify the risk to revenue of not protecting customers adequately. CISOs can use this analysis to protect their budgets by asking if it’s worth putting millions of dollars in revenue at risk by not spending the $250,000 to protect it. Expanding this across all line items in a budget gives a CISO significant bargaining power in negotiations with a CFO and board. It also provides a consolidated financial view of the cost of risks if budgets are cut. Also, for CISOs interested in advancing their careers, risk quantification is what boards of directors focus on today. CISOs need to be bold about delivering value CISOs face a number of challenges, including consolidating their tech stacks, getting more done with fewer people thanks to a chronic security labor shortage, and continuing pressure to cut budgets. Therefore they need a methodology to defend their budgets. As security budgets go, so go the careers of entire departments. Showing how security drives revenue and knowing how to quantify risk is a valuable skill for CISOs and their teams to develop. Boards of directors think and talk in these terms. So CISOs who develop them as a skill set early on will boost their careers and may eventually earn a promotion and a role on the board of directors. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,223
2,023
"Why CISOs need to make software bills of materials (SBOMs) a top priority in 2023 | VentureBeat"
"https://venturebeat.com/security/why-cisos-need-to-make-software-bills-of-materials-sboms-a-top-priority-in-2023"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why CISOs need to make software bills of materials (SBOMs) a top priority in 2023 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Software supply chains are soft targets for attackers looking to capitalize on the lack of transparency, visibility and security of open-source libraries they use for embedding malicious code for wide distribution. Additionally, when companies don’t know where code libraries or packages being used in their software originate from, it creates greater security and compliance risks. The latest Synopsys Open Source Security and Risk Analysis Report found that 97% of commercial code contains open-source code, and 81% contains at least one vulnerability. Additionally, 53% of the codebases analyzed had licensing conflicts, and 85% were at least four years out of date. It’s common for development teams to use libraries and packages found on GitHub and other code repositories. Software bills of materials (SBOMs) are needed to keep track of each open-source software (OSS) and library used during the devops process, including when it enters the software development life cycle (SDLC). Securing software supply chains Software development leaders need to take action and integrate SBOMs throughout their SDLC and workflows to avert the risk of Log4j and comparable infected OSS components corrupting their code and infecting their customers’ systems. Software composition analysis (SCA) and the SBOMs they create provide devops teams with the tools they need to track where open-source components are being used. One of the critical goals of adopting SBOMs is to create and keep inventories current on where and how each open-source component is being used. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “A lack of transparency into what software organizations are buying, acquiring and deploying is the biggest obstacle in improving the security of the supply chain,” said Janet Worthington, senior analyst at Forrester, during a recent interview with VentureBeat. The White House Executive Order 14028 on improving the nation’s cybersecurity requires software vendors to provide an SBOM. EO 14028 concentrates on solving the lack of software supply chain visibility by mandating that the NTIA, NIST and other government agencies provide greater transparency and visibility into the purchasing and procurement process for software throughout its product lifecycle. 
In addition, the executive order mandates that organizations supplying software must provide information on not only direct suppliers but also their suppliers’ suppliers, tier-2, tier-3, and tier-n suppliers. The Cybersecurity and Infrastructure Security Agency (CISA) software bill of materials resource center also provides valuable resources for CISOs getting up to speed in SBOMs. EO 14028 was followed on September 14 of this year with a memorandum authored by the director of the Office of Management and Budget (OMB) to the heads of executive branch departments and agencies addressing the need for enhancing the security of the federal software supply chain further than the executive order called for. “The combination of the executive order and the memo mean SBOMs are going to be important in the not too distant future,” said Matt Rose , ReversingLabs field CISO. What’s most noteworthy about the memorandum is that it requires agencies to obtain self-attestation from software providers that their devops teams follow the secure development processes defined in NIST Secure Software Development Framework (SP 800-218) and the NIST Software Supply Chain Security Guidance. SBOMs help create trusted code at scale Integrating SBOMs throughout devops processes, over and above compliance with EO 14028, ensures that every downstream partner, customer, support organization and government entity receives trustworthy apps built on solid, secure code. SBOMs do more than protect code. They also protect the brands and reputations of the organizations shipping software globally, especially web-based apps and platforms. There’s a growing lack of trust in any code that isn’t documented, especially on the part of government procurement and purchasing organizations. The challenge for many software providers is achieving a more successful shift-left strategy when integrating SBOMs and SCA into their continuous integration/continuous delivery (CI/CD) process. Shift-left security looks to close the gaps attackers look for to inject malicious code into payloads. “CISOs and CIOs increasingly realize that to move fast and achieve business goals, teams need to embrace a secure devops culture. Developing an automated development pipeline allows teams to deploy frequently and confidently because security testing is embedded from the earliest stages. As the result of a security issue escaping to production, having a repeatable pipeline allows for the offending code to be rolled back without impacting other operations,” Worthington advised. CISOs also need to become familiar with the formal definitions of SBOMs now, especially if they’re part of a software supply chain that provides applications to the federal government. Formal standards include Software Package Data Exchange (SPDX) , Software ID Tag (SWID) and CycloneDX. Of these, CycloneDX is the most often used standard. These standards aim to establish a data exchange format and a common infrastructure that shares details about every software package. As a result, organizations adopting these standards find they save time in remediating and solving disconnects while increasing collaboration and the speed of getting joint projects done. For SBOMs, compliance is just the beginning EO 14028 and the follow-on memorandum are just the beginning of compliance requirements that devops teams and their organizations must comply with to be part of the federal government’s software supply chain. 
SBOM requirements from the Federal Energy Regulatory Commission (FERC), Food and Drug Administration (FDA), and the European Union Agency for Cybersecurity (ENISA) are also now requiring SBOM visibility and traceability as a prerequisite for doing business. With SBOMs becoming core to how U.S. and European governments define whom and how they will do business with, CISOs need to make this area a priority in 2023. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,224
2,021
"SentinelOne bolsters its big data analytics with $155 million Scalyr acquisition | VentureBeat"
"https://venturebeat.com/2021/02/09/sentinelone-bolsters-its-big-data-analytics-with-155-million-scalyr-acquisition"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages SentinelOne bolsters its big data analytics with $155 million Scalyr acquisition Share on Facebook Share on X Share on LinkedIn SentinelOne protecting against ransomware attack Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. SentinelOne , an AI-powered cybersecurity platform focused on endpoint protection , has acquired Scalyr , a log management, server monitoring, and event data analytics service founded by former Google engineers in 2011. The $155 million cash and equity deal, which precedes a much-anticipated IPO, represents SentinelOne’s first known acquisition. Mountain View, California-based SentinelOne protects all entry points to a company network, from servers to employee mobile devices, and automatically monitors for malware and script-based attacks, among other exploits. The AI engine is embedded at each endpoint, identifying threats through behavioral analysis rather than relying on signatures for known malware. Scalyr, for its part, offers a cloud-based platform for speedy log management and server monitoring that companies use to monitor and track issues across their infrastructure. SentinelOne has raised around $700 million since its inception in 2013, securing $267 million at a valuation north of $3 billion a few months back. Scalyr had raised nearly $28 million from big-name backers such as Alphabet’s GV. With Scalyr on board, SentinelOne will now be able to ingest and monitor data from any source, extending SentinelOne’s reach beyond endpoint protection and “across the entire enterprise and cloud attack surface,” according to a press release. This will broaden SentinelOne’s ability to work with data from any on-premises data source or cloud, including AWS, Microsoft, and Google Cloud. In other words, this acquisition bolster’s SentinelOne’s big data analytics credentials considerably. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! SentinelOne said it expects to close the acquisition in Q1 2021. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
14,225
2,021
"Hunters advances adoption of its XDR security platform with $30M | VentureBeat"
"https://venturebeat.com/2021/08/24/hunters-advances-adoption-of-its-xdr-security-platform-with-30m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Hunters advances adoption of its XDR security platform with $30M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Hunters , a provider of an extended detection and response (XDR) platform, today revealed it has garnered an additional $30 million in investment to help drive the adoption of a security platform that promises to obviate the need for legacy security information event management (SIEM) platforms. The Hunters XDR platform collects data from a wide range of security tools that is then aggregated on the data management platform from Snowflake residing on the Amazon Web Services (AWS) cloud. That approach enables security analysts to query data residing in a multi-tenant software-as-a-service (SaaS) platform in addition to viewing analytics that are automatically generated in a way that provides more context than a traditional SIEM platform does, said Hunters CEO Uri May. The $30 million in funding is led by Bessemer Venture Partners, with participation from existing investors YL Ventures, Blumberg Capital, Microsoft’s venture fund M12, and U.S. Venture Partners (USVP). The new funding brings the total investment to $50.4 million, which was previously raised from investors that included Okta Ventures and Snowflake. The additional investments are an affirmation of faith in an XDR category poised for mainstream adoption, said Ofer Schreiber, partner at YL Ventures, which co-led the seed round for Hunters alongside Blumberg Capital. “We are now executing on that vision,” he said. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Fine-tuning security analysis SIEM platforms typically require security analysts to know what queries to launch against the data organizations have been able to collect. The Hunters XDR platform has been pre-integrated with a range of security platforms that are already widely employed by enterprise IT environment, said May. That capability eliminates the need for security teams to deploy agent software to collect data that already largely exists in most IT environments, he said. Once that data is aggregated, the Hunters XDR platform is then able to analyze a wide range of alerts and signals to give security analysts more context about threats. The goal is to reduce the overall volume of alerts by synthesizing signals to surface more actionable analytics results, said May, by using graph technologies embedded with the Hunters XDR platform. 
Competition among providers of XDR platforms seeking to supplant SIEM is intensifying. Enterprise IT teams are looking to move beyond SIEM platforms that provide a database of security events that can be queried only if security analysts know the right questions to ask. The challenge is that as attack vectors change and evolve, it’s become even more difficult for cybersecurity analysts to understand which alerts signal an ongoing attack versus, for example, a reconnaissance of the system’s defenses. Ultimately, May said the goal is to enable cybersecurity teams, which are often understaffed, to better prioritize their efforts. Just as importantly, automating the analytics more should reduce the level of cybersecurity expertise required to identify an attack. Justifying investments In general, May said that as enterprise IT environments become more extended, it’s only going to become more difficult to identify cybersecurity attacks. Each new platform added to an IT environment increases the number of alerts that are likely to be generated, he added. “The attack surface is becoming more distributed,” he said. “Alert fatigue is a real problem.” The challenge, of course, is convincing organizations to invest more in cybersecurity. The ongoing onslaught of cybersecurity attacks has many organizations questioning its return on investment. That disillusionment often makes it challenging for cybersecurity teams to convince senior leaders of the company to make additional investments. Like it or not, however, the cybersecurity landscape continues to evolve in ways that require organizations to adapt. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,226
2,022
"SentinelOne XDR enables growing list of top incident response firms | VentureBeat"
"https://venturebeat.com/2022/01/25/sentinelone-xdr-enables-growing-list-of-top-incident-response-firms"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages SentinelOne XDR enables growing list of top incident response firms Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. SentinelOne today announced it has expanded the ranks of its incident response partners with a prominent addition, KPMG, which is utilizing the vendor’s Singularity XDR platform to bring greater automation to its cyber investigations for customers. SentinelOne told VentureBeat that it now has more than 130 incident response (IR) partners in total, up from 29 at the beginning of 2021. The cybersecurity firm — which went public last June and has a market capitalization above $11 billion — began providing technology to enable IR partners in 2020. Other major IR partners for SentinelOne include Kroll, Alvarez & Marsal, and Blackpanda. KPMG’s cyber practice employs 550 security professionals in the U.S. and 5,000 globally, and the firm has been using SentinelOne’s extended detection and response (XDR) technology to aid its investigations of data breaches. In particular, XDR technology from SentinelOne’s acquisition of Scalyr last year has proven to be a “game changer” in terms of automating and accelerating KPMG’s IR work, said David Nides, principal for cyber response services at KPMG. The Scalyr XDR technology brings capabilities for rapidly ingesting and correlating data from endpoints, making it possible for IR investigators to more easily search and query the data, according to SentinelOne and KPMG. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The technology provides “everything in a centralized location for you to query — and to ultimately answer the questions that need to be answered,” Nides told VentureBeat. “How did the attackers get into the environment? What type of unauthorized activities did they perform? Did they remove anything from the environment? For all of those really concerning types of questions — we’re able to get better information, and get it faster.” Growing the ranks KPMG became an IR partner of SentinelOne last July, though the partnership was not disclosed until today. Along with KPMG, IR firms that became SentinelOne partners during the second half of 2021 included BlueVoyant, Orange Cyberdefense, Sopra Steria, Dubex, and UMB AG. Revenue from IR partners grew by four times last year, compared to the year before, said Nicholas Warner, chief operating officer at SentinelOne. 
“We made a strategic decision a few years ago to totally focus on being a solution and technology provider, not a services firm,” Warner said in an interview with VentureBeat. “And so what these large firms, like a KPMG, can be totally assured of is, we’re not competing with them. We’re enabling their business. We’re focused on delivering the right technology for them to supercharge their own incident response and other security services.” SentinelOne’s Singularity XDR leverages AI and machine learning technologies to provide threat mitigation and remediation, as well as ransomware rollback. “Really what makes us different, from a usability perspective, is that we’re far more autonomous,” Warner said. “What we’ve built relies much more heavily on machine learning than any other technology in the space. And what that means is, we require a lot less human intervention.” While less than 5% of organizations are using XDR today, that’s expected to climb to 40% by 2027, according to a recent report from Gartner. Along with SentinelOne, XDR vendors listed by Gartner in the report include Check Point, Cisco, CrowdStrike, Cybereason, Microsoft, Palo Alto Networks, Sophos, and VMware. The report also mentions McAfee Enterprise and FireEye, which merged in October and rebranded as Trellix last week , with the stated goal of focusing on the XDR market. ‘Hyper fast’ analytics When it comes to SentinelOne’s XDR offering, technology from Scalyr is now at the core, Warner said. Scalyr — which SentinelOne acquired in February 2021 for $155 million — was not originally focused on security, however. Scalyr was founded by Google Docs creator Steve Newman, and developed a cloud-native data analytics platform focused on log management and observability. “They weren’t security specialists [but] we felt it was really the best-performant data analytics platform in the space,” Warner said. After spending several quarters integrating the Scalyr technology, the SentinelOne XDR is now powered by Scalyr’s “unbelievably powerful and hyper-fast analytics platform,” he said. “Especially as it relates to IR, that really is what makes it go.” The platform automates the collection of forensic artifacts — digital traces such as browser histories, downloaded files, and event logs — then streams the output into a data lake, where it can be searched and queried as part of a breach investigation. Increased automation At KPMG, bringing automation to this process helps to scale the firm’s collection and review of artifacts, Nides said. The capabilities are critical for investigations of companies that hadn’t been using an endpoint detection and response (EDR) tool, he said. The Scalyr-powered SentinelOne XDR platform essentially allows investigators to “go back in time to answer these really important questions,” Nides said. While a growing number of vendors have begun offering XDR, when it comes to IR use cases such as this, Nides said that the SentinelOne platform with Scalyr’s technology is the first truly “commercial” version of the capability that he’s seen. The bottom line for KPMG, Nides said, is that the technology is helping the firm to “do more incident response.” “There’s a war for talent. And as important as people are in this process—I don’t think you’re ever going to entirely be able to replace people [in IR]—it’s about doing more with the people that you have,” he said. 
“Having technology and automated processes like this just allows us to take on more engagements.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,227
2,022
"With Mandiant, Google can challenge Microsoft's security dominance | VentureBeat"
"https://venturebeat.com/2022/03/08/with-mandiant-google-can-challenge-microsofts-security-dominance"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages With Mandiant, Google can challenge Microsoft’s security dominance Share on Facebook Share on X Share on LinkedIn Mandiant CEO Kevin Mandia said the planned merger between his company and Google Cloud "allows us to be the brains behind so much of those [security] controls that people are depending on." Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Recent years have seen Microsoft emerge as the company to beat in cybersecurity, with an extensive suite of security offerings and an unparalleled view into business applications, cloud workloads and devices. Really, who could take them on? Who would even dare to try? CrowdStrike has taken its shots — and has seen some strong growth that validates that it is a serious challenger to at least some parts of Microsoft’s security business (particularly in endpoint). But Google Cloud may be the first vendor that is truly positioned to challenge the whole of the Microsoft security machine. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Google’s $5.4 billion deal to acquire Mandiant, announced today, will allow Google Cloud to deliver an “end-to-end security operations suite to help enterprises stay protected at every stage of the security lifecycle,” said Phil Venables, CISO at Google Cloud, during a news conference. Now, that sounds a lot like what Microsoft aims to offer enterprise customers, doesn’t it? Mandiant adds a significant amount in terms of security to Google Cloud, far beyond the company’s well-known incident response (IR) service offering. Mandiant’s platform spans threat intelligence, security validation, automated defense, attack surface management and managed defense. And in terms of services, in addition to IR, Mandiant provides strategic readiness, technical assurance and “cyber defense transformation” — i.e., helping customers to develop and mature their security posture. Supporting the SOC Google Cloud’s approach to getting to the outcome of “end-to-end” security for customers is very different than that of Microsoft, however, according to Peter Firstbrook, a research vice president and analyst at Gartner. Microsoft is trying to support all of its own products and services to deliver security to customers, while “Google is a little more interested in supporting the SOC – the security operations center,” Firstbrook said. Google Cloud is thus focused on ensuring that customers “have everything that they need” for their SOC team, he said. 
“So, regardless of what security controls they have in place — whether it’s from Palo Alto or Microsoft or Cisco or Trellix or Zscaler — then they can filter all that information in one place, and make sense of it,” Firstbrook said. “And then they need somebody who can clear those alerts, that is smart enough to do that.” Mandiant helps with that part, too, thanks to its managed services offerings, he noted. During the news conference today, Mandiant CEO Kevin Mandia emphasized the fact that his company will have the freedom to support environments that “use lots of different security technologies to secure themselves.” “I feel this merger between Mandiant and Google Cloud allows us to be the brains behind so much of those controls that people are depending on,” Mandia said. The ultimate offering is Mandiant, plus Google Cloud, plus partnerships with “all the different products that people rely on,” he said. “We can work with your heterogeneous environments — whatever endpoint [security] you’re using, whatever firewall you’re using, whether you’re on-prem or in the cloud, we can take that security telemetry, put it in Chronicle, use Siemplify’s capability to go from alert to fix [and] use Mandiant threat intel to get better telemetry on, ‘here’s what matters most,” Mandia said. He referred to the Google Chronicle security analytics and Siemplify, a provider of security orchestration, automation and response (SOAR) technologies that Google acquired in January. Chronicle and Siemplify are all about “interoperability between a ton of other technologies — [they] work with every firewall company, work with all the endpoint companies, work with logs generated from different applications,” Mandia said. Leveraging partners In a recent interview with VentureBeat, Sunil Potti, vice president and general manager for Google Cloud’s security business, said the contrast between Google Cloud and Microsoft’s approaches to security should be obvious. “Microsoft has been very clear that they want to compete in security against all the partners, and everybody,” Potti said. In terms of solution sets for many different areas within cybersecurity, “Microsoft chose to build all those themselves,” he said. Google, on the other hand, has chosen “a few markets we believe a cloud provider alone should drive,” and is offering first-party products just in those spaces, Potti said. “But around each of those first-party products, we’ll create an ecosystem that leverages partners,” he said. That, again, is “unlike Microsoft, who wants to touch everything,” Potti said. Microsoft declined to comment for this article when reached by VentureBeat. ‘Shot across the bow’ Industry analysts said today that Google Cloud most definitely has had Microsoft in its sights with the deal to acquire Mandiant. Microsoft, in fact, had reportedly been considering making a bid for Mandiant itself before those talks fell through, and Google Cloud stepped in. In the wake of Google’s acquisition of Siemplify in January, “acquiring a strong services provider like Mandiant is the next important step to round out its set of offerings in an effort to lead on security on more than one front,” said Forrester analyst Allie Mellen. “Microsoft has been dominating the security industry for the past several years, and this string of acquisitions by Google shows its interest in playing a bigger role in the industry.” And Mandiant appears to be an excellent choice for enabling such aspirations. 
Mandiant “has a very strong brand and reputation for a reason,” said Hank Thomas, CEO at venture capital firm Strategic Cyber Ventures. “They are the best of the best at what they do. There is no way this doesn’t convince some people to move to the Google Cloud.” In a note to investors today, Daniel Ives, managing director for equity research at Wedbush Securities, said that Mandiant has established itself as the “Navy Seals of cybersecurity” during the past decade. “This deal was a shot across the bow from Google to Microsoft and Amazon with this flagship cybersecurity acquisition of Mandiant,” Ives wrote. Amazon Web Services (AWS) continues to maintain its lead in market share for cloud infrastructure services (at 33%), according to Synergy Research Group, followed by Microsoft Azure at No. 2 (with 21% market share) and Google Cloud at No. 3 (with 10% of the market). Managed services Notably, with Mandiant, Google Cloud will not only be able to compete in the realm of “end-to-end” security — but might actually out-match Microsoft in terms of managed security services. The fact that Microsoft itself had reportedly considered acquiring Mandiant is one indicator of this. Amid the continued cybersecurity talent shortage, the ability to deliver security as a service will only become more essential going forward, Firstbrook said. “Nobody has enough people to do security,” he said. “If you want to sell a [security] product, you have to deliver it as a service now. It’s not enough to just sell software — because most of the buyers don’t have the people that can use that software.” All in all, “we just see a huge interest in managed security services and managed services — because this whole security market is becoming far too complicated for the average organization,” Firstbrook said. And in that vein, Google Cloud’s ultimate goal is to make security essentially “invisible” to customers, Potti said — to “automatically provide a lot of good hygiene under the cover, and only tell you things that you need to pay attention to.” Going forward, true differentiation will be about “how delightful and invisible you make security,” he said. “Because security is a pain right now.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,228
2,022
"Targeted threat intelligence is key to protecting enterprises against cyberattacks | VentureBeat"
"https://venturebeat.com/2022/03/15/targeted-threat-intelligence-key-for-protecting-enterprises-against-cyberattacks"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Targeted threat intelligence is key to protecting enterprises against cyberattacks Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The war against cyberattacks is raging fiercely across the enterprise ecosystem, as cyberattackers continue to evolve with new tactics. Last year, a report by Sophos revealed ransomware-as-a-service (RaaS) attacks increased at a rapid rate in the past 18 months. Another study by Forrester Consulting on behalf of Cyware showed a considerable gap between how fast organizations detect ransomware and the quickness of an attack — highlighting how unprepared many organizations are to identify and mitigate cyberattacks. The Gartner 2022 Audit Plan Hot Spots lists ransomware as one of the 12 key issues auditors will have to grapple with this year. “ Ransomware attacks have become increasingly prevalent and sophisticated,” said Zachary Ginsburg, research director for the Gartner Audit and Risk practice. “Ransomware is resulting in revenue and data loss, compromised data, reputational damage, significant operational disruption and more.” According to Ginsburg, regardless of their size or revenue, organizations should assume they will be targeted with ransomware and examine their prevention, detection, mitigation, response and recovery measures. As ransomware attacks continue to exploit an ever-widening enterprise attack surface, how can organizations win this fierce war against cyberattackers? Cyberint , an Israel-based digital risk protection and threat intelligence company, claims its proprietary Argos Edge technology offers an answer, by giving enterprises real-time actionable threat intelligence alerts that help IT teams protect digital assets beyond the traditional security perimeters. Yochai Corem, CEO at Cyberint, told VentureBeat that for organizations to stay protected against attacks, they need to know the exact channels threat actors use for communicating and interacting. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Threat detection and mitigation becomes difficult when organizations are unable to do this swiftly and effectively, according to Corem. He said Cyberint’s proprietary machine learning (ML) algorithms continuously monitor and automatically identify threat actors, enabling security teams to swiftly identify targeted cyberattacks against their organization. 
A searchable database for enhanced threat intelligence Corem said there are different types of malware operated as a service that can be bought and distributed easily, enabling malicious actors to infect machines and steal credentials. “Threat vectors are linking from one source to another — from the dark web, to Telegram channels and many more,” he said, adding that Cyberint can continuously monitor and automatically identify millions of linkages from threat actors with the technology the company has built from over ten years of research and development. “ML and AI enable us to automatically classify over a billion pieces of data and verify them, looking at those that are most critical and most relevant to the problem our customers are attempting to solve,” he said. “So, for example, out of the 14 million pieces of data we collected in January, I can actually go and look for exposed credentials like credit cards and see the exact attack tools or methods that were used to get them.” Cyberint claims it has data that no one else does because it created a searchable database of the dark web. It also infiltrated hacker groups on Telegram to gain intelligence on RaaS families and threats across millions of machines around the world. Corem said Cyberint’s platform continuously scans the entire internet to identify which IPs and domains relate to the company’s customers, and then verifies that there is no open window with access a threat actor can explore and exploit. “Every attack starts with reconnaissance — information gathering — and then exploitation,” he said. “Our goal as a company is to identify weaknesses in an organization’s attack surface via our unique attack surface management models, providing actionable insights that address any exposure and ensure critical assets are protected.” Ransomware predictions for 2022 A report by the Cyberint research team showed that the United States is one of the top targeted countries for ransomware attacks. “The report further revealed an overall number of 2,845 ransomware cases last year, with the industrial energy, retail and finance sectors as the top three sectors hit by successful campaigns,” he said. Corem said ransomware attacks will continue to grow in 2022, as Cyberint saw an 84% increase in ransomware cases in the second half of 2021, compared to the first half of the year. “There’s a RaaS competition today, with our report showing the Conti ransomware gang as leader of the competition,” said Corem. “And even if organizations have the best endpoint security and the best antivirus firewalls, attackers can still infiltrate their systems using several techniques.” Companies need to be “super-focused” on how they protect their assets, he added: “They need support from experts like us.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,229
2,022
"Cloudera claims its new 'all-in-one' data lakehouse cuts ownership costs by up to 35% | VentureBeat"
"https://venturebeat.com/data-infrastructure/cloudera-claims-its-new-all-in-one-data-lakehouse-cuts-ownership-costs-by-up-to-35"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cloudera claims its new ‘all-in-one’ data lakehouse cuts ownership costs by up to 35% Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Santa Clara-based data company, Cloudera , today announced the launch of Cloudera Data Platform (CDP) One, a new all-in-one data lakehouse software-as-a-service (SaaS). The offering complements the 2019-launched flagship CDP product and provides enterprises with a single, centralized data platform that includes built-in cloud compute, cloud storage, machine learning (ML), streaming data analytics and enterprise-grade security. It’s designed to deliver capabilities for the entire data lifecycle, enabling every enterprise user to perform ad-hoc and highly customizable analysis, as well as exploratory data science on any type of data (structured or unstructured ). This ultimately gives enterprise users a way to get faster and easier access to mission-critical business insights. “Empowering everyone in your business to get the real-time insights they need to make the right decisions requires building a truly modern data architecture ( lakehouse ) in the cloud,” Ram Venkatesh, CTO at Cloudera, said. “Many businesses don’t have the resources, time or expertise to make this transformation happen. CDP One [shaves] months or even years from implementation timelines and [provides] comprehensive data security.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Cloudera’s continuous optimization The all-in-one lakehouse not only provides a unified service to reduce enterprises’ time to insights – making data users more productive – but also cuts down the total cost of ownership (TCO) and risk. This is because the platform is continuously optimized for health, security, compliance and performance, without roping in a specialized cloud, security or monitoring operations staff. According to Cloudera, in comparison to do-it-yourself cloud solutions, CDP One reduces the overall TCO by 20% to 35% when including the initial setup and operations of platform ops, secops and support. The company also claims that the service uses the same Cloudera technology that was rated better for analytic and operational use cases than point product vendors or point services from hyperscalers. The data lakehouse is the latest paradigm shift in the data industry, where capabilities of both data warehouse and lake are offered through a single platform. 
Companies like Databricks, Snowflake and Dremio (which recently launched a free lakehouse ) are racing in this segment. However, according to Cloudera, most of its competitors in this space solve only part of the problem or lock users into a limited number of analytic tools – which is not the case here. CDP One availability The new offering is currently accessible to select enterprises, including travel management company CWT, and will become widely available later this year. It’s deployed on a private, single-tenant cloud infrastructure managed by Cloudera. “We needed to build a data lakehouse to enable more users to run analytics on their complex and sensitive data, but [who] didn’t have the right expertise to manage it or time to hire additional resources,” Gordon Coale, senior director and enterprise architect for data at CWT, said. “CDP One rapidly delivered secure and compliant global data science and advanced analytics. The solution was ready to accept data in just two days, and some use cases went into production in just four weeks. And we did not need any new staff to make this happen.” The global demand for solutions like CDP One is only expected to increase as the volume of enterprise data grows across multiple cloud environments and on-premises locations. According to IDC, the amount of data in the world will grow from 33 Zettabytes (Zb) in 2018 to 175 Zb by 2025. That’s 175 trillion USB sticks with a 1GB capacity. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,230
2,022
"Onehouse brings a fully-managed lakehouse to Apache Hudi | VentureBeat"
"https://venturebeat.com/data-infrastructure/onehouse-brings-a-fully-managed-lakehouse-to-apache-hudi"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Onehouse brings a fully-managed lakehouse to Apache Hudi Share on Facebook Share on X Share on LinkedIn Onehouse billboard Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open source journey! Sign up here. Much has been written about the respective benefits of date warehouses and date lakes. The former is generally a better solution for housing transformed, structured data that’s easier to query. The latter, however, is best for unstructured data in its purest, most flexible form that can be queried in more-or-less limitless ways as a company’s needs evolve. There are pros and cons to both, of course, but a third data management architecture has recently emerged that meshes the best of lakes and warehouses — it’s called, somewhat predictably, a data lakehouse. Built on an open data architecture, a data lakehouse can manage all data formats, including structured, unstructured, or semi-structured. Addiitonally data lakehouses can support multiple data workloads, and can be deployed on top of low-cost cloud storage solutions, similar to a data lake. With that in mind, a new company called Onehouse emerged from stealth this week with a mission to bring the benefits of data lakehouses to the enterprise. It plans to do this by selling a managed service on top of the Apache Hudi open source project, which was developed internally at Uber back in 2016 to bring data warehouse-like functionality to data lakes. In the intervening years, Hudi has been adopted by major companies such as Amazon, Disney, and Bytedance. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Former Uber data architect and Hudi creator Vinoth Chandar set up Onehouse in early 2021, building on the community that has sprung up around the open source project over the previous five years — it now claims in the region of 1 million downloads each month. Onehouse officially launched out of stealth yesterday with $8 million in seed funding from Greylock Ventures and Addition. “By combining breakthrough technology and a fully-managed easy-to-use service, organizations can build data lakes in minutes, not months, realize large cost savings and still own their data in open formats,” Chandar wrote. 
“Onehouse aims to be the bedrock of your data infrastructure as the one home for all of your data.” Data challenges While data has often been referred to as the “ new oil ,” companies often have difficulties scaling their data architectures as they grow. They may start with a data warehouse for simpler business intelligence and analytics use-cases, but as their data increases and needs evolve — particularly relating to AI and machine learning workloads — companies typically turn to a data lake, given that it’s cheaper to store data and can run more complex and advanced queries. But this comes at a cost. “The investment in a lake comes with a whole new set of challenges around concurrency, performance, and a lack of mature data management,” Chandar said. “Most companies end up living between a rock and a hard place, juggling data across both a lake and a warehouse.” Hudi goes some way toward solving that problem by bringing key warehouse features to data lakes, such as transactions, indexing, and scalable metadata. And that, essentially, is what Onehouse is looking to capitalize on. While any well-resourced company could take Hudi and deploy it themselves, it requires a lot of time and effort — building a data lake, or a data lakehouse, can take months. Onehouse takes much of the spadework out of that, by offering a cloud-native managed service that ingests, self-manages, and optimizes the data automatically. “While a warehouse can just be ‘used’, a lakehouse still needs to be ‘built’,” Chandar noted. “Having worked with many organizations on that journey for four years in the Apache Hudi community, we believe Onehouse will enable easy adoption of data lakes and future-proof the data architecture for machine learning and data science down the line.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
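For readers who want to see the warehouse-style behavior attributed to Hudi in concrete terms, here is a minimal PySpark sketch of a record-level upsert on plain object storage. It is illustrative only: it assumes a Spark session launched with the open source Hudi bundle, and the table name, storage path and field names are invented for the example rather than taken from Onehouse or Uber.

```python
# Minimal sketch (not Onehouse's product) of the warehouse-style upserts Apache Hudi
# brings to a data lake. Assumes Spark was launched with the Hudi bundle, e.g.
#   spark-submit --packages org.apache.hudi:hudi-spark3.3-bundle_2.12:<version> ...
# The table name, storage path and field names below are invented for illustration.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("hudi-upsert-sketch")
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

# A batch of changed records destined for an existing "rides" table on object storage.
updates = spark.createDataFrame(
    [("ride-001", "2022-02-03 10:15:00", "SFO", 27.5)],
    ["ride_id", "event_ts", "city", "fare"],
)

hudi_options = {
    "hoodie.table.name": "rides",
    "hoodie.datasource.write.recordkey.field": "ride_id",     # acts like a primary key
    "hoodie.datasource.write.precombine.field": "event_ts",   # newest record wins on conflict
    "hoodie.datasource.write.partitionpath.field": "city",
    "hoodie.datasource.write.operation": "upsert",            # update-in-place on the lake
}

# Each write lands as an atomic commit on Hudi's timeline, so readers never see partial data.
updates.write.format("hudi").options(**hudi_options).mode("append").save(
    "s3://example-bucket/lake/rides"
)

# The table reads back like any other Spark source.
spark.read.format("hudi").load("s3://example-bucket/lake/rides").show()
```

The record key and precombine field are what give the lake its database-like update semantics: a later write with the same ride_id replaces the earlier row instead of piling up duplicates.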
14,231
2,022
"Get the most value from your data with data lakehouse architecture | VentureBeat"
"https://venturebeat.com/datadecisionmakers/get-the-most-value-from-your-data-with-data-lakehouse-architecture"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Get the most value from your data with data lakehouse architecture Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Gunasekaran S., director of data engineering at Sigmoid. Over the years, cloud data lake and warehousing architectures have helped enterprises scale their data management efforts while lowering costs. Conventionally, the steps in the data management architecture typically include enterprise data extraction from operational data repositories and storing them in a raw data lake. The next step is to execute another round of ETL processes to shift critical subsets of this data into a data warehouse to generate business insights for decision-making. However, the current set-up has several challenges, such as: Lack of consistency: Companies may often find it difficult to keep their data lake and data warehouse architecture consistent. It is not just a costly affair, but teams also need to employ continuous data engineering tactics to ETL/ELT data between the two systems. Each step can introduce failures and unwanted bugs affecting the overall data quality. Constantly changing datasets: The data stored in a data warehouse may not be as current as the data in a data lake which depends upon the data pipeline schedule and frequency. Vendor lock-in: Shifting large volumes of data into a centralized EDW becomes quite challenging for companies not only because of the time and resource required to execute such a task but also because this architecture creates a closed-loop causing vendor lock-in. Additionally, data stored in the warehouses is also harder to share with all data end-users within an organization. Poor maintainability : With data lakes and data warehouses, companies need to maintain multiple systems and facilitate synchronization which makes the system complex and difficult to maintain in the long run. Data governance: While the data in the data lake tend to be mostly in different file-based formats, a data warehouse is mostly in database format, and it adds to the complexity in terms of data governance and lineage. Advanced analytics limitations: Advanced machine learning applications such as PyTorch and TensorFlow aren’t fully compatible with data warehouses. These applications fetch data from data lakes where the data quality is often not governed. Data copies and associated costs : Data available in data lakes and data warehouses leads to an extent of data copies and has associated costs. 
Moreover, commercial warehouse data in proprietary formats increases the cost of migrating data. A data lakehouse addresses these typical limitations of a data lake and data warehouse architecture by combining the best elements of both data warehouses and data lakes to deliver significant value for organizations. The data lakehouse: A brief overview A data lakehouse is essentially the next breed of cloud data lake and warehousing architecture that combines the best of both worlds. It is an architectural approach for managing all data formats (structured, semi-structured, or unstructured) as well as supporting multiple data workloads (data warehouse, BI, AI/ML, and streaming). Data lakehouses are underpinned by a new open system architecture that allows data teams to implement data structures through smart data management features similar to data warehouses over a low-cost storage platform that is similar to the ones used in data lakes. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! A data lakehouse architecture allows data teams to glean insights faster as they have the opportunity to harness data without accessing multiple systems. A data lakehouse architecture can also help companies ensure that data teams have the most accurate and updated data at their disposal for mission-critical machine learning, enterprise analytics initiatives, and reporting purposes. The benefits of data lakehouse There are several reasons to look at modern data lakehouse architecture in order to drive sustainable data management practices. The following are some of the key factors that make data lakehouse an ideal option for enterprise data storage initiatives: Data quality delivered through simplified schema : A data lakehouse comes with a dual-layered architecture where a warehouse layer is embedded over a data lake enforcing schema which provides data quality and control and orchestrates faster BI and reporting. Reduction of data drift : A data lakehouse architecture mitigates the need for multiple data copies and significantly reduces challenges related to data drift. Faster query: Faster interactive query coupled with true data democratization facilitates more informed decision-making. The architecture allows data scientists, engineers, and analysts to quickly access the required data. This results in a faster time-to-insight cycle. Effective administration: By implementing a data lakehouse architecture, companies can help their data teams save significant time and effort because it requires less time and resources in storing and processing data and delivering business insights. In fact, a single platform for data management instituted through a data lakehouse can reduce significant administrative burdens as well. Seamless data governance: A data lakehouse serves as a single source, thereby allowing data teams to embed advanced features such as audit logging and access control. Effective data access and data security : Data lakehouses provide data teams with the option to maintain the right access controls and encryption across pipelines for data integrity. Additionally, in a data lakehouse model, data teams are not required to manage security for all data copies which makes security administration a lot easier and cost-effective. 
Low chances of data redundancy: A data lakehouse architecture mitigates the need for multiple data copies required in processes of implementing data lakes and data warehouses, thereby reducing data drift. High scalability: A data lakehouse offers high scalability of both data and metadata. This allows companies to run critical analytics projects with a fast time-to-insight cycle. Emerging data lakehouse patterns The Azure Databricks Lakehouse and Snowflake are the two leading lakehouse platforms that companies can leverage for their data management initiatives. However, the decision to opt for one should be based on a company’s requirements. There are several companies that leverage these platforms together, including Databricks for data processing and Snowflake for data warehousing capabilities. Over time, both these platforms have gradually started building on the capabilities that the other has to offer in the quest to emerge as a platform of choice for multiple workloads. Now, let’s have a look at these distinct lakehouse patterns and how they have evolved over time. Databricks: A data processing engine on data lakes adding data lakehouse capabilities Databricks is essentially an Apache Spark-driven data processing tool that provides data teams with an agile programming environment with auto-scalable computing capability. Companies need to just pay for the computational resources in use. The Databricks platform is best suited for data processing at early stages in the pipeline where there is a need to prepare and ingest data. Companies can also leverage it to prepare data for transformation and enrichment but it falls short when it comes to processing data for reporting. In the last few years, Databricks has focused on building capabilities around traditional data warehouses. The platform comes with a built-in DQL-query interface and intuitive visualization features. Apart from this, Databricks also comes with a table structure that is similar to a database which is specifically developed in Delta file format. This format is leveraged to add database capabilities into data lakes. The format allows for data versioning through ACID transactions and schema. Key differentiators of the Azure Databricks lakehouse Comes with a ready-to-use spark environment with no need for configuration Embedded open-source Delta Lake technology that serves as an additional storage layer Delivers better performance by consolidating smaller files in Delta tables ACID functionality in Delta table helps ensure complete data security Has several language options such as Scala, Python, R, Java, and SQL Platform supports interactive data analysis with notebook-style coding Provides seamless integration options with other cloud platform services such as Blob Storage, Azure Data Factory, and Azure DevOps Provides open source library support Snowflake: Cloud data warehouse extending to address data lake capabilities Unlike Databricks, Snowflake transformed the data warehousing space a few years back by offering computation capability which is highly scalable and distributed. The platform achieved this by separating storage and processing capability in a data warehouse ecosystem. This is one of the approaches that Snowflake embraced in expanding the solution in the data lake space. Over the years, Snowflake has been gradually expanding its ELT capabilities, allowing companies to run their ELT processes in conjunction with the platform. 
For instance, while some companies leverage Snowflake Streams and Tasks to complete SQL tasks in Snowflake, others “dbt” with Snowflake. Key differentiators of the Snowflake data lakehouse Comes with built-in export and query tools The platform can seamlessly connect with BI tools such as Metabase, Tableau, PowerBI, and more The platform supports JSON format for querying and output of data Provides secured and compressed storage options for semi-structured data Can be connected easily with Object Storage like Amazon S3 Comes with granular security to deliver maximum data integrity There’s no noticeable limit to the size of a query Presence of standard SQL dialect and robust function library Comes with virtual warehouses that allow data teams to separate and categorize workloads according to requirements Promotes secure data sharing and simple integration with other cloud technologies Dremio and Firebolt – SQL lakehouse engine on data lake Besides Snowflake and Databricks, data lakehouse tools such as Dremio and Firebolt are also coming up with advanced querying capabilities. Dremio’s SQL Lakehouse platform, for instance, has the capability to deliver high-performance dashboards and intuitive analytics directly on any data lake storage, thereby eliminating the need for a data warehouse. Similarly, Firebolt comes with advanced indexing capabilities which helps data teams shrink data access down to data ranges that are even smaller than partitions. An evolution over cloud data lakes and warehouses A data lakehouse is an evolution over cloud data lake and warehousing architectures that provides data teams with an opportunity to capitalize on the best of both worlds while mitigating all historical data management weaknesses. When done right, a data lakehouse initiative can free up the data and enable a company to use it the way it wants and at the desired speed. Going forward, as cloud data warehouse and data lake architectures converge, companies may soon find vendors that combine all the capabilities of all the data lakehouse tools. This may open up endless opportunities when it comes to building and managing data pipelines. Gunasekaran S is the director of data engineering at Sigmoid. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
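To make the Delta Lake points above concrete (ACID transactions, schema, and data versioning layered over low-cost storage), here is a minimal, hypothetical PySpark sketch. It assumes the open source delta-spark package is available on the cluster; the path and columns are placeholders rather than anything from the article.

```python
# A minimal, hypothetical sketch of the Delta Lake behaviour described above: ACID
# record-level updates plus versioned "time travel" reads over plain object storage.
# Assumes the open source delta-spark package is installed; path and columns are placeholders.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = (
    SparkSession.builder.appName("delta-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

path = "s3://example-bucket/lake/orders"

# Commit 0: create the table.
spark.createDataFrame([(1, "open"), (2, "open")], ["order_id", "status"]) \
    .write.format("delta").mode("overwrite").save(path)

# Commit 1: an ACID, record-level update, something a raw Parquet lake cannot do safely.
DeltaTable.forPath(spark, path).update(
    condition="order_id = 2",
    set={"status": "'shipped'"},
)

# Time travel: read the table exactly as it looked before the update.
spark.read.format("delta").option("versionAsOf", 0).load(path).show()
```

Version 0 stays readable after the update because every commit is appended to the table's transaction log, which is also what makes concurrent reads and writes safe.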
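The Snowflake differentiators listed above (secure storage for semi-structured data, standard SQL, JSON querying) can be illustrated in the same spirit with the Python connector. Everything here, from the account and credentials to the raw_events table and its VARIANT payload column, is a made-up placeholder.

```python
# Illustrative only: querying semi-structured JSON stored in a Snowflake VARIANT column,
# the kind of data-lake-style workload credited to Snowflake above. The account,
# credentials and the raw_events table are placeholders, not details from the article.
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="LAKEHOUSE",
    schema="PUBLIC",
)
cur = conn.cursor()

# Colon paths reach into the JSON document; ::STRING / ::FLOAT cast the extracted values.
cur.execute("""
    SELECT payload:device:id::STRING AS device_id,
           AVG(payload:reading::FLOAT) AS avg_reading
    FROM   raw_events
    GROUP  BY 1
""")
for device_id, avg_reading in cur.fetchall():
    print(device_id, avg_reading)

cur.close()
conn.close()
```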
14,232
2,021
"Google debuts new data-powered cloud analytics products | VentureBeat"
"https://venturebeat.com/business/google-debuts-new-data-powered-cloud-analytics-products"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google debuts new data-powered cloud analytics products Share on Facebook Share on X Share on LinkedIn (Photo by Adam Berry/Getty Images) Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today during its Google Cloud Next 2021 conference, Google unveiled a range of data-focused products including Intelligent Product Essentials and enhancements to Vertex AI, BigQuery, Contact Center AI (CCAI), and DocAI. The new analytics and industry solutions are designed to simplify how organizations derive value from data, Google says — whether they’re developing a new product or enhancing existing ones. AI adoption and analytics are rising during the pandemic, with 20% of companies claiming they’ve boosted their usage of business analytics compared with the global average. But while 97% of execs say data science is “crucial” to maintaining profitability, several major challenges stand in the way. A Dremio report found that only 22% of data leaders have realized a return on investment in data management in the past two years. “The focus on intelligent products that Google Cloud is [launching] provides a digital option for [customers],” IDC group VP Kevin Prouty said in a statement. “IDC sees faster and more effective decision-making as the fundamental reason for the drive to digitize products and processes. It’s how you can make faster and more effective decisions to meet heightened customer expectations, generate faster cash flow, and better revenue realization.” Intelligent Product Essentials Intelligent Product Essentials aims to assist manufacturers in developing hardware products. With it, they’re able to deliver AI-enabled devices that can update over-the-air and provide insights using analytics in the cloud, according to Google. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Intelligent Product Essentials can be used to create personalized customer experiences — for example, a chatbot that contextualizes responses based on product status and customer profiles. The service can also deploy updates to products in the field and gather performance insights, as well as evolve capabilities over time with monetization opportunities. Intelligent Product Essentials predicts parts and service issues, detecting operating thresholds, anomalies, and failures so it can proactively recommend service using AI. 
Customers can leverage the offering to connect and ingest raw or time-series product telemetry from various device platforms to support over-the-air updates. In addition, Intelligent Product Essentials lets developers build companion apps that work on smartphones, tablets, and computers using a prebuilt API that incorporates product and security, device registration, and app behavior analytics. “Intelligent Product Essentials [can] manage, update and analyze fleets of connected products via APIs,” Google wrote in a blog post. “[Companies can] create new features or capabilities for [their] products using AI and machine learning … [and] integrate data sources such as enterprise asset management, enterprise resource planning, customer relationship management, systems and others.” Vertex AI, BigQuery, and Spark Google introduced Vertex AI , a managed AI platform, in May at Google I/O 2021. Today, it’s expanding the service with Vertex AI Workbench, a user experience to build and deploy AI models faster, accelerating time-to-value for data scientists and their organizations. Data scientists spend the bulk of their time cleaning and organizing data, according to a 2016 survey conducted by CrowdFlower. In a recent Alation report , a majority of respondents (87%) pegged data quality issues as the reason their organizations failed to implement AI. That’s perhaps why firms like Markets and Markets anticipate that the data prep industry, which includes companies that offer data cataloging and curation tools, will be worth upwards of $3.9 billion by the end of 2021. Whereas Vertex AI is designed to help companies accelerate the deployment and maintenance of AI models, Workbench focuses specifically on integrating data engineering capabilities into the data science environment. Workbench incorporates Dataproc, BigQuery, Dataplex, Looker, and other Google Cloud services, facilitating the ingestion and analysis of data from a single interface. “Delivered through managed notebooks, these capabilities help data scientists rapidly build workflows and perform the coordination, transformations, security, and machine learning operations, all within Vertex AI,” Google wrote. On the BigQuery side, Google is making generally available BigQuery Omni , which allows businesses to analyze data across Google Cloud, Amazon Web Services, and Microsoft Azure. The managed, cross-cloud analytics solution helps to answer questions and share results from a single pane of glass across datasets, complementing Google’s Dataplex service (which will be generally available this quarter) to make data accessible to more analytics tools. Google also today announced a preview of Spark on Google Cloud, which the company claims is the world’s first autoscaling and serverless Spark service for Google Cloud. It allows data engineers, data scientists, and data analysts to use Spark from their preferred interfaces, writing apps and pipelines that autoscale without manual infrastructure provisioning or tuning. Looker and Spanner To complement the rest of its data-focused offerings, Google is continuing to make Cloud Spanner , its fully managed, relational database, available to customers via a PostgreSQL interface (in preview). The interface supports several popular PostgreSQL data types and SQL features, allowing schemas and queries built against the PostgreSQL interface to be ported to another Postgres environment. 
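As a concrete illustration of the single-interface analysis that Vertex AI Workbench notebooks and BigQuery Omni are meant to enable, the snippet below runs a query through the BigQuery Python client. The project, dataset and table names are hypothetical, and credentials are assumed to come from Application Default Credentials.

```python
# A small, hypothetical example of querying BigQuery from Python, the kind of
# single-interface analysis that Vertex AI Workbench notebooks and BigQuery Omni
# build on. Project, dataset and table names are placeholders; credentials are
# assumed to come from Application Default Credentials.
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client(project="my-project")

query = """
    SELECT device_type, COUNT(*) AS sessions
    FROM `my-project.analytics.events`
    WHERE event_date >= '2021-10-01'
    GROUP BY device_type
    ORDER BY sessions DESC
"""

# query() submits a job; result() blocks until it finishes and returns the rows.
for row in client.query(query).result():
    print(row["device_type"], row["sessions"])
```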
Beyond this, Google debuted new integrations with Looker that it says will allow customers to “operationalize analytics” and more effectively scale deployments. Tableau customers and Connected Sheets users will soon be able to leverage Looker’s semantic model, with the Connect Sheets integration launching in preview by the end of the year. Looker’s new solution for CCAI will help to contextualize support calls coming in to enterprise call centers. And the forthcoming Looker Block for Healthcare NLP API, which is compatible with the Fast Healthcare Interoperability Resources (FHIR) , will provide health care providers, payers, and pharma companies access to insights from unstructured medical text from clinical sources. Google Earth Engine Touching on the geospatial, Google unveiled Google Earth Engine on Google Cloud, which makes Google Earth Engine’s catalog of over 50 petabytes of satellite imagery and geospatial datasets available for analysis. Google says that Google Cloud customers will be able to integrate Earth Engine with BigQuery, Google Maps Platform, and Google Cloud’s AI technologies, giving data teams “a way to better understand how the world is changing and what actions they can take” — from saving energy costs to understanding business risks and serving customer needs. Investments in “green” practices aren’t just beneficial for the environment — they make business sense. According to a 2017 study on corporate social responsibility, 87% of consumers have a more positive image of companies that support social or environmental issues. Moreover, 87% say they’d buy a product with a social and environmental benefit, and 88% would more loyal to a company that supports those efforts. “For over a decade, Earth Engine has supported the work of researchers and nongovernmental organizations from around the world, and this new integration brings the best of Google and Google Cloud together to empower enterprises to create a sustainable future for our planet and for your business,” Google wrote. CCAI and DocAI Google Cloud’s CCAI, which offers AI-powered virtual agents and other features, entered general availability in 2019, while the company’s AI-powered document processing service DocAI rolled out in April. Now, the two services are each gaining new features in CCAI Insights and Contract DocAI. CCAI Insights provides out-of-the-box and custom data modeling techniques, and Contract DocAI — now in preview — brings features purpose-built for contract lifecycles and processing. Over the past several years, businesses have increasingly turned to cloud-based contact centers to address budding customer service challenges. The pandemic accelerated that move — service conveniences were put in place out of necessity, which gave customers more options for interacting with companies. For example, 78% of contact centers in the U.S. now intend to deploy AI in the next 3 years, according to Canam Research. And research from The Harris Poll indicates that 46% of customer interactions are already automated, with the number expected to reach 59% by 2023. CCAI Insights uses AI to mine raw contact center interaction data for actionable information, regardless of whether that data originated with a virtual or human agent. It provides out-of-the-box analytics on customer conversations including Smart Highlighters, which automatically highlights important conversation moments such as when an agent authenticates or a customer confirms that their issue has been resolved. 
Meanwhile, integration with Google’s Cloud Natural Language Processing (NLP) identifies positive or negative sentiment and labels various entities within conversations by types, including date, person, contact information, organization, location, events, products, and media. CCAI Insights — which can hand off calls and chats handled by Dialogflow and Agent Assist — also categorizes conversations with custom highlighters, which let customers defines rules, keywords, and natural language training phrases. Topic modeling — another capability — leverages NLP technologies so teams can create an AI model of their data to define the taxonomy of conversation drivers. As for Contract DocAI, it taps NLP, knowledge graph technology, and optical character recognition to parse contracts for key terms like those involving start and end dates, renewal conditions, parties involved, contract type, venue, or service level agreements. It automatically discerns important terms and the relationships among them, potentially leading to faster and less expensive contract processing, Google claims. “All of these new additions will help transform businesses by making the power of AI more accessible and more focused on achieving business outcomes,” Google wrote. “[The] announcements build on the momentum we’ve been seeing with our AI solutions in delivering business value to our customers.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
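For readers unfamiliar with the underlying Natural Language service that CCAI Insights draws on, here is a small, hedged example of sentiment and entity analysis with the Cloud Natural Language Python client. The transcript line is invented, and this shows the generic API rather than CCAI Insights itself.

```python
# Sketch of the generic sentiment and entity analysis the Cloud Natural Language API
# performs; CCAI Insights layers contact-center features on top of this kind of output.
# The transcript line is invented, and credentials are assumed to come from the environment.
from google.cloud import language_v1  # pip install google-cloud-language

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="Thanks, the agent fixed my billing issue in two minutes.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

# Document-level sentiment: score runs negative to positive, magnitude is overall strength.
sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment
print(f"score={sentiment.score:.2f} magnitude={sentiment.magnitude:.2f}")

# Entity extraction labels people, organizations, dates and so on within the text.
for entity in client.analyze_entities(request={"document": document}).entities:
    print(entity.name, entity.type_.name)
```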
14,233
2,023
"Freedom of choice? How recent Zoom AI policy changes betrayed consumer trust | VentureBeat"
"https://venturebeat.com/ai/freedom-of-choice-how-recent-zoom-ai-policy-changes-betrayed-consumer-trust"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Freedom of choice? How recent Zoom AI policy changes betrayed consumer trust Share on Facebook Share on X Share on LinkedIn Image by Canva Pro Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Video conferencing and messaging provider Zoom is facing severe backlash for changes it quietly made to its Terms of Service (TOS) back in March related to AI — raising new questions about customer privacy, choice and trust. These questions will apply to every company grappling with AI at a time of growing debate around how large language models (LLMs) are trained on individuals’ data; but it is particularly concerning for the reputation of a company like Zoom, which has become ubiquitous for everything from office meetings to remote school. Yesterday, reports spread widely that Zoom had made changes to its TOS clarifying that the company can train AI on user data, with no way to opt out. The news appeared to begin with a post on X (formerly Twitter) yesterday from author Ted Gioia — a post that now has over 2 million views. According to Katie Gardner, a partner at international law firm Gunderson Dettmer, it’s common for companies to frequently update their Terms of Service as their practices change, and some privacy regulations, such as the CCPA, require companies to update their Privacy Policies annually. “Companies need to notify users of material changes to their practices if they want the changes to be legally enforceable against them,” she told VentureBeat in a phone interview. “At least in the case of Zoom, if done quietly, it was likely because the change wasn’t material — it was just stating more explicitly something it [had] already retained the rights to do.” That said, she pointed out that tech companies are currently making these updates because they’re seeing backlash from regulators. “The methods by which companies are collecting consent for using user data for training purposes are targets of enhanced regulatory review,” she said, including the FTC’s recently announced resolutions of actions against Ring and Amazon related to the transparency and accuracy of notices to users about the use of their data for training models. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “In addition to fines, the outcome in both was to require the companies to delete proprietary models — a penalty that will be meaningful for any company investing heavily into training their own models,” she said. 
Zoom responded to the uproar this morning This morning, in response to the uproar, Zoom posted on X (formerly Twitter), saying that “as part of our commitment to transparency and user control, we are providing clarity on our approach to two essential aspects of our services: Zoom’s AI features and customer content sharing for product improvement purposes. Our goal is to enable Zoom account owners and administrators to have control over these features and decisions, and we’re here to shed light on how we do that.” In a linked blog post that seemed to only raise further confusion, Zoom wrote: “to reiterate: we do not use audio, video, or chat content for training our models without customer consent.” However, the company’s highly complex, lengthy Terms of Service document is difficult to decipher. A quick glance does not make the AI policies clear to any regular user. In addition, Zoom’s own generative AI features are bewildering: For example, the company explained that it recently introduced Zoom IQ Meeting Summary and Zoom IQ Team Chat Compose “on a free trial basis to enhance your Zoom experience.” These features, it explained, offers automated meeting summaries and AI-powered chat composition. “Zoom account owners and administrators control whether to enable these AI features for their accounts,” the company wrote. When those services are enabled, “you will also be presented with a transparent consent process for training our AI models using your customer content,” the company blog post says. “Your content is used solely to improve the performance and accuracy of these AI services. And even if you chose to share your data, it will not be used for training of any third-party models. ” However, the blog post does not state that the service is turned on by default with a small check box. When a call begins, other people in the call get notified that “Meeting Summary has been enabled.” The popup says “The account owner may allow Zoom to access and use your inputs and AI-generated content for the purpose of providing the feature and for Zoom IQ product improvement, including model training.” Participants can either click “Leave Meeting,” or “Got it.” That means if users don’t leave the call, they automatically agree to allow Zoom to collect data to build and improve its AI — but do users really have the choice to leave a work meeting or a remote classroom? In addition, the reality is that today’s use of the web makes it impossible for most people to understand how companies are using their data, said Gardner, even if they are given a place to exercise the choices they are presented with. Yet with video and audio, especially in scenarios with children, there may be even more consumer discomfort around the use of personal data. “When it comes to U.S. regulation, there is a focus on risk scenarios that cause the most harm, such as to children,” she explained. “And video is an area that seems private, so this idea that other people are listening, it gives people more discomfort, for sure.” Zoom is no stranger to AI controversy Zoom is no stranger to controversies around the use of AI in its products. In April 2022, the company came under fire after saying it might soon include emotion AI features in its sales-targeted products. 
A nonprofit advocacy group, Fight for the Future, published an open letter to the company saying that Zoom’s possible offering would be a “major breach of user trust,” is “inherently biased,” and “a marketing gimmick.” Over the past few months, Zoom has gone all in on generative AI. In March, it announced a partnership with OpenAI, and recently said it is teaming up with AI startup Anthropic to integrate Anthropic’s Claude AI assistant into Zoom’s productivity platform. The company has also made an investment of an undisclosed amount in Google-backed Anthropic through its global investment arm. But the current controversy, which is going viral across not just social media but mainstream media today, comes at a particularly precarious time for Zoom’s business. The company benefitted from the shift to remote work during the COVID-19 pandemic, but its shares plummeted in late 2022 as people began to resume their normal routines and work commutes. Even Zoom itself reportedly has been telling employees to come back into the office. The last thing Zoom needs now is a backlash that further alienates users. It is a conversation that, of course, goes far beyond Zoom to all tech companies and publishers: How will corporate America tackle taking advantage of AI while also holding onto customer trust, privacy and consent? And what can they learn from what appears to be Zoom’s epic PR fail? Companies may intend to minimize or mitigate the risk of regulatory scrutiny, said Gardner — but they should consider the current environment as well. “If you’re a company that is under the microscope, people are going to pay attention to these minor changes,” she said. “In this current environment, where everyone is very attuned to what companies are doing with user data, there’s this balance and this line that companies need to walk — between avoiding regulatory scrutiny and maintaining trust with their end users.” Update 2:09 p.m. (Pacific Time): A Zoom spokesperson issued the following statement in an email to VentureBeat: “Zoom customers decide whether to enable generative AI features, and separately whether to share customer content with Zoom for product improvement purposes. We’ve updated our terms of service to further confirm that we will not use audio, video, or chat customer content to train our artificial intelligence models without your consent. ” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,234
2,022
"The future of hybrid work | VentureBeat"
"https://venturebeat.com/datadecisionmakers/the-future-of-hybrid-work"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest The future of hybrid work Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Tim Rowley, CTO and COO at PeopleCaddie. If you’re still thinking about when, or even if, the working world will return to a pre-pandemic “normal,” you’re wasting valuable time. All signs point to the rise of a new professional paradigm that meets employees in the middle, incorporating remote work and greater flexibility into traditional models. The most forward-thinking companies are already thinking in terms of how best to support employees’ work-life balance. The future of hybrid work has arrived. In fact, Kate Lister, president of Global Workplace Analytics, recently told CNBC that her research found that 56% of U.S. workers have a job that can be done at least partially remotely. Still, what does this mean for employers? What does hybrid work look like — or, more aptly, what should hybrid work look like in order for companies to create the most productive work environments and attract the best talent? The answers will depend on the industry, the company and, to some extent, the expectations of employees. But for any business leaders to begin painting a clearer picture of hybrid work within their own organization, the best place to start is by thinking critically about the following questions: VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! How will the evolution of technology impact hybrid work? In the past two years alone, we have witnessed massive wholesale changes in the way we employ technology to accommodate out-of-the-office employees. But the future of hybrid work isn’t just about landing on the next iteration of Slack or Zoom. Embracing this new paradigm means thinking not only about tech advancements that improve productivity, but also those that help support or enhance the increased flexibility workers are seeking in their lives. For example: simpler, better integrated communication that is, at the same time, more mindful of employees’ time and off-premise privacy. How does a company support a hybrid work model? Today, we have a better understanding of the downside of “hustle culture” and the risks of burnout in the workplace, which is why employers should welcome the benefits of hybrid work. (Did you know that employees who are offered a remote option tend to be more productive ?) 
Still, synching workers’ schedules for meetings and collaborative project work, creating opportunities for unplanned but serendipitous professional interactions, and finding ways to foster social engagement useful in building productive and enduring working relationships under this new model are valid concerns for most companies. Tech solutions and smarter scheduling offer some simple strategies to keep the lines of communication open and ensure that employees are operating with integrity. Just remember: Employers who hope to inspire (and retain) high-quality workers need to think less in terms of all access, all the time and more about optimizing a designated overlap in employees’ schedules. What are some ways to make the workplace more flexible for employees without sacrificing productivity? This may trigger some managers, but it must be said: start scaling back meetings. Too many conference-room gatherings and Zoom calls, frankly, aren’t worth the time suck and mental shift demanded from front-line employees. When possible, reduce the number of meetings overall, ensure invitees are only those workers who are mission-critical and limit meetings to a consolidated block of time each day. Make them shorter. Institute no-meet Fridays. Building in the flexibility for employees to work from home at least one day a week or get home early for family dinners and kids’ activities goes a long way toward worker satisfaction and retention. How can hybrid models impact diversity initiatives? Remote and hybrid work help democratize the workplace in ways that may go unseen by the average employer. Workers with children at home, employees with disabilities and people who don’t have a car or are priced out of neighborhoods near the office benefit greatly from this new paradigm. And remote work opens up the employee pool to those not just outside the company’s neighborhood, but workers across the world. That opens the doors to new perspectives and lived experiences that make for a richer, better-equipped workforce. New AI-based tech solutions can track the speaking time for women and voices with an accent in remote meetings. Hybrid work doesn’t merely accommodate equal representation – it supports diversity across a company. Given the flexibility that employees seek, could contract work be a legitimate solution in an uncertain labor market? Absolutely. Most larger companies already outsource certain streams of their business or employ freelancers to provide specialized services. It shouldn’t be a leap for organizations to begin considering contract employees for at least a percentage of a workforce long believed to be strictly the domain of permanent staff. Whether a food distributor is in need of on-call IT consultation or an accounting firm has to temporarily beef up its seasonal staff, contractors working remotely or on site offer employees unmatched flexibility. As we begin to think of hybrid work as a more permanent fixture across industries, it’s important that business leaders avoid latching their mindsets to antiquated models. Hybrid work may vary by department or within a team. It will require establishing clear guidelines for when, and how often, employees are expected in the office or available to communicate remotely. But the new paradigm is here. Employers who recognize that hybrid work is the future of work – and then plan accordingly – will have the best chance to attract and retain the best employees available. Tim Rowley is CTO and COO at PeopleCaddie. 
DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,235
2,022
"Mind control: The metaverse may be the ultimate tool of persuasion | VentureBeat"
"https://venturebeat.com/virtual/mind-control-the-metaverse-may-be-the-ultimate-tool-of-persuasion"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Mind control: The metaverse may be the ultimate tool of persuasion Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. If we’ve learned anything about technology over the last few decades, it’s that we don’t prepare for the downsides until the problems are so egregious we can’t ignore them. The poster child is social media, which was hailed as utopian when it first arrived, but is now widely considered a destructive and destabilizing force in society. It took over a decade for this to sink in, but these days a large majority of Americans believe social media has a mostly negative effect on our world. The reasons cited include the spread of misinformation, hate, harassment, polarization and partisanship. Of course, there’s nothing about the technology itself that creates these problems. It’s the business models behind social media that have driven platforms to mediate information flow across society, filtering and amplifying content in ways that distort our thinking. This is a form of mind control, and it’s about to get much worse. I’m talking about the metaverse. The metaverse and feedback control Unless regulated, the metaverse could become the most dangerous tool of persuasion ever created. I don’t make this warning lightly. I’ve been a technologist in this field for over 30 years, starting as a researcher at Stanford, NASA and the U.S. Air Force and then founding a number of early companies in the space. I genuinely believe the metaverse can be a positive force for humanity, but if we wait for the problems to become egregious as with social media, it will be too late to undo the damage. To raise awareness about the issues, I’ve written many articles about the dangers of the metaverse and the need to protect human rights , but I have not explained from a technical perspective why immersive technologies are so much more dangerous than traditional social media. To do so, I’d like to introduce a basic engineering concept called feedback control. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! It comes from a technical discipline called control theory , which is the method used by engineers to control the behaviors of a system. Think of the thermostat in your house. You set a temperature goal and if your house falls below that goal, the heat turns on. If your house gets too hot, it turns off. 
When working properly, the thermostat keeps your house close to the goal you set. That’s feedback control. Of course, engineers like to make things more complex than they need to, so the simple concept above is generally represented in a standard format called a control system diagram as follows: In the heating example, your house would be the system , a thermometer would be the sensor , and the thermostat would be the controller. An input signal called the reference is the temperature you set as the goal. The goal is compared to the actual temperature in your house (i.e., measured output ). The difference between the goal and measured temperature is fed into the thermostat which determines what the heater should do. If the house is too cold, its heater turns on. If it’s too hot its heater turns off. That’s a classic control system. Of course, control systems can get very sophisticated, enabling airplanes to fly on autopilot and cars to drive themselves—even allowing robotic rovers to land on Mars. These systems need sophisticated sensors to detect driving conditions or flying conditions or whatever else is appropriate for the task. These systems also need powerful controllers to process the sensor data and influence system behaviors in subtle ways. These days, the controllers increasingly use AI algorithms at their core. With that background, let’s jump back into the metaverse. Referring back to the standard diagram above, we see that only a few elements are needed to effectively control a system, whether it’s a simple thermostat or a sophisticated robot. The two most important elements are a sensor to detect the system’s real-time behaviors, and a controller that can influence those behaviors. The only other elements needed are the feedback loops that continually detect behaviors and impart influences, guiding the system towards desired goals. The human in the loop As you may have guessed, when considering the danger of the metaverse, the system being controlled is you — the human in the loop. After all, when you put on a headset and sink into the metaverse, you’re immersing yourself in an environment that has the potential to act upon you more than you act upon it. Said another way, you become an inhabitant of an artificial world run by a third party that can monitor and influence your behaviors in real time. That’s a very dangerous situation. In the figure above, system input to the human user are the immersive sights, sounds and touch sensations that are fed into your eyes, ears, hands and body. This is overwhelming input — possibly the most extensive and intimate input we could imagine other than using surgical brain implants. This means the ability to influence the system (i.e. you) is equally extensive and intimate. On the other side of the user in the diagram above is the system output — that’s your actions and reactions. This brings us to the sensor box in the diagram above. In the metaverse, sensors will track everything you do in real time — the physical motions of your head, hands and body. That includes the direction you’re looking in, how long your gaze lingers, the faint motion of your eyes, the dilation of your pupils, the changes in your posture and gait — even your vital signs are likely to be tracked in the metaverse, including your heart rate, respiration rate and blood pressure. In addition, the metaverse will monitor your facial expressions and vocal inflections to track your emotions in real time. 
This goes beyond sensing expressions that other people notice; it also includes subconscious expressions that are too subtle for humans to recognize. Known as “ micro-expressions ,” these events can reveal emotions that users do not intend to convey. Users may not even be aware of feeling those emotions, enabling metaverse platforms to know your inner feelings better than you do. This means when you immerse yourself into the metaverse, sensors will track almost everything you do and know exactly how you feel while doing it. We can represent this in the diagram by replacing the sensor box with the metaverse (behavioral and emotional tracking in real time) as shown: Of course, in an unregulated metaverse, the behavioral and emotional data will not just be tracked, it will be stored over time, creating a database that reflects how individuals are likely to react to a wide range of stimuli throughout their daily life. When processed by AI algorithms, this extensive data could be turned into behavioral and emotional models that enable platforms to accurately predict how users will react when presented with target stimuli (i.e., system input ) from a controller. And because the metaverse is not just virtual reality but also augmented reality , the tracking and profiling of users will occur not just in fully simulated worlds but within the real world embellished with virtual content. In other words, metaverse platforms will be able to track and profile behaviors and emotions throughout our daily life, from the moment we wake up to the moment we go to sleep. Of course, the danger is not that platforms can track and profile us; it’s what they can do with that data. This brings us to the controller box in the diagram above. The controller receives a measured error, which is the difference between a reference goal (the desired behavior) and the measured output (a sensed behavior). If metaverse platforms are allowed to adopt similar business models as social media, the reference goal will be the agendas of third parties that aim to impart influence over users (see diagram below). The third party could be a paying sponsor that desires to persuade a user to buy a product or service or to believe a piece of propaganda, ideology or misinformation. Of course, advertising and propaganda have been around forever and can be quite effective using traditional marketing techniques. What’s unique about the metaverse is the ability to create high-speed feedback loops in which user behaviors and emotions are continuously fed into a controller that can adapt its influence in real time to optimize persuasion. This process can easily cross the line from marketing to manipulation. To appreciate the risks, let’s dig into the controller. At its core, the controller aims to “reduce the error” between the desired behavior of a system and the measured behavior of the system. It does this by imparting system input , shown on the diagram above as an innocent-looking arrow. In the metaverse, this arrow represents the ability of platforms to modify the virtual or augmented environment the user is immersed within. In other words, in an unregulated metaverse, the controller can alter the world around the user, modifying what they see and hear and feel in order to drive that user towards the desired goal. 
And because the controller can monitor how the user reacts in real time, it will be able to continually adjust its tactics, optimizing the persuasive impact, moment by moment, just like a thermostat optimizes the temperature of a house. Immersed in danger To make this clear, here are some examples: Imagine a user sitting in a coffeehouse in the metaverse (virtual or augmented). A third-party sponsor wants to inspire that user to buy a particular product or service or believe a piece of messaging, propaganda or misinformation. In the metaverse, advertising will not be in the pop-up ads and videos that we’re familiar with today but in immersive experiences that are seamlessly integrated into our surroundings. In this particular example, the controller creates a virtual couple sitting at the next table. That virtual couple will be the system input that is used to influence the user. First, the controller will design the virtual couple for maximum impact. That means the age, gender, ethnicity, clothing styles, speaking styles, mannerisms and other qualities of the couple will be selected by AI algorithms to be optimally persuasive to the target user based on that user’s historical profile. Next, the couple will engage in an AI-controlled conversation amongst themselves that is within earshot of the target user. That conversation could be about a car that the target user is considering purchasing and possibly framed as the virtual couple discussing how happy they are with their own recent purchase. As the conversation begins, the controller monitors the user in real time, assessing micro-expressions, body language, eye motions, pupil dilation and blood pressure to detect when the user begins paying attention. This could be as simple as detecting a subtle physiological change in the user correlated with comments made by the virtual couple. Once the target user is engaged, the controller will modify the conversational elements to increase engagement. For example, if the user’s attention increases as the couple talks about the car’s horsepower, the conversation will adapt in real time to focus on performance. As the overheard conversation continues, the user may be unaware that he or she has become a silent participant, responding through subconscious micro-expressions, body posture and changes in vital signs. The AI controller will highlight elements of the product that the target user responds most positively to and will provide conversational counterarguments when the user’s reactions are negative. And because the user does not overtly express objections, the counterarguments could be profoundly influential. After all, the virtual couple could verbally address emerging concerns before those concerns have fully surfaced in the mind of the target user. This is not marketing, it’s manipulation. And in an unregulated metaverse, the target user may believe the virtual couple are avatars controlled by other patrons. In other words, the target user could easily believe they are overhearing an authentic conversation among users and not realize it’s a promotionally altered experience that was targeted specifically at them, injected into their surroundings to achieve a particular agenda. And it’s not just adults who will be targeted in this way, but children, who already have a hard time distinguishing authentic content from promotional material. Already Roblox, provider of a metaverse used by 50 million children, announced plans to roll out “immersive ads” in the near future. 
What chance does a child have if approached by a giant lovable teddy bear who follows them around while playing with a particular brand of toy or eating a particular brand of cereal? And that’s a relatively benign example. Instead of pushing the features of a new car or toy, the third-party agenda could be to influence the target user about a political ideology, extremist propaganda, or outright misinformation or disinformation. In addition, the examples above target the user as a passive observer of a promotional experience in his or her metaverse surroundings. In more aggressive examples, the controller will actively engage the user in targeted promotional experiences. For example, consider the situation in which an AI-controlled avatar that looks and sounds like any other user in an environment engages the target user in an agenda-driven promotional conversation. In an unregulated metaverse, the user may be entirely unaware that he or she has been approached by a targeted advertisement, and instead might believe he or she is in a conversation with another user. The conversation could start out very casual but could aim towards a prescribed agenda. In addition, the controller will likely have access to a wealth of data about the target user, including their interests, values, hobbies, education, political affiliation, etc. — and will use this to craft dialog that optimizes engagement. In addition, the controller will have access to real-time information about the user, including facial expressions, vocal inflections, body posture, eye motions, pupil dilation, facial blood patterns, and potentially blood pressure, heart rate and respiration rate. The controller will adjust its conversational tactics in real time based on the overt verbal responses of the target user in combination with subtle and potentially subconscious micro-expressions and vital signs. It is well known that AI systems can outplay the best human competitors at chess, Go, poker and a wealth of other games of strategy. From that perspective, what chance does an average consumer have when engaged in promotional conversation with an AI agent that has access to that user’s personal background and interests, and can adapt its conversational tactics in real time based on subtle changes in pupil dilation in blood pressure? The potential for violating a user’s cognitive liberty through this type of feedback control in the metaverse is so significant it likely borders on outright mind control. To complete the diagram for metaverse-based feedback control, we can replace the generic word controller with AI-based software that alters the environment or injects conversational avatars that impart optimized influence on target users. This is expressed using the phrase AI agents below. As expressed in the paragraphs above, the public should be aware that large metaverse platforms could be used to create feedback-control systems that monitor their behaviors and emotions in real time and employ AI agents to modify their immersive experiences to maximize persuasion. This means that large and powerful platforms could track billions of people and impart influence on any one of them by altering the world around them in targeted and adaptive ways. This scenario is frightening but not farfetched. In fact, it could be the closest thing to “playing God” that any mainstream technology has ever achieved. 
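To make the argument above concrete, here is a deliberately simplified sketch of the kind of engagement-optimizing loop described in the coffeehouse example. Everything in it is an assumption invented for illustration (the topics, the simulated "engagement" readings and the epsilon-greedy rule), and it stands in for no actual platform's system; it only shows how a controller that can measure reactions in real time will drift toward whatever stimulus works best on a particular user.

```python
# Conceptual sketch only: an epsilon-greedy controller that steers a
# "conversation" toward whichever topic produces the highest measured
# engagement. Topics, numbers and the engagement sensor are all invented.
import random

TOPICS = ["horsepower", "price", "safety", "styling"]

def simulated_engagement(topic):
    """Stand-in for the sensor: pretend this user quietly responds to horsepower.
    In the essay's scenario this signal would come from gaze, micro-expressions
    and vital signs rather than from a random number generator."""
    base = {"horsepower": 0.8, "price": 0.4, "safety": 0.5, "styling": 0.3}[topic]
    return min(1.0, max(0.0, random.gauss(base, 0.1)))

def persuasion_loop(rounds=50, epsilon=0.2):
    totals = {t: 0.0 for t in TOPICS}
    counts = {t: 0 for t in TOPICS}
    for _ in range(rounds):
        if random.random() < epsilon or not any(counts.values()):
            topic = random.choice(TOPICS)      # explore other stimuli
        else:                                  # exploit the best stimulus so far
            topic = max(TOPICS, key=lambda t: totals[t] / counts[t] if counts[t] else 0.0)
        reward = simulated_engagement(topic)   # "sensor" reading for this round
        totals[topic] += reward
        counts[topic] += 1
    return {t: counts[t] for t in TOPICS}

if __name__ == "__main__":
    print(persuasion_loop())  # the preferred topic ends up dominating the counts
```

After a few dozen rounds, the count for the topic the simulated user responds to dwarfs the others, which is exactly the property that makes this loop so effective, and so open to abuse, when the "reward" is a real person's subconscious reaction.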
To protect against this scenario, industry leaders, politicians and policymakers need to take action, implementing regulatory safeguards, promoting industry standards and guaranteeing immersive rights to consumers before platforms adopt business models that are dangerous to the public. Had such safeguards been put in place early in the evolution of social media, the world might be a safer place. Louis Rosenberg, PhD is a pioneer of virtual and augmented reality. His work began over 30 years ago in labs at Stanford and NASA. In 1992 he developed the first interactive augmented reality system at Air Force Research Laboratory. In 1993 he founded the early VR company Immersion Corporation (public on Nasdaq). In 2004 he founded the early AR company Outland Research. He earned his PhD from Stanford, has been awarded over 300 patents for VR, AR, and AI technology and was a tenured professor at California State University. "
14,236
2,023
"Sightful launches Spacetop augmented reality laptop | VentureBeat"
"https://venturebeat.com/business/sightful-launches-spacetop-augmented-reality-laptop"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sightful launches Spacetop augmented reality laptop Share on Facebook Share on X Share on LinkedIn It’s time to meet Spacetop, an augmented reality (AR) laptop which the maker Sightful built over three years. Created by a team of more than 60 spatial computing experts — including veterans from Apple, Microsoft, and Magic Leap — Spacetop represents the next generation in personal computing and the first application of AR that seamlessly fits into users’ daily lives, the company said. With customized hardware and a proprietary spatial environment, Spacetop leverages AR to remove the physical constraints of standard laptops. The result is a first-of-its kind product that allows users to carry with them a big private, virtual workspace designed and customized by the user to be their most creative self, no matter where they are – all in a familiar laptop form factor. The company said it is like having a 100-inch laptop in your backpack. “Two worlds sit at a crossroads: Laptops are the centerpiece of our daily working lives, but the technology has not evolved with the modern, work from anywhere, privacy matters, ‘road warrior’ mentality. Meanwhile, augmented reality is full of potential and promise, but is yet to find its daily use case,” said Tamir Berliner, CEO of Tel Aviv-based Sightful, in a statement. “We are at the perfect moment for a significant paradigm shift in a device we all know and love, and Spacetop Early Access is the first step in that journey.” Designed specifically for the “work from anywhere” movement, Spacetop takes full advantage of AR to transform the world around users into a portable home office. With a more than 100-inch virtual canvas, Spacetop users design their perfect working environment – uncluttered and organized in the exact way that helps them focus and do their best work. Unconstrained by 13-inch to 16-inch screens that lead to endless tabs, buried applications, and constant window switching, Spacetop users are able to focus, with their key applications visible and accessible at any moment, all overlaid on the real world, while users still remain present in the real world. The result is an experience that delivers: Intuitive Augmented Reality: Spacetop operates seamlessly and intuitively as a laptop. Users enjoy a familiar but dramatically expanded experience in AR with no complicated gesture controls to learn, and no external hardware or software awkwardly integrated with non-AR devices. 
Limitless Digital Workspace: Spacetop users carry a multi-monitor setup as large as their work requires, whether sitting on a couch in their home, working over breakfast at a local cafe, or squeezed into an airplane seat on a cross-country flight. Privacy by Design: The Spacetop work environment, called the Canvas, and the user’s work, are completely invisible to those who are not using the device. No more wandering eyes from nosy neighbors, no more privacy screen filters. As a company, Sightful has raised $61 million in funding to date, from global investors including Aleph, Corner Ventures, and more. “To date, every other company’s approach to Augmented Reality has, ironically, been completely removed from reality,” said Eden Shochat, equal partner at Aleph, in a statement. “Sightful focused on an immediate utility which advances human productivity, a personal passion of mine; rather than trying to convince the world that we need to live in a metaverse or create an entirely new way of working, they focused on building a product people can use now. This is the right approach, at the right moment, with the right team charting a new path for an everyday device, similar to the iPhone or Roadster before them.” Spacetop hardware brings together two of the largest names in both personal computing and AR in Wistron and NReal. Combining customized NReal glasses with the proprietary Spacetop environment, users access all of their important web applications on clear, high-resolution augmented reality windows overlaid onto the real world. Whether on a Zoom call, working in Google Docs, reviewing a Figma design, or browsing the web, the Spacetop environment feels tangible and immersive while still allowing users to naturally interact with people around them. Partnering with Wistron brings the experience and scale of one of the world’s top laptop hardware manufacturers to Sightful’s mission. Wistron is a global leading technology service provider supplying ICT (information and communication technology) products, along with a robust R&D infrastructure and deep experience in product development, all of which have been crucial to the development of Spacetop. “Spacetop is the bridge between ‘reality’ and augmented reality, combining the utility and versatility of a laptop with the magic of painting information on the world around us,” said Marvin Tien, a partner at Corner Ventures, in a statement. “Corner is committed to leveraging all its connections across Asia Pacific, Europe, and anywhere else in the world where our network may reach to help align the Sightful team with the right leaders across spatial computing, personal computing, and product innovation.” The Spacetop Early Access program is now open with 1,000 early adopters invited to join. Those interested in being the first to receive their Spacetop can apply at www.sightful.com. The company was founded in 2020. Berliner, cofounder of PrimeSense, started the company with Tomer Kahan, COO, an early executive at Magic Leap. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,237
2,019
"Slack launches Workflow Builder for businesses to make apps without code | VentureBeat"
"https://venturebeat.com/business/slack-launches-workflow-builder-for-businesses-to-make-apps-without-code"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Slack launches Workflow Builder for businesses to make apps without code Share on Facebook Share on X Share on LinkedIn Slack logo at Slush 2018 conference in Helsinki, Finland Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Slack today introduced Workflow Builder, a new way to help employees of all skill levels build apps for their organizations inside the team collaboration app. “What we’re doing with Workflow Builder is extending the availability of our platform to essentially everybody in an organization so that people without development skills and experience can build basic workflows and applications inside of Slack,” Slack platform product head Andy Pflaum told VentureBeat in a phone interview. The Slack platform for automated bots and apps was first introduced for developers in 2015, and organizations have been able to create their own custom apps for internal use for years. Block Kit, a simpler, more media-rich way to create Slack apps, first became available in February. More than 90% of paid Slack customers use some kind of app or integration, but Workflow Builder is designed to make app creation possible for anyone. Doing so will enable people to create workflows for any number of tasks with a visual builder without leaving Slack by using things like forms and surveys and sharing certain instructions, documents, and links. Rich media like videos and SaaS solutions may be incorporated into Workflow Builder apps in the future. “The first version we’re releasing is essentially for workflows built inside of Slack. So you’re inputting the information in Slack and the output goes to a direct message or channel or to the person who’s entered the channel, for example,” Pflaum said. “But subsequent versions will allow you to connect to other applications and services as well. So you can imagine your CRM or your help desk ticket system or your bug tracking system, connecting into these workflows and being able to develop not just workflows self contained, inside of Slack, but also connecting up to additional proprietary or third party apps and services.” The news was shared today at the Slack Frontiers conference in San Francisco alongside other features like the ability to reply to Slack messages via email, deeper integration of calendars in Slack channels, and a shared channels beta due out later this year. Slack joins a long line of companies inventing ways to make apps and automation accessible to people who don’t know how to code. 
Earlier this week, Microsoft declared machine teaching — teaching experts to transfer knowledge to automated systems — the next frontier in AI, and a number of conversational AI services have similar projects. Amazon’s Alexa, for example, introduced its Blueprint voice app templates for people to make custom apps at home as well as in the workplace with Alexa for Business. Also new today: Slack is bringing shared channels to Enterprise Grid, its service for helping entire organizations join Slack, not just a handful of teams within an organization. Like shared channels for paid users, which were introduced at the first Frontiers conference in 2017, shared channels for Enterprise Grid can only connect two organizations today. A shared channels beta will be made available this summer. Approximately 13,000 teams currently use shared channels on Slack, Pflaum said. "
14,238
2,022
"3 ways to cultivate success for women in tech | VentureBeat"
"https://venturebeat.com/business/3-ways-to-cultivate-success-for-women-in-tech"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest 3 ways to cultivate success for women in tech Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In the climb up the corporate ladder, women remain underrepresented. McKinsey data finds that women comprise less than 25% of executive-level positions and women of color account for only 4% of executive-level positions. However, this adversity extends past the C-suite — industries such as technology are dominated by men, with women making up only a quarter of the tech workforce. With American Business Women’s Day just behind us, tech companies of all sizes are expressing their commitment to gender equality in the workplace — and one of the best ways to drive change is to listen and learn from women who have broken through the glass ceiling. Here, I’ll use my experience as a working woman and working mother to share three ways tech companies can advance more women in the technology sector. Launch mentorship and education programs that empower women Since women are remarkably underrepresented in tech, it can be difficult for them to envision a successful career in the industry. Organizations must help create a sense of belonging in the workplace and they can start by implementing mentorship programs. Connecting women in junior-level roles with women and men in higher-level executive roles can empower staff to expand their knowledge, grow connections and eliminate boundaries within the workplace. While both men and women can make excellent mentors, women may further benefit from building relationships with other women at work. For example, I was able to ask one of my mentors, also a working mother, specifics about navigating motherhood and a career. She provided me with honest answers to my questions, helping me strategize and prioritize tasks to meet the overall needs of the business while taking time for my family. If you are a woman in leadership, this might be one of the most important things you can do — I recommend to everyone on my team to find mentors they can trust. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Tech companies should also look to provide educational resources to help women succeed in the workplace. Leaders can offer seminars, coaching programs and reskilling opportunities to help educate the workforce on key skills and strategies needed for success and career advancement. 
If office cultures predominately cater to men, women will likely feel out of place and undervalued. Mentorship and educational programs not only provide an opportunity for learning and career advancement, but can also demonstrate leaders’ interest in women’s careers while cultivating a sense of belonging in the workplace. Provide inclusive and expansive benefits In the tech industry, 57% of women have felt burnt out at work, compared to 36% of men, according to Trustradius. Since the pandemic, workers have started to prioritize their mental health and personal lives above work, and companies have developed programs and resources that cater to employee wellness. But, it is vital that women’s unique needs are taken into consideration when implementing these programs. Trustradius data finds that 78% of women in the tech industry feel they have to work harder than men to prove themselves. So, it makes sense why 33% of women have recently taken time off of work to prioritize their mental health. It is imperative that companies offer equal programs and resources that cater to mental health, employee appreciation and education to help women feel valued and empowered at work. Inclusive benefits must extend beyond mental health benefits. For working parents, equity in parental leave has a significant impact on women’s mental health and is one of the most crucial benefits for parents as a whole. When companies offer contrasting parental leave options for each parent, the results only exacerbate outdated notions of parental responsibilities. Companies must reevaluate their parental leave programs and incorporate equal leave for both parents, to allow partners an equal share in parental responsibilities. Offer flexible workplace policies Workers are no longer willing to be part of a company that ignores (or rescinds policies based on) the changes brought on by the pandemic, such as working from home and flexible schedules. In fact, Flexjobs data finds that 60% of women say that if their company forces them back into the office full time, they will look for opportunities elsewhere. Even so, Deloitte data found that more than half of women in tech are expected to change jobs as a result of inadequate work-life balance — and New View Strategies data finds that most have seen their workload significantly increase since the pandemic. Employees are increasingly valuing flexibility and autonomy over their schedules, and this is particularly true for working moms. For example, I hired a senior product manager part-time as she was looking to return to full-time employment while balancing parenthood of two teenage boys and her passion for competitive track coaching. After a while, she moved into a full-time role and continued to excel professionally as she drove great outcomes for our business. Had I not been flexible in my approach, I would have missed out on this incredible talent. Tech companies must not only be open and transparent in talking about the challenges that working moms face but, more importantly, they must offer greater flexibility so that they do not lose out on valuable talent. While flexible workplace policies help women succeed in their personal and professional lives, expanding the talent search to include more women in the hiring pipeline is also helpful. In recent years, there has been much progress for women in the workforce. Today, there are now 41 women-led Fortune 500 companies , compared to just two in 2000. 
But as companies celebrate this progress, it is an important time to reassess whether they are cultivating a successful workplace that empowers and advances women. By implementing mentorship programs, providing inclusive benefits and offering flexible workplace environments, companies can help their current employees succeed and attract new and valuable women to their talent pool. Denise Hemke is chief product officer at Checkr. "
14,239
2,023
"Fueling female hiring in tech | VentureBeat"
"https://venturebeat.com/enterprise-analytics/fueling-female-hiring-in-tech"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Fueling female hiring in tech Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. It’s well-known that the pandemic disproportionately impacted women in the workforce. With the increased amount of caregiving necessitated by lockdowns, women, who still perform the majority of those roles, were forced to drop out of the workforce at an alarming rate. More than two years since the start of the pandemic, there are still 808,000 fewer women in the labor force compared to February 2020. By comparison, male workers regained all jobs they had lost as a result of the pandemic by January of this year. Today, there are 693,000 more men in the labor force than in February 2020. This situation only exacerbated an issue that tech has been grappling with for a while now, which is the lack of women within our ranks, and begs for change. One of our mandates as HR leaders is to create teams where everyone can thrive regardless of their background, origin, or any other differentiating factors. Exposure to diversity has been proven to improve innovation, creativity and problem-solving skills — attributes that every tech company values for their ability to affect the bottom line. In fact, companies with a diverse workforce are 35% more likely to experience greater financial returns than their non-diverse counterparts. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In other words, it’s high time that we increase the presence of women within our ranks. Female hiring starts with an inclusive culture Hiring and retention are major challenges for today’s organizations, as demand for both technical and soft skills confront the Great Resignation. An organizational culture that drives a sense of belonging in the workplace is a critical asset in this environment. An inclusive culture helps its members feel connected, valued and vital to the enterprise, enabling them to positively impact their organization. This sense of belonging is highly correlated with business success: Employees who feel they belong are comfortable being their authentic selves, which helps foster psychological safety and positive employee engagement. Ultimately, this contributes to drastically greater performance (high belonging is linked to a 56% increase in job performance) and, in the long run, higher retention. This is where the heart of empowerment and hiring lies. 
By creating a culture of inclusivity, employers can lay the groundwork for women to be heard, valued and set up for success in the workplace, and, in the process, cultivate an environment of respect and trust that applicants will be drawn to. Designing a more inclusive hiring process In the ideal workplace, everyone works together as a single team towards clear, common goals. Companies that fully recognize the value of teamwork understand the crucial importance of nurturing a culture in which everyone has a voice and can make an impact, regardless of their demographic group. This all begins with a company’s hiring process. My company employs people in more than 80 offices in over 28 different countries, which gives us the opportunity to experience multiple cultures. It’s important to us to go beyond that baseline, however, and ensure gender diversity within our ranks. So we set ourselves a goal: 25% of our new hires in 2022 would be women. While this may seem like a small number, it’s an aggressive target in the technology and telecom industries, which are notorious for their underrepresentation of women. As we embarked on this hiring goal, we learned some huge lessons about creating equal opportunities for all employees and putting forth initiatives to further diversify our workforce. These included: Put inclusion first; diversity will follow Don’t just look to hire diverse candidates, knowing that it would be very hard to make them feel represented. Before working on hiring, take the lay of the land: Talk to your current employees to understand what’s working and where you need to focus your efforts to drive inclusiveness and belonging within your teams. It’s important to ensure current employees feel respected for their individual talents and for their ability to grow based on their motivation and skills. This will lay the foundation for inclusion, as employees must feel respected and valued if you want to become an employer of choice to a diverse workforce. By starting your diversity, equity and inclusion (DEI) journey with an understanding of how your people feel, you can address internal weaknesses and simultaneously identify successes that should be maintained and replicated. This may include conducting a DEI employee survey, which will allow your team to get a complete viewpoint across all global regions and set a baseline for your DEI efforts. Questions should seek to determine things such as how many employees feel like they are valued and respected by their colleagues regardless of their demographic background and whether men and women feel like they have equal opportunities to advance. Take a holistic approach, but work gradually DEI is a long-term proposition. Focus on one community to start, and use the results of these efforts to start building a community of people within your organization that can help scale DEI to additional populations, creating a virtuous cascade effect. For example, companies may decide to invest more time in working with managers on how to avoid biases that could affect advancement opportunities between men and women. This can include launching a series of workshops, lectures and webinars on various DEI topics for managers. Key components can include training hiring managers on how to create job descriptions that appeal to female candidates and reviewing hiring processes to ensure that there are no unintended biases. 
Once managers are trained, you can then make resources available to all employees, leveraging management to help promote inclusive practices. Make objectives visible and drive transparency People want to stay with companies that embrace who they are. So, make it easy for applicants to understand who you are as an organization and the role inclusion plays by making a public commitment to share your DEI goals. Communicate why inclusivity is a core part of your culture and how it helps define your organization on your website, blog and career pages. On social media and other external media, ensure you’re highlighting female employees and leaders , and their collective accomplishments. This allows candidates to easily understand how inclusivity relates to your values and actually see those values in action. Then, build a regular cadence to share ongoing progress toward those goals and to demonstrate the value of your efforts and how you plan to continue improving. Much more to accomplish in female hiring As we have followed these practices, almost 30% of the new hires we welcomed so far this year have been women. Now that we’ve surpassed our initial target, we’re moving ahead with a charter to have women hold 30% of all management positions by 2030. Reaching that goal will require access to female talent that is ready to lead, address the obstacles that women face when pursuing management positions and create a community for change. Fortunately, these best practices have laid a solid foundation for inclusivity that will make attaining this goal possible. While we’re proud of what we’ve been able to accomplish so far, there is more the industry can collectively be doing to create a more inclusive work environment, elevate the presence of women and expand diversity efforts across the board. Only by making real, measurable commitments to foster inclusivity will the technology industry finally create workplaces that are more representative of the world we live in. Petrena Ferguson serves as SVP of HR at Ribbon. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,240
2,023
"Diversity in tech: How 3 companies are closing the gap | VentureBeat"
"https://venturebeat.com/programming-development/diversity-in-tech-how-3-companies-are-closing-the-gap"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Jobs Diversity in tech: How 3 companies are closing the gap Share on Facebook Share on X Share on LinkedIn It is no secret that tech has had a long-standing diversity and inclusion problem. Leadership roles continue to be dominated by homogenous teams lacking diverse backgrounds, while women, people of color, and those belonging to the LGBTQ2S+ community continue to lack representation. According to a report by Statista, women average only 34.4% of the workforce across big tech companies such as Amazon, Google and Facebook. And when it comes to women of color, these statistics drop sharply: Research shows that just 3% of computing jobs are occupied by African-American women, 6% by Asian women, and 2% by Latinx women. Diverse tech talent underrepresented and underpaid Despite accounting for 13% of the U.S. workforce, Black professionals account for 4% of all tech workers. Similarly, Hispanic professionals hold just 8% of all STEM jobs , even after accounting for 17% of the national workforce. Overall, it’s estimated that only 22% of workers in tech are ethnic minorities. With fair salary and compensation being a critical factor for top talent, it’s surprising that in many categories, diverse tech talent is still underpaid. Women continue to experience pay gaps in all areas (including maternity leave). Black engineers are paid significantly less than engineers of all other backgrounds, earning 13% less than white engineers. Even when hired, some may leave, or consider leaving tech jobs due to concerns about feeling unwelcome or uncomfortable at a higher rate than their white counterparts. And of course, this problem then becomes self-sustaining — the less diversity there is in tech leadership and management teams, the more minority tech professionals will lack inspiration, mentorship and representation, often to the detriment of their career progression. New steps Now, amid building social pressure, the industry is pledging to take new steps to narrow its persistent diversity gap. Beyond the moral reasons for cultivating a more inclusive organization, experts agree that diversity enables tech companies of all sizes to stay relevant with customers and win at innovation while remaining competitive in a fierce talent market. According to research by McKinsey , the most diverse organizations are now more likely than ever to outperform their less diverse peers. What’s more, the desire to work in a diverse workplace is among candidates’ top priorities when leaving their current organizations for more inclusive ones. 
Reinventing recruitment To bring in diverse candidates at all levels, many organizations are reinventing recruitment efforts. Some initiatives include making remote work the norm and hiring beyond their own borders, striving to eliminate bias throughout the hiring process, and partnering with educational institutions or external organizations that emphasize STEM careers or STEM education. Developing and publishing a formalized plan with quantifiable goals and metrics to track the results of diversity recruiting and hiring can signal a company’s commitment to the process, and show a willingness to be held publicly accountable. Now it’s more important than ever for tech companies to prove their commitment to closing the gender and diversity gap in tech. Some companies appear to be responding with more substantial policies than before. Check out global tech organizations on the VentureBeat Job Board like Microsoft which has poured considerable resources into diversifying its workforce over the past several years. The company launched the Microsoft Enabler Program in 2020 to improve the employability of people with disabilities through digital skills and corporate training, internships and job shadowing. It now plans to invest an additional $150 million into D&I and to double the number of Black managers, senior individual contributors and senior leaders by 2025. Microsoft also recently partnered with the Milwaukee Bucks, Green Bay Packers and Milwaukee Brewers to form a venture capital partnership that will invest in minority-owned firms. Want to work here? Check out all of Microsoft’s current vacancies. Discover a similar dedication to diversity hire at Apple , which cites 53% of its new hires are from underrepresented minorities including women and people who identify as Black, Hispanic, Native American, Native Hawaiian and Other Pacific Islander. Apple also offers Diversity Network Associations — employee-led groups designed to foster a culture of belonging through education, leadership programs and networking. The tech giant claims that more than 25,000 employees participate in groups such as Black@Apple, Accessibity@Apple, Women@Apple and more, including faith-based groups. Browse all of Apple’s current job opportunities now. Leading the way for future workplaces are progressive companies like IBM which prides itself on its inclusive culture. In 2021, over 41% of hires in IBM were women globally, and in the U.S., 15% were Black, 20.1% were Asian employees, and 10.2% were Hispanic. The company has also committed to dedicating 15% of its first-tier diversity supplier spending to Black-owned businesses by 2025. See all open roles at IBM here. For thousands more opportunities and to find a role that fits, visit the VentureBeat Jobs Board today.. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,241
2,023
"IBM: Quantum computing poses an ‘existential threat’ to data encryption  | VentureBeat"
"https://venturebeat.com/security/ibm-quantum-computing"
"IBM: Quantum computing poses an ‘existential threat’ to data encryption For years, encryption has played a core role in securing enterprise data. However, as quantum computers become more advanced, traditional encryption solutions and public-key cryptography (PKC) standards, which enterprise and consumer vendors rely on to secure their products, are at serious risk of decryption. Today, the IBM Institute for Business Value issued a new report titled Security in the Quantum Era, examining the reality of quantum risk and the need for enterprise adoption of quantum-safe capabilities to safeguard the integrity of critical applications and infrastructure as the risk of decryption increases. The report argues that quantum computing poses an “existential risk” to classical computer encryption protocols, and notes that cybercriminals are potentially already exfiltrating encrypted data with the intention of decrypting it once quantum computers advance, as part of “harvest now, decrypt later” attacks. The problem with traditional encryption and quantum computing One of the central limitations of traditional cryptographic protocols like RSA is that they rely on mathematical problems, such as the factorization of large numbers, that a sufficiently powerful quantum computer could solve efficiently. With a quantum computer, cryptographic protocols “can in theory be solved — and solved within a few hours — with the help of Shor’s algorithm,” the report said. “This makes protocols like RSA an insufficient cryptographic scheme in a future where quantum computers have reached their full potential.” While this hasn’t happened yet, more and more organizations are taking the risk of such decryption seriously. In December 2022, President Biden signed the Quantum Computing Cybersecurity Preparedness Act, encouraging government agencies to adopt technology that is resistant to decryption by quantum computers. Likewise, last year NIST concluded its search for quantum-resistant algorithms, ongoing since 2016, choosing four algorithms as finalists and selecting CRYSTALS-Kyber, a public-key encryption algorithm, and CRYSTALS-Dilithium, a digital signature algorithm, as its top two standards. Investing in quantum security is now becoming a necessity for enterprises. 
“From our point of view at IBM, it’s important for CISOs and security leaders to understand quantum-safe cryptography,” said Dr. Vadim Lyubashevsky, a cryptography researcher at IBM Research. “They need to understand their risk and be able to answer the question: what should they prioritize for migration to quantum-safe cryptography? The answer is often critical systems and data that need to be kept for the long term; for example, healthcare, telco, and government-required records,” Lyubashevsky said. IBM’s lattice-based approach to quantum-safe encryption With the global quantum cryptography market expected to grow from $89 million in 2020 to $214 million by 2025, IBM has been active in establishing itself as a leader in the space, alongside other providers like Intel, which has helped contribute to NIST’s post-quantum cryptography standards. Just last year, IBM launched IBM z16, a quantum-safe, AI-driven data inference-optimization solution designed for processing mission-critical data. The company has also contributed to three of the four post-quantum algorithms chosen by NIST. Part of IBM’s quantum-safe strategy is to use lattice-based cryptography, a method for constructing security primitives based on the geometry of numbers, which can be used to build encryption protocols that are harder for quantum computers to crack than those that rely on factorization. IBM notes that this approach first emerged in the 1990s out of two research papers: Brown University’s NTRU: A new high speed public key cryptosystem by Jeffrey Hoffstein, Jill Pipher and Joseph Silverman, and IBM scientist Miklos Ajtai’s Generating Hard Instances of Lattice Problems. "
14,242
2,022
"Preparing for quantum cryptography, U.S. Air Force partners up with SandboxAQ  | VentureBeat"
"https://venturebeat.com/security/us-air-force-post-quantum-cryptography"
"Preparing for quantum cryptography, U.S. Air Force partners up with SandboxAQ The clock is ticking for public key encryption. With researchers anticipating that quantum computers will be able to decrypt public key algorithms as soon as 2030, organizations are under increasing pressure to find quantum-resistant algorithms to protect their data from threat actors. One such organization is the U.S. Department of the Air Force, which today entered into a partnership with AI and quantum security provider SandboxAQ, awarding the vendor a Phase 1 Small Business Innovation Research (SBIR) contract. As part of the contract, the provider will conduct post-quantum cryptographic inventory analysis and performance benchmarking. More broadly, the Air Force’s partnership with SandboxAQ highlights that the threat quantum computing poses to today’s encryption isn’t merely abstract and theoretical, but a plausible risk that enterprises need to prepare to address now. The mandate for quantum cryptography This new partnership marks SandboxAQ’s first military contract since being spun off from Alphabet in March earlier this year, and is part of the Air Force’s attempt to prepare for the Quantum Computing Cybersecurity Preparedness Act, which requires U.S. federal agencies to upgrade to post-quantum encryption. The announcement comes amid a wave of momentum, after NIST chose four post-quantum encryption algorithms that will become part of its post-quantum cryptographic standard, and after Google Cloud announced it has deployed a post-quantum cryptographic algorithm to help secure its internal ALTS protocol. While the momentum behind post-quantum cryptography may appear speculative at first glance, the risks posed by quantum computing can be seen now. For instance, “harvest now, decrypt later” attacks (also called store-now-decrypt-later attacks) mean that nation-state actors and cybercriminals can collect and store encrypted data today, to decrypt at a later date. “U.S. adversaries are gathering encrypted data with the intent to exploit it once they deploy quantum computers – these are known as ‘store-now-decrypt-later’ attacks,” said Jen Sovada, president of public sector at SandboxAQ. 
If successful, these attacks would enable threat actors to decrypt protected information at will. "Quantum computers in the hands of adversarial nation states could devastate U.S. national security if post-quantum cryptography, or PQC, is not urgently implemented. PQC deployment across national security systems is expected to take years, and SandboxAQ is proud to support the Air Force in this critical first step," Sovada said. The quantum cryptography market SandboxAQ falls within the quantum cryptography market, which researchers estimate will grow from $102.34 million in 2021 to $476.83 million by 2030, a CAGR of 18.67%, as more enterprises look to prepare for Y2Q (the point at which quantum computers can break today's public-key encryption). As the market grows, other post-quantum providers such as PQShield are also attracting significant interest. PQShield raised $20 million in series A funding earlier this year and offers enterprises cryptography on chip and in the cloud, spanning IoT firmware, public key infrastructure, server technologies and end-user applications. It's worth noting that PQShield researchers also contributed to the development of each of NIST's first international PQC standards. Another promising provider in the space is Post-Quantum, which provides a quantum-safe end-to-end encrypted messaging app, a post-quantum VPN and a quantum-ready multi-factor biometric identity system for passwordless sign-in. According to Crunchbase, Post-Quantum has raised $11.2 million in funding to date. SandboxAQ's partnership with the U.S. Air Force, along with its plans to forge further relationships across the public sector, will help position it as one of the most "battle-tested" post-quantum cryptography providers in the market. "
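The cryptographic inventory analysis called for in the SBIR contract can be illustrated with a toy sketch. The code below is not SandboxAQ's tooling; it simply scans a source tree for references to classical public-key algorithms (RSA, ECDSA, Diffie-Hellman and similar) that a PQC migration would need to replace. The file extensions and keyword list are illustrative assumptions.

```python
import re
from pathlib import Path

# Classical public-key primitives a PQC migration would need to replace.
# The keyword list is illustrative, not exhaustive.
CLASSICAL_ALGOS = re.compile(
    r"\b(RSA|ECDSA|ECDH|DSA|Diffie-?Hellman|secp256k1|P-256)\b", re.IGNORECASE
)

def inventory(root: str) -> dict[str, list[tuple[int, str]]]:
    """Return {file: [(line_number, matched_algorithm), ...]} for a source tree."""
    findings: dict[str, list[tuple[int, str]]] = {}
    for path in Path(root).rglob("*"):
        if path.suffix not in {".py", ".go", ".java", ".c", ".cpp", ".cfg", ".yaml"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        hits = [
            (lineno, match.group(0))
            for lineno, line in enumerate(text.splitlines(), start=1)
            for match in CLASSICAL_ALGOS.finditer(line)
        ]
        if hits:
            findings[str(path)] = hits
    return findings

if __name__ == "__main__":
    for file, hits in inventory(".").items():
        for lineno, algo in hits:
            print(f"{file}:{lineno}: uses {algo} (candidate for PQC migration)")
```

A real inventory would also cover TLS configurations, certificates and key stores, but even a crude scan like this shows where store-now-decrypt-later exposure is concentrated.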
14,243
2,023
"Canva announces developers platform and a $50M fund for app development | VentureBeat"
"https://venturebeat.com/ai/canva-announces-developers-platform-and-a-50m-fund-for-app-development"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Canva announces developers platform and a $50M fund for app development Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Canva , the visual communication platform, unveiled its Canva Developers Platform today at Canva Extend , its inaugural developer conference in San Francisco. The new platform consists of a comprehensive set of APIs and development tools, empowering developers to connect with Canva’s 135 million-strong monthly active user base. To foster the creation of innovative experiences for the Canva community, the company also revealed the establishment of a $50 million Canva Developers Innovation Fund. The investment initiative aims to provide support to app developers in building, growing and marketing their apps on the Canva App Marketplace. The fund will offer financial grants and expert guidance to help developers, especially those from groups underrepresented in the global market, to transform their app concepts into tangible products. According to the company, a crucial component of this offering is the Canva Apps SDK, which grants access to resources such as javascript libraries, documentation, sample apps, UI guides and, notably, the new Canva Apps APIs. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Those APIs — Asset, Design, Data, Fetch and User — enable the creation of a wide range of applications, from AI-powered tools and workflow integrations to design enhancements. Canva Apps will allow customers to retrieve and add content, edit their designs, authenticate users, export and publish their work and provide data. Canva said that the APIs function harmoniously to provide novel avenues for Canva users to excel in their work. “The first set of apps developed in beta launched in March and have been used over 11 million times in just two and a half months. These experiences have enabled us to optimize the new Canva Apps SDK for speed and ease — we’ve enabled developers to get an app up and running in minutes,” Anwar Haneef, Canva’s head of ecosystem, told VentureBeat. “We want simple apps to build, discover and use, and we have done our best to make this achievable for developers. 
You can now get an app running locally, render it in Canva, and immediately use it in your designs, rather than uploading it to the product and fiddle with it to get it to run.” In addition to these announcements, the company unveiled a diverse collection of over 20 apps at Canva Extend, some already available and others slated for imminent release. These apps bolster Canva’s capabilities, enabling users to publish magazines via the publishing platform Issuu , generate audio using Soundraw’s AI technology, and create AI-generated avatars with D – ID. Expanding opportunities for application developers Canva asserts that the newly launched Developers Platform’s free app-building capability presents a substantial opportunity for developers to bring their ideas and services directly to a wide range of audiences, including large corporate teams, students, teachers, nonprofits and individual creators. Developers can monetize their creations through off-platform billing, while Canva plans to explore additional monetization strategies through the Canva Developers Innovation Fund. “Our goal is to accelerate the growth and adoption of apps on Canva with support, including monetary grants, marketing support and expertise, and exploring effective monetization strategies,” said Canva’s Haneef. “The fund is intended to be agile, so we can evaluate our users’ needs and the needs of our community and use it to build the most sustainable ecosystem possible.” Haneef said that one of the fund’s goals is to seek out developers who might be underrepresented in the global market and provide them with what they need to bring their amazing idea to life. “There’s so much talent out there, and we want to open up opportunities and resources for those who might not ordinarily have access to them. We’re thrilled about the opportunities this will create for our community and will share more details on the program with the Canva Developers community later this year,” he added. The Connect APIs will launch later this year, with a waitlist opening today. These integrations, the company said, would facilitate capabilities such as including streamlined file and design management, programmatic asset uploads into folders to expedite team collaboration, and seamless access to completed designs from any platform. “The Connect APIs let you connect any app with Canva to sync designs, assets, comments and more between platforms. This is critical to furthering our vision of becoming the most pluggable platform,” said Haneef. “Developers can also use these to manage user access to Canva assets, folders and designs.” Concerning the APIs’ role in supporting AI app development, Haneef said that the new Canva Apps APIs offer the capability to construct applications for AI-driven photo, video, audio and text generation. For instance, an upcoming image generation app called PeopleMaker by Visual will soon be available on Canva, enabling users to create lifelike photos of individuals. Another example is Voice by Play.HT, a text-to-speech app, which customers can use to generate voiceovers for their designs. “More than 200 million unique images have been generated in nine months with our generative AI products like Text to Image and Magic Edit. The space is evolving rapidly, and partnering with developers means we can bring more AI-powered solutions to our community faster,” Haneef explained. “As generative AI evolves past text prompt interfaces, our APIs also enable apps to read design elements and update them accordingly. 
In many cases, users don’t need to create something from scratch. With the Design API, your app can read what’s in the design and generate new content based on that.” What’s next for Canva? According to Haneef, the Apps Marketplace serves as the cornerstone of the company’s vision to become the most adaptable visual communication platform, empowering teams of any size. The company is confident that by granting access to its design engine to developers and integration partners, it can swiftly broaden its product offerings. “We often say we’re only 1% of the way there. We have a huge vision for all that Canva can do, and we can’t build it all alone,” he said. “Our goal is to foster a vibrant ecosystem of apps (and in the future, Connections via the Connect API) so that we can make the latest innovations (including AI) available to our users and continue offering intuitive solutions to common design problems.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
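To make the Connect API concept concrete, here is a minimal Python sketch of the kind of integration described above: pulling design metadata out of Canva so another system can sync it. The base URL, path and response shape are placeholders invented for illustration, not Canva's documented API surface.

```python
import os
import requests

# Hypothetical Connect-style endpoint and response shape -- for illustration only,
# not Canva's documented API.
BASE_URL = "https://api.canva.example/v1"
TOKEN = os.environ["CANVA_ACCESS_TOKEN"]  # assume an OAuth token obtained elsewhere

def list_designs(folder_id: str) -> list[dict]:
    """Fetch design metadata from a folder so another system can sync it."""
    resp = requests.get(
        f"{BASE_URL}/folders/{folder_id}/designs",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("designs", [])

if __name__ == "__main__":
    for design in list_designs("marketing-q3"):
        print(design.get("id"), design.get("title"), design.get("updated_at"))
```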
14,244
2,021
"AI-powered marketing copy generator Anyword secures $21M | VentureBeat"
"https://venturebeat.com/uncategorized/ai-powered-marketing-copy-generator-anyword-secures-21m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI-powered marketing copy generator Anyword secures $21M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Anyword , an AI-powered platform for fine-tuning marketing copy, today announced that it raised $21 million in a financing round led by Innovation Endeavors with participation from Lead Capital and Gandyr Ventures. CEO Yaniv Makover says that the proceeds, which bring the company’s total raised to $30 million, will be used to bolster hiring, build out Anyword’s technology, and onboarding customers to the platform. Anyword’s growth comes as marketers increasingly express a willingness to embrace AI-driven creation tools. According to a survey by Phrasee, an Anyword rival, 63% of marketers surveyed would consider investing in AI to generate and optimize ad copy. Statista reports that 87% of current AI adopters are already using — or considering using — AI for sales forecasting and for improving their email marketing. And 61% of marketers say that AI is the most important aspect of their larger data strategy. “The company was originally founded as Keywee, a platform used by publishers such as the New York Times, NBC, and CNN to analyze each article they wrote and find audiences based on the keywords in these articles,” Makover told VentureBeat via email. “Writing has pretty much stayed the same process in the last few hundred years. Computers and word processing helped, but they didn’t materially change how we write to convey a message or a narrative, specifically for an intended audience and with a goal in mind. In marketing and sales, we are writing for someone and usually with a measurable objective. Incorporating data about which words, concepts, and styles work better for a specific audience and industry was our goal [when we pivoted].” Optimizing copy with AI Anyword claims to have trained a copy-generating model on two billion data points from A/B testing messages across industries, channels, and marketing objectives. Leveraging it, Anyword customers can create copy — including headlines, subheaders, email subject lines, text messages, descriptions, and captions — while understanding how different demographics might react to variations of the same copy. The platform’s tools can connect ad accounts and incorporate keywords and promotions (e.g., “new arrivals” and “free shipping”), tailoring copy to a specific length. Beyond this, they can optimize on-site copy to display specific messages to specific audiences. 
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Train our AI copywriting tool to write copy in your brand voice, similar to your competitors, or similar to your top performing live ads,” Anyword explains on its website. With Anyword, marketers plug in a URL, summary, or product description to generate copy. After choosing a format and tone, Anyword creates several versions of the copy, scored and sorted by predicted quality. From there, Anyword can rewrite and show comparisons between the variations, improving over time over existing ads. “Predicting how well a text variation will do for a goal and audience necessitates a special dataset. First, you need to know how a text variation did historically for a given audience. You need to have a breadth of data covering many styles and topics,” Makover said. “Our datasets of text variations and their respective performance metrics consist of millions of variations [to improve, for example,] conversion rates for … websites, emails, ads, social posts, and blog posts.” Competition Spurred by digital transformations that accelerated during the pandemic, a larger share of companies are expected to adopt AI technologies that automatically suggest and tailor marketing and sales materials. According to the Phrasee survey, 65% of marketers trust that AI can generate desirable brand language, and 82% believe that their organization would benefit from data that provides insights into how consumers respond to that language. Fifty-employee, Tel Aviv- and New York-based Anyword competes with Phrasee, which partnered with Walgreens early in the pandemic to create a targeted email campaign about COVID-19 vaccine availability. Other competitors include Instoried, CopyAI, Copysmith, Writesonic , and New York City-based Persado AI. While new startups in the “AI in marketing tech” segment arise with some frequency, Anyword is betting that its technology will enable it to stand out in a market that could be worth $40.09 billion by 2025. “We’ve been growing 35% month-over-month on average since launching Anyword in March,” Makover said. “Since the end of Q1 2021, we have acquired 1,200 customers. Our customers range from small businesses looking for better performance from their marketing content and ecommerce offerings to publishers who have significant volumes to agencies and enterprises looking for deeper integrations with their products and services.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,245
2,023
"Hollywood is on strike over AI, but companies see creative potential in digital humans | VentureBeat"
"https://venturebeat.com/ai/hollywood-is-on-strike-over-ai-but-companies-see-creative-potential-in-digital-humans"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Hollywood is on strike over AI, but companies see creative potential in digital humans Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Hollywood actors and writers are currently striking , and one of their biggest concerns is the impact of generative AI on their industry and their jobs. In a news conference last Thursday, Fran Drescher, president of the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) union, said AI poses an “existential threat to creative professions, and all actors and performers deserve contract language that protects them from having their identity and talent exploited without consent and pay.” However, a flock of high-flying generative AI video startups, including Synthesia, Hour One and Soul Machines, don’t see it that way. They view AI-generated avatars, or digital humans, as filled with powerful creative potential for business, Hollywood, and celebrities who consent to the use of their AI likenesses. Tackling the challenges of traditional video production Last November, for example, VentureBeat spoke with Natalie Monbiot, head of strategy at synthetic media company Hour One, who said she dislikes the word “deepfakes.” “Deepfake implies unauthorized use of synthetic media and generative artificial intelligence — we are authorized from the get-go,” she told VentureBeat. The idea, she explained, is that businesses can use synthetic media — in the form of virtual humans — to tackle the expensive, complex and unscalable challenges of traditional video production, especially at a time when the hunger for video content seems insatiable. In addition, synthetic media allows businesses to quickly and easily offer content in different languages, as well as to produce promotional video content at scale. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Just today, for example, Los Angeles-based startup Soul Machines , which recently added a ChatGPT integration to its “digital person” product, announced a partnership with K-Pop celebrity Mark Tuan , a member of boy band GOT7, with the launch of “ Digital Mark. ” The company claimed the launch is the “first time a celebrity is attaching their likeness to GPT,” allowing Tuan’s social following of 30 million fans to have one-on-one conversations with “him” on virtually any topic. 
A press release said that as K-Pop’s fan base continues to grow across the globe, Tuan’s new digital twin will “enable him to speak in multiple languages — starting with English but adding Korean and Japanese language capabilities in the near future.” Synthesia CTO calls digital humans a ‘natural progression’ for video creativity Jon Starck, chief technology officer at the London-based startup Synthesia , which recently hit a $1 billion valuation for its AI-powered platform that helps businesses generate promotional or educational videos from plain text — and got an infusion of funding from Nvidia — said that AI-powered digital humans have both creative and efficiency potential that can’t be ignored. “Video is a very creative thing. It’s a storytelling thing. It’s very visual and engaging,” he said. “But the whole process of creating video is probably the least creative thing you can imagine.” With today’s AI-powered video generation opportunities, “everyone becomes a great storyteller,” he added. Starck told VentureBeat this is a “natural progression” from previous AI-generated efforts in film and says the future could hold an entire movie made from synthetic data. It’s a bold statement, but Starck has been working on digital humans for two decades, when “nobody had ever heard of computer vision ” and he was working in the film industry, bringing 3D computer vision to technical artists working on movies. The problems he is working on now are “exactly the same problems we were working on 20 years ago,” he said. “I used to have eight cameras, now I’ve got 78 cameras. Now there are 24-megapixel cameras. Now we have the capability of solving the problems that I couldn’t [before].” Using actors to get the best dataset of high-fidelity human performance Synthesia’s researchers have taken a big step towards solving one of the thorniest computer vision problems: representing human performance at high fidelity, an essential building block in applications from film production and computer games to video conferencing. Right now, for example, AI tools like Synthesia’s are two-dimensional and don’t show a human being fully in motion with a 360-degree view, like you would see in a TV advertisement or a movie. To close the gap to production-level video quality, Starck and his team recently released HumanRF , an AI research project that captures a human being’s full-body appearance in motion from multi-view video input, and enables playback from novel, unseen viewpoints. To meet this challenge, Synthesia researchers needed to create a high-fidelity dataset of clothed humans in motion — which required, ironically, real actors. The company created the dataset, called ActorsHQ — consisting of 39,765 frames of dynamic human motion captured using multi-view video with a proprietary multi-camera capture system — by accessing the movements and performances of real actors in a U.K. studio, including some who are already available as avatars on the Synthesia platform. The actors “wanted to come back and be part of this future of potential 3D representations for 3D synthetic actors,” said Starck. Asked about the complaints of Hollywood’s striking writers and actors, Starck emphasized that Synthesia is not in the movie business. “We’re not replacing actors,” he said. “We’re not replacing movie creation. We’re replacing text for communication. 
And we’re bringing synthetic video to the toolbox for businesses.” That said, he said that from a personal standpoint, as someone who has worked in visual effects, he sees every invention as a new enabler. In the movie industry, he explained, it could take 18 months and millions of dollars to produce a couple of seconds of a blockbuster movie. “There are hundreds of artists sitting in dark rooms with very complicated tooling to be able to produce very exact results,” he said. “My view on the AI explosion is this is something that enables creativity for humanity.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,246
2,023
"Adobe product leader says AI won't kill graphic design, even as employees worry | VentureBeat"
"https://venturebeat.com/ai/adobe-product-leader-says-ai-wont-kill-graphic-design-even-as-employees-worry"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Adobe product leader says AI won’t kill graphic design, even as employees worry Share on Facebook Share on X Share on LinkedIn Image by Adobe Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Scott Belsky, chief product officer of Adobe, told VentureBeat on X-formerly-known-as-Twitter yesterday that while some industries “may not need to exist in the age of AI,” graphic designers will continue to flourish. His comments come following a report by Business Insider yesterday that inside Adobe, some employees are concerned that Adobe’s Firefly AI tools and Photoshop Generative Fill will kill graphic designer jobs and undermine the company’s business model. Belsky first wrote that in the age of AI, “the greatest innovation in a space is often eliminating the need for the space entirely.” in the age of AI, the greatest innovation in a space is often eliminating the need for the space entirely. When asked if he was referring to Adobe, Belsky responded with an example: “[T]ranslation of copy is an industry that may not need to exist in the age of AI.” Then, when asked if this applied to traditional graphic design, he said no. “No; AI will increase the surface area that creatives can consider and explore before finding even better solutions to pursue and iterate. [W]e see this in early research on how creative pros leverage these new tools: they’ll create more and better content … faster. [N]ot less,” he wrote. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The Business Insider article highlighted an internal staff meeting in June where an employee asked whether generative AI was putting Adobe “in danger of cannibalizing” its enterprise-targeted business. And the article pointed out that in an Adobe earnings call in June, Jefferies analyst Brent Thill said the “number one question” he gets from investors is whether AI will reduce Adobe’s “seats available.” Adobe sells its cloud subscriptions based on the number of seats, or licenses, for product access. If AI tools make it faster and easier to design, designers could be laid off and demand for the licenses could shrink. 
>>Follow VentureBeat’s ongoing generative AI coverage<< But when asked by VentureBeat whether there is a risk organizations will buy less graphic design software or fewer licenses as a result of AI tools, Belsky replied that “when any ambitious or growth oriented company can get more ‘ingenuity per person,’ they want more people (so they can do more products, create more content, achieve more). it’s a default human desire. engineers have become more productive annually for decades, yet demand grows.” when any ambitious or growth oriented company can get more “ingenuity per person,” they want more people (so they can do more products, create more content, achieve more). it’s a default human desire. engineers have become more productive annually for decades, yet demand grows. The discussions around Adobe’s AI impact come after VentureBeat coverage last month about the fact that a vocal group of contributors to Adobe Stock, which includes 300 million images, illustrations and other content that trained the Firefly generative AI models, say Adobe used their stock images for AI without express notification or consent. While this is certainly an issue for other text-to-image generative tools such as DALL·E 2, Stable Diffusion and Midjourney (which were trained on scrapes of imagery posted to the public web, including copyrighted imagery), they say it is particularly concerning for a company like Adobe, which has been deeply intertwined with the creative economy for decades. Now, Adobe Stock creators say Firefly’s popularity is making it far less likely that users will purchase stock images. In addition, a flooding of gen AI images into Adobe Stock is cannibalizing the platform, the creators say. >>Don’t miss our special issue: The Future of the data center: Handling greater and greater demands. << VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,247
2,023
"How to navigate today's conversational AI and text generative landscape | VentureBeat"
"https://venturebeat.com/ai/how-to-navigate-todays-conversational-ai-and-text-generative-landscape"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How to navigate today’s conversational AI and text generative landscape Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. OpenAI’s revolutionary chatbot ChatGPT has been all over the news in recent months, triggering technology giants such as Google and Baidu to accelerate their AI roadmaps. ChatGPT is built on OpenAI’s GPT language model and provides a variety of functions, such as engaging in conversations, answering questions, generating written text, debugging code, conducting sentiment analysis, translating languages and much more. Looking at the technologies of this moment in time, nothing seems to be as pivotal to the future of humanity as generative AI. The idea of scaling the creation of intelligence through machines will touch on everything that happens around us, and the momentum in the generative AI space created by ChatGPT’s sudden ascent is inspiring. How should enterprise business leaders react to this? We thought that, by looking under the hood of ChatGPT and disassembling the application to its individual capabilities, we could demystify the product and enable any sufficiently-innovative enterprise to identify the elements most appropriate for their strategic relevance. Thus was born this analysis and research. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! We analyzed the various functions that ChatGPT provided and created an industry landscape map of the companies that fulfill one or more of these functions. You can think of this as dissecting ChatGPT into its various anatomical parts and finding potential alternatives for each function with its own unique and targeted capabilities. The resulting text generative and conversational AI Landscape is shown below and consists of ten functional categories with a sampling of representative companies for each category. Breaking down the text generative and conversational AI landscape Generative AI is a term rising in popularity with ChatGPT. It refers to AI technology that can create original content such as text, image, video, audio and code. Our landscape is focused on the area of text generative AI because that’s the predominant function of ChatGPT. As you can see, the language models are at the bottom of the landscape because they form the fundamental building blocks of natural language processing ( NLP ) used for all the other functions. 
The sampling of language models shown here includes OpenAI’s GPT, Google’s LaMDA and BigScience’s BLOOM. To the left of the landscape, we have grouped the categories of text summarization, sentiment analysis and text translation into the overarching category of text analysis, which refers to the process of using AI to analyze unstructured text data for patterns, insights and intent. Text summarization companies use AI to summarize written texts into excerpts of the most important points. Companies in this category include QuillBot, Upword and spaCy. Sentiment analysis companies use AI to determine the emotions, opinions and tones inherent in written texts. Companies in this category include MonkeyLearn, Repustate and Cohere. Text translation companies use AI to translate written texts from one language to another. Companies in this category include ModernMT, TextUnited and Phrase. Human-like interaction; code, text and search capabilities In the middle of the landscape, we have grouped the categories of virtual assistants, chatbot-building platforms, chatbot frameworks and NLP engines into the overarching category of conversational AI. This encompasses technologies that interact with people using human-like written and verbal communication. Virtual assistant software responds to human language and helps the user with a variety of tasks and queries. Companies in this category include Augment, Replika and SoundHound. Chatbot-building platforms enable non-technical users to create and deploy chatbots without writing code. Companies in this category include Amelia, Avaamo and Boost AI. Chatbot frameworks and NLP engines enable developers to create chatbots using code, and also build the core components of NLP. Companies in this category include Cognigy, Yellow AI and Kore AI. To the right of the landscape, we have the categories of writers, coders and search. Writers use AI to create original written content and edit existing written content for grammar and clarity. Companies in this category include Jasper, Writesonic and Grammarly. Coders use AI to generate code from natural language inputs and debug existing code. Companies in this category include Tabnine, Replit and Mutable AI. Finally, search comprises AI-based search engines for the entire web or for an enterprise’s internal knowledge base. Companies in this category include Neeva, Perplexity AI and You.com. The ten categories Text summarization: These companies use AI to identify the most important information from long form texts and summarize them into short digestible excerpts. Other functions of these companies include keyword extraction, text classification and named entity recognition. Sentiment analysis: These companies use AI to determine the sentiment of the text as either positive, negative or neural, as well as the tone, emotion and intent behind the text. Sentiment analysis is often used in analyzing customer feedback and brand attitudes. Text translation: These companies use AI to translate text from one language to another, mostly for written text but also for voice and video recordings. Virtual assistants: These companies create voice-enabled or text-enabled assistants that help the user with a variety of tasks such as taking notes, scheduling appointments, recommending products and providing mental health therapy. Chatbot building platforms: These companies provide an interface for non-technical users to build and deploy chatbots without needing to write code. 
They usually include a visual builder to designate the flow of interaction with the chatbot. Chatbot frameworks and NLP engines: These companies provide an environment for developers to build and deploy chatbots using code, as well as companies that build the core component of natural language processing which converts human language into machine inputs. Writers: These companies use AI to generate written text for given topics such as essays, poems, blog posts and sales copy. They also help edit and paraphrase written text for grammar, tone, clarity, and style. Coders: These companies use AI to assist developers in generating code from natural language descriptions. They also help debug existing code and explain the reasoning behind their code edits. Search: These companies use AI to search the web for answers to questions about general knowledge, as well as companies that build custom search solutions for an enterprise’s own internal knowledge base. Language models: These models learn from an abundance of human written and spoken texts, and predict the probability of the next word in a specific sequence of words. They form the fundamental building blocks of NLP used for text generative and conversational AI. Broad landscape, evolving challenges As you can see, the landscape of functions similar to ChatGPT is broad, with a growing number of companies competing in each function. This infographic shows only a fraction of the 700-plus companies we have uncovered in the space, with more products and companies launching daily. Similar to other major technology shifts we have seen with the internet, mobile, and more recently in crypto, this early spring tide of market buildup consists of an explosion of activity that will continue to accelerate before shaking out and consolidating in the years to come. The obvious challenge for enterprise leaders in this phase of the market evolution will be navigating the landscape and identifying the true signals. What are the opportunities that can accelerate their businesses, provide new value to their customers or keep them competitive in a rapidly changing market? Facing the plethora of competing generative AI products, enterprise leaders need precise criteria for weighing and selecting the right ones for their creative and knowledge workforce. It may turn out that a portfolio of solutions would work best, and the role of knowledge and creative workers evolves from creating original content to comparing, collating and editing the best creative output from the multitude of generative AI tools. One thing is for sure; every enterprise must have a generative AI plan. Dong Liu and Nader Ghaffari are co-founders at Daybreak Insights. Special thanks to Arte Merritt for his review and feedback. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
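For teams evaluating the categories above, the barrier to a first experiment is low. The sketch below uses the open-source Hugging Face transformers library to run two of the listed functions, sentiment analysis and summarization, locally. Model downloads happen on first use; the default checkpoints are the library's own, not a recommendation drawn from the landscape.

```python
from transformers import pipeline

# Downloads default models on first run; swap in specific checkpoints as needed.
sentiment = pipeline("sentiment-analysis")
summarizer = pipeline("summarization")

review = "The onboarding flow was confusing, but support resolved my issue quickly."
print(sentiment(review))  # e.g. [{'label': 'POSITIVE', 'score': 0.98}]

article = (
    "Generative AI refers to systems that create original text, images, audio and code. "
    "Enterprises are evaluating these tools for customer service, marketing and software "
    "development, while weighing accuracy, cost and data-privacy concerns."
)
print(summarizer(article, max_length=40, min_length=10, do_sample=False))
```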
"
14,248
2,023
"Aisera embraces Microsoft AI copilot for advanced enterprise service experience | VentureBeat"
"https://venturebeat.com/ai/aisera-embraces-microsoft-ai-copilot-for-advanced-enterprise-service-experience"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Aisera embraces Microsoft AI copilot for advanced enterprise service experience Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today, Aisera , a provider of conversational AI tooling for customer and employee service, extended its partnership with Microsoft to deliver an AI copilot. The technology will leverage integration with Azure OpenAI service, which includes GPT-4 , and help enterprises optimize service experiences with behavior-driven personalization and smoother interactions. The new AI copilot is also expected to cut down costs as more and more companies look to meet customer and employee service expectations with limited resources and reduced budgets. AI copilot to enhance Aisera’s offering Aisera drives its business with an AI service experience platform (AISX) that incorporates different domain-specific conversational AI tools for employee and customer service, as well as AIOps. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! With the latest engagement, AISX will get support for the new AI copilot — backed by Microsoft Azure OpenAI service. This way, Aisera will bring together the latest ChatGPT and generative AI capabilities with its industry- and domain-specific large language models (LLMs), enabling enterprises to improve self-service, productivity, user engagement and personalization while decreasing resolution times. >>Follow VentureBeat’s ongoing generative AI coverage<< “Our partnership with Microsoft is to help deliver the ChatGPT experience to organizations that desire human-like conversations built on their domain-specific and industry-specific dictionary — an interaction that is personalized and contextually relevant to that enterprise,” Muddu Sudhakar, CEO and cofounder at Aisera, told VentureBeat. “This also mitigates hallucinations and accuracy issues with general-purpose models like ChatGPT and GPT-3/4. True, enterprises can fine-tune these models, but this requires having a strong team of data scientists. Yet enterprises usually do not have the staff for this — or the time to create these models,” Sudhakar said. Using the copilot, users can ask natural language questions via a chatbot in Teams, mobile app or voice and have it understand and generate answers that are contextually relevant to the organization. 
The technology is poised to handle many of the tasks for tier 1 and tier 2 support and auto-resolve user requests, lowering operating expenses and reducing workload for overwhelmed support desks. “Employees can ask what 401(k) funds are available to them, compare the funds and then change their allocation — all through our solution. Customers can inquire about their order status, make updates to their account or simply inquire about return policies. Lastly, organizations can benefit from copilot in auto-generating knowledge articles based on customer support logs or IT help desk tickets,” Sudhakar said. Results so far While the AI copilot has just been publicly announced, Aisera notes that multiple companies have already tested it, including Chegg and Gap. Chegg, in particular, has seen notable benefits such as 75% auto-resolution of support tickets, 73% improvement in employee satisfaction and 68% improved employee productivity for the service desk. The move adds Aisera to the growing list of enterprises leveraging OpenAI’s technology to enhance their product. Recently, analytic database Kinetica announced an integration with ChatGPT for conversational querying, while New Relic launched Grok , an assistant powered by OpenAI LLMs for enhancing observability. Microsoft itself is in the process of adding AI copilot across its suite of enterprise productivity tools, including Teams. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,249
2,023
"Creating the next wave of computing beyond large language models | VentureBeat"
"https://venturebeat.com/ai/creating-the-next-wave-of-computing-beyond-large-language-models"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Creating the next wave of computing beyond large language models Share on Facebook Share on X Share on LinkedIn Presented by VAST Data With access to just a sliver of the 2.5 quintillion bytes of data created every day, AI produces what often seem like miracles that human intellect can’t match — identifying cancer on a medical scan, a viable embryo for IVF, new ways of tackling climate change and the opioid crisis and on and on. However, that’s not true intelligence; rather, these AI systems are just designed to link data points and report conclusions, to power increasingly disruptive automation across industries. While generative AI is trending and GPT models have taken the world by storm with their astonishing capabilities to respond to human prompts, do they truly acquire the ability to perform reasoning tasks that humans find easy to execute? It’s important to understand that the current AI the world is working with has little understanding of the world it exists in, and is unable to build a mental model that goes beyond regurgitating information that is already known. Yann LeCun, AI Chief at Meta, recently said that current artificial intelligence systems like ChatGPT “are not even as smart as a dog,” though the limited reasoning abilities of large language models (LLMs) are offset by their large associative memory capacity. This makes them “a bit like students who have learned the material by rote but haven’t really built deep mental models of the underlying reality.” So, for all the hype, generative AI as we know it is only the beginning of the deep learning and automated discovery era. We are now just starting to see a glimmer of something that is greater than the ability to correlate and generate data when using simple language models, says Jeff Denworth, co-founder at VAST Data. “An AI that exists beyond the automation of routine tasks will be marked by machines that can understand the natural world—that can reason about that natural world,” he says, “and it will create mental models that will serve as the basis for entirely new discoveries.” He points to AlphaDev: the artificial intelligence (AI) system built by Google DeepMind that recently uncovered brand-new sorting algorithms that are up to 70% faster for shorter sorting sequences and about 1.7% faster for large ones, blowing away the algorithms that data scientists and engineers have been fine-tuning for decades. “That’s very different from asking a chatbot what the diameter of the earth is,” he adds. “Those are things that we know. 
But what you’re starting to see is that computers are starting to discover things that we don’t know.” We’re on the cusp of what he calls “AI-automated discovery,” or the potential to evolve AI from LLMs, which are currently limited to performing routine tasks, like business reporting or collating and synthesizing known information, into data-driven triggers where AI is autonomously seeking answers to questions unprompted by humans as new, natural, rich data enters a dataset. Unlocking brand-new knowledge at lightning speed Humans can take 20 years to become domain specialists, and then apply that thinking toward solving real problems. That specialization can be achieved by an AI computer today in a matter of minutes or seconds. A thousand data centers around the world all working on the same problem, each with trillions of cores and hundreds of petabytes or exabytes of data, can become a global computer, playing through scenarios in simulations at internet speeds—advancing the process of how we learn by light years, and making discoveries faster than humans will ever be capable of on their own. This kind of data-driven and event-driven automation expedites AI discovery for the types of use cases that impact all of humanity—and even expands the possibilities of discovery in areas uncharted or even not yet imagined by humans to date. “Imagine these machines tackling crop production, new approaches to sustainable energy, the elimination of disease,” Denworth says. “We think that these machines will find and discover whole new domains of science and mathematics on their own that are beyond our current evolutionary step.” But much has to change to make it happen, he adds. This new paradigm will require a brand-new way of approaching AI infrastructure. Building a central corpus of the world’s data This future of computing requires what Denworth refers to as a thinking machine (a nod to the 1980s parallel computing company), and will require us to embrace several new computing paradigms, from the nature of data structures to the nature of computing on data. And it will require a way to simplify and automate the process of implementing AI. “It’s easy to say we have a lot of data and a lot of machines, and therefore we’re ready,” he explains. “But the hard job is bringing it all together, so that the machines can see and share the data, particularly when organizations deal with things like data gravity and data privacy. You need to build new approaches to extend our understanding of data on a global scale and to create a form of anti-gravity for data and data processors.” The concept of a data platform also needs to change. Today’s leading data platform providers are largely integrating machine learning solutions upon systems that were fundamentally designed for business reporting, but numbers and tables are not the data constructs most humans use to interact with the world. “Sight, sound, touch, smell and taste – these are the senses that humans use to perceive the natural world, and by synthesizing the real-time data that comes from these sensors with our neural networks we develop understandings and realizations,” he says. 
“We want to build a computer that acts like that, a system that understands data (not in tables) but one that creates structure and understanding from the rich natural data that comes to us from all over the world.” Once this richer class of data gets fed into a thinking system, such a machine instantly and innately starts doing things like interpreting, correlating, and building new realizations upon this data, so that it’s just perpetually getting smarter about what’s happening around it, rather than just being prompted to process and learn upon human request. In order to give AI systems the greatest chance of creating discoveries, we must put data at the center of the system as a knowledge store and an experience trigger, where each data event becomes an expansion against our past learnings that in turn create new understandings. “If you can give training models access to the world’s data and the world’s processors and provide mechanisms for organization and processing, then we should be able to reduce the time it takes for us to achieve new discoveries,” he says. “At that point, machines won’t just assist humans to achieve new discoveries—these systems will allow us to advance the rate of discovery from generational cycles to processor clock cycles.” The need for an unstructured database This future depends on a next-generation approach to data management and database architecture, however, to lay the foundation for intelligent computers to collect, process and collaborate on data at a global scale in one unified computing environment. “The reality is that the next era of deep learning requires an integrated solution designed for the imperatives of tomorrow,” Denworth says. But here in the present, data-driven companies are launching increasingly sophisticated AI initiatives, and today’s data management constructs cannot easily deal with the multi-variant types of data these AI initiatives ingest. Organizations are forced to stitch together databases, data warehouses, data lakes, file systems and streaming platforms to make sense of this data deluge. What connects these systems are APIs, which often work with each other according to some lowest common denominator. VAST Data is simplifying the data management experience by breaking the tradeoffs that have resulted in this soup of infrastructure, and then by rethinking the relationship between structured data and unstructured data at the fundamental level. “Unstructured data, GPUs, global data sets, a variety of on–premises and cloud computers — these are the hallmarks of the environments that are being deployed by leaders in deep learning,” Denworth says. “The biggest hyperscale organizations have been building infrastructure for decades, but this has been the property only of the computing elite. On August 1, VAST will take organizations to a new place where these systems won’t be built from independent technologies that have been designed upon legacy concepts. With a full rethink, we can democratize systems of AI-automated discovery for everyone.” For a deep dive into VAST’s vision for the future of AI infrastructure, plus a look at how customers like Pixar, Zoom and The Allen Institute and partners like NVIDIA are harnessing this powerful new approach to deep learning, don’t miss VAST’s Build Beyond event on August 1st. Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. 
For more information, contact [email protected]. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,250
2,023
"Kinetica adds ChatGPT integration for 'conversational querying' of data | VentureBeat"
"https://venturebeat.com/data-infrastructure/kinetica-adds-chatgpt-integration-for-conversational-querying-of-data"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Kinetica adds ChatGPT integration for ‘conversational querying’ of data Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Arlington, Virginia-headquartered Kinetica , a company providing an analytic database to unlock value from spatial, time-series and temporal data, today announced an integration with OpenAI’s ChatGPT. The move, the company explains, enables “conversational querying” where enterprise users can enter natural language prompts to query their data assets. Previously, the process involved writing complex structured query language (SQL) queries, which restricted analytics to a select group of users and took time. This is the latest effort from a vendor to loop generative AI into its product and make it more intuitive and accessible for end customers. >>Follow VentureBeat’s ongoing generative AI coverage<< VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! How exactly does ChatGPT querying work? In order to perform a conversational query, an enterprise user has to go on Kinetica’s workbench UI and ask a question about their proprietary data, be it something plain and simple or a complex, previously-unknown query. Once the question is put in, the ChatGPT interface, powered by GPT-3.5 turbo, converts it into SQL and runs the query to provide insights for decision-making. “Workbench is a SQL notebook system and at the top of every notebook is a little text prompt that lets you ask a question that will generate a SQL block in your notebook,” Nima Negahban, the CEO and cofounder of the company, told VentureBeat. For instance, in order to optimize inventory, a user could ask something like “What is the status of our inventory levels and should we reroute active delivery vehicles to reduce the chances of products being out of stock.” The prompt, Kinetica says, will provide action-oriented insights in seconds. It could also be backed up with follow-up questions, which could uncover unexpected correlations and relationships that may not have been immediately apparent through traditional querying methods. While most analytic databases require data engineering, indexing and tuning to ensure rapid querying, Kinetica delivers similar performance through native vectorization. As part of this, it stores data in fixed-size blocks called vectors and performs query operations on these vectors in parallel, rather than on individual data elements. 
“Enterprise users will soon expect the same lightning-fast response times to random text-based questions of their data as they currently do for questions against data in the public domain with ChatGPT,” Negahban noted. Data remains safe Because the ChatGPT-driven querying capability deals with company data, which can be sensitive, the CEO also emphasized that the solution is designed so that the model never captures any part of the data it queries. “Kinetica automatically hydrates the GPT-3 context in an anonymized fashion with the necessary prompts and rules derived from the database metadata. This allows for the GPT model to generate the correct SQL query, given a user’s data model and question, without exposing the underlying detailed data to GPT,” he explained. Moving ahead, Kinetica plans to enhance the querying feature with GPT-4 integration and make it more widely available to enterprises. For the latter, the company will launch a programmatic SQL API that will open the capability up to other developers for use in their own analytic applications. It is also exploring other areas where its database might benefit from large language models (LLMs). “We have a number of ideas about how we are going to be leveraging LLMs in our roadmap. We are having dialogues with our clients, who are leaders in their respective industries such as healthcare, telecommunications, defense, banking, automotive and others, and they have some incredible use cases that we are exploring together,” Negahban said. Other enterprise technology vendors have also started integrating LLMs into their products, including Salesforce, Microsoft, ThoughtSpot, Domo and SiSense. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,251
2,023
"Hollywood actors remain on strike, endorse AI 'NO FAKES ACT' | VentureBeat"
"https://venturebeat.com/ai/hollywood-actors-remain-on-strike-over-ai-endorse-no-fakes-act"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Hollywood actors remain on strike over AI, endorse ‘NO FAKES ACT’ Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. While the Hollywood writers’ strike has officially ended with new protections against AI in screenwriting , there is no such luck for their creative counterparts, the actors, whose union the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) published a statement today saying talks with the film and TV industry CEOs have broken down, after the CEOs reportedly refused to counter SAG-AFTRA’s proposals for a new contract. SAG-AFTRA had proposed safeguards limiting the use of AI and 3D scanned likenesses of actors. “It is with profound disappointment that we report the industry CEOs have walked away from the bargaining table after refusing to counter our latest offer,” SAG-AFTRA’s statement, published on its website and social media channels, reads, later elaborating: “These companies refuse to protect performers from being replaced by AI, they refuse to increase your wages to keep up with inflation, and they refuse to share a tiny portion of the immense revenue YOUR work generates for them.” A new bill offers actors’ hope in controlling AI likenesses At the same time, SAG-AFTRA today announced its support for a new bill introduced by four bipartisan U.S. Senators — the “NO FAKES ACT” which stands for “Nurture Originals, Foster Art, and Keep Entertainment Safe Act,” from former presidential candidate Amy Klobuchar (Democrat from Minnesota), Marsha Blackburn (Republican from Tennessee), Chris Coons (Democrat from Delaware), and Thom Tillis (Republican of North Carolina). The bill text , uploaded by Deadline, “would prevent a person from producing or distributing an unauthorized AI-generated replica of an individual to perform in an audiovisual or sound recording without the consent of the individual being replicated.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! That would make it much harder for amateurs and aspiring creators to use AI to generate parodies of leading actors, but it would also help actors maintain control and profit off their likeness. However, the bill still needs to earn a majority of votes from both houses of Congress — the Republican controlled House of Representatives and Democrat-controlled Senate — and be signed by President Biden, to be enacted into law. 
That could take a while — and it’s far from a sure thing. Hollywood is in the midst of a collision of two rapidly advancing technologies: 3D scanning and AI As VentureBeat previously reported in our deep dive on the history, present, and future of AI and 3D scanning in Hollywood, AI and 3D scanning are two fundamentally distinct technologies. The former is much newer to the industry, but the latter has been used for decades, and was pursued by auteur filmmakers including James Cameron and David Fincher. 3D scans have until recently still needed an actor’s performance anchoring them to look real — with AI, they could be used to perform on their own, adopting the mannerisms of a living human being, or be used to create an entirely synthetic person, as some companies like visual effects studio Digital Domain are researching. In addition, The Information reported that major stars are also in talks with startups to digitize and 3D scan their likenesses and license them out. One such startup, Metaphysic, recently announced its own PRO tier exactly for this purpose, and it was previously reported by The Hollywood Reporter to be working with Tom Hanks, Anne Hathaway, and Octavia Spencer. Interestingly, Tom Hanks and news anchor Gayle King have also posted on their social accounts and issued statements warning their fans that both of their likenesses were being used (separately) in AI-generated videos to shill products without their advance knowledge or consent, something that the new NO FAKES ACT would prohibit. Yet, Metaphysic itself was born out of technology that was used to make unauthorized “deepfake” replicas of Tom Cruise that went viral on the social network TikTok several years ago. There is some irony, though perhaps a predictable one, in the fact that a project that started as an unauthorized parody is now going “legit” and earning the business of the very kinds of stars it was founded to lampoon. What SAG-AFTRA wants Like the Writers Guild of America (WGA), the union representing screenwriters, SAG-AFTRA has been openly concerned from the beginning of the strike about the growing availability and use of AI and 3D scanning in film and TV, specifically as a way of replacing human labor. As a letter from SAG-AFTRA’s general counsel Jeffrey Bennett states: “SAG-AFTRA maintains that the right to digitally replicate a performer’s voice or likeness to substantially manipulate a performance, or to create a new digital performance, is a mandatory subject of bargaining. In addition, the use of performer’s voice, likeness or performance to train an artificial intelligence system designed to generate new visual, audio, or audiovisual content is a mandatory subject of bargaining.” Yet, a number of actors have come forward this year alone stating they were paid only for one or a few days of work as “background” actors or extras, only to be asked to have their full bodies scanned for a 3D likeness that could be used repeatedly without paying them for additional days of work. In the case of the writers, the concern was about programs like OpenAI’s ChatGPT or Sudowrite (trained on GPT-3) being used to draft screenplays, and writers hired to “touch up” or edit said material and ensure it could be copyrighted (due to the fact that the U.S. Copyright Office has repeatedly stated AI-produced work is ineligible for copyright, and that a human creator must be involved for work to be eligible for copyright). 
While ChatGPT and Sudowrite are open to the public, in the case of actors, the actual technology behind 3D scanning their likeness is far less accessible — for now. Startups including Move AI are working on AI-driven 3D motion capture with a single smartphone (Move already has its Move One app with this capability being used in beta), which would lower the cost and resources needed to do this immensely. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,252
2,022
"How Anonybit plans to crack honeypots storing identity data | VentureBeat"
"https://venturebeat.com/business/how-anonybit-plans-to-crack-honeypots-storing-identity-data"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Anonybit plans to crack honeypots storing identity data Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Here’s one way to protect personal and business biometric data so that bad guys can’t find it and make money off it: Don’t store it all in one honeypot – whether it’s a primary or backup database. Startup Anonybit, which launched on Tuesday and announced a $3.5 million Series A funding round, has developed what it calls a “breakthrough decentralized biometrics infrastructure” that it claims addresses a market need for improved management of personal data and digital assets across a wide range of vertical industries. This is not a purely SaaS or on-premises security solution. Anonybit dices up sensitive identity data, including biometrics, private keys, and other digital assets, into anonymized bits that are distributed throughout a peer-to-peer network of nodes. The system then applies multi-party computing in a proprietary, patented manner in order to reconnect the bits in a decentralized way. In this way, there is never any identity data for hackers to use for creating false credentials. [Related: Decentralized identity: The key to the digital era? ] “Managing identity is central to every digital interaction we have today, and there is no organization that is immune to the challenge,” CEO Frances Zelazny told VentureBeat. “Our approach secures personal data and digital assets, filling a need that banks, fintech, retailers, crypto wallets, government agencies, and other stakeholders for strong authentication without maintaining central honeypots of personal data.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 2021 was a particularly bad one for cybersecurity, with the total number of cyberattack-related data compromises up 27% from 2020. Addressing digital security has been viewed as costly, time-consuming, and complicated, as evidenced by the $1.7 trillion that is expected to be spent over the next five years on cybersecurity and identity management. On the privacy side, numerous legal frameworks have emerged to address usage and consent issues. However, little has been done to deal with the root cause of the identity problem – central storage of personal data, Zelazny said. 
Anonybit, founded in 2018, uses AI and ML in all its processes and offers three products: Decentralized identity cloud for biometric solution and identity service providers to use with their algorithms and build privacy-preserving identity solutions; Turnkey decentralized biometric authentication for enterprises and embedded partners, leveraging state-of-the-art detection, biometric matching, decentralized storage, and integration into orchestration systems; Digital asset vault for private keys, backup passphrases, and crypto assets, using the platform’s biometric authentication capabilities to ensure that only the authorized user has access to these assets. “Anonybit gets to the root of the problem, giving attackers nothing to find and nothing to steal while protecting precious data and assets,” said Switch Ventures’ managing partner Paul Arnold, who led the Series A funding. “Their unique approach to solving the problem is disruptive.” How the AI is implemented In order for technologists, data architects, and software developers to learn more about how to utilize AI, VentureBeat asked the following questions of Zelazny, who offered our readers these details: VentureBeat: What AI and ML tools are you using specifically? FZ: We leverage open-source AI and ML biometric models and adapt them in a proprietary manner for Anonybit’s decentralized biometric network. VentureBeat: Are you using models and algorithms out of the box — for example, from DataRobot or other sources? FZ: We use some out-of-the-box models. For the biometric algorithms, we have our own, but the uniqueness of our platform is that it can support any modality or algorithms. In fact, for our decentralized biometrics cloud offering, we allow biometric solution providers to adapt their algorithm to our infrastructure so they can go to market with a privacy-by-design alternative to their traditional offering. VentureBeat: What cloud service are you using mainly? FZ: The infrastructure is designed to be cloud-agnostic. VentureBeat: Are you using a lot of the AI workflow tools that come with that cloud? FZ: We leverage many of the workflow tools, but when it comes to biometric processing, we had to develop some of our own. VentureBeat: How much do you do yourselves? FZ: Most of Anonybit’s technologies are home-grown. Today, Anonybit leverages AWS services extensively to build its cloud and ensure its scalability and resilience, but can easily work on Azure or Google Cloud. VentureBeat: How are you labeling data for the ML and AI workflows? FZ: We are using both manual tagging and automation to continuously train our biometric neural network. VentureBeat: Can you give us a ballpark estimate on how much data you are processing? FZ: The Anonybit network is developed with Kubernetes, so it is designed to scale. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,253
2,023
"Smart video analytics are redefining use cases from retail to healthcare | VentureBeat"
"https://venturebeat.com/ai/smart-video-analytics-are-redefining-use-cases-from-retail-to-healthcare"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Smart video analytics are redefining use cases from retail to healthcare Share on Facebook Share on X Share on LinkedIn Presented by Western Digital Whether we like it or not, artificial Intelligence (AI) and machine learning (ML) are becoming the digital brains of many advanced use cases today. When combined with the latest smart video cameras, high-res videos and images, and network video recorders (NVRs), smart video analytics is enhancing and reshaping our lives in ways we cannot even imagine and it is relevant across virtually all industries — education, transport, manufacturing, smart cities, healthcare, sports, agriculture, the list goes on. What was once impossible to do with the human eye, is now possible through the lens of smart video analytics. These systems can simultaneously look at and analyze multiple streams of video with incredible speeds, and zero-in on the smallest pixel of detail with high accuracy to detect and identify objects, people, events and anomalies in real-time — all without human intervention. Effective implementation of AI can help to eliminate human error with improved quality and accuracy. The technology is evolving, learning and growing in leaps and bounds every day. With great promise for the future, smart video analytics has given rise to innumerable and invaluable use cases. For example, today, smart videos can identify and detect variances in consumer behavior, which might be ignored by the human eye, and can help avoid the occurrence of unwanted activity. It is also proving to be extremely helpful in retail surveillance as it can detect potential shoplifters based solely on their movements. The cameras use AI technology to create real-time body position and motion models. Deep learning systems then train these smart cameras to analyze these models in real-time and recognize specific body movements associated with shoplifting. The camera can alert the store manager if it detects suspicious behavior to prevent loss. Another important use case is using AI-assisted video analytics to automate monitoring and notifications in agriculture to monitor crops, detect pests and diseases and optimize crop yields. The technology can also be used to monitor livestock, detect potential health issues and optimize feeding schedules. In sports, video analytics can analyze player and team performance, help coaches make tactical decisions, help monitor fan engagement, and show real-time graphics and stats for broadcast. Also, consider the healthcare industry. 
Smart video analytics can be used for a variety of things such as monitoring and improving exam room flows and wait times; safeguarding patient, visitor, facilities and medical equipment; and even providing patient alerts or helping nurses make virtual hospital rounds. Smart video analytics brings a greater ability to extract insights from video data, which ultimately allows businesses to drive improved operations, efficiency and security. Whether in a corporate setting, manufacturing, retail, city management, healthcare and so on, there are a lot of opportunities for AI-based video analytics, outside of the typical security realm. Smart storage for smart video While there are many facets to smart video infrastructure, let’s talk about storage implications. As camera resolutions increase, the greater detailed output not only delivers a larger picture, but it allows analytics systems to “see” and process more information. AI video analytics can take up a ton of storage, and it requires the right storage to bring information and insights to life. Moving from Full HD (1080p) resolution to Ultra HD / 4k (2160p) can double or even triple the size of the video stream, increasing bit rates and requiring more storage for video data. Using the surveillance storage capacity estimator , for example, a solution that employs 16 cameras recording 24 hours a day, 7 days a week, at 18 frames per second at 4K resolution and using H.265 compression generates 1.5 TB of video data every day. That’s about 45 TB in just one month or 547.5 TB in a year. In addition to transmitting a main video stream, modern network cameras also output auxiliary video streams, picture streams, and video metadata information, which all have different data characteristics, including structure, size and frequency of transmitted data for the hard disk drive (HDD) to process and store. These additional streams enhance both the usability of the video solution as well as the effectiveness of AI, and add workload and storage capacity requirements to the recorder’s storage subsystem. AI-based solutions may also implement an image reference database for pattern matching. Metadata information ties these images to video segments for security functions such as video verification. For example, when an employee swipes their access control card against a card reader, an intelligent security system may grab a high-resolution image of the employee taken from a camera at the entry point in near real-time, comparing the frame image against the employee’s ID photo for verification. All these factors drive increased workload demand in network recorders and create challenges for HDD storage. Not all HDDs are equal With growing trends to capture multiple data types and video streams per camera, and with deep learning solutions needing more video to train AI algorithms, there is one fundamental underpinning technology that cannot be overlooked, and that is storage. And not just any storage. While flash is ideal for on-camera, always-on video cameras and edge devices, HDDs are still the most efficient and cost-effective solution for storing massive amounts of video data, especially for analytics. Storing all this rich data comes with complexities. Western Digital has analyzed hundreds of DVRs and NVRs from various manufacturers and found that different methods are used to write data to storage devices. This leads to non-optimal data placement, inefficient drive activity, and impacts performance and reliability. 
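As a rough sanity check on the capacity figures cited above, the arithmetic can be reproduced from an assumed average per-camera bitrate. The roughly 8.7 Mbit/s value below is an assumption chosen to be consistent with the article's 1.5 TB/day example for 16 cameras at 4K, 18 fps and H.265; real bitrates vary with scene complexity and encoder settings.

```python
# Back-of-the-envelope surveillance storage estimate; the per-camera bitrate
# is an assumed average, not a measured or vendor-specified figure.
CAMERAS = 16
BITRATE_MBPS = 8.7              # assumed average per 4K/18 fps H.265 stream
SECONDS_PER_DAY = 24 * 3600

bytes_per_day = CAMERAS * BITRATE_MBPS * 1e6 / 8 * SECONDS_PER_DAY
tb_per_day = bytes_per_day / 1e12
print(f"{tb_per_day:.2f} TB/day")         # ~1.50 TB
print(f"{tb_per_day * 30:.1f} TB/month")  # ~45 TB
print(f"{tb_per_day * 365:.1f} TB/year")  # ~548 TB
```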
In addition, the HDD workload in a typical DVR or NVR appears to be random due to mixing of sequential video stream writes with metadata and AI database writes. Seek efficiency, or finding the data on the drive quickly, is the main contributor to successful HDD operation in surveillance recorders, and drives that can optimize seeks would be better suited for DVR and NVR workloads. Utilizing on-drive intelligence, a new generation of HDDs can recognize incoming video stream characteristics and data types, coalesce data together in cache and place data in specific track locations on disk, for optimal performance. It’s basically bunching all of the video and data streams together so data is not written all over the HDD. Advanced AI-enabled recorders, video analytics servers and deep learning solutions also require additional huge capacity, performance and workload capability. They must include storage that is designed and tested to stand up to 24/7 operations while minimizing frame loss or dropped frames to reliably capture the footage. With the right purpose-built drives, a smart video system can handle up to 64 single-stream HD cameras in addition to 32 concurrent AI streams. A high-capacity smart video HDD can also help businesses scale to meet growing storage needs. Keeping an eye on the future There is a significant value that businesses could be capturing from deploying AI in their facilities, which could include improving response times to critical events, enabling more sophisticated analysis of visual data or analyzing patterns to improve business intelligence. Storage — the right storage solution — will continue to be the key foundational technology for new smart video use cases. Smart video surveillance and video analytics has come a long way. No longer are we using video security and monitoring purposes just to see what is happening. We’re seeing the use of AI, ML and deep learning on video to predict what will happen. The future potential comes from extracting actionable insights, and we can only imagine what will come next. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,254
2,021
"Exposed admin password leads to massive surveillance camera breach at hundreds of businesses | VentureBeat"
"https://venturebeat.com/business/exposed-admin-password-leads-to-massive-surveillance-camera-breach-at-hundreds-of-businesses"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Exposed admin password leads to massive surveillance camera breach at hundreds of businesses Share on Facebook Share on X Share on LinkedIn A surveillance camera at a processor plant Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. ( Reuters ) — A small group of hackers viewed live and archived surveillance footage from hundreds of businesses — including Tesla — by gaining administrative access to camera maker Verkada over the past two days, one of the people involved in the breach told Reuters. Swiss software developer Tillie Kottmann, who has gained attention for finding security flaws in mobile apps and other systems, shared with Reuters recordings from inside a Tesla factory in China and a showroom in California. Additional footage came from an Alabama jail, hospital rooms, a police interview area, and a community gym. Kottmann declined to identify other members of the group. The hackers sought to draw attention to the pervasive monitoring of people after having found login information for Verkada’s administrative tools publicly online this week, Kottmann said. Verkada acknowledged an intrusion, saying it had disabled all internal administrator accounts to prevent unauthorized access. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Our internal security team and external security firm are investigating the scale and scope of this issue, and we have notified law enforcement [and customers],” the company said. Kottmann said Verkada cut off the hackers’ access hours before Bloomberg first reported the breach on Tuesday. The hacking group, if it had chosen to, could have used its control of the camera gear to access other parts of company networks at Tesla and software makers Cloudflare and Okta, according to Kottmann. Cloudflare said its security measures are designed to block a small leak from becoming a wider intrusion and that no customer data was affected. Okta said it was continuing to investigate but that its service was not affected. Tesla did not respond to a request for comment. A list of Verkada user accounts provided by the hacking group and seen by Reuters includes thousands of organizations, including gym chain Bay Club and transportation technology startup Virgin Hyperloop. Reuters could not independently verify the authenticity of the list or screenshots distributed by Kottmann, but they included detailed data and matched other materials from Verkada. 
Madison County Jail in Alabama, Bay Club, and Virgin Hyperloop did not respond to requests for comment. Verkada says on its website it has over 5,200 customers, including cities, colleges, and hotels. Its cameras have proven popular because they pair with software to search for specific people or items. Users can access feeds remotely through the cloud. In a 2018 interview with Reuters, CEO Filip Kaliszan said Verkada had deliberately made it easy for many users at an organization to watch live video feeds and securely share them, such as with emergency responders. Verkada has raised $139 million in venture capital, with the latest financing announced a year ago valuing the Silicon Valley startup at $1.6 billion. Verkada drew scrutiny last year after Vice reported that some employees had used company cameras and its facial recognition technology to take and share photos of female colleagues. Kaliszan later described the behavior as “egregious” and said three people had been fired over the incident. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,255
2,022
"Emotion AI's risks and rewards: 4 tips to use it responsibly | VentureBeat"
"https://venturebeat.com/ai/risks-and-rewards-of-emotion-ai-4-tips-for-the-enterprise"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Feature Emotion AI’s risks and rewards: 4 tips to use it responsibly Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Over the past two weeks, emotions have run high around the evolution and use of emotion artificial intelligence (AI), which includes technologies such as voice-based emotion analysis and computer vision-based facial expression detection. Video conferencing platform Zoom came under fire after saying it might soon include emotion AI features in its sales-targeted products. A nonprofit advocacy group, Fight for the Future, published an open letter to the company: It said Zoom’s possible offering would be a “major breach of user trust,” is “inherently biased,” and “a marketing gimmick.” Meanwhile, Intel and Classroom Technologies are working on tools that use AI to detect the mood of children in virtual classrooms. This has led to media coverage with unfortunate titles such as “ Emotion-Tracking Software Could Ding Your Kid for Looking Bored in Math. ” Finally, Uniphore, a conversational AI company with headquarters in Palo Alto, California and India, is enjoying unicorn status after announcing $400 million in new funding and a $2.5 billion valuation back in February. In January 2021, the company acquired Emotion Research Lab , which uses “advanced facial emotion recognition and eye-tracking technology to capture and analyze interactions over video in real-time to enhance engagement between people.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Last month, it introduced its Q for Sales solution, which “leverages computer vision, tonal analysis, automatic speech recognition and natural language processing to capture and make recommendations on the full emotional spectrum of sales conversations to boost close rates and performance of sales teams.” But computer scientist and famously fired, former Google employee , Timnit Gebru, who founded an independent AI ethics research institute in December 2021, was critical of Uniphore’s claims on Twitter. “The trend of embedding pseudoscience into ‘AI systems’ is such a big one,” she said. What does this kind of pushback mean for the enterprise? How can organizations calculate the risks and rewards of investing in emotion AI ? Experts maintain that the technology can be useful in specific use cases, particularly when it comes to helping customers and supporting salespeople. 
Commitment to transparency is key But, they add, an emotion AI investment requires a commitment to transparency. Organizations also need a full understanding about what the tools can and can’t do, as well as careful consideration around potential bias, data privacy and ROI. Today’s evolving emotion AI technologies “may feel a little bit more invasive,” admitted Annette Zimmerman, a vice president and analyst at Gartner who specializes in emotion AI. “For the enterprise, I think transparency needs to be the top priority.” In December 2021, Zimmerman published a Gartner Competitive Landscape report for the emotion AI space. She pointed out that since the pandemic, organizations are “seeking to add more empathy in customer experiences.” However, organizations also need to be sure the technology works and that the system is trained in a way that there is no bias introduced, she told VentureBeat. “For example, computer vision is very good at detecting obvious emotions like happiness and deep frustration,” she explained. “But for more subtle things like irony, or slightly annoyed versus very angry, the model needs to be trained, particularly on geographic and ethnic differences.” Emotion AI could become key differentiator Zimmerman, who highlighted Uniphore in her competitive landscape report, wrote that combining computer vision and voice-based emotion analytics “could become a key differentiator for the company.” In an emailed comment to VentureBeat, Patrick Ehlen, vice president of AI at Uniphore, said, “it’s important to call out that meeting recordings and conversational intelligence applications have become mainstream in today’s business world.” The company’s intent with Q for Sales, he continued, “is to make virtual meetings more engaging, balanced, interactive and valuable for all parties.” There are a few ways “we ensure there is no creepiness,” he added. “We ask for consent before the call begins, we don’t profile people on calls and we don’t perform facial ID or facial recognition.” In addition, he explained, all participants have the choice to opt-in rather than just opt-out with complete two-party consent at the beginning of each video meeting. Ehlen also wanted to address “confusion about whether we are claiming to have developed AI that ‘detects emotions’ or knows something about people’s internal emotional states.” This is not Uniphore’s claim at all, he said: “Rather, we are reading the signals people sometimes use to communicate about their emotions, using combinations of facial expressions and tone of voice, for example.” For example, he explained, the phrase ‘Nice day, isn’t it?’ “might appear to communicate one thing if you only consider the text by itself, but if it comes with a sarcastic tone of voice and a roll of the eyes, this communicates something else.” AI-driven emotional analysis is increasingly sophisticated Sentiment analysis for text and voice has been around for years: Any time you call a customer service line or contact center and hear “this call is being recorded for quality assurance,” for example, you’re experiencing what has become highly sophisticated, AI-driven conversational analysis. Zimmerman also highlighted Boston-based Cogito in Gartner’s Competitive Landscape as “a pioneer in audio-based emotion AI technology, providing real-time emotion analytics for call agent support/coaching, as well as stress-level monitoring.” The company first provided AI solutions to the U.S. 
Department of Veteran Affairs – to analyze the voices of military veterans with PTSD to determine if they need immediate help. Then, they moved into the contact center space with an AI-driven sentiment analysis system that analyzes conversations and guides customer service agents in the moment. “We offer real-time guidance in understanding how the call is going and the caller’s psychological state,” said Josh Feast, CEO of Cogito. “For instance, what’s the experience like for the parties on the call? What are fatigue levels? How is receptivity or motivation?” Then, the solution provides the agent with specific cues, perhaps advising them to adjust the conversation pitch or speed. Or, it could provide recognition that the other party is distressed. “That provides an opportunity to show some empathy,” he said. What enterprises need to know before investing in emotion AI Give emotion AI C-level attention “ Executives need to know that emotion AI has great possibilities along with great responsibilities,” said Theresa Kushner, data and analytics practice lead at NTT DATA Services. “Managing these complicated AI algorithms is something that needs C-level attention and can’t be delegated to data scientist teams or to operations staff. They’ll need to understand the level of commitment that implementing and operationalizing a controversial technology such as emotion AI requires and be closely involved to ensure it doesn’t get out of hand.” Consider the ROI When talking to different vendors, make sure they really demonstrate the ROI, said Zimmerman: “You need to understand the benefit of investing in this particular technology – does it help me to increase customer satisfaction? Or does it help me to increase retention and reduce churn?” Uniphore’s Ehlen added that organizations should also look for a solution that can bring an immediate ROI. “Solutions in this realm should be able to help augment human interactions in real time and then become more intelligent and bespoke over time,” he explained. Understand the algorithm and data collection Questions about data collection and integration with other vendor solutions should always be top of mind, said Kushner, while when it comes to emotion AI specifically, organizations should make sure the technology doesn’t violate any of their ethical boundaries. “Consider asking if they can explain the AI algorithm that generates this emotional response? What data do they use for the emotional side of emotion AI? How is it collected? What will we have to collect to enrich that dataset?” It’s also important to understand the technology’s real capabilities and limitations, Ehlen added: “Is it single mode or multimode AI? Siloed or fused? This will determine the level of context and accuracy that you can eventually derive.” Implement a test and learn framework These days, emotion AI technology has evolved to the point that organizations are deploying large-scale projects. “That requires thinking carefully about change management, setting up a steering committee and, critically, implementing some type of test and learn framework,” Feast said, which can lead to new use case ideas. “For example, we have customers who tested our technology to give agents real-time guidance, but they also realized they could use it to signal when agents are getting tired and need a break.” Balancing emotion AI’s risks and rewards According to Gartner’s Zimmerman, emotion AI technology adoption still has a long way to go, particularly when it comes to Big Tech. 
“I assumed that, given some of the technology advances that Amazon has revealed and some discussions that Google has had, that many more devices would have this functionality, but they can’t. I think from a technology perspective they could do it, but maybe it is the privacy issues.” Enterprise customers, too, have to weigh the risks and rewards of emotion AI. Kushner points out that a business may think they’d want to know how a customer really feels about their interaction with an online call center and employ emotion AI technology to find out. “But this risks alienating a customer if the emotion AI technology didn’t represent the customer’s feelings appropriately and customer support responds in a way that doesn’t fit the emotion the customer had expressed,” she said. To strike the right balance, said Uniphore’s Ehlen, vendors and customers alike need to build on trust, which, in turn, is built on open communication and choice. “We are openly addressing what our solution can do and being clear on what it cannot do,” he said. “We are giving customers the choice to integrate this tool into their engagements or not. For those who do opt in, we follow industry best practices for data privacy and protection.” The bottom line, said Feast, is that to succeed with emotion AI, enterprises need to make the technology use a win-win-win: “With every use case, I think organizations need to ask themselves ‘Is it good for the enterprise? Is it good for employees? Is it good for consumers?” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,256
2,017
"GPU database startup MapD raises $25 million led by NEA | VentureBeat"
"https://venturebeat.com/2017/03/29/gpu-database-startup-mapd-raises-25-million-led-by-nea"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages GPU database startup MapD raises $25 million led by NEA Share on Facebook Share on X Share on LinkedIn MapD platform Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. MapD Technologies , a startup that provides businesses with a data analytics service backed by graphics processing units (GPUs), announced today that it has secured an additional $25 million. The startup combines a GPU engine with visual analytics, allowing data analysts and data scientists to run queries on billions of rows of data. Initially, MapD was only available as a cloud service on top of Amazon Web Services (AWS), Google Cloud, Microsoft Azure, and IBM SoftLayer. But MapD is now also available as on-premises software. “Organizations are struggling to keep up with the exponential rise in data volumes they are facing using CPU compute,” wrote MapD’s founder and CEO Todd Mostak, in an email to VentureBeat. “This is driving a huge uptick in adoption of GPUs, and we are pushing to broaden that adoption curve to encompass analytic SQL and visualization workloads.” MapD claims to have dozens of customers across different sectors, such as energy, financial services, and retail. According to Mostak, Verizon uses MapD to accelerate its log analytics workloads, the U.S. government deploys the software to query and visualize geospatial data, and a New York City-based hedge fund uses it to find tradable insights. MapD sells annual subscriptions with support contracts. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Competitors include the very companies on which MapD runs its cloud service. AWS offers databases and a business intelligence tool, for example. IBM, Oracle, and others also pose competition in the database and analytics areas. Aside from these, Mostak names smaller GPU players like Kinetica, SQream, and BlazingDB as companies competing in this sector. New Enterprise Associates (NEA) led today’s round. Existing investors Nvidia, Vanedge Capital, and Verizon Ventures also joined. To date, MapD has raised a total of $37 million. It will use the new capital for hiring in sales and marketing and to further development of the product. MapD was founded in 2013 and is based in San Francisco, where it currently employs approximately 30 people. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,257
2,023
"Salesforce links with Databricks, Snowflake to build stronger enterprise data foundations | VentureBeat"
"https://venturebeat.com/data-infrastructure/salesforce-links-databricks-and-snowflake-enterprise"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Salesforce links with Databricks, Snowflake to build stronger enterprise data foundations Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Salesforce is kicking off Dreamforce with some major partner announcements. Today, the Marc Benioff-led customer relationship management (CRM) software leader announced that its proprietary Salesforce data Cloud, which brings together data points from different sources to host unified customer profiles in real-time, will support bi-directional data sharing and access with Databricks’ data lakehouse platform. The move, set to go live at a later stage, will allow joint customers of the companies to enrich their datasets and power additional use cases, including building and deploying more capable models targeting different business-critical problems. “Access to trusted, governed data is critical for every company, and the ability to combine that data with AI is now essential to remain competitive. Delivering best-in-class integrations with Salesforce builds on our longtime partnership…, unlocking massive value for our mutual customers, Adam Conway, SVP of Products at Databricks, said in a statement. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! However, as it turns out, Databricks isn’t the only one linking up with Salesforce for these benefits. Just last Friday, Snowflake, the Ali Ghodsi-led company’s arch-rival , also announced a similar strategic partnership to connect its data cloud with Salesforce’s data platform. How does integration with Salesforce data cloud work? Traditionally, moving data across platforms involved complex extract, transform and load (ETL) processes, forcing teams to invest significant resources in data engineering or ETL tools. With the new zero-ETL integration, dubbed BYOL or Bring Your Own Lake, Salesforce will make it as if the unified data (from its own and the other connecting platform) is housed at a single location – while maintaining the highest levels of governance, security and trust at the same time. At the core, Salesforce explained, BYOL features two key capabilities: Data Sharing and Data Federation. The former allows users to access Salesforce data cloud information from within the connecting platform, be it Databricks or Snowflake, while the latter allows the information from these two platforms to be accessed from within the Salesforce data cloud. 
This bi-directional data movement has two-way benefits for joint users. On Salesforce, external data from Snowflake or Databricks can help teams build richer customer profiles and use them across existing applications and processes across Customer 360 to deliver better customer experiences. Meanwhile, Salesforce’s CRM data on Databricks and Snowflake will enhance the data foundation of these platforms, allowing teams to power downstream use cases, including AI/ML or deeper analysis, for better business outcomes. For instance, by connecting Salesforce CRM data like transaction history and credit score with market analysis and economic trends, a Databricks user can create custom cross-selling models that recommend additional products or services to advisors based on real-time engagement data from clients. Similarly, combining Salesforce data such as sales and website visits with external market trends in Snowflake can help retailers identify changes in customer behavior, allowing for smarter inventory management and marketing decisions. Databricks models can be moved to Salesforce While Snowflake users will also be able to create custom AI models with the enriched data, the announcements suggest that Databricks users will be the first ones to get support to move their models to the Salesforce data cloud to power any application on the platform. “Data scientists and developers want to use their preferred tools and ML frameworks to build models that can drive predictions across the Salesforce platform. Databricks unifies the data and AI platform so those AI/ML teams can now build, train and govern their AI models in Databricks and easily bring those models into Salesforce and apply them across the Customer 360 platform,” Conway told VentureBeat. This movement of models will be executed through Salesforce’s Bring Your Own Model experience delivered through Einstein Studio announced last month , the company confirmed. “Customers (will) have the ability to view and control access, sharing capabilities, and permissions at the admin level through Unity Catalog. These permissions will apply across platforms so mutual customers will control the flow of their data/models as they move through the Lakehouse and Salesforce,” Conway noted while detailing the company’s effort on the security front. When can teams unify their data? Even though BYOL integrations have been announced for both Databricks and Snowflake, the connectors are not fully ready yet. For Snowflake, Salesforce confirmed that only BYOL Data Sharing is generally available and Data Federation is expected to roll out at a later stage. For Databricks, on the other hand, the integrations are still being developed. “Our engineering teams are working on building these integrations now and we expect them to be available for customers to preview early next year. We have thousands of mutual customers and it’s important that we make it as easy as possible to access and securely share data across platforms without having to maintain complex or costly data pipelines in multiple places,” Conway said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
14,258
2,023
"Anthropic and BCG form new alliance to deliver enterprise AI | VentureBeat"
"https://venturebeat.com/ai/anthropic-and-bcg-form-new-alliance-to-deliver-enterprise-ai-to-clients"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Anthropic and BCG form new alliance to deliver enterprise AI to clients Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. AI unicorn startup Anthropic , known for taking on OpenAI’s ChatGPT with its Claude 2 large language model (LLM) assistant, is moving to take its offerings to more enterprises. Today, the Dario Amodei-led company announced it has partnered with Boston Consulting Group (BCG) to provide its clients with “direct access” to Claude 2 and Anthropic’s AI tech. The engagement will see BCG clients use the AI models across different strategic solutions, ideally driving innovation and improving the productivity of their respective teams. It’s the latest in a series of team-ups between giant global consulting firms (BCG is among the “ Big Three ” management consultancies in the world by some measures) and AI tech providers. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! It is another big win for Anthropic ‘s bid to remain competitive in the white-hot enterprise AI space, coming on the heels of the $100 million investment and work agreement to build AI for South Korean telecom giant SKT announced last month. How exactly will Anthropic help BCG clients? While Anthropic will provide the technology, BCG will advise the clients on its strategic applications and help them integrate the models for business results. BCG hasn’t publicly shared specific applications for Anthropic’s AI, but has confirmed that the integration will be used for synthesizing long-form documents and research, including supporting market research and customer insight synthesis. It emphasized Anthropic’s “constitutional” approach to safe AI — in which AI models and applications are bound to specific moral and ethical frameworks determined by their human creators — noting that Claude has been designed to be “helpful, honest, and harmless.” Other areas of application will be accelerating fraud detection, demand forecasting and writing-related tasks. This includes drafting test scripts and specifications in enterprise resource planning transformations or supporting HR with writing job specifications and finance with report generation. BCG said it will run pilots for enterprises interested in deploying the models. 
Bottom-lining the impact of AI on enterprises “The large enterprises I talk with are focused on harnessing value and bottom-line impact from AI, and doing that in the most effective and ethical way possible,” Sylvain Duranton, global leader of BCG X, BCG’s tech build and design unit, said in a press release. “Aligning these two aspects of AI is a challenge and the price for getting it wrong can be immense, both financially and in reputational harm. Our new collaboration with Anthropic will help deliver that alignment on ethics and effective GenAI.” Even prior to their partnership being announced today, BCG had used Anthropic’s models for its own internal systems. Plus, the companies also recently co-hosted a workshop at the United Nations on how to leverage the opportunities of generative AI while managing the risks at the same time. “Enterprise leaders today want to tap into AI’s transformative potential while managing risks responsibly. BCG is a leader in guiding enterprise companies through technological change and an active advocate for safe AI deployment. We look forward to supporting their clients as they build and innovate with Claude,” Neerav Kingsland, head of strategic partnerships at Anthropic, said in the statement. Deployment of AI growing via leading providers While the partnership with BCG marks a growth step for Anthropic, which is valued at an estimated $5 billion , it is not the only AI company pushing its offering through industry partnerships — and driving the adoption of AI across functions. Back in February, Microsoft-backed OpenAI, the highest-funded player in the space, announced a partnership to combine its AI tools and platforms, including ChatGPT, with Bain & Company’s digital implementation capabilities and strategic expertise to help the latter’s clients around the world identify and implement the value of AI. Similar partnerships have also been forged between Cohere and McKinsey as well as between PwC and OpenAI. Most recently, EY made headlines with the launch of EY.ai , a platform that brings together a complete AI ecosystem with capabilities to boost clients’ adoption of AI. As these efforts continue, access to AI will be democratized, while driving the business of the AI vendors and providers ahead. According to estimates from McKinsey , with generative AI’s implementation, retail and consumer packaged goods companies alone could see an additional $400 billion to $660 billion in operating profits annually. Across sectors, it has the potential to generate $2.6 trillion to $4.4 trillion in global corporate profits. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,259
2,022
"What technical leaders can learn from my experience with a $500,000 bug | VentureBeat"
"https://venturebeat.com/datadecisionmakers/what-technical-leaders-can-learn-from-my-experience-with-a-500000-bug"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community What technical leaders can learn from my experience with a $500,000 bug Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. When I started the social media app Skout in 2007, I was blown away by our traction. We had over 50 million installations globally, had raised over $22 million from Andreessen Horowitz, and there were hundreds of incredibly talented employees making it all happen. Despite our success, and unbeknownst to us, there were many 1-star reviews on the Google Play app store in the Polish language, dinging our Android app in a language we didn’t understand. This small piece of the overall Skout community was frustrated and telling us exactly what we needed to know, that the app was unusable in Poland, but we weren’t listening. The big cost of a tiny bug As it turned out, a tiny bug cost us $500,000 in lost revenue and lasted more than six months. No one in Poland could use the Skout Android app for half a year because our data parser couldn’t identify the location of our Polish customers — forcing the app to crash every time it was opened. We eventually stumbled on our parser problem by happenstance, and it took us 10 minutes to fix. It took us another six months to recuperate our revenue losses. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! We had no tools or processes in place that could’ve helped us understand the potential impact of this bug, and the reality is most companies are in a similar situation. Being a good product leader means knowing what information should inform your decision-making, and how to use this information to determine priorities and team direction. Throughout this process, I learned three major lessons about building technical products that prioritize users. 1. Create strong and constant feedback loops. The people building your product should be in constant communication with the people interacting with customers, like sales and customer success. Every company should ensure there are systems in place that allow for the two teams to consistently collaborate and share information. This feedback loop should also be formalized and documented so that it’s easier for everyone at the company to understand and implement key insights. 2. Foster a culture of data-driven decision-making. While “trusting your gut” has its place in business, when prioritizing product decisions, new features or fixing bugs, data is critical. 
Humans inherently have biases that can lead to decisions that might impact product utilization, customer happiness and even revenue (as seen in my case). For example, a U.S.-based team might prioritize Apple before Android, because it’s a platform that’s more popular in the U.S. — even if the company is building a global product with an international customer base. Data-driven decision-making eliminates any opinions and ensures what’s best for the business comes first. 3. Ensure you track and measure the right indicators. It can be easy to feel bogged down by data or to be unsure about what data sources are actually most important to the product and business. The good news is that there is one entity testing your product every day, for every update in every language and configuration, and on every platform and device. That entity is your user base, and they’re telling you what is and what is not working. So take the time to understand what your users are telling you to ensure consistent measurement. From reviews in the App Store or on Google Play, customer support tickets in Zendesk, and social media interactions in Twitter or Reddit — it’s possible to paint a clear picture of what features customers need and what bugs should be prioritized by looking at these attributes together. Ensuring you understand what your users are telling you in real-time and in any language, and then acting on that information, is critical in today’s fast-paced development environment where there are so many solutions to choose from. Christian Wiklund is founder & CEO of unitQ. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,260
2,023
"Why the latest $1 billion AI startup doesn't want to beat OpenAI | VentureBeat"
"https://venturebeat.com/ai/why-the-latest-1-billion-ai-startup-doesnt-want-to-beat-openai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why the latest $1 billion AI startup doesn’t want to beat OpenAI Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Imbue, formerly known as Generally Intelligent, may now be enjoying a $1 billion valuation — and unicorn status — after this week’s funding round of $200 million led by Astera Institute , NVIDIA, Cruise CEO Kyle Vogt, Notion co-founder Simon Last, and other investors. But the AI research lab, which focuses on building custom, reasoning AI agents, doesn’t see itself in direct competition with OpenAI, Anthropic and other foundation model companies. Cofounder Josh Albrecht told VentureBeat that “we’re quite bullish on the diversity and to be more of an ecosystem.” And the idea of a diverse ecosystem where different companies provide different models for different needs is exciting for Imbue, one of the very few woman-led AI unicorns, added the company’s other cofounder Kanjun Qiu. “It feels like we’re at the very beginning of something huge,” she said. “This is the first time computers have had intelligence. That’s so crazy. So what we’re really excited about is like, how can we make that accessible to everyone so that everyone can imbue intelligence and be able to use that intelligence.” That speaks to Imbue’s M.O., which is developing large language models (LLMs) optimized for reasoning abilities. “We build foundation models, large foundation models optimized for reasoning,” said Qiu. “We believe, essentially, that reasoning is the core blocker to agents that work really well.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The company believes that the ability to reason is crucial for effective AI agents. Robust reasoning allows agents to handle uncertainty, adapt their approaches, gather new information, make decisions, and deal with the complexities of the real world. Imbue adopts a “full stack” approach to develop reasoning models, which involves training foundation models, prototyping experimental agents and interfaces, building robust tools and infrastructure, and studying the theoretical foundations of deep learning. AI agents with reasoning can open the black box Imbue trains its own very large models optimized for reasoning, with over 100 billion parameters. This allows them to iterate rapidly on training data, architecture, and reasoning mechanisms. 
Powering Imbue’s “wide” approach to training datasets is a ~10,000 strong Nvidia H100 GPU cluster. The goal is to train models with strong reasoning capabilities in order to build “AI agents that are able to help us accomplish bigger goals in the world,” said Qiu. Currently, Imbue focuses on building agents for everyday tasks like writing code or analyzing policy submissions. As Albrecht explained, “We have some of those and but for the most part, they’re really bad. And, you know, I think you’ll see that when you look at where agents are today.” While Imbue’s ultimate objective is to enable anyone to build their own AI agents, the company’s initial focus is on developing reasoning models for internal enterprise uses. Imbue specifically concentrates on agents that can code, as coding improves reasoning and provides a practical test-bed for evaluating the effectiveness of their models. According to a blog post, Imbue believes that coding agents are strategically important and can significantly enhance research and engineering capabilities. A key difference from models like ChatGPT, according to Qiu, is Imbue’s focus on “making models explain their reasoning and give some references.” This “unpacking the black box,or explainability, improves transparency and trust over models that just output answers. As AI systems grow more powerful, understanding what’s happening inside the “black box” becomes increasingly important. Hence why Imbue is working to shed light on model reasoning. “Making it not a black box is a good user experience,” said Qiu. Imbue’s goal is developing AI where “you can check [it]. It’s just a better experience when you can really understand what’s going on.” Albrecht noted this focus on explainability is key to building trust: “We want these things just to be software tools that you use. It just does what you expect.” Imbue is also advancing scientific understanding of different models and approaches to training neural networks. As Albrecht explained, “We want to understand what’s happening inside those weights, what’s happening inside those black boxes inside of deep learning.” AI ecosystem primed for a PC-like revolution When asked about Imbue’s business model, Qiu and Albrecht signaled an open approach. Qiu noted, “it’s going to really depend on what capabilities of the model are easy and hard to build with.” If building applications directly on their models proves difficult, “it might make more sense for us to go direct to the end user and build agents that they can use,” Qiu explained. However, “If it’s the case that it’s actually much easier to build on top, then we might enable other people to build on top of them.” Albrecht agreed Imbue’s focus is on “tools for empowering other people to build on top of” the company’s work.. By making its reasoning-focused models accessible, Imbue aims to serve both businesses and individuals. As Qiu said, “probably both will happen just like Apple in their App Store.” “It really harkens back to what was the dream of the personal computer, the whole idea is that it was personal,” said Qiu. “Hopefully in the future, I have all of my own custom software, and you have all of your own custom software. It’s all different.” Rather than direct competition with other AI labs, Imbue envisions “an ecosystem where different people can actually really change their own computer to what they want it to be,” Albrecht said, “We very much want this to be democratized, individually driven. 
The user is the one who’s in control.” In its first year since coming out of stealth, Imbue has made progress experimenting with internal agents built on its models. “Now we’re training models and we’re experimenting with agents internally and that’s where we are going to keep going on that and continue pushing the models on reasoning,” said Qiu. Their goal remains developing systems that can reliably help users through an emphasis on robust and “street smart” AI. While some question high valuations in the sector, Imbue remains focused on the foundational work. As Albrecht noted, “We’re going to keep doing the same thing that we’ve been doing and just trying to understand how these things work and build things that are useful.” The vision is that reasoned, trustworthy AI tools will unlock vast potential when widely accessible. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,261
2,023
"How to use large language models and knowledge graphs to manage enterprise data | VentureBeat"
"https://venturebeat.com/ai/how-to-use-large-language-models-and-knowledge-graphs-to-manage-enterprise-data"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How to use large language models and knowledge graphs to manage enterprise data Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In recent years, knowledge graphs have become an important tool for organizing and accessing large volumes of enterprise data in diverse industries — from healthcare to industrial, to banking and insurance, to retail and more. A knowledge graph is a graph-based database that represents knowledge in a structured and semantically rich format. This could be generated by extracting entities and relationships from structured or unstructured data, such as text from documents. A key requirement for maintaining data quality in a knowledge graph is to base it on standard ontology. Having a standardized ontology often involves the cost of incorporating this ontology in the software development cycle. Organizations can take a systematic approach to generating a knowledge graph by first ingesting a standard ontology (like insurance risk) and using a large language model (LLM) like GPT-3 to create a script to generate and populate a graph database. The second step is to use an LLM as an intermediate layer to take natural language text inputs and create queries on the graph to return knowledge. The creation and search queries can be customized to the platform in which the graph is stored — such as Neo4j, AWS Neptune or Azure Cosmos DB. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Combining ontology and natural language techniques The approach outlined here combines ontology-driven and natural language-driven techniques to build a knowledge graph that can be easily queried and updated without extensive engineering efforts to build bespoke software. Below we provide an example of an insurance company, but the approach is universal. The insurance industry is faced with many challenges, including the need to manage large amounts of data in a way that is both efficient and effective. Knowledge graphs provide a way to organize and access this data in a structured and semantically rich format. This can include nodes, edges and properties where nodes represent entities, edges represent relationships between entities and properties represent at-tributes of entities and relationships. There are several benefits to using a knowledge graph in the insurance industry. First, it provides a way to organize and access data that is easy to query and update. 
Second, it provides a way to represent knowledge in a structured and semantically rich format, which makes it easier to analyze and interpret. Finally, it provides a way to integrate data from different sources, including structured and unstructured data. Below is a 4 step approach. Let’s review each step in detail. Approach Step 1: Studying the ontology and identifying entities and relations The first step in generating a knowledge graph is to study the relevant ontology and identify the entities and relationships that are relevant to the domain. An ontology is a formal representation of the knowledge in a domain, including the concepts, relations and constraints that define the domain. Insurance risk ontology defines the concepts and relationships that are relevant to the insurance domain, such as policy, risk and premium. The ontology can be studied using various techniques including manual inspection and automated methods. Manual inspection involves reading the ontology documentation and identifying the relevant entities and relationships. Automated methods use natural language processing (NLP) techniques to extract the entities and relationships from the ontology documentation. Once the relevant entities and relationships have been identified, they can be organized into a schema for the knowledge graph. The schema defines the structure of the graph, including the types of nodes and edges that will be used to represent the entities and relationships. Step 2: Building a text prompt for LLM to generate schema and database for ontology The second step in generating a knowledge graph involves building a text prompt for LLM to generate a schema and database for the ontology. The text prompt is a natural language description of the ontology and the desired schema and database structure. It serves as input to the LLM, which generates the Cypher query for creating and populating the graph database. The text prompt should include a description of the ontology, the entities and relationships that were identified in step 1, and the desired schema and database structure. The description should be in natural language and should be easy for the LLM to understand. The text prompt should also include any constraints or requirements for the schema and database, such as data types, unique keys and foreign keys. For example, a text prompt for the insurance risk ontology might look like this: “Create a graph database for the insurance risk ontology. Each policy should have a unique ID and should be associated with one or more risks. Each risk should have a unique ID and should be associated with one or more premiums. Each premium should have a unique ID and should be associated with one or more policies and risks. The database should also include constraints to ensure data integrity, such as unique keys and foreign keys.” Once the text prompt is ready, it can be used as input to the LLM to generate the Cypher query for creating and populating the graph database. Step 3: Creating the query to generate data The third step in generating a knowledge graph involves creating the Cypher query to generate data for the graph database. The query is generated using the text prompt that was created in step 2 and is used to create and populate the graph database with relevant data. The Cypher query is a declarative language that is used to create and query graph databases. It includes commands to create nodes, edges, and relationships between them, as well as commands to query the data in the graph. 
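To make steps 2 through 4 concrete, here is a minimal Python sketch. It assumes a hypothetical llm_complete() helper wrapping whatever GPT-3 or other LLM client is used, and illustrative node labels and relationship types for the insurance risk ontology; graph access uses the official neo4j Python driver, with placeholder connection details.

from neo4j import GraphDatabase

# Hypothetical wrapper around the LLM used to generate Cypher (e.g. a GPT-3 completion call).
def llm_complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM provider here")

# Placeholder connection details for the graph DBMS (Neo4j in this sketch).
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Step 2: natural-language text prompt describing the ontology and its constraints.
CREATION_PROMPT = """
Create a graph database for the insurance risk ontology.
Each Policy has a unique id and is associated with one or more Risks.
Each Risk has a unique id and is associated with one or more Premiums.
Each Premium has a unique id and is associated with one or more Policies and Risks.
Include uniqueness constraints to ensure data integrity. Return only Cypher statements.
"""

# Prompt template for the intermediate natural-language query layer.
QUESTION_PROMPT = """
Translate the question into a single read-only Cypher query for a graph with
labels Policy, Risk, Premium and relationships (:Policy)-[:COVERS]->(:Risk),
(:Risk)-[:PRICED_BY]->(:Premium). Return only Cypher.
Question: {question}
"""

def build_graph() -> None:
    # Step 3: the LLM turns the prompt into Cypher, for example
    #   CREATE CONSTRAINT policy_id IF NOT EXISTS FOR (p:Policy) REQUIRE p.id IS UNIQUE;
    cypher = llm_complete(CREATION_PROMPT)
    with driver.session() as session:
        for statement in cypher.split(";"):
            if statement.strip():
                session.run(statement)  # Step 4: ingest the statements into the graph database

def ask_graph(question: str) -> list[dict]:
    # LLM as intermediate layer: natural-language question in, Cypher out, knowledge back.
    cypher = llm_complete(QUESTION_PROMPT.format(question=question))
    with driver.session() as session:
        return [record.data() for record in session.run(cypher)]

# For instance, ask_graph("Which policies have a high-risk rating?") might generate:
#   MATCH (p:Policy)-[:COVERS]->(r:Risk) WHERE r.rating = 'high' RETURN p.id

Because the creation and search queries are customized to the target platform, the same pattern carries over to AWS Neptune or Azure Cosmos DB, substituting their respective query languages and drivers.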
The text prompt created in step 2 serves as input to the LLM, which generates the Cypher query based on the desired schema and database structure. The LLM uses NLP techniques to understand the text prompt and generate the query. The query should include commands to create nodes for each entity in the ontology and edges to represent the relationships between them. For example, in the insurance risk ontology, the query might include commands to create nodes for policies, risks and premiums, and edges to represent the relationships between them. The query should also include constraints to ensure data integrity, such as unique keys and foreign keys. This will help to ensure that the data in the graph is consistent and accurate. Once the query is generated, it can be executed to create and populate the graph database with relevant data. Ingesting the query and creating a knowledge graph The final step in generating a knowledge graph involves ingesting the Cypher query and creating a graph database. The query is generated using the text prompt created in step 2 and executed to create and populate the graph database with relevant data. The database can then be used to query the data and extract knowledge. The graph database is created using a graph database management system (DBMS) like Neo4j. The Cypher query generated in step 3 is ingested into the DBMS, which creates the nodes and edges in the graph database. Once the database is created, it can be queried using Cypher commands to extract knowledge. The LLM can also be used as an intermediate layer to take natural language text inputs and create Cypher queries on the graph to return knowledge. For example, a user might input a question like “Which policies have a high-risk rating?” and the LLM can generate a Cypher query to extract the relevant data from the graph. The knowledge graph can also be updated as new data becomes available. The Cypher query can be modified to include new nodes and edges, and the updated query can be ingested into the graph database to add the new data. Advantages of this approach Standardization Ingesting a standard ontology like insurance risk ontology provides a framework for standardizing the representation of knowledge in the graph. This makes it easier to integrate data from different sources and ensures that the graph is semantically consistent. By using a standard ontology, the organization can ensure that the data in the knowledge graph is consistent and standardized. This makes it easier to integrate data from multiple sources and ensures that the data is comparable and meaningful. Efficiency Using GPT-3 to generate Cypher queries for creating and populating the graph database is an efficient way to automate the process. This reduces the time and resources required to build the graph and ensures that the queries are syntactically and semantically correct. Intuitive querying Using LLM as an intermediate layer to take natural language text inputs and create Cypher queries on the graph to return knowledge makes querying the graph more intuitive and user-friendly. This reduces the need for users to have a deep understanding of the graph structure and query language. Productivity Traditionally, developing a knowledge graph involved custom software development, which can be time-consuming and expensive. With this approach, organizations can leverage existing ontologies and NLP tools to generate the query, reducing the need for custom software development. 
Another advantage of this approach is the ability to update the knowledge graph as new data becomes available. The Cypher query can be modified to include new nodes and edges, and the updated query can be ingested into the graph database to add the new data. This makes it easier to maintain the knowledge graph and ensure that it remains up-to-date and relevant. Dattaraj Rao is chief data scientist at Persistent. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,262
2,023
"Lack of trust slowing down AI revolution in medical settings: GE Healthcare report | VentureBeat"
"https://venturebeat.com/ai/lack-of-trust-slowing-down-ai-revolution-in-medical-settings-ge-healthcare-report"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Lack of trust slowing down AI revolution in medical settings: GE Healthcare report Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. At a time when AI is all the rage, a new survey from GE Healthcare has highlighted a significant level of distrust and skepticism around its use in medical settings. The Reimagining Better Health study of 5,500 patients and patient advocates and 2,000 clinicians found that the majority of doctors believe that AI has the potential to transform healthcare. At the same time, many feel that the technology is not ready yet — and remains marred by roadblocks such as biases. The findings come as a number of healthcare giants continue to look at and experiment with AI models, including generative technologies like ChatGPT and conversational AI, to improve patient experience and outcomes, automate tasks and enhance productivity. AI is here but concerns remain Today, whenever anyone talks about AI, they mention how the technology is revolutionizing patient care, be it via drug discovery or predicting an individual’s best treatment plan. In the GE Healthcare survey, clinicians iterated similar benefits, with 61% saying the technology can help with decision-making, 54% saying it enables faster health interventions and 55% suggesting it can help improve operational efficiencies. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The possibilities are endless, but many remain concerned about the risks associated with the adoption of AI in the field. Specifically, 55% of survey respondents said AI technology is not yet ready for medical use and 58% implied that they do not trust AI data. For clinicians with more than 16 years of experience, the skepticism level was even higher, with 67% lacking trust in AI. Clinicians indicated that the biggest reason for this distrust is the potential for algorithms to produce unfair or discriminatory outcomes due to various factors such as incomplete training data, flawed algorithms or inadequate evaluation processes. As many as 44% of the respondents said the technology is subject to built-in biases. Secondly, clinician awareness on the technologies involved is not often up to the mark. The study found that only 55% of surveyed clinicians feel they get adequate training on how to use medical technology. How to build confidence? 
As GE Healthcare CTO Taha Kass-Hout points out, a thoughtful, data-driven approach — where efforts are made to ensure data quality and transparency — is the key to building confidence among clinicians who are on the fence about AI technology. “We pay special attention to where data sets come from and the characteristics of the population sampled,” Kass-Hout told VentureBeat. “We also evaluate the algorithms that classify and organize data and look at the AI formulation itself and clinicians’ feedback when updating these algorithms.” To get the ball rolling, the CTO said, companies should drive training/education programs where clinicians are guided on all things AI, starting from how it works to how it can augment their work. “As an industry, we need to build clinician understanding of where and how to use it and when it can be trusted fully versus leaning on other tools and human expertise,” said Kass-Hout. “I refer to this as ‘breaking the black box of AI’ to help clinicians understand what is in the AI model.” This includes what data it comprises — age, gender, lab results, remote monitoring, medical history, genetic variant or biomarker, lesion progression in subsequent images — so clinicians can better understand what is influencing the AI output. “Transparency on what influences the model and how it can be adjusted with a consistent feedback loop over time is critical to building confidence in AI technology among clinicians,” he noted. Massive potential As healthcare systems around the world face extreme pressures, clinicians are burning out and considering leaving the industry. In fact, according to the World Health Organization , there could be a shortage of 10 million health workers by 2030, when 1.4 billion people will be 60 or more. In such scenarios, AI-driven systems could come in and eliminate repetitive low-level tasks to help workers focus solely on patients’ care, said Kass-Hout. “There are places where technology can help reduce administrative tasks, better allocate resources and reduce burnout,” he said. GE HealthCare’s Command Center is a great example of this, he said. The platform is helping hospitals use real-time utilization data to better allocate resources. “Using AI technology, hospitals can redirect ambulatory services to bring patients to facilities with lower utilization — helping to reduce burnout,” Kass-Hout said. In another example, Hyro , a company providing plug-and-play conversational AI assistants for the healthcare industry, is automating tasks like patient registration, routing, scheduling, IT helpdesk ticketing and prescription refills, which constitute roughly 60-70% of inbound calls and messages into health systems. “While we are still in the early stages of seeing the true impact of these technologies, with appropriate human supervision, AI can help to reduce the burden of data query and analysis on clinicians so that they can be focused on what really matters: Improving patient outcomes,” Kass-Hout noted. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
14,263
2,023
"Friend or foe: Defining industry responsibility for AI-based technology | VentureBeat"
"https://venturebeat.com/ai/friend-or-foe-defining-industry-responsibility-for-ai-based-technology"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Friend or foe: Defining industry responsibility for AI-based technology Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Artificial intelligence (AI) is technology’s latest streaking comet: A rapidly advancing and spectacular novelty promising more intelligence, automation and ease of life. Yet, are we enabling a technology that will one day overpower humans and command life as we know it? Specifically, the arrival of AI-based tools such as ChatGPT and DALL-E have dazzled us with their dynamic capabilities as well as unnerved us with their staggering potential. The current debate on AI is hinged on broad philosophical questions and the public’s response. What do people make of all this? But as these tools are increasingly embraced by corporations and organizations seeking to spark innovation, improve productivity and drive profits, a new question is raised: What is the industry’s responsibility to the public in introducing AI to our daily life? ChatGPT: AI darling or danger? In an astonishingly short period of time, AI has advanced from a crawl to an all-out sprint. No piece of tech embodies that acceleration more completely than ChatGPT. The average person probably doesn’t understand the level of change this tool has introduced. In a nutshell, ChatGPT collects knowledge from around the internet, aggregating information and building answers to virtually any question or problem. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! On the surface, this sounds less than revolutionary, but ChatGPT has the extraordinary ability to answer user questions with an informed, coherent, well-reasoned answer — and almost instantly. It can also follow complex instructions in natural language and solve difficult problems with statistical accuracy. Some say this is an amazing “breakthrough” that represents thinking, rationalization and decision-making outside of the human mind. Two considerations to raise before casting your vote: First, the internet isn’t always accurate. GIGO, a popular acronym in AI circles, which means “garbage in, garbage out.” If the knowledge you collect and add up is based on garbage, your answer will be garbage. The ChatGPT model is designed to collect data from the internet to yield an answer that, based on statistical probabilities, is the most accurate. But if “most accurate” isn’t actually accurate, then a user’s answer is still, essentially, garbage. 
Will ChatGPT replace search? So what’s the harm? Let’s play out the scenario of shifting from the widespread use of search engines to aggregating tools such as ChatGPT, which has already begun in earnest. If AI tools are aggregating information, users don’t see the original content page, which means they also don’t see any ads. If they don’t see ads, eventually, real content will become scarce. The world of SEO will disappear, as the only information available for aggregation will be that generated by parties of interest. Funded organizations will create content based on their own goals, and the open internet as we know it today will no longer exist. There simply will be no incentive to write objective content. The internet is the essence of freedom of speech. With easy access and no controls around who and what you can post, search engines are a sea of opinions and bias. In most cases, information found on the web has not been evaluated for authority, reliability, objectivity, accuracy and currency. In early 2020, particularly with the onset of the pandemic, the internet has been a hosting vehicle for misinformation or “fake news,” some intentional and some simply due to a lack of credible sources. Nonetheless, ChatGPT , along with an array of other AI data aggregation tools, marks the latest leap in the ongoing evolution of AI. Without more controls to ensure content posted to the internet is objective, validated, accurate and current, the value propositions of these types of tools is questionable. Credible AI advances deserve support The story of AI isn’t strictly a cautionary tale. The technology is already being used by companies in areas ranging from enhanced safety and security to improved shopping experiences. And its long-term potential has far more significant implications. AI’s ability to consume huge amounts of information that mankind simply can’t grasp is wildly promising. Take healthcare, for instance. A few years ago, at one of my startups, we found that doctors were following nearly 50 diagnostic protocols per case — a huge drain on time and resources because humans can’t process that amount of information (which is, by the way, constantly growing). Consequently, we created a decision support system that does the processing for doctors. If you can harness AI tools to summarize, make recommendations and suggest proper procedures in those situations, they could dramatically improve the healthcare industry’s reach and effectiveness. The aggregation and use of several types of data can easily be verified through human sources and other platforms. ChatGPT , for instance, has many practical uses and reduces, if not eliminates, several types of research work that is resource and time-intensive. Furthermore, competencies will surely be developed to address this uncharted area of the AI realm, in which the ability to ask the right questions of aggregating software will become a profession of its own. What is the industry’s responsibility for AI innovation? While there are many companies with altruistic intentions, the reality is that most organizations are beholden to stakeholders whose chief interests are profit and growth. If AI tools help achieve those objectives, some companies will undoubtedly be indifferent to their downstream consequences, negative or otherwise. Therefore, addressing corporate accountability around AI will likely start outside the industry in the form of regulation. Currently, corporate regulation is pretty straightforward. 
Discrimination, for instance, is unlawful and definable. We can make clean judgments about matters of discrimination because we understand the difference between male and female, or a person’s origin or disability. But AI presents a new wrinkle. How do you define these things in a world of virtual knowledge? How can you control it? Additionally, a serious evaluation of what a company is deploying is necessary. What kind of technology is being used? Is it critical to the public? How might it affect others? Consider airport security. As a public citizen, what is the level of privacy that you’re willing to sacrifice to achieve a higher degree of safety and security? That’s a question between vendors, users and lawmakers. In some locales, for instance, facial recognition technology isn’t allowed. But what if airports were provided a very specific list of known terrorists that this technology could instantly identify? At the end of the day, responsibility and deployment decisions are shared between vendors and end users. An idealistic approach to AI The whole world of AI is just now awakening. We see it every day in all that we do, from the facial recognition security on our iPhones to the viewing recommendations we receive from Netflix. But much of what drives progress in AI is the financial incentive. I’m passionate about AI and my company’s contributions because I believe our work translates into a positive impact. But I’m not naïve: Not everyone working with these technologies feels the same way. And that’s all the more reason that we should tirelessly strive to put AI regulation and deployment in the hands of responsible, trustworthy people. AI holds the power to dramatically change the world — for better or worse. Wielding that power isn’t the responsibility of the industry alone. In the same way that companies have begun to define and make their organizational values regarding climate change, human rights and similar issues public, they will need to evaluate (and constantly re-evaluate) their stance on the purpose and usage of AI. If the industry fails to hold itself to account in that regard, the public — as it has shown time and again — most certainly will. Liam Galin is CEO of BriefCam. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,264
2,022
"10 startups riding the wave of AI innovation | VentureBeat"
"https://venturebeat.com/business/10-startups-riding-the-wave-of-ai-innovation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 10 startups riding the wave of AI innovation Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Organizations are increasingly adopting AI-enabled technologies to address existing and emerging problems within the enterprise ecosystem, meet changing market demands and deliver business outcomes at scale. Shubhangi Vashisth, senior principal research analyst at Gartner , said that AI innovation is happening at a rapid pace. Vashisth further noted that innovations including edge AI, computer vision, decision intelligence and machine learning will have a transformational impact on the market in coming years. However, while AI-powered technologies are helping to build more agile and effective enterprise systems, they usher in new challenges. For example, Gartner notes that AI-based approaches if left unchecked can perpetuate bias, leading to issues, loss of productivity and revenue. AI is fueled by data and if there are errors along the data pipeline , AI models will produce biased results. Only 53% of AI projects make it from prototype to production, according to Gartner research. But it’s not all doom and gloom for the ecosystem. A new survey by McKinsey revealed AI high performers following the best practices are deriving the most benefits from AI and professionalizing or industrializing their capabilities. As more startups ride the next wave of AI to innovate for the enterprise, some startups look poised to lead the pack in 2022 and beyond. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Tracking the pack A report published last month by Statista showed the number of AI-focused startups worldwide was 3,465 in 2018, with 1,393 in the U.S. alone. Another State of AI report from CBS Insights last year said AI startup funding hit a record high of $17.9 billion in Q3. Many players in the ecosystem are jostling to lead the pack with enough investment dollars. But which startups in the ever-evolving AI startup space might require a closer look for enterprises? Here are 10 AI startups that are demonstrating upward growth trajectories in a fast-paced market and whose CEOs have articulated to VentureBeat over the past few months a broader context to their key differentiators, strategies and traction. Below are vital details on these 10 AI startups that are worth watching across diverse industries, including retail, finance, cybersecurity, devops and more. 
Each company is ranked by its total funding to date, with quotes and metrics supplied during interviews with VentureBeat.
DataStax
Founded: 2010 Founder(s): Jonathan Ellis, Matt Pfeil Headquarters: California, U.S. Total funding to date: $227.6 million
Real-time data company DataStax says it helps enterprises unleash the value of real-time data to quickly build the smart, high-growth applications required to become data-driven businesses. Some of the leading digital services used daily for streaming, gaming, social networks, ecommerce and many others are built on DataStax. Companies like Verizon, Audi, ESL Gaming and many others are using DataStax solutions — including its NoSQL cloud database, Astra DB, and its unified event streaming technology, Astra Streaming — to build real-time, high-scale applications that power their businesses. According to DataStax chairman and CEO Chet Kapoor, DataStax provides an open stack for real-time data, built on the world's most scalable database (Apache Cassandra) and the most advanced streaming technology (Apache Pulsar), in an open, cloud-native architecture. The company's open stack helps developers easily build real-time applications that run their businesses. Kapoor said these developers continue to tap the power of advanced event streaming technology based on Apache Pulsar to act instantly on data, drive dynamic customer experiences and take advantage of ML and AI — all on a single data stack that works. He said DataStax uses modern APIs that allow developers to skip the complexity of multiple OSS projects and APIs that don't scale. DataStax claims its modern data APIs "power commerce, mobile, AI/ML, IoT, microservices, social, gaming and interactive applications that must scale-up and scale-down based on demand." Kapoor noted that DataStax has an edge over other players in the industry because it offers the only open stack that unifies data in motion and data at rest for real-time use, available on any cloud and with pay-as-you-grow pricing.
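To make this concrete, here is a minimal, illustrative sketch of reading from an Astra DB (Cassandra) table using the open-source DataStax Python driver — one of several ways to access the stack Kapoor describes. The keyspace, table, bundle filename and token variable below are hypothetical placeholders, not details supplied by DataStax or VentureBeat.

import os
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# Connect to a (hypothetical) Astra DB instance with its secure connect bundle and an application token.
auth = PlainTextAuthProvider("token", os.environ["ASTRA_DB_APPLICATION_TOKEN"])
cluster = Cluster(cloud={"secure_connect_bundle": "secure-connect-demo.zip"}, auth_provider=auth)
session = cluster.connect("ecommerce")

# Read the latest activity for one customer from an illustrative table.
rows = session.execute(
    "SELECT event_type, event_time FROM customer_activity WHERE customer_id = %s LIMIT 10",
    ("c-1001",),
)
for row in rows:
    print(row.event_type, row.event_time)

cluster.shutdown()

A real application would layer its own data model and scaling policies on top; the sketch only shows how little boilerplate a managed Cassandra session requires.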
Visier
Founded: 2010 Founder(s): John Schwarz, Ryan Wong Headquarters: Vancouver, Canada Total funding to date: $216.5 million
Canadian SaaS company Visier Inc. is an HR analytics platform offering cloud-based solutions for workforce analytics and workforce planning. To achieve better team and business management outcomes, leaders need to start by asking the right questions about their workforce. Ryan Wong, cofounder and CEO at Visier, told VentureBeat that Visier provides solutions that relay fast, accurate people data so businesses can enhance productivity and performance, increase employee satisfaction and retention, ensure profitable career planning and ethically upgrade future decision making. Wong said Visier develops its solution with a combination of Scala, Angular, open-source algorithms and proprietary technologies. He said Visier uses AI to enrich the data of an organization with standardized information, enabling organizations to better compare and understand trends over time. He also said Visier provides proven ML predictions that have been verified across hundreds of enterprises. "The prediction models learn patterns from the employee data of organizations and synthesize them into easy-to-understand and actionable information. Visier also uses AI to support analysts in the organization by analyzing the data of an organization as it is created, highlighting and alerting users to new patterns, outliers and potential issues." While Visier has competition in niche people analytics vendors like One Model and Crunchr, Wong said the company is designed to help organizations accelerate their people analytics strategy in three key areas where other systems and analytics processes fail or fall short. These areas include data management, deployment and user experience. Visier's list of competitors also includes HCM suite analytics vendors like Workday and Oracle, as well as DIY people analytics using generic BI tools like Tableau and Power BI. The company continues to focus on answering the important questions business leaders need to ask in order to shape a better business model overall. Having raised $125 million in a series E funding round last year, Visier is on the path to expanding its global influence. Customers include Electronic Arts, Uber, Adobe and more. Visier is expanding its presence in 75 countries with much room to grow.
Vic.ai
Founded: 2016 Founder(s): Alexander Hagerup, Kristoffer Roil, Rune Løyning Headquarters: New York, U.S. Total funding to date: $62.7 million
The founders of Vic.ai set out to reimagine accounting using autonomy and AI. Kristoffer Roil, cofounder and COO at Vic.ai, said Vic.ai is ushering in a new era of intelligent accounting by eliminating manual data entry and completely automating invoice processing — the most manual and inefficient task in accounting. According to Vic.ai cofounder and CEO Alexander Hagerup, Vic.ai uses proprietary AI technology with algorithms that, having been trained on more than half a billion pieces of data, can handle invoices of all types and formats. The AI operates at up to 99% accuracy, and customers see up to 80% process improvement. Vic.ai also provides customers with business intelligence. By deriving valuable information from financial transactions in real time, leaders can gain a financial edge by making better decisions faster. Roil said that, unlike RPA solutions, Vic.ai's platform doesn't require rules, templates or configuration to work, as it has been trained on over half a billion invoices and continues to learn from data every day. Reading an invoice is easy, he said, but classifying it correctly requires intelligence — either by a human or, more efficiently, by an AI solution like Vic.ai. "By pretraining Vic.ai with historical data, you start with incredibly high accuracy rates. Over time, the system learns, adapts and improves to the point where a large percentage of invoices can be processed autonomously. It isn't only able to read the invoice, but it's also able to classify a number on an invoice and the correct type of cost," said Hagerup. While Vic.ai's biggest competitors include AppZen, ABBYY, Smartli and MineralTree, the company will continue to pioneer the use of autonomy and intelligence to improve productivity, decision-making and ROI within accounting and finance processes.
BUDDI.AI
Founded: 2013 Founder(s): Ram Swaminathan, Sudarsun Santhiappan, Venkatesh Prabhu Headquarters: New York, U.S. Total funding to date: Undisclosed
The healthcare industry is seeing an astronomical increase in the use of AI, with a report by Gartner saying healthcare organizations' strategic understanding of AI has matured rapidly. 
New York-based deep learning platform company BUDDI.AI is on a quest to bring digital transformation to the healthcare industry with AI. BUDDI.AI provides clinical and revenue cycle automation solutions for healthcare. The company claims its AI-enabled solutions help turn unstructured data in healthcare organizations into actionable insights for those along the continuum of care. BUDDI.AI cofounder and CEO Ram Swaminathan told VentureBeat that BUDDI.AI's platform extracts clinical context and automates functions that improve patient care, enhance clinical documentation, streamline medical coding accuracy and improve reimbursements — all of which are integral to a healthy revenue cycle. Swaminathan said that over the past six-plus years, BUDDI.AI has developed an ensemble of proprietary algorithms for natural language processing, clinical contextual graphs, natural language generation, negation detection, optical character recognition, tabular column extraction and more. The company has more than 50 AI-as-a-service (AIaaS) offerings designed specifically for automating healthcare functions, with some of the industry's best efficacy in production use, according to Swaminathan. BUDDI.AI's competitors include traditional manual medical coding and medical billing shops, while it considers practically all other semi-automation companies — like Optum, 3M, Epic, Cerner, eClinicalWorks or Athenahealth — as collaborators. However, Swaminathan said BUDDI.AI is differentiated from all of them because it autonomously performs medical coding and medical billing across all outpatient medical specialties. He said BUDDI.AI does this by using deep learning algorithms combined with sophisticated systems built by experts — offering contractual guarantees of over 95% accuracy on codes and claims for more than 70% of the monthly volumes.
Hyperproof
Founded: 2018 Founder(s): Craig Unger Headquarters: Bellevue, Washington Total funding to date: $22.3 million
Hyperproof is a compliance operations SaaS platform that aims to make it easier for companies to follow security and compliance protocols. CEO and founder Craig Unger started Hyperproof to ensure businesses could complete their compliance work without the redundant, time-consuming and faulty manual processes that often exist. According to Unger, Hyperproof plans to leverage ML in several different ways — including eliminating repetitive compliance tasks and providing meaningful risk insights to users so that they can make better, more strategic decisions. "Hyperproof will use ML to help our users automatically identify/flag the overlapping requirements across various compliance frameworks — so they can see areas where they're already meeting requirements and reuse their compliance artifacts to satisfy new requirements." Later this year, Hyperproof will unveil ML-enabled solutions that automatically identify opportunities for users to set up integrations that will pull in compliance data, and also help users gauge how prepared they are for an upcoming audit. Coalfire's 2020 survey found that 51% of cybersecurity professionals are spending 40% or more of their budgets on compliance. With $16.5 million in series A funding raised in Q4 2021, Hyperproof is helping businesses scale and gain visibility by staying compliant. Unger said Hyperproof is the only platform laser-focused on compliance operations to support the people in the trenches who are overwhelmed with compliance/assurance demands from their organization's customers and regulatory bodies.
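The cross-framework mapping Unger describes can be illustrated with a generic text-similarity sketch. This is not Hyperproof's implementation, and the two abbreviated control sets below are invented for the example; it simply shows how overlapping requirements might be surfaced automatically from control text.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical, abbreviated control descriptions from two frameworks.
soc2_controls = {
    "CC6.1": "Logical access to systems is restricted through role-based authorization.",
    "CC7.2": "System components are monitored for anomalies and security events.",
}
iso_controls = {
    "A.9.1": "Access to information and systems is limited to authorized users based on roles.",
    "A.12.4": "Event logs recording user activities and security events are kept and reviewed.",
}

texts = list(soc2_controls.values()) + list(iso_controls.values())
vectors = TfidfVectorizer(stop_words="english").fit_transform(texts)
similarity = cosine_similarity(vectors[: len(soc2_controls)], vectors[len(soc2_controls):])

# For each control in the first framework, report its closest counterpart in the second.
for i, soc2_id in enumerate(soc2_controls):
    best = similarity[i].argmax()
    print(f"{soc2_id} most closely matches {list(iso_controls)[best]} (score {similarity[i, best]:.2f})")

A commercial platform would work from much richer control catalogs and curated mappings, but scoring textual overlap so evidence can be reused across frameworks is the general idea such a feature automates.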
Data privacy is a major concern in business today, according to Unger. He said the capability to efficiently track, implement and enforce ongoing compliance measures enables organizations to meet higher goals while securely protecting their employees, customers and shareholders. All of this contributes to risk management, audit preparedness and seamless operations. Hyperproof has built dozens of integrations with cloud services that house compliance data — including AWS, Azure, GitHub, Okta, Jamf, Jira, Zendesk and others — enabling automated evidence gathering and seamless collaboration between organizational stakeholders.
Strivacity
Founded: 2019 Founder(s): Keith Graham and Stephen Cox Headquarters: Virginia, U.S. Total funding to date: $11.3 million
Keith Graham and Stephen Cox claim they are reinventing the customer identity and access management (CIAM) space by putting the "C" back in CIAM. Legacy vendors in this space built their solutions primarily for B2E use — prioritizing security and compliance above all else, usually leaving customer experience as an afterthought. Strivacity provides a low-code solution that adds secure CIAM capabilities to a brand's online properties fast, so brands can scale to customer demand, grow revenue, stay compliant with fast-changing privacy regulations and personalize their service. Strivacity ingests data derived from ML-based behavioral models as a risk indicator at any point in the consumer lifecycle — helping companies make critical decisions, like whether to allow a particular lifecycle event to proceed or to shut it down entirely when it seems too risky. Companies that believe customer experience, security and compliance are equally vital to the success of their business benefit from Strivacity's approach to CIAM. For example, a technology company that works with Strivacity noted that Strivacity provides a comprehensive approach to CIAM and is intentional about meeting all the right stakeholders' needs, from customers to security teams to marketing. "We hear from our customers that, on average, using Strivacity versus another provider reduces development and operational costs by 50% with our workflows and APIs that you can drop right into your apps," said Graham.
Lucinity
Founded: 2018 Founder(s): Gudmundur Kristjansson Headquarters: Reykjavík, Iceland Total funding to date: $8.1 million
Lucinity CEO and founder Gudmundur Kristjansson told VentureBeat that Lucinity is on a quest to change the world with its anti-money laundering (AML) technology, which empowers banks, fintechs and others in the financial services ecosystem to make data-driven decisions. Today, barely 1% of money laundering instances are detected or recovered, despite growing regulations and strains on compliance professionals, according to Kristjansson. He said Lucinity's API-first approach enables it to deploy cutting-edge tech throughout the company's stack — such as Spark, Kubernetes and React — which has proven to be a successful scaling strategy. Lucinity's unique experience in banking, compliance, regulation and data science has helped the company develop a new approach to tackling money laundering — harnessing the best of human intelligence and augmenting it with advanced AI. Its proprietary SaaS platform helps banks quickly identify suspicious behaviors and risk exposures. 
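As a generic illustration of what "identifying suspicious behaviors" can look like in code — a simplified sketch with made-up features, not Lucinity's actual models — an unsupervised anomaly detector can surface accounts whose activity deviates sharply from the rest of the book:

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-account features: [transactions per day, average amount, share sent cross-border]
activity = np.array([
    [3, 120.0, 0.05],
    [2, 95.0, 0.02],
    [4, 150.0, 0.10],
    [2, 110.0, 0.04],
    [40, 9800.0, 0.95],  # unusually high volume, value and cross-border share
])

model = IsolationForest(contamination=0.2, random_state=0).fit(activity)
labels = model.predict(activity)  # -1 marks an outlier

for account_index, label in enumerate(labels):
    if label == -1:
        print(f"Account {account_index} flagged for analyst review")

Real AML systems add case management, explanations and human review on top of detection, a point that connects to the "Human AI" framing Kristjansson describes below.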
Lucinity's behavioral detection empowers compliance teams not only to observe customers' activity, but to understand customers holistically and in depth, ensuring a leading position in compliance. Other companies try to solve money laundering with AI for AI's sake, said Kristjansson, but Lucinity focuses on the intersection of humans and machines instead. "At Lucinity, we use Human AI to explain AI findings so that every compliance professional can take on financial crime with the help of technology. We evolve our models with every new client and our programs get better every day. We work with clients to future-proof their business," he said. With a focus on simple-to-use systems that work with analysts, not against them, Lucinity helps banks and fintechs get that time and money back with a beautiful, efficient and effective interface designed around the specific needs of modern compliance.
Verikai
Founded: 2018 Founder(s): Brett Coffin Headquarters: San Francisco, California, U.S. Total funding to date: $6 million
Verikai makes predictive risk assessment software for the insurance industry. Using ML to help insurance companies and underwriters assess risk, the company says it's currently the only predictive data tool in the "insurtech market." With a database of over 1.3 trillion data markers, 5,000 behavior patterns and an abundance of factors that account for over 250 million people, Verikai gives insurance companies insights into individual and group risk like never before. CEO Jeff Chen said Verikai is a predictive data and risk tool for insurance underwriters and brokers. He said alternative data and ML are at the core of Verikai's products, and they will always have a huge impact on the tools the company provides. Calculating clinical outcomes and behavioral attributes using big data can now give insurance providers accurate, cost-effective forecasts. Real-time census risk reports from Verikai help professionals reduce losses, strategize and improve the complete underwriting process. The company is also providing its business customers with access to suitable insurance products to help HR and employees receive the insurance they need. "As our ML models continue to mature and as we discover new data sources, the ability to provide our customers with the best product models is always our number one priority," said Chen.
HIVERY
Founded: 2015 Founder(s): Jason Hosking, Franki Chamaki, Matthew Robards and Menkes van den Briel Headquarters: Sydney, Australia Total funding to date: $17.7 million
HIVERY hopes to fundamentally change the way consumer packaged goods (CPG) companies and retailers collaborate with regard to assortment and space decisions. HIVERY Curate uses proprietary ML and applied mathematics algorithms that were developed and acquired from Australia's national science agency, CSIRO's Data61. With HIVERY Curate, a process that takes six months is reduced to around six minutes, all with the power of AI/ML and applied mathematics techniques. Jason Hosking, cofounder and CEO at HIVERY, said that with HIVERY Curate, customers can run rapid assortment scenario simulations around SKU rationalization, SKU introduction and space, while considering any category goal, merchandising rules and demand transference. Once a strategy is determined, said Hosking, HIVERY Curate can generate accompanying planograms for execution. HIVERY's proprietary ML models use recommender systems. 
These ML models can learn from clients' datasets to make recommendations on assortment at store level or for any cluster of stores required. HIVERY combines ML with applied mathematics methods, often called "operations research" or "OR." While HIVERY's ML models recommend products, its OR algorithms factor in real-world rules or constraints to ensure that any recommendations are practical, operational and product-space aware at store level. Hosking said retailers and CPGs currently require multiple solution providers to determine assortment or category strategy, optimize assortment and space, and generate store-level planograms. HIVERY, however, can run assortment strategy simulations and factor any category goals and merchandising constraints into its recommendations — all in one solution — which Hosking said no other company currently does. The company earned a spot on Forbes Asia's 100 to Watch list last year and was more recently named by CB Insights in its 2022 Retail Tech 100 report — an annual ranking of the 100 most promising B2B retail tech companies in the world.
Prospero.Ai
Founded: 2019 Founder(s): George Kailas, Adam Plante and Niles Plante Headquarters: New York, U.S. Total funding to date: Undisclosed
Prospero.Ai says it's committed to leveling the playing field in investing, with AI and ML as the pillars of its solution. Prospero's cofounders, George Kailas, Adam Plante and Niles Plante, created a platform that aims to make finance more fair and prosperous for all. Previously from the hedge fund world, CEO George Kailas is passionate about providing institutional-quality investment research for free without conflict of interest. Other fintech companies don't offer their users the most valuable commodity — the predictions derived from their data — but Prospero is doing things differently, said Kailas. Prospero's joint IP with NYU, a proprietary AI system, simplifies stock analysis into 10 key signals and educates users on how to leverage its predictions to invest better. "Prospero is the first platform that's completely free while protecting users' privacy completely. Currently in beta, it aims to reverse the deterioration of the middle class by providing financial tools and literacy for all," he said. "
14,265
2,023
"Microsoft adds Image Creator to Bing, plus GPT-4 now available in Azure OpenAI Service | VentureBeat"
"https://venturebeat.com/ai/microsoft-adds-image-creator-to-bing-plus-gpt-4-now-in-azure-openai-service"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft adds Image Creator to Bing, plus GPT-4 now available in Azure OpenAI Service Share on Facebook Share on X Share on LinkedIn Image Credit: Microsoft Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. What would a week be like without generative AI news from Microsoft? I can hardly imagine. Today, Microsoft announced that it is bringing Image Creator to the new Bing preview, and new AI-powered visual Stories and updated Knowledge Cards to all Bing users. Bing Image Creator, which will be available starting today in the new Bing preview on desktop and mobile, as well as in Edge, is powered by an “advanced” version of OpenAI’s DALL-E model (Hmm…is DALL-E 3 on the way?). By typing in a description of an image, providing additional context like location or activity, and choosing an art style, Image Creator will generate an image. Microsoft: Bing is only browser with ‘integrated AI-powered image generator’ Microsoft says this makes Bing the first and only browser with an “integrated AI-powered image generator.” Meanwhile, Stories, the company says, is an AI-powered visual summary, which users can click through to learn more about the topic they are searching. This, Microsoft says, provides a “more engaging way to search and interact with content, offering images and short videos for easy consumption.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Finally, Knowledge Cards 2.0 is an AI-powered infographic-inspired experience that has been updated to include interactive, dynamic content like charts, graphs, timelines and visual stories. GPT-4 is available in Azure OpenAI Service Today, Microsoft also announced that GPT-4 is available in Azure OpenAI Service. As part of the preview, customers and partners can join the waitlist to access GPT-4 and start building it into their own applications and services. In the blog post, Microsoft shared testimonials from customers about GPT-4 in Azure OpenAI Service: “Coursera is using Azure OpenAI Service to create a new AI-powered learning experience on its platform, enabling learners to get high-quality and personalized support throughout their learning journeys,” said Mustafa Furniturewala, senior vice president of engineering, Coursera. 
“Together, Azure OpenAI Service and the new GPT-4 model will help millions around the world learn even more effectively on Coursera.”
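For teams that get access through the preview waitlist, a minimal sketch of calling a GPT-4 deployment in Azure OpenAI Service with the openai Python package (the 0.x-style API) could look like the following. The deployment name, environment variable names and prompt are assumptions for illustration, not details from Microsoft's announcement.

import os
import openai

# Point the client at an Azure OpenAI resource instead of the public OpenAI API.
openai.api_type = "azure"
openai.api_base = os.environ["AZURE_OPENAI_ENDPOINT"]  # e.g. https://<resource-name>.openai.azure.com/
openai.api_version = "2023-03-15-preview"
openai.api_key = os.environ["AZURE_OPENAI_KEY"]

# "engine" is the name given to the GPT-4 deployment inside the Azure resource (assumed here).
response = openai.ChatCompletion.create(
    engine="gpt-4",
    messages=[
        {"role": "system", "content": "You are a concise assistant for learners."},
        {"role": "user", "content": "Explain the difference between supervised and unsupervised learning."},
    ],
    temperature=0.2,
    max_tokens=300,
)
print(response["choices"][0]["message"]["content"])
"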
14,266
2,023
"Getting cyber-resilience right in a zero-trust world starts at the endpoint | VentureBeat"
"https://venturebeat.com/security/getting-cyber-resilience-right-in-a-zero-trust-world-starts-at-the-endpoint"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Getting cyber-resilience right in a zero-trust world starts at the endpoint Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. With the White House announcing a new national cybersecurity strategy that prioritizes cyber-resilience and holds software companies more accountable for how secure their products are, Absolute’s 2023 Resilience Index is noteworthy. CNN reports that the administration is working with Congress to develop legislation addressing software liability and inadequate protection against cyberattacks. Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA) , calls on technology companies to take greater responsibility when it comes to the cybersecurity of their products, many of which are integral to the foundations of society. Speaking at Carnegie Mellon University earlier this year, she said, “We often blame a company today with a security breach because they didn’t patch a known vulnerability. What about the manufacturer that produced the technology that required too many patches in the first place?” Challenges enterprises face in becoming more cyber-resilient Cyber-resilience minimizes a data breach’s blast radius or impact on an organization’s IT, financial and customer-facing systems and operations. Realizing that not every intrusion attempt will be predictable or easily contained enables enterprises to adopt the right mindset and become more prepared. Absolute’s 2023 Resilience Index accurately assesses what CIOs and CISOs are telling VentureBeat about how challenging it is to excel at the comply-to-connect trend Absolute also found in their research. Balancing security and cyber-resilience is the goal. Key insights from the study include the following: VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! An increasingly chaotic IT landscape makes endpoint visibility and control a significant challenge Employees switching between corporate and off-corporate networks create visibility, control and cybersecurity gaps that limit an IT team’s ability to diagnose and fix end-user issues and reduce cybersecurity risks. Further stretching IT teams thin, this requires managing various networks, hardware, OS versions and patches. Absolute’s anonymized telemetry data found that Windows 10 is used on more than 80% of devices. 
With 14 versions and over 800 builds and patches, IT professionals struggle to keep their employees' endpoints up to date.
Remote workers' fluid movement between multiple global locations compounds the challenge
Absolute found that its customers had an average of four enterprise device locations per device in February 2023, up 15% year-over-year. CISOs VentureBeat spoke with at RSAC 2023 said one of their most significant endpoint challenges today is securely switching between devices and networks across remote locations.
Application sprawl proliferates, resulting in 1 in 6 devices running on outdated OS versions
The typical enterprise device has 67 applications installed, with 10% having more than 100 installed. Regarding web application usage, enterprise devices are used most of the time to access Google Mail and Salesforce. The greater the application sprawl and workload on an endpoint, the higher the probability that an attacker will find a way to exploit memory conflicts and identify where software decay leaves a device vulnerable.
Overloading endpoints with agents creates a false sense of security, leading to memory conflicts
Absolute found that the typical enterprise device has 11 security agents installed, creating memory and resource conflicts that attackers can exploit. Enterprise devices typically have multiple security applications for endpoint management, antivirus, antimalware and encryption. These are required by industry standards (e.g., ISO/IEC 27001, NIST CSF, PCI DSS, GDPR) and government regulations (e.g., HIPAA, HITECH, FISMA). The findings suggest that many organizations don't know their device fleet's software inventory, are running more security agents than needed, or believe that the more tools deployed, the safer they are.
What CISOs can do now
Like zero trust, cyber-resilience needs to be considered an ongoing framework that adapts and flexes to the changing needs of an organization. Every CEO and CISO VentureBeat interviewed at RSAC 2023 said the most fast-moving, challenging threat surfaces to protect are employee- and company-owned endpoint devices. Finding new ways to improve the efficacy of zero trust with endpoints is a hot topic today for CISOs across all industries. The following are recommendations of what CISOs can do now to become more cyber-resilient:
Look to application resilience for greater efficacy gains across EPP, EDR and remote-access solutions
As part of their Resilience Index, Absolute evaluated the top security vendors across endpoint protection platforms (EPP), endpoint detection and response (EDR) and remote access, cited as industry leaders in analyst reports and used by Absolute customers. These companies included Cisco, Citrix, CrowdStrike, Microsoft, Netskope, Palo Alto Networks, SentinelOne, Sophos, Trend Micro and Zscaler. Absolute tracked the percentage of protected or healthy devices as a baseline, then applied application resilience policies. Efficacy gains by platform varied, with the EPP/EDR category seeing a net gain of 26% and remote access seeing a 23% gain.
Automate patch management to free up IT resources for more significant projects
It's time to move beyond an inventory-based approach to patch management and consider alternatives for handling patch and configuration management at scale. Government organizations are 214 days behind on completing Windows 10 patches, while education and healthcare are 188 and 156 days behind, respectively, according to Absolute's analysis of their telemetry data. 
Enterprises are 142 days behind on Windows 10 patches.
Limit endpoint, application and system access to authorized administrators
IT and cybersecurity teams need to automate how endpoint, application and system access is granted and revoked to improve zero trust at the endpoints. Enforcing least privileged access and knowing the access rights for every identity an endpoint supports is critical, especially when it comes to third-party contractors and outside vendors. Audit and track all identity-related activity to reduce trust gaps and insider attacks. Remove expired account access privileges.
Cyber-resilience is the future of endpoint security
Resilient, self-healing endpoints that can regenerate operating systems and configurations are the future of EPP, EDR tools and remote access solutions. Absolute's 2023 Resilience Index provides new insights into what's driving the comply-to-connect trend that balances security and cyber-resilience to ensure an organization's employees can confidently get to work and keep working, regardless of risk. "When we're talking to organizations, what we're hearing a lot of is: How can we continue to increase resiliency, increase the way we're protecting ourselves, even in the face of potentially either lower headcount or tight budgets? And so it makes what we do around cyber-resiliency even more important," said Christy Wyatt, Absolute CEO, in a BNN Bloomberg interview earlier this year. "One of the unique things we do is help people reinstall or repair their cybersecurity assets or other cybersecurity applications. So a quote from one of my customers was: 'It's like having another IT person in the building.'" [Updated 5/2/23 at 10:45 am ET to add resilience table.]
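The patch-lag and least-privilege recommendations above lend themselves to simple, automatable fleet reports. The sketch below is an illustration only — it reads an invented CSV inventory export rather than Absolute's telemetry or any vendor API — and flags devices whose last patch is older than an assumed threshold.

import csv
from datetime import date, datetime

MAX_PATCH_AGE_DAYS = 30  # assumed internal policy threshold, not a vendor recommendation

# Hypothetical inventory export with columns: device_id, os_build, last_patched (ISO date)
overdue = []
with open("device_inventory.csv", newline="") as f:
    for row in csv.DictReader(f):
        last_patched = datetime.strptime(row["last_patched"], "%Y-%m-%d").date()
        age_days = (date.today() - last_patched).days
        if age_days > MAX_PATCH_AGE_DAYS:
            overdue.append((row["device_id"], row["os_build"], age_days))

# Report the devices that are furthest behind first.
for device_id, os_build, age_days in sorted(overdue, key=lambda d: d[2], reverse=True):
    print(f"{device_id} (build {os_build}) is {age_days} days behind on patching")

A report like this is only a starting point; the article's broader argument is that remediation — repairing or reinstalling agents and applying the patches themselves — should be automated as well.
"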