This paper describes the relevant characteristics of available joint coating types and examines different testing protocols used to explore these characteristics. The objective is to assist in the selection of appropriate, practical, cost-effective girth weld protective coatings that will provide good long-term corrosion protection.
This paper begins with a brief discussion of essential properties of all pipeline coatings and a listing of multi-layer systems designed to meet specific needs. It then focuses on special considerations regarding application parameters for multi-layer systems that use fusion-bonded epoxy as the primer.
To maintain strict environmental conditions, tanks are dehumidified during surface preparation, coating application, and curing. This paper describes a practical approach, detailed project calculations, practical field problems and their solutions, and the on-site implementation of dehumidification for large crude storage tanks.
This paper is a significant update to “Costing Considerations for Maintenance and New Construction Coating Work”1 on protective coating costing and selection, co-authored by M. F. Melampy, M. P. Reina and K. R. Shields in 1998. It is designed to assist the coatings engineer or specifier in identifying suitable protective coating systems for specific industrial environments.
The performance of antifouling coating formulations designed to protect carbon steel from the effects of microbiologically influenced corrosion (MIC) and marine biofouling in tropical harbor seawater was assessed by field and laboratory experiments, including: • Scanning electron microscopy (SEM). • Energy dispersive spectroscopy (EDS). • X-ray diffraction (XRD). • Seawater immersion.
The primary application for coatings made with fluoroethylene vinyl ether (FEVE) resins has been in architectural markets. This paper will discuss the chemistry and physical characteristics of FEVE resins, including data on weatherability. A brief review of FEVE resin product types will be given. Both laboratory and offshore corrosion test results will be addressed.
The research described in this paper was carried out with the objective of establishing any correlation between coating performance and the results of cathodic disbondment testing. Experiments were carried out using 13 coatings. Nine samples of each coating were studied in a total of 117 experiments.
This paper presents several case histories where ductile iron pipeline sections have been investigated to ascertain the corrosion-control benefits of polyethylene encasement. Investigative procedures included cell-to-cell potential surveys, side-drain technique measurements, in-situ and laboratory soil tests, pipe-to-soil potential measurements, and excavation inspections.
New high-solids coatings are sometimes required to meet volatile organic compound (VOC) legislation. The lack of data on these coatings creates a need for accelerated test methods to evaluate their performance. The test method must: • Be performed using commercially available equipment. • Take a relatively short time period. • Correlate with real-life exposure. | https://store.nace.org/coatings?pagenumber=2 |
What Is A Nano Exactly?
A nano is a measurement at the microscopic scale and represents a tiny fraction of a unit. How tiny? One nanometer is a billionth of a meter, or 10⁻⁹ meters, but in layman's terms there are about 100,000 nanometers in the width of a single sheet of paper. In other words, it's pretty small. You need a microscope to view anything measured in nanometers, and if your fingernail were one nanometer wide, then an actual meter would be almost as large as the Earth.
What Is Nano Technology?
Nanotechnology was first theorized as the result of our growing ability to see and predict the behaviors of atoms at the microscopic level. The idea was first introduced by physicist Richard Feynman in 1959. Feynman was working at the California Institute of Technology (Caltech) when, during a meeting, he brought up the idea in the talk now published as "There's Plenty of Room at the Bottom". Feynman argued that while advances in science visible to the naked eye were rapid, the advancement of science at the microscopic level had slowed since the creation of the atom bomb. Feynman led the way for science fiction writers everywhere to write terrifying stories about microscopic computers that eat the Earth, slowly devouring all they find while reproducing at an accelerated rate. His thoughts on nanotechnology were very simple, however: he admitted that while the ability to make a computer the size of a nanometer was impractical, it should be investigated, given that our own complex biological systems operate at that microscopic scale. Thus the idea of nanotechnology and its study were born.
How Technology Is Making It Possible
Is It Possible To Build Nanobots
If we were talking 57 years ago, the idea of nanobots would have seemed absurd. Computers at the time encompassed entire rooms and sometimes buildings. The internet was still a dream of the Department of Defense, and we were thinking of storage in terms of bytes. If you pulled out a 128-gig USB drive, you would be holding more storage than was available in the entire world at that time. Pretty powerful stuff if you really think about it. Yet today things have changed, and a device smaller than your thumb holds 128 gigs. While impressive, even that device is out of date compared to the terabyte USB drives coming out now. With current technology we are around a decade away from functional nanobots that could remove cancer from a patient, and that's just the beginning of this technology.
Quantum Computing Will Make It Simple
Google claims to have a quantum computing device already in service for its company. While the device Google has does work on the quantum level in some sense, it's far from the photon-shooting, processing-at-the-speed-of-light, atom-harvesting machine humans will build in the next couple of decades. Depending on who you ask, that technology is seven to seventy years away. I personally believe it will arrive within the next 20 to 25 years, based on Moore's Law. When you couple this with the fact that scientists seem to be teleporting photons across measurable distances, the future is very bright for nanotechnology.
How Could Nanobots Cure Cancer Or Reverse Aging
The first step would be to program the nanobots so they can do things. Like any program, this would start with an algorithm. Some of these nanobots could be programmed to have different jobs, and all nanobot communication could take place via a centralized unit. This unit could be located in a doctor's office, implanted in your skin, or run as an application on your smartphone. For example, imagine a group of 1,000 nanobots that could swim through a person's bloodstream taking samples of tissue in different areas. When the nanobots encountered samples that were irregular, they could be programmed to return to a certain part of the body. A doctor could then extract these nanobots, with the small tissue samples in hand, and test them to determine the cause of the irregularity. If it were determined to be cancer, the nanobots could then be signaled to move to the area and remove the cancer internally, with no invasive procedure and very little damage to the surrounding tissue. This would eliminate the cancer, and the nanobots could even be programmed to monitor the area for recurrence and remove any issues before they could develop. A sketch of this hypothetical control loop appears below.
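To make the idea concrete, here is a minimal Python sketch of the control loop described above. Everything in it is hypothetical: the `Nanobot` class, its sensing methods, and the `signal_return` mechanism are invented for illustration and correspond to no real API.

```python
# Hypothetical control loop for a tissue-sampling nanobot swarm.
# Every class and method here is an illustrative invention, not a real API.

IRREGULARITY_THRESHOLD = 0.8  # assumed score above which tissue is flagged

class Nanobot:
    def __init__(self, bot_id):
        self.bot_id = bot_id
        self.sample = None

    def take_sample(self, tissue_site):
        """Collect a microscopic tissue sample at the current site."""
        self.sample = tissue_site.biopsy()

    def is_irregular(self):
        """Score the sample and flag it if it looks abnormal."""
        return self.sample.anomaly_score() > IRREGULARITY_THRESHOLD

def patrol(swarm, sites, base_station):
    """Each bot samples one site; flagged bots are recalled for extraction."""
    for bot, site in zip(swarm, sites):
        bot.take_sample(site)
        if bot.is_irregular():
            base_station.signal_return(bot)  # doctor extracts bot and sample
```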
The same process could reverse aging by invigorating cells as they age. Nanobots could be programmed and equipped to inject anything from B-12 to an artificial embryo's DNA into cells, prompting them to repair themselves. This would strengthen bones, revitalize skin tissue, and increase the longevity of cells inside the body. It could even force cells that previously could not regrow, such as brain cells, to do so, making an 80-year-old look and feel 30 again over time.
Nanosensory
Okay, so nanosensory isn't really a thing yet, but it is my personal theory of where we are heading with nanotechnology. Nanosensory would involve the ability of nanobots to plug themselves right into the sensory pathways inside your brain. These sensory receptors and neural pathways would serve as an alert to the nanobots that something is wrong. The nanobots could then send a signal to other nanobots and pinpoint the location of the issue. In short, if you cut yourself, the nanobots could theoretically seal the wound as soon as you feel it. Maybe the thought of having a machine instantly heal you is the scariest part of nanobots, versus the tales of science fiction where little robots eat us from inside. Basically, nanobots would become our new and improved white blood cells.
What It Means For Our Future
A New World Of Longer Life
Impacts On Life
- Live Longer - Now this may seem like it would lead to you living forever, but there are some things even nanotechnology couldn't save you from. Eventually you would die from ailments that could no longer be treated. Decapitation would still end your life, electrocution would be a huge challenge for nanobots, and serious trauma from various injuries could lead to death as well; breaking your neck would still spell instant death, and starvation or dehydration would still kill you. Most of us would double our life spans with this technology, and that would cause some issues with regard to population.
- Pharmaceutical Companies Would Never Let It Happen - This is actually an illogical line of thinking. Imagine using nanobots inside of a pill. Now you have to buy these little pills to get rid of cancer, and what's worse, you have to keep buying them to make certain the cancer stays gone. That means these companies have you as a lifelong customer, and life just got a lot longer. Selling a pill to prevent and reverse aging would also drive up profits, as would the devices sold to doctors and hospitals to control the nanobots. How much would Kim Kardashian pay to always look like she does now? In the end it's clear why these companies would actually love this technology.
- Population Issues - Yes, the ability to have children whenever you want would no longer be an option for those who used nanotechnology. I know this would cause issues with some religions in which sex is tied to reproduction. The fact is that if we are living twice as long and our children are living twice as long, we end up with many generations of families alive before the oldest generation passes away. The world is simply not ready for hundreds of billions or even trillions of people, so some things would have to be given up in order to move forward. I suspect a one- or two-child limit would be placed on each generation, and nanobots could eliminate accidental pregnancies as well as the need for abortion or birth control altogether.
- Crime Issues - With nanotechnology we should actually see a decrease in the crime rate, because many mental illnesses and learning disabilities could be removed from our genome with advanced technology. In theory, nanotechnology could be used to find genome defects and introduce fixes to the DNA sequences that contribute to such illnesses. This part of nanotechnology is likely a century away, however.
In the end it is clear we are moving toward longer lives. Some people will question whether such technology is playing "God", but that is a question born of fear. Nanotechnology doesn't create humans; it just extends our ability to use healthcare knowledge and science to better our lives. You will always have a choice to live out a normal life, but for some of us the future looks like it may get interesting very soon. I am by no means telling you to stop saving for retirement, but tomorrow could bring a world free of unnecessary pain and disease. | https://hubpages.com/education/Nanotechnology-And-The-Near-Future-Of-Nanobots |
Assembling a microrobot used to require a pair of needle-nosed tweezers, a microscope, steady hands and at least eight hours. But now U of T Engineering researchers have developed a method that requires only a 3D printer and 20 minutes.
In the lab of Professor Eric Diller (MIE), researchers create magnetized microrobots — the size of the head of a pin — that can travel through fluid-filled vessels and organs within the human body. Diller and his team control the motion of these microrobots wirelessly using magnetic fields.
Each microrobot is built by precisely arranging microscopic sections of magnetic needles atop a flat, flexible material. Once deployed, the researchers apply magnetic fields to induce the microrobots to travel with a worm-like motion through fluid channels, or to close their tiny mechanical ‘jaws’ to take a tissue sample.
“These robots are quite difficult and labour-intensive to fabricate because the process requires precision,” says Tianqi Xu (MIE MASc candidate). “Also because of the need for manual assembly, it’s more difficult to make these robots smaller, which is a major goal of our research.”
That is why Xu and his labmates developed an automated approach that significantly cuts down on design and development time, and expands the types of microrobots they can manufacture. Their findings were published today in Science Robotics.
Smaller and more complex microrobots are needed for future medical applications, such as targeted drug delivery, assisted fertilization, or biopsies.
“If we were taking samples in the urinary tract or within fluid cavities of the brain — we envision that an optimized technique would be instrumental in scaling down surgical robotic tools,” says Diller.
To demonstrate the capabilities of their new technique, the researchers devised more than 20 different robotic shapes, which were then programmed into a 3D printer. The printer then builds and solidifies the design, orienting the magnetically patterned particles as part of the process.
“Previously, we would prepare one shape and manually design it, spend weeks planning it, before we could fabricate it. And that’s just one shape,” says Diller. “Then when we build it, we would inevitably discover specific quirks — for example, we might have to tweak it to be a little bigger or thinner to make it work.”
“Now we can program the shapes and click print,” adds Xu. “We can iterate, design and refine it easily. We have the power to really explore new designs now.”
The researchers’ optimized approach opens the doors for developing even smaller and more complex microrobots than the current millimetre-size. “We think it’s promising that we could one day go 10 times smaller,” says Diller.
Diller’s lab plans to use the automated process to explore more sophisticated and complicated shapes of microrobots. “As a robotics research community, there’s a need to explore this space of tiny medical robots,” adds Diller. “Being able to optimize designs is a really critical aspect of what the field needs.”
Learn more: No assembly required: U of T Engineering researchers automate microrobotic designs
| https://www.innovationtoronto.com/2019/04/3d-printing-magnetic-microrobots-of-many-different-shapes-and-sizes/ |
What is Backpropagation?
Deep learning systems are able to learn extremely complex patterns, and they accomplish this by adjusting their weights. How are the weights of a deep neural network adjusted exactly? They are adjusted through a process called backpropagation. Without backpropagation, deep neural networks wouldn’t be able to carry out tasks like recognizing images and interpreting natural language. Understanding how backpropagation works is critical to understanding deep neural networks in general, so let’s delve into backpropagation and see how the process is used to adjust a network’s weights.
Backpropagation can be difficult to understand, and the calculations used to carry out backpropagation can be quite complex. This article will endeavor to give you an intuitive understanding of backpropagation, using little in the way of complex math. However, some discussion of the math behind backpropagation is necessary.
The Goal of Backprop
Let’s start by defining the goal of backpropagation. The weights of a deep neural network are the strengths of the connections between units of the network. When the neural network is initialized, its weights encode assumptions about how the units in one layer are connected to the layers joined with it. As data moves through the neural network, the weights are applied to it and a prediction takes shape. When the data reaches the final layer of the network, a prediction is made about how the input features relate to the classes in the dataset. The difference between the predicted values and the actual values is the loss/error, and the goal of backpropagation is to reduce the loss. This is accomplished by adjusting the weights of the network, making the assumptions more like the true relationships between the input features and the target classes.
Training A Deep Neural Network
Before backpropagation can be done on a neural network, the regular/forward training pass of a neural network must be carried out. When a neural network is created, a set of weights is initialized. The value of the weights will be altered as the network is trained. The forward training pass of a neural network can be conceived of as three discrete steps: neuron activation, neuron transfer, and forward propagation.
When training a deep neural network, we need to make use of multiple mathematical functions. Neurons in a deep neural network take in incoming data and apply an activation function, which determines the output of the node. The activation value of a neuron is calculated as a weighted sum of its inputs; the weights and input values depend on the index of the nodes being used to calculate the activation. One more number must be taken into account when calculating the activation value: a bias value. Bias values don’t fluctuate with the input, so they aren’t multiplied together with the weights and inputs; they are simply added. All of this means that the following equation can be used to calculate the activation value:
Activation = sum(weight * input) + bias
After the neuron is activated, an activation function is used to determine what the actual output of the neuron will be. Different activation functions are optimal for different learning tasks; commonly used activation functions include the sigmoid function, the tanh function, and the ReLU function.
Once the outputs of the neuron are calculated by running the activation value through the desired activation function, forward propagation is done. Forward propagation is just taking the outputs of one layer and making them the inputs of the next layer. The new inputs are then used to calculate the new activation values, and the output of this operation is passed on to the following layer. This process continues all the way through to the end of the neural network.
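The forward pass just described can be captured in a few lines of NumPy. This is a minimal sketch for illustration, not code from the article; the 2-3-1 layer sizes, the sigmoid choice, and the random initialization are all assumptions.

```python
import numpy as np

def sigmoid(z):
    """A common activation function that squashes values into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Propagate input x layer by layer: activation = sum(weight * input) + bias,
    then pass the result through the activation function."""
    activations = [x]
    for w, b in zip(weights, biases):
        z = w @ activations[-1] + b          # the equation from above
        activations.append(sigmoid(z))
    return activations  # keep every layer's output; backprop needs them later

# An illustrative 2-3-1 network with randomly initialized weights
rng = np.random.default_rng(0)
weights = [rng.normal(size=(3, 2)), rng.normal(size=(1, 3))]
biases = [np.zeros(3), np.zeros(1)]
prediction = forward(np.array([0.5, -1.2]), weights, biases)[-1]
print(prediction)
```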
Backpropagation
The process of backpropagation takes in the final decisions of a model’s training pass and then determines the errors in these decisions. The errors are calculated by comparing the outputs/decisions of the network with the expected/desired outputs of the network.
Once the errors in the network’s decisions have been calculated, this information is backpropagated through the network and the parameters of the network are altered along the way. The method used to update the weights of the network is based on calculus, specifically on the chain rule. However, an understanding of calculus isn’t necessary to understand the idea behind backpropagation. Just know that when an output value is provided by a neuron, the slope of the output value is calculated using the derivative of the transfer function, producing a derived output. When doing backpropagation, the error for a specific neuron is calculated according to the following formula:
error = (expected_output - actual_output) * slope of neuron’s output value
When operating on the neurons in the output layer, the class value is used as the expected value. After the error has been calculated, the error is used as the input for the neurons in the hidden layer, meaning that the error for this hidden layer is the weighted sum of the errors of the neurons found within the output layer. The error calculations travel backward through the network along the network’s weights.
After the errors for the network have been calculated, the weights in the network must be updated. As mentioned, calculating the error involves determining the slope of the output value. After the slope has been calculated, a process known as gradient descent can be used to adjust the weights in the network. A gradient is a slope whose angle/steepness can be measured: slope is the change in “y” (the rise) divided by the change in “x” (the run). In the case of the neural network and the error rate, the “y” is the calculated error, while the “x” is the network’s parameters. The network’s parameters have a relationship to the calculated error values, and as the network’s weights are adjusted the error increases or decreases.
“Gradient descent” is the process of updating the weights so that the error rate decreases. Backpropagation is used to determine the relationship between the neural network’s parameters and the error rate, which sets up the network for gradient descent. Training a network with gradient descent involves calculating the outputs through forward propagation, backpropagating the error, and then updating the weights of the network.
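Putting the pieces together, here is a hedged sketch of one training step for the small network from the earlier snippet (it reuses that snippet's `forward` and `sigmoid`). It implements the delta rule quoted above, error times the slope of the neuron's output, followed by a plain gradient descent update; the learning rate and the implied squared-error loss are illustrative assumptions, not details given in the article.

```python
def sigmoid_slope(a):
    """Derivative of the sigmoid, expressed in terms of its output a."""
    return a * (1.0 - a)

def train_step(x, target, weights, biases, lr=0.1):
    """One forward pass, one backward pass, one gradient descent update."""
    activations = forward(x, weights, biases)

    # Output layer: error = (expected_output - actual_output) * slope
    delta = (target - activations[-1]) * sigmoid_slope(activations[-1])

    for i in reversed(range(len(weights))):
        a_prev = activations[i]
        # Propagate the weighted error backward BEFORE updating this layer
        prev_delta = (weights[i].T @ delta) * sigmoid_slope(a_prev) if i > 0 else None
        weights[i] += lr * np.outer(delta, a_prev)  # gradient descent step
        biases[i] += lr * delta
        delta = prev_delta
    return weights, biases
```

A full training loop would simply call `train_step` repeatedly over the dataset: forward-propagate, backpropagate the error, update the weights, exactly the cycle described above.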
What are Quantum Computers?
Quantum computers have the potential to dramatically increase the variety and accuracy of computations, opening up new applications for computers and enhancing our models of physical phenomena. Yet while quantum computers are seeing increasing media coverage, many people still aren’t sure how quantum computers differ from regular computers. Let’s examine how quantum computers work, some of their applications, and their coming future.
What Is A Quantum Computer?
Before we can meaningfully examine how quantum computers operate, we need to first define quantum computers. The short definition of a quantum computer is this: a computer, based on quantum mechanics, that is able to carry out certain complex computations with much greater efficiency than traditional computers. That’s a quick definition of quantum computers, but we’ll want to take some time to really understand what separates quantum computers from traditional computers.
Regular computers encode information with a binary system: representing each bit of the data as either a one or zero. Series of ones and zeroes are chained together to represent complex chunks of information like text, images, and audio. Yet in these binary systems, the information can only ever be stored as ones and zeroes, meaning that there is a hard limit to how data is represented and interpreted and that as data becomes more complex it must necessarily become longer and longer strings of ones and zeroes.
The reason quantum computers are able to more efficiently store and interpret data is that they don’t use bits to represent data; rather, they use “qubits”. Qubits are subatomic particles like photons and electrons. Qubits have two interesting properties that computer engineers can take advantage of, and that make them useful for new methods of computation: superposition and entanglement.
Quantum superpositions allow qubits to exist in not just the “one” state or the “zero” state, but along a continuum between these states, meaning more information can be held using qubits. Meanwhile, quantum entanglement refers to a phenomenon where pairs of qubits can be generated and if one qubit is altered the other qubit is altered, in a predictable fashion, as well. These quantum properties can be used to represent and structure complex data in more efficient ways.
How Quantum Computers Operate
Quantum “superpositions” get their name from the fact that they can be in more than one position at a time. While bits can be in just two positions, qubits can exist in multiple states at once.
Thanks in part to the existence of quantum superpositions, a quantum computer is capable of calculating many different potential outcomes at the same time. Once the calculations are done, the qubits are measured, which creates a final result through the collapse of the quantum state to either 0 or 1, meaning the result can then be interpreted by traditional computers.
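To make superposition and measurement concrete, a single qubit can be written as a two-component state vector whose squared amplitudes give the measurement probabilities. The NumPy sketch below is a toy classical simulation of this standard textbook picture, not quantum computing itself; the equal-superposition amplitudes are an arbitrary choice.

```python
import numpy as np

# A qubit state |psi> = alpha|0> + beta|1>, with |alpha|^2 + |beta|^2 = 1.
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)   # an equal superposition
state = np.array([alpha, beta], dtype=complex)

p_zero = abs(state[0]) ** 2   # probability the measurement collapses to 0
p_one = abs(state[1]) ** 2    # probability it collapses to 1

# "Measuring" collapses the superposition to a classical bit:
rng = np.random.default_rng()
result = rng.choice([0, 1], p=[p_zero, p_one])
print(f"P(0)={p_zero:.2f}, P(1)={p_one:.2f}, measured: {result}")

# Entanglement hints at the exponential scaling discussed below:
# describing n qubits classically takes 2**n amplitudes.
n = 50
print(f"{n} qubits require {2**n:,} amplitudes to describe classically")
```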
Quantum computing researchers and engineers can alter the position the qubits are in by using microwaves or precision lasers.
Computer engineers can take advantage of quantum entanglement to dramatically improve the processing power of computers. Quantum entanglement refers to the fact that two qubits can be linked together in such a way that changing one of the qubits alters the other qubit in a reliable way. It’s not fully understood why qubits can establish such a relationship or how this phenomenon works exactly, but scientists do understand it well enough to potentially take advantage of it for quantum computers. Because of quantum entanglement, adding extra qubits to a quantum machine doesn’t just double the processing power of a computer; it can scale the processing power exponentially.
If this has all seemed a bit too abstract, we can describe how superpositions are useful by imagining a maze. For a normal computer to attempt to solve a maze, it must try each path of the maze until it finds a successful route. However, a quantum computer could essentially explore all the different paths at once, since it isn’t tied down to any one given state.
All of this is to say that the properties of entanglement and superpositions make quantum computers useful because they can deal with uncertainty, they are capable of exploring more possible states and results. Quantum computers will help scientists and engineers better model and understand situations that are multi-faceted, with many variables.
What Are Quantum Computers Used For?
Now that we have a better intuition for how quantum computers operate, let’s explore the possible use cases for quantum computers.
We’ve already alluded to the fact that quantum computers can be used to carry out traditional computations at a much faster pace. However, quantum computer technology can be used to achieve things that may not even be possible, or are highly impractical, with traditional computers.
One of the most promising and interesting applications of quantum computers is in the field of artificial intelligence. Quantum computers have the power to improve the models created by neural networks, as well as the software that supports them. Google is currently using its quantum computers to assist in the creation of self-driving vehicles.
Quantum computers also have a role to play in the analysis of chemical interactions and reactions. Even the most advanced normal computers can only model reactions between relatively simple molecules, which they achieve by simulating the properties of the molecules in question. Quantum computers, however, allow researchers to create models that have the same quantum properties as the molecules they are researching. Quicker, more accurate molecule modeling would aid in the creation of new therapeutic drugs and new materials for use in energy technology, such as more efficient solar panels.
Quantum computers can also be used to better predict weather. Weather is the confluence of many events and the formulas used to predict weather patterns are complicated, containing many variables. It can take an extremely long time to carry out all the calculations needed to predict the weather, during which the weather conditions themselves can evolve. Fortunately, the equations used to predict weather have a wave nature that a quantum computer can exploit. Quantum computers can help researchers build more accurate climate models, which are necessary in a world where the climate is changing.
Quantum computers and algorithms can also be used to help ensure people’s data privacy. Quantum cryptography makes use of the quantum uncertainty principle, where any attempt to measure an object ends up making changes to that object. Attempts to intercept communications would influence the resulting communication and show evidence of tampering.
Looking Ahead
Most of the uses for quantum computers will be confined to academics and businesses. It’s unlikely that consumers/the general public will get quantum smartphones, at least not anytime soon. This is because it requires specialized equipment to operate a quantum computer. Quantum computers are highly sensitive to disturbance, as even the most minute changes in the surrounding environment can cause qubits to shift position and drop out of the superposition state. This is called decoherence, and it’s one of the reasons that advances in quantum computers seem to come so slowly compared to regular computers. Quantum computers typically need to operate in conditions of extreme low temperatures, isolated from other electrical equipment.
Even with all the precautions, noise still manages to create errors in the calculations, and researchers are looking for ways to make qubits more reliable. To achieve quantum supremacy, where a quantum computer fully eclipses the power of a current supercomputer, qubits need to be linked together. A truly quantum-supreme computer could require thousands of qubits, but the best quantum computers today can typically only deal with around 50 qubits. Researchers are constantly making inroads toward creating more stable and reliable qubits. Experts in the field of quantum computing predict that powerful and reliable quantum devices may be here within a decade.
What Are Nanobots? Understanding Nanobot Structure, Operation, and Uses
As technology advances, things don’t always become bigger and better; objects also become smaller. In fact, nanotechnology is one of the fastest-growing technological fields, worth over 1 trillion USD, and it’s forecast to grow by approximately 17% over the next half-decade. Nanobots are a major part of the nanotechnology field, but what are they exactly and how do they operate? Let’s take a closer look at nanobots to understand how this transformative technology works and what it’s used for.
What Are Nanobots?
The field of nanotechnology is concerned with the research and development of technology approximately one to 100 nanometers in scale. Therefore, nanorobotics is focused on the creation of robots that are around this size. In practice, it’s difficult to engineer anything as small as one nanometer, and the terms “nanorobotics” and “nanobot” are frequently applied to devices that are approximately 0.1 to 10 micrometers in size, which is still quite small.
It’s important to note that the term “nanorobot” is sometimes applied to devices which interact with objects at the nanoscale, manipulating nanoscale items. Therefore, even if the device itself is much larger, it may be considered a nanorobotic instrument. This article will focus on nanoscale robots themselves.
Much of the field of nanorobotics and nanobots is still in the theoretical phase, with research focused on solving the problems of construction at such a small scale. However, some prototype nanomachines and nanomotors have been designed and tested.
Most currently existing nanorobotic devices fall into one of four categories: switches, motors, shuttles, and cars.
Nanorobotic switches operate by being prompted to switch from an “off” state to an “on” state. Environmental factors are used to make the machine change shape, a process called conformational change. The environment is altered using processes like chemical reactions, UV light, and temperature, and the nanorobotic switches shift into different forms as a result, able to accomplish specific tasks.
Nanomotors are more complex than simple switches, and they utilize the energy created by the effects of the conformational change in order to move around and affect the molecules in the surrounding environment.
Shuttles are nanorobots that are capable of transporting chemicals like drugs to specific, targeted regions. The goal is to combine shuttles with nanorobot motors so that the shuttles are capable of a greater degree of movement through an environment.
Nanorobotic “cars” are the most advanced nanodevices at the moment, capable of moving independently with prompts from chemical or electromagnetic catalysts. The nanomotors that drive nanorobotic cars need to be controlled in order for the vehicle to be steered, and researchers are experimenting with various methods of nanorobotic control.
Nanorobotics researchers aim to synthesize these different components and technologies into nanomachines that can complete complex tasks, accomplished by swarms of nanobots working together.
How Are Nanobots Created?
The field of nanorobotics is at the crossroads of many disciplines, and the creation of nanobots involves building sensors, actuators, and motors. Physical modeling must be done as well, and all of this must be done at the nanoscale. As mentioned above, nanomanipulation devices are used to assemble these nanoscale parts and manipulate artificial or biological components, which includes the manipulation of cells and molecules.
Nanorobotics engineers must be able to solve a multitude of problems. They have to address issues regarding sensation, control power, communications, and interactions between both inorganic and organic materials.
The size of a nanobot is roughly comparable to biological cells, and because of this fact future nanobots could be employed in disciplines like medicine and environmental preservation/remediation. Most “nanobots” that exist today are just specific molecules which have been manipulated to accomplish certain tasks.
Complex nanobots are essentially just simple molecules joined together and manipulated with chemical processes. For instance, some nanobots are comprised of DNA, and they transport molecular cargo.
How Do Nanobots Operate?
Given the still heavily theoretical nature of nanobots, questions about how nanobots operate are answered with predictions rather than statements of fact. It’s likely that the first major uses for nanobots will be in the medical field, moving through the human body and accomplishing tasks like diagnosing diseases, monitoring vitals, and dispensing treatments. These nanobots will need to be able to navigate their way around the human body and move through tissues like blood vessels.
Navigation
In terms of nanobot navigation, there are a variety of techniques that nanobot researchers and engineers are investigating. One method of navigation is the utilization of ultrasonic signals for detection and deployment. A nanobot could emit ultrasonic signals that could be traced to locate the position of the nanobots, and the robots could then be guided to specific areas with the use of a special tool that directs their motion. Magnetic resonance imaging (MRI) devices could also be employed to track the position of nanobots, and early experiments with MRIs have demonstrated that the technology can be used to detect and even maneuver nanobots. Other methods of detecting and maneuvering nanobots include the use of X-rays, microwaves, and radio waves. At the moment, our control of these waves at the nanoscale is fairly limited, so new methods of utilizing these waves would have to be invented.
The navigation and detection systems described above are external methods, relying on the use of tools to move the nanobots. With the addition of onboard sensors, the nanobots could be more autonomous. For instance, chemical sensors included onboard nanobots could allow the robot to scan the surrounding environment and follow certain chemical markers to a target region.
Power
When it comes to powering the nanobots, there are also a variety of power solutions being explored by researchers. Solutions for powering nanobots include external power sources and onboard/internal power sources.
Internal power solutions include generators and capacitors. Generators onboard the nanobot could use the electrolytes found within the blood to produce energy, or nanobots could even be powered using the surrounding blood as a chemical catalyst that produces energy when combined with a chemical the nanobot carries with it. Capacitors operate similarly to batteries, storing electrical energy that could be used to propel the nanobot. Other options like tiny nuclear power sources have even been considered.
As far as external power sources go, incredibly small, thin wires could tether the nanobots to an outside power source. Such wires could be made out of miniature fiber optic cables, sending pulses of light down the wires and having the actual electricity be generated within the nanobot.
Other external power solutions include magnetic fields or ultrasonic signals. Nanobots could employ something called a piezoelectric membrane, which is capable of collecting ultrasonic waves and transforming them into electrical power. Magnetic fields can be used to catalyze electrical currents within a closed conducting loop contained onboard the nanobot. As a bonus, the magnetic field could also be used to control the direction of the nanobot.
Locomotion
Addressing the problem of nanobot locomotion requires some inventive solutions. Nanobots that aren’t tethered, or aren’t just free-floating in their environment, need some method of moving to their target locations. The propulsion system will need to be powerful and stable, able to propel the nanobot against currents in its surrounding environment, like the flow of blood. Propulsion solutions under investigation are often inspired by the natural world, with researchers looking at how microscopic organisms move through their environment. For instance, microorganisms often use long, whip-like tails called flagella to propel themselves, or they use a number of tiny, hair-like limbs dubbed cilia.
Researchers are also experimenting with giving robots small arm-like appendages that could allow the robot to swim, grip, and crawl. Currently, these appendages are controlled via magnetic fields outside the body, as the magnetic force prompts the robot’s arms to vibrate. An added benefit to this method of locomotion is that the energy for it comes from an outside source. This technology would need to be made even smaller to make it viable for true nanobots.
There are other, more inventive propulsion strategies also under investigation. For instance, some researchers have proposed using capacitors to engineer an electromagnetic pump that would pull conductive fluids in and shoot them out like a jet, propelling the nanobot forward.
Regardless of the eventual application of nanobots, they must solve the problems described above, handling navigation, locomotion, and power.
What Are Nanobots Used For?
As mentioned, the first uses for nanobots will likely be in the medical field. Nanobots could be used to monitor for damage to the body, and potentially even facilitate the repair of this damage. Future nanobots could deliver medicine directly to the cells that need them. Currently, medicines are delivered orally or intravenously and they spread throughout the body instead of hitting just the target regions, causing side effects. Nanobots equipped with sensors could easily be used to monitor for changes in regions of cells, reporting changes at the first sign of damage or malfunction.
We are still a long way away from these hypothetical applications, but progress is being made all the time. As an example, in 2017 scientists created nanobots that targeted cancer cells and attacked them with a miniaturized drill, killing them. This year, a group of researchers from ITMO University designed a nanobot composed of DNA fragments, capable of destroying pathogenic RNA strands. DNA-based nanobots are also currently capable of transporting molecular cargo: one such nanobot is made of three different DNA sections, maneuvering with a DNA “leg” and carrying specific molecules with the use of an “arm”.
Beyond medical applications, research is being done regarding the use of nanobots for the purposes of environmental cleanup and remediation. Nanobots could potentially be used to remove toxic heavy metals and plastics from bodies of water. The nanobots could carry compounds that render toxic substances inert when combined together, or they could be used to degrade plastic waste through similar processes. Research is also being done on the use of nanobots to facilitate the production of extremely small computer chips and processors, essentially using nanobots to produce microscale computer circuits.
What Are Deepfakes?
As deepfakes become easier to make and more prolific, more attention is paid to them. Deepfakes have become the focal point of discussions involving AI ethics, misinformation, openness of information and the internet, and regulation. It pays to be informed regarding deepfakes, and to have an intuitive understanding of what deepfakes are. This article will clarify the definition of a deepfake, examine their use cases, discuss how deepfakes can be detected, and examine the implications of deepfakes for society.
What Is A Deepfake?
Before going on to discuss deepfakes further, it would be helpful to take some time to clarify what “deepfakes” actually are. There is a substantial amount of confusion regarding the term, and it is often misapplied to any falsified media, regardless of whether or not it is a genuine deepfake. In order to qualify as a deepfake, the faked media in question must be generated with a machine-learning system, specifically a deep neural network.
The key ingredient of deepfakes is machine learning. Machine learning has made it possible for computers to automatically generate video and audio relatively quickly and easily. Deep neural networks are trained on footage of a real person in order for the network to learn how that person looks and moves under the target environmental conditions. The trained network is then used on images of another individual and augmented with additional computer graphics techniques in order to combine the new person with the original footage. An encoder algorithm is used to determine the similarities between the original face and the target face. Once the common features of the faces have been isolated, a second AI algorithm called a decoder is used. The decoder examines the encoded (compressed) images and reconstructs them based on the features in the original images. Two decoders are used: one on the original subject’s face and the second on the target person’s face. In order for the swap to be made, the decoder trained on images of person X is fed images of person Y. The result is that person Y’s face is reconstructed over person X’s facial expressions and orientation. A sketch of this shared-encoder, two-decoder setup follows below.
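The arrangement described above can be sketched in a few lines of PyTorch. This is a simplified illustration of the general technique rather than any specific deepfake tool's code; the image size, latent dimension, and single-linear-layer networks are stand-in assumptions (real systems use deep convolutional networks).

```python
import torch
import torch.nn as nn

latent = 256          # assumed size of the compressed face representation
pixels = 64 * 64 * 3  # assumed 64x64 RGB face crops

# One shared encoder learns features common to both faces
encoder = nn.Sequential(nn.Flatten(), nn.Linear(pixels, latent), nn.ReLU())

# Two decoders: each learns to rebuild one specific person's face
decoder_x = nn.Sequential(nn.Linear(latent, pixels), nn.Sigmoid())
decoder_y = nn.Sequential(nn.Linear(latent, pixels), nn.Sigmoid())

# Training (omitted): each decoder learns to reconstruct its own person,
#   loss_x = mse(decoder_x(encoder(faces_x)), flattened faces_x)
#   loss_y = mse(decoder_y(encoder(faces_y)), flattened faces_y)

def face_swap(frame_of_person_x):
    """Encode person X's expression, then decode with person Y's decoder,
    so Y's face is reconstructed over X's expression and orientation."""
    code = encoder(frame_of_person_x)
    return decoder_y(code)
```

In practice the swapped face is then blended back into the original frame with the additional graphics techniques mentioned above.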
Currently, it still takes a fair amount of time for a deepfake to be made. The creator of the fake has to spend a long time manually adjusting parameters of the model, as suboptimal parameters will lead to noticeable imperfections and image glitches that give away the fake’s true nature.
Although it’s frequently assumed that most deepfakes are made with a type of neural network called a generative adversarial network (GAN), many (perhaps most) deepfakes created these days do not rely on GANs. While GANs did play a prominent role in the creation of early deepfakes, most deepfake videos are created through alternative methods, according to Siwei Lyu from SUNY Buffalo.
It takes a disproportionately large amount of training data in order to train a GAN, and GANs often take much longer to render an image compared to other image generation techniques. GANs are also better for generating static images than video, as GANs have difficulties maintaining consistencies from frame to frame. It’s much more common to use an encoder and multiple decoders to create deepfakes.
What Are Deepfakes Used For?
Many of the deepfakes found online are pornographic in nature. According to research done by Deeptrace, an AI firm, out of a sample of approximately 15,000 deepfake videos collected in September 2019, approximately 95% were pornographic in nature. A troubling implication of this fact is that, as the technology becomes easier to use, incidents of fake revenge porn could rise.
However, not all deepfakes are pornographic in nature. There are more legitimate uses for deepfake technology. Audio deepfake technology could help people regain their regular voices after those voices are damaged or lost due to illness or injury. Deepfakes can also be used to hide the faces of people who are in sensitive, potentially dangerous situations, while still allowing their lips and expressions to be read. Deepfake technology can potentially be used to improve the dubbing of foreign-language films, aid in the repair of old and damaged media, and even create new styles of art.
Non-Video Deepfakes
While most people think of fake videos when they hear the term “deepfake”, fake videos are by no means the only kind of fake media produced with deepfake technology. Deepfake technology is used to create photo and audio fakes as well. As previously mentioned, GANs are frequently used to generate fake images. It’s thought that there have been many cases of fake LinkedIn and Facebook profiles that have profile images generated with deepfake algorithms.
It’s possible to create audio deepfakes as well. Deep neural networks are trained to produce voice clones/voice skins of different people, including celebrities and politicians. One famous example of an audio Deepfake is when the AI company Dessa made use of an AI model, supported by non-AI algorithms, to recreate the voice of the podcast host Joe Rogan.
How To Spot Deepfakes
As deepfakes become more and more sophisticated, distinguishing them from genuine media will become tougher and tougher. Currently, there are a few telltale signs people can look for to ascertain if a video is potentially a deepfake, like poor lip-syncing, unnatural movement, flickering around the edge of the face, and warping of fine details like hair, teeth, or reflections. Other potential signs of a deepfake include lower-quality parts of the same video, and irregular blinking of the eyes.
While these signs may help one spot a deepfake at the moment, as deepfake technology improves the only option for reliable deepfake detection might be other types of AI trained to distinguish fakes from real media.
Artificial intelligence companies, including many of the large tech companies, are researching methods of detecting deepfakes. Last December, a deepfake detection challenge was started, supported by three tech giants: Amazon, Facebook, and Microsoft. Research teams from around the world worked on methods of detecting deepfakes, competing to develop the best detection methods. Other groups of researchers, like a group of combined researchers from Google and Jigsaw, are working on a type of “face forensics” that can detect videos that have been altered, making their datasets open source and encouraging others to develop deepfake detection methods. The aforementioned Dessa has worked on refining deepfake detection techniques, trying to ensure that the detection models work on deepfake videos found in the wild (out on the internet) rather than just on pre-composed training and testing datasets, like the open-source dataset Google provided.
There are also other strategies that are being investigated to deal with the proliferation of deepfakes. For instance, checking videos for concordance with other sources of information is one strategy. Searches can be done for video of events potentially taken from other angles, or background details of the video (like weather patterns and locations) can be checked for incongruities. Beyond this, a Blockchain online ledger system could register videos when they are initially created, holding their original audio and images so that derivative videos can always be checked for manipulation.
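As a concrete illustration of the ledger idea, a video's cryptographic fingerprint could be recorded when the video is created and later compared against any circulating copy. The Python sketch below uses a plain SHA-256 hash and an in-memory dictionary standing in for the blockchain; both simplifications are assumptions for illustration, not a description of any deployed system.

```python
import hashlib

registry = {}  # stands in for an append-only blockchain ledger

def register_video(video_id, video_bytes):
    """Record the original video's fingerprint when it is first created."""
    registry[video_id] = hashlib.sha256(video_bytes).hexdigest()

def is_untampered(video_id, video_bytes):
    """Any edit to the video changes its hash, exposing manipulation."""
    return registry.get(video_id) == hashlib.sha256(video_bytes).hexdigest()

register_video("press-briefing-clip", b"...original video bytes...")
print(is_untampered("press-briefing-clip", b"...original video bytes..."))  # True
print(is_untampered("press-briefing-clip", b"...edited video bytes..."))    # False
```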
Ultimately, it’s important that reliable methods of detecting deepfakes are created and that these detection methods keep up with the newest advances in deepfake technology. While it is hard to know exactly what the effects of deepfakes will be, if there are not reliable methods of detecting deepfakes (and other forms of fake media), misinformation could potentially run rampant and degrade people’s trust in society and institutions.
Implications of Deepfakes
What are the dangers of allowing deepfakes to proliferate unchecked?
One of the biggest problems that deepfakes create currently is nonconsensual pornography, engineered by combining people’s faces with pornographic videos and images. AI ethicists are worried that deepfakes will see more use in the creation of fake revenge porn. Beyond this, deepfakes could be used to bully and damage the reputation of just about anyone, as they could be used to place people into controversial and compromising scenarios.
Companies and cybersecurity specialists have expressed concern about the use of deepfakes to facilitate scams, fraud, and extortion. Allegedly, deepfake audio has already been used to convince employees of a company to transfer money to scammers.
It’s possible that deepfakes could have harmful effects even beyond those listed above. Deepfakes could potentially erode people’s trust in media generally, and make it difficult for people to distinguish between real news and fake news. If many videos on the web are fake, it becomes easier for governments, companies, and other entities to cast doubt on legitimate controversies and unethical practices.
When it comes to governments, deepfakes may even pose threats to the operation of democracy. Democracy requires that citizens are able to make informed decisions about politicians based on reliable information, and misinformation undermines democratic processes. For example, the president of Gabon, Ali Bongo, appeared in a video attempting to reassure the Gabonese citizenry. The president had been assumed to be unwell for a long period of time, and his sudden appearance in a likely fake video kicked off an attempted coup. President Donald Trump claimed that an audio recording of him bragging about grabbing women by the genitals was fake, despite also describing it as “locker room talk”. Prince Andrew also claimed that an image provided by Emily Maitlis’ attorney was fake, though the attorney insisted on its authenticity.
Ultimately, while there are legitimate uses for deepfake technology, there are many potential harms that can arise from the misuse of that technology. For that reason, it’s extremely important that methods to determine the authenticity of media be created and maintained. | https://www.unite.ai/what-is-backpropagation/ |
The Nobel Prize in Chemistry for 2016 has been awarded to Jean-Pierre Sauvage, Sir J. Fraser Stoddart and Bernard L. Feringa for developing molecular machines, the world’s smallest machines that may one day act as artificial muscles to power tiny robots or even prosthetic limbs.
Inspired by proteins that naturally act as biological machines within cells, these synthetic copies are usually constructed of a few molecules fused together. Also called nanomachines or nanobots, they can be put to work as tiny motors, ratchets, pistons or wheels to produce mechanical motion in response to stimuli such as light or temperature change.
Jean-Pierre Sauvage of France, J. Fraser Stoddart of Britain and Bernard Feringa of the Netherlands “developed molecules with controllable movements, which can perform a task when energy is added”. The three laureates will share the eight million Swedish kronor (around $933,000) prize equally.
Importance:
- Also called nanobots, these tiny machines can be put to work as motors, ratchets, pistons or wheels.
- The development of computing demonstrates how the miniaturisation of technology can lead to a revolution.
- The Academy says molecular machines “will most likely be used in the development of things such as new materials, sensors and energy storage systems”.
The chemistry prize is the last of this year’s science awards. The medicine prize went to a Japanese biologist who discovered the process by which a cell breaks down and recycles content. The physics prize was shared by three British-born scientists for theoretical discoveries that shed light on strange states of matter.
The Nobel Prizes will be handed out at ceremonies in Stockholm and Oslo on December 10, the anniversary of prize founder Alfred Nobel’s death in 1896. | https://www.rajras.in/nobel-prize-chemistry-goes-builders-molecular-machines/ |
How will Nanoparticles and Neurons Get Along?
Much of the predicted future of neurotechnology is grounded in the continuing success and development of nanotechnology. This field is broad, for sure, and is even a primary target of the US Federal Government (see the NNI).
A particularly critical aspect, however, considers the development of nanoparticles. A great deal of research is already underway on developing very tiny capsules that will one day float around in our bodies and drop off exact doses of drugs to a specific cell. Or, pint-sized nanobots with full on-board electronics will maneuver through our circulatory system looking for tissues to repair, cells to manipulate, and observations to report back to the host.
The prospects for this sort of technology might be exciting, and even a little scary. But, what is really important to think about right now is how will the human body actually get along with the nano-invaders? Will our immune system run in overdrive to try to stop the little buggers? Will we have to force an evolutionary leap to develop new symbiotic relationships with metallic pellets that are only just trying to be beneficial to our survival?
Three researchers from North Carolina State University are addressing this important issue that must be resolved before any real human trials of nano-particle infestations are implemented. Dr. Jim Riviere, Dr. Nancy Monteiro-Riviere, and Dr. Xin-Rui Xia are collaborating to figure out a way to pre-screen a nanoparticle’s characteristics in order to predict how it will behave once inside the body.
As soon as any foreign object slips into the human body, our sophisticated immune system kicks into high gear. Everything that is native to a body is essentially key-coded with a biological pass that tells any immune response that “I’m OK to be here, thank you!” If something inside isn’t coded properly, then a rapid kill response is launched through a biochemical cascade of the complement system, which attacks the surface of unrecognized cells and objects with a variety of binding proteins.
This is certainly a natural response that we would not want to occur if we were voluntarily injecting ourselves with nanobots. The brain might be able to consciously will our hands and feet to move as we see fit, but our species has not yet figured out how to mentally control our internal processes (or, can we?). Until thought-invoked immune suppression is possible, it will be more useful to clearly understand the biochemistry of the interactions between nanoparticles and our tissues, and use this characterization to correctly modify the nano-stuff to stay functional while surfing in the blood stream. | https://dynamicpatterns.com/research/category/neuronnews/page/2/ |
Jinxing Li pioneered the use of tiny robots—just a few micrometers across—to treat disease in a living animal.
Li designed rocket-like micromotors that run on gut fluids in a living animal and biodegrade after completing their mission.
The bots are made from polymer-coated balls of magnesium, which react with stomach acid to create hydrogen bubbles that propel them through the gut. Li and collaborators loaded one of the polymer layers with antibiotics, and the bots were administered to mice with stomach infections. On entering the stomach, they fired into the lining and stuck to the stomach wall before gradually dissolving to release their cargo over a long period to treat the infection.
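As a rough back-of-the-envelope illustration of that chemistry, assuming the textbook magnesium-acid reaction (Mg + 2HCl -> MgCl2 + H2) and a core size chosen arbitrarily, one can estimate the hydrogen yield of a single micromotor. None of the figures below come from the paper itself.

```python
# Toy stoichiometry: hydrogen gas available to propel a magnesium
# micromotor, assuming the textbook reaction Mg + 2HCl -> MgCl2 + H2.
import math

MG_MOLAR_MASS = 24.305   # g/mol
MG_DENSITY = 1.738       # g/cm^3
MOLAR_VOLUME = 22.4      # L/mol at STP (rough approximation)

def hydrogen_volume_litres(core_diameter_um: float) -> float:
    """Volume of H2 (litres at STP) from fully dissolving one Mg sphere."""
    radius_cm = (core_diameter_um / 2) * 1e-4
    volume_cm3 = (4 / 3) * math.pi * radius_cm ** 3
    moles_mg = volume_cm3 * MG_DENSITY / MG_MOLAR_MASS
    return moles_mg * MOLAR_VOLUME  # 1 mol Mg releases 1 mol H2

# Hypothetical 20-micrometre core: a minuscule but real pulse of gas.
print(f"{hydrogen_volume_litres(20):.3e} L of H2")
```

The point of the estimate is simply that even a microscopic magnesium core liberates far more gas than its own volume, enough to act as a propellant.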
Li recently showed that magnetically powered nanomotors cloaked in membranes from platelet cells could navigate efficiently through blood to remove toxins and pathogens without being cleared by the immune system or getting covered in sticky biomolecules, as foreign particles normally do.
The next step is to create “cyborg cells,” says Li, by taking the body’s immune cells, which hunt and destroy bacteria or cancer cells, and merging them with nanobots to navigate toward the disease site. | https://www.technologyreview.com/lists/innovators-under-35/2019/pioneer/jinxing-li/ |
Key points from article:
A new way to directly deliver gene-editing tools into specific tissues & organs in mouse models has been developed.
Such tools include CRISPR, which can add, remove or change a gene precisely.
Used to treat diseases like sickle cell anaemia, multiple myeloma and liposarcoma.
Targeting such treatments to specific tissues has been difficult and expensive.
Led by Qiaobing Xu, researchers have found a way to package such gene editing kits so they could be injected into target cells.
The researchers used tiny bubbles of lipid molecules, called lipid nanoparticles (LNPs), that can envelop the editing enzymes and carry them to specific cells or tissues.
These applications could open a new line of strategy in the treatment of neurological conditions, cancers, infections, and autoimmune diseases.
Clinical trials will be needed to determine the efficacy and safety of the delivery method in humans.
Study by Tufts University published in Angewandte Chemie International Edition 2020. | https://liveforever.club/article/tiny-lipid-bubbles-may-deliver-gene-editing-kit-to-your-tissues |
Ingestible Tiny Robots Can Now Save Your Life
Scientists make robots in all sizes, but did you know some are so small we can ingest them? Just how small can robots get?
Source: DNews
Read more: Nanobots
– Ingestible Origami Robot
– Are Ingestible Cameras The Future Of Medicine? | http://futuristicnews.com/ingestible-tiny-robots-can-now-save-your-life/ |
The literary interview, an in-depth discussion between an author and the media, is the topic of a comprehensive book published in French by Prof. Yanoshevsky.
Prof. Doron Aurbach (Chemistry) Wins Prime Minister's Prize for Innovation
BIU Researchers Have Reconstructed Neandertal Ribcage, Offering New Clues to Ancient Human Anatomy
An international team of scientists has completed the first 3D virtual reconstruction of the ribcage of the most complete Neandertal skeleton unearthed to date, potentially shedding new light on how this ancient human moved and breathed.
Tuning in to the Cocktail Party Effect
Tuning in to a conversation at a party can be a challenge, especially when people are milling about, discussing a wide range of topics, and the softest of music plays in the background. Dr. Elana Zion Golumbic, of BIU’s Gonda (Goldschmied) Multidisciplinary Brain Research Center, is making significant headway in understanding this “cocktail party effect” and its implications for everyday speech
Improving Urban Life through BIU’s Smart Cities Project
Bar-Ilan University’s Smart Cities Impact Center offers researchers the rare opportunity to make a difference in one of the “hottest” academic fields around, with the potential to greatly contribute to the community and the environment, notes Dr. Eyal Yaniv, head of BIU’s Graduate School of Business Administration
Targeting Enzymes that Energize Cancer Cells
Seeking the mechanism responsible for energy production in cancer cells, Prof. Uri Nir and his team at the Mina and Everard Goodman Faculty of Life Sciences found a component, the FerT enzyme, which is absent in healthy cells and can generate energy in cancer cells even under stressful conditions. When the researchers damaged the function of this enzyme (located in the mitochondria – the cell’s power stations which produce energy in cancer cells), the malignant cells failed to generate energy – and died.
A Smart Fiber that Measures Multiple Biomedical Parameters
A groundbreaking new technology enables simultaneous monitoring of a patient’s heartbeat, blood pressure, and respiratory rate – all made possible by embedding a sensory fiber into his or her clothing. This is the newest in a long line of innovative developments achieved in Prof. Zeev Zalevsky’s lab at BIU’s Alexander Kofkin Faculty of Engineering. The novel technology is currently patent pending.
Electric Cars: Fun Driving and Environmentally Sound
Climate changes and futuristic predictions were supposed to turn electric cars into our main form of transportation. But in fact, the rate of adoption of electric cars in the western world (Israel included) is still surprisingly low. This appears to be due to such factors as high price, insufficient travel range and long battery charging time.
Fathoming How the Sea Responds to Pollution
Israel’s first permanent marine station to study the deep Mediterranean Sea was recently launched by Bar-Ilan University in collaboration with Israel Oceanographic and Limnological Research and scientists from other universities. The objective of the DeepLev Research Station is to understand the ways the sea responds to pollution, leaks or accidents that result from the increasing number of gas exploration and production platforms in the eastern Mediterranean, as well as to join the effort to understand the role of seas in mitigating atmospheric CO2.
Toning Down Violence in the School
For years violence in the school was perceived to be inevitable as children mature and become exposed to the world-at-large. Several extreme cases in the US have brought about a change in thinking in the school system and, with it, a concerted attempt to prevent the violence. BIU Prof. Rami Benbenishty, a research pioneer in this area and 2016 EMET Prize winner, has developed educational models that have helped reduce violence in Israel and abroad.
Choosing the Right Research Topic
Doctoral fellows are the spearheads of academic excellence. The dissertation defines the researcher’s academic interests, and sometimes sows the seeds for groundbreaking scientific discoveries. With this much at stake, how do PhD students choose a doctoral dissertation topic?
Strategizing for Transformational Change
Bar-Ilan University’s newly-inaugurated president, Prof. Arie Zaban, is an avid proponent of challenge-driven research that can create meaningful change and impact upon key areas of our lives. He seeks to improve the university’s international ranking, make academic education more relevant to the times, and transform the BIU campus into a vital social, intellectual and cultural hub.
As Small as It Gets
Dr. Doron Naveh constructs transistors from two-dimensional and topological materials, and studies dichalcogenide spintronics, hoping to change the paradigm of nano-electronics
Playing with Time
After successfully cloaking a short event in time during his postdoc, Dr. Moti Fridman is using temporal optics to develop advanced temporal technologies.
Science fiction literature is inundated with time travel and time manipulation stories. It’s only science fiction, of course. But Dr. Moti Fridman, a 38-year-old father of two, has managed to manipulate time in real life.
The Wonders of Microbiota
Trillions of bacteria, known as “microbiota,” reside within the human body, and many of these actually contribute to our physical and mental health. Dr. Omry Koren, of BIU’s Azrieli Faculty of Medicine in the Galilee, is one of the world’s leading researchers of microbiome, a young, developing discipline which holds promise for exciting medical breakthroughs.
Using Smell to Shoo Disease-Carrying Mosquitos Away
Every year millions of people die from mosquito-transmitted diseases. Last year the Zika virus, which can spread from a pregnant woman to her fetus resulting in microcephaly and other birth defects, raised fears among those travelling to Latin America and the Caribbean.
Broadcasting Live…From Inside Our Bodies
No thicker than a pinhead, a tiny new endoscope developed by BIU researchers is able to transmit high-resolution images from internal human organs that were out of reach until now.
Nanobots to the Rescue: Programmed Drug Delivery in the Body
Smaller than viruses and designed to encapsulate the drug within folded DNA, nano-robots (nanobots) are increasingly being enlisted to fight disease. Injected into the patient’s bloodstream, they deliver their medicated payload directly to the infected organ. Treatment and drug dosage are tailor-made for each patient according to the doctor’s orders
Rejuvenation!
Aging is an inevitable biological process, not a pathological condition. Still, the search for an anti-aging skin-care product that stops sagging – or at least delays or lessens wrinkles – drives the cosmetics industry and consumers to spend vast amounts of money on anti-aging techniques and products.
The Center for Scientific Instrumentation
The Center for Scientific Instrumentation is more than just an equipment facility; it is a hub of research, providing state-of-the-art tools and technology and expert advice to scientists from BINA, Bar-Ilan University, as well as other academic institutions and the industry. Its unique instrumentation systems enable addressing a wide range of research questions and topics in the nano-realm. | http://www.biubogrim.org.il/?CategoryID=282 |
A team of researchers from the University of California San Diego and the University of Science and Technology Beijing has developed a way to engineer platelets to propel themselves through biofluids as a means of delivering drugs to targeted parts of the body. In their paper published in the journal Science Robotics, the group outlines their method and how well it worked when tested in the lab. In the same issue, Jinjun Shi with Brigham and Women's Hospital has published a Focus piece outlining ongoing research into the development of natural drug delivery systems and the method used in this new effort.
Medical scientists have been working with roboticists over the past several years to determine if it might be possible to launch tiny robots into the human body to carry drugs to specific parts of the body, such as an organ with a bacterial infection or a cancerous tumor. Most such efforts have involved injecting tiny capsules with metallic coatings that can be controlled using an external magnet. But as Shi notes, such efforts tend to be quite inefficient. Because of that, researchers have started to look at the possibility of engineering natural cells in the body to perform as programmed robots. In this new effort, they have devised a way to allow platelets to propel themselves through biofluids. Platelets, Shi also notes, were a good candidate because they are naturally able to carry material around in the body.
Under normal conditions, platelets are not able to move on their own; they are transported through the blood to different parts of the body. To give them a means of propulsion, the researchers asymmetrically coated them with an enzyme called urease—when it is exposed to urea, a reaction occurs that results in a force that can be used to propel the platelet. By coating the platelets asymmetrically, the team ensured that they were pushed in just one direction. The researchers noted that the speed of the platelet movement could be controlled by the concentration of the urease—and that the application of urease did not harm the platelet surface or its protein profile. | https://techxplore.com/news/2020-06-drug-carrying-platelets-propel-biofluids.html?deviceType=mobile |
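One way to picture why urease concentration controls speed is a toy model in which propulsion speed is proportional to enzyme coverage times Michaelis-Menten substrate turnover. This is a generic sketch of enzymatic propulsion with invented constants, not the model used in the study.

```python
# Toy model: self-propulsion speed of a urease-coated particle,
# assuming speed is proportional to enzyme coverage times the
# Michaelis-Menten turnover in urea. Both constants are invented.

SPEED_AT_FULL_COVERAGE = 25.0  # hypothetical um/s at saturating urea
K_M = 3.0                      # hypothetical Michaelis constant, mM urea

def propulsion_speed(urease_coverage: float, urea_mm: float) -> float:
    """Speed (um/s) for coverage in [0, 1] and urea concentration in mM."""
    return SPEED_AT_FULL_COVERAGE * urease_coverage * urea_mm / (K_M + urea_mm)

for coverage in (0.25, 0.5, 1.0):
    print(f"coverage {coverage:.2f} -> {propulsion_speed(coverage, 10.0):.1f} um/s")
```

The saturating shape captures the qualitative finding: more enzyme or more fuel means faster motion, but only up to the rate at which the surface-bound urease can turn the substrate over.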
blue goo
(BLOO goo) n. Proposed nanotechnological machines that would monitor and control other machines to ensure that their replication does not get out of control.

Example Citation:
The nano-enthusiasts also occupied themselves considering whether such "gray goo" might be effectively countered by "blue goo," policebots that would form a nanotechnological immune system.
— Bill McKibben, Enough, Times Books, April 2003

Earliest Citation:
Mr. Joy said mankind is on the threshold of creating tools "that are so powerful that they fundamentally threaten the social contract," the ceding of individual liberties to the state in return for the benefits of civilization. "We are likely to empower those people who have an agenda," he said, and he was not referring to a Filofax. But if the best solution is to put some kind of governmental or supragovernmental authority in charge of deciding what science is good and what is not, then we would do better to hope that the invisible hand of Adam Smith's marketplace provides a solution. Which it might: A possible solution to gray goo is blue goo: tiny self-replicating police robots that keep the other ones from misbehaving.
— Mitchell Martin, "Technology's Little-Heeded Prophet," International Herald Tribune, October 23, 2000

First Use:
Gragu can be recognized by what it does (nanovandalism). It can hide all it likes, but eventually it has to attempt to perpetrate some crime. Otherwise, it isn't gragu, is it? Imagine the world INFESTED with repair and defense nanoagents, at high density, ready to spring to action at the first sign of inimical activity (NAT MAN and ROBIN? :-) :-)). Of course, the problem then becomes the reliability and security of the nanopolice. Can they be trusted? Subverted? Could a traitorous strain be introduced that would out-replicate the bugs in blue, displacing them from the target area, and then striking? (Oh no, have I invented a new term: "Blue Goo" (The Nanopolice))? Blue Goo may work — if it can obtain and maintain a technological lead over the purveyors of Gray Goo.
— Alan Lovejoy, "Re: Miscellaneous," sci.nanotech, May 31, 1989

Notes:
When I posted global ecophagy back on April 24, I mentioned the gray goo problem. To recap, if nanotechnologists get their way, some time in the future (10 years? 50 years? no one knows), some or all manufacturing will take place at the nano (one billionth of a meter) scale. This will be done by assemblers — unfathomably tiny machines that can be programmed to build just about anything atom-by-atom; this includes copies of themselves made by replicating assemblers or nanoreplicators. If there was a Worst Case Scenario Handbook for nanotech, it would include the possibility that this replication would somehow get out of control and the resulting trillions of assemblers would destroy everything in sight, leaving only an undifferentiated mass of nanoreplicators: gray goo.
One "solution" to this hypothetical problem is adding extra nano-machines to the mix that would prevent the replication from getting out of hand. Since these machines would be effectively "policing" the other assemblers, one nano-wag (see the first use, above) dubbed them blue goo (which certainly puts the phrase "thin blue line" in a new context).
Are there other nano-goos out there? Why yes there are, thanks for asking. There's golden goo, nanobots designed to extract gold from seawater; khaki goo, nanotechnology used for military purposes; and red goo, a collection of replicators designed to cause harm (a kind of nano-terrorism).
New words. 2013.
Osteoarthritis (Figure 1) is the most common chronic disease in our country. As many as 1.2 million people have osteoarthritis and struggle daily with the pain and movement limitations it entails. Apart from joint replacement surgery in the end stage of the disease, there is no effective drug, so patients are dependent on physiotherapy and palliative care for years. Risk factors for osteoarthritis are age, obesity, mechanical overload and hereditary predisposition.
What’s the problem?
An important problem hindering the development of medicines is that the pharmaceutical industry has until now been looking for a medicine for the average osteoarthritis patient. But the average patient does not exist. Osteoarthritis has various causes (e.g. hereditary predisposition, overload, excess weight) and the disease manifests itself in different ways. Which patient has which form of osteoarthritis has not yet been properly investigated and is difficult to recognize, even with the help of modern equipment such as MRI. The different disease processes arise because the cells in the joint react differently to damage to the tissue.
The RAAK study
In the research we use the RAAK study (Figure 2). In this study, we collected blood and joint tissues from osteoarthritis patients during joint replacement surgery. The tissues must be removed in order to place the artificial joint. Because the joint tissues (bone, cartilage, mucosal layer) are collected immediately after surgery, we can also remove living cells from the tissues.
The research
Mapping diversity
We want to map the diversity of the molecular pathways, or disease processes, that lead to osteoarthritis and thus distinguish different patient groups from each other. In the joint tissues we therefore study the hereditary make-up, the activity of genes, and the mechanisms cells use to control the activity of those genes.
Investigating the osteoarthritis disease process in detail
In order to study the various disease processes in detail, we are developing a joint-on-a-chip together with Eindhoven University of Technology (Figures 3 and 4). This is a tiny device with two connected chambers: one is filled with cartilage cells, the other with bone cells from the RAAK study, and new cartilage and bone tissue is created in the device using growth factors. Because we can expose the cells to mechanical stress, for example by hitting them with tiny hammers, we can mimic the osteoarthritis process. We also change the genetic make-up of the cells to investigate patient-specific causes. By determining precisely where things go wrong during osteoarthritis in different patients, we also obtain leads for the development of (new) medicines. We call this tailor-made treatment.
Signaling molecules to observe disease process
There are no adequate diagnostics available that can observe and track the precise osteoarthritis process in the joint tissues over time. This poses major problems when testing new drugs, because it is not possible to select the patients who, on the basis of the disease process present, are likely to benefit most from the specific target of the developed drug. The research is therefore aimed at using a promising new type of molecular signaling molecule: so-called micro-RNAs. Micro-RNAs are small pieces of genetic material that regulate and control gene activity and thus specific processes in tissues (Figure 5). What is unique is that micro-RNAs also exchange messages about the condition of tissues throughout the body via the bloodstream. This is where we can intercept the micro-RNAs and their message and use them as biomarkers.
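For readers wondering how circulating micro-RNA levels could function as biomarkers in practice, the sketch below shows a generic classification workflow on synthetic data. The features, labels and model choice are invented for illustration and do not represent the group's actual analysis pipeline.

```python
# Illustrative sketch: classifying disease subtypes from circulating
# micro-RNA levels. All data here are synthetic; this is a generic
# biomarker workflow, not the RAAK study's analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients = 120
# Pretend expression levels for three hypothetical miRNAs per patient.
X = rng.normal(size=(n_patients, 3))
# Pretend subtype label loosely driven by the first miRNA.
y = (X[:, 0] + 0.5 * rng.normal(size=n_patients) > 0).astype(int)

model = LogisticRegression()
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

Real biomarker studies add normalisation, multiple-testing control and external validation, but the skeleton (measure levels, learn a mapping to subtype, validate) is the same.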
Most Recent Publications Osteoarthritis
The role of TNFRSF11B in development of osteoarthritic cartilage. | https://molepi.nl/osteoarthritis?lang=en |
Building Room for Better Debate
Recently a planned debate between two political candidates in South Bangalore was scuttled after a clash between party workers. This is just one example of how difficult it is to build serious and necessary dialogue into the political process. As the election season winds down, all we are left with is posturing on ideology and values.
There is no doubt that we live in ideologically divided times. From time to time, wedge issues – a ban on a controversial book, the ecology versus development debate – emerge to polarize us further. But what fuels our arguments when it comes to these issues: logic or emotions? This is what one of our contributors examines in her article.
Facts and information can be surprisingly elusive even in debates and decisions that involve public welfare. One of our experts uses recent policy debates to highlight how credible research could have been leveraged for better results in these cases.
Debate and discussion are a big part of the modern workplace. But are today’s leaders mindful of what it takes to drive productive discussions and effective problem solving within their organizations?
Lastly, if public opinion is the ultimate judge, how are offline and online channels converging to influence the way people think about specific issues? | https://the-viewpoint.net/2018/04/03/about-this-issue-13/ |
Voter suppression is alleged to be a strategy to influence the outcome of an election by discouraging or preventing people from exercising the right to vote.
An election is a formal decision-making process by which a population chooses an individual to hold public office.
It is distinguished from political campaigning in that campaigning attempts to change likely voting behavior by changing the opinions of potential voters through persuasion and organization.
Voting is a method for a group such as a meeting or an electorate to make a decision or express an opinion, usually following discussions, debates or election campaigns.
A political campaign is an organized effort which seeks to influence the decision making process within a specific group.
Voter suppression, instead, attempts to reduce the number of voters who might vote against a candidate or proposition advocated by the suppressors.
The tactics of voter suppression can range from minor dirty tricks that make voting inconvenient, to illegal activities that physically intimidate prospective voters to prevent them from casting ballots.
Voter suppression could be effective if a significant number of voters are intimidated or disenfranchised.
Most HR experts will agree that two topics should be avoided at work – politics and religion. We are seeing charged-up sentiments across the world; be it nationalism, security, job losses or immigrant workers, you are bound to come across some chatter regarding politics at your workplace, and it does not have to become a war of words.
Discussing politics at work, or anywhere, can turn contentious very quickly, and this can have many undesired effects on your professional life most of the time. You may feel strongly about a particular issue, candidate or political party, and that’s great, but discussing it, and especially debating about it, may be a bad idea.
Political debates almost always vex those who hold a different view. You don’t want that person to be a co-worker, especially your boss. This can undo the effort you have put into creating an excellent rapport with him or her. Opinions on areas such as politics often differ, and this difference can seep into personal and work relations, causing detrimental effects.
Remember, no matter how public the political issues are, opinions about them can be very personal. Political discussion can often lead you or the people around you to form biased assumptions about each other, greatly affecting the dynamics of the team.
Engaging in debates about politics may also invite vitriol, particularly on your social media where your colleagues may feel more comfortable expressing their opinions.
A little chatter about politics as break room conversation does not always have to be harmful. But, since political discussions or even just a perfunctory comment can sometimes be so volatile, it is best to avoid such discussions to avoid a debate around it. Here are some do’s and don’ts that you can keep in mind.
If at all you are faced with a conversation about political views, try to keep an open mind and make non-confrontational statements. Ensure that whatever you say is around ‘I’ and not ‘you’ because the moment the latter is used more, it can suddenly make the conversation hostile.
Hear out what the other person has to say without jumping in.
Read the room. A lot of the time, conversation around politics is just small talk, which is usually harmless. But if you feel that the discussions around you are more about seeking validation, choose to opt out by simply not participating.
Seek common ground. Instead of addressing which candidate or political party can do a better job, switch the conversation to a direction where both the parties can agree.
Even if you find it difficult to deflect from conversations about politics, learn to draw boundaries for yourself. If you ever feel that your words may be building up to a debate, take responsibility for it and disengage. Do it by stating clearly that you do not intend to create any discomfort for anyone.
Don’t engage in discussions about elections and politics even after work with your colleagues and boss. Hanging out with your work buddies after work is a great way to blow off steam but keep politics out of it too as it may find its way back to your professional life.
Don’t touch hot button issues when someone initiates political conversations as just small talk.
Even if you are faced with a situation that is difficult to avoid, do not voice your opinion in a way that vilifies the opposing views.
Don’t be entirely candid about your political views at work. This can be very helpful in avoiding heated conversations around politics at work.
Don’t decorate your cubicle and desk with signs and symbols of any political party or leader. This may invite unnecessary debates with colleagues who may not share the same affiliation. Moreover, this may be against your organisation’s policies.
There are a whole lot of topics that you can discuss at work instead of politics. Talk about your plans for the weekend, hobbies, recommend movies, or tell your colleague about the new restaurant you visited for making any small talk. You are allowed to be opinionated but remember that a debate on the political views during the lunch hour is less likely to change anyone’s views and more likely to affect your work relationship in a negative manner. So keep calm and don’t discuss politics! | https://humanresourcesblog.in/2019/04/07/how-to-avoid-political-debates-at-work/ |
Khabar Khair (Only Good News) – Ruqiya Danana
Reading public opinion, presenting the organization’s role and projects, maintaining a constant presence, and engaging public opinion are the aims of what civil society organizations publish as they seek to stay close to their target audiences.
Recently, “civil society groups have become more technically savvy because they use social platforms, formats and social media such as video and podcasts to raise awareness of their issues”. (www.weform.org.)
Means of Directing Public Opinion
Civil society organizations in Yemen have conducted many research studies, opinion polls and similar exercises in order to reach public opinion and direct it towards the social issues that need to be highlighted, especially issues that until recently were considered socially and culturally taboo and could not be talked about. By presenting these issues on social media, it became possible to learn the public’s differing opinions about them, the extent of the response, and the impact the issues raised in the questionnaires would have if adopted. This contributed to overcoming the barrier of fear around some issues and making them acceptable to talk about.
Here, Mona Turki, the mobilization and communication officer at Manasati 30, says of the platform’s role: “Manasati 30 is interested in making the voice of Yemeni youth heard through participation in a monthly questionnaire on the most important issues that concern Yemeni youth and society”.
Turki added, “The results of the questionnaire are converted into press material supported by graphs and infographics and shared with various press and media institutions in Yemen. The content resulting from the project therefore consists of written texts, audio files, videos, cartoons, and infographics, usually produced by young people who specialize in sensitive issues such as peace. Manasati 30 provides specialized training for its young contributors so they can produce disciplined and impactful content”.
Turki said, “With this approach, “Manasati 30” enables young public opinion to grow and emerge, which creates an influence on decision-makers and creates opportunities for change”.
She continues, “In the field of digital activities, the project organizes events and debates that seek to provide a free platform for young people to express their views, put forward their proposals and discuss their problems loudly”.
These events and debates take place in cooperation with other active youth institutions and initiatives. This cooperation builds the capabilities of youth groups and strengthens their presence, enabling them to exert influence more effectively.
“Promoting societal issues is what has led research and studies centers and some Yemeni platforms, especially in the current period of conflict, to prioritise access to content created by young people themselves”, says Mona Turki.
She asserts that “integrating young people into discussions of public affairs is a very important step to create peace-making opportunities, and this is what we are working on.”
Negotiations and Peace First…
The questionnaires presented on social media usually defuse what might be called political phobia, so people who fear delving into political issues find an outlet to speak.
Turki says, “The most important issues affecting the Yemeni arena were recently raised by means of questionnaires, the most recent of which was the March questionnaire in which young people were asked about the priorities of negotiation files”.
She adds, “The picture shows the results of one of the questionnaires, in which public opinion said it prefers Yemeni-Yemeni negotiations without external interference. More than 1,200 people took part in this questionnaire, and its results were published more than 10 times in local media and shared with the responsible authorities in the local political parties, including the Minister of Foreign Affairs and the Prime Minister’s Office”.
Between One Opinion and Another
Finally, the “Civil Alliance for Peace” platform posed a question about the role played by the UN envoy in Yemen: “Does the UN envoy play an effective role in building peace in Yemen?” The question provoked widespread reactions in the platform’s community on Facebook. Abdulaziz, one of those interested in the issue, said: “In the next few days there will be a positive and effective role for everyone, because everyone is unanimous and convinced that the time has come to achieve peace in Yemen and to end the war, the suffering and the tragedies of the Yemeni people, with the support of the international community and decision-makers”.
Omar Hadi disagreed with him, seeing the UN envoy’s role in Yemen as ineffective, while opinions in the latest discussions showed that the international role should be central and effective in achieving peace in Yemen.
Other Intermediaries
While civil society organizations in Yemen benefit from various social media outlets in their civil work, they are still taking slow steps compared to international organizations. Some relevant authorities have developed new partnerships, for example with UNICEF, to create robot programs to communicate with youth on social media platforms. The UNICEF robot is a free tool for social monitoring (reporting) via SMS. It assesses how young people feel about important issues based on responses to opinion polls and SMS alerts. (www.weform.org.)
Therefore, in order for NGOs in Yemen to perform effectively, they must embrace modern social media tools, especially on issues related to youth.
OPPORTUNITY TO APPRAISE OUR FUTURE PRESIDENT
Mongolia’s presidential election is taking place very soon. Having decided that MPRP leader N.Enkhbayar does not meet the legal requirements, the authorities registered former MP S.Ganbaatar as MPRP’s presidential candidate. This development has finalized the names of three presidential candidates from the three political parties who have seats in the parliament. The nominees are now ready to compete in the presidential race happening on June 6-24.
In a democracy, people cast their vote only after they obtain a good understanding of each candidate and what they stand for. A very effective tool for giving people an opportunity to weigh up the candidates is a debate broadcast on TV. However, such a culture of organized debate has not yet become well embedded in Mongolia. A debate also offers candidates a great opportunity to express themselves, tell their story to the public and reach many people in a short time. Mongolia’s first debate broadcast live on TV was in 1993, just before votes were cast in our first presidential election, between L.Tudev and P.Ochirbat. The last debate took place in 2013, when Ts.Elbegdorj, N.Udval, and B.Bat-Erdene were candidates. But almost every one of the past debates took the form of a pre-planned Q&A session rather than a real debate.
As the people of Mongolia, we want to know more about what our future president stands for and what kind of a person he is, before the election, and not afterwards.
Debate – a mandatory event for our time
In my previous column, I reflected on the fact that people today do not know how well our presidential candidates understand the principles of capitalism, how they would wield their power to promote free competition and let the market regulate itself, and whether they have the capability and desire to do so.
Mongolian political campaigns have been focusing more on self-promotion and making others look bad, instead of talking about plans and programs. This is causing people to become divided, moving them away from important discussions around political and socio-economic issues.
When debates reveal what leadership qualities the candidates possess, who their family members and relatives are, and what approach and attitude they take on various issues, they can really influence the opinion of voters. A 2012 report from the Pew Research Center suggests that live televised debates influence two-thirds of voters significantly or to some degree, making them more effective than other communication methods.
An election campaign does not really target supporters; it focuses on the “floaters” – people who have not yet made up their minds. Scholars (Holbrook, Hall, Gottfried) have concluded that live debates play an important role in delivering the information voters need to make an informed decision about the candidates.
During debates, new or emerging issues often surface and stay under public discussion even after the election. A study (McKinney & Rill, 2009) also concluded that debates help younger people obtain a better understanding of democracy and how it works.
A TV debate broadcast live throughout the country would also be cost-effective. The maximum campaign expenditure for one candidate has been set at 6.8 billion MNT from the political party and 3.9 billion MNT from the candidate personally. Instead of spending large sums to visit all aimags and soums, a televised debate offers a much cheaper option that reaches more people.
Format and content of debate
Another cause of the governance crisis in Mongolia today is the growing power granted to the president. The Constitution grants the president over ten different privileges, which can be divided into three categories: foreign relations, security, and ensuring independence and balance in governance. The TV debate should therefore be organized around each of these categories as a theme.
We need to have the candidates talk about their proposed policies on the economy, society, and foreign relations, let them elaborate on their long-term vision, and ask questions about their past actions and statements. There need to be discussions about what the candidates think of our current political and economic institutions and how and what they intend to change. In order to stop and prevent deep-seated corruption in government, we need to pose well-researched questions about each candidate’s political party financing and campaign funding.
How ready the president will be on his first day in office and what outcomes he will deliver depend largely on whether the candidates have developed and publicized their policies in the abovementioned categories and whether they have a team ready to work with. There is no time to prepare for these things after getting elected.
The questions must not be disclosed to the candidates beforehand, and they should be specific rather than general. Candidates’ opinions on some topics should be probed further, and every nominee must be given an opportunity to express opposing views and give rebuttals.
The moderator is critical to organizing such debates. The only role of the moderators in previous debates was to ask questions, which made the whole affair boring, slow, and uninteresting. Every candidate has to be treated equally, from seating position to opportunities to speak.
If the debate has the right format, asks the right questions, and makes people think, it will help everyone make a more informed decision when voting.
It is clear from the traditional and social media that Mongolians have high expectations for the presidential election debate this year, having recently seen extremely interesting, competitive, multi-phased debates of elections in the United States, France, and South Korea.
It is unimaginable that the head of our state and the leader of Mongolian democracy would be unable to articulate their opinions clearly and freely. I strongly encourage the presidential nominees to participate in a live televised debate.
On issues ranging from vaccines to genetically modified crops, to climate change, hot button public controversies about science shape public opinion and influence policymakers. I study these controversies and find they are often not actually about science. Instead, science provides an arena in which public figures debate deeper issues — such as the relationship between capitalism and the environment or how society should deal with risk. Science, in other words, serves as a playing field for these much broader political and philosophical disagreements.
But science is often an inappropriate focus for discussions that are really about other issues. Scientific debates are typically restricted to elites and use narrow conceptual frameworks. As a result, debates about science and scientific consensus can conceal broader concerns and distract from more important discussions about shared values and disagreements about how to solve societal problems. Several current debates are all cases in point.
Debates about Genetically Modified Foods
The debate over genetically modified foods — called GMOs for short — is typically framed as a debate over safety: will eating these foods give you cancer? I first encountered this debate in a local food cooperative in northern Indiana, where many members were opposed to GMOs. Even though members sometimes raised concerns about safety or environmental impacts, they often seemed more concerned about certain large agricultural biotechnology companies that control much of the commodity seed market and have a reputation for aggressively protecting their intellectual property. Widely circulated rumors suggested that lawyers were harassing farmers. There is no solid evidence of this practice, but out-of-court settlements would not necessarily leave a paper trail. In short, deeper concerns than scientific findings about GMO safety are involved here. Many opponents see GMOs as the flagship technology for “industrial agriculture,” which they oppose on cultural and environmental grounds. So being “anti-GMO” is often a shorthand way to express a broader opposition to the heavy use of synthetic pesticides and fertilizers and the role of the profit motive in agriculture.
Vaccine Statistics and Different Understandings of Risk
Much like GMOs, the vaccine controversy is typically framed as a debate over safety. But recent work has uncovered some more complicated social dynamics. Many parents who are hesitant to vaccinate their children are highly educated, middle class, white women. People in this privileged demographic are not likely making choices out of simple ignorance. A better explanation starts by recognizing that epidemiologists and parents typically work with very different conceptions of risk.
- Epidemiologists study whole populations using statistical conceptions of risk. For vaccines, they ask how many will get sick from a preventable disease versus how many will suffer vaccine side effects. The vaccine is judged safe and effective if it reduces the rate of disease more than it causes side effects.
- By contrast, parents hold to an individual conception of risk. They are rightfully concerned with risks to their child, who is not interchangeable with other children. Parents want to know whether their particular child is more or less susceptible to a disease or a severe side effect from vaccination. Statistical studies do not provide the answers parents want.
Climate Change Controversies
Over 99% of climate scientists agree that climate change is real, will have severe effects, and can be affected by human choices. But there are still some persistent critics – by which I do not mean conspiracy theorists, but some experts with backgrounds in economics, engineering, or physics who offer highly technical criticisms of the mathematical methods used in climate science. The notion of “inductive risk” can help us understand why these critics are so persistent. This is the idea that standards for scientific claims depend, in part, on their downstream policy or social consequences. If the downstream consequences of a claim are mild, then it is appropriate for the standards of evidence to be relatively low; if the downstream consequences are severe, then much higher standards of evidence are appropriate.
If scientific claims about climate change are correct, of course, that means humanity should work quickly to make a drastic change, to transition away from fossil fuels. Many of the technical critics of those claims have connections to the fossil fuels industry, which would be devastated by such a transition. From their perspective, the downstream consequences of climate change findings are severe — and the principle of inductive risk indicates that, from their perspective, very high standards of evidence are appropriate. At the same time, many mainstream climate scientists argue that the transition would cost much less than climate inaction. For these climate scientists, the inductive risk of climate change is low, and so the standards of evidence are more modest. From the mainstream perspective, critics are pointing to relatively minor doubts.
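The logic of inductive risk can be made concrete with a toy decision-theoretic sketch: the probability of truth required before accepting a claim rises with the cost of wrongly accepting it. The numbers below are arbitrary stand-ins for the two camps' perceived stakes, not anything drawn from the cited literature.

```python
# Toy formalisation of inductive risk: accept a claim only when
# p * cost_false_reject > (1 - p) * cost_false_accept, i.e. when the
# probability of truth p exceeds cfa / (cfa + cfr).

def acceptance_threshold(cost_false_accept: float,
                         cost_false_reject: float) -> float:
    """Minimum probability of truth needed before accepting a claim."""
    return cost_false_accept / (cost_false_accept + cost_false_reject)

# Hypothetical stakes (arbitrary units): a critic who sees transition
# costs as enormous vs. a mainstream scientist who sees inaction as
# the far larger risk.
print(f"critic's threshold:     {acceptance_threshold(100, 10):.2f}")
print(f"mainstream's threshold: {acceptance_threshold(10, 100):.2f}")
```

With identical evidence, the two camps can rationally demand different standards, which is exactly the persistence the inductive-risk framing predicts.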
What Can be Done?
There is no silver bullet for preventing narrow scientific debates from distracting from the deeper, more difficult conversations that need earnest attention on the public stage. Nevertheless, policymakers, media outlets, and other civic leaders should recognize that clashing public values can imply different scientific research questions and standards of evidence. Science that is relevant to one social group may not address other concerns and interests.
Often the important political issue is whose interests — whose science — should shape policies affecting everyone. Public actors might do better to stop privileging science and instead articulate the deeper issues that are at play in hot button scientific controversies. Increased transparency about what is really at stake in these controversies could make them less confusing and help leaders and citizens decide on effective solutions to the underlying challenges.
Read more in Hicks, Daniel J, “Scientific Controversies as Proxy Politics.” Issues in Science and Technology, 33, no. 3 (2017); and Daniel J. Hicks, “The Safety of Autonomous Vehicles: Lessons from Philosophy of Science.” IEEE Technology and Society Magazine 37, no. 1 (2018): 62–69. | https://scholars.org/contribution/how-scientific-controversies-inappropriately-end-center-bigger-political-battles |
The UK Government’s Alcohol Strategy (GAS), published in March 2012, unexpectedly included a commitment to introduce minimum unit pricing (MUP) for alcohol in England, following the adoption of similar measures by the Scottish Government. Yet just 16 months later, the introduction of MUP was placed on hold indefinitely. Our recent article published in Policy and Politics seeks to explain how and why MUP came so unexpectedly onto the policy agenda in England, before disappearing just as suddenly, and what this tells us about the evolving political dynamics of post-devolution and post-Brexit Britain.
In Scotland, MUP passed into law at the second attempt in 2012 and came into force in 2018 following a six-year legal battle with the Scotch Whisky Association and other industry actors. The emergence of MUP as a viable policy option was, however, a ‘cross-border’ process, with developments in Scotland inextricably linked to those ‘down South’, particularly the support for, and background work on, alcohol pricing within the Department of Health. Following its adoption in Scotland, a ’policy window’ opened in which MUP came onto the policy agenda in England as well. However, this proved to be short-lived. Our article argues that the success of MUP in Scotland and its failure in England can largely be explained by the differing levels of political commitment to the policy in each context. Continue reading How minimum unit pricing for alcohol almost happened in England and what this says about the political dynamic of the UK
Anne Skevik Grødem & Jon M. Hippe
In the current political climate, academic knowledge and topical expertise do not appear to be the most sought-after qualities in political leaders. Increasingly, life in the world’s capitals is portrayed as a battle for power between politicians and civil servants. Incoming politicians are often charismatic, prone to sweeping statements on complex issues, and portray themselves as representatives of the people who will “drain the swamp” and “get things done”. Among the swamp creatures, more often than not, they place civil servants: the dull nerds, obsessed with their rules and budgets, far removed from the people they are supposed to be serving. In this picture, there is a clear rift between the dynamic, if ignorant, politician, and the change-averse, but smart, civil servant. Against this background, it seems more important than ever to discuss: what is the relationship between knowledge and action in politics? Or, to put it differently, does it matter whether politicians know what they are doing? Continue reading Does it matter if politicians know what they are doing?
Sarah Brown,
Journal Manager, Policy & Politics
New virtual issues from Policy & Politics:
Evidence in policymaking and the role of experts
The importance of using evidence in policymaking and debates over the role of experts has never been more crucial than during the current coronavirus pandemic and ensuing public health crisis. From prevailing, long-standing debates over both topics in Policy & Politics, we bring you a collection of our best and most recent articles.
Continue reading Virtual issue on Evidence in policymaking and the role of experts
Markus Holdo, Per Ola Oberg & Simon Magnusson
Political debates often become dominated by the same kind of people: pundits, lobbyists, politicians, and experts, who know how to grab people’s attention and articulate their viewpoints convincingly. These people persuade viewers and listeners, shape public opinion, and influence political decision-makers more than other people do. But debating skills are not necessarily matched by knowledge, nor by a concern about the interests and views of ordinary citizens. In that sense, it could be viewed as a democratic problem that the public conversation is usually shaped by the narrow perspectives of a privileged few.
But how, then, could our public discussions become more inclusive and responsive to ordinary citizens? To this question, political theorists have given two very different answers. Continue reading Do People Use Stories or Reasons to Support their Views?
Thank you to all our reviewers in 2019
On behalf of the authors and readers of Policy & Politics, the Co-Editors wish to wholeheartedly thank those who reviewed manuscripts for us in 2019.
With a high two-year impact factor of 2.028 and a 50-year tradition of publishing high-quality research that connects macro-level politics with micro-level policy issues, the journal could not exist without your investment of time and effort, lending your expertise to ensure that the papers published in this journal meet the standards that the research community expects of it. We sincerely appreciate the time spent reading and commenting on manuscripts, and we are very grateful for your willingness and readiness to serve in this role.
We look forward to a 2020 of exciting advances in the field and to our part in communicating those advances to our community and to the broader public.
Policy & Politics Co-Editors: Sarah Ayres, Steve Martin & Felicity Matthews
If you enjoyed this blog post, you may also be interested to read: | https://policyandpoliticsblog.com/category/politics/ |
Briefing Paper 100: Online Threats to Democratic Debate:
A Framework for a Discussion on Challenges and Responses
Michael Meyer-Resende (Executive Director) and Rafael Goldzweig (Social Media Research Coordinator) wrote this briefing paper.
Executive Summary
Online disinformation in elections has been one of the major themes of recent years, discussed in countless articles, investigations and conferences. With this paper we want to challenge some of the notions and points of focus in the debate, namely:
The Problem
The focus on elections is too narrow. The US presidential elections in 2016 pushed online disinformation into the limelight, and as a result people have often discussed it as a danger to electoral integrity. Elections are a necessary part of democracy, but by no means sufficient. Participation takes place in many other forms. People work in political parties, engage in pressure groups, and demonstrate and share their opinions in many different ways. Journalists investigate and report, politicians discuss, propose and act. These are all essential ways of engaging in a democracy and they happen daily. And every single day these processes may be affected by online disinformation. The focus then needs to be on all these aspects of democracy.
The focus on ‘disinformation’ is often unclear. Many different issues, in particular cyber-security, are conflated with disinformation. Some of these issues have overlaps, but they are not the same. Hacking into accounts or disabling electoral infrastructure is a major problem and it is not easy to defend against, but it does not raise wide-ranging normative questions. In most cases cyber-attacks are crimes, or are widely seen as crimes, and the only question is a technical one about how to prevent them. The question of democratic discourse is far more complex.
A Wider Understanding of Threats
Nothing less than democratic debate and discourse is under threat. A democracy needs a functioning public space where people and organisations freely exchange arguments. That is why freedom of expression is essential to any democracy, but it is also the reason why all democracies spend money on public broadcasting: they acknowledge that an informed public debate does not emerge by the mere forces of the market. Democratic discourse needs to be understood widely. It encompasses all exchange of arguments and opinions, in whatever form, and can relate to public policy choices.
Discourse that is relevant to democracies includes a wide range of activity, from discussions of deeply-held beliefs (world views) to simple information that may not affect any opinion but that may affect politically relevant action (such as finding a polling station, deciding whether to go there, or deciding whether to join a demonstration).
Why is it necessary to start with things as far-reaching as worldviews? The answer is that democracy is premised on some common ground. It can live with many disagreements and different interests – indeed, it is designed to allow people to live together peacefully, despite disagreement – but it does need some common ground. If, for example, many people believe that the Earth is flat, they are rejecting scientific evidence. Without accepting basic assumptions of science, it is simply impossible to discuss most major political questions. Again, this should not be too controversial. Democracies invest heavily in school curricula that try to establish that common understanding.
We propose a layered understanding of threats to democratic discourse that appear at different levels of opinion and behaviour formation. These range from fundamental beliefs (ethical or religious assumptions), through political ideology (conservative? socialist? ecological?) and voter choice, to behaviour choices (vote or not, and vote where? demonstrate or not, and where?) that may not even impact an opinion. Threats to opinion at the deeper levels are continuous, because opinions are formed continuously. Threats to short-term choices are more likely to emerge around specific events (such as trying to deter people from going to vote by spreading false news about police checks at polling stations). The tech firms’ remedies have focussed more on the short-term threats than on the longer-term systemic threats.
To discuss the entire panoply of challenges, we prefer the term ‘threats’ to other terms like propaganda or disinformation. The latter are mostly used with the assumption that a particular actor is actively and intentionally disinforming. But many threats to democratic discourse are non-intentional. Most importantly, the entire architecture of social media and other digital services rests on choices that are full of unintended consequences for democracy. Just think of YouTube recommending videos that steer viewers towards extremist content. It recommends sensational content to keep users on the platform, but it was not designed to help extremists.
The Phenomena
‘Fake news’ has become the shorthand for all the internet’s ills. As many experts have pointed out, the word has been so abused and means so many things to so many people that it has become useless.
The term’s boom points to a deeper problem with the debate: it has centered on the question of the “message.” Is the message true or false? Is it harmful to specific persons or groups of people? Should it be censored? These are the questions that typically emerge in the debate. The focus on content as the main problem has resulted in fact-checking becoming one of the favourite remedies.
But many problems of online speech are unrelated to the message. When Russian agents bought advertising on Facebook to support the ‘Black Lives Matter’ movement, the messages were not the problem. We would not discuss them had genuine members of the movement posted them. The messenger was the problem. When bot networks amplify and popularise a theme or a slogan, the message may not be the problem, nor the messenger, but the messaging is problematic, i.e. the way the message is spread, implying a popularity that does not exist. Imagine a major street demonstration for a legitimate cause where it later turns out that most participants were robots or people paid to participate. We would consider that problematic.
We therefore propose to distinguish three phenomena (“3M”) that need to be discussed in their own right:
- the message: the content that is spread;
- the messenger: the person, group or organisation that creates and posts it;
- the messaging: the way the content is distributed and amplified.
It's Not Only About Freedom of Speech
The focus on the message has meant that most debates revolve around freedom of speech issues. When the broader threats to democratic discourse are viewed across the “3 Ms”, it becomes clear that the rights issues are more complex. The blind spot of legal debates has been the right to political participation and to vote, which presupposes – in the words of the UN’s Human Rights Committee – that public discourse should not be manipulated. This right turns the focus from the expression of opinions to the question of how opinions are formed – the concern that stands behind states’ financing of public broadcasting. It provides the basis for discussing many of the questions related to inauthentic messengers and manipulated messaging/distribution of content. This should not be understood as a facile road to censorship, but rather as showing that concerns about social media architecture – what decisions guide what users can see – are based on a human rights concern.
1. Why This Paper?
Ever since the US elections in 2016 and the Cambridge Analytica scandal, there has been a wide-ranging debate on the threats to democracy in the digital space and particularly on social media. Countless conferences, reports and media pieces describe and analyse a large range of issues and challenges. Catchwords abound: disinformation, computational propaganda, fake news, filter bubbles, dark ads, social bots or inauthentic behaviour, to name but a few.
Building on the work of other organisations, we propose a framework to disaggregate these various phenomena more clearly. We hope that this will contribute to structuring debates and conferences, to developing practical methodologies for monitoring and responding to threats to democratic discourse online, and to discussing regulation.
2. What is the Problem?
How should one describe a desirable online discourse? The tech companies sometimes use frames borrowed from biology. Facebook, for example, often mentions ‘healthy discourse’,1 and Twitter’s CEO Jack Dorsey asked for help to measure Twitter’s health. Words like ‘toxic discourse’ or ‘contamination’ abound. But biology is a bad frame for discussing threats to online discourse.
Social media and the digital sphere are created by humans. The digital space has no ‘natural’ qualities, and the idea that it does confuses the debate. For example, a widely held misunderstanding suggests that there is a natural order in which posts appear on social media platforms and that there should be no ‘tampering’ with algorithms. Nothing we see in our Facebook, YouTube or Twitter feeds is natural.
It is entirely based on complex algorithms designed by humans to keep users on the platforms and to gain new users, ultimately to make the platforms more attractive for advertisers. If Facebook decides to reduce the reach of a post, it is not reducing its ‘natural’ position. It only gives it less prominence compared to other posts.2
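To make the point concrete, consider a deliberately simplified, hypothetical sketch of a feed ranker (no platform publishes its actual ranking logic, so every name and constant here is an illustrative assumption). A ranker is just human-written code that scores posts, for instance by predicted engagement, and sorts them:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_followers: int    # reach of the messenger
    predicted_clicks: float  # a model's guess at engagement
    minutes_old: float

def score(post: Post) -> float:
    """Hypothetical engagement-first scoring: fresh, clickable posts from
    large accounts rise; truth and civic value never enter the formula."""
    freshness = 1.0 / (1.0 + post.minutes_old / 60.0)
    reach = (1.0 + post.author_followers) ** 0.1
    return post.predicted_clicks * freshness * reach

def rank_feed(posts: list[Post]) -> list[Post]:
    # 'Reducing the reach' of a post is just scaling its score down;
    # there is no 'natural' ordering to restore.
    return sorted(posts, key=score, reverse=True)
```

Every weight and multiplication in such a formula is a design choice; that is the sense in which no ranking is ‘natural’.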
There is no obvious definition of what a ‘healthy’ discourse may be. For example, in the US the limits to freedom of speech are drawn very widely and include speech that would be characterised as incitement to racial or religious hatred in many European countries. Neither approach is ‘naturally’ better; there are good arguments on both sides. Talking about online discourse using health as a frame implies that we only need to find the right formula to solve the problem, and that it may be a matter for experts more than for others. There is no such formula for human debate.
Other authors suggest that the information space should be seen as an ‘order’, meaning that ‘disorder’ is a problem.3 However, social media discourse in particular, conducted by millions of people at the same time, is disorderly – and why should it not be? What order would be appropriate, and who would decide? Much information on social media is irrelevant to democratic discourse, and no order is required.
The term computational propaganda is also used and may be useful to describe specific threats, but by implying malicious intent on the part of actors, it is too narrow to describe the full range of threats to democratic discourse online. For example, the above-mentioned question of how algorithms make choices in ranking posts is not in itself a matter of propaganda. It stems from a company’s interest in profit-making.
We propose the term ‘threats to democratic discourse’. Threats can follow from the intentional actions of people seeking to do harm, but threats can also be the unintended consequences, for example, from the way that social media platforms are designed.
3. What is the Democratic Discourse?
Democratic discourse is the pluralistic debate of any issue that relates directly or indirectly to public policies. A lot of interaction on social media, such as discussion of sports or celebrities, often has no strong relation to public policy and is therefore of no particular interest for a discussion on online threats to democracy.
At the same time, in recent years the threat of electoral interference has often narrowed the debate. Democratic discourse is a larger concept than electoral integrity. Political participation in a democracy is exercised around the clock and not only during elections. Citizens inform themselves, they debate (online or offline), they may demonstrate for issues or they may be active in associations or political parties. Elections are an essential element of democracy, but even the most reduced academic definition includes more than just casting votes.4 More importantly, international law is clear on the set of political rights that make a democracy, which go beyond the right to vote and to stand in elections. They include the freedoms of association, assembly and expression – summarised as political rights.5
Democratic discourse takes place constantly. When public discourse is manipulated, it may not only affect elections; it may equally be targeted at public policy choices. A high-profile example is the sudden, online-generated opposition to the UN Migration Pact. While opposition to the pact is legitimate in any democracy, the campaign showed elements of online disinformation. Massive resistance emerged suddenly at a late stage in the process, after there had been little opposition during the long negotiation of the pact. Online manipulation may target even deeper roots of democracy. It may attempt to turn engaged citizens apathetic, cynical or fundamentally distrustful of the entire system of democracy.
Therefore, protecting democracies means adopting a wide notion of democratic discourse. If, for example, many people start believing that the Earth is flat, a whole range of public policy debates will become impossible (how do you discuss climate and weather patterns if you believe the Earth is not round? If many people reject the science of vaccination, how can we discuss health policies?). And worse: if people believe that all governments, scientists and journalists are part of a conspiracy to conceal the fact that the Earth is flat, they will not meaningfully participate in public discourse. These threats may not result from anti-democratic intentions.
YouTube recommends videos that are sensationalist because they are more likely to be watched (the company promised to reduce such promotion). That the Earth is flat sounds more interesting than an
explanation that it is not. Our new information infrastructure follows the rules of sensationalist tabloids to catch the attention of viewers and users. This challenges democracy.
In authoritarian states deep distrust of institutions is a sign of realism. In democracies scepticism towards institutions is appropriate, but if it turns into conspiratorial thinking or a rejection of facts-based debate, democracy loses its basis. It is for this reason that different levels of human reasoning and behaviour can be threatened, either by disinformation or by the way that online content is organised and presented. These levels include:
- Worldview/Weltanschauung: The worldview is the deepest level of a personal belief system, for example a belief in rationality (even if it may not be an absolute belief), or religious, moral and ethical convictions. There are far-reaching social science debates on what a worldview is, but for our purposes it is enough to distinguish the deepest level of beliefs and assumptions about the world from political and ideological leanings. For example, a person who believes in relative human progress (“you can improve things”) may turn to various ideologies. She could be a conservative or a liberal, but would be unlikely to turn to more totalitarian ideologies. A person who believes in absolute progress (“if we try hard, everything will become ever better and at some point perfect”) is likely to turn to more utopian (or dystopian) ideologies like communism or fascism. Democratic compromise will feel like treason to that person. A person who turns to religious fundamentalism is unlikely to remain adaptable to democracy. Disinformation and other online manipulation try to weaken democracies’ deep roots at the level of worldviews. They will try to turn citizens into cynics (“I cannot do anything anyway”) or into paranoids who work against democracy (“I have to bring down the false facades”). Specific myths (such as that of a flat Earth or chemtrails) may seem crazy, but they have a destructive power, because they question everything. More insidiously, the concept of science may not be attacked directly, but the credibility of scientists is undermined tactically to serve a political purpose, as has been the case with climate deniers. The end result is cynicism and distrust towards a professional community that provides essential information for a facts-based democratic discourse. The same is true when such attacks directly target critical
democratic institutions (“all journalists are liars”). If we identify worldviews as a specific target of influence operations, it also becomes clearer where to look for threats. For example, adolescents typically do not yet have firm worldviews, so actors who seek to undermine them would look to platforms popular with adolescents, such as Instagram or gaming platforms.
- Political beliefs, ideology: Actors of disinformation try to influence political beliefs and ideologies that usually have an impact on electoral choice and general positioning in public discourse. For example, lobbyists for the coal industry may try to undermine climate scientists, reframing the perception of coal as a ‘green’ natural resource. They do not aim to change somebody’s worldview (the person still believes in the need for clean energy), but they try to change their political belief on a specific topic. At this stage disinformation may become propaganda. It may not present false content, but its selection is one-sided in order to build a political belief (if the only crimes that a supposed news site reports are those committed by immigrants, it serves a propaganda purpose, not a news purpose). Fake news sites with such propagandistic purposes remain one of the major challenges for Facebook. Impact at this level prepares the ground for influencing the next level of behaviour, namely electoral or other concrete political choices.
- Electoral and other choices of political action: Disinformation may not aim to influence a political belief, but simply an electoral or other choice. The campaign during the 2016 US presidential elections portraying Hillary Clinton as a criminal, for example, did not try to turn Democratic voters into Republican ones. It signalled to Democratic voters: even if you like that party, do not vote for this particular candidate. Operatives of the Democratic Party tried to divide support for the Republican candidate in the 2017 Alabama Senate special election; they did not try to change voters’ political beliefs. The Russian Internet Research Agency published posts calling for demonstrations that would not have happened otherwise. It activated existing beliefs, but it did not create or change them. Such threats usually have a more short-term horizon, for example aiming to influence a specific upcoming election.
- Electoral behaviour: Disinformation may also try to change electoral behaviour without attempting to change the voters’ minds about a candidate or a party. Examples include an ad posted during the 2018 elections in Brazil feigning support for the Workers’ Party but indicating the wrong election day (one day too late), or misleading pictures showing police checks at polling stations in the US, potentially deterring vulnerable voter groups who fear the police.
A wide notion of democratic discourse, which includes anything from the shaping of worldviews to the influencing of specific decisions, reflects the importance that discourse has in democracies. This is not a novel idea. Almost all democracies invest significantly in public broadcasting, because they consider impartial information to be more than a commercial good and believe that citizens need to engage and be engaged in the public sphere.6
4. Disaggregating Digital Phenomena: Message, Messenger, and Messaging
The discussion on threats to discourse on social media focuses on many different phenomena which tend to be discussed all at once. The Council of Europe’s report on Information Disorder provided important guidance for this debate, but it had a strong focus on “the message”, i.e. the content that is spread online.
Symptoms of a strong focus on message are:
- The popularity of the ‘fake news’ label
- The focus of many discussions that seek remedies of fact-checking
- The centrality of freedom of speech in the debate
Thus, for example, the European Commission established an Expert Group on “Fake News and Online Disinformation”, which defined disinformation as “all forms of false, inaccurate, or misleading information” – in other words, a message problem. Consequently, its strategy puts fact-checking at the center of its response.
The focus on message is too narrow. Content may be unproblematic while the way it is spread is problematic. For example, the American ‘Black Lives Matter’ movement is a legitimate pressure group. When Russian agents bought ads to support it, there was no particular problem with their messages. The problem was the messenger: a foreign country secretly amplified the voice of a domestic pressure group to exacerbate tensions. When political parties resort to building elaborate bot networks to amplify their messages, the problem is often not the message (it may be unproblematic), but the manipulation of the perception of popularity. The messages become visible and show up as ‘trending’, suggesting that an issue has much popular support. To use a comparison from the offline world: we may not be against a street demonstration, we may even join it, but we would be disconcerted if we discovered that most demonstrators were robots pretending to be humans.
It is noteworthy in this context that Facebook does not consider messaging/distribution to be the main problem (though it has changed policies in this area, too). For example, the company believes that it largely controls social bots (if they are not hybrids of human and automated action) by deleting such accounts. Its public reports now often focus on the take-down of inauthentic, orchestrated accounts. But Facebook says little about its own ranking decisions and their possible effects on displaying content and thereby shaping public opinion.
To distinguish these levels more clearly, we propose to break down the discussion of threats into three components with the third one differing from the Information Disorder report.7
Message/content: The message is the content provided. It may be text, but it can also be a picture, a meme, a film or a recorded message. False messages are part of disinformation, and their review and possible debunking is the realm of fact-checkers. Hate speech, intimidation and incitement to violence are problems that also have to do with the message. Policies of online companies have a lot to do with content, for example Facebook’s prohibition and take-down of terrorist content and nudity.
Messenger: The person, group or organisation that created and published or posted a message. This may include several players, for example when one person creates a message, but another person publishes it. Here it is important to look at phenomena such as authenticity of messengers, their identity/anonymity, their location and their motivations.
Messaging/distribution: How is a message distributed? Here one would look at issues like the artificial boosting of content by gaming algorithms (bot networks, tweaking of Google search results), the way algorithms rank content (Facebook, Twitter, YouTube), recommend content (YouTube) or display it (Google), as well as the boosting of content for money (targeted ads).
The third component (messaging/distribution) is useful for discussing phenomena like algorithms that decide the ranking of posts, their manipulation (e.g. through social bots) and boosted content (targeted ads). There may be problems of distribution even if the message is not disinformation and the messenger is not problematic. Problems include the infamous filter bubbles, the promotion of sensationalist content (even if it is not disinformation) or the trade in data to target people (even if the messages and messengers are as such not problematic).
The table below shows in more detail how the various phenomena relate to these specific levels.
The breakdown into the three Ms – message, messenger, messaging – shows that some problems of message can only be addressed by a focus on messenger and messaging. For example, it is not forbidden to lie, either online or offline. Nobody should be prohibited from claiming that the Earth is flat or that the Pope endorsed Donald Trump. However, if algorithms favour such attention-grabbing false messages so that they are shown to many people, the problem can only be addressed at the level of messaging/distribution.
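A small, hypothetical simulation can make the messaging-level problem concrete. Assume, purely for illustration, that a platform marks a topic as ‘trending’ once its share of recent posts crosses a threshold (real trending algorithms are unpublished; the numbers and the rule below are invented). A modest bot network can then cross that threshold without changing a single message:

```python
def share_of_topic(organic_posts: int, bot_posts: int, window_posts: int) -> float:
    """Fraction of recent posts about one topic, counting bot posts too."""
    return (organic_posts + bot_posts) / (window_posts + bot_posts)

# Hypothetical numbers: 200 genuine posts in a window of 20,000 posts is 1%,
# far below an assumed 3% 'trending' threshold.
ORGANIC, WINDOW, THRESHOLD = 200, 20_000, 0.03

for bots in (0, 100, 300, 500):
    share = share_of_topic(ORGANIC, bots, WINDOW)
    status = "TRENDING" if share >= THRESHOLD else ""
    print(f"{bots:>3} bot posts -> {share:.1%} of the window {status}")
```

In this toy model the message and the organic messengers are untouched; only the distribution statistics are manipulated, which is exactly why remedies aimed at content alone miss this layer.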
5. The Neglected Human Right to Political Participation
Using the framework of the ‘3 Ms’ also exposes blind spots in the legal debate. We support putting human rights at the center of the debate, as argued by many others. As mentioned above, online discourse and its manipulation are human-made; the law provides a framework to discuss their effects and ways to shape them. Laws are human-made too: they are debated and consulted on, and they can change over time.
As digital content and social media are mostly global in reach, international human rights law provides an obvious starting point.8 But the international law debate focuses mostly on the freedom of expression9 and, to a lesser degree, on the right to privacy. Neither of these two rights provides much guidance on many questions of messaging and distribution, in particular on algorithmic preferences for certain content over other content.
The unexplored aspect is the right to political participation – to vote and to stand as a candidate in elections – enshrined in Article 25 of the International Covenant on Civil and Political Rights (ICCPR). Looking at the context in which people participate in politics, Article 25 also concerns the forming of opinions and not only their expression.
The UN’s Human Rights Committee, the monitoring body of the International Covenant on Civil and Political Rights, noted in its General Comment on Article 25:
“Persons entitled to vote must be free to vote for any candidate for election and for or against any proposal submitted to referendum or plebiscite, and free to support or to oppose government, without
undue influence or coercion of any kind which may distort or inhibit the free expression of the elector's will. Voters should be able to form opinions independently, free of violence or threat of violence,
compulsion, inducement or manipulative interference of any kind.”10
The mention of undue influence, distortion, inhibition and manipulative interference points to the relevance of Article 25 for the quality of public discourse. Indeed, election observation missions have found elections to be problematic not because of technical flaws or fraud in voting, but simply because the opposition did not get any (or only negative) coverage in the media.
Given that one of the major concerns about online campaigns is manipulation, such as inauthentic behaviour, Article 25 is an important point of reference. Reducing online manipulation is not a restriction of rights; it is a protective measure to secure political participation.
Importantly, the non-manipulation language should not be read as meaning that Article 25 would justify any kind of deletion of content or prohibition. However, it provides a basis for discussing whether social media companies’ (algorithmic) decisions, for example on ranking posts or on registering users, enable manipulation or make it more difficult. Yet so far it has not entered legal debates, which have focused more on the nexus of message and freedom of expression.11
A balanced approach would therefore need to take into account freedom of expression, the right to privacy and the right to political participation across the three levels of international law, national legislation and the self-regulation of companies (or ‘co-regulation’ where states are involved in defining codes of conduct and similar commitments).12
6. Conclusion
The transformation of the public sphere by the digital space in general and social media in particular raises major questions in conceptualising the problem for democracy, the phenomena that need to be addressed and the regulatory framework for responding to these.
In many instances the problem is described too narrowly (electoral interference through false content), when a full debate needs to look at all levels of democratic discourse, all of the time, and not only during elections. It needs to take into account the different challenges that arise at the levels of message, messenger and messaging, and to look at these through the lens of multiple human rights provisions. There are not many easy and obvious answers on what should be done to make online discourse more compatible with democracy, but a clear framework for discussion should help us get there.
References:
- https://newsroom.fb.com/news/2018/01/hard-questions-democracy/
- A strictly chronological display of a feed could be considered natural, but social media and networks do not work that way. Messages are displayed chronologically on WhatsApp, hence there is no debate on algorithmic sorting in that case.
- See the report by Claire Wardle and Hossein Derakhshan, which brought more clarity into the debate. This paper builds on the report while proposing a different emphasis on some issues. Claire Wardle, Hossein Derakhshan, ‘Information Disorder – Toward an interdisciplinary framework for research and policy-making’, Council of Europe 2017. It can be downloaded here: https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c.
- Joseph Schumpeter’s ‘competitive struggle for votes’ is considered the narrowest definition, but even that is about much more than just voting. Many elections do not even qualify for this minimum definition due to the absence of real competition.
- For a detailed overview, see ‘Strengthening International Law to support democratic governance and genuine elections’, April 2012, Democracy Reporting International and The Carter Center. It can be downloaded here: https://democracy-reporting.org/dri_publications/report-strengthening-international-law-to-support-democratic-governance-and-genuine-elections/.
- For example, the BBC’s Charter states: “The Mission of the BBC is to act in the public interest, serving all audiences through the provision of impartial, high-quality and distinctive output and services which inform, educate and entertain.” One of its purposes is “to provide impartial news and information to help people understand and engage with the world around them (…)”.
- Page 7 of the report.
- Mark Zuckerberg asked for new global regulation – that is not likely to happen anytime soon. Major powers like the US, the EU, China or India do not see eye to eye on fundamental questions. Existing international law is a global framework that can guide the discussion on regulation by states as well as attempts of self-regulation by the companies.
- Countless policy documents on freedom of expression and the internet have been adopted at the international level in the last years. For more on this, see New Frontiers.
- UN Human Rights Committee, General Comment 25, 1996, point 19.
- Even new draft guidelines on public participation by the Office of the United Nations High Commissioner for Human Rights merely note: “Information and Communication Technologies (ICTs) could negatively affect participation, for example when disinformation and propaganda are spread through ICTs to mislead a population or to interfere with the right to seek and receive, and to impart, information and ideas of all kinds, regardless of frontiers.” (point 10). They do not make a link to opinion formation, unintentional manipulation and the normative guidance that may emanate from Article 25.
- There are important cross-cutting rights issues which affect all three rights mentioned above: non-discrimination, the right to an effective remedy and ‘business and human rights’ obligations. We will explore legal issues in more detail in another paper.
| https://democracy-reporting.org/en/office/global/publications/bp100-online-threats-to-democratic-debate-1 |
The Institute for Politics and Society, or IPS, analyses important economic, political, and social areas that affect today’s society. The Institute was created in October 2015; its external activity was established in February of 2015. With the inception of this activity, the very first conference, “Acute Problems of Europe”, was held, attended by Guy Verhofstadt, Andrej Babis and Pavel Telicka.
The Institute hosts various debates, round table discussions, business breakfast events, and regularly issues policy papers on a wide range of issues.
Notably, in October of 2014 IPS became a member of the ELF (European Liberal Forum). The ELF consists of 46 think-tanks throughout Europe.
Our partners include the Friedrich Naumann Foundation, Ceska sporitelna, Czech Industry Magazine, and many others.
- What is your goal?
One of our main goals is the cultivation of the Czech political and public space through relevant open discussion, providing a living platform that focuses on a variety of issues and solutions. This platform consists of international conferences, workshops, public debates, and political and social analyses. We aim to make all of this available to Czech society. Our belief is that through open discussion we can provide the necessary conditions for successful political and social solutions.
- How many people work in the Institute?
We currently employ a core team of up to five people. In addition, we draw on a growing base of independent consultants. These external experts work across the wide range of fields that define our mission statement.
- Do you work like a real “think tank”? How?
A think tank is defined as an institute or group organized to study and provide information, education, ideas, and advice on particular areas of concern. The Institute for Politics and Society does just this. Through public debates, we educate our society on the political, economic, and social issues it faces today. The Institute hosts a number of business breakfast events with a variety of top political representatives of the Czech Republic and some of the most significant representatives of domestic Czech companies. The Institute also regularly provides policy papers to its supporters on selected themes, typically focusing on domestic and international issues. We do so by outsourcing a substantial part of our analyses to our external experts. Additionally, we regularly cooperate with other Czech and international think tanks.
- Which are the main themes of IPS?
We primarily focus on international and security politics, defense, European issues, education, digitization, economics, energy, urbanism, issues related to the values within politics, and human rights both locally and globally.
- How are you connected to the ANO movement?
We were established as a think tank that is close to the ANO movement. However, we are an independent organization, and our ties with ANO are open and unbiased. We involve experts, independent consultants, and politicians from across the political spectrum. Our events are open to all interested persons regardless of political affiliation.
- Who funds you? Are you financially independent of the ANO movement?
We are funded by sponsorship from various companies. Our funding is transparent and can be seen on our website, www.politikaspolecnost.cz, under the section “Partners”.
- What sets the Institute apart from other organizations?
We are currently the only think tank within the Czech Republic that is a member of the ELF (European Liberal Forum), where we take part in a range of activities including debates, round-table discussions, and educational events. These activities particularly reflect liberal and European themes. In addition, the Institute covers a variety of issues that are not addressed by the ELF, as discussed above.
In contrast, many other think tanks and similar organizations within the Czech Republic focus predominantly on international issues or the European Union. Their events become highly repetitive in both the topics discussed and the solutions offered. The main purpose of IPS is to educate and to influence affairs within society, as well as both domestic and international politics. All of our events and policy papers are publicly accessible.
- How many members of Parliament attend your events?
The answer depends on the type of activity. For instance, MPs are particularly interested in the internal events of the Institute – round-table debates and business breakfast events.
- Do the MPs contact you in order to take part in your Institute’s activities?
They do, and we are open to such requests. These may consist of writing policy papers on specific topics they are interested in, arranging a debate where they may appear as a speaker, or engaging in a working group in which they can meet other experts and discuss the issues at hand.
- Do you believe you have an effect on politicians?
We do, especially regarding the activities which are initiated by politicians’ own proposals (see the previous question).
- Is there a possibility to get involved in the Institute’s activity?
There are several ways, such as becoming a sponsor, an intern, or an independent consultant. For more information, please send an email to [email protected], preferably with a specific suggestion for your proposed role within the Institute for Politics and Society. | https://www.politikaspolecnost.cz/en/faq-2/ |
I still have distinct memories burned into my mind: memories of late-night screaming matches with my father over whether billionaires should exist, and dinner table discussions on whether India is right in occupying Kashmir, which quickly turned ugly and personal.
There are times when I don’t even try to argue with my parents. On the contrary, I find myself nipping any burgeoning quarrel in the bud.
These memories sit alongside those of my classmates complaining about me being a ‘smart Alec’, or my teachers asking ‘why don’t you give the others a chance [to speak]?’ It has also become all too common to hear admonishments like ‘Why are you so headstrong?’ or ‘Take a chill pill.’
Granted, people are becoming more accepting of those who speak their mind openly and loudly, but why are they still so hostile to those who are labelled ‘argumentative’, or to the thought of arguing, at all?
On being and becoming argumentative
I don’t know how I became this argumentative. It wasn’t a value that was instilled in me by my parents. In fact, they are probably the ones most annoyed by my need to express my opinion at all times.
Maybe I developed it because I wanted to rebel against my parents as an angsty preteen. Maybe I felt that I could become a better critical thinker by arguing all the time. I don’t even think that ‘argumentative’ is the accurate descriptor for me (I try not to be confrontational most of the time). But it has now become an integral and essential part of my everyday life.
The word ‘argue’ usually conjures the image of an explosive fight between two people, or a friendly debate that has gone awry. But that is not the only definition of what it means to argue, or to be argumentative.
According to the Oxford Dictionary, ‘argue’ as a verb can also mean “to give reasons why you think that something is right/wrong, true/not true, etc., especially to persuade people that you are right” – in other words, to engage in evidence-based reasoning to support or criticise an idea or a theory. This evidence can include both personal anecdotes and empirical statistics. Typically, when I argue, I use evidence-based reasoning. But any argument can turn hostile and cruel, depending on how each party responds to the other.
Being argumentative ensures that my perspective is heard in any discussion, especially in discussions where I feel personally invested. It compels others to consider the opposing perspective. Asserting myself need not mean that I am being confrontational or quarrelsome; it just means that I value my opinion and can contribute meaningfully by speaking up. Moreover, during debates (which are usually civil), I stand to discover opposing or differing opinions, which better informs my worldview. In some cases, this has helped me to strengthen my own arguments and beliefs.
Learning about others’ perspectives has also helped me become more empathetic. If it weren’t for such civil debates, I might still be going around shouting edgy, simplistic slogans like ‘God is dead!’ (yes, I did that as a teenager), hurting the sensibilities of other people.
Today, even though I might be considered liberal in my views, I still put in the time to understand conservative viewpoints, so that I do not disrespect anyone’s views or feelings (though I’d be the first to admit that I need to do a lot more work in becoming empathetic). This has also enabled me to argue in a more nuanced and thoughtful way.
The case for arguing
It is easy to see why many people shun the concept of argument, and some even choose not to speak up. The omnipresent ‘cancel culture’ looms over us. People think that there is a ‘correct’ opinion, and if their opinion does not fit what is deemed ‘morally right’, they fear being ostracised.
We see this in intellectuals and other public figures writing open letters about the “threat facing liberalism and free speech”, that we are becoming less tolerant of different viewpoints.
Harper’s Magazine published one of the most notable open letters last year, at the height of the Black Lives Matter movement. This letter, which had many notable signatories such as Margaret Atwood, stated that: “We are already paying the price in greater risk aversion among writers, artists and journalists who fear for their livelihoods if they depart from the consensus.” Going against the grain is sometimes seen as immoral or unfashionable, preventing the free exchange of ideas from taking place.
We see this in how people are losing their jobs over Tweets made many years ago. Most recently, Alexi McCammond, who was hired as Editor-in-Chief at Teen Vogue, eventually left her job due to public pressure after her insensitive tweets about Asians from nearly a decade ago resurfaced at the height of the #StopAsianHate movement, earlier this year.
On the other side of the (political) spectrum, journalist Emily Wilder was let go from Associated Press in May, after her tweets from a few years ago supporting Palestinian liberation were brought to light by the Stanford Federalist Society, a conservative student organisation at her alma mater, Stanford University.
These incidents sound alarm bells for many people over what they say out loud and who they engage in debate with, because many are unsure and fearful of just how far-reaching the consequences for speaking one’s mind can be. What this does is to make people think twice about engaging in arguments.
Classmates and friends have told me that they are afraid to say what they really think, because they do not want to be publicly shamed. There is a lot of pressure to have the ‘correct’ or ‘acceptable’ opinion, or to justify your beliefs, which runs contrary to the spirit of discourse.
Discourse is about exchanging ideas freely and in a respectful atmosphere. It must allow for people to challenge not just the views of others, but their own views, as well. In the process, when people are exposed to differing opinions, they might develop a sensitivity towards other views. What discourse is most certainly not, is existing in an echo chamber where everyone agrees on an already-formed consensus, or fears being slandered or ostracised for expressing a contrarian view.
In fact, many even wonder how a correct opinion is determined, or who has the moral authority to determine and decide what the ‘correct’ opinion is. Rather than encourage discourse, it can instead stifle healthy discussions between people, allowing toxicity to fester. When we are unable to engage in civil speech, we end up burning, not building bridges.
Besides, debating is said to have benefits. It makes us better critical thinkers, honing our skills in evidence-based reasoning. It also opens us up to constructive criticism, enabling us to be more humble and empathetic to other views. The development of empathy also helps us to forge more amicable bonds on a personal or professional level.
In the BBC Radio Four series How to Disagree: A Beginner’s Guide to Having Better Arguments, leading science writer Timandra Harkness attempts to persuade listeners that disagreement is “worth the pain.” For example, she argues that it “tests your ideas against competing ideas,” and goes so far as to recommend that people “get into a good argument at least once a day.”
No matter the benefits of arguing, many will continue to choose not to speak up for fear of turning a conversation into a vicious argument, not because they feel that they might get ‘cancelled’, but rather to preserve their mental health.
Many people, especially queer people and people from other marginalised communities, lament on social media platforms, such as Instagram and Tumblr, about how tiring and taxing it is to debate issues that are pertinent to them, such as racism and homophobia. For them, speaking up means to inevitably draw the attention of those who seek to play devil’s advocate. A popular post on Tumblr from user ‘supernatasha’, states that, “I no longer want to debate about whether or not I should have basic human rights.” Hence, many in marginalised communities choose not to engage in any debate, but elect to retreat into safe spaces, both online and physical.
Already we have seen many on the conservative side of the political spectrum accusing marginalised communities of “making everything about race / gender / sexuality / fill-in-the-blank”, and further dividing people along these deeply politicised and contentious lines. Many discussions, especially about contentious topics like race, disrupt existing narratives and the status quo, which can be jarring for people who are not affected by the same issues that marginalised communities are. This gap in understanding leads to the question: “Why are you making everything about race/gender/sexuality/fill-in-the-blank?”
For members of marginalised communities, such questions can seem like a direct affront to their experiences. For non-marginalised people, they might raise that question to defend themselves because they feel that they are being attacked in an argument.
Earlier this year, in response to a question about the transphobia controversy, Lawrence Wong, then Singapore’s Education Minister, said in Parliament that issues of gender identity have become “bitterly contested sources of division” in some Western countries, and that Singapore should not “import these culture wars”. The issue had come up after a trans student accused her school and the Ministry of Education of interfering in her hormonal treatments and refusing to support her.
There is still a lot of hesitation to engage in meaningful debate about touchy topics. For those who are affected by these ‘touchy topics’, it feels dehumanising to them to debate their lived experiences with someone who does not share the same lived experiences, as though their lived experiences are up for question. For those who do not share the same experiences, they feel like they are being antagonised for no reason at all, and often get defensive. These days, it seems nearly impossible to reconcile both sides, contributing to further polarisation.
Healing the divisions
So how then do we continue to engage in healthy debates, if people are becoming increasingly polarised? Is it even possible to be civil with one another, especially on social media?
At the risk of being called ‘argumentative’ or ‘disputatious’ (again), I would argue that the question of whether civility is achievable is beside the point: civility is the antidote to poisonous arguments and must be at the heart of every conversation.
We have to achieve civility in order to have meaningful debates that encourage all of us to think beyond our bubbles. We have to respect the fact that people might have something to say, and possibly something meaningful to contribute. We must keep an open mind and listen to what they have to say. We must also set rules for ourselves and others, such as avoiding insults and crude language, not talking over others, and setting boundaries that must be respected and not crossed. If there are certain topics that you feel are not up for debate, because you are personally affected by them or might be negatively triggered by them, explain how they matter to you personally. Similarly, we should ask others in the conversation if they are comfortable talking about the issue at hand, so that we do not end up debating, and as a result delegitimising, their lived experiences.
Give a chance to those who disagree with you to explain their views. Preventing the formation of echo chambers is important, because we can only expand our worldview and understand other perspectives better when we engage in discussions with people who are opposed to us, or simply share opposing views to us.
Nevertheless, there is value in having safe spaces, where boundaries that have been agreed in advance are not crossed, hence protecting those engaged in the discussion from being triggered.
Civility is hard work. It requires all of us to be conscientious in observing and respecting all the rules and restrictions. It also demands of us to be patient with others and to listen with an open mind and heart, instead of jumping in defence from the get-go. But it must be achieved, so that we can remedy and heal the widening polarisation in our society.
Shreya Lakshminarayanan is a Politics, Law and Economics student at Singapore Management University, and currently interning with TheHomeGround Asia. She is keen on arguing about politics and social issues.
| https://thehomeground.asia/destinations/singapore/opinion-arguing-for-the-benefits-of-being-argumentative/ |
The UMass Amherst Poll provides opportunities for undergraduate students to work closely with faculty, become knowledgeable about central debates in the fields of political behavior and public opinion, and learn some of the most valuable and on-demand skills for employment including data collection and analysis. UMass Poll offers a new model for undergraduate education by making hands-on research for students a vital component of education.
Political Polling and Survey Research:
This undergraduate course leads students through the development, implementation, and analysis of exit polls during election seasons. The course teaches students how to constructively critique existing research and requires them to develop their research questions into a final paper. In 2010, 2012, and 2014, students in the course went to polling places in and around Massachusetts to conduct statewide exit polls.
Public Opinion:
In this course undergraduates learn how to conceptualize and measure public opinion, how to link it to the characteristics of citizens, and why it matters. What is public opinion? How do we measure it? Where does it come from? Does it – and should it – matter for policies and political outcomes? The course broadly addresses fundamental questions about the sources of public opinion and how these opinions shape American democracy. Students learn about how members of the public think about issues, and why they think the way they do. It examines whether political leaders follow "the will of the public" or manipulate public opinion to achieve their own aims.
Political Psychology:
This undergraduate course provides an introduction to the field of political psychology. It focuses primary attention on psychological explanations of individual political attitudes and actions, among both elites as well as the masses. Students examine the sources of public opinion, individual attitudes, and political behavior through the application of psychological theories and concepts.
Media in American Politics:
This course examines the changing role of media in American politics. Key issues include how media shapes citizens’ thinking about politics, how politicians and citizen activists try to advance their goals through media, and how media outlets themselves shape what is considered news. It also considers the rise of new media forms from 24-7 cable news to blogs and social media to new forms of entertainment media, and whether these new forms of communication can enhance democratic governance or simply accelerate the fragmentation of media and polarization of the American public.
Advanced Survey Data Analysis:
This course focuses on advanced topics in survey design and analysis. Topics covered include different approaches to sampling, how to construct and use survey weights, and tools for analyzing and enriching survey data, including approaches to conducting matching as well as the construction and analysis of panel data. The course will also focus on designing and analyzing survey experiments. | https://polsci.umass.edu/research/umass-poll/teaching |
New Reuters Institute report:
Terms like echo chambers, filter bubbles, and polarisation are widely used in public and political debate but not in ways that are always aligned with, or based on, scientific work. And even among academic researchers, there is not always a clear consensus on exact definitions of these concepts.
In this literature review we examine, specifically, social science work presenting evidence concerning the existence, causes, and effects of online echo chambers, and consider what related research can tell us about scientific discussions online and how they might shape public understanding of science and the role of science in society.
Echo chambers, filter bubbles, and the relationship between news and media use and various forms of polarisation have to be understood in the context of increasingly digital, mobile, and platform-dominated media environments, where most people spend a limited amount of time with news and many internet users do not regularly seek out online news, leading to significant inequalities in news use.
When defined as a bounded, enclosed media space that has the potential to both magnify the messages delivered within it and insulate them from rebuttal, studies in the UK estimate that between six and eight percent of the public inhabit politically partisan online news echo chambers.
More generally, studies both in the UK and several other countries, including the highly polarised US, have found that most people have relatively diverse media diets, that those who rely on only one source typically converge on widely used sources with politically diverse audiences (such as commercial or public service broadcasters) and that only small minorities, often only a few percent, exclusively get news from partisan sources.
Studies in the UK and several other countries show that the forms of algorithmic selection offered by search engines, social media, and other digital platforms generally lead to slightly more diverse news use – the opposite of what the “filter bubble” hypothesis posits – but that self-selection, primarily among a small minority of highly partisan individuals, can lead people to opt in to echo chambers, even as the vast majority do not.
Research on polarisation offers a complex picture, both in terms of overall developments and the main drivers, and there is in many cases limited empirical work done outside the United States. Overall, ideological polarisation has, in the long run, declined in many countries, but affective polarisation has in some, but not all, cases increased. News audience polarisation is much lower in most European countries, including the United Kingdom. Much depends on the specifics of individual countries and on the point in time from which one measures change; there are no universal patterns.
There is limited research outside the United States systematically examining the possible role of news and media use in contributing to various kinds of polarisation, and the work done does not always find the same patterns as those identified in the US. In the specific context of the United States, where there is more research, it seems that exposure to like-minded political content can potentially polarise people or strengthen the attitudes of people with existing partisan attitudes, and that cross-cutting exposure can potentially do the same for political partisans.
Public discussions around science online may exhibit some of the same dynamics as those observed around politics and in news and media use broadly, but fundamentally there is at this stage limited empirical research on the possible existence, size, and drivers of echo chambers in public discussions around science. More broadly, existing research on science communication, mainly from the United States, documents the important role of self-selection, elite cues, and small, highly active communities with strong views in shaping these debates and highlights the role especially political elites play in shaping both news coverage and public opinion on these issues.
In summary, the work reviewed here suggests echo chambers are much less widespread than is commonly assumed, finds no support for the filter bubble hypothesis and offers a very mixed picture on polarisation and the role of news and media use in contributing to polarisation. | https://electionlawblog.org/?cat=58 |
Nature Magazine journalist Rex Dalton interviewed SMU archaeologist David J. Meltzer as an expert source to weigh in on the claim by University of Oregon archaeologists who say they've found the oldest known artifact in the Americas.
Dalton's Nov. 5 article, "Oldest American Artifact Unearthed," quotes a number of expert sources on the discovery of a scraper-like tool in an Oregon cave. The discovery team dates the tool to 14,230 years ago.
Forbes, in its Oct. 26 online news, covered the geothermal energy research of SMU Hamilton Professor of Geophysics David Blackwell, Maria Richards and the SMU Geothermal Laboratory. Blackwell and Richards, the Geothermal Lab coordinator, released a new map earlier this week that documents significant geothermal resources across the United States capable of producing more than three million megawatts of green power – 10 times the installed capacity of coal power plants today.
New research from SMU’s Geothermal Laboratory, funded by a grant from Google.org, documents significant geothermal resources across the United States capable of producing more than three million megawatts of green power – 10 times the installed capacity of coal power plants today.
D Magazine journalist Dawn McMullan reported on the accomplishments of SMU archaeologist David J. Meltzer in the monthly magazine's "Dallas' Big Thinkers" article, which published Sept. 21.
A member of the National Academy of Sciences, Meltzer researches the origins, antiquity, and adaptations of the first Americans who colonized the North American continent at the end of the Ice Age. He focuses on how these hunter-gatherers met the challenges of moving across and adapting to the vast, ecologically diverse landscape of Late Glacial North America during a time of significant climate change.
D Magazine journalist Dawn McMullan reported on the accomplishments of SMU paleobotanist Bonnie F. Jacobs in the monthly magazine's "Dallas' Big Thinkers" article, which published Sept. 21.
Jacobs, one of a handful of the world's experts on the fossil plants of ancient Africa, is part of a team of paleontologists hunting plant and animal fossils in Ethiopia's prolific Mush Valley, as well as elsewhere in Africa. Jacobs is an associate professor in SMU's Roy M. Huffington Department of Earth Sciences. | https://blog.smu.edu/research/tag/huffington-department-of-earth-sciences/ |
Researchers say stone tools and broken mastodon bones unearthed in California show humans might have reached the Americas about 130,000 years ago, ten times earlier than previously thought.
In what may be one of the most significant discoveries ever in archaeology in the Americas, researchers on Wednesday said stone tools and broken mastodon bones unearthed in California show humans had reached the Americas about 130,000 years ago, far earlier than previously known.
The researchers called five rudimentary tools – hammerstones and anvils discovered in San Diego County alongside fossil bones from the prehistoric elephant relative – relatively compelling, though circumstantial, evidence for the presence of either our species or an extinct cousin like Neanderthals.
San Diego Natural History Museum palaeontologist Tom Deméré said until now the oldest widely accepted date for human presence in the New World was 14,000 to 15,000 years ago, making the San Diego site nearly 10 times older.
The finding would radically rewrite the understanding of when humans reached the New World, though some scientists not involved in the study voiced scepticism.
"If the date of 130,000 years old is genuine, then this is one of the biggest discoveries in American archaeology," said University of Southampton palaeolithic archaeologist John McNabb, who was not involved in the research and called himself "still a little sceptical."
No human skeletal remains were found. But the stone tools' wear and impact marks and the way in which mastodon limb bones and molars were broken, apparently in a deliberate manner shortly after the animal's death, convinced the researchers humans were responsible. They performed experiments using comparable tools on elephant bones and produced similar fracture patterns.
"People were here breaking up the limb bones of this mastodon, removing some of the big, thick pieces of mastodon limb bones, probably to make tools out of, and they may have also been extracting some of the marrow for food," said archaeologist Steven Holen of the Center for American Paleolithic Research in South Dakota.
US Geological Survey geologist James Paces used state-of-the-art dating methods to determine the mastodon bones, tooth enamel and tusks were 131,000 years old, plus or minus about 9,000 years.
Some sceptics offered alternative explanations for the material, excavated beginning in 1992 at a freeway construction site, suggesting the bones may have been broken recently by heavy construction equipment rather than by ancient humans.
Researchers defend findings
The researchers defended their conclusions, published in the journal Nature. "It's hard to argue with the clear and remarkable evidence that we can see in all of this material," said archaeologist Richard Fullagar of Australia's University of Wollongong, calling the conclusions "truly incontrovertible."
Our species, Homo sapiens, first appeared in Africa about 200,000 years ago and later spread worldwide. Timing of the New World arrival has been contentious. Genetic data suggests it was roughly 23,000 years ago, though archaeological evidence is lacking.
The researchers said the humans at the site could have been Homo sapiens or an extinct species such as Neanderthals, already known to have lived in Siberia, or Denisovans, known from only scant remains.
Holen said humans may have walked from Siberia to Alaska on a now-gone Bering Sea land bridge or perhaps traveled by boat along the Asian coast, then over to Alaska and down North America's western coastline to California.
"It's a huge deal if it's true," McNabb said.
But McNabb wondered whether there was anything in the chemistry of the soil or ground water that might have affected the way the date of the material was calculated, and whether anything else could have produced the impact and damage patterns on the material other than humans. | https://www.trtworld.com/americas/xxx-344171 |
Archaeologists have discovered an ancient Egyptian shipwreck which proves the Greek historian Herodotus was correct in the observations he made about Egyptian vessels nearly 25 centuries ago. The shipwreck, discovered in the Nile River near the ancient, and now sunken,...
2000 Year Old Fetus Found Inside Egyptian Mummy
A 2000-year-old fetus was discovered in the belly of an Egyptian mummy by Polish researchers recently, the first time in history that such a find has ever been recorded. The Warsaw Mummy Project, headed by bio-archeologist Marzena Ożarek-Szilke from the...
Tomb with Greek Mummy Unearthed in Aswan, Egypt
A Greco-Roman-era tomb with a Greek mummy was unearthed recently in Aswan, Egypt, archaeologists announced this week. And in an extremely unusual discovery, the archaeologists found a copper plaque with the man's name -- Nikostratos -- near his body. The...
The Greeks of Cairo: The Fascinating Bond Between Greece and Egypt
The small, but vibrant, community of Greeks, with roots deep in history, remains in the Egyptian capital of Cairo to this day.
Historic St. Catherine’s Monastery on Mount Sinai
Saint Catherine's Monastery at the foot of Mount Sinai in the town of Saint Catherine, Egypt is one of the most important Christian monasteries in the world
The Greek Pioneers Who Dug the Suez Canal
Greece has been connected to the Suez Canal since it was first envisioned as a pie-in-the-sky project in the mid-nineteenth century.
Egypt Unveils “Avenue of the Sphinxes” with Spectacular Display
Egypt unveiled on Thursday the renovated "Avenue of the Sphinxes" with a spectacular display aimed at highlighting the country’s archaeological treasures. The avenue at the ancient city of Thebes, now renamed Luxor, is nearly two miles long and about 250...
Undersea Eastern Port of Alexandria Reveals 2,000-Year-Old Secrets
Franck Goddio, the marine archaeologist who discovered the underwater city of eastern Alexandria and Heracleion, Egypt, will hold a presentation in December highlighting many of the discoveries he has made in the last 25 years. The talk, which will be...
Egyptian Body Shows Mummification 1,000 Years Older than Thought
New evidence shows that the mummy of an Egyptian nobleman is 1,000 years older than originally thought, proving that the science behind mummification is much older than previously believed. The advanced embalming processes used in the preservation of the body... | https://greekreporter.com/tag/egypt/page/2/ |
Remtravel's Excavation [35, 85] is named after the famous archaeologist, Prospector Remtravel. At the digsite, he has discovered a most unusual fossil of mystic properties, but the absent-minded prospector has forgotten where he put it. The digsite has also been invaded by Gravelflint troggs and huge golems which have been unearthed in the dig, and entering the excavation without the proper training can be very hazardous.
In Cataclysm
This section concerns content exclusive to Cataclysm.
Remtravel's Excavation is flooded, though it still remains as a quest area.
| https://wowwiki-archive.fandom.com/wiki/Remtravel's_Excavation
Take a look at the issues that will change the way we live our lives in the future. Hannah Fry delves into the data we have today to provide an evidence-based vision of tomorrow. With the help of science experts Hannah tries to discover whether we could ever live forever or if there will ever be a cure for cancer. She finds out how research into the human brain may one day help with mental health, and if it is possible to ever ditch fossil fuels. Hannah and her guests also discover the future of transport - and when, if ever, we really will see flying cars. She discovers whether a robot will take your job or if, as some believe, we will all one day actually become cyborgs. The programme predicts what the weather will be like and discovers if we are on the verge of another mass extinction. Hannah's tenth prediction is something she - and Horizon - are confident will definitely happen, and that is to expect the unexpected!
A Plastic Ocean
2016 Nature
The film begins when journalist Craig Leeson, searching for the elusive blue whale, discovers plastic waste in what should be pristine ocean. In this adventure documentary, Craig teams up with free diver Tanya Streeter and an international team of scientists and researchers, and they travel to twenty locations around the world over the next four years to explore the fragile state of our oceans, uncover alarming truths about plastic pollution, and reveal working solutions that can be put into immediate effect.
Africa the Greatest Show on Earth
2013 Nature
Sir David Attenborough takes a breath-taking journey through the vast and diverse continent of Africa as it's never been seen before. From the richness of the Cape of Good Hope to blizzards in the high Atlas Mountains, from the brooding jungles of the Congo to the steaming swamps and misty savannahs, Africa explores the whole continent. An astonishing array of previously unknown places are revealed along with bizarre new creatures and extraordinary behaviours. Using the latest in filming technology including remote HD cameras, BBC One takes an animal's eye view of the action. The journey begins in the Kalahari, Africa's ancient southwest corner, where two extraordinary deserts sit side by side and even the most familiar of its creatures have developed ingenious survival techniques. Black rhinos reveal a lighter side to their character as they gather around a secret waterhole. Springbok celebrate the arrival of rains with a display of 'pronking'. Bull desert giraffes endure ferocious battles for territory in a dry river bed.
Series: Africa with David Attenborough
Aftermath Population Zero
2008 Nature
Imagine if one minute from now, every single person on Earth disappeared. All of us. Human history just stopped. What would happen to the world without us? Aftermath: Population Zero features what scientists and others speculate the earth, animal life, and plant life might be like if humanity no longer existed, as well as the effect that humanity's disappearance would have on the artefacts of civilization. This documentary is inspired by Alan Weisman's The World Without Us.
An Inconvenient Sequel Truth to Power
2017 Culture
A decade after An Inconvenient Truth brought climate change into the heart of popular culture comes the riveting and rousing follow-up that shows just how close we are to a real energy revolution. Vice President Al Gore continues his tireless fight, traveling around the world training an army of climate champions and influencing international climate policy. Cameras follow him behind the scenes-in moments private and public, funny and poignant-as he pursues the empowering notion that while the stakes have never been higher, the perils of climate change can be overcome with human ingenuity and passion.
Renowned filmmakers Bonni Cohen and Jon Shenk have taken the baton from 2006 Academy Award-winner Davis Guggenheim. What started then as a profound slide show lecture has become a gorgeously cinematic excursion. Our extraordinary former vice president invites us along on an inspirational journey across the globe that delivers the tools to heal our planet. The question is: Will we choose to take the baton?
| https://www.documentarymania.com/results-alphabetical.php?search=Environmentalism&genre=
A stirring, eye-opening journey into deep time, from the Ice Age to the first appearance of microbial life 550 million years ago, by a brilliant young paleobiologist.

The past is past, but it does leave clues, and Thomas Halliday has used cutting-edge science to decipher them more completely than ever before. In Otherlands, Halliday makes sixteen fossil sites burst to life on the page. This book is an exploration of the Earth as it used to exist, the changes that have occurred during its history, and the ways that life has found to adapt, or not. It takes us from the savannahs of Pliocene Kenya to watch a python chase a group of australopithecines into an acacia tree; to a cliff overlooking the salt pans of the empty basin of what will be the Mediterranean Sea just as water from the Miocene Atlantic Ocean spills in; into the tropical forests of Eocene Antarctica; and under the shallow pools of Ediacaran Australia, where we glimpse the first microbial life.

Otherlands also offers us a vast perspective on the current state of the planet. The thought that something as vast as the Great Barrier Reef, for example, with all its vibrant diversity, might one day soon be gone sounds improbable. But the fossil record shows us that this sort of wholesale change is not only possible but has repeatedly happened throughout Earth history. Even as he operates on this broad canvas, Halliday brings us up close to the intricate relationships that defined these lost worlds. In novelistic prose that belies the breadth of his research, he illustrates how ecosystems are formed; how species die out and are replaced; and how species migrate, adapt, and collaborate. It is a breathtaking achievement: a surprisingly emotional narrative about the persistence of life, the fragility of seemingly permanent ecosystems, and the scope of deep time, all of which have something to tell us about our current crisis.
Thomas Halliday is a palaeontologist and evolutionary biologist. He holds a Leverhulme Early Career Fellowship at the University of Birmingham, and is a scientific associate of the Natural History Museum. His research combines theoretical and real data to investigate long-term…
“Thomas Halliday’s debut is a kaleidoscopic and evocative journey into deep time. He takes quiet fossil records and complex scientific research and brings them alive—riotous, full-colored, and three-dimensional. You’ll find yourself next to giant two-meter penguins in a forested Antarctica 41 million years ago or hearing singing icebergs in South Africa some 444 million years ago. Maybe most important, Otherlands is a timely reminder of our planet’s impermanence and what we can learn from the past.”—Andrea Wulf, author of The Invention of Nature

“Deep time is very hard to capture—even to imagine—and yet Thomas Halliday has done so in this fascinating volume. He wears his grasp of vast scientific learning lightly; this is as close to time travel as you are likely to get.”—Bill McKibben, author of Falter: Has the Human Game Begun to Play Itself Out?
| https://www.penguinrandomhouse.com/books/612030/otherlands-by-thomas-halliday/
The fiercest, strangest, and wildest creatures in the animal kingdom face off in a countdown of the most incredible animal moments ever recorded. Across arid deserts, through dense rainforests, and into the deepest of oceans, witness remarkable scenes of animal activity, from deadly showdowns to wild romances.
Dinosaurs Myths and Monsters
2011 Science
From dinosaurs to mammoths, when our ancient ancestors encountered the fossil bones of extinct prehistoric creatures, what did they think they were? Just like us, ancient peoples were fascinated by the giant bones they found in the ground. Historian Tom Holland goes on a journey of discovery to explore the fascinating ways in which our ancestors sought to explain the remains of dinosaurs and other giant prehistoric creatures, and how bones and fossils have shaped and affected human culture.
In Classical Greece, petrified bones were exhibited in temples as the remains of a long-lost race of colossal heroes. Chinese tales of dragons may well have had their origins in the great fossil beds of the Gobi desert. In the Middle Ages, Christians believed that mysterious bones found in rock were the remains of giants drowned in Noah's Flood.
Tom encounters a medieval sculpture that is the first known reconstruction of a monster from a fossil, and learns about the Native American stories, told for generations, which contained clues that led bone hunters to some of the greatest dinosaur finds of the nineteenth century.
Carlsbad Caverns
2013 Nature 3D
Carlsbad Caverns are located in the Chihuahuan Desert of southern New Mexico, within the Carlsbad Caverns National Park. There are more than 100 caves. The Natural Entrance is a path into the namesake Carlsbad Cavern. Stalactites cling to the roof of the Big Room, a huge underground chamber in the cavern.
The Great Flood
2009 Nature
The great flood in the Okavango turns 4,000 square miles of arid plains into a beautiful wetland. Elephant mothers guide their families on an epic trek across the harsh Kalahari Desert towards it, siphoning fresh water from stagnant pools and facing hungry lions. Hippos battle for territory, as the magical water draws in thousands of buffalo and birds, and vast clouds of dragonflies. Will the young elephant calves survive to reach this grassland paradise? The experienced mother elephants time their arrival at the delta to coincide with the lush grass produced by the great flood.
In a TV first, the programme shows the way they use their trunks to siphon clean water from the surface layers of a stagnant pool, while avoiding stirring up the muddy sediment on the bottom with their feet. Lechwe swamp deer, zebras, giraffes, crocodiles and numerous fish and thousands of birds arrive in the delta. And, in a phenomenon never before filmed in the Okavango, thousands of dragonflies appear - seemingly from nowhere - within minutes of the flood arrival, mating and laying eggs. As the flood finally reaches its peak, elephants and buffalo, near the end of their epic trek across the desert, face the final gauntlet of a hungry pride of lions. In a heart-wrenching sequence, a baby elephant is brought down by a lion in broad daylight.
Series: Nature Great Events
Africa the Greatest Show on Earth
2013 Nature
Series: Africa with David Attenborough
Complete Series
Prehistoric America
2003 Nature
The Universe
2010 Science
George Harrison Living in the Material World
2011 History
Leaving Neverland
2019 Culture
Depeche Mode: Live in Berlin
2014 Art
Reel Rock
2014 Culture
The Nazis, A Warning From History
1997 History
Out of the Cradle
2019 History
| https://www.documentarymania.com/results.php?search=Desert&genre=
John Gilbert, professor of ancient Greek, philologist, ordained Methodist minister in the Colored Methodist Episcopal (CME) Church, and missionary to the Congo, was born in Hephzibah, Georgia, not far from Augusta, to Gabriel and Sarah Gilbert. His parents were field hands, and scholars are not certain whether John was born free or enslaved. Some sources give his birth date as 6 July 1864. As a child he was eager to learn, but he had to mix long hours of farm work with brief periods of school. At last, overwhelmed by poverty, he was forced to withdraw from the Baptist Seminary in Augusta. After a three-year hiatus from schooling he resumed his work when Dr. George Williams Walker, a Methodist pastor who had come to Augusta to teach in 1884, and Warren A. Candler, pastor of Augusta's St John Church, offered him assistance. With the help ...
Article
Michele Valerie Ronnick
Article
Pedro de Weever
Jay B. Haviser was born on 21 November 1955 in Bartow, Florida. Raised in the central Florida area, he was the son of Jay B. Haviser Sr., an attorney, and Carolyn H. Haviser, an artist. He has a younger brother, Michael Haviser.
Haviser's earliest moments of scientific and archaeological discovery made an impression on him that would last a lifetime. At the age of 13, he entered a science fair in his hometown. While contractors at a nearby construction site were excavating, moving dirt and debris from one location to the next, Jay noticed certain artifacts. He first notified the State Archaeologist at the Florida Bureau in Tallahassee. After inspecting the site, the office allowed Jay to take samples of the prehistoric artifacts where the soil layers were evident.
Following this experience, Haviser saw how scientific methods are essential in archaeology, a premise that intrigued him. Thanks to his efforts in ...
Article
Robert Fay
Louis Leakey was born in Kabete, Kenya, to British missionaries working in colonial Kenya. Even before he received his doctorate in anthropology from the University of Cambridge in England, Leakey was convinced that human evolution began in Africa, not in Asia as was commonly believed among his contemporaries. To prove his theory, Leakey focused his archaeological research on expeditions to Olduvai, a river gorge in Tanganyika (now Tanzania). He found important fossils and Stone Age tools, but until 1959 Leakey had not found definitive evidence that Africa was the cradle of human evolution.
On an expedition to Olduvai in 1959, his wife, Mary Douglas Leakey, with whom he had worked since 1933, discovered the partial remains of a 1.75-million-year-old fossil hominid. Louis Leakey classified it as Zinjanthropus (later classified as Australopithecus boisei). From 1960 to 1963 the Leakeys unearthed other important remains including another fossil hominid ...
Article
Jeremy Rich
Mary Leakey, physical anthropologist and archaeologist who discovered evidence of early human life in the Rift Valley of East Africa, was born Mary Douglas Nicol on 6 February 1913 in London, England. Her father was the painter Erskine Edward Nicol and her mother was Cecilia Marion (née Frere) Nicol. During Mary's childhood, her family moved around a great deal. Erskine Nicol painted various portraits and subjects in England, France, Italy, Egypt, and elsewhere. Mary's prolonged sojourns in southern France provided her with the chance to develop a fluent command of French. While she greatly enjoyed her talks and walks with her father, she found her mother's Catholic faith stultifying even as she developed some friendships with individual priests. Her childhood came to a sudden end in the spring of 1926 when her father passed away from cancer. Mary's mother decided to place her daughter in a Catholic convent, but ...
Article
Robert Fay
Mary Leakey’s deep interest in the study of prehistory began at the age of eleven, when she viewed cave paintings of the Dordogne in southern France. Although she later took courses in anthropology and geology at University College, London, and participated in excavations in England, she never earned a degree. In 1933 paleoanthropologist
Louis Leakey's controversial theories drove their research throughout much of their careers. During the twenty years that the Leakeys spent attempting to prove that human evolution occurred in Africa and not Asia, Mary developed rigorous excavation techniques that set the standard for paleoanthropological documentation and excavation. A tireless worker, after long days of carefully sifting the Olduvai earth for fossils ... | https://oxfordaasc.com/browse;jsessionid=691349C4639D3B37CDE03941D3EA95C1?pageSize=20&sort=titlesort&t=AASC_Occupations%3A507
The discovery near Xi'an of a Qin Dynasty tomb group, believed to be the largest in China, has delighted archaeologists but also attracted the attention of grave robbers.
Excavations undertaken ahead of a railway improvement project in Shaanxi Province unearthed 604 tombs in Qujia Village, Lintong County.
"I was astounded by the sheer number of tombs," said Sun Weigang, a researcher with the Shaanxi Institute of Archaeological Research. "We know Shaanxi is rich in cultural relics, with over a thousand tombs unearthed every year. But we have never found so many in such a small area".
Most of the tombs are of ordinary people and do not contain particularly valuable objects, but are of enormous interest to archeologists researching the social life of the period. A vast collection of pottery and bronze ware has been unearthed including cauldrons, pots, jars, axes and swords, as well as more than 200 complete human skeletons.
"The remains are mainly of adult men who died from natural causes. They don't appear to have had a close clan relationship with each other," according to Chen Liang, associate professor of Archaeology, Northwest University.
Archaeologists hope the discovery of the tombs will help them locate the site of the ancient Qin Dynasty city of Liyi. It had been thought that Liyi was near a village called Liuzhai, based on sporadic discoveries of Qin relics. "But the tombs are over 5 kilometers away from Liuzhai Village, and the custom of the time was to locate burial grounds close to the city," an archaeologist said.
It takes about a week for two or three workers and one technician to excavate a tomb. Local villagers have been employed on the dig because of the large number of tombs. "The digging is not difficult, but the cleaning work is very tough, and will take at least a year if not more," an excavator named Quan Xihong told reporters.
The old men in Qujia Village knew there was an ancient city nearby. Villagers often found jars and pots in the fields but thought it was bad luck to take them home. Recently though, grave robbing has been on the increase.
A local man said that while grave robbery wasn't a problem in Qujia, there were entire grave robbing networks in other, nearby villages. One of the site excavators said that he had personally come across grave robbers. | http://www.china.org.cn/culture/2008-03/21/content_13239511.htm |
"No continent hosts as many different environments and landscapes as Africa. From wide savannahs, resting gently on vast desert dunes, to dense rain forest, bordering the equator.
See how many tribes, such as the Maasai in the savannah, have kept their traditional way of life to this day, and watch how animals have had to adapt their behaviours in order to survive.

Join us on a journey through Africa! | http://reliancehvg.co.in/store/product.php?productid=19714&cat=249&page=1
BY HUMPHREY KARIUKI
Africa is the most beautiful and inspiring continent in the world.
Home to iconic and diverse wildlife renowned the world over and made up of spectacular ecosystems extending from deserts to ice-capped mountains, tropical forests to Mediterranean shores, and pristine savannahs to rich ocean reefs.
This natural beauty is unique to our continent and is a proud legacy handed down by generations of Africans who have come before us – a heritage which we all have an obligation and necessity to continue.
But today, Africa is at a pivotal moment in its history.
Development across the continent is bringing millions out of poverty, increasing life expectancies, educating our children and opening up unprecedented opportunities.
But unprecedented demand is also placing unsustainable pressures on the environment.
While we cannot allow our development to be hampered or slowed, as this would only serve to consign millions to continued poverty, we must manage competing priorities and ensure that conservation and development work hand-in-hand.
Because only by protecting some of the world’s most revered ecosystems and wildlife can we provide a long-term legacy for future generations of Africans – one which is capable of both enriching people’s lives and spurring sustainable development.
This is not just a moral legacy but an economic one too.
Our rich biodiversity provides every person with our most basic needs – food and water.
Without which, as too many Africans already know, there is no path out of poverty. And it also provides vast opportunities for millions of jobs and multi-billion-dollar growth, through developing thriving tourism and allied industries based around our natural beauty.
But in order to do any of that we must first build widespread awareness of the need to protect our natural beauty and get as many Africans, especially young people, engaged with this cause as possible.
That is exactly what I am proud to be doing alongside the Mt Kenya Wildlife Conservancy and Animal Orphanage, which hosts more than 10,000 children a year, teaching them about the value of our conservation efforts and in particular our vital work saving the critically endangered Mountain Bongo. As we pray the world emerges from the pandemic at last, may it be every African's resolution to really understand and value the immense natural heritage our forefathers left us.
South Africa is experiencing its severest drought in more than a century. Savannahs – grasslands scattered with trees and scrubs, which cover about half of Africa – are some of the most productive environments of the continent, supporting livestock and rural livelihoods.
Revolutionary System Monitors Water Pollution
Toxic microalgae, viruses and chemical contaminants are floating in our waters. These hazardous materials pose a high risk to the livelihood of the sea dwellers. Especially the aquaculture is affected by this rising problem.
When your water is contaminated
Statistically, drinking water in Europe is the safest in the world. But according to the World Health Organisation, every year more than 300,000 Europeans fall ill due to contaminated tap water. ...
Lucia Doyle: combining irrigation and fertilisation in open-fields agriculture
The term fertigation is used in agriculture to refer to the combination of irrigation and fertilisation, in one step.
Ralf Otterpohl: a second life for unsuspected nutrient-rich waste
Every day cities in Europe discard a useful nutrient-rich resource that could be used to grow crops. Ironically, we treat and process human wastes while we mine non-renewable phosphate and potassium and we consume fossil fuel to make nitrogen fertiliser. | http://www.youris.com/tag.aspx?rsec=&tag=852,322,320,828 |
Flowing through nine African countries, the Zambezi creates many different worlds. Each world presents challenges for life: on the vast Liuwa floodplains a family of cheetah struggle to find a meal.
A look at the landscape and animals inhabiting the rivers of Africa - from parched desert and scrubland, through grassy savannahs and even through the deepest African jungles.
The Okavango Delta is a wetland surrounded by arid plains. Once a year, a huge flood engulfs the swamps and spills over to the flood plains, a haven for wildlife.
The Sand River is home to one of Africa's most diverse animal populations. There, animal mothers from leopards to lionesses struggle to raise their young in this harsh world. | https://www.enhancetv.com.au/video/africa-river-wild/51195
Ancient-ape remains discovered in Kenya
Researchers have found fossils from an approximately 9.8 million-year-old ape that lived in eastern Africa. The creature belonged to a new genus, dubbed Nakalipithecus nakayamai, that may have evolved into a common ancestor of African apes and humans, proposes a team led by Yutaka Kunimatsu of Kyoto University in Japan.
Fieldwork in Kenya yielded a partial lower jaw containing three teeth as well as a dozen individual teeth, all attributed to Nakalipithecus. The fossils were dated by measurements of radioactive-argon decay in volcanic-ash layers at the African site.
The newly unearthed fossils display a few similarities to fossil teeth of a previously reported ape that lived from 9.6 million to 8.7 million years ago in what is now Greece. Kunimatsu’s group has yet to compare Nakalipithecus with fossils of a 10 million-year-old ape recently discovered in eastern Africa (SN: 11/3/07, p. 280).
Apes evolved in Africa from 11 million to 5 million years ago, the scientists say in an upcoming Proceedings of the National Academy of Sciences. Other investigators speculate that, during that span, European and Asian apes spread into Africa and evolved into various lines of African apes. | https://www.sciencenews.org/article/ancient-ape-remains-discovered-kenya |
The widely anticipated Bob Dylan Center will open to the public on May 10, 2022, in downtown Tulsa, appropriately just a few feet from the Woody Guthrie Center. Early in his career, Dylan was a musical disciple of Guthrie, a folk-music icon.
According to the announcement Wednesday from the center’s website, the facility devoted to the highly influential songwriter will exhibit more than 100,000 items owned by Dylan over seven decades, including original manuscripts, unreleased recordings, unseen film performances, photographs and more.
The BDC will feature cutting-edge and immersive technology in a multimedia environment that is designed to be as impressive and revealing to visitors new to Dylan’s work as it will be to long-time fans and aficionados.
Among the many highlights that will be found at The Bob Dylan Center are:
— An ever-evolving curated display of elements that illuminate the depth and breadth of the Bob Dylan Archive® collection.
— An immersive film experience that will initiate visitors through an innovative cascade of archival music and film, directed by renowned Dylan chronicler Jennifer Lebeau.
— A recreation of an authentic studio environment where visitors will experience what it was like to be present at one of Dylan’s historic recording sessions.
— The Columbia Records Gallery, which will provide an in-depth look at the creation, performance and production of timeless Dylan songs such as “Like a Rolling Stone,” “Tangled Up in Blue” and “Chimes of Freedom.”
— A screening room that will showcase Dylan-related scripted films, documentaries and concert performances, including never-before-seen material unearthed from the Archive.
— A multimedia timeline of Dylan’s life from his early years in Minnesota through the present day, written by award-winning historian Sean Wilentz.
— The Parker Brothers Gallery, which will explore the creative process through the work of other innovative artists, in an initial exhibit curated by influential author Lewis Hyde.
One sterling example of the treasures to be found in The Bob Dylan Archive is a recording of Dylan performing “Don’t Think Twice, It’s All Right” in the autumn of 1962. This heretofore-unknown recording was made by Milton (Mell) and Lillian Bailey, friends and early champions of the young Bob Dylan when he was a fixture in New York’s Greenwich Village folk scene. This version of the song, recorded in the Baileys’ apartment at 185 East 3rd St., features alternate lyrics and is the earliest known recording of the song that was eventually released in 1963 on The Freewheelin’ Bob Dylan. Another example is a recently unearthed image of Bob Dylan onstage during his 1974 tour with The Band, taken by renowned photographer Barry Feinstein.
The Tulsa-based George Kaiser Family Foundation acquired Dylan’s archive in 2016. Media outlets that included Rolling Stone reported the foundation paid between $15 million and $20 million for the collection, which filled two semi-trucks.
The Kaiser group also acquired the Guthrie archive in 2011.
Dylan perhaps is best known for his worldwide hit single from 1965, “Like a Rolling Stone.” He probably is the most influential songwriter of the second half of the 20th century. Years ago, a pop-music reference book argued he was one of a handful of people responsible for ending the Vietnam War.
Guthrie’s links to Route 66 are many. But links to the Mother Road and Dylan are scant. Early in his career, Dylan claimed to have spent part of his childhood in the Route 66 town of Gallup, New Mexico. That probably was Dylan messing around with a reporter; he actually grew up in Hibbing, Minnesota.
In terms of highways, Dylan was more associated with U.S. 61. That highway went through his hometown, and he recorded the well-known “Highway 61 Revisited” for a now-classic album of the same name in 1965.
The Dylan Center will be only about three blocks from the old Route 66 alignment of Second Street/Detroit Avenue in downtown Tulsa. | https://www.route66news.com/2021/05/14/bob-dylan-center-in-tulsa-will-open-to-the-public-in-may-2022/ |
CWRU researchers among team discovering “remarkably complete” cranium of early human ancestor species
Yohannes Haile-Selassie—a Case Western Reserve University adjunct professor and curator at the Cleveland Museum of Natural History—and a team of researchers have discovered a “remarkably complete” cranium of a 3.8-million-year-old early human ancestor in Ethiopia.
Working for the past 15 years at the Woranso-Mille paleontological site—located in the country’s Afar region—the team unearthed the cranium in February 2016; in the years since, they have conducted extensive analyses of the fossil, while geologists determined the age and context of the specimen—and made significant discoveries that increase our understanding of early human ancestors.
The results of the team’s findings are published online in two papers in the international scientific journal Nature.
The cranium represents a time interval—between 4.1 and 3.6 million years ago—from which early human ancestor fossils are extremely rare.
The findings also indicate that Lucy’s species (Australopithecus afarensis) and the early human ancestor species of the cranium (Australopithecus anamensis) coexisted for approximately 100,000 years, challenging previous assumptions of a linear transition between these two early human ancestors.
“This is a game-changer in our understanding of human evolution during the Pliocene,” said Haile-Selassie, an adjunct professor of anthropology and cognitive science in Case Western Reserve’s College of Arts of Sciences.
Due to the cranium’s rare near-complete state, the researchers also identified never-before-seen facial features in the species.
“[The early human ancestor] has a mix of primitive and derived facial and cranial features that I didn’t expect to see on a single individual,” Haile-Selassie said.
The A. anamensis species was previously only known through teeth and jaw fragments, all dated to between 4.2 and 3.9 million years ago, and researchers determined similarities between the preserved dentition of the cranium and these previously found fragments.
Cranium age
Beverly Saylor—a professor in the Department of Earth, Environmental, and Planetary Sciences at Case Western Reserve—and her colleagues determined the age of the fossil using an array of techniques.
By looking at the magnetic properties and chemistry of volcanic rock layers nearby the fossil—and combining field observations with an analysis of microscopic biological remains—Saylor and her colleagues were able to determine the landscape, vegetation and hydrology where (and when) the early human ancestor died.
The findings were published in a companion paper published in the same issue of Nature.
Reconstructing a pre-historic landscape
The early human ancestor likely lived near a large lake in a region that was dry, according to the findings.
The cranium was found in sandy deposits of a delta where a river entered a lake.
The river likely originated in the highlands of the Ethiopian plateau, while the lake developed at lower elevations—where rift activity caused the Earth’s surface to stretch and thin, creating the lowlands of the Afar region.
Volcanic debris flows occasionally descended into the otherwise quiet lake, which was ultimately buried by basalt lava flow—a dramatic landscape change common in rift settings.
“Incredible exposures and the volcanic layers that episodically blanketed the land surface and lake floor allowed us to map out this varied landscape and how it changed over time,” said Saylor.
Also, fossil pollen grains and chemical remains of fossil plants and algae were preserved in the lake and delta sediments and provided clues about the ancient environmental conditions—specifically indicating that the lake near where the early human ancestor finally rested was likely salty at times and its watershed was mostly dry.
Yet, there were also forested areas on the shores of the delta or alongside the river that fed the delta and lake system, researchers determined.
A plentiful paleontological project
The Woranso-Mille project has been conducting field research in the central Afar region of Ethiopia since 2004. The project has collected more than 12,600 fossil specimens representing more than 80 mammalian species. The fossil collection includes about 230 fossil hominin specimens older than 3.8 million years to around 3 million years.
The first piece of the cranium fossil—the upper jaw—was found by Ali Bereino, a local Afar worker, on Feb. 10, 2016, at a site around 34 miles north of Hadar (“Lucy’s” site). The specimen was exposed on the surface, and further investigation of the area resulted in the recovery of the rest of the fossil.
“I couldn’t believe my eyes when I spotted the rest of the cranium. It was a eureka moment and a dream come true,” said Haile-Selassie.
An international team
Work on the analysis and geological and paleoenvironmental context of MRD was conducted by an international team of paleoanthropologists, geologists, geochemists and paleobotanists from renowned institutions, including the Max Planck Institute for Evolutionary Anthropology in Germany, Pennsylvania State University and the University of Bologna in Italy.
Work to understand the age and landscape setting included researchers from the Universitat de Barcelona in Spain, the Berkeley Geochronology Center in California, Addis Ababa University in Ethiopia, and Franklin and Marshall College in Pennsylvania. Reconstruction of the environmental conditions was conducted by researchers at the University of Michigan, University of Southern California, Los Angeles, and Aix-Marseille University in France.
A co-author on both Nature papers, Stephanie Melillo, of the Max Planck Institute for Evolutionary Anthropology in Germany, graduated from Case Western Reserve in 2005 with a Bachelor of Arts in anthropology.
For more information, contact Bill Lubinger at [email protected]. | https://thedaily.case.edu/a-3-8-million-year-old-fossil-reveals-the-face-of-lucys-ancestor/ |
Welcome to Kenya, a land of vast savannahs peppered with immense herds of wildlife and snow-capped equatorial mountains. This country is filled with traditional peoples who bring soul and colour to the earth. When you think of Africa, you’re probably thinking of Kenya. It’s the lone acacia silhouetted on the savannah against a horizon stretching into eternity. Filling the country’s landscape, adding depth and resonance to Kenya’s age-old story, are some of Africa’s best-known peoples, the Maasai, the Samburu, the Turkana, the Swahili, the Kikuyu. It is the land of the Masai Mara, of wildebeest and zebras migrating in their millions with the great predators of Africa following in their wake, of endangered species like black rhinos managing to maintain their precarious foothold. Kenya is also home to the red elephants of Tsavo, to Amboseli elephant families in the shadow of Mt Kilimanjaro and to the massed millions of pink flamingos stepping daintily through lake shallows. Africa is the last great wilderness where these creatures survive. This is the perfect place to answer Africa’s call of the wild.
| https://www.perfectsafaris.com/kenya/
Cape Town - Auditing firm KPMG SA was a “willing participant in state capture” and its leadership hasn't fully grasped the magnitude of what they were involved in, according to former finance minister Pravin Gordhan.
He was in conversation with radio host Eusebius McKaiser on CapeTalk on Tuesday morning.
Gordhan also said his hope for SA would be “dimmed” if former AU chairperson Nkosazana Dlamini-Zuma were elected the next president of the ANC at the party’s upcoming National Conference in December.
He said that Dlamini-Zuma’s main rival, Deputy President Cyril Ramaphosa, would be a better choice.
Gordhan said Ramaphosa's candidature carried the hope that the ANC could self-correct, and indicated he may leave the ANC if Dlamini-Zuma emerged victorious.
Willing participant
The former finance minister said KPMG SA’s years-long auditing of multiple Gupta-owned companies had to be understood in the context of state capture, and could not be represented as a merely “technical matter”.
“In fact, they (KPMG SA) were willing partners so to speak. Maybe they didn’t realise the magnitude of what they were doing but, for a fee, they were willing partners in a state capture project, as were many others,” he said.
Gordhan said that KPMG SA's leadership, which was cleared out on Friday September 15, had been "ducking and diving" for too long about the work they had done for Gupta-owned companies.
KPMG International on Friday evening announced a new independent investigation into KPMG SA’s work for the Gupta family and its role in authoring the controversial SARS ‘rogue unit’ report.
The global auditing firm’s chairperson John Veihmeyer said the investigation would be led by a senior SA legal figure who was “completely independent” of both KPMG South Africa and KPMG International.
This followed an earlier promise by the auditing firm to pay back the R23m it earned in fees from SARS for its now-retracted ‘rogue unit’ report, and donate the R40m it earned from auditing Gupta-related companies to charity.
But Gordhan said this offer was too little.
“I think their offer to pay back R24m, plus R40m, was gratuitous to say the least. It was something that they cooked up at a distance, and completely out of touch with reality in South Africa,” he said.
“I think it is going to take a bit of time… for the magnitude of what they have been involved in, and the magnitude of the impact, to sink into senior managers, both internationally and in South Africa.”
Asked by McKaiser what effect KPMG’s auditing of Gupta-owned companies has had on the SA economy, Gordhan said it was “huge”.
“Ultimately you can say that in relation to the audits they performed for the companies - and that varies from seven years to 15 years - that any shortsightedness of oversight, or refusal to cut through the fog to understand what was actually going on - is the direct cause of allowing hundreds of billions of rands to leave South Africa.”
Gordhan added that “for now (KPMG) has to bear the responsibility of the billions we have lost”.
Hope dimmed
Speaking about the ANC succession race, Gordhan said his hope for the future of SA would be “dimmed” if Nkosazana Dlamini-Zuma were elected ANC president at its upcoming conference.
The party’s 54th National Conference is set to take place between December 16 and 20 in Gauteng.
“I think (narrative of hope) would be significantly dimmed,” he said, adding that he saw hope coming from a Ramaphosa campaign win.
Gordhan said he supported Ramaphosa in terms of personality, the team around the deputy president and his “potential programme”.
Asked by McKaiser if he would quit the ANC if Dlamini-Zuma won, Gordhan said he would need to find another way to continue.
“The answer is very simple, I haven't heard anything from that (Dlamini-Zuma) campaign yet that says we are going to have a radically different South Africa to the one we have experienced over the last five or so years.
“It’s clear that one can't, or certainly I can’t, go through what we've just been through for the last couple of years, both in terms of the declining potential of this county and the kind of almost conscious efforts to place SA economically and otherwise in a position where the majority is constantly losing out.
“In those circumstances, people like myself must find some other role and some other way of continuing to contribute,” he said.
| https://www.news24.com/fin24/Economy/gordhan-kpmg-sa-was-a-willing-partner-in-state-capture-20170926
I find myself tiring of euro-centric bedroom decor. It all seems so similar and safe and contrived. I can appreciate white, off white, grey and blue grey and how it contributes to a calm sleep environment but I am finding that as a South African surrounded by eclectic, colourful cultures it seems a pity to shy away from all things bold and beautiful (minus the Bold & Beautiful soapy).
Once a month we will be having a look at one of our awesome cultures - where they have come from (in terms of bedroom decor) and where they are at currently (in terms of global decor trends).
The images below resonate with what we know of traditional Xhosa heritage. The circular kraal, the wooden headrest, the minimalist woven mat and the colourful, yet practical blankets that double as coats and capes. It's what you expect to see in the Transkei and is still the norm in many of the more rural parts of the beautiful Eastern Cape.
However, there are many designers hailing from the Eastern Cape whose work is finding success not only in the Johannesburg and Cape Town CBDs but also internationally. They are bringing their unique Xhosa touch to fashion and decor on the world stage.
Have a deeper look at the work of Laduma Ngxokolo and Stoned Cherrie for further inspiration. Why not claim a little SA heritage or support one of our many talented local designers by adding some Proudly South African to your home? | http://blog.sealy.co.za/xhosa-inspired-bedding/ |
Please add any recently published resources into the SANREM Knowledgebase (SKB).
All SANREM participants are required to document the information resources they contribute to SA & NRM knowledge, such as:
- articles
- reports
- books
- presentations
- photos
- videos
- extension and training materials
- et cetera
To help collect that information and make it available to the wider SA & NRM community, SANREM has created the SANREM Knowledgebase (SKB), a searchable database of these publications.
When it comes time to put your information together for the Semi-annual and Annual reports, the information resources you’ve contributed to SA & NRM knowledge are documented on Form 18. The more you keep the SKB up-to-date with your publications, the easier it is for you and the ME to compile the list for this form.
To add to the SKB, visit the SANREM Knowledgebase page. There you will find some general information, plus links to:
- view/search the Knowledgebase proper,
- add resources to it (called “Metadata Entry”), and
- a document with instructions for adding resources (called the “Metadata Guide”).
Start by clicking the “SKB metadata entry” link and logging in. Follow the instructions detailed in the SKB metadata guide!
Proper Marking and Branding
Remember, when developing any work for publication (in the SKB or elsewhere), you must adhere to the following, from the POP-Manual under “publications” (page 37):
Publications and presentations funded by the SANREM Award should acknowledge USAID support with the following statement:
“This publication/presentation was made possible by the United States Agency for International Development and the generous support of the American People for the Sustainable Agriculture and Natural Resources Management Collaborative Research Support Program under terms of Cooperative Agreement No. EPP-A-00-04-00013-00 to the Office of International Research and Development at Virginia Polytechnic Institute and State University.”
More Information
Questions about this process may be directed to our SKB Research Assistant Lauren Moore at [email protected]. | https://sanremcrsp.cired.vt.edu/partners/team-room/skb-instructions/ |
This package is not currently in any snapshots. If you're interested in using it, we recommend adding it to Stackage Nightly. Doing so will make builds more reliable, and allow stackage.org to host generated Haddocks.
strongswan-sql
Configuration of StrongSwan SQL backend
This library allows for the manipulation of strongSwan connection configuration stored in a MySQL database in a manner that is compatible with the strongSwan SQL plugin for charon.
- How to use this module:
The strongSwan IPsec package offers the means to store connection configuration in a SQL database. This module offers some facilities to manipulate these config elements from Haskell code in a simplified abstracted way. This library offers two approaches to manipulating strongswan configuration in an SQL database as expected by the SQL plugin. See Managed vs Manual API below.
- Managed API: Since managing each configuration object by hand and establishing the relationships amongst them can be tricky and demands internal knowledge of the SQL plugin's inner workings, a special API is offered in which all configuration parameters are bundled together in a single type (see IPSecSettings). The simplified API then allows for writing, reading and deleting these, while behind the scenes the required elements are created and linked together unbeknownst to the caller. This of course greatly simplifies things, but the catch is that the ability to share configuration elements amongst connections is lost. Each managed connection configuration gets a separate IKE, Child SA, Peer config etc., and no attempt is made to reuse them amongst managed connections.
- Manual API: The different strongSwan configuration elements are each mapped to a Haskell type, and they can be manually written to or read from the SQL database. This offers utmost control over which elements get created and how they are interlinked. For example, one can create a single IKE session configuration to be shared by all connections, or have some Child SA configurations shared amongst peers of a given type, etc. The downside to this level of control is that it requires the user of the library to be familiar with the (poorly documented) way in which the plugin expects the relationships to be expressed in terms of entries in the SQL tables.
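As a rough illustration of the managed approach, the sketch below models its workflow (write, look up and delete a complete IPSecSettings bundle keyed by connection name) against an in-memory map standing in for the MySQL tables. The record fields and the function names (writeSettings, readSettings, deleteSettings) are illustrative assumptions, not the package's actual exported API; consult the package documentation for the real types and signatures.

    {-# LANGUAGE OverloadedStrings #-}
    module Main where

    import qualified Data.Map.Strict as Map
    import           Data.IORef      (IORef, newIORef, modifyIORef', readIORef)
    import           Data.Text       (Text)

    -- Stand-in for the package's bundled configuration type; the real
    -- IPSecSettings carries many more parameters (proposals, lifetimes, ...).
    data IPSecSettings = IPSecSettings
      { localAddr   :: Text   -- this host's address
      , remoteAddr  :: Text   -- peer address
      , ikeProposal :: Text   -- e.g. an IKE proposal string
      } deriving (Show, Eq)

    -- An IORef-backed map stands in for the database connection here.
    type Store = IORef (Map.Map Text IPSecSettings)

    -- Hypothetical managed operations, keyed by connection name.
    writeSettings :: Store -> Text -> IPSecSettings -> IO ()
    writeSettings store name cfg = modifyIORef' store (Map.insert name cfg)

    readSettings :: Store -> Text -> IO (Maybe IPSecSettings)
    readSettings store name = Map.lookup name <$> readIORef store

    deleteSettings :: Store -> Text -> IO ()
    deleteSettings store name = modifyIORef' store (Map.delete name)

    main :: IO ()
    main = do
      store <- newIORef Map.empty
      let cfg = IPSecSettings "192.0.2.1" "198.51.100.7" "aes256-sha256-modp2048"
      writeSettings store "office-vpn" cfg    -- one call creates the whole bundle
      readSettings store "office-vpn" >>= print
      deleteSettings store "office-vpn"       -- one call removes every linked element
      readSettings store "office-vpn" >>= print

Against the real backend the same three calls would, per the description above, create or remove the underlying IKE, peer and Child SA rows together with their link entries in one step, which is exactly the bookkeeping the manual API leaves to the caller.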
The manual API has been reverse engineered based on the SQL table definitions available here:
- Child SA : All configuration parameters related to an IPsec SA.
- IKE Configuration : Configuration applicable to the IKE session (/phase 1/ in IKEv1 parlance).
- Peer Configuration : All elements related to configuration of a peering connection.
  A peer connection links to a specific IKE configuration (by means of ID), and it is furthermore associated to the Child SA by means of a Peer2ChildConfig type.
- Traffic Selectors: These are independent values linked to a Child SA by means of a Child2TSConfig type. | https://www.stackage.org/package/strongswan-sql
Eletrobras Privatisation could fetch up to $6.3bn (30/08/2017)
The privatization of Brazilian electric utility Eletrobras could raise up to 20 billion Reais ($6.3 billion), Mining and Energy Minister Fernando Coelho Filho told Reuters, adding that the process may involve the sale of new shares.
“We are proposing the issuance of new shares and, by doing that, current shares would be diluted,” he said in a late Monday interview, adding that one buyer may not be able to take a controlling interest.
His comments underscored the range of issues yet to be settled in the privatization of Centrais Eletricas Brasileiras SA, as the government’s holding company is formally known.
The interview followed an Energy and Mines Ministry statement, which said the model and terms of the Eletrobras privatization are yet to be decided. It added that the government will remain a shareholder and reserve the right to veto some strategic decisions.
Details of the proposal are expected at a press conference in Brasilia at 10 a.m. (1300 GMT).
Still, the prospect of a privatization sent New York-listed shares of Eletrobras soaring in after-market trading on Monday, highlighting investor enthusiasm for the free-market reforms that President Michel Temer has embraced.
The government is comparing the Eletrobras proposal to the successful privatizations of planemaker Embraer SA and miner Vale SA in the 1990s.
Yet for many Brazilians the privatizations of that era were tarred by scandals in the telecommunications sector, and the subject remains politically fraught even as the government has opened airports, highways and ports to private investment.
The proposal to sell control of Eletrobras will be formally presented to the council of the government’s Investment Partnership Program on Wednesday, a person with direct knowledge of the plan told Reuters on Monday.
The value of Eletrobras’ equity stood at 31 billion reais at the end of May, according to the company’s website. Brazil’s federal government owns 41 percent of the company and a majority of its voting shares. State development bank BNDES owns about 20 percent of common shares and 14 percent of preferred shares.
The minister said the government’s plan will not allow a single group or investor to buy a large concentration of shares. | http://tambaba-resort.co.uk/investment-news/eletrobras-privatisation-could-fetch-up-to-$6.3bn.php |
The latest labour force survey of about 30,000 households released by Stats SA confirmed the baleful state of the labour market, which is the growing mismatch between supply and demand for workers. In the third quarter, the supply of potential workers increased by 153,000, or 0.4%. This was much faster than the demand for them, which increased by 92,000 (to 16.4 million). The number defined as unemployed (those not working but actively looking for work), increased by 127,000 to 6.2 million, pushing the unemployment rate up to 27.5% of the potential workforce.
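As a rough check on the arithmetic, using the rounded figures quoted above:

    potential workforce = employed + unemployed = 16.4m + 6.2m = 22.6m
    unemployment rate   = 6.2m / 22.6m ≈ 27.4%

which matches Stats SA's 27.5% to within the rounding of the inputs (the official rate is computed from unrounded survey totals).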
But not all the news on the employment front was bad – depending on your perspective. While the formal sector continues to shed jobs, the informal sector is adding them at a rapid rate. In the third quarter, informal employment outside of agriculture rose by 188,000, and by 327,000 (12.2%) during the past year to over 3 million workers employed informally, which is over 18% of all employed.
The decline in formal and the increase in informal employment is not a coincidence. Formal employment has been subject to a rising tide of intervention by government and trade unions (with more to come soon, in the form of a national minimum wage). These have provided those in jobs with consistently improved wages and other valuable employment benefits, as well as security (to a degree) against dismissal and compensation for retrenchment.
The informal sector’s employers and workers largely escape these constraints on the freedom to offer and supply employment opportunities. If formal employment (decent jobs, as they are described) is unattainable, the choice may only be informal employment or not working at all.
While formal employment outside of the public sector has stagnated, the share of employment costs in total value added by private business has not fallen. The bill for employment benefits has gone up in real terms, as have employment benefits for those in work, even as the numbers employed have gone down (see figures below).
Non-financial corporations’ share of value added: Operating surplus and compensation of employees
Source: SA Reserve Bank and Investec Wealth & Investment
Real value added by non-financial corporations (1995=100) using the household consumption deflator
Source: SA Reserve Bank and Investec Wealth & Investment
Non-financial corporations’ growth in real value added and real compensation (using the household consumption deflator)
Source: SA Reserve Bank and Investec Wealth & Investment
If the wage bill in any sector of the economy rises faster than union membership declines (as it has been doing), the pool of income upon which to draw union dues deepens. Strikes that increase benefits at the expense of employment are therefore not irrational from the perspective of union leaders, so long as wages increase at a faster rate than employment declines.
The "Decent Jobs Summit"
The jobs summit would have been better described as the “Decent Jobs Summit”, for which the heralded Landmark Framework Agreement is but a wish list of everything that can be imagined to promote the demand for labour. It’s a plan, however, that gives no consideration to the impact of the rising cost of hiring labour, and the more onerous conditions imposed on this hiring. This may have had something to do with the disappointing volume of employment provided.
Most of us would like to see decent jobs for all who are able and willing to work. We also hope that economic growth can make it possible – as it has largely in the developed world. But the truth is that too few South Africans have the skills, the qualifications or experience to allow them to be employed on decent terms by cost-conscious employers.
The soon to be imposed National Minimum Wage (NMW) of R3 500 per month will make it more difficult to find employment outside of the informal sector because these minimums are well above what many in employment currently earn.
For all of the many (including economists who should know better) who wish wages higher and poverty away, it has been convenient to ignore the findings of one comprehensive and highly relevant study. The study is by Haroon Bhorat and colleagues on the impact of higher minimums etc. on employment in SA agriculture, introduced after 2003.
The impact on employment (down 20%) and improved wage benefits for those still employed, were correctly described by the analysts as significant. There is every reason to conclude that the impact of the NMW on employment will be as significant and destructive for those who lose their jobs. And it will be helpful for those who retain their jobs on improved terms, as they will be even more carefully selected for the skills and strengths they bring to their tasks. The informal sector will have to come to the rescue of the larger numbers of unemployed workers while they wait impatiently for economic miracles.
About the author
Prof. Brian Kantor
Economist
Brian Kantor is a member of Investec's Global Investment Strategy Group. He was Head of Strategy at Investec Securities SA 2001-2008 and until recently, Head of Investment Strategy at Investec Wealth & Investment South Africa. Brian is Professor Emeritus of Economics at the University of Cape Town. He holds a B.Com and a B.A. (Hons), both from UCT.
More disruptions at Walvis airport
The withdrawal of navigational instruments causes chaos in bad weather.
01 August 2019 | Transport
Sources said the conditions at Walvis Bay were “extremely foggy”, making it impossible for aircraft to land without the assistance of ground-based navigational instrument approaches recently withdrawn by the Namibia Civil Aviation Authority (NCAA).
Flights by Air Namibia, SA Express, and SA Airlink were all affected. SA Airlink's marketing and sales manager Karin Murray said its aircraft was forced to return to Johannesburg after numerous attempts to land at Walvis Bay.
According to one source the SA Airlink aircraft circled the Walvis Bay airport three times in the hope that the skies would clear, but had to divert due to continuing bad weather. The Air Namibia and SA Express flights diverted to Hosea Kutako International Airport (HKIA) outside Windhoek.
Unhappy passengers ranted about the inconvenience caused to them, accusing the airlines and the NCAA of incompetence.
The airlines said they cannot be held liable for the inconvenience, citing terms and conditions of air flight which absolve them from any liability in unforeseen circumstances such as bad weather.
The interim executive director of the NCAA, Reinhard Gärtner, reiterated that the authority had no choice but to withdraw the instrument approaches due to “possible legal implications”.
“We are engaging and exploring all possible avenues,” Gärtner said, adding that there was no indication when the situation would be normalised.
“We can reinstate the old instrument approaches as a temporary relief but that is not an option. We will go ahead and forge a solution,” Gärtner said.
Gärtner held an emergency meeting with transport minister John Mutorwa on Monday to discuss the matter, especially in view of the many visitors expected to land at Walvis Bay for the inauguration of the fuel storage facility at the Walvis Bay harbour this Friday.
It is understood that the NCAA was forced to withdraw the instrument approaches after “inconsistencies” such as illegal software from the suppliers were revealed.
Gärtner would not say who the suppliers were, saying the matter was sub judice.
He said the NCAA would now have to invite new tenders for instrument approaches to be installed at HKIA and the Walvis Bay airport.
In Switzerland, the Vatican has put into action its promise of “rigour, clarity and sobriety” in its finances.
– Centralisation of investments a key goal to weather post-COVID recession: Economy ‘Minister’
Rumours have been swirling for weeks that the coronavirus recession could put the Holy See at risk of default.
An internal memo of the Vatican Secretariat for the Economy revealed that officials were anticipating losses of up to 146 million euros in a worst-case COVID-19 scenario, but the Prefect of the Secretariat, Juan Antonio Guerrero, denied to Vatican News May 13 that the Holy See was heading for bankruptcy, even if he did admit there would be “difficult years” ahead in terms of finances.
“We must… understand what is and what is not essential”, Guerrero said last week, adding that in terms of spending and investments “our approach must be the maximum sobriety and the maximum clarity”.
“We had already decided, when approving this year’s budget, that expenses should be reduced in order to reduce the deficit”, Guerrero explained, affirming that “the post-Covid crisis forces us to do so more determinedly”.
Insisting that “we must be sober, rigorous… we must manage the finances with the passion and diligence of a good family man”, the Prefect for the Economy said that Vatican entities had been asked “to do everything possible to reduce expenses while safeguarding the essential services of [their] specific mission”, including centralising financial investments and improving personnel and procurement management.
In terms of those investments, Guerrero said the aim was “not only to centralise but to go about it professionally, without conflict of interest according to ethical criteria. It is not only that unethical investments are to be avoided, but that those investments linked to a different vision of the economy, to integral ecology, to sustainability are to be promoted”.
– Holy See merges nine Swiss companies into one
Proof that Holy See economic officials under Guerrero’s leadership are serious about centralising investments is the fact that the Vatican has now merged nine of the real estate and finance companies it owned in Switzerland into one single entity.
Those nine companies are the four companies of the Société Immobilière Florimont, which date from 1930, and the three of the Société Immobilière Sur Collonges (1933) – all based in Lausanne – as well as SI Rieu-Soleil SA in Geneva (1973) and Diversa SA (1930) in Fribourg.
The nine are now united under the umbrella of Profima Société Immobilière et de Participacions in Geneva, which dating back to 1926 is the oldest Vatican holding in Switzerland.
According to an April 27 extract from the public Swiss commercial registry, the total value of the companies amounts to 46.6 million Swiss francs, or 44.2 million euros.
That figure, however, represents the companies’ historic worth, from the time Vatican officials invested in them money that in large part came from the compensations the Holy See received for the loss of the Papal States in the Lateran Pacts with Italy.
The significance of the Vatican company mergers in Switzerland is that the Administration of the Patrimony of the Apostolic See, or APSA – the Vatican ‘central bank’ – has now concentrated its overseas investments in just three companies: Profima in Switzerland, Grolux Investments in the United Kingdom and Sopridex SA in France.
The mergers also mean fewer company directors and hence lower costs.
What’s on in South Australia – SA
Captioning Studio offers regular captioned performances with producers and venues in South Australia. You’ll find details of venues and a complete list of upcoming GoTheatrical captioned performances below. Please check back regularly as we are constantly adding new venues and performances.
Devoteam SA is a France-based information technology (IT) consulting company. It provides advisory services on marketing, telecom and systems security relating to IT, along with solution and system integration, project management, application development, outsourcing, implementation and customer support. The Company's main clients include operators in industry and distribution, banking and insurance, utilities and the public sector, as well as the telecommunication industry. Devoteam SA is present in more than 20 countries, including: Austria, Belgium, the Czech Republic, Denmark, France, Germany, Norway, Sweden, Poland, the Netherlands, Spain, Switzerland, the United Kingdom, the United Arab Emirates, Saudi Arabia, Turkey and Russia, among others.
|Last Annual||December 31st, 2020|
|Last Interim||June 30th, 2021|
|Incorporated||October 25, 1995|
|Public Since||October 28, 1999|
|No. of Shareholders:||n/a|
|No. of Employees:||7,623|
|Sector||Technology|
|Industry||Software & IT Services|
|Index|
|Exchange||Euronext - Paris|
|Shares in Issue||8,193,775|
|Free Float||(0.0%)|
|Eligible for||✓ ISAs, ✓ SIPPs|
|Address||73 rue Anatole France, LEVALLOIS-PERRET, 92300, France|
|Web||https://france.devoteam.com/|
|Phone||+33 1 41494848|
|Contact||Vivien Ravy (Director Group Controlling & Investor relations)|
|Auditors||Grant Thornton|
As of 22/12/21, shares in Devoteam SA are trading at €168.5, giving the company a market capitalisation of £1.15bn. This share price information is delayed by 15 minutes.
Shares in Devoteam SA are currently trading at €168.5 and the price has moved by 59.56% over the past 365 days. In terms of relative price strength - which takes into account the overall market trend - the Devoteam SA price has moved by 32.87% over the past year.
Of the analysts with advisory recommendations for Devoteam SA, there are currently 0 "buy", 3 "hold" and 0 "sell" recommendations. The overall consensus recommendation for Devoteam SA is Hold. You can view the full broker recommendation list by unlocking its StockReport.
Devoteam SA is scheduled to issue upcoming financial results on the following dates:
Devoteam SA does not currently pay a dividend.
To buy shares in Devoteam SA you'll need a share-dealing account with an online or offline stock broker. Once you have opened your account and transferred funds into it, you'll be able to search and select shares to buy and sell. You can use Stockopedia’s share research software to help you find the kinds of shares that suit your investment strategy and objectives.
Shares in Devoteam SA are currently trading at €168.5, giving the company a market capitalisation of £1.15bn.
Here are the trading details for Devoteam SA:
Shares in Devoteam SA are currently priced at €168.5. At that level they are trading at a 9.27% premium to the analyst consensus target price.
Analysts covering Devoteam SA currently have a consensus Earnings Per Share (EPS) forecast of 6.117 for the next financial year.
An important predictor of whether a stock price will go up is its track record of momentum. Price trends tend to persist, so it's worth looking at them when it comes to a share like Devoteam SA. Over the past six months, the relative strength of its shares against the market has been 41.08%. At the current price of €168.5, shares in Devoteam SA are trading at 31.49% against their 200 day moving average. You can read more about the power of momentum in assessing share price movements on Stockopedia.
The Devoteam SA PE ratio based on its reported earnings over the past 12 months is 27.44. The shares are currently trading at €168.5.
The PE ratio (or price-to-earnings ratio) is the one of the most popular valuation measures used by stock market investors. It is calculated by dividing a company's price per share by its earnings per share.
The PE ratio can be seen as being expressed in years, in the sense that it shows the number of years of earnings which would be required to pay back the purchase price, ignoring inflation. So in general terms, the higher the PE, the more expensive the stock is.
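As a quick illustration with the figures quoted on this page (price €168.5, trailing PE 27.44, consensus EPS forecast 6.117), the arithmetic looks like this:

```python
# PE arithmetic with the numbers quoted above.
price        = 168.5          # EUR, as of 22/12/21
trailing_pe  = 27.44
forecast_eps = 6.117          # analyst consensus EPS for the next financial year

implied_trailing_eps = price / trailing_pe     # ~6.14 EUR of trailing earnings per share
forward_pe           = price / forecast_eps    # ~27.5 on forecast earnings
print(round(implied_trailing_eps, 2), round(forward_pe, 2))
```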
Here are the top five shareholders of Devoteam SA based on the size of their shareholding:
• 81% of SA respondents felt their low income would prevent them from being able to spend time with family and friends over the holiday period.
ACOSS, SACOSS and the SA Anti-Poverty Network are meeting with SA Mayors today to continue their campaign calling on the Federal Government to increase Newstart (the government payment for people locked out of paid work). They are also calling on the federal Labor Party to commit to increasing the payment at their upcoming conference in Adelaide.
The online survey of 461 people receiving payments (mostly Newstart, at 72% of respondents, or Youth Allowance, at 12% of respondents) ran from November 14 – 25, 2018. The survey included 72 respondents from SA (16% of respondents). The survey is a sample only and is not representative.
“While many are counting down to the holiday season, it is filling people on low incomes with dread.
“With nothing to spare already, it’s often impossible to find money for gifts, for food to contribute at social events, or transport to get to them. Too many people on low incomes, such as Newstart and Youth Allowance recipients, find themselves feeling especially isolated at this time of year.
“Newstart, the payment for people locked out of paid work, has not been increased in real terms for 24 years and is just $39 a day, which is simply not enough to cover the basics of life.
“Many South Australian households face cost of living pressure at the best of times and we know Christmas can be a stressful and isolating time for many South Australians living in poverty.
Several students in the Black Student Alliance (BSA) presented concerns regarding Students’ Association’s support for minorities at Monday’s meeting.
BSA president Amanda John referenced a photo that circulated on Twitter of a past SDSU student in blackface dressed as Colin Kaepernick. In response, John said BSA members would like to see more support from SA. The students from BSA also expressed they would like to see senators present at more of their meetings and events.
Programming and Public Relations Chair Alex Farber said many SA members are involved on campus, but try to do as much as they can to support student organizations.
“Your concerns as BSA show that we aren’t communicating our values at SA well because we do support your student organization,” Sen. Nick Lorang said.
Senators discussed solutions to this problem and planned to form a student support system task force, which will address student concerns.
During open forum, Chief University Librarian Kristi Tornquist presented on the Briggs Library renovation. One update they hope to make soon is adding a 24-hour access to the Writing Center next summer.
Also, there are plans to fix the entrance because many people with disabilities can’t enter the library due to the slope of the entrance, Tornquist said. One donor is also providing funds to fix the clocks.
In new business, Resolution 17-06-R showing SA’s support for diversity and inclusion was tabled indefinitely and reintroduced as Ordinance 17-01-O, which will be discussed in the future.
Additionally, Senators approved the constitution for the College Diabetes Network.
The next SA meeting will be at 7 p.m. Monday, Nov. 6, in the Lewis and Clark room of The Union.
Beajaye Sa, in Las Vegas, Nevada, age 52
Aliases:
Beajaye Sm Arreola, Beajaye Sm Sacamos, Beajaye Sacamos, Beajaye S Sacamos, Sacamos B Arreola, B S Arreolasacamos, Beajay Sacamos, Beajaye Arreola
Current Address:
2860 S Miller Ln, Las Vegas, NV 89117
Lived At:
92-7004 Kahea St, Kapolei, HI 96707
Profile Summary:
Beajaye Sa was born in 1969 and is currently 52 years old. Beajaye currently lives at 2860 S Miller Ln, Las Vegas, NV 89117. Relatives & associates include Casey Marie, Steve Sacamos and Mary Arreola. Beajaye Sa's phone number is (702) 405-0037.
One of the largest supply chain finance programmes in the world has proved so popular with suppliers, there isn’t enough funding in place yet to keep suppliers satisfied – though perhaps, soon, there will be.
What’s more, the hugely successful scheme was implemented in near-record breaking time. That, in fact, was one of a number of reasons why steel group ArcelorMittal South Africa won the Supply Chain Finance Award for the manufacturing and industrial category last December.
And yet the whole project came about almost by accident.
Heinrich Pretorius had been in the treasury manager role at ArcelorMittal SA for a little more than a year after joining the steelmaker from KPMG when the head of accounts payable received an email from Propell, a business that offers supply chain finance programmes to companies in Africa, in partnership with PrimeRevenue.
“At that time, I was quite keen on implementing various kinds of working capital initiatives,” Pretorius recalls. “I really, really liked what they brought to the table.” He was sure that the scheme would be appealing to ArcelorMittal SA’s many small suppliers. Having carefully scrutinised it, “I couldn’t find any issue with it,” he says. “It kind of looked too good to be true.”
What was appealing was the fact that it could give ArcelorMittal SA a cash flow benefit by extending payment terms while helping cash-strapped suppliers who don’t have ready access to affordable capital.
“The fact that there’s a benefit for both parties was quite a draw for us,” Pretorius says.
The proposal was examined by the CFO and then presented to the chief executive in early 2015. Within five minutes the CEO said, “You’ve got three months to implement.”
“The success has been self-fulfilling”
Three months. “The harder the targets, the quicker it will get done,” Pretorius says. “I had to drive the team very, very hard. I have a very competent team around me. It was really focused: weekly steering committee meetings with specific actions that had to be met – and very harsh rules if targets slipped. I probably gave up sleep for a while, but it was a very driven project.”
So much so that, reputedly, it’s been the second-fastest SCF programme implementation worldwide. Things slowed down a bit after that, however. “This was a brand new concept in South Africa,” Pretorius says, “So obviously there was a bit of scepticism at the time.” But the company kept the explanation of the scheme quite simple – and promised suppliers that, if they didn’t think the programme worked the way ArcelorMittal SA said it would, then they would put suppliers back on their original payment arrangements. “To date, we haven’t had anybody come off the programme,” Pretorius says.
The company has actually negotiated a deal with Barclays to allow it to go over 100% of the agreed facility to accommodate some of the suppliers over the month-end.
The steel industry – then and now – had been suffering from volatility and overcapacity so suppliers were tempted by the opportunity to get access to additional liquidity. In fact, the company made another commitment to suppliers to win them over: the invoices of suppliers joining the programme would be processed within five days of receipt. That’s an immediate benefit for suppliers, who can tell within a week if there is a query or delay on their invoice and then deal with it.
And with ArcelorMittal SA’s payment terms now being 60 days from month-end, suppliers can typically get payment 75 days earlier than they otherwise would. Pretorius says the scheme was quite slow to get going at first, “But now, we’re literally emailed on a daily basis by suppliers who want to join the programme. We had to go and market it at first, but now it’s just ‘word of mouth’. The smaller suppliers are knocking on our door to join. The success has been self-fulfilling.”
So successful has the programme been that it is now running at full capacity in terms of the funding available. The initial target was for the facility to reach 50% capacity utilisation by December 2015. Even that target was “a bit aggressive”, Pretorius recalls. The funding was actually around 80% utilised by that time, 96% by the following January and fully-utilised by February. The company has actually negotiated a deal with Barclays, the finance provider, to allow it to go over 100% of the agreed facility to accommodate some of the suppliers over the month-end.
Now, with about 50 suppliers on the programme and another 70 more queuing up to join, Pretorius is negotiating to double-up the financing available: “We’re the highest-utilised SCF programme in the world. We’re extremely proud of our programme and we want to find a way to extend it,” he says. “There’s always a point where you reach saturation and that’s where we’re at. We now want to grow the programme and so we’re working on phase two.”
When the programme was launched, it was made available to all of ArcelorMittal SA’s suppliers though, as Pretorius says, the vision was always that it would be of most help to the steelmaker’s smaller suppliers. “And that’s also where we expect to grow this facility,” he adds. “For [the benefit of] smaller suppliers.”
Financial benefits
Pretorius won’t go into details of how suppliers benefit from cheaper financing costs, other than to say that SMEs ordinarily might be able to borrow at a few hundred basis points over prime (the prime lending rate in South Africa is currently 10.50%), but with the SCF facility they can typically get access to cash at less than prime.
“This is not an IT implementation, it’s not a financial implementation, it’s not a legal implementation. It’s all of it: it’s a total business implementation.”
Heinrich Pretorius, ArcelorMittal South Africa
Tech firm Propell, which works in partnership with PrimeRevenue, says on its website that one of the steelmaker’s suppliers, Premier Logistics Solutions, reduced its financing cost by 16% per annum compared with traditional invoice discounting rates. “We’ve been greatly assisted by Barclays in terms of our financing terms, to make sure that it’s not an expensive funding line,” Pretorius says.
Of course, ArcelorMittal SA is benefitting, too. “We managed to unlock a billion rand [£60m; €69m] in cash flow for the company,” he says.
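The supplier-side arithmetic can be sketched as below. Only the roughly 75-day payment acceleration and the 10.50% prime rate come from the article; the SCF and traditional discounting rates are illustrative assumptions (the article says only that SCF pricing is below prime, while standalone SME borrowing is typically a few hundred basis points above it).

```python
# Illustrative supplier economics: cost of funding one invoice over the
# ~75 days by which SCF accelerates payment. Rates are assumptions except
# prime (10.50%), which is quoted in the article.
invoice_zar = 1_000_000
days_early  = 75
prime       = 0.1050
scf_rate    = 0.0950     # assumed: "less than prime"
trad_rate   = 0.1350     # assumed: prime + ~300bp for standalone SME borrowing

def carry_cost(rate: float) -> float:
    return invoice_zar * rate * days_early / 365

saving = carry_cost(trad_rate) - carry_cost(scf_rate)
print(f"saving per R1m invoice: R{saving:,.0f}")   # ~R8,219 per payment cycle
```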
“Just a few hiccups”
What was the most important lesson Pretorius learned during the process? “You need to surround yourself with the right people. We had a multi-tasked team: the lawyers, procurement, IT, Treasury, our financial controllers were involved. This is not an IT implementation, it’s not a financial implementation, it’s not a legal implementation,” he insists. “It’s all of it: it’s a total business implementation.”
Having headed up not only such a successful implementation but the second-fastest, as well, there can’t have been many things that went wrong. “Just a few hiccups,” he says. “This is a programme we’re very, very proud of as a company. We’re very grateful for the award that we received. It was nice to get the recognition, especially as a local South African entity.”
But what, we ask, would he do differently, if he were to go back and start again? “I’d find out who the fastest implementation was,” says Pretorius, “and beat them by one day.”
Can you match ArcelorMittal SA’s achievements? Entries are now being accepted for the 2017 Supply Chain Finance Awards. Deadline: Friday 1st October – Awards ceremony at the Supply Chain Finance Community Forum in Amsterdam on 29th November.
I’ve been trying to find the time to write about that February jobs report. Trying, because this was an important report and a full run through what’s going on takes some time.
Most would have heard two things about the report — probably from Peter Martin — that the report was biased up by population growth and a technical issue to do with sample rotation. In this post, I’ll try to show what these issues amount to.
The headline facts are (by now) well-known — the household survey reported that employment increased by an eye-popping 71.5k in February, thanks to a 17.8k increase in full-time employment and a 53.6k increase in part-time employment. This was the largest monthly increase in employment since July 2000 (+82.9k), due to the fact that there was a decent contribution from both full-time and part-time employment.
Like the Olympics, we ought to expect ‘records’ every four years or so — as a larger population inflates ‘thousands of jobs’ monthly results. Additionally, population growth these last two months has accelerated to ~0.2%m/m from ~0.13%m/m, so we must also control for that when thinking about the labour market.
Full-time employment growth was ~0.2%m/m and part-time growth was ~1.5%, making for total employment growth of ~0.6%m/m.
Expressing the change as a change in the proportion of the population engaged in employment removes the population's level and growth and scales for the respective shares of part-time and full-time work — the above chart shows the change in the proportion of the population engaged in either type of employment (full-time or part-time).
In these terms February was still a pretty good month, and still a near term record: full-time employment held about steady as a share of the population, while part-time engagement rose by ~0.25%, taking employment as a share of the population up by ~0.25%. That’s pretty good — not awesome, but still pretty good.
The reason we need to be careful about such things is that the employment survey is a household survey — not an establishment survey like the US non-farm payrolls survey.
The survey unit is a physical abode, and the responses are then scaled up according to population benchmarks. If the population were estimated to be lower or growing more slowly, reported employment growth would have been lower in the month.
Assumed population growth accelerated from ~0.13%m/m to ~0.2%m/m in Q1’13 (who knows what actually happened?!) so if one wishes to compare the jobs-per-month estimates across time, some adjustment must be made. Once these adjustments are made (by looking at things as a share of the population) the data suggests that things are picking up in the part-time sector, but that full-time jobs are about flat.
Now for the sample stuff.
This is trickier to explain: the survey is like a deck of cards, and each month we take a card off the top and place a new one at the bottom. The new card is now the 8th card (with the 2nd card becoming the top card, the 8th the 7th, and so on) and it stays in the deck for the next seven months. Some months, the card you pull off is very different to the card you add to the bottom; when this occurs the estimate of employment may change sharply. This is what happened in February.
Looking at the gross flows data, we can track those who remained in the survey (in terms of the above example, the 2nd to 8th cards) — the residual is therefore (with some quibbles) the new sample (or the ‘new card’ in terms of our example).
Now we can split this to see what came from the old sample, and what came from the new sample: to frame this in terms of the SA'd numbers we see discussed, I've made the assumption that the seasonal factors for the total sample are the same — which is certainly wrong, but probably not too wrong.
Looking at the above chart, you can see that sample rotation has been a part of the recent improvement in employment estimates. Indeed, the unmatched part of the sample explains all of the employment growth over the past three months. The matched sample has reported job shedding.
My (roughly seasonally adjusted) matched sample employment estimate has declined by ~367k to 9180k since the November peak, while unmatched employment has risen by 451k to 2448k, for a net gain of 84k jobs (which is the total SA'd gain over those three months — showing that my rough seasonal adjustment isn't too far off!).
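Restated as a sketch (in thousands), the decomposition is simple accounting over those three months:

```python
# Three-month decomposition of the (roughly SA'd) employment change, in
# thousands, using the figures quoted in the text.
matched_change   = -367   # continuing (matched) sample since the November peak
unmatched_change = +451   # incoming rotation groups over the same period
net_change = matched_change + unmatched_change
print(net_change)   # 84 -> matches the total SA'd gain over the three months
```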
So what’s the truth about the labour market? Is it adding jobs (as the unmatched sample suggests) or losing jobs (as the matched sample suggests)?
Who knows, the survey isn’t designed to measure the number of jobs. What it’s designed to measure is the unemployment rate. It does a pretty good job of that.
According to the ABS, the unemployment rate has been stable at ~5.4% for some time, which is better than most expected: most thought it would have kept rising. It is, however, not so great given that the headline number hides a multitude of sins.
Sales data report message
0. INTRODUCTION
This specification provides the definition of the Sales data report message (SLSRPT) to be used in Electronic Data Interchange (EDI) between trading partners involved in administration, commerce and transport.
1. SCOPE
1.1. Functional Definition
A message to enable the transmission of sales data related to products or services, such as corresponding location, period, product identification, pricing, monetary amount, quantity, market sector information, sales parties. It enables the recipient to process the information automatically and use it for production, planning, marketing, statistical purposes, etc.
1.2. Field of Application
The Sales data report message may be used for both national and international trade. It is based on universal commercial practice and is not dependent on the type of business or industry.
1.3. Principles
The message intent is to provide sales information for one or more locations for a series of products within a specified time period.
The message is transmitted either from a seller to its supplier or from a headquarters, coordination or distribution centre to a manufacturer, supplier or a third party, such as a marketing institute for statistical analysis. It allows the recipient to know for a specific product the:
Though the message is location driven, it is conceivable that the recipient will process the data to derive information based on other variables, such as a specific product and all its related sales locations and/or addresses, or periodic turnover and the related locations.
2. REFERENCES
See UNTDID, Part 4, Chapter 2.6 UN/ECE UNSM - General Introduction, Section 1.
3. TERMS AND DEFINITIONS
See UNTDID, Part 4, Chapter 2.6 UN/ECE UNSM - General Introduction, Section 2.
4. MESSAGE DEFINITION
4.1. Data Segment Clarification
This section should be read in conjunction with the Branching Diagram and the Segment Table which indicate mandatory, conditional and repeating requirements.
The following guidelines and principles apply to the whole message and are intended to facilitate the understanding and implementation of the message:
All specified dates/times should be in the format 'yymmdd'/'hhmm' unless all parties involved in the transaction agree that there is a functional requirement for an alternative format. Periods should be specified as whole numbers representing the required period as indicated in the format qualifier (weeks, months, etc.).
Where a choice of code or text is given, only the code element should be used wherever possible. Due to the high volume of data that will usually be transmitted in the Sales Data Report message, it is highly recommended to use codes for products and locations.
Conditional data that is not required in the message should not be included.
Care must be taken that the segment qualifiers in dependent segments do not conflict with the segment qualifier of the trigger segment of a group.
4.1.1. Header section
Information to be provided in the Header section:
0010 UNH, Message header
A service segment starting and uniquely identifying a message. The message type code for the Sales data report message is SLSRPT.
Note: Sales data report messages conforming to this document must contain the following data in segment UNH, composite S009: data element 0065 SLSRPT; 0052 D; 0054 96B; 0051 UN.
0020 BGM, Beginning of message
A segment by which the sender must uniquely identify the sales data report by means of its type and number.
0030 DTM, Date/time/period
A segment specifying general dates and, when relevant, times related to the whole message. The sales report preparation date and the sales period covered by the report must be specified using this segment.
0040 Segment Group 1: NAD-SG2
A group of segments identifying the parties with associated information.
0050 NAD, Name and address
A segment identifying names and addresses of the parties, in coded or clear form, and their functions relevant to the sales data report. Identification of the sender of the report and the recipient is mandatory for the sales data report message. It is recommended that where possible only the coded form of the party ID should be specified e.g. the sender and receiver of the report are known to each other, thus only the coded ID is required, but when a new address might have to be clearly specified, this should be done preferably in structured format.
0060 Segment Group 2: CTA-COM
A group of segments giving contact details of the specific person or department within the party identified in the NAD segment.
0070 CTA, Contact information
A segment to identify a person or department, and their function, to whom communications should be directed.
0080 COM, Communication contact
A segment to identify a communications type and number for the contact specified in the CTA segment.
0090 Segment Group 3: RFF-DTM
A group of segments for giving references and where necessary, their dates, relating to the whole message e.g. contract number.
0100 RFF, Reference
A segment identifying the reference by its number and where appropriate a line number within the document.
0110 DTM, Date/time/period
A segment specifying the date/time related to the reference.
0120 Segment Group 4: CUX-DTM
A group of segments specifying the currencies and related dates/periods valid for the whole sales data report. The Segment Group 4 may be omitted in national applications but will be required for international exchanges.
0130 CUX, Currencies
A segment identifying the currencies specified in the sales data report e.g. the currency in which the sales amounts or product prices are expressed in. A rate of exchange may be given to convert a reference currency into a target currency.
0140 DTM, Date/time/period
A segment specifying the date/time/period related to the rate of exchange.
4.1.2. Detail section
Information to be provided in the Detail section:
0150 Segment Group 5: LOC-DTM-SG6-SG7
A group of segments providing details of the location for which sales are being reported and the period or sub-period during which the sales took place. There must be at least one occurrence of Segment Group 5 within a sales data report.
0160 LOC, Place/location identification
A segment indicating in coded form the location to which the sales data being reported apply e.g. a retail outlet, a geographic area.
0170 DTM, Date/time/period
A segment identifying the sub-period during which the sales being reported occurred if different than the period specified in the heading section e.g. within a biweekly sales data report as specified in the heading section, sales are reported in sub-periods of one week.
0180 Segment Group 6: RFF-DTM
A group of segments giving references at an intermediate level relating to several lines items, e.g. an invoice, shipment, notification, etc.
0190 RFF, Reference
To specify a reference.
0200 DTM, Date/time/period
To specify date, and/or time, or period.
0210 Segment Group 7: LIN-PIA-IMD-PAC-RFF-DOC-ALI-MOA-PRI-GIN-SG8
A group of segments providing details per location and period of the individual products sold in terms of product family or group, promotional flags, total sale monetary amount and sale price.
0220 LIN, Line item
A segment identifying the line item by the line number and configuration level, and additionally identifying the product or service that has been sold.
0230 PIA, Additional product id
A segment providing either additional identification to the product specified in the LIN segment or providing any substitute product identification. In the Sales Data Report the PIA segment can be used when a product specified in LIN has to be associated with a group or family of products whose identity could be specified in PIA.
0240 IMD, Item description
A segment for describing the product in the line item.
0250 PAC, Package
A segment specifying the number and type of packages.
0260 RFF, Reference
A segment for referencing documents or other numbers pertinent to the line item.
0270 DOC, Document/message details
A segment identifying and providing information relating to documents.
0280 ALI, Additional information
A segment indicating that the line item is subject to special conditions owing to origin, customs preference, embargo regulations or commercial factors. In the Sales Data Report the ALI segment can be used to specify promotional flags, e.g. to indicate what type of promotion if any was in effect when the product specified in LIN was sold.
0290 MOA, Monetary amount
A segment specifying any monetary amounts relating to the product. For the sales data report the MOA segment can be used to express the total monetary amount of the product sold in one location for one period.
0300 PRI, Price details
A segment to specify the price type and amount. The price used in the calculation of the total sales monetary amount will normally be the selling price.
0310 GIN, Goods identity number
A segment to specify identity numbers related to units of the product identified in the LIN segment, e.g. serial number.
0320 Segment Group 8: QTY-MKS-NAD
A group of segments providing split delivery sales parties and relevant quantities information.
0330 QTY, Quantity
A segment identifying the product quantity, i.e. quantity sold.
0340 MKS, Market/sales channel information
To identify market and sales channel details for products and services information.
0350 NAD, Name and address
To specify the name/address and their related function, either by C082 only, and/or unstructured by C058, or structured by C080 thru 3207.
0360 UNT, Message trailer
A service segment ending a message, giving the total number of segments in the message and the control reference number of the message.
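To make the structure concrete, the sketch below assembles a skeletal SLSRPT message as a string in Python. The segment order follows section 4.1, but the qualifier and code values (document name code, date/time qualifiers, location and quantity qualifiers, party identifiers) are illustrative placeholders rather than values prescribed by this specification.

```python
# Skeletal SLSRPT assembled as a plain string. Segment terminators are "'",
# element separators "+", component separators ":". All qualifier/code
# values below are illustrative, not normative.
segments = [
    "UNH+ME000001+SLSRPT:D:96B:UN",   # message header (composite S009)
    "BGM+73E+SDR001",                 # beginning of message: report type + number
    "DTM+137:19970315:102",           # report preparation date
    "DTM+356:199702:610",             # sales period covered
    "NAD+SU+5412345000013::9",        # sender (supplier), coded ID
    "NAD+BY+5412345500004::9",        # recipient (buyer), coded ID
    "CUX+2:USD",                      # currency of the reported amounts
    "LOC+162+0147",                   # sales location, coded
    "LIN+1++4000862141404:EN",        # line item: product sold (EAN)
    "PRI+AAA:12.70",                  # selling price
    "MOA+66:2540",                    # total sales amount for the period
    "QTY+153:200",                    # quantity sold
]
body = "'".join(segments) + "'"
# UNT closes the message: total segment count (UNH..UNT inclusive) + message ref.
message = body + f"UNT+{len(segments) + 1}+ME000001'"
print(message)
```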
4.2. Data segment index (Alphabetical sequence by tag)
4.3. Message structure
4.3.1. Segment table
FlexRay – Wikipedia
While specific network implementations vary, typical FlexRay networks have a cabling impedance between 80 and 110 ohms, and the end nodes are terminated to match this impedance. Illustration of a static segment with 3 ECUs transmitting data to 4 reserved slots.
FlexRay uses unshielded twisted pair cabling to connect nodes together.
The smallest practical unit of time on a FlexRay network is a macrotick. For security-critical applications, the devices connected to the bus may use both channels for transferring data. FlexRay is commonly used in a simple multi-drop bus topology that features a single network cable run that connects multiple ECUs together. This article covers the basics of FlexRay. Other nodes on the network wait for the sync frames to be broadcast, and measure the time between successive broadcasts in order to calibrate their internal clocks to the FlexRay time.
FlexRay Automotive Communication Bus Overview
Adoption of a new networking standard in complex embedded designs like automobiles takes time.
So in the worst case the two middle bits are correct, and thus the sampled value is correct.
The startup frames are analogous to a start trigger, which tells all the nodes on the network to start.
While FlexRay will be solving current high-end and future mainstream in-vehicle networking challenges, it will not displace the two dominant in-vehicle standards, CAN and LIN. Solutions for FlexRay Networking.
This allows very high-speed control rates to be realized on a FlexRay network. Target Group This E-Learning module is intended for all those who wish to gain a better understanding of FlexRay communication technology.
A FlexRay frame consists of three parts: header, payload, and trailer. Once the network is started, all nodes must synchronize their internal oscillators to the network's macrotick.
Embedded networks are different from PC-based networks in that they have a closed configuration and do not change once they are assembled in the production product. The first series production vehicle with FlexRay was, at the end of 2006, the BMW X5 (E70), enabling a new and fast adaptive damping system.
The segment is a fixed length, so there is a limit of the fixed amount of data that can be placed in the dynamic segment per cycle. The Field Bus Exchange Format, or FIBEX file is an ASAM-defined standard that allows network designers, prototypers, validaters, and testers to easily share network parameters and quickly configure ECUs, test tools, hardware-in-the-loop simulation systems, and so on for easy access to the bus.
As synchronization is done on the voted signal, small transmission errors during synchronization that affect the boundary bits may skew the synchronization no more than 1 cycle. Most embedded networks have a small number of high-speed messages and a large number of lower-speed, less-critical networks.
FlexRay_E: Learning Module FlexRay
To prioritize the data, minislots are pre-assigned to each frame of data that is eligible for transmission in the dynamic segment. CAN, for example, used an arbitration scheme where nodes will yield to other nodes if they see a message with higher priority being sent on a bus. If two nodes were to write at the same time, you end up with contention on the bus and data becomes corrupt. To start a FlexRay cluster, at least 2 different nodes are required to send startup frames.
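As a rough illustration of how the static and dynamic segments combine into a communication cycle, the sketch below computes a cycle budget from a handful of parameters. Every value is an illustrative assumption, not a number mandated by the FlexRay specification; the symbol window and network idle time (NIT) are lumped into a single remainder term.

```python
# Back-of-envelope FlexRay cycle budget; every parameter here is an
# illustrative assumption, not a value from the specification.
macrotick_us   = 1.0    # assumed macrotick duration (microseconds)
static_slots   = 60     # number of static slots
static_slot_mt = 50     # macroticks per static slot
minislots      = 180    # minislots in the dynamic segment
minislot_mt    = 5      # macroticks per minislot
symbol_nit_mt  = 100    # symbol window + network idle time (NIT), lumped

static_us  = static_slots * static_slot_mt * macrotick_us
dynamic_us = minislots * minislot_mt * macrotick_us
cycle_us   = static_us + dynamic_us + symbol_nit_mt * macrotick_us

print(f"cycle = {cycle_us/1000:.1f} ms  ->  {1e6/cycle_us:.0f} cycles per second")
```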
The ECUs make use of this idle time to make adjustments for any drift that may have occurred during the previous cycle. Most applications require data to be represented in real decimal values with units, scaling, and limits.
My research interests include the relationship between attention, cognition, and intentional action.
Publications
We present a novel approach to temporal clustering of patient treatment information based on the semantic similarity of longitudinal histories.
Wei-Nchih Lee, Will Bridewell, Amar K. Das
Social Network Analysis of Physician Interactions: The Effect of Institutional Boundaries on Breast Cancer Care
We looked at registry-based data on breast cancer care at two neighboring healthcare institutions with a specific focus on whether organizational boundaries determine the physicians that a patient will see. From an initial patient-oriented data set, we developed a social network of physicians, modeling their interactions over the course of the provided treatments.
Will Bridewell, Amar K. Das
We propose that ontology pruning be used to remove unneeded concepts so that the resulting ontology better reflects the semantic distinctions of a particular domain. In this paper, we present a novel pruning strategy for drug ontologies.
Wei-Nchih Lee, Will Bridewell, Amar K. Das
Network visualization of temporal data offers insights into the practical application of treatment guidelines. Using publicly …
Will Bridewell, Wei-Nchih Lee, Amar K. Das
Science as an Anomaly-Driven Enterprise: A Computational Approach to Generating Acceptable Theory Revisions in the Face of Anomalous Data
To determine whether anomaly-driven approaches to discovery produce more accurate models than the standard approaches, we built a program called Kalpana. We also used Kalpana to explore means for identifying those anomaly resolutions that are acceptable to domain experts.
Will Bridewell
An explanation generator implementing (part of) John Stuart Mill’s Method of Induction was constructed that divides the available data into meaningful subsets to better resolve the anomalies. We found that using relevant subsets of data can provide plausible explanations not generated when using all the data and that identifying plausible explanations can help select among equally possible revisions.
Will Bridewell, Bruce G. Buchanan
Automatically identifying findings or diseases described in clinical textual reports requires determining whether clinical observations are present or absent. We evaluate the use of negation phrases and the frequency of negation in free-text clinical reports.
Wendy W. Chapman, Will Bridewell, Paul Hanbury, Gregory F. Cooper, Bruce G. Buchanan
We developed a simple regular expression algorithm called NegEx that implements several phrases indicating negation, filters out sentences containing phrases that falsely appear to be negation phrases, and limits the scope of the negation phrases.
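A toy re-implementation of the NegEx idea is sketched below. The trigger list and the five-token scope are illustrative simplifications of the published algorithm, which also distinguishes pre- versus post-negation triggers and pseudo-negation phrases.

```python
import re

# Illustrative trigger list; the real NegEx lexicon is much larger.
TRIGGERS = ["no", "denies", "denied", "without", "absence of", "ruled out"]
SCOPE = 5  # a finding within 5 tokens after a trigger is treated as negated

_trigger_re = "|".join(re.escape(t) for t in TRIGGERS)

def negated(sentence: str, finding: str) -> bool:
    """True if `finding` falls inside the scope of a negation trigger."""
    pattern = (r"\b(?:" + _trigger_re + r")\b"
               + r"(?:\W+\w+){0,%d}?" % (SCOPE - 1)
               + r"\W+" + re.escape(finding.lower()) + r"\b")
    return re.search(pattern, sentence.lower()) is not None

print(negated("The patient denies chest pain on exertion", "chest pain"))  # True
print(negated("Chest pain without radiation to the arm", "chest pain"))    # False
```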
For this geomorphology research project, I selected a segment of the Oregon Klamath Mountains for prime attention, although for some tasks I also looked at adjacent areas. The following map displays the area, on which I drew terrane boundaries, labeled the terranes, and assigned stratigraphic symbols to each (the first letter(s) denote(s) their ages in terms of geologic periods, with Tr = Triassic; J = Jurassic; K = Cretaceous; Tel = Tertiary; Q = Quaternary). I digitized the boundaries on the map to allow "cookie-cutting" of geographic data in other maps, so that I could group the data into categories for terrane analysis.
As a visual aid to realizing that there are distinctly different terranes, we reproduce here a simple computer-generated map (unlabelled) that assigns a different color to each terrane. Match with the map above. The easternmost terrane shown is Jrv, which stands for Rogue Valley (Western Triassic-Paleozoic may not be a terrane). As a guide, the Brown stands for Rogue Valley, the Dark Blue, Medium Blue and Blue Gray denote Dry Butte, Smith River, and Yolla Bolly respectively; the Light Blue marks Elk River, the Purplish tone specifies Pickett Peak, the Brownish-red unit against the ocean is Sixes River and the Medium Tan represents the Tertiary units. Yellow denotes Quaternary deposits - both river and lowlands fill; Red near the right corresponds to intrusive granites (Quartz Monzonites).
By comparing the above terranes map with the geologic-units map for all of western Oregon (below), you may be able to see that there is some correspondence between some units and their enclosing terranes, a broader correspondence for others, a number of individual units that are scattered about within the terranes, and some units that seem to cross over or overlap terrain boundaries.
Because there are so many units [generally, at the formation level], I don't show the map's legend. Also, in the legend sequence are units that aren't present in the Klamaths. Two examples of exceptions are the deep purple unit (Jui), which is actually outliers of ultramafic (ophiolitic) lavas, once part of oceanic crust, which appear similar in the field but originated locally within their terranes, and the pinkish-red unit, which consists of intrusive rocks that invaded several terranes, already in place. One expects this general agreement between terranes and their constituent stratigraphic units (i.e., internal consistency), along with the exclusivity of some units to single terranes, inasmuch as terranes typically develop from rocks formed in different source areas at different times.
The Oregon Klamaths are among the most rugged landforms units in the state. We can gain a sense of their appearance by inspecting this 3-D perspective image of nearly all of Oregon, generated from DEM data:
Features of interest in this shaded map are: A = Klamath Mountains.; B = Coast Ranges; C = Willamette Valley (Portland at the north end); D = High Cascades; E = Mount Hood; F = Crater Lake; G = Paulina Mountains (Newberry Crater); H = Blue Mountains.; I = Abert Rim; J = Summer Lake area (the circular area is somewhat an artifact of the illumination)
You can get a feel for the appearance from the ground of several of the Klamath terranes through these photographs taken during the writer's time in the field:
The upper left scene shows terrain around Canyonville, with Yolla Bolly terrane in the foreground and Sixes River terrane in the distance. Next to it is a view from the coast of the Elk terrane. In left center is a segment of a canyon dissected by the Rogue River, in which sedimentary units of the Rogue Valley terrane (Jrv) are exposed. To its right is landscape typical of the higher elevations in the Klamath Mountains. The lower left scene is an exposure of serpentinite (the metamorphosed ophiolitic basalts associated with oceanic crust). The lower right shows alternating layers of siliceous (cherty) shales that constitute island arc sediments. Here they are part of the Yolla Bolly terrane.
The wildest part of the Klamaths is the Rogue River which cuts deep canyons into the rocks. This is one of America's premium whitewater rafting streams. This photo captures it ruggedness:
We proffer a preview of the topographic expression of the Klamath Mountains, which will be presented in Landsat format shortly, by this enlargement of the area made from the same DEM data. Notice Crater Lake (nearly circular on top of a mountain) in the upper right part. Another simple terrane map is placed below the DEM image. This map is a photo negative of a transparency (terrane boundaries in black) made to superpose onto a Landsat mosaic (discussed later).
In this visual version, we have trouble seeing any significant variations in terrain that call attention to noticeable terrane differences. Of course, we can vary the illumination directions to emphasize contrasted ridge and valley orientations. We did vary the illumination directions on this data set, giving several distinct expressions but, again, obvious patterns that correlate with the various terranes as mapped did not stand forth. However, when one learns where to look, after familiarization with terrane locations, strong hints of certain differences for several of the terranes are discernible. These differences can also be disclosed by appropriate analysis of topographic data in map-sheet or DEM formats and, in fact, this approach proved superior to visual differentiation.
The reason we postulate geomorphic differences is that each terrane consists of a group of rock types that normally differs from those of other nearby terranes - this is known from field work. Fault discontinuities also bound each one, at least partly. A terrane will respond to regional erosive action according to its mix of lithologies. Thus, any one terrane may develop landform characteristics that differ from its neighbors and can appear visually as separable. We should be cautious about expecting the differences to stand out sharply across terrane boundaries, because, as the landscape develops within any one terrane, the equilibrium forms tend to exert some influence on terrains outside the boundary. Nevertheless, the hope in testing the hypothesis of distinctive terrane-controlled topographic expression is that real differences do occur when each terrane is considered in its totality as compared to its neighbors.
In planning the geomorphic analysis, I set forth this strategy:
This effort is predicated on my being able to make a series of quantitative measurements that can be tested numerically and statistically to ascertain valid differences in the topographic character of each terrane. As it turned out, most of the measurements could not be made from Landsat as a test bed. In general, it is quite difficult to extract any of these measures with confidence from Landsat imagery. However, one overlapping SPOT image pair was available, yielding a stereo image and allowing limited recovery of measurable variables that depend on topographic variations. So, in the exposition that follows, Landsat's prime role is to help to confirm the terrane-terrain association by visual recognition rather than extraction of terrain parameters.
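As a sketch of the kind of quantitative terrain measurements involved, the following computes per-terrane slope and relief statistics from gridded elevations with NumPy. The DEM and the two-terrane mask are synthetic stand-ins for the real data, and the 30 m grid spacing is an assumption.

```python
# Per-terrane terrain statistics from a DEM grid; the data here are synthetic
# stand-ins for the real DEM and digitized terrane boundaries.
import numpy as np

rng = np.random.default_rng(42)
dem = rng.random((300, 300)).cumsum(axis=0) * 10.0   # synthetic elevations (m)
terrane = np.zeros(dem.shape, dtype=int)
terrane[:, 150:] = 1                                 # fake two-terrane partition

dy, dx = np.gradient(dem, 30.0)                      # assumed 30 m grid spacing
slope_deg = np.degrees(np.arctan(np.hypot(dx, dy)))

for tid in np.unique(terrane):
    sel = terrane == tid
    relief = dem[sel].max() - dem[sel].min()
    print(f"terrane {tid}: mean slope {slope_deg[sel].mean():.1f} deg, "
          f"relief {relief:.0f} m")
```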
The Brain of an IoT System: Analytics Engines and Databases
This is the fourth article in our series on building an IoT platform using open source components. In the earlier articles in this series, we looked at the basic components of an IoT system and discussed a messaging-based architecture which can serve as the nervous system for exchanging information between the various IoT components. Now, let us look at what could be considered the brain of an IoT system – the analytics engine, which decides what to make of the information, and the database(s) storing the information.
There are multiple ways of building this ‘nervous system and brain’; which specific approach we take depends on the use case and usually involves trade-offs between speed, size, cost, features, support, etc. In this article, we develop a configuration which is suitable for both real-time and non-real-time scenarios.
Choice of Analytics Engines and Databases
We have looked at the messaging architecture in the previous article. Now let us expand the scope further to add the details of the analytics engine, storage, and connectivity beyond the MQTT message broker.
IoT Platform: MQTT and Apache Storm
The data received from the MQTT broker may need further processing, such as adding a time-stamp (if not already present), identifying missing readings, filtering, etc. Tools like Apache Pig can be used for this. Another good alternative is Apache Storm, which provides preprocessing abilities on a limited scale but also provides a very good analytics system. A single instance of Apache Storm can handle both of these tasks; for a large system, deploying separate instances can be considered. Apache Storm also has support for MQTT (by means of an MQTT spout), which makes integration easy.
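As an illustration of this preprocessing step, here is a minimal Python sketch using the Eclipse Paho MQTT client. It shows only the validate-and-timestamp logic; the broker address and topic hierarchy are assumptions, and in the architecture described here this logic would typically live in an MQTT spout feeding Storm rather than in a standalone script.

```python
import json
import time

import paho.mqtt.client as mqtt


def on_message(client, userdata, msg):
    """Validate a raw reading and stamp it with its arrival time."""
    try:
        reading = json.loads(msg.payload)
    except ValueError:
        return  # drop malformed payloads rather than forwarding them
    if not isinstance(reading, dict):
        return
    reading.setdefault("timestamp", time.time())  # add a time-stamp if missing
    print(msg.topic, reading)  # hand off to the next stage here


client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("broker.example.com", 1883)  # assumed broker address
client.subscribe("sensors/#")               # assumed topic hierarchy
client.loop_forever()
```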
As mentioned elsewhere, the data in an IoT system should be stored at various points, both for historical archival purposes and to avoid unnecessary repeat computations. Instead of writing data directly to a database through an MQTT client, an alternative is to write it through Storm. Storm provides a mechanism called a bolt, which can be used to interface with various databases. For storing the incoming raw data and the preprocessed data, Cassandra or CouchDB (both from Apache) are good alternatives. The same databases can be used to store the results of the analytics and the reports. It is also a good idea to provide the ability to write to HDFS via a bolt, so that Hadoop can be used later for off-line batch processing of very large data sets. This combination of databases increases the complexity but gives more flexibility in real-life use cases.
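For concreteness, here is a minimal sketch of the core of such a database-writing step, using the DataStax Python driver for Cassandra. The contact point, keyspace, table, and column names are assumptions, and in the architecture above this insert would run inside a Storm bolt rather than as a standalone function.

```python
from cassandra.cluster import Cluster  # DataStax Python driver for Cassandra

# Assumed contact point and keyspace.
cluster = Cluster(["cassandra.example.com"])
session = cluster.connect("iot")


def store_raw(device_id, ts, payload):
    """Persist one raw reading; the table and columns are hypothetical."""
    session.execute(
        "INSERT INTO raw_readings (device_id, ts, payload) VALUES (%s, %s, %s)",
        (device_id, ts, payload),
    )
```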
The bolts in Apache Storm are the processing units and can be chained together to perform stepwise, complex calculations. Bolts can be stateless (to monitor a single event) or can maintain state for calculating rolling metrics using sliding windows, event correlations, etc. Apache Storm provides real-time analytics capabilities and hence is very suitable for IoT systems. It has a distributed architecture and manages the distribution of messages (data) itself, without requiring any external components.
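As a sketch of the stateful-bolt idea, the following Python class computes a rolling mean over a sliding window of the most recent readings. The window size is an arbitrary assumption, and a production topology would express this as a windowed bolt rather than a plain class.

```python
from collections import deque


class RollingMean:
    """Stateful bolt logic: rolling average over the last N readings."""

    def __init__(self, window_size=60):
        self.window = deque(maxlen=window_size)  # old readings fall off the end

    def update(self, value):
        """Absorb one reading and return the current rolling mean."""
        self.window.append(value)
        return sum(self.window) / len(self.window)
```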
Another powerful alternative is Apache Spark, which primarily supports a batch processing mode and provides only near-real-time analytics capabilities. A comparison of the two is out of scope here.
The visualization tools in the above diagram could be external custom tools or open-source tools like JasperReports. Similarly, the notification and alerting mechanisms could be third-party email clients and SMS service.
Scalability
The architecture proposed above is suitable for smaller systems, but it does not scale very well for large systems. In addition, MQTT does not provide any buffering mechanism. Both scalability and buffering are necessary when a large amount of data is coming in from a multitude of different sources. An intermediate messaging system like RabbitMQ or Apache Kafka can be used. Using such an intermediate broker between the MQTT broker and the analytics system helps improve overall system performance and provides easy scalability.
The following diagram shows this enhanced solution with Apache Kafka.
IoT Platform: MQTT, Apache Kafka, and Storm
Apache Kafka can be integrated with MQTT (typically through a bridge or connector), which makes this setup straightforward. Kafka distributes the data received from the MQTT broker to different Kafka consumers. For example, one Kafka consumer could send raw data to a database, and another Kafka consumer could send data to Storm for analytics.
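Here is a minimal sketch of one such consumer, using the kafka-python package. The topic name, broker address, and sink function are assumptions; because each pipeline subscribes under its own consumer group, the archiving consumer and the analytics consumer each receive a full copy of the stream.

```python
from kafka import KafkaConsumer  # kafka-python package

# A second consumer with group_id="analytics" would independently
# receive the same records and forward them to Storm.
consumer = KafkaConsumer(
    "iot-raw",                                   # assumed topic name
    bootstrap_servers="kafka.example.com:9092",  # assumed broker address
    group_id="archiver",
)

for record in consumer:
    store_raw_record(record.value)  # hypothetical sink, e.g. a Cassandra insert
```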
Conclusion
In this series of articles on building an IoT platform using open source components, we looked at the common logical building blocks in an end-to-end pipeline, how they communicate with each other, and how they work together to provide much bigger value. We also explored one of the many possible ways to create such a platform from specific open-source components, with standards and modularity in mind. This modular architecture provides interoperability, scalability, performance, and fast time to market, because it can be used as a fundamental building block for a variety of IoT use cases. The beauty of this open-source approach is that the integration of the individual components yields something much bigger than the sum of all components. After all, that is what IoT is all about!
Image Credits: zliving.com
Dr. Siddhartha Chatterjee is Chief Technology Officer at Persistent Systems. Umesh Puranik is a Principal Architect at Persistent Systems. We thank Sachin Kurlekar for his insightful comments and Ken Montgomery for his editorial assistance. | https://www.persistent.com/blogs/the-brain-of-an-iot-system-analytics-engines-and-databases/ |
Manage and arrange every book in the library.
Learn about individual personality using names and birth numbers.
Display subliminal messages on your desktop.
Convert between 564 units of measurement.
Type and edit text in Japanese characters on western versions of Windows.
Translate words, phrases, sentences, and text from 22 different languages online.
Put hundreds of Bibles, commentaries, dictionaries, books and maps at your fingertips.
Insert mathematical formulas, equations, and expressions in your MS Word documents.
Analyze your workouts and track your body weight, body fat percentage, and heart rate.
Educate children with learning disabilities.
Study and learn French vocabulary with a quiz program based on a popular TV show.
Correct your spelling and grammar mistakes based on the context of the whole sentence. | http://windows.dailydownloaded.com/en/educational-software/?by=latest |
A computer-implemented system and method for processing audio in a voice response environment is provided. A database of host scripts each comprising signature files of audio phrases and actions to take when one of the audio phrases is recognized is maintained. The host scripts are loaded and a call to a voice mail server is initiated. Incoming audio buffers are received during the call from voice messages stored on the voice mail server. The incoming audio buffers are processed. A signature data structure is created for each audio buffer. The signature data structure is compared with signatures of expected phrases in the host scripts. The actions stored in the host scripts are executed when the signature data structure matches the signature of the expected phrase.
1. A system for processing audio in a voice response environment, comprising: a database of host scripts each comprising signature files of audio phrases and actions to take when one of the audio phrases is recognized; a load module to load the host scripts and initiate a call to a voice mail server; a buffer module to receive during the call, incoming audio buffers from voice messages stored on the voice mail server and to process the incoming audio buffers; a signature module to create at least one signature data structure for each audio buffer; a comparison module to compare the signature data structure with signatures of expected phrases in the host scripts; and an action module to execute the actions stored in the host scripts when the signature data structure matches the signature of the expected phrase.
2. A system according to claim 1, wherein the actions comprise at least one of sending a DTMF sequence over the call, starting audio capture and saving the captured audio into message files, playing audio files, recording one of a progress and an error message, and terminating the call.
3. A system according to claim 1, further comprising: a new signature module to create a new signature file for a new phrase comprising capturing audio for the new phrase, selecting the new signature for the new phrase, and editing one of the host scripts to include the new signature.
4. A system according to claim 1, further comprising: a label module to assign a label to each of the expected phrases and to associate one or more additional phrases with at least one of the labels.
5. A system according to claim 1, wherein the signature comprises a two letter prefix identifying a host followed by a name of the corresponding phrase.
6. A system according to claim 1, further comprising: a recognition module to perform audio recognition after at least two of the audio buffers have been received.
7. A system according to claim 1, further comprising: a login module to log in to the voice mail server comprising identifying a security message and providing a password.
8. A system according to claim 1, wherein the call is initiated during at least one of on demand, periodically, or based on a combination of on demand and periodically.
9. A system according to claim 1, further comprising: a download module to download one or more of the voice messages; and a distribution module to distribute the one or more voice messages.
10. A system according to claim 1, further comprising: a display to present the voicemail messages for management and manipulation.
11. A method for processing audio in a voice response environment, comprising: maintaining a database of host scripts each comprising signature files of audio phrases and actions to take when one of the audio phrases is recognized; loading the host scripts and initiating a call to a voice mail server; receiving during the call, incoming audio buffers from voice messages stored on the voice mail server and processing the incoming audio buffers; creating a signature data structure for each audio buffer; comparing the signature data structure with signatures of expected phrases in the host scripts; and executing the actions stored in the host scripts when the signature data structure matches the signature of the expected phrase.
12. A method according to claim 11, wherein the actions comprise at least one of: sending a DTMF sequence over the call; starting audio capture and saving the captured audio into message files; playing audio files; recording one of a progress and an error message; and terminating the call.
13. A method according to claim 11, further comprising: creating a new signature file for a new phrase, comprising: capturing audio for the new phrase to be learned; selecting the new signature for the new phrase; and editing one of the host scripts to include the new signature.
14. A method according to claim 11, further comprising: assigning a label to each of the expected phrases; and associating one or more additional phrases with at least one of the labels.
15. A method according to claim 11, wherein the signature comprises a two letter prefix identifying a host followed by a name of the corresponding phrase.
16. A method according to claim 11, further comprising: performing audio recognition after at least two of the audio buffers have been received.
17. A method according to claim 11, further comprising: logging in to the voice mail server comprising: identifying a security message; and providing a password.
18. A method according to claim 11, wherein the call is initiated during at least one of on demand, periodically, or based on a combination of on demand and periodically.
19. A method according to claim 11, further comprising: downloading one or more of the voice messages; and distributing the one or more voice messages.
20. A method according to claim 11, further comprising: presenting to a user, a display to access, review, manage, and manipulate the voicemail messages.
This U.S. patent application is a continuation of U.S. patent application Ser. No. 13/252,185, filed Oct. 3, 2011, pending, which is a continuation of U.S. Pat. No. 8,032,373 issued Oct. 4, 2011, which is a divisional of U.S. Pat. No. 7,330,538, issued Feb. 12, 2008, which claims priority to U.S. Provisional Patent Application Ser. No. 60/368,644, filed Mar. 28, 2002, the disclosures of which are incorporated by reference.
The present invention pertains to a system and method for identifying audio command prompts for use in a voice response environment.
A voice response (VR) system allows a human user to listen to spoken information generated by a computer system. The user enters dual tone multi-frequency (DTMF) tones, or speaks commands, to navigate through the functions of such a VR system.
The implementation of VR systems that respond to tones or spoken commands is well known, but these systems are designed with the assumption that humans will be providing the commands to a computer over a communication link. Furthermore, these systems are typically designed to use human speech in the form of stored audio files that are played over the telephone line in order to communicate with the outside world. Communication with VR systems is thus normally via an analog interface. U.S. Pat. Nos. 4,071,888 and 4,117,263 are representative of basic patents in the field of VR systems. Modern VR systems are largely similar to the centralized systems described in these patents.
In contrast to VR systems, electronic mail (email) employs digital electronic signals for communications between users. Messages are encoded as numbers and sent from place to place over digital computer networks. Furthermore, email can be used to exchange voice messages in the form of digital audio files. However, the interface between email software systems and the underlying network is digital--not analog.
As a result of this analog-digital interface dichotomy, there is currently virtually no integration between voicemail and email. Since voicemail is the most common application of VR systems today, it is the best example. Accessing a voicemail system using a telephone handset, a user may listen to commands and send DTMF (Touchtone®) responses in order to listen to, save, forward, and delete their voicemail messages. However, commercial voicemail systems have a limited message capacity (both in time and space), and the lack of a digital interface in voicemail systems makes integration of voicemail with email and digital audio difficult. Not only is voicemail management using traditional dial-in systems cumbersome, it can be expensive, as cellular and mobile phone users must often incur peak-rate phone charges to access their voicemail. In addition, if the user has multiple telephones with voicemail accounts, then each voicemail account must be checked with a separate phone call, and the user must manage each voicemail box separately. Voicemail is therefore a transient, untrustworthy, and cumbersome medium for communication.
Note that email and voicemail systems both use a "store & forward" model for message delivery. It would thus be desirable to construct a bridge between them (allowing voicemail to reach the Internet and Internet audio messages to reach the phone system), which should enable a number of applications of great utility to be implemented. For example, if voicemail messages were available on a user's computer in digital form and freely available for distribution via email, then several advantages to users of voicemail systems would result. For example, such a system would enable the following benefits: (1) voicemail messages could be captured securely and permanently, just like any other type of computer file; (2) voicemail messages could be distributed and used wherever digital audio files are used, in particular, for transmission to remote locations via email (note the cost of retrieving email remotely is far lower than the long distance charges or peak roaming charges that may be incurred to make calls to voicemail); and, (3) because no direct connection is required to a modem, except at one location (the server), users would be able to receive voicemail on non-telephone devices, i.e., with the same devices used for email.
The prior art identifies the value of integrating voicemail with computers and in particular, personal computers (PCs). U.S. Pat. No. 6,339,591, for example, describes a system for sending voicemail messages over the Internet, using proprietary methods (i.e., not email). The most likely configuration that might be used to integrate voicemail with the computer network would effect this integration at the centralized voicemail switch. In such a system, because voicemail messages are stored as digital audio files on the voicemail switch and because that switch is on the computer network, those voicemail messages might then be made available to computers on the network.
U.S. Pat. No. 5,822,405 discloses a method of using a PC or other device equipped with a special modem to retrieve voicemail over a telephone line and store each message in a file on the computer; however, this patent makes no mention of digital distribution of the voicemail messages retrieved. This patent comes close to solving the central problem of interaction between a computer and a VR system, namely the need to use speech recognition in many cases, but room for improvement exists. For example, improvements can be made in the analysis of the audio signals received by a user's computer, and no utility is provided in this prior art patent for the digital distribution of the retrieved messages.
Where voicemail messages are to be saved for later use in a conventional voicemail system, the voicemail messages are kept stored within the voicemail system. For example, U.S. Pat. Nos. 6,295,341; 4,327,251; 6,337,977; and 6,341,160 describe such systems. Even when computers are employed, the messages are generally kept in the answering device (as disclosed in U.S. Pat. No. 6,052,442). U.S. Pat. No. 6,335,963 even teaches that email be employed for notifying a user of voicemail, but not for delivery of the messages themselves.
There is much use made of voice recognition in VR applications, but in almost all these applications, voice recognition is used by a computer to recognize the content of a human voice speaking on the telephone (e.g., as taught in U.S. Pat. Nos. 6,335,962; 6,330,308; 6,208,966; 5,822,405; and 4,060,694). Such human voice recognition techniques are computationally expensive. Readily available human voice recognition applications compare real-time spoken words against a stored dictionary. Because of variations in the human spoken word and variations in the quality of the communications channels, the comparison of a spoken word with a dictionary of words must take into account variations in both the length and the spectral characteristics of the human speech being recognized. Thus, solving the problem of human speech recognition in real-time consumes significant computational resources, which effectively limits the applications of human speech recognition used in conjunction with fast, relatively expensive, computers. Where non-standard audio recognition methods are used, they are typically restricted to narrow applications, as disclosed in U.S. Pat. Nos. 6,324,499; 6,321,194, and 6,327,345.
It should be noted that VR systems often emulate (i.e., "speak") the human voice, but do not produce it. Instead, they use stored audio files that are played over the telephone communication link. Therefore, the speech that these VR systems produce is identically spoken every time it is played. The recognition of repetitive identical audio signatures is, in fact, a much simpler problem to solve than the problem of recognizing actual spoken human voice produced by a variety of speakers. It would be preferable to provide a system employing such techniques for recognizing stored audio file speech, thereby enhancing computational performance and enabling less expensive processors to be employed.
Another issue with conventional voice-recognition methods applied to VR applications is that the recognition of whole words and phrases can involve considerable latency. In VR applications, it is preferable to keep recognition latency to a minimum to avoid lost audio and poor response. Reduced processing overhead within the application will allow latency to be reduced within the recognition system.
In the prior art, voice recognition is always preceded by a learning step, where the recognizing computer system processes speech audio to build a recognizer library. Many VR and voice recognition inventions include such a learning process, which may be used to teach the computer what to say, what tones to send, or what words to recognize (e.g., as disclosed in U.S. Pat. Nos. 6,345,250; 6,341,264; and 5,822,405). It should be noted that in the prior art, when a system is learning words to be recognized, the learning method is independent of the context of the audio being learned. That is to say, the recognition method stands alone and can distinguish between a word being recognized and all other words (at least theoretically). It would thus be desirable to provide a computer-driven VR system wherein the learning method is simplified to take into account the invariant nature of the messages and the known context of their expression, thereby requiring fewer computational resources.
Much prior art in the field of automatic control of VR systems with a computer depends upon the calling computer knowing the context of the VR system at all times. For example, the application described in U.S. Pat. No. 6,173,042 assumes that the VR system works identically every time, and that tones can be input to the VR system at any time. The prior art recognizes that the context of recognition is important (e.g., as disclosed in U.S. Pat. No. 6,345,254). It would be desirable to provide a programming language to describe VR interactions, which includes a syntax powerful enough to express such context in a general manner.
Many VR control applications (such as described in U.S. Pat. No. 5,822,405) use some form of interpreted programming language to tell the application how to drive the remote VR system. In the prior art however, the scripting language is of a very restricted syntax, specific to its application (for example, voicemail retrieval). In order to build a general purpose VR response system, it would be helpful to have a programming language that is sufficiently powerful to address a wide range of VR applications (e.g., retrieval of stock quotes, airline times, or data from an online banking application).
Another aspect of the learning process that can have a major impact on its efficiency is the user interface (UI). A UI that is too generalized may result in complex manipulations of the interface being required to achieve full control of the learning process. Such a situation often arises when the learning portion of an invention's embodiment is performed with a general-purpose tool, as in U.S. Pat. No. 5,822,405. It would be desirable to provide a computer-driven VR system, wherein the UI is specifically adapted to enable easy navigation and control of all of the aspects of the VR system, including any learning method required.
If a bridge such as that noted above can be built between voicemail and the Internet, it would make voicemail as easy to review, author, and send as email. Voicemail, originating in the telephone system, might be integrated directly with messages created entirely on the Internet using an audio messaging application.
Many integrated messaging systems have been built. These systems seek to integrate some combination of voicemail, text messaging, and email into one interface. However, the prior art with respect to unified messaging (UM) is exclusively concerned with creating a closed universe within which the system operates. Such systems, although at times elegant, do not cater to users who need to access voicemail from different voicemail systems (such as from home and from work) through an Internet connection. For example, U.S. Pat. No. 6,263,052 archives the voice messages within the voicemail system. It would be desirable to enable the voicemail messages to be available on the computer network, thereby enabling a user to reply to those messages offline, and to forward the reply to the original caller using email, or to make a voicemail response that is delivered by the computer system. If integrated messaging systems could interface directly with any VR system over the public switched telephone network (PSTN), then UM would become easier to apply, and would also become more useful.
Often after voicemail messages are received, a user will wish to reply to such messages. It is convenient for the user to be able to reply to the voicemail at their leisure, and have the reply forwarded to the original sender as another voicemail. Such a system is described in U.S. Pat. No. 6,263,052.
In the prior art it is assumed that if two computers are to communicate with each other they will do so using some form of digital encoding, and that if they are using a telephone line to communicate they will modulate a signal on that line with an audio signal that follows the structure of the digital sequence they wish to communicate. U.S. Pat. Nos. 4,196,311 and 3,937,889 are exemplary of such art. On the other hand, humans communicate with each other over the telephone using analog, not digital, communications. However, if two computer systems, each equipped with voice recognition and the ability to communicate using analog voice communications, were placed in communication with each other in a peer-to-peer configuration, a useful form of two-way communication might result. If the recognition of audio from one computer can drive a program on the other computer, which can in turn send audio responses to the first computer, then secure encoded communications might be effected by use of a normal telephone voice call.
Clearly, it would be desirable to provide a software system, running on a suitably equipped computer, which can be flexibly programmed and easily taught to navigate a VR system using audio signature recognition and which can download chosen audio segments to the computer system as digital audio files. Such a system will preferably enable the automatic scheduled retrieval of audio files from the VR system and enable these files to be automatically forwarded via email to the intended recipient, over the Internet.
It would further be desirable for digital audio files to be played over the telephone system and to leave voicemail messages that can be played directly by the recipient. Yet another desirable feature of such a system would be the use of computationally efficient waveform recognition algorithms to maximize the number of telephone lines that can be simultaneously supported by one computer.
It would still be further desirable to provide flexible interfaces, functions, and a programming language to enable general purpose applications to interface with the VR retrieval and forwarding system. Such a system would automatically recognize duplicate audio files (i.e., files which have been downloaded twice from the same VR system), and provide means for the user to prepare digital audio files as replies to received messages, or as new voice messages, and to have those digital audio files delivered via email or over the phone line, to the intended recipient.
Further desirable features of such a system would include means for teaching the software to recognize new audio signatures and to incorporate them into a program script, and such learning processes should be enabled both locally (at a computer with a modem), and remotely (by employing a computer and a modem receiving commands via email from a remote computer). It would further be desirable to provide a system that enables two computers to communicate over an audio communications channel, to achieve an audio-encoded computer-to-computer communications system.
The present invention is directed to a system and method for enabling two computer systems to communicate over an audio communications channel, such as a voice telephony connection. Another aspect of the invention is directed to an Internet and telephony service utilizing the method of the present invention.
One of a number of preferred embodiments of this invention is directed to the use of a VR management application to automate interaction with a VR system. In a preferred implementation, the VR management application resides on a server, and multiple users can access the VR management application remotely. Users teach the VR management application how to access one or more VR systems associated with each of the users. For each audio command prompt likely to be issued by the VR system, the VR management application learns to recognize the audio command prompt, and how to respond to that audio command prompt. A user can then instruct the VR management application to automatically interact with the VR system to achieve a result, based upon a desired level of interaction. In a preferred embodiment, the interaction includes retrieving the user's voicemail. The VR management application will establish a logical connection with the VR system, receive audio communications from the VR system, and compare each communication with the audio command prompts that were previously learned. The VR management application provides the appropriate responses and receives additional audio communications, until a desired level of interaction is achieved. When the desired level of interaction is retrieving voicemail, a user is preferably enabled to receive such voicemail either via email, via a network location, or via a telephone.
In a preferred embodiment, the learning process includes generating a discrete Fourier transform (DFT) based on at least a portion of each audio command prompt to be learned. When the VR management application automatically interacts with a VR system, at least one DFT will be generated, based on the audio communication received from the VR system. Each learned DFT will be compared with the newly generated DFT to recognize the command prompt corresponding to the audio received.
Another aspect of the present invention is a computationally efficient method of recognizing an audio signal. The method requires that a plurality of known DFTs be provided, each known DFT corresponding to a specific audio signal. At least one unknown DFT is generated for each audio signal to be recognized. The at least one unknown DFT is compared to each known DFT, and a match with a known DFT enables the audio signal to be identified.
Preferably, the audio signal to be identified is stored in an audio buffer, and the audio buffer is separated into a plurality of equally-sized sample buffers. Then, an unknown DFT is generated for each sample buffer. Each unknown DFT is compared to each known DFT. When an audio signal is processed to produce a plurality of unknown DFTs, one or more of a plurality of DFTs generated from a known audio signal is selected to be used as the known DFT for that audio signal.
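The patent does not specify the comparison metric; as an illustration only, here is a minimal Python sketch of one plausible reading, in which each fixed-size sample buffer is reduced to a normalized magnitude spectrum and two signatures are compared by correlation. The buffer size and the match threshold are assumptions.

```python
import numpy as np


def signature(samples):
    """Normalized magnitude spectrum (DFT) of one fixed-size sample buffer."""
    spectrum = np.abs(np.fft.rfft(np.asarray(samples, dtype=float)))
    norm = np.linalg.norm(spectrum)
    return spectrum / norm if norm else spectrum


def matches(unknown, known, threshold=0.9):
    """Cosine similarity between two signatures of the same buffer size."""
    return float(np.dot(unknown, known)) >= threshold
```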
Another aspect of the invention is directed to a method for using a computing device to interact with a VR system. In at least one embodiment, the VR system is an audio message service, and the interaction is managing a user's voicemail account, including retrieving audio messages from the remote audio message service. While not limited to use with VR systems that comprise an audio message service, when so employed, the method includes the steps of first establishing a logical connection between the computing device and the audio message service. Then a communication is received from the audio message service. In response, the computing device generates at least one unknown DFT based on the communication. The at least one unknown DFT is compared with at least one known DFT. Each known DFT corresponds to a command prompt that is likely to be received from the message service. If an acceptable level of correlation exists between the at least one unknown DFT and a known DFT, then the computing device provides the message service with the appropriate response to the command prompt identified by matching the at least one DFT to the known DFT. The steps of receiving a communication, generating unknown DFTs, matching unknown DFTs to known DFTs, and providing a correct response to the message service are repeated until the communication from the message service indicates that the next communication will be an audio message, rather than a command prompt. The messages stored by the message service are then retrieved.
The logical connection is preferably a telephonic connection. Once the messages are retrieved, the computing device optionally provides the message service with the appropriate response required to instruct the message service to delete each message after it has been received by the computing device. In one related embodiment, instead of causing the message service to delete retrieved messages, the computing device generates a key for each message received from the message service, so that during a future message retrieval operation, the computing device can ignore already received messages that have not been deleted from the message service. Preferably, the keys are produced by generating a DFT of the message, and encoding the DFT to generate a unique key that is stored using relatively few bytes. Then, before retrieving a message, the computing device generates a key for an incoming message and checks the key for the incoming message against stored keys. If the key for the incoming message is the same as a stored key, the incoming message is ignored, since it was previously retrieved.
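The patent says only that the DFT of a message is encoded into a compact key stored in relatively few bytes; the quantization below is one possible scheme, sketched in Python, chosen so that small line-noise differences between two downloads of the same message tend to produce the same key. The band count and hash truncation are assumptions.

```python
import hashlib

import numpy as np


def message_key(samples, bands=32):
    """Compact dedup key for a retrieved message: quantized DFT, hashed."""
    spectrum = np.abs(np.fft.rfft(np.asarray(samples, dtype=float)))
    peak = max(float(spectrum.max()), 1e-12)
    # Collapse the spectrum into a few coarse bands, each quantized to 4 bits.
    coarse = np.array(
        [int(15 * band.mean() / peak) for band in np.array_split(spectrum, bands)],
        dtype=np.uint8,
    )
    return hashlib.sha1(coarse.tobytes()).hexdigest()[:16]
```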
Preferably, before the logical connection is established to retrieve messages stored by the message service, the computing device is taught how to recognize and respond to each command prompt likely to be received from the message service. To teach the computing device how to recognize and respond to each command prompt likely to be encountered, a logical connection is first established between the computing device and the message service. A command prompt is received from the message service, and at least one DFT based on the command prompt is generated. A user provides the correct response to the command prompt, and the computing device stores the correct response, as well as the DFT corresponding to the command prompt. Preferably, the correct response is stored as a program script that enables the computing device to duplicate the correct response for the DFT. The program script and DFT corresponding to that command prompt are stored in a memory accessible to the computing device. These steps are repeated for each command prompt likely to be encountered.
To enhance the method of retrieving an audio message described above, preferably each communication received from the message service is stored in at least one audio buffer. Then, each audio buffer is separated into a plurality of window buffers. A DFT is generated for each window buffer. Each window buffer DFT is then compared with each known DFT.
In one preferred embodiment two different, identically-sized audio buffers are used. Each audio buffer is sized to accommodate N samples, N having been selected to reflect a desired time resolution. Each audio buffer is sequentially filled with N samples of the communication, such that a first audio buffer is filled with older samples, and a second audio buffer is filled with newer samples. A plurality of window buffers are generated by segregating each audio buffer of size N into identically sized sample windows of size W, such that each sample window includes a whole number of samples, and such that N is both a whole number and a multiple of W. The next step involves iteratively generating window buffers of size N using the sample windows of size W, such that each window buffer includes multiple sample windows (totaling N samples), and each sequential window buffer includes one sample window (of size W) not present in the preceding window buffer.
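The double-buffer windowing just described maps directly to code; here is a minimal Python sketch. It assumes the two audio buffers are NumPy arrays of equal length N, with N a whole multiple of the sample-window size W.

```python
import numpy as np


def window_buffers(older, newer, w):
    """Yield N-sample window buffers sliding across two adjoining N-sample
    audio buffers, advancing by one W-sample sub-window each time."""
    n = len(older)
    assert len(newer) == n and n % w == 0
    joined = np.concatenate([older, newer])  # older samples first
    for start in range(0, n + 1, w):
        # Each successive window drops one leading sub-window and gains
        # one trailing sub-window relative to its predecessor.
        yield joined[start:start + n]
```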
Preferably, any messages that are retrieved are stored in a digital format. Once in a digital format, the messages can be forwarded to a user's email address. It is also preferred to enable the user to access any stored message at a networked location. A preferred digital format is the MP3 file format, but other formats might alternatively be used.
It is contemplated that the computing device will be programmed to establish a connection with a message facility according to a predefined schedule, so that messages are retrieved on a defined reoccurring basis.
Still another aspect of the present invention is directed to a method of training a computing device to automatically interact with a VR system, where successful interaction requires providing a proper audio response to audio prompts issued by the VR system. While not limited to VR systems such as voicemail services, one preferred embodiment is directed to training a computing device to automatically manage a voicemail account, including retrieving, saving, and deleting messages. Steps of the method include launching a message retrieval application on the computing device, and then establishing a logical connection between the computing device and the remote message facility using either a telephonic connection or a network connection. Further steps include receiving a communication from the remote message facility, and then capturing a command prompt from the remote message facility in an audio buffer. A correct response to the audio command prompt (such as a DTMF tone sequence or an audio message) is required to navigate a menu associated with the remote message facility to retrieve the desired messages. A user is enabled to provide the correct response, which is stored in a memory of the computing device. Additional steps include generating at least one DFT based on at least a portion of the audio buffer, the at least one DFT identifying the command prompt and thereby enabling the computing device to automatically recognize the command prompt during a subsequent automated message retrieval operation. A program script is generated for execution by the computing device, to duplicate the correct response. The final step requires storing the at least one DFT and the program script in a memory accessible by the computing device, such that the at least one DFT and program script enable the computing device to automatically recognize the command prompt and duplicate the correct response to the command prompt during a subsequent automated message retrieval operation.
Preferably, the steps are repeated so that at least one DFT and a program script are generated for each different command prompt likely to be encountered when navigating a menu associated with the remote message facility. The computing device then automatically recognizes all command prompts likely to be issued by the remote message facility, and duplicates the correct response for each such command prompt during a subsequent automated message retrieval operation.
It is further preferred that the contents of the audio buffer be separated into a plurality of equally sized sample buffers before generating the at least one DFT. The step of generating the at least one DFT preferably includes generating a plurality of sample DFTs, one for each sample buffer.
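As an illustration of the stored outcome of this training step, the sketch below records one learned prompt as a signature (computed as in the earlier sketch) plus a single declarative action. The patent describes storing a full program script; reducing it to one action, and the file format and action vocabulary used here, are simplifying assumptions.

```python
import json


def learn_prompt(label, samples, action):
    """Persist one learned command prompt: its DFT signature plus the
    scripted response, e.g. {"type": "dtmf", "keys": "1"}."""
    entry = {
        "label": label,                            # e.g. "main-menu"
        "signature": signature(samples).tolist(),  # signature() from above
        "action": action,
    }
    with open("host_script.json", "a") as f:       # assumed storage format
        f.write(json.dumps(entry) + "\n")
```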
Still another aspect of the present invention is directed to a method for enabling two computing devices to communicate using audio signals. Each computing device is provided a plurality of known DFTs that each corresponds to a specific audio signal. When a first of the two computing devices receives an input signal, the input signal is processed to perform one of the following functions. If the input signal is not an audio signal, then the input signal is converted into an audio signal, such that the audio signal thus generated corresponds to an audio signal whose DFT is stored in the memory of each computing device; the audio signal is then transmitted to the second of the two computing devices. If the input signal is already an audio signal but there is no known DFT corresponding to that input signal, then the input signal is separated into a plurality of audio signals such that each of the plurality of audio signals corresponds to an audio signal whose DFT is stored in the memory of each computing device, and each audio signal is transmitted to the second computing device. If the input signal is already an audio signal and there is a known DFT corresponding to that input signal, then that audio signal is transmitted to the second computing device. The second computing device processes each audio signal it receives by generating an unknown DFT based on an audio signal received, comparing the unknown DFT generated from the audio signal received with each known DFT, and identifying the audio signal received to reconstruct the input signal. The second computing device can then respond to the first computing device in the same manner.
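The method requires only that every transmitted audio symbol have a DFT known to both ends. As one concrete (and assumed) symbol alphabet, the Python sketch below maps each 4-bit value to a single sine tone and recovers it on the far side from the spectral peak; the sample rate, tone frequencies, and symbol duration are all arbitrary choices for illustration.

```python
import numpy as np

RATE = 8000                                 # assumed telephone-band sample rate
TONES = [600 + 50 * i for i in range(16)]   # one tone per 4-bit symbol


def encode_nibble(nibble, duration=0.05):
    """Render one 4-bit symbol as a sine tone."""
    t = np.arange(int(RATE * duration)) / RATE
    return np.sin(2 * np.pi * TONES[nibble] * t)


def decode_nibble(samples):
    """Recover a symbol by locating the peak of its magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), 1 / RATE)
    peak = freqs[spectrum.argmax()]
    return min(range(16), key=lambda i: abs(TONES[i] - peak))
```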
Still another aspect of the present invention is directed to a method for enabling a user to retrieve a digital copy of an audio message from a network location, when the audio message has been left at an audio message facility. The audio message facility provides audio command prompts to which appropriate responses must be made in order to successfully navigate through the audio message facility to retrieve any audio messages. The method involves the steps of establishing a logical connection between the user and the network location, and enabling the user to teach the network location how to recognize and respond to the audio command prompts issued by each audio message facility utilized by the user. The recognition is based on a comparison of a DFT of an audio command prompt with stored DFTs corresponding to each command prompt likely to be issued by each audio message facility utilized by the user. The method further involves enabling the user to instruct the network location to retrieve audio messages from at least one audio message facility utilized by the user. For each audio message facility utilized by the user from which the network location has been instructed to retrieve messages, the following steps are performed. A logical connection between the network location and the message facility is established to receive an audio signal from the message facility. An unknown DFT is generated based on the audio signal received. The unknown DFT generated from the audio signal received is compared with each known DFT to identify the command prompt being issued by the message facility, and the correct response to the command prompt is provided. These steps are repeated until access to messages stored by the message facility is granted. The messages are retrieved and converted into a digital format, so that the user is able to access the messages in the digital format.
A further embodiment provides a computer-implemented system and method for processing audio in a voice response environment. A database of host scripts each comprising signature files of audio phrases and actions to take when one of the audio phrases is recognized is maintained. The host scripts are loaded and a call to a voice mail server is initiated. Incoming audio buffers are received during the call from voice messages stored on the voice mail server. The incoming audio buffers are processed. A signature data structure is created for each audio buffer. The signature data structure is compared with signatures of expected phrases in the host scripts. The actions stored in the host scripts are executed when the signature data structure matches the signature of the expected phrase.
Other aspects of the present invention are directed to a system for executing steps generally consistent with the steps of the methods described above and to articles of manufacture intended to be used with computing devices, which include a memory medium storing machine instructions. The machine instructions define a computer program that when executed by a processor, cause the processor to perform functions generally consistent with the method steps described above.
FIG. 20 is an exemplary embodiment of a Web page for a Voice-Messaging Web site ("http://mygotvoice.com"), used in conjunction with the audio messenger application, in accord with a preferred embodiment of the present invention.
In FIG. 1A, a first computer system is a VR system 104, which answers telephone calls, generates audio messages 106 and receives and acts upon a response 110 (DTMF or audio) from a caller. A voicemail system or a 411 information service is an example of VR system 104. A second computer system 102 makes calls to VR system 104 and uses a signal processing technique to recognize the audio signals (i.e., phrases) that are issued by VR system 104. Particularly when VR system 104 is a voicemail system, audio messages 106 are command prompts that require a specific response. System 102 sends response 110, either as voice-band audio or as tones, in response to command prompts from VR system 104, to establish control of the remote VR system. System 102 is controlled by a recognition program 108 specifically adapted to interact with VR system 104. The recognition program can instruct system 102 to call, interrogate, download, and manage a voicemail account residing at VR system 104, without human intervention. It should be understood that management of a voicemail account is not limited to merely retrieving messages, but encompasses normal voicemail management functionality, including message retrieval, message deletion, and message storage (e.g., storing messages as "new" messages).
A spooling computer system 144 provides a bridge between the Internet and the PSTN, over which messages can flow in both directions, based on the method described in conjunction with FIG. 1B. The Service supports online access to the user's messages via a conventional Web browser application 120 (such as those executed on a PC or a portable computing device), and/or a streaming media player 142. Users may also receive messages using an email application 126 via an Internet connection 127, or via a dialup VR interface 140 using a PSTN connection 135 and a standard telephone handset 139. In addition, new audio messages can be composed on a computer device equipped with a microphone 143 and an audio messenger application 123. These messages are sent via email to an inbound email gateway 125 using Internet connection 124. From email gateway 125, the messages can be directed to one or more of a Message Store 128 of an existing user, a VR system 137 (i.e., a VR-based voicemail system) of the user (using a PSTN connection 133), or a telephone 136 associated with the user (such as a cellular telephone, a mobile telephone, or a land line using a PSTN connection 132).
FIG. 2 illustrates a second and related embodiment in which both computer systems 202 and 204 are capable of audio pattern recognition and audio response generation. In this case, these two computer systems can conduct an audio conversation with each other, in accord with their own individual recognition programs 210A and 210B. First computer system 202 sends audio messages 206A and 206B to computer system 204, which recognizes them and sends its own audio responses 208A and 208B to computer system 202. Both systems are controlled by respective programs 210A and 210B in accord with the present invention. The present invention, in its various embodiments, has applications in both civilian and military computer communications.
FIG. 16, and the following related discussion, are intended to provide a brief, general description of a suitable computing environment for practicing the present invention. In a preferred embodiment of the present invention, an audio recognition application is executed on a PC. Those skilled in the art will appreciate that the present invention may be practiced with other computing devices, including a laptop and other portable computers, multiprocessor systems, networked computers, mainframe computers, hand-held computers, personal data assistants (PDAs), and on devices that include a processor, a memory, and a display. An exemplary computing system 330 that is suitable for implementing the present invention includes a processing unit 332 that is functionally coupled to an input device 320, and an output device 322, e.g., a display. Processing unit 332 includes a central processing unit (CPU) 334 that executes machine instructions comprising an audio recognition application (that in at least some embodiments includes voicemail retrieval functionality) and the machine instructions for implementing the additional functions that are described herein. Those of ordinary skill in the art will recognize that CPUs suitable for this purpose are available from Intel Corporation, AMD Corporation, Motorola Corporation, and other sources.
Also included in processing unit 332 are a random access memory (RAM) 336 and non-volatile memory 338, which typically includes read only memory (ROM) and some form of memory storage, such as a hard drive, optical drive, etc. These memory devices are bi-directionally coupled to CPU 334. Such storage devices are well known in the art. Machine instructions and data are temporarily loaded into RAM 336 from non-volatile memory 338. As will be described in more detail below, included among the stored data are data sets corresponding to known audio signals, and program scripts that are to be executed upon the identification of a specific audio signal. Also stored in memory are operating system software and ancillary software. While not separately shown, it should be understood that a power supply is required to provide the electrical power needed to energize computing system 330.
Preferably, computing system 330 includes a modem 335 and speakers 337. While these components are not strictly required in a functional computing system, their inclusion facilitates use of computing system 330 in connection with implementing many of the features of the present invention, and the present invention will generally require a modem (conventional, digital subscriber line (xDSL), or cable) or other form of interconnectivity to a network, such as the Internet. As shown, modem 335 and speakers 337 are components that are internal to processing unit 332; however, such units can be, and often are, provided as external peripheral devices.
Input device 320 can be any device or mechanism that enables input to the operating environment executed by the CPU. Such an input device(s) include, but are not limited to a mouse, keyboard, microphone, pointing device, or touchpad. Although, in a preferred embodiment, human interaction with input device 320 is necessary, it is contemplated that the present invention can be modified to receive input electronically, or in response to physical, molecular, or organic processes, or in response to interaction with an external system. Output device 322 generally includes any device that produces output information perceptible to a user, but will most typically comprise a monitor or computer display designed for human perception of output. However, it is contemplated that present invention can be modified so that the system's output is an electronic signal, or adapted to interact with mechanical, molecular, or organic processes, or external systems. Accordingly, the conventional computer keyboard and computer display of the preferred embodiments should be considered as exemplary, rather than as limiting in regard to the scope of the present invention.
In FIG. 3, a telephone communications path exists between a PC 302 (such as a PC disposed in a user's home or work place, or spooling computer system 144 of FIG. 1B), and a voicemail server 304 (likely disposed at a telephone company's facility). A first portion of the communications path is an analog telephone line 308 carrying an analog audio signal, which couples voicemail server 304 to a modem 312. A second portion of the communications path is a digital data cable 314 (such as a universal serial bus (USB) cable, a serial port cable, an IEEE 1394 data cable, a parallel port cable, or other suitable data cable) carrying a digital signal from modem 312 to PC 302. Thus, at PC 302, digitized incoming audio packets are available in real-time for use by applications running on PC 302. Furthermore, applications running on PC 302 can output digital audio signal via digital data cable 314 to modem 312, which then generates an analog audio signal to be transmitted over analog telephone line 308. Note that a modem, which enables the passage of digitized audio between it and the host computer system, is commonly referred to as a "voice modem."
At the telephone company, the telephone line terminates at a line card installed in a telephone switch 306. Digitized audio is then sent to and received from the line card and the voicemail server 304. Any DTMF sequences generated by modem 312 or PC 302 are recognized by switch 306 and passed as digital messages over a computer network 310 to voicemail server 304. In response to any commands encoded in the DTMF sequences, voicemail server 304 passes digitized audio messages to telephone switch 306, where the digitized audio messages are turned back into analog audio for delivery over the telephone line, back to the caller.
One preferred embodiment of the present invention is implemented in a software application that runs on PC 302. Hereafter, this application will be referred to as the "voice server." The voice server application makes calls over telephone voice circuits to voicemail server 304 to retrieve any voicemail for the user. Such a connection is made periodically (i.e., according to a predefined schedule), on demand, or both (as required or selectively initiated by a user). Once the connection is made, the audio (i.e., one or more spoken messages) output by voicemail server 304 is passed to the application running on PC 302. The voice server application compares the incoming audio with a dictionary of phrases it holds in encoded form. If a phrase is recognized, the calling computer executes a script that can take certain predefined actions, such as sending a command to the voicemail system as a DTMF command, or hanging up. In the preferred embodiment the calling computer executes a script that downloads and captures the user's voicemail from a voicemail switch. Once downloaded, each voicemail message is available as a compressed digital audio file in the popular MP3 format. This file can be sent by email or be otherwise distributed electronically via a data connection 318 to a network 316 such as the Internet. Message files can also be carried with the user by being stored in the memory of a personal device such as a PDA or mobile telephone. Preferably, the voice server application has a GUI that allows the user to easily fetch, review, manage, and manipulate his voicemail messages, as if they were email messages.
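To make the recognize-then-act loop concrete, here is a minimal Python sketch of the dispatch step, reusing signature() and matches() from the earlier sketch. The modem object and its methods are hypothetical stand-ins; the patent describes the actual voice server as a portable C++ application.

```python
def handle_buffer(samples, host_script, modem):
    """Match one incoming audio buffer against the learned prompts and
    execute the scripted action for the first match."""
    sig = signature(samples)
    for entry in host_script:          # entries as stored by learn_prompt()
        if matches(sig, entry["signature"]):
            action = entry["action"]
            if action["type"] == "dtmf":
                modem.send_dtmf(action["keys"])      # hypothetical modem API
            elif action["type"] == "record":
                modem.start_capture(action["file"])  # save message audio
            elif action["type"] == "hangup":
                modem.hang_up()
            return entry["label"]
    return None                        # no known phrase in this buffer
```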
In addition to the voice server, a preferred implementation of the present invention includes two other elements; the "service," which is an Internet service built around the voice server, and the "audio messenger," which is an Internet client application.
The service portion of the preferred embodiment is schematically illustrated in FIG. 1B. The service enables multiple users to share access to a small number of voice servers comprising a spooling computer system 144. A service center 141 preferably includes a minimum of two computers. One computer, which in a preferred embodiment executes a Linux® operating system, implements a message store 128, a Web Interface 122 (by which users are enabled to gain access to their messages), and a backend telephone voicemail retrieval system 140. In addition, the Linux® operating system acts as an email gateway 125 for communicating with other applications, such as an email client 126, or an audio messaging application 123 (residing on a computing device). In the following discussion, a preferred embodiment of audio messaging application 123 is referred to as the audio messenger. One or more additional computers are attached to the telephone system via voice modems and are connected to the computer running the Linux® operating system over a LAN (see spooling computer system 144). These computers implement the voice server functions of sending and retrieving voicemail messages over the telephone. Note that voice server 129 (sending function) and voice server 130 (retrieving function) can each be implemented on one or more individual computers, such that spooling computer system 144 includes one or more computers dedicated to the sending function, and one or more computers dedicated to the retrieving function. Of course, voice server 129 and voice server 130 can be implemented on a single computer, such that spooling computer system 144 is a single computer. Preferably, spooling computer system 144 executes a version of Microsoft Corporation's Windows® operating system. Those of ordinary skill in the art will recognize that the selection of a specific operating system is largely an element of preference, and that other operating systems, such as the Linux® operating system, could be employed.
The audio messenger portion in one preferred embodiment is shown in FIG. 1B, as audio messaging application 123 that is executed on the computing device. In an exemplary implementation of the present invention, the audio messenger is a small Windows® application, which enables a user to record voice messages and send them directly into service 141 via email gateway 125. An exemplary implementation of the GUI of the audio messenger is illustrated in FIG. 18. The audio messenger application may be replaced with a third party application, as long as such third party application is properly configured to communicate with email gateway 125.
An exemplary voice server application has been implemented as a software application running on a general purpose computer equipped with a voice modem connected to an analog telephone line. The exemplary voice server application is written in the popular C++ programming language and is designed to be portable. A beta version currently runs under both Microsoft Corporation's Windows® and the Linux® operating system.
FIG. 4 shows the overall structure of the preferred voice server application. The software runs on the PC and interfaces with the outside world through a GUI 402. A call control function 436 interfaces with a telephone service via a PSTN service interface 440. The underlying implementation of this interface is normally provided by the modem manufacturer. The voice server application also makes use of other TCP/IP network services, such as domain name system (DNS) resolution, which are implemented by the underlying operating system.
GUI 402 provides a user with functions to control and manage the application. FIG. 4 shows the major functions supported by the GUI. These are: message management 410; message playback, reply, and forwarding 412 (referred to hereafter simply as message playback 412); local application configuration 414; voicemail host configuration 416; call scheduling 418; and manual calling 420. Commands to the application can be executed through the GUI 402 or they can arrive as email messages containing remote commands. These commands are processed by a remote commands processor 422.
Remote commands processor 422 communicates with the outside world via a job spooling directory 426, into which command requests are placed by one or more other applications. In one preferred embodiment of the present invention, the service portion (described above in conjunction with FIG. 1B) uses spooling directory 426 and also accesses incoming messages from within a message store 424. The remote command processor enables the voice server application to be controlled and configured remotely.
Other core functions within the voice server application, as shown in FIG. 4, include a scheduling engine 428, and a host manager 430. A voicemail retrieval function 432 uses call control function 436 to make, manage, and terminate telephone calls. Call control function 436 employs telephone PSTN service interface 440 to make telephone calls over the voice modem. The recognition of incoming audio is performed by a recognition engine 434, which utilizes a host library 438. The generation of the host library is described below. Messages may be heard utilizing a PC audio output, connected to a speaker 444.
FIG. 5 shows a flow diagram for the main software loop of the voice server application. When the program starts at a block 518, it first checks to see that a compatible voice modem is installed and operational in the host computer as indicated by a decision block 520. If there is no modem, the voice server software disables all functions within the software that require a modem, as indicated in a block 522. This step enables a subset of manual operations to be performed locally, and control passes directly to the main command loop at a block 528.
If a modem is present, the voice server software starts the call scheduler. This step involves loading a schedule in a block 524, which is retrieved from a file location, as indicated by a block 525. The voice server application starts a timer at a block 526. The timer causes a schedule cycle to be executed when a predefined interval expires (the timer value determines the granularity of scheduling), at a block 532. Typically the scheduler runs every few seconds, e.g., every 15 seconds.
Following the initiation of the schedule cycle, the software application waits for the schedule cycle or interval to expire, as indicated by the timer. Commands can be initiated either from a user interface (when the scheduled cycle is not running), or as a result of the scheduler choosing a remote command or local schedule entry to be executed. Blocks 502, 504, 506, 508, 510 and 512 correspond to user selectable commands, which can be received from the UI, as indicated by a block 516.
When the schedule cycle is running and after the timer interval has expired, the voice server application determines if a call is in progress, in a decision block 534. If it is, then the schedule cycle terminates, the timer is restarted, and control returns to the command loop, as indicated by block 528. If there is no call in progress, then in a block 536, the voice server application determines if there are any waiting jobs in the schedule cycle (i.e., any calls to start). If not, control again returns to the command loop at block 528.
If there is no call in progress and there are jobs in the schedule queue, a call is initiated. A first step in making a call is setting a call-in-progress indicator, as indicated in a block 540. Before the call is made, the voice server software loads the data required to communicate with the chosen host in a block 542. The host data includes a host script and a collection of signature files. Signature files each contain data used in the recognition of audio phrases spoken by the remote VR system, and they are referenced by name from within the host script. For example, the signature defined in the file vwEnterPassword.sng is referenced in the host script as vwEnterPassword, the file extension being omitted. The host script contains a program script that instructs the voice server software what actions to take when a given signature phrase is recognized. The term host is used to refer to the combination of a host script and associated signature files. Multiple hosts can share signature files, but they each have a unique host script file. Additional details relating to signature files, such as how they are generated and how the recognition of audio phrases using signature files is achieved, are provided below. Data corresponding to the host script are stored in a file location indicated by a block 546, while data associated with signature files are stored in a file location indicated by a block 544.
In any case, once the host data (script and signatures) have been loaded in block 542, the voice server application starts a telephone call using the modem, as indicated in block 550. Then the host script routine is initiated in a block 548. Once the connection is established, the voice server application waits for incoming audio to be received, as indicated by a block 552. The incoming audio is received from a voice modem identified as a block 592. Once incoming audio signals are received, the voice server software enters a main recognition and action loop and begins processing incoming audio buffers as they arrive, as indicated in a block 554. A predefined timeout (indicated by a block 594) prevents the voice server software from being stuck in an infinite loop, which can occur in situations where the voice server software does not recognize any of the phrases in the audio signals that are received. Within the main recognition and action loop (i.e., in block 554), the voice server software continually processes these incoming audio packets. By default, these audio packets are received in an uncompressed pulse code modulation (PCM) format with 8,000 16-bit samples per second. Each sample represents the amplitude of the audio signal expressed as a signed 16-bit integer. Each incoming audio buffer contains N samples, where N is chosen to reflect the desired time resolution of the recognizer. Typically, N is 2000, representing 250 ms of real time. Each time an audio buffer is received, it is processed to create a signature data structure, and this real-time signature is compared with the signatures of the expected phrases, as specified in the host script that was earlier loaded. When a host script is loaded, all of the referenced signature files are also loaded. If the current audio buffer does not match a signature phrase, the voice server application waits for the next audio buffer to be received from the modem, as indicated by block 592. If the current audio buffer matches an expected phrase, the voice server program executes the actions that correspond to that phrase, in a block 556, where the required action is specified in the host script that was earlier loaded. In a preferred embodiment, the following actions are available:

1. Send a DTMF (Touchtone®) sequence over the telephone line to the voicemail host being called. These tones can be generated either by the modem or by the computer as audio played over the telephone line.
2. Start audio capture, and when instructed, stop capture and save the captured audio into message files.
3. Play audio files over voice modem 592.
4. Record a progress or error message in the log file and/or on the computer console.
5. Terminate the call.
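The structure of this loop can be sketched in C++ (the language of the exemplary voice server). The following is a minimal, illustrative skeleton only: readAudioBuffer() and matchSignature() are hypothetical stand-ins for the modem interface and the recognition engine, and the action handling is reduced to a single case.

    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <vector>

    constexpr int kSampleRate = 8000;      // 16-bit PCM samples per second
    constexpr int kBufferSamples = 2000;   // N = 2000 samples = 250 ms

    using AudioBuffer = std::vector<int16_t>;

    // Stub: in the real application, this call blocks on the voice modem.
    AudioBuffer readAudioBuffer() { return AudioBuffer(kBufferSamples, 0); }

    // Stub: returns the name of a matched signature phrase, or "" if none matched.
    std::string matchSignature(const AudioBuffer&) { return ""; }

    int main() {
        const int timeoutMs = 60000;       // script-defined timeout (block 594)
        int idleMs = 0;
        while (idleMs < timeoutMs) {
            AudioBuffer buffer = readAudioBuffer();
            idleMs += 1000 * kBufferSamples / kSampleRate;   // 250 ms per buffer
            std::string phrase = matchSignature(buffer);
            if (phrase.empty()) continue;  // no match: wait for the next buffer
            idleMs = 0;                    // a recognition resets the timeout
            std::cout << "recognized: " << phrase << '\n';
            if (phrase == "vwEndOfMessages") break;  // script action: hang up
            // Other script-specified actions: send DTMF, start/stop capture, log.
        }
        return 0;                          // call terminated (block 560)
    }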
Once these actions have been executed in block 556, any timeouts are reset, and the voice server application determines if the call should be terminated in a block 558. The termination can occur as the result of a hang-up action, as the result of user intervention, or because of a default timeout expiring. Timeouts need not cause a call to terminate; instead, they can have actions of their own, which can result in continued processing, as if a phrase had been recognized. Under normal circumstances the call is terminated when all relevant voicemail messages have been retrieved following a dialog between the software and the remote voicemail server.
If a call is to be terminated, then control passes out of the main recognition loop, the telephone call is terminated in a block 560, and the voice modem device is closed. The call-in-progress flag is cleared in a block 569, and control returns to the main command loop, as indicated by block 528. As provided by this block, in the main command loop, the voice server application is waiting for a next schedule cycle to initiate a call (see block 540), or for a user input (see block 516).
Messages are captured and saved in message store 424 (shown in FIG. 4) during the execution of actions in block 556. The message capture and storage elements of block 556 are described in greater detail below.
Note that for each UI function indicated by blocks 502, 504, 506, 508, 510 and 512, there is a corresponding function within the command loop, as indicated by blocks 530, 580, 582, 584, 586 and 588.
Note that manual calling is the function of initiating the call, under user control, from a menu, rather than having the call initiated by the scheduler. The user selects manual calling from a menu, enters the telephone number to call, and selects the script to be used (from a menu list of available scripts).
FIG. 6 shows a schematic diagram of the main recognition and action loop of the program (indicated by block 554 in FIG. 5). The voice server software calls a remote voicemail system 601 (i.e., a VR based voicemail system) over a PSTN line 603 using a voice modem 605. Each incoming audio packet is processed as indicated by process block 607 and compared with a number of signatures, each representing a possible audio phrase to be recognized. The comparison is performed by a recognition engine 609, using stored signatures 611. Recognition engine 609 of FIG. 6 is the same as recognition engine 434 in FIG. 4.
If a signature is recognized, then the actions associated with the recognized phrase in host script 615 are executed in a block 613. These actions include sending a DTMF tone 617 over voice-modem 605 to the remote host 601, and starting and stopping audio capture.
In the case of audio capture commands, the actions control whether the incoming audio indicated by block 621 is to be routed to a message audio file 625. The incoming audio is analyzed by process block 607. Audio not part of a message is discarded.
The phrases that are to be recognized are determined by the host script being executed. An example of part of a host script is shown in Table 1.
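The table itself is not reproduced in this text. Based on the clauses described in the next three paragraphs, the fragment would look roughly like the following reconstruction; the exact keyword syntax and the goto lines that restart the loop are assumptions.

    :getmessage
        timeout 60, hangup
        expect vwEndOfMessage
            status "End Of Message"
            save Inbox
            send 9
            goto getmessage
        expect vwNextMessage
            status "Message Saved"
            capture 1000
            goto getmessage
        expect vwEndOfMessages
            hangup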
In the above example, a label (:getmessage) is associated with three expect clauses, and a timeout value of 60 s (i.e., if nothing happens in 60 seconds, the voice server application terminates the connection). Each expect clause instructs the program to compare the signatures of incoming audio packets with the signature for an existing phrase (i.e., the signatures vwEndOfMessage, vwNextMessage, and vwEndOfMessages). There can be multiple parallel expect clauses, as shown in the above example. In this case, the incoming audio is compared with three identified possible phrases. If one of the phrases is recognized, the actions associated with the expect clauses are executed.
In this example, if vwEndOfMessage is recognized by the voice server software then a status message "End Of Message" is output, the message is saved in the Inbox of the message store 424 (see FIG. 4), and a "9" DTMF code (or whatever DTMF code that particular VR system requires to save a message) is sent to the remote VR system to also save the message in its predefined storage.
If vwNextMessage is recognized (signifying the start of a new message), the message "Message Saved" is output, and the capture of the new message begins. The parameter 1000 on the "capture" statement indicates that the first 1000 ms of audio should be trimmed from the message (for cosmetic reasons). If vwEndOfMessages is recognized (indicating the end of the last message), the voice server software terminates the call.
FIG. 15 provides details of how the recognition of incoming audio phrases proceeds. Recognition does not begin until two audio buffers have been captured from the voice modem. Audio buffers 1500A and 1500B are each N samples in length. At each cycle of the recognition loop (indicated by block 554 of FIG. 5), the 2N samples comprising the previous audio buffer and the current (most recently arrived) audio buffer are processed by iterating through a series of sample windows, each of width N samples, starting at positions 0, W, 2W, and 3W, where W is an exact fraction of N (in this example, it is assumed that W=N/4). At each iteration, the start of the sample window is advanced by W samples.
Use of this sliding window arrangement to derive successive input audio buffers is intended to compensate for the fact that the voice server application does not know where the real-time audio starts relative to the start of the recorded signature that is being compared with it. By ensuring that successive buffers overlap with each other, the discrimination of the recognition is improved, and the possibility for signatures to go unrecognized is reduced. This aspect of the invention is further discussed below, in relation to signature creation.
The audio amplitude data in each window sample buffer (i.e., buffers 1508A-1508D) are processed to create a corresponding DFT, thereby producing DFTs 1509A-1509D. The generation of such DFTs is well-known to those of ordinary skill in this art. Each DFT represents the spectral characteristics of the audio data. Each data item in the DFT represents the normalized power present at a particular audio frequency. For an audio dataset of N samples, the DFT consists of N/2 values. For each of these values i, where i ranges from 1 to N/2, the value represents the power present at frequency index i (i.e., at i/T Hz). If the original N audio samples represent T seconds of real time, then the real frequencies represented by the DFT are in the range 1/T <= f <= N/(2T). For example, if N is 2000 and T is 1/4 second, then the range of the audio frequencies represented by the DFT is 4 Hz <= f <= 4 kHz.
For the four DFTs created (i.e., DFTs 1509A-1509D), each is compared with pre-computed DFT buffers (DFTs 1510A-1510C are three such DFT buffers), which are the signatures of the audio phrases to be recognized. A correlation function 1512 is applied to each pre-computed DFT (i.e., DFTs 1510A-1510C) and each sample DFT (i.e., DFTs 1509A-1509D) in turn, and if the correlation reaches a predetermined threshold, the phrase represented by one of the signatures 1510A-1510C is deemed to have been recognized, and this recognition is output at a block 1514. Correlation functions for comparing normalized data are well-known in the field of signal processing. The creation of signatures and the setting of correlation thresholds is a function of the learning process, which is described below.
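As a rough illustration of these two operations, the following C++ sketch computes a normalized power spectrum with a naive DFT and compares two spectra with a standard correlation coefficient. It is a sketch only: a production implementation would presumably use an FFT, and the normalization details are assumptions.

    #include <cmath>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Normalized power spectrum of N samples via a naive O(N^2) DFT. Entry i-1
    // holds the power at frequency index i (i.e., at i/T Hz), for i = 1 .. N/2.
    std::vector<double> powerSpectrum(const std::vector<int16_t>& samples) {
        const double pi = std::acos(-1.0);
        const std::size_t n = samples.size();
        std::vector<double> power(n / 2, 0.0);
        for (std::size_t i = 1; i <= n / 2; ++i) {
            double re = 0.0, im = 0.0;
            for (std::size_t t = 0; t < n; ++t) {
                const double angle = 2.0 * pi * double(i) * double(t) / double(n);
                re += samples[t] * std::cos(angle);
                im -= samples[t] * std::sin(angle);
            }
            power[i - 1] = re * re + im * im;
        }
        double total = 0.0;
        for (double p : power) total += p;
        if (total > 0.0)
            for (double& p : power) p /= total;   // normalize overall level away
        return power;
    }

    // Pearson correlation coefficient between two equal-length spectra; a phrase
    // is deemed recognized when this value reaches the signature's threshold.
    double correlate(const std::vector<double>& a, const std::vector<double>& b) {
        const std::size_t n = a.size();
        double ma = 0.0, mb = 0.0;
        for (std::size_t i = 0; i < n; ++i) { ma += a[i]; mb += b[i]; }
        ma /= n; mb /= n;
        double num = 0.0, da = 0.0, db = 0.0;
        for (std::size_t i = 0; i < n; ++i) {
            num += (a[i] - ma) * (b[i] - mb);
            da  += (a[i] - ma) * (a[i] - ma);
            db  += (b[i] - mb) * (b[i] - mb);
        }
        return (da > 0.0 && db > 0.0) ? num / std::sqrt(da * db) : 0.0;
    }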
Preferably, buffers 1500A and 1500B (the recognition buffers) each include 1/4 second of audio data. Thus, buffer chunks A-H each include 1/16 second of audio data. Four buffer chunks combined include 1/4 second of audio data. As described in conjunction with FIG. 10, the best DFTs used for the signature (i.e., signature DFTs 1510A-1510C) are preferably based on 1/4 second of audio data. It should be understood that DFTs could be generated based on different lengths of audio data, as long as the DFTs in the signature file and the DFTs generated from incoming audio, as described in FIG. 15, are based on samples of comparable size. Empirical data indicate that samples of 1/4 second provide good results.
As described above, once a phrase is recognized, the actions associated with its expect clause are executed, as defined in the current host script. The host script typically contains multiple labels, each associated with one or more expect clauses and actions. One of the results of recognition, therefore, can be the transfer of control from one label to another in the state table program. This transfer of control is performed via the "goto" statement. Table 2, which follows, shows examples of the "goto" statement in host scripts.
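Table 2 is likewise not reproduced in this text. A plausible reconstruction, based on the description in the next two paragraphs, follows; the &p password token, the trailing DTMF character after the password, and the handling at :start are assumptions, while "send &n,*", the 20 second timeout, and the error report are taken from the description.

    :start
        expect nxEnterPhoneNumber
            send &n,*
            goto password

    :password
        timeout 20, error "E Number_Rejected"
        expect nxEnterPassword
            send &p,*
            goto preamble

    :preamble
        ...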
In the example of Table 2 there are three labels: ":start," ":password," and ":preamble." Control starts at the label ":start," and the program waits for the remote voicemail system to say, "Please enter your telephone number." This action triggers the expect clause for the signature "nxEnterPhoneNumber," at which point, the script sends the telephone number (followed by an *) to the remote VR system as a sequence of DTMF tones "send &n,*". A "goto" statement is then used to pass control to the label ":password". The ":password" label expects to hear "Please enter your password" (nxEnterPassword) within 20 seconds. If it does not, the program executes the timeout clause and terminates the call with an error report "E Number_Rejected".
If the password request arrives in time, the expect clause associated with "nxEnterPassword" is executed. The password is sent as a sequence of DTMF tones, and control passes via another "goto" statement, to the label ":preamble," where message processing begins.
The host scripts shown in Tables 1 and 2 are simple examples. In practice it is often necessary to have multiple expect clauses under the same label. Table 3 illustrates the use of multiple expect clauses.
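Table 3 is also not reproduced in this text. A plausible reconstruction, with assumed signature names and follow-on actions, is:

    :howmany
        expect vwYouHaveNoMessages
            status "You_Have_No_Messages"
            hangup
        expect vwYouHaveOneMessage
            status "You_Have_One_Message"
            goto getmessage
        expect vwYouHaveManyMessages
            status "You_Have_Messages"
            goto getmessage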
In the example of Table 3, there are three expect clauses associated with the label ":howmany." When the voice server is executing this script at the label ":howmany," it compares the incoming audio with all three signatures. If the audio matches one of these signatures, then the corresponding expect clause is executed. The script in this example can therefore distinguish between no messages, one message, and multiple messages, and in response, displays the appropriate text "You_Have_No_Messages," "You_Have_One_Message," etc. to the operator.
FIG. 7 shows a flowchart detailing the processing of a call from the voice server application to a remote telephone voicemail system. Once the call has been started in a block 700 and audio processing has begun, the voice server software completes logging in to the remote voicemail system by identifying a security message in a block 702, and responding with the proper password in a block 704. In a block 706, the voice server application processes and identifies the mail box status message, and in a decision block 708, the voice server determines if the mail box is empty. If there are no messages to retrieve, then the call is terminated in a block 720. Otherwise, message playback begins. Note that in some cases, a first message begins immediately following login, and in some cases, a DTMF tone sequence must be sent to begin message playback. Thus, in a decision block 710, the voice server application determines if message playback is to begin immediately. If not, then in a block 712, the correct DTMF tone sequence is sent to begin message playback. In any case, in a block 714, the voice server application waits for any of: a timeout; a "Start of Message" indication; or an "End of All Messages" indication (indicating the last message has been captured).
If a timeout occurs, then the call is terminated in block 720, as indicated above. If a "Start of Message" indicator is received, message capture begins in a block 716 and continues until the voice server application identifies an end-of-message indicator or a timeout, as indicated in block 718. If a timeout occurs, the audio is captured for later review in a block 722, and the call is terminated in block 720. If an "End of Message" indicator is recognized, then the audio that has arrived since the capture was initiated is saved to a message file in a block 726. At that point, the logic loops back to block 714 to await an additional message, a timeout, or an end-of-message indicator, as described above. Multiple messages are captured in this way, until an "End of All Messages" indicator or timeout is received, in which case the call is terminated in block 720, as previously described. In a preferred embodiment, the captured audio messages are encoded in the popular MPEG-1 Layer 3 (MP3) format.
One of the problems with voicemail retrieval is that it is often desirable to keep existing messages within the VR system for extended periods. If a message remains in the user's voicemail box, however, it will be repeatedly downloaded by the software and the user will be confused by multiple copies of the same message. The invention provides a method for recognizing messages that have already been seen. Duplicate messages can then be discarded, hidden from view, or otherwise disposed of.
Each message file, as it is processed, has a key built for it. The key is a short sequence of numbers, saved in a key file associated with the message. This key is based on a compact encoding of the audio spectrum (DFT) of the message. This key can be compared with the keys of other messages using a correlation function. If the keys of the messages correlate, it is assumed that the two messages are identical. By choosing the length of the encoding window to be large with respect to the word length used in the messages (e.g., greater than two seconds), the correlation of messages with differing audio heads and tails (resulting from timing variations during calls to the VR system), but similar bodies, remains high. Because message keys are short (typically 100 bytes or less), the key for a new message can be correlated with a very large number of messages in a short time. A preferred key is the audio spectrum of the whole message, divided into 20 segments. The resulting 20 values, plus the message length and the message position (in the external voicemail box), are stored as American Standard Code for Information Interchange (ASCII) text in a key file.
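A sketch of key generation along these lines follows. The 20-band reduction and the ASCII layout track the description above; the exact field order, and the use of the powerSpectrum() routine sketched earlier (computed here over the whole message rather than a single buffer), are assumptions.

    #include <algorithm>
    #include <cstddef>
    #include <sstream>
    #include <string>
    #include <vector>

    // A message key: 20 coarse spectral values plus the message length and the
    // message position in the external voicemail box, stored as ASCII text.
    struct MessageKey {
        std::vector<double> bands;    // 20 coarse spectral bands
        double lengthSeconds = 0.0;   // message duration
        int position = 0;             // position in the external voicemail box
    };

    MessageKey makeKey(const std::vector<double>& spectrum,
                       double lengthSeconds, int position) {
        MessageKey key{std::vector<double>(20, 0.0), lengthSeconds, position};
        const std::size_t per = std::max<std::size_t>(spectrum.size() / 20, 1);
        for (std::size_t i = 0; i < spectrum.size(); ++i)
            key.bands[std::min<std::size_t>(i / per, 19)] += spectrum[i];
        return key;
    }

    std::string toAscii(const MessageKey& key) {
        std::ostringstream out;
        for (double b : key.bands) out << b << ' ';
        out << key.lengthSeconds << ' ' << key.position;
        return out.str();
    }

Two keys would then be compared with a correlation function over their band vectors, such as the correlate() routine sketched earlier; if the correlation is high, the messages are assumed to be identical.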
FIG. 8 schematically illustrates how message keys are used to recognize similar messages and distinguish dissimilar messages. A new message indicated by an arrow 806A is retrieved by the voice server application in a block 804A. The voice server application processes the message to create a message key file 800A and a message audio file 802A. At some later time, the same message, as indicated by arrow 806B, is retrieved again in a block 804B. Once again, a message key 800B and a message audio file 802B are created. After message key 800B is created, the voice server application compares message key 800B with all other stored message keys. If a match is found, as is indicated by line 808 connecting message keys 800A and 800B, the voice server application knows that message audio files 802A and 802B are for the same message. Message key 800B and message audio file 802B (or message key 800A and message audio file 802A) can be safely deleted, if desired. Now a third message (indicated by arrow 806C), different from the other two, is retrieved at a block 804C. A message key 800C and message audio file 802C are generated. Message key 800C is compared with the keys of all previous messages (including keys 800A and 800B, if both have been saved). In this case, the keys do not match, as indicated by line 810, and the message is considered distinct (i.e., not the same as any other message previously received).
In the above description of the voice server application implemented in one preferred embodiment, the recognition engine (corresponding to recognition engine 434 in FIG. 4 and recognition engine 609 in FIG. 6) uses signatures 611 (shown in FIG. 6) to recognize phrases in incoming audio. FIG. 15 schematically illustrates, and the above discussion explains, the method by which these signatures are compared with the incoming audio. Before a phrase can be recognized by the software, however, it is necessary for the software to be taught to recognize that phrase and to prepare a signature for it.
Thus, before a signature (e.g., vwEnterPassword) can be used in a host script it must be learned by the voice server software. FIG. 9 illustrates the steps involved in teaching the voice server software to recognize a new phrase. In the terminology used herein to describe the voice server application, a phrase represents the audio sequence to be turned into a signature. For example, the signature vwEnterPassword might be associated with a phrase containing the audio "Please Enter your Password."
The basic steps in creating a new signature file are as follows:

1. Make a call using a host script and capture the audio containing the new phrase to be learned.
2. Use the signature creation tool (shown in FIG. 12 and described in detail below) to examine the captured audio sequence offline, to choose the new phrase to be recognized, and to make a signature for it.
3. Save the signature to a file. Preferably, by convention, signature files are named with a two-letter prefix signifying the host, followed by a name identifying the corresponding phrase. Thus, the name "vwEnterPassword" includes "vw" to identify the host (in this case Verizon Wireless®) and "EnterPassword" to identify the phrase.
4. Edit the host script to include the use of the new signature and make a test call using it.
Each of the high-level steps used to create a new signature file is shown in FIG. 9. In this Figure, boxes 901 and 903 respectively separate the steps into online and offline groups. A block 900 indicates a start of the sequence of steps, while a block 918 indicates an end of the sequence of steps. The first step is to make a call to the remote VR system (i.e., to the host) whose phrase is to be learned, as indicated by a block 902. This call is made with a partial script that enables the voice server application to navigate the remote VR system to the point where the host speaks the phrase to be recognized. At this point, the voice server application captures the audio signal, as indicated by a block 904. If a capture command has been executed (as described above), but the call ends before a save command has been issued, the software saves all of the audio after the capture command in a message for diagnostic purposes. Therefore, scripts used for learning purposes usually contain a capture command just before the new phrase is likely to be issued. Because the script generally cannot yet identify the new phrase, a timeout normally occurs after the capture of the new phrase, to end the call, as noted in a block 906. The captured audio is saved as a normal voicemail message within the voice server message store.
The GUI of the voice server software preferably enables any message to be selected as containing the audio for a new signature. In most error or unexpected phrase situations, the scripts will capture trailing audio automatically, and therefore, it is rarely necessary to make extra calls to capture new phrases to be recognized, except when building the basic scripts for a host for the first time.
Once the audio containing the phrase to be learned has been captured in a message, a user selects the create signature tool from the GUI in a block 908. In one preferred embodiment, when using the create signature tool, only one message (corresponding to the next phrase to be recognized) is processed at a time. The message presented to the operator will be the last message captured by the voice server (see block 906 of FIG. 9). When the create signature tool is launched, the last message will be used as the audio source. The user utilizes the create signature tool to select a signature reference phrase in a block 910, as will be discussed in greater detail below. In a block 912, the create signature tool generates the signature by applying a DFT to the audio. In a block 916, the DFT is saved. Thus, each signature file contains the DFT of the phrase audio. Signature creation is described in greater detail below. As already described, this DFT is compared with incoming audio within the recognition engine of the voice server application. Once the DFT has been checked manually and any parameters adjusted (see below), it is saved to a signature file, and the new phrase may now be used in a host script.
In creating a script from scratch, the process illustrated in FIG. 9 is repeated until all the phrases used by a specific host have been learned, and the script for that host is completed. In most situations, only five or six phrases occur in the dialog with a particular host. Therefore, creating support for a completely new host is a relatively simple and quick process.
In most cases, the selection of the phrase to be recognized is straightforward. As will be described in detail below, one preferred embodiment uses signatures that represent a 1/4 second portion of the audio file. Therefore, each phrase is best recognized by that 1/4 second portion of audio that is unique to that phrase (unique in the context of recognizing that phrase from other phrases). At any given time during a call, the "recognition context" is the set of all possible messages that may be heard. For example, in a typical situation during a mail box login, the context is very simple, likely consisting of a phrase similar to "please enter your password," and a timeout error message such as "press the star key for more options." In such a recognition context, the present invention requires the generation of a signature to enable the phrase "please enter your password" to be recognized. It is likely that this phrase will be repeated a plurality of times without interruption, before the error message is played. Because this recognition context is simple, any 1/4 second portion of the phrase "please enter your password" will yield a signature that is readily distinguished over another signature, such as that produced by any 1/4 second portion of the phrase "press the star key for more options."
Table 3 (above) provided a more complex example in which portions of three messages were very similar. Similar messages will likely be encountered when navigating through a menu of a voicemail system. The three messages include: "You have no messages," "you have one message," and "you have <N> messages" (where N is any number corresponding to the number of messages received). Because these messages have parts in common, the portion of the message used to create a signature (i.e., the reference phrase) must be carefully selected. The phrases "you have no messages" and "you have one message" never vary, while the phrase "you have <N> messages" includes the variable N. The following procedure can be used to select a portion of a message that enables the message to be distinguished from similar messages:

1. Recognize that the identical portions of similar messages (i.e., "you have") cannot be selected for generating signatures that will distinguish similar messages. The selected portion must be based on the non-identical portions of the messages (in the instant example, the selectable portions include "no messages," "one message," and "<N> messages").
2. When possible, select distinguishable and non-varying portions of the phrases. In the instant example, the phrases "you have no messages" and "you have one message" can be distinguished by producing a signature based on the word "no" for the former phrase, and the word "one" for the latter phrase.
3. For the remaining messages or phrases, select a portion that is shared in common with similar phrases, such that the portion in common occurs later in the other phrases than their signature portions. Note that in the present example, the words "no" and "one" occur before the word "messages." Thus the word "messages" can be used to generate a signature for the phrase "you have <N> messages," because recognition of the phrase "you have no messages" occurs at "no," and recognition of the phrase "you have one message" occurs at "one."
The operation of the create signature tool (a function of the voice server that is used to select reference phrases and to create new signatures based on the reference phrases) is discussed in detail below.
FIG. 12 shows an exemplary embodiment of the GUI of the create signature tool. It is a typical Windows® dialog box. As indicated above, this tool is invoked at block 908 of FIG. 9, and the last audio file collected will be provided to the create signature tool. The name of the audio file being manipulated to produce a signature is displayed in a text field 1210, while a name selected for the new signature is displayed in a text field 1206. Once a signature has been created, it will be included in a "Completed Signatures" field 1208. As will be described in detail below, multiple signatures can be derived from the same audio file. The hostname for which the signature is being prepared is optionally entered in a text field 1212. By convention, the string entered in field 1212 is the name of the script for which the signature was first developed. Such data are for informational purposes only, and are not required by the voice server.
The audio sequence (i.e., the audio file) for which a signature will be made can be many seconds long, and the audio sequence is displayed as an audio amplitude waveform in a panel 1220. The create signature tool is coupled to the speaker output of the computer, and control buttons 1228, 1234, and 1236 may be used to listen to the selected audio. Button 1236 is a stop button that terminates audio playback. Button 1234 is a play-all/pause button, and if this button is activated, the entire audio sequence is played, starting at the beginning. Button 1228 is a play phrase button that causes only a selected portion of the audio sequence to play. That selected portion corresponds to the portion residing between phrase cursors 1241A and 1241B. The phrase cursors indicate the reference phrase (i.e., the segment) of audio from which the new signature will be built. In a preferred embodiment, phrase cursor 1241A is a green line, and phrase cursor 1241B is a black line, but these colors are not important. Under a default setting in this embodiment, the reference phrase delineated by phrase cursor 1241A and phrase cursor 1241B is five seconds in length. The phrase cursors can be moved within the audio sequence using a cursor slider 1232.
The user chooses the best reference phrase (i.e., the best selected segment of the audio sequence displayed in panel 1220) using cursor slider 1232 and play phrase button 1228. The slider can be moved while the audio is playing, and this feature is of great utility in finding the right phrase (the slider is moved until the phrase is heard). Once the reference phrase has been chosen, and the chosen name for the signature has been entered in "Select Token" text field 1206, the user presses a "Make DFT" button 1226.
The process performed by the create signature tool in response to the activation of "Make DFT" button 1226 is schematically illustrated in FIG. 10. The process involves five steps. Initially, the entire audio sequence is divided into three segments: a segment 1003 corresponding to audio under the reference cursor, a segment 1002 corresponding to the audio preceding the reference cursor, and a segment 1004 corresponding to the audio following the reference cursor. In a first step of the create signature process, the trailing audio (segment 1004) is discarded. In a second step, the remaining audio (segments 1002 and 1003) is divided into 1/4 second segments, resulting in a plurality of buffers 1006 corresponding to segment 1002, and a plurality of buffers 1008 corresponding to segment 1003.
Next, in a third step, a DFT operation is performed on the contents of each of audio buffers 1006 and 1008, resulting in a plurality of DFT buffers 1010 and 1012, each of which is the result of processing the corresponding audio buffers with the DFT function. Buffers 1010 and 1012 are thus referred to as DFT buffers. Note that DFT buffers 1010 correspond to segment 1002 and buffers 1006, while DFT buffers 1012 correspond to segment 1003 and buffers 1008. Thus, DFT buffer 1011 is based on a single 1/4 second buffer from segment 1002.
In a fourth step, the create signature tool selects a single DFT buffer corresponding to the audio under the reference cursor (i.e., from the plurality of DFT buffers 1012, each of which are based on segment 1003). For convenience, the selected DFT buffer will be referred to as the selected DFT (or the best DFT). The selected DFT preferably is least like any of the DFTs derived from the preceding audio (i.e., DFT buffers 1010). A function described in detail below is used to evaluate the differences among the DFTs, to facilitate the selection of the single DFT. As illustrated in FIG. 10, DFT buffer 1016 has been selected as the best DFT. In a fifth step, the selected DFT is saved in a signature file 1020.
While the method by which the best DFT to form the new signature is chosen is very simple, it is quite important. In fact, the selection of a best DFT is an important element in enabling successful functioning of the voice server application. It can be understood with reference to the following observations:

1. The preceding audio (i.e., segment 1002) contains the audio between the start of the message and the reference phrase audio (i.e., segment 1003). This segment of audio represents the ambient environment in which the phrase occurs and may include other "phrases" that are not used as a basis for recognition.
2. It is very important that the best DFT correlates poorly with any of the preceding audio, so that the preceding audio is not incorrectly recognized as the reference phrase.
3. It is very important that the best DFT correlates well with the reference phrase (i.e., segment 1003), so that the recognition engine can be easily triggered.
In order to choose the best DFT, which meets the criteria defined by observation 2 and observation 3 (as described above), the processing proceeds as follows:

1. For each of the plurality of DFT buffers 1012 corresponding to the reference cursor audio portion (i.e., corresponding to segment 1003), a correlation coefficient, c, is calculated between it and each DFT of the preceding audio region (i.e., each of the plurality of DFT buffers 1010). For each DFT in the reference cursor audio region, the maximum value of c over all the DFT buffers 1010 corresponding to the preceding audio portion is recorded as cMAX. (While FIG. 10 appears to indicate that DFT buffers 1012 include five individual DFT buffers, in a preferred embodiment each DFT buffer is based on an audio sample 1/4 second in length, and the reference cursor audio portion is 5 seconds in length. Thus, a reference cursor audio portion (i.e., segment 1003) 5 seconds in length will include 20 discrete 1/4 second samples (i.e., 5/(1/4) = 20), from which 20 different DFT buffers 1012 can be generated.)
2. For each DFT in the reference cursor region (i.e., DFT buffers 1012), a correlation coefficient, k, is calculated between itself and all the other DFT buffers 1012 in the reference cursor region, excluding itself. For each DFT, the largest value of k is recorded as kMAX.
3. For each DFT buffer 1012 in the reference cursor region, a value L is calculated according to the following formula.
L_i = sqrt((1 - cMAX)^2 + (kMAX)^2)

The values of c and k lie between 0 and 1. L_i is the distance of the particular DFT from the origin of the two-dimensional Euclidean space defined by (1 - c) and k. High values of L are therefore preferred, as they indicate low values of c (high values of 1 - c) along with high values of k. The DFT with the greatest value of L is chosen as the best DFT for use in the signature.
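A compact sketch of this selection, reusing the correlate() routine sketched earlier, might read as follows (the function signature is illustrative):

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    double correlate(const std::vector<double>& a, const std::vector<double>& b);

    // Returns the index of the reference-region DFT with the greatest
    // L = sqrt((1 - cMAX)^2 + kMAX^2): the candidate least like the preceding
    // audio (low cMAX) and most like its peers in the reference phrase (high kMAX).
    std::size_t pickBestDft(const std::vector<std::vector<double>>& preceding,
                            const std::vector<std::vector<double>>& reference) {
        std::size_t best = 0;
        double bestL = -1.0;
        for (std::size_t i = 0; i < reference.size(); ++i) {
            double cMax = 0.0, kMax = 0.0;
            for (const auto& p : preceding)
                cMax = std::max(cMax, correlate(reference[i], p));
            for (std::size_t j = 0; j < reference.size(); ++j)
                if (j != i)
                    kMax = std::max(kMax, correlate(reference[i], reference[j]));
            const double L = std::hypot(1.0 - cMax, kMax);
            if (L > bestL) { bestL = L; best = i; }
        }
        return best;   // the DFT saved in the signature file
    }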
Referring once again to FIG. 12, the best DFT selected by the above function (and the associated data) is saved in the signature file using a save button 1218. Preferably, before the new signature is saved, the signature is inspected to determine if it is a good candidate. One such inspection process would be to test the selected best DFT against the audio file selected, to see if the selected best DFT properly identifies the audio file in question. This process is described in greater detail below. If it is determined that the best DFT selected based on a specific reference cursor audio portion does not provide the desired audio file recognition performance, slider 1232 can be used to move reference cursors 1241A and 1241B, so that a different reference cursor audio portion is selected. Then "Make DFT" button 1226 may be pressed again, so that the five-step process described in conjunction with FIG. 10 is executed once again. This can be repeated as often as desired before the signature is saved. The create signature tool is closed using a cancel button 1219.
The determination of whether a given DFT is a good candidate is ultimately a matter of judgment and experience. To aid in the choice, the create signature tool provides a number of aids to assist a user in determining if a selected best DFT will provide the desired audio file recognition performance. These aids, identified in FIG. 12, include the following:

The audio spectrum of the chosen reference signature (i.e., DFT 1016 from FIG. 10) is displayed in an upper panel 1242 of the create signature tool whenever "Make DFT" button 1226 is pressed. The spectral display enables the experienced operator to distinguish between noise and speech, and therefore to adjust the reference point to correspond to a clean segment of speech. The DFT shown in panel 1242 of FIG. 12 exhibits ordered spectral peaks, and thus likely corresponds to a clean speech segment of audio.

The values of c and k for the best DFT, correlated with each DFT in the preceding audio portion (i.e., DFTs 1010 corresponding to segment 1002 of FIG. 10) and the reference phrase (i.e., DFTs 1012 corresponding to segment 1003 of FIG. 10), are displayed in red as an overlay 1238 on the audio timeline. The y scale in this case covers the range 0 to 1.

A green horizontal line 1240 indicates the maximum value of k.

When a DFT is calculated, phrase cursor 1241A (a vertical green line in this embodiment) moves to indicate the start of the chosen signature block.

The value of k is displayed in a dialog box 1215.
In order to determine if the chosen signature block is a good choice, a number of heuristics are applied, as follows:

If the audio segment corresponding to the best DFT does not look like speech (as indicated by observing the DFT displayed in panel 1242), that best DFT should be rejected. This event is very unlikely if the reference phrase corresponds to speech.

If the value of k (as displayed in dialog box 1215) is below 0.75, that best DFT should be rejected.

If the peak values of c, as displayed in red overlay 1238, are above 0.4, the best DFT should be rejected, as values over that amount are likely to result in incorrect recognition.
The example in FIG. 12 matches well with the above defined parameters, and is therefore an excellent candidate for use in creating the tsEnterPassword signature.
In any event, if the user is dissatisfied with the best DFT selected, the user can move slider 1232 to another portion of the audio file, as represented in panel 1220, to select a different best DFT.
In addition to the controls described above, the user has access to a number of additional controls over signature parameters from within the create signature tool. A quantum control field 1230 can be used to improve the discrimination of the values of c. According to this integer value (q >= 1), each reference DFT 1012 is compared to the preceding audio, as is schematically illustrated in FIG. 17.
FIG. 17 illustrates the case where q=4. In a preferred embodiment of the voice server application a value of 10 is used, hence the default value shown in field 1230, but 4 is a good value for illustrative purposes. The method illustrated in FIG. 10 implies a value for q of 1, again for illustrative purposes. Referring to FIG. 17, an audio buffer 1704 contains all the preceding audio with which a candidate reference DFT 1712 will be compared.
DFT 1712 corresponds to the DFT of a specific 1/4 second buffer of the reference phrase segment (segment 1003 from FIG. 10). It is DFT 1712 for which the values of c are being calculated (as indicated in FIG. 17, c is a result 1714 of the comparison of DFT 1712 with DFTs 1708A-1708E). The 1/4 second size of each buffer is a default value. The width of each of the preceding audio buffers 1706A-1706E, from which the preceding audio DFTs 1708A-1708E are calculated, must be the same as the width of the reference phrase segment. Thus, if the audio reference phrase segment is 5 seconds long, and each buffer is 1/4 second, then the audio reference phrase segment includes 20 buffers, and each preceding audio buffer 1706A-1706E includes 20 (1/4 second) buffers. In other words, the audio reference phrase segment and each preceding audio buffer 1706A-1706E have a width of N audio samples.
The value of q determines how far the starting point of the "preceding audio" buffer is advanced for each DFT calculation. N must be exactly divisible by q in the same manner as N/W must be an integer in the discussion of FIG. 15, above.
If q=1, then the starting points S0-S4 (respectively labeled 1716A-1716E) advance by exactly N between each successive portion, and the audio buffers used to calculate c values never overlap. If q is greater than 1, the buffers overlap. The overlap is important because, in the operational mode, the starting point of any preceding audio portion cannot be predicted exactly; this variability therefore needs to be introduced into the calculations. If q is greater than 1, the time resolution of the calculations is effectively increased by a factor of q. The higher the value of q, the greater the processing burden; while this is not a major issue during the operation of the create signature tool (which is not a real-time activity), it is a significant operational trade-off. It has been empirically shown that a value of 10, with a sample size of 1/4 second, performs quite satisfactorily in a preferred embodiment of the present invention.
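The effect of q on the start points of the comparison windows can be sketched as follows (each window is n samples wide, and successive windows start n/q samples apart; n must be divisible by q):

    #include <cstddef>
    #include <vector>

    std::vector<std::size_t> windowStarts(std::size_t totalSamples,
                                          std::size_t n, std::size_t q) {
        std::vector<std::size_t> starts;
        for (std::size_t s = 0; s + n <= totalSamples; s += n / q)
            starts.push_back(s);   // q = 1: disjoint windows; q > 1: overlapping
        return starts;
    }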
The method schematically illustrated in FIG. 17 is similar to the sliding window technique used by the recognition engine, described above and shown in FIG. 15. The use of an overlapping audio window in both the recognition engine and the create signature tool is an important factor in providing satisfactory performance in the present invention. Without overlapping windows, the performance of the preferred embodiment is marginally satisfactory. However, by using sliding windows (as described in conjunction with FIGS. 15 and 17), the performance of the present invention improves remarkably.
A mean factor control 1224 is available in the create signature tool GUI of FIG. 12 and is used to selectively control the DFT samples that are to be considered in the calculation of c values. Each DFT sample is examined and compared to a value, and only DFT samples that exceed that value will be used in the correlation calculations. The specific value employed is the mean of the preceding sample DFTs multiplied by a mean factor. The mean factor can be adjusted using mean factor control 1224. For example, if mean factor control 1224 is set to 2, then only DFT values that exceed twice the mean value will be used in the correlation calculations. Proper adjustment of this control has the effect of removing noise (which has a low amplitude) from the comparisons. It has been empirically determined that selecting a mean factor of 2 usually provides good results.
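One plausible reading of this filter, as a sketch (the description leaves open exactly which mean is used; here the spectrum's own mean is taken, and filtered values are simply zeroed):

    #include <vector>

    // Exclude spectral values at or below meanFactor times the mean, so that
    // low-amplitude noise does not contribute to the correlation calculations.
    void applyMeanFactor(std::vector<double>& spectrum, double meanFactor) {
        double mean = 0.0;
        for (double v : spectrum) mean += v;
        mean /= spectrum.size();
        for (double& v : spectrum)
            if (v <= meanFactor * mean) v = 0.0;
    }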
Referring once again to FIG. 12, a timeout field 1216 corresponds to a functionality that was used in testing and is now obsolete. The timeout value is specified in the host script (see the above description of FIG. 5).
A threshold correlation coefficient displayed in field 1214 corresponds to a critical value. The threshold correlation coefficient determines the sensitivity of the recognition process. When the signature is created, the default value indicated here is defined as equal to one half the difference between k (displayed in field 1215) and 0.5. Typically, for good signatures, the value calculated is greater than 0.62 (indicating a value for k of 0.84 or greater). The user can manually adjust this value (using the slide bar adjacent to field 1214) if desired, before the signature is saved. Threshold correlation coefficient values below 0.6 are suspect, as are k values below 0.8. The threshold correlation value displayed in field 1214 is saved in the signature file and is used by the recognition engine. Note that field 1215 is not a user selectable field.
Since signatures are files, they can readily be copied between voice servers, and signatures prepared on one voice server can be used by other voice servers. Typically, in a multi-server operation (see below), one computer running the create signature tool is employed to prepare signatures that will be used by multiple voice servers. The create signature tool can therefore be implemented as a separate application built around the voice server, but operated independently of the operational servers.
It may be desirable to recalculate an existing signature. The create signature tool can function as an editing tool for this purpose. When the voice server application is operating in manual mode, the create signature tool can be started at any time. In this case, all the installed signatures are displayed and may be chosen from a drop-down selection box 1204. Since the system keeps the audio for all existing signatures, panels 1220 and 1242 instantly show both the audio file and the DFT of the existing signature for the audio file. The phrase cursor is positioned over the existing reference phrase, and the name of the audio file associated with the signature is displayed in a dialog box 1210.
At this point, the user may recalculate the DFT after moving the cursor, delete the signature (using delete button 1222), change the name of the signature (using text field 1206), and/or modify the threshold correlation value in field 1214. Once any such changes are complete, the existing signature can be overwritten using save button 1218. If the name has changed, a new signature is created, so it is possible to derive new signatures from old signatures at any time. If the DFT has not been recalculated, only the changed, non-DFT values (e.g., the threshold correlation coefficient) are saved.
As described above and as discussed in greater detail below, the present invention enables the distribution of digital audio messages via email. Furthermore, the service element of a preferred embodiment of the present invention enables one computer, attached to a voice modem, to act as a server for remote devices that lack a voice modem. In the simplest situation, the process of configuring the voice server application to learn how to interrogate a new type of host (i.e., a new voicemail service, or VR system) is executed and controlled by a user at the computer that implements the voice server application.
On the other hand, it is sometimes useful to enable a user to teach the voice server application to handle a new voicemail host remotely (i.e., from a remote computer that lacks a voice modem). For example, the voice server application may be physically remote from the system administrator. The method of remotely configuring the voice server application to support a new VR host is illustrated in the flowchart of FIG. 13, which enables the voice server application to generate signatures that are to be used to recognize one or more phrases. The process begins at a start block 1300 (and subsequently ends at an end block 1336). The remote computer, upon which the voice server application resides, prepares a host script in a block 1302, and any signature files needed by another server to gain access to the VR host. Once the server computer has access to the VR host using this script, the script enables the server computer to obtain new phrases (i.e., audio prompts to which a specific response is required to navigate a menu in a VR host) from a VR host. That captured audio is returned to the remote computer, and the voice server application residing on the remote computer then generates new signatures that will enable the voice server application to recognize such phrases at a later time.
In a block 1304, the host script prepared in block 1302, and any other configuration information required to enable the server computer to gain access to the VR host, are sent via email to the server computer. When the server computer retrieves this email, the host script and information supplied by the voice server application residing at the remote computer are used by the server computer (running the voice server software and using the scripts and signatures sent by the remote computer) to call the remote VR host (i.e., the remote voicemail system), as indicated by a block 1310. The server computer uses its voice modem to connect to the VR host. Once the connection is established, the server computer executes the host script (emailed from the remote user) in a block 1312. The script enables the server computer to navigate the VR host to the point where the phrase to be learned begins. In a block 1316, the server computer captures the audio containing the new phrase to be learned, as described above with respect to FIG. 9. Since the server computer does not know precisely where the phrase being learned ends, the script captures all the trailing audio (in the manner described above). In a block 1318, the server computer terminates the connection, and then in a block 1320, the server computer returns the captured audio (via email) to the voice server application residing at the remote computer. Once the captured audio has been retrieved by the remote computer (via email, as indicated in a block 1324), it is processed in a block 1328 using the create signature tool, as described in conjunction with FIG. 12, to create a signature for the new phrase. In a block 1330, the new signature and supporting data are added to the host script for the VR host to which the server computer is connected. The process of configuring a new host is normally a multi-step process. In a decision block 1332, the voice server application determines if additional phrases need to be learned. If so, the process returns to block 1302, and an additional script is prepared to once again enable the server computer to capture a new phrase from the VR host. If, in decision block 1332, it is determined that no more phrases need to be learned, then the modified host script is saved in a final version in a block 1334. The process then terminates in a block 1336.
As discussed above, the preferred embodiment consists of three elements. The voice server application has been described above. The second element is the Service, which is built around the voice server application to enable multiple users to access and manipulate their voicemail and other audio messages over the Internet. Thus, in one embodiment, the voice server application resides on one or more server computers, enabling a plurality of clients to access the functionality of the voice server application using the service. The following discussion relates to FIG. 1B, which schematically illustrates the service.
By maintaining scripts for multiple hosts, a single voice server can serve multiple VR systems and multiple users simultaneously. For users sharing the same VR system, no new signatures need be learned. Only the users' passwords and telephone numbers, etc. need be substituted into the host script for their particular type of VR system.
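To make this substitution step concrete, here is a minimal sketch of how per-user details might be merged into a shared host script template. The template placeholders, the expect/send vocabulary shown, and the field names are illustrative assumptions only; the actual script grammar is the one exemplified in Tables 1-3.

```python
# Minimal sketch of per-user substitution into a shared host script.
# The placeholder syntax and field names are assumptions for illustration;
# the patent does not specify the substitution mechanism.
from string import Template

# One generic script per VR system type; signatures are shared by all users.
EXAMPLE_TEMPLATE = Template(
    "dial $mailbox_number\n"
    "expect 'enter your password'\n"
    "send $pin#\n"
    "expect 'main menu'\n"
)

def build_host_script(template: Template, user: dict) -> str:
    """Produce a user-specific host script from the shared template."""
    return template.substitute(
        mailbox_number=user["phone"],
        pin=user["pin"],
    )

print(build_host_script(EXAMPLE_TEMPLATE, {"phone": "4255550123", "pin": "9876"}))
```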
The service functions as an Internet service, with the primary user interface operating over the World Wide Web (although versions of the service could also function on private networks). Users pay for a subscription to the service, and each user has a private Webpage where the user can review and manage the user's voicemail messages. A user can set up an account to retrieve voicemail from any of the voicemail services supported by the host scripts installed on voice servers 129 and 130 (as described above, voice servers 129 and 130 can be implemented on one or more computers that collectively make up spooling computer system 144). Although the voice server application works over long-distance, or even international, telephone circuits, in its normal configuration the service supports scripts for all public voicemail services, and any private scripts for commercial customers, all of whom can be reached by a local call from service center 141. With the exception of the voice servers, each of which in a preferred embodiment is implemented on its own separate computer using the Windows® operating system, all other functionality can be provided by a single computer running a Linux® operating system. The Web interface is provided through a familiar and standard Web site server software package (e.g., the Apache® Web site server software), and the service uses off-the-shelf components to complete the application, including a relational database, a scripting language (the personal home page, or PHP, scripting language), and the Linux® email system. Messages are stored as files in Linux-based message store 128, and such messages are accessible by both Linux® programs and the voice servers using a standard network file system (the Samba® software is employed in a preferred embodiment of the present invention).
A number of scripts and C++ programs run on the computer running the Linux® operating system to interface between the Web site and the system control and configuration functions. The primary control function is to place jobs in the schedules of voice servers 129 and 130. In addition, a preferred embodiment includes a C++ application that runs on the computing device running the Linux® operating system and routes incoming messages. Those of ordinary skill in the art will recognize that such functionalities are standard with respect to spooling systems and can be implemented using a variety of techniques. The specific techniques described in a preferred embodiment of the present invention are not intended to be limiting. In such a spooling system, a queue of commands (the jobs queue) is generated by one application, and the queue is read and its commands are executed asynchronously by a second application. One advantage of the spooling system is that the two applications may function independently from each other, enabling their functions to spread across multiple computers without the need for sophisticated synchronization.
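As a rough illustration of such a spool, the sketch below has one process drop job files into a shared directory while a second process polls that directory and executes whatever it finds. The directory name and the one-JSON-file-per-job format are assumptions made for illustration; the patent does not specify the on-disk job format.

```python
# Sketch of a directory-based job spool: one application enqueues job files,
# another reads the queue and executes the commands asynchronously.
import json
import os
import uuid

SPOOL_DIR = "jobs"  # hypothetical queue directory shared by both processes
os.makedirs(SPOOL_DIR, exist_ok=True)

def enqueue(command: str, **params) -> str:
    """Producer side: drop a job file into the spool directory."""
    job_id = uuid.uuid4().hex
    path = os.path.join(SPOOL_DIR, job_id + ".json")
    with open(path, "w") as f:
        json.dump({"command": command, "params": params}, f)
    return job_id

def run_pending_jobs(handler) -> None:
    """Consumer side: read and execute each queued job, then remove it."""
    for name in sorted(os.listdir(SPOOL_DIR)):
        path = os.path.join(SPOOL_DIR, name)
        with open(path) as f:
            job = json.load(f)
        handler(job)      # e.g., place a retrieval call on a voice modem
        os.remove(path)   # job done; the two sides need no other coordination

enqueue("retrieve_voicemail", phone="2065550147")
run_pending_jobs(lambda job: print("executing", job))
```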
Referring once again to FIG. 1B, Web interface 122 is the primary user interface of the service. The user uses a Web browser application 120 to communicate with the service. Once the user has completed a login step (a preferred embodiment uses subscribers' telephone numbers and voicemail PINs as the password), the user reaches the user's voicemail homepage. An exemplary homepage 2000 is illustrated in FIG. 20. The voicemail messages are displayed, one to a line, in a main frame 2030 of the page. Each message is tagged with a telephone number 2020 from which the message was retrieved, a time and date 2010 of retrieval, and a length 2009 of the message in minutes and seconds. A space 2007 is provided for each message so that messages can be given a textual memo by the user, or by the system. The user can play a message by clicking on a speaker icon 2006 to the right of the message. This action causes the user's installed streaming media player 142 (FIG. 1B) for MP3 files to start and play a stream of audio delivered by the service.
Users may select one or more messages using checkboxes 2011 at the left of each message, and they may then apply various actions to those messages using the buttons 2002, 2003, 2004, and 2005, which perform the labeled action on the selected message(s). Selecting add Memo button 2002 enables the user to change the text memo associated with the selected message(s). Email button 2003 enables the user to forward the selected messages as attachments by email. Delete button 2004 moves the selected message(s) to a trash folder. Put in Folder button 2005 is a pull-down menu list of the folders displayed at the left of the page, in a frame 2012. These folders are created by the user to manage the messages received by the user more easily. The saved and trash folders are provided by the system. All deleted messages are kept in the trash folder until the user affirmatively deletes them. A user may move between folders and have the messages displayed in the main frame by clicking on the chosen folder in frame 2012. The new folder item in frame 2012 leads to a user interface for managing folders.
The user can also control message retrieval by the voice servers from their Webpage. Note that a frame 2013 (labeled Voicemail Boxes) of homepage 2000 indicates that three telephone numbers are supported in this exemplary account. By clicking on a telephone icon 2022 that is disposed next to the appropriate number, a user can initiate voicemail retrieval for that number. By pressing on a trashcan icon 2024 next to a number, a user can delete the messages still saved on that telephone voicemail account, using the voice server. A "Retrieve All Voicemail" button 2026 is provided to retrieve messages from all their telephone voicemail accounts in one step. Activation of buttons 2022, 2024, and 2026 causes the system to create jobs in the jobs queue of voice server 130 (FIG. 1B). The progress of any retrieval calls is displayed on a call status bar 2008. Various configuration, help, and account administration functions are provided through tabs 2001, on Webpage 2000.
Referring once again to FIG. 1B, messages and commands can be sent into the system via email gateway 125. Audio messaging application 123 (described in detail below) can be used to send a message, composed on an Internet computing device, to email gateway 125 via email. If this message is correctly addressed, the message can be deposited in the Inbox of one of the service's users in message store 128, or forwarded by telephone to an external telephone number via a job being placed on the job queue of the "send by telephone" voice server 129. The job command includes a copy of the message to be sent.
Telephone text messaging services can be used to send commands directly from mobile telephones 166 to the service using PSTN line 164, via email gateway 125. Typically, such commands are used to initiate the fetching of voicemail before the user is at their computer. This ability for users of the service to initiate retrieval remotely, without Internet access, enables the service to avoid polling users' voicemail accounts except when the users want their voicemail, but at the same time, enables the users' messages to be ready before they reach their computer. For example, users can send text messages to the Service from within their cars before they reach home, and the service will retrieve their messages, such that the messages are ready for review by the time the users arrive at their homes.
Outgoing Internet email interface 127 enables two functions of the service. The first function relates to the forwarding of copies of messages by email, either on user demand, or automatically, as part of the service. For example, automatic email forwarding will enable a user to automatically receive copies of all voicemails for the user on the user's PDA. The second function of email interface 127 is to allow a user to automatically receive voicemail within the user's email client 126. In the latter application, each user is provided with an email address on the service (e.g., an address based on the user's telephone number at the service's domain). Whenever a user retrieves email at this address (by calling the service over email interface 127), the user will initiate a call that will retrieve voicemail saved for the user's telephone number(s). The user will thus receive an email with the voicemail messages included as attachment(s).
Since the service enables its users to consolidate voicemail from multiple telephone accounts in one place, it functions as a universal voicemail service. In order to capitalize on this feature, the service itself offers a standard voicemail system interface 140 to its users. In a preferred embodiment, voicemail system interface 140 is a standard Linux® software package (Vgetty®) that interfaces with message store 128. Users dial in using telephone 139 and PSTN line 135 to reach the service's voicemail access number and then listen to their messages, just as with a conventional voicemail system. However, the present invention enables each user to access all the user's voicemail, for all of the user's telephone accounts, with one call. Interface 140 provides all the standard telephone voicemail message review and management features, controlled from the telephone keypad.
One of the functions of the service is a Send-by-Phone function. This functionality uses the voice server application differently. Instead of capturing audio, the voice server application plays audio down the telephone connection. The voice server calls the recipient of the message directly, even if the recipient is not a subscriber to the service. The host script used to send the message can discriminate between the telephone being answered by a human and one answered by a machine. When the telephone is first answered, the voice server plays a message such as "press star for an important message from <whomever>." If a human answers and presses the * key on the telephone, the human will hear the message directly. If, however, the incoming audio is interrupted by a beep, the voice server starts playback and leaves the message on the recipient's voicemail or answering service. If no star key is pressed and there is no beep, the message is retained and the call is attempted again at a later time. The above sequence is important because it minimizes the annoyance to the recipient and ensures delivery of the voicemail.

In order to make send-by-phone function in this manner, two additional recognition features of the preferred embodiment are used. The first allows the host script to distinguish between spoken voice and machine-generated tones (i.e., beeps). By placing the statement "expect Voice" in the script, the associated actions will be executed whenever human speech is heard by the voice server. If the statement "expect Tone" is placed in the script, then the associated actions will be executed whenever a tone (of any frequency) is heard. Tables 1-3 provide examples of other expect statements, and the "expect Voice" and "expect Tone" statements are prepared in a similar manner. These functions are implemented in the voice server as built-in signatures that are triggered based on the number of frequency peaks in the incoming audio. If the number of frequency peaks in the DFT of the incoming audio falls below a threshold, then `expect Tone` is triggered. If the number of frequency peaks in the DFT exceeds a certain threshold, then `expect Voice` is triggered. In a preferred embodiment, the value 6 (i.e., 6 peaks) is used as the threshold for Tone recognition and the value 20 (i.e., 20 peaks) is used as the threshold for Voice recognition, as speech normally includes more spectral peaks than does a machine-generated tone or beep. The second feature that supports send-by-phone is the ability of the host script to be triggered by an incoming DTMF tone from the user (e.g., the star key in the above example). In order to recognize a particular DTMF tone, the statement `exdtmf <tone>`, where <tone> is any single DTMF character (0123456789*#ABCD), is used. When the user enters the "A" DTMF tone, the actions associated with any corresponding exdtmf clause are executed.
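A minimal sketch of this peak-counting test is shown below, using the 6-peak and 20-peak thresholds given above. The peak-picking rule (a bin louder than both neighbors and above 10% of the spectrum's maximum) is an assumption; the patent specifies the thresholds but not the peak detector.

```python
# Sketch of the built-in "expect Tone" / "expect Voice" test: count spectral
# peaks in the DFT of an incoming audio block and compare them against the
# thresholds stated in the text.
import numpy as np

TONE_MAX_PEAKS = 6     # at or below this: machine-generated tone/beep
VOICE_MIN_PEAKS = 20   # at or above this: human speech

def count_spectral_peaks(samples: np.ndarray) -> int:
    mag = np.abs(np.fft.rfft(samples))
    floor = 0.10 * mag.max()   # assumed noise floor for peak picking
    interior = mag[1:-1]
    # A peak is a bin louder than both of its neighbors and above the floor.
    peaks = (interior > mag[:-2]) & (interior > mag[2:]) & (interior > floor)
    return int(peaks.sum())

def classify(samples: np.ndarray) -> str:
    n = count_spectral_peaks(samples)
    if n <= TONE_MAX_PEAKS:
        return "Tone"
    if n >= VOICE_MIN_PEAKS:
        return "Voice"
    return "Indeterminate"

rate = 8000
t = np.arange(rate) / rate
beep = np.sin(2 * np.pi * 1000 * t)   # a single-frequency beep: one peak
print(classify(beep))                 # -> "Tone"; speech would show many peaks
```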
As discussed above, it is possible to compose messages using an Internet appliance (such as computing device executing Audio messaging application 123) on the Internet, and then forward these messages to the service over Internet connection 124, via email gateway 125. Such messages can be routed to message store 128, and either retained there until the recipient retrieves them, or the messages can be sent by telephone via voice server 129, as described above. When coupled with mailing lists comprising multiple telephone numbers, the send by telephone service can be used to construct interesting vertical applications, for example, in the field of telemarketing.
The messages arrive in service center 141 by two means: either as email (via email gateway 125) or by telephone (via voice server 130). If the messages arrive by email, they are distributed by a program running on the mail gateway's input, directly into message store 128 or placed into the outgoing message job queue of Send-by-Phone voice server 129. If the messages arrive by telephone, they arrive in a directory (preferably named the "arrival directory") owned by the voice server and accessible by the computer running the Linux® operating system, over the network. A routine runs periodically (preferably every minute) on the Linux computer and checks for any new messages in the arrival directory. A time stamp of the last check is used to detect new files, and a lock file is used by the voice server to lock out the Linux program during file creation, when there is a danger of copying partial messages. Each message consists of a WAV file containing the message in uncompressed PCM audio format, and a meta-file containing the routing information for the message, its time of retrieval, its length, and other housekeeping data for the message. If a new message is found, the Linux program encodes the audio from the WAV file into another file in compressed MP3 format. This MP3 file is moved directly to the message store directory of the intended recipient. The newly arrived message can then be viewed with Web interface 122. This method has two advantages: (1) the interface is simple and asynchronous, making the system simpler and more reliable; and, (2) keeping copies of the original messages in the arrival directory provides for redundancy and further improves the system's overall reliability.
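The sketch below illustrates one way such a periodic sweep could be written. The directory names, the meta-file format (JSON), and the use of the `lame` command-line encoder for the WAV-to-MP3 step are all assumptions for illustration.

```python
# Sketch of the once-a-minute arrival-directory sweep described above.
import json
import os
import subprocess

ARRIVAL_DIR = "arrival"
MESSAGE_STORE = "store"          # per-recipient subdirectories
LOCK_FILE = os.path.join(ARRIVAL_DIR, ".lock")
STAMP_FILE = "last_sweep.stamp"  # its mtime records the previous sweep time
os.makedirs(ARRIVAL_DIR, exist_ok=True)

def sweep_arrival_directory() -> None:
    if os.path.exists(LOCK_FILE):
        return  # the voice server is mid-write; try again next minute
    last = os.path.getmtime(STAMP_FILE) if os.path.exists(STAMP_FILE) else 0.0
    for name in os.listdir(ARRIVAL_DIR):
        if not name.endswith(".meta"):
            continue
        meta_path = os.path.join(ARRIVAL_DIR, name)
        if os.path.getmtime(meta_path) <= last:
            continue  # already processed in an earlier sweep
        with open(meta_path) as f:
            meta = json.load(f)   # routing info, retrieval time, length, etc.
        wav = os.path.join(ARRIVAL_DIR, meta["wav_file"])
        dest_dir = os.path.join(MESSAGE_STORE, meta["recipient"])
        os.makedirs(dest_dir, exist_ok=True)
        mp3 = os.path.join(dest_dir, meta["wav_file"].replace(".wav", ".mp3"))
        subprocess.run(["lame", wav, mp3], check=True)  # PCM WAV -> MP3
        # The WAV stays in the arrival directory as a redundant original.
    open(STAMP_FILE, "w").close()  # refresh the sweep timestamp
```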
The third element of a preferred embodiment of the present invention is the audio messenger application (see FIG. 1B, audio messaging application 123). Audio messenger application 123 is a simple popup application that runs on the user's Internet connected computing device. This device should be equipped with a microphone and audio playback capabilities, typically provided through headphones 143.
Using audio messaging application 123, the user may record new audio voice-messages locally and then send them to the service via email gateway 125. These messages are delivered as described above and can be routed to either message store 128, or to the send-by-telephone job queue in voice server 129. An exemplary Windows® operating system version of a user interface 1800 for audio messaging application 123 is shown in FIG. 18. A preferred embodiment of the audio messenger was written in the C++ programming language and has been designed to be ported to multiple computer platforms. The user interface includes the following elements:

• A record button 1801, used to start recording a message entered through the microphone. Each time record button 1801 is pressed, the old (previously recorded) message is overwritten.
• A play/stop button 1802, used during playback to stop playback of the audio. If a message has already been recorded and the stop button pressed, then this button displays a play icon (>), and pressing the button starts playback of the recorded audio. Thus, when audio is playing, this button functions as a stop control, and when audio has been recorded but is not currently playing, it functions as a play button.
• An audio progress indicator 1803. When audio is being recorded or played back, this indicator is animated to provide feedback to the user showing the extent of the message (or the relative position within a recorded message that is being played).
• A Memo field 1813, provided to enable a user to type a text memo to appear with the delivered message (if delivered directly into the message store).
• An Address pull-down 1812, containing a list of addresses entered in the address book by the user. Entries in the address book preferably include three elements: the address name (e.g., John Smith); the addressee's telephone number (e.g., 8088767766); and (optionally) the addressee's email address. Entries are added to the address book using a + button 1811, which displays a dialog box that enables a new address entry to be added. Entries may be edited using a = button 1810, which enables the currently selected address book entry to be edited and re-saved. A - button 1809 is used to delete a selected address book entry.
• A send button 1805, which dispatches a correctly recorded and addressed message to the service, via email.
• A setup button 1806, which displays a dialog box for use in setting up the application. This setup process involves providing the application with personal preferences and login information for the different voice hosts.
• A by phone checkbox 1807 which, if checked, directs the service to attempt to send the message over the telephone using the "send-by-telephone" service of voice server 129 (FIG. 1B). If this checkbox is unchecked, an attempt is made to deliver the message into message store 128 (FIG. 1B).
• A hifi checkbox 1808, which enables the user to direct the system to encode the message at a higher fidelity than that used for telephone messages. If this checkbox is checked, then the message is encoded in the higher quality format, which enables messages containing, for example, music or a high-quality speech recording, to be sent to the service without the loss of fidelity associated with passage over a telephone voice circuit. This option has no effect on the send-by-phone functionality. Normally, a preferred audio messenger application 123 encodes messages in a 16 kbps, monaural MP3 format. If the hifi checkbox is set, then messages are encoded in a 64 kbps monaural format.
A flowchart of the process of recording and sending a voice-message with the audio messenger is shown in FIG. 19. This process starts at a block 1900 when the audio messenger application is started. In a decision block 1902, audio messaging application 123 checks to see if there are any messages saved from the last (offline) session. If no messages are saved, the next step in the process is to wait for the user to record a message, as indicated in a block 1906. If there are saved messages ready to send, the audio messaging application makes an attempt to send them via email to the appropriate gateway, at a block 1904. Each branch from decision block 1902 leads to block 1906. In order to record a message to be sent, the user uses record button 1801 (FIG. 18) to start recording, and stop button 1802 (FIG. 18) to stop the recording when finished. The manipulation of buttons 1801 and 1802 corresponds to block 1906.
Once the message has been recorded, it can be reviewed in a block 1908, using stop/play button 1802 (FIG. 18). In a decision block 1910 the user determines whether the message is satisfactory. If the message is not satisfactory, a new message can be recorded (over the old message), as noted above in a block 1906. Of course, should a user wish to skip the evaluation of decision block 1910, a user can proceed directly to the next step.
If the message is satisfactory, the user can enter a short text memo in a block 1911, which will be delivered to the service with the message. Such entry is optional. In a block 1912, the message is addressed by selecting an entry from address pull-down list box 1812 (FIG. 18). If necessary, a new address is first added to the address book using + button 1811 (FIG. 18). Once the message has been addressed, the user selects any options, such as hifi or send-by-telephone, in a block 1913, to prepare the message for delivery. Once any desired options have been selected, an attempt to send the message is made in a block 1914, using send button 1805 (FIG. 18).
In a decision block 1915, the audio messaging application determines if the gateway needed to send the message is accessible. If so, then in a block 1918, the message is sent by email to service email gateway 125 (FIG. 1B). If the service email gateway is not accessible, then in a block 1916, the message is saved locally for sending when the gateway is next available (see block 1904).
In a decision block 1920, the logic determines if the user desires to send another message. If so, control passes back to block 1906 to wait for the user to record another message. If no more messages are to be sent, the user terminates the Audio messenger program, as indicated by a block 1922.
In a preferred embodiment of the present invention, the service element is implemented using multiple service centers, similar to service center 141 of FIG. 1B. FIG. 11 shows an implementation of the service element that includes three service centers 1100, 1102, and 1104. Each service center serves a different area code. One service center per local calling area is required to enable messages to be retrieved and delivered by telephone at local calling rates. (For the sake of this example, it is assumed that each area-code corresponds to a local calling area for rate purposes).
Each service center, also known as a point-of-presence, or POP, supports all the accounts for telephone numbers within its calling area and also serves as the retrieval and dispatch point for all voice-messaging within the calling area. Voice Messaging, as used herein, refers to the generalized function of sending voicemail messages or messages recorded using audio messenger 1106 by telephone or Internet. Audio messenger 1106 has the same functionality as audio messenger application 123 of FIG. 1B, and is intended to represent audio messenger applications residing on a plurality of Internet-connected user computer devices. Each POP contains at least one voice server performing those functions, and each POP also includes an email gateway function (see email gateway 125 of FIG. 1B) for its calling area.
If a message is to be sent from audio messenger 1106, then it must be directed at the right POP gateway (i.e., to the POP gateway for the recipient's local-call area code). There is no central email gateway, and the various service centers function independently of each other. Messages are routed according to their area codes, and the telephone number part of the address is therefore the critical element. Each POP is represented on the Internet by an Internet hostname corresponding to the area code (or codes) it supports. By convention these service centers are named <area-code>.<service domain>. Therefore, if the service domain is gotvoice.com, then the three POPs illustrated in FIG. 11 have the hostnames as indicated (i.e., 206.gotvoice.com, 425.gotvoice.com, and 808.gotvoice.com). Each of these service centers has a special receiving email address to which messages are directed by audio messenger 1106. Thus, messages for area code 206 telephone numbers (1112) are sent to [email protected], messages for area code 425 telephone numbers (1110) are sent to [email protected], and messages for area code 808 telephone numbers (1108) are sent to [email protected].
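Because the routing is carried entirely by the address, the dispatch rule reduces to a few lines. The sketch below derives a POP receiving address from a recipient's telephone number; the number normalization is an assumption, while the address format follows the convention described above.

```python
# Sketch of the implicit routing rule: the first three digits of a ten-digit
# North American number select the POP's receiving address.
def pop_address(phone_number: str, service_domain: str = "gotvoice.com") -> str:
    digits = "".join(ch for ch in phone_number if ch.isdigit())
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]  # drop a leading country code (assumed handling)
    if len(digits) != 10:
        raise ValueError("expected a ten-digit number: " + phone_number)
    return "receiving@" + digits[:3] + "." + service_domain

print(pop_address("(206) 555-0147"))   # -> [email protected]
```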
It is the function of audio messenger 1106 to route messages directly. If the area code of the recipient is known, then the audio messenger can correctly address the message and send it to the correct service center. The routing is implicit in the addressing scheme, and there is no need for any directory or routing infrastructure other than that provided by the Internet's base services (e.g., the DNS service).
Although a preferred embodiment of the present invention that will be commercially employed does not yet include the following functions, they are expected to be added later, to provide enhanced functionality for the present invention. These functions include:

• Providing subscriber-specific address books at service centers. Although the user's address book may be stored and maintained locally on the computer where the user runs the audio messenger, providing a centralized address book service, connected to the service, will enable the user to access the address book from any location (or from any device), in a similar fashion to the buddy lists of popular instant messenger applications. This facility is of great advantage to a user, since the user need not carry a device in which the address book is stored.
• Providing versions of the audio messenger application compatible with other operating systems, so that voice-messages need not be limited to a personal computer or a laptop computer platform. For example, some PDAs and some smart telephones include dictation functionality. In order to make voice messaging as ubiquitous as possible, it is contemplated that versions of the audio messenger application will be provided for such platforms, including without limitation, computing devices running Microsoft Corporation's Pocket PC® OS, those running the Palm OS®, Linux® or the Symbian® OS.
• Providing import functionality in the audio messenger application, such that in addition to recording messages directly using a microphone, the user may also import WAV and MP3 files into the audio messenger, for delivery using the service. These formats have been chosen for their ubiquity; however, those of ordinary skill in the art will recognize that many other formats could be used with minimal modifications to the preferred embodiment. Thus the identified formats are not intended to limit the invention.
• Providing multimedia functionality. For example, enabling video messages recorded with a WebCam to be sent to the service gateway. Incorporating video playback capability into the service Web site, and adding video messaging to the service, represent straightforward extensions of the technology described above.
In a preferred embodiment described above, the method of the invention is used by a first computer to communicate with a second computer (such as a VR system), where the second computer does not implement the present invention. One additional embodiment of the present invention is directed to two computers that each implement the present invention. When both computers are configured to utilize the present invention, those two computers can be connected using an audio communication channel, such as a telephone line. This embodiment is illustrated in FIGS. 2 and 14. In FIG. 14, an operator/sender 1400 (human or mechanical) inputs the ASCII text "HELLO" into a capture text program in a block 1404, which creates an audio stream encoding message 1402 (i.e., HELLO) as a sequence of audio clips or segments, as indicated by a block 1406. The individual audio clips of the sequence are based on a library 1408 of stored audio clips, or "words". In the example of FIG. 14, it is assumed that each letter in the Roman alphabet is represented by its audio equivalent from the international radiotelephony spelling alphabet (i.e., "A" is represented by the spoken word "alpha," "B" by "bravo," etc.). As will be described in more detail below, the specific audio signal employed to represent a particular text entry can be abstract, as long as the system associates a specific audio signal with a specific text entry.
A call is made to the remote computer using the telephones 1410 and 1414, and audio sequence 1412 (encoding "HELLO") is played across the telephone connection linking telephones 1410 and 1414. In this example, the sequence for HELLO comprises the words: "hotel" . . . "echo" . . . "lima" . . . "lima" . . . "oscar" . . . .
Using the method of the present invention, the second computer recognizes the incoming words/phrases in a block 1416, using a library 1418 of signatures/DFTs (corresponding to the words stored in the sender's library 1408), and a script recognition program 1420 (based on the voice server application described above). When "hotel" is received by the second computer over the audio communication link, the process in block 1416 involves generating a DFT of the incoming audio, and then comparing that incoming DFT with each DFT stored in library 1418, enabling the second computer to identify the text entry corresponding to the audio signal (in this case, an "H" text entry corresponds to the audio signal "hotel"). As the incoming audio signals are recognized, corresponding text is generated in a block 1422, to be communicated to operator/receiver 1426, for example, on a display or as an audible word 1424.
In the example, both computers are operating in a full-duplex configuration. Each computer has available a library of audio signals that correspond to specific text entries, and a library of DFTs corresponding to every audio signal that corresponds to a text entry. Thus, each computer can convert a text entry into an audio signal, and can use the DFT library to recognize an audio signal and recreate the text entry corresponding to that audio signal. Operator/receiver 1426 can therefore not only receive messages, but can also send messages back to operator/sender 1400, using the method described above. Specifically, operator/receiver 1426 can use the second computer to capture a word 1428 as text (as indicated in block 1430), and employ a library 1434 to create an audio stream of sequences in a block 1432. That sequence 1438 is then sent from telephone 1414 to telephone 1410. To enable operator/sender 1400 to decode sequence 1438 in a block 1442, the first computer (i.e., the computer being used by operator/sender 1400) will need to include a library 1450 of signatures/DFTs, and a recognition program 1444.
In the above example, there was a clear correlation between the audio signal (i.e., "hotel") and a text entry (i.e., "H"). It should be understood that the correlation could be entirely arbitrary, enabling coded messages to be sent and received. As long as each computer coupled by an audio link includes matching libraries that map audio signals to text, and DFTs to audio signals, communication over an audio link is facilitated. It should also be recognized that, in the broadest sense, an audio signal does not need to be linked to a single letter of text; rather, each audio signal can be linked to a specific data token. Each data token could correspond to a word, a phrase, a sentence, etc.
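A toy sketch of the receiving side of this scheme is given below: each token's reference spectrum is stored in a library, and incoming clips are matched to the most similar entry. The use of a normalized magnitude spectrum with correlation as the similarity measure is an assumption; the patent requires only that each audio signal map unambiguously to one token. Pure tones stand in for the spoken code words.

```python
# Sketch of text-over-audio decoding: match each incoming clip against a
# library of stored reference spectra and emit the corresponding token.
import numpy as np

def spectrum(clip: np.ndarray) -> np.ndarray:
    """Normalized magnitude spectrum used as a crude signature."""
    mag = np.abs(np.fft.rfft(clip))
    norm = np.linalg.norm(mag)
    return mag / norm if norm else mag

def decode(clips, library: dict) -> str:
    """Map each incoming audio clip to its closest token in the library."""
    out = []
    for clip in clips:
        s = spectrum(clip)
        token = max(library, key=lambda k: float(np.dot(s, library[k])))
        out.append(token)
    return "".join(out)

# Toy library: one pure tone per letter stands in for "hotel", "echo", ...
t = np.arange(4000) / 8000.0  # half a second at 8 kHz
tone = lambda hz: np.sin(2 * np.pi * hz * t)
library = {ch: spectrum(tone(300 + 50 * i)) for i, ch in enumerate("HELO")}
message = [tone(300), tone(350), tone(400), tone(400), tone(450)]
print(decode(message, library))   # -> "HELLO"
```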
Although the present invention has been described in connection with the preferred form of practicing it and modifications thereto, those of ordinary skill in the art will understand that many other modifications can be made to the invention within the scope of the claims that follow. Accordingly, it is not intended that the scope of the invention in any way be limited by the above description, but instead be determined entirely by reference to the claims that follow. | http://www.patentsencyclopedia.com/app/20130003944 |
This X12 Transaction Set contains the format and establishes the data contents of the Transportation Carrier Shipment Status Message Transaction Set (214) for use within the context of an Electronic Data Interchange (EDI) environment. This transaction set can be used by a transportation carrier to provide shippers, consignees, and their agents with the status of shipments in terms of dates, times, locations, route, identifying numbers, and conveyance.
What is an EDI 214?
An EDI 214 Shipment Status Message communicates the current progress of a given shipment or group of shipments from a carrier to a third party. It contains information about the shipment's current location (MS4 segment), current status (AT7 segment) and tracking number (B10 segment).
How is an EDI 214 used?
For example, Carrier A sends Vendor B EDI 214 Shipment Status Messages every time the shipment is scanned at a new facility en route to the final destination. This allows Vendor B to update their customer on the delivery status of their order.
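To make the exchange above concrete, the sketch below parses a tiny, invented 214 fragment and extracts the fields named earlier: the B10 tracking number and the AT7 status event. The sample data, status codes and reference numbers are illustrative assumptions, and the "*" element / "~" segment delimiters are the common defaults; real interchanges declare their delimiters in the ISA envelope.

```python
# Minimal sketch of pulling B10 and AT7 data out of a raw 214 fragment.
RAW_214 = (
    "ST*214*0001~"
    "B10*4950391*SHIP482*CARR~"        # B10: tracking / reference numbers
    "LX*1~"
    "AT7*X6*NS***20230412*1015*LT~"    # AT7: status code, date, time
)

def segments(raw: str):
    """Split an interchange into segments, and segments into elements."""
    return [seg.split("*") for seg in raw.strip("~").split("~")]

def shipment_status(raw: str) -> dict:
    info = {}
    for elements in segments(raw):
        if elements[0] == "B10":
            info["tracking_number"] = elements[1]
        elif elements[0] == "AT7":
            info["status_code"] = elements[1]
            info["status_date"] = elements[5]
            info["status_time"] = elements[6]
    return info

print(shipment_status(RAW_214))
# {'tracking_number': '4950391', 'status_code': 'X6',
#  'status_date': '20230412', 'status_time': '1015'}
```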
Heading (sequence, segment name, max use, purpose)

0100  Transaction Set Header (Mandatory, Max 1)
      To indicate the start of a transaction set and to assign a control number
0200  Beginning Segment for Transportation Carrier Shipment Status Message (Mandatory, Max 1)
      To transmit identifying numbers and other basic data relating to the transaction set
0300  Interline Information (Max 12)
      To identify the interline carrier and relevant data

Detail (sequence, segment name, max use, purpose)

1000 Loop (Repeat 999999)
  0100  Transaction Set Line Number (Mandatory, Max 1)
        To reference a line number in a transaction set
  0200  Business Instructions and Reference Number (Max 999)
        To specify instructions in this business relationship or a reference number
  0300  Marks and Numbers Information (Max 9999)
        To indicate identifying marks and numbers for shipping containers
  0400  Lading Exception Status (Max 10)
        To specify the status of the shipment in terms of lading exception information
  0500  Remarks (Max 10)
        To transmit information in a free-form format for comment or special instruction
  0600  Bill of Lading Handling Requirements (Max 10)
        To identify Bill of Lading handling and service requirements
  0700  Shipment Weight, Packaging and Quantity Data (Max 10)
        To specify shipment details in terms of weight and quantity of handling units

  1100 Loop (Repeat 10)
    0800  Shipment Status Details (Mandatory, Max 1)
          To specify the status of a shipment, the reason for that status, the date and time of the status and the date and time of any appointments scheduled
    0900  Equipment, Shipment, or Real Property Location (Max 1)
          To specify the location of a piece of equipment, a shipment, or real property in terms of city and state or longitude and latitude or postal code
    1000  Equipment or Container Owner and Type (Max 2)
          To specify the owner, the identification number assigned by that owner, and the type of equipment
    1100  Remarks (Max 1)
          To transmit information in a free-form format for comment or special instruction
    1200  Seal Numbers (Max 1)
          To record seal numbers used and the organization that applied the seals

  1200 Loop (Repeat 5)
    1300  Party Identification (Mandatory, Max 1)
          To identify a party by type of organization, name, and code
    1400  Additional Name Information (Max 1)
          To specify additional names
    1500  Party Location (Max 2)
          To specify the location of the named party
    1600  Geographic Location (Max 1)
          To specify the geographic place of the named party
    1700  Date/Time (Max 1)
          To specify pertinent dates and times. Note: the G62 segment shall not be used to report shipment status dates and/or times.
Assessment 3 - Written Assignment of 2500 words (50%), due in week 9. In this assignment, you are required to introduce a conflict situation that occurred within your workplace and provide a critical review of the situation, including the following issues:
• The event that led to or resulted in the conflict situation;
• Your position in the institution/ward/workplace (i.e. RN, Associate Charge Nurse, Charge Nurse...);
• Measures you applied to seek immediate resolution;
• The leadership style employed in the management approach/es that you took to resolve the conflict;
• The outcome/s for the patient(s) and/or staff;
• Change/s that took place as a result, to eliminate the probability of a similar event arising;
• Recommendations for improving the outcomes and/or preventing a similar situation from occurring again.
Assessment criteria:
• Select a minimum of 7 refereed literature articles that provide the strongest evidence for your topic. Your articles must be less than 7 years old.
• Use APA format (see the APA website).
Introduction
Conflict can be understood as a difference between two or more people, or between groups or departments, arising from variation in attitudes, behaviors, values, beliefs or goals (Chávez, 2015). In the workplace, organizational conflict commonly stems from two sources, staff-to-staff interaction and manager-to-staff interaction, and it can arise at any time. Its causes may be internal or external, and its effects may be positive or negative. Health professionals deal with both internal and external conflict every day. As a nurse, one has to deliver the best possible care while also handling conflict through appropriate conflict-management approaches. Where conflict in a healthcare setting is left unresolved, it damages long-term professional and interpersonal relationships and has a negative impact on patient care, because the resulting gaps in communication prevent the transfer of accurate information. Interpersonal conflict is therefore a serious issue: it has the capacity to compromise patient care, which is why strategies for its resolution are essential (Chávez, 2015).
In a closed care setting, interpersonal conflict arises when a disagreement exists between two people or between subgroups within a nursing organization. If it is not adequately resolved, it results in significant bitterness and dissatisfaction, and it alters the interpersonal relationships that exist among peers and coworkers. Nurses therefore have to assess such situations positively (Chávez, 2015), identify the significant causes of the conflict, and find an appropriate resolution. Adequate measures must be taken to identify conflict early and to resolve it effectively. Good working relationships matter here: they make the workplace pleasant and help staff deal with stress. Colleagues rely on one another for support with difficult tasks and the challenges of the job, for emotional support and daily wellbeing, and for the companionship that comes from shared activities. Where relationships are poor or misunderstanding persists, conflict follows, bringing distress and an ineffective working environment.
In my experience as a nurse, healthcare organizations must also overcome complex recruitment problems, a lack of resources, failures in communication and work overload. Staff are subject to complex interpersonal dynamics with colleagues and to challenging policies, which affect their sense of identity, their job satisfaction and, in turn, employee and patient satisfaction, on which the standard of care depends (Nieuwboer, 2019). The conflict I encountered as a professional RN was of the interpersonal kind, which is equally challenging and problematic, and the stress it generated carried over into the workplace. A particular problem arises when the manager pays little attention to conflict, whether for any reason or simply because of a busy schedule, and adopts an avoidance style. This fosters poor work-related attitudes and negative psychological states: job dissatisfaction, reduced organizational commitment, turnover intentions, negative emotions and emotional exhaustion. Nurses need to recognize such conflict as a source of harm to patient care; a lack of resolution strategies undermines the effectiveness of the healthcare setting.
To achieve immediate resolution, one has to look proactively for steps and actions (Nzinga, 2019). The situation I describe occurred during my Diploma in Nursing, in an intensive care unit I joined at the hospital. It was a ten-bedded unit staffed by three nurses per shift, one medical officer and one manager for each floor. Nurse turnover was high, reflecting how poorly the employees coordinated with one another: there was poor communication between coworkers, visibly conflicting attitudes from physicians towards the nurses, and a lack of cooperation, with nurses being exploited and teased, along with emotional exhaustion and a lack of respect from colleagues. In such an environment the focus on patients is lost, and patients suffer delayed care and dissatisfaction as a consequence. The conflict event itself arose from this lack of coordination between the physician and the nursing staff. A female patient had a urinary tract infection along with diabetes mellitus, but was not catheterized at admission. In addition, the patient's insulin dose was missed by the nurse. This triggered a dispute between the physician and the nurses on duty. The patient's condition subsequently worsened and deteriorated as a result of the conflict among the healthcare workers, and she remained admitted for ten days. A further problem was that staff complaints about these interpersonal problems were ignored by the manager, whose avoidance strategy worsened the situation. The nurse manager also took an autocratic style with staff members, and partly because of this, three staff nurses, including me, left the job (Storey, 2019).
Strong, effective leadership is a prominent component of any healthcare organization's success, and different leadership approaches suit different situations (Sarto, 2015). Transformational leadership theory places particular importance on the interpersonal relationship between leader and staff. Maslow's higher-level needs are also applicable here: self-esteem and self-actualization, expressed through achievement, respect, confidence, creativity, problem-solving and acceptance. Leaders need a vision and a strategy to help overcome conflict and to accomplish change in line with that vision. Nurse leaders face the challenge of motivating staff to function beyond self-interest, and they must evaluate the knowledge and skills of subordinates so that participation can be anticipated and supported. A transformational leadership style helps individuals to know themselves, to relate to the job and to see the value they add to the organization, the business and the wider world. Effectively implemented, transformational leadership aligns priorities, shared values and perceived common goals within the healthcare environment (Chávez, 2015). Transformational leaders regularly set team goals, focus on the use of rewards for cooperative behaviors, confront disagreement, and motivate team members to integrate opposing views and to rely on accurate information and the best ideas of their co-workers (Swanwick, 2017).
Nurses and managers must together establish good interpersonal relationships if the organization is to be productive and achieve its desired goals. As the scenario above shows, there were multiple conflicts involving coworkers and physicians, and the result was an impact on patient care: the patient suffered on account of the interpersonal conflicts in the unit. The employees, in turn, faced a lack of job satisfaction and an inadequate managerial style in dealing with the issues (Rosser, 2017). Interpersonal conflict is widespread in healthcare organizations; nurses face distressing events, violent language and non-supportive behavior from physicians (24.4% prevalence) and from colleagues or supervisors (23.4% prevalence). The situation described here is representative of that uncertainty and of how difficult such an environment is to cope with. Conflict with physicians affects nurses' experience of their profession, producing increased feelings of exhaustion, reduced job satisfaction and intentions to leave nursing. Conflict with coworkers or with a supervisor likewise produces negative personal and organizational outcomes. The manager's avoidance strategy in the face of conflict was a non-strategic style that did nothing to improve interpersonal relationships among coworkers.
Changes
The change needed is to reflect on one's own values, to manage the situation, and to work deliberately at enhancing interpersonal relationships. Conflict among nurses hurts the retention of qualified staff and harms patient outcomes, and the shortage of nurses caused by job dissatisfaction, itself a result of workplace conflict, creates a stressful and unpleasant environment.
When a patient is exposed to danger, for example through an alarming medication error, and a situation threatens to get out of control, the leader must identify skillful strategies immediately and implement them accordingly, to avoid any further damage. Good relationships are a crucial component of any organization, and poor communication skills among coworkers are a recognized source of conflict. By close evaluation, 60-80% of conflict in organizations has been attributed to poor relationships, and managers spend 25-40% of their time resolving conflict.
Recommendations
The recommendation is to apply the Situational Analysis of Management, which offers a broad model of managerial behavior. In this model, managerial behavior can be positive or negative depending on the specific situation. The theory describes three dimensions. The first is task orientation, in which the manager directs subordinates' efforts towards goal achievement through planning, organizing and controlling. The second is relationship orientation, in which the manager builds personal job relations characterized by mutual trust, respect for employees and concern for them (Short, 2017). The third is effectiveness, the extent to which the manager meets the output requirements of his or her position; a given managerial style will be effective or less effective depending on the situation. Effectiveness therefore depends on how well the management style fits the situation the manager faces.
Traits that Foster Success
Students read literature in order to identify traits exemplified by strong characters that foster their own success or the success of others. They generate a list of traits that facilitate success, such as tolerance, respect, and integrity. Students also use various picture books in which characters demonstrate traits of success, along with photocopied covers of the literature. Finally, they create their own covers on poster board.
Narrative Writing: An Imagined Story Unit Introduction
Let's pretend! Ben Bova's short story "Moonwalk" is used as a mentor text in a unit that shows kids how to create their own imaginative narratives. A great resource that deserves a special place in your curriculum library.
4th English Language Arts CCSS: Adaptable
At the Head of Her Class, and Homeless
What does it take to overcome obstacles in life? Learners read about a high school student who demonstrates some of the character traits that prove useful when facing adversity. Class members respond to questions about the article's...
5th - 8th English Language Arts CCSS: Adaptable
Section One: What is Biodiversity?
Four intriguing and scientific activities invite learners to explore the natural resources of their town. The activities cover concepts such as genetic traits, organizing species in a taxonomy, the differences between different species...
3rd - 5th Science CCSS: Adaptable
Tuck Everlasting: Bio-Poem
Learn about the characters of Natalie Babbitt's Tuck Everlasting with a character biopoem. Readers fill in a poem format to detail the character traits of Winnie, Jesse, Miles, and Mae, and share their finished poems with their peers.
4th - 7th English Language Arts CCSS: Adaptable
Critical Thinking through Core Curriculum: Using Print and Digital Newspapers
What is and what will be the role of newspapers in the future? Keeping this essential question in mind, class members use print, electronic, and/or web editions of newspapers, to investigate topics that include financial literacy,...
3rd - 12th Math CCSS: Adaptable
Start With What Isn't There
Explore a different style of writing! Read Caves, a picture book written by Stephen Kramer, which describes what isn't seen before describing the actual setting. After discussing the effect this style has on the audience, writers attempt...
2nd - 5th English Language Arts CCSS: Adaptable
CREATING A CHARACTER TRAIT MOBILE FROM THE OPERA THE LITTLE PRINCE
Students create a mobile that includes each of the six planets. They list the character traits of each of the characters from the six planets visited by the Prince. They present their project to the class and teacher. | https://www.lessonplanet.com/teachers/traits-that-foster-success-4th-6th |
Want to be a better storyteller? It’s time to get back to the basics. No matter how many tips and tricks you utilize, the most important elements in storytelling will always tie back to two things: plot and character.
The Central Plot is the Unifying Plot
Most good stories have at least one or two subplots going on in the background. That’s a great storytelling technique, but only if you remember what story you’re telling. Every piece of the puzzle, including the characters, drama, and tension of your subplots, must build into the final conclusion. The main plot rules all. It helps define your theme, highlights lead characters, and generally motivates readers to pick up your book in the first place.
Evolve Your Characters
If your character is the same at the end of the story as they were at the beginning, then something is missing. It doesn’t matter if you write an epic fantasy adventure or a slow-paced drama starring an elderly couple in a cottage by the seashore. They should experience some kind of transformation. Keep in mind that these changes may not be obvious, but they do need to tie in with the central plot. Even characters in short stories must change. Always remember, as attractive as the conflict in your story may be, readers fall in love and immerse themselves in your characters.
Go for the Meat of the Story, Not the Garnish
It’s easy to get sidetracked when you’re writing, especially if you use an exotic or original world setting. You have so much to explore, and you want to show off this dazzling place. Good showmanship doesn’t always equate to good storytelling, though. Don’t fall into the trap of throwing in extra scenes just to show off something cool that doesn’t influence the plot. Resist creating flashy scenes with random, action-packed encounters that fail to move the story forward as well. These may be fun to write in the moment, but they are nothing but distractions.
Character Motivation Is Everything
We keep talking about plot, but how do you know if you have a good one? That's simpler than you think. Define your plot by character motivation and conflict. Decide what the character wants or needs, then present an obstacle. They may seek understanding from a difficult spouse. Maybe they want to save their home. Struggle generates plot. If you spot your story wandering away from the characters' primary motivations, then that's a sign you should sit down and rethink your original plan. Keep things believable. Even in a world with spaceships or dragons (or both), readers must understand the characters. If a character is easily sidetracked from their key motivation, then they must not want it that badly. That destroys tension and releases readers from the grip of your narrative.
Every story is different, even though they all rely on these crucial elements. Dig into the roots of your own narrative. How could you tighten up your story? | https://www.inkitt.com/writersblog/four-great-storytelling-tips |
Holly Berenson (Katherine Heigl) is the owner of a small Atlanta bakery, and Eric Messer (Josh Duhamel), often just known as "Messer", is a promising television technical sports director for the Atlanta Hawks. Their best friends Peter (Hayes MacArthur), an attorney, and Alison Novak (Christina Hendricks) set them up on a blind date that goes horribly wrong and results in both hating each other. As the years go by, Peter and Alison get married, have a baby girl named Sophie Christina, and select Holly and Eric as Sophie's godparents.
Representing Family Life: Holly & Messer
Character: Holly Berenson - Katherine Heigl
Character: Eric Messer - Josh Duhamel
OPENING OF THE NARRATIVE
What do these images say about how each character's gender was introduced to the audience?
Well-written response: By Student
You can argue that the film text 'Life As We Know It' has an implicit ideology of gender threaded throughout the narrative. Discuss this claim using media language.
- 6 marks -
'Life As We Know It' (2010) has an implicit ideology suggesting that the protagonist and antagonist hold conflicting impressions of gender within the narrative. They are of opposite genders, allowing the director to project both femininity and masculinity in society's true form. The male character is stereotypically stern, tough and physically strong, and the antagonist embodies all of this. He is the 'typical' male figure in the media of the period around 2010, whereas the female character is gentle, empathetic and calming.
She is projected through colours of red to admire her delicate body and her interest in material possessions, such as her car, shoes and lipstick. This red is often seen to project passion and a woman's love. The red was contrasted in the narrative with the male's black clothing and motorbike, as he is seen as an indestructible, 'tough' figure that will not be broken down by a woman's genuine love. Through the use of colour this analogy is seen, as the protagonist represents the ideology of gender through sensitivity, gentleness and empathy, whereas the antagonist reflects the ideology through being harsh, careless and physically strong.
Story arc is another form of media language that helps illustrate this implicit ideology of gender. The narrative experiences conflict when following the typical storyline of success to failure, then overcoming adversity to reach victory. During their moments of conflict, the woman appears to embody the ideology by becoming driven, strong and independent. She is career focused, just as the antagonist is. He, like men within the ideology of gender, continues to place his job and work ahead of relationships with friends and family, thus again reflecting their different views of gender.
Well-written response: By Student
In one of the texts you studied, how were multiple storylines employed to convey the ideology of gender?
- 4 marks -
The opening of the text 'Life as We Know It' (2010) depicts the characters of Holly Berenson (Katherine Heigl) and Eric Messer (Josh Duhamel) as two conflicting characters with contrasting lifestyles. The storyline follows the two after they are paired on a blind date; Messer is shown as a laid-back 'playboy' with limited responsibility. In contrast, Berenson is depicted as a woman of success who is educated and professional. As the narrative develops, audiences become more aware of the interrelated storylines, where the lives of the characters intersect to impact on one another. Through the use of codes and conventions to establish and develop character possibilities for the audience, the narrative progresses to show the contrasting storylines meeting. The death of their mutual friend leaves the two to work collaboratively and look after a child together. The boisterous lifestyle of Messer and the structured routines of Berenson are challenged when they are forced to cooperate and create a life for their new child, where the implicit ideology of gender is shown through their interactions. The two are made to connect despite their differences; the director employs the use of story arcs to display the perceived ideals of family life and thus informs the audience of how gender stereotypes have evolved.
Cause & Effect & Ideology response to a question
Explain how cause and effect propels a narrative. Using cause and effect, provide an example of a situation that underpins the ideology of gender in one of the texts you studied.
- 6 marks -
For every action there is a reaction. Audiences are exposed to the causes and effects of narratives in all media productions. Without these chains of events, audiences will disengage and have no anticipation of events to come.
Life as We Know It (2010) is a text that provides audiences with several events that impact them and help them identify with the ideology of gender. The narrative is riddled with situations that require the two leading characters to demonstrate their ability to be a good mother and father. This storyline of human mishaps is a cause-and-effect structure often found in the romantic comedy genre.
This text also presents the audience with a counterforce that creates conflict with the two protagonists and is resolved in the resolution. Restoring equilibrium for audiences is another cause-and-effect convention employed to satisfy the audience. The example in Life as We Know It is allowing the audience to see the female and male leads unite as a family, allowing the ideology of gender to suit the time period in which the text was distributed. In 2010, American audiences wanted to see children raised by two parents who represent the ideology of gender roles within a stereotypical family.
It’s a challenge for most authors to write captivating fictional characters. Story premises, plots, and concepts are worth nothing if the characters aren’t fully fleshed out, but writing great fictional characters is no easy task.
Develop Background
The first step to writing great fictional characters is establishing the character’s background. Use yourself as a reference point–what makes you who you are? How did your background affect you and impact the choices you made, are making, or will make in the future? How does your background shape you, change you, and form you as a person?
Using these ideas, develop a background for your character, and make sure to consider how this will affect your story. If Felicity grew up with an abusive father as an only child in New York City, she may refuse her boyfriend's proposal because of the relationship phobia she developed under that abuse. If Jack was the golden boy who had no trouble attracting girls, being the star quarterback, and getting great grades in high school, that would explain his cocky personality.
The best tip to help you develop a character’s background is to ask yourself questions like the ones you asked yourself earlier: my character’s anger problems are chronic–what background could I develop to go along with this story? How would this background influence my character? Developing your character’s background adds more dimensions to your character and makes them seem like real people.
Create Relationships
A good character, even if he is an introvert, becomes truly three-dimensional when he interacts with other people. The same trick applies to real life–people judge others after seeing how they interact with the people around them. Similarly, readers will understand your characters better after they see your characters in a social situation.
Let your character build relationships with other characters. The more complex the relationships, the better. While building these relationships, use the background you developed, reflect on it, and think about how the character would act in a certain situation. For example, if the Jack we talked about above got turned down by an ordinary-looking girl in a club, would he meekly apologize and end their relationship there? No way! Jack’s background shows that he would pursue this relationship by either harassing the woman until she gives in or getting angry at her.
When introducing other characters your characters have relationships with, try to add another level of depth to each relationship. Was the girl who turned Jack down at the club his brother’s ex-girlfriend who was put into rehab for murdering Jack’s brother? How would that change and develop Jack’s character? Don’t be afraid to get complex and gritty. The further you explore, the better results your characters will yield.
Encourage Conflict
In fact, not only should you encourage conflict, but it's a must-have in any story! Conflict is what makes characters anxious, what gets readers rooting for them, and the number-one driver of personal development in characters. Experiment with all the conflict you could possibly inflict on your character. Referring back to Jack's example: what if his brother's murderous ex-girlfriend is out to get Jack next? What if she kidnaps him? The tension here is tangible and will definitely help a character grow in the eyes of a reader.
When developing conflict, think about how your characters would deal with it based on their relationships and their background. Jack's background might lead him to underestimate "an ugly girl," and that could get him into serious trouble. Or his relationship with his deceased brother could cause him to drop his arrogance once and for all and fight his brother's ex-girlfriend with everything he has. When you create the obstacles you're setting out for your characters, imagine scenarios in your head and play out different options. What would happen if she made this choice? Would this conflict work, or are there better ones?
Also, just as complexity enriches character relationships, be complex when creating conflict. Who wants an easy, one-step problem characters can solve within the first twenty pages? Not your readers! Build rings of tension, and piggyback off ideas you've already formed. Little conflicts can stem from bigger ones, and this makes your characters even better.
Conclusion
Each of these three tips can work individually to help you write better fictional characters. But when you combine them, when their different helpful features interconnect, you produce a character that seems almost to breathe, walk, and talk. Use a character's background to weave in some meaningful relationships, or use those relationships to create conflict. In conclusion, apply your character's background, relationships, and conflicts together in your story to create the best possible fictional characters you can. | https://www.seekyt.com/how-to-write-great-fictional-characters/ |
Awarding university: University of Cape Town.
Degree: Doctor of Philosophy (PhD), 2018
Abstract
Open- (e.g. grassland, savanna, shrubland) and closed-canopy (e.g. forest) biomes frequently coexist in the same landscape, where open environments tend to be fire-prone with higher light, but lower nutrient and water availability, than closed environments. Environmental heterogeneity could select for divergent floristic assemblages and adaptive traits, from which emergent differences in resource availability and fire incidence contribute to excluding species from the alternate habitat. In this thesis, I investigated whether the coexistence of open–closed canopy biomes, such as forest and fynbos in the Cape Floristic Region, is contingent on environmental heterogeneity coupled with contrasting species traits.

Given the heterogeneity in multiple environmental properties between open- and closed-canopy biomes, I hypothesized that boundaries between open- and closed-canopy biomes will display greater floristic turnover compared to boundaries between structurally similar biomes (e.g. open- and open-canopy biomes). To explore this, genus- and family-level turnover were correlated with climate, fire, leaf area index (LAI: a proxy for understorey light) and soil properties across biome boundaries in South Africa. Both genus- and family-level turnovers were highest across open–closed boundaries and most strongly predicted by increased differences in LAI, suggesting that contrasting light regimes provide significant adaptive challenges for plants.

The potential effect of contrasting light regimes is highlighted by the absence of open-canopy species from forest understoreys, where low, dynamic light could limit the ability of plants to acquire sufficient carbon. This apparent shade intolerance led to the hypothesis that open-canopy species lack the traits to maintain a positive carbon balance under low and dynamic light. To test this, leaf traits and photosynthetic response to continuous or dynamic light were compared between forest and fynbos species grown under three light treatments. Fynbos species experienced high mortality under shade treatments, produced leaves that were thicker and up to 1000 times smaller, had lower photosynthetic rates (0.8 versus 3.4 μmol m⁻² s⁻¹) under continuous low light (400 μmol m⁻² s⁻¹), and showed lower light-use efficiency during dynamic light sequences than forest species. These differences imply that shade intolerance in fynbos species is associated with traits that are inefficient at harvesting light and require relatively continuous high-intensity light for carbon assimilation. Moreover, these inefficiencies would make it difficult to support the carbon-intensive traits (e.g. cluster roots, lignotubers, sclerophyllous leaves) that facilitate fire survival and nutrient acquisition/conservation in open habitats.

In contrast, forest species are able to colonize open habitats during the long-term absence of fire, implying that they are able to tolerate high light and low nutrient conditions. Given that plants frequently cope with contrasting conditions through the expression of phenotypic plasticity, it was hypothesized that closed-canopy species possess greater plasticity than open-canopy species. To assess this, the response of leaf traits and foliar nutrition to changes in LAI and soil nutrition were compared between forest and fynbos species in the field. Leaf size and specific leaf area in forest species correlated positively with LAI and soil nutrition, whereas the response of fynbos species was weak, suggesting that forest species are more plastic.
This plasticity may be realised through the variable light conditions forest species experience within their canopy and through the occupation of higher-nutrient soils, which alleviate belowground constraints. By comparison, the occupation of low-nutrient soils by fynbos may inhibit plasticity, given the selection of inflexible, conservative leaves. Consequently, I propose that the coexistence of open- and closed-canopy biomes arises from the steep turnover in selective regimes, which, together with the contrasting adaptive traits and degrees of phenotypic plasticity they require, act together to competitively exclude species from the alternate habitat. | http://www.secheresse.info/spip.php?article83220 |
Elements of Description and Narration in Freelance Writing

The initial literary impulse of a writer has always been to report on what he observes in the world around him. In "Description," he tries to paint the picture in front of his eyes; in "Narration," he tries to relate the narrative. Both forms are used in journalism as well as in creative writing.
Narrative prose is written in the first person and describes events that happen over time. It uses the present tense to tell its story. Factual accounts, interviews, and essays are examples of narrative prose. Descriptive prose focuses on the appearance of objects or places and does not involve people's thoughts or feelings. A description is a brief sentence describing one aspect of something. A good description makes readers curious to find out more about the subject.
In fiction, narrative and descriptive elements are mixed together to create scenes. A scene is a complex, three-dimensional image of life which can be described in detail. It consists of several incidents or actions taking place at one time or over a period of time. These incidents may involve only two or three characters but still hold our attention because they are interesting and believable. Fiction writers use details to describe scenes. For example, they might mention that a character wears glasses even though we already know this from earlier in the story, so the detail is not strictly necessary for understanding who she is. Writers also use comparisons and metaphors to make their points more clearly.
Narrative refers to something that is similar to a narrative or that tells a story. "Prose" refers to common written language that lacks metrical structure (i.e., not like a poem, or a song, or a verse). Ordinary written language that communicates a tale is referred to as narrative prose. All books, for example, are instances of narrative prose. Personal narratives, such as memoirs and autobiographies, are examples of narrative prose.
The basic form of narrative prose is a story with a beginning, middle, and end. A story can be told in many ways, but this basic structure remains the same throughout. A story can also have sub-plots, minor characters, background information - anything that takes place outside of the main plot line. These additions increase the complexity of the story and make it more interesting to read about.
Narrative prose is used by authors to tell others about their experiences. Memoirs, stories in newspapers, magazines, and journals, all use narrative prose to communicate ideas and emotions.
Narrative elements include action/description, agency, authority, character, context, dialogue, emotion, event, explanation, incident, interaction, meaning, setting, scene, source, summary, theme, and transition.
In conclusion, yes, narrative prose is an element of fiction.
Narrative writing is used to tell a tale or a portion of a story. Descriptive writing clearly depicts a person, place, or object so that the reader may envision the topic and enter the writer's perspective. Writing that does not include specifics such as times, dates, locations, or characters is a general descriptive essay.
Narrative essays are written stories that require the use of character development, setting, and plot to create a coherent account of events. As the name suggests, this type of essay uses facts to describe what happened instead of opinions to explain why it mattered. Opinion pieces, on the other hand, share views on issues in life such as politics, society, or science. They usually contain explanations for their arguments as well as references that can help readers decide for themselves if they agree with the view expressed.
Narrative essays often focus on one event or series of related events from early childhood through adulthood. The writer selects which details to include in the essay by considering what information would be most interesting and useful to the reader. Certain events such as battles or discoveries that are important enough to remember provide the basis for narrative essays. Personal narratives are stories told by individuals about their lives that may include descriptions of events but also reflect upon them emotionally.
Narrative essays and personal narratives share many similarities including the need to select and organize information regarding oneself or others.
Characters, story, conflict, place, and point of view are all aspects of narrative composition. A character is anything that exists in the imagination of the writer or speaker. Characters have physical traits and behaviors that distinguish them from one another. For example, Sherlock Holmes is a character in the novel The Adventures of Sherlock Holmes by Arthur Conan Doyle. He has distinct qualities and abilities that make him unique compared to other characters.
Narrative fiction is a story told through written words or spoken words. A narrative can be presented in the form of a book, article, film, television show, or any other medium capable of telling stories. Book covers, movie trailers, and TV commercials are examples of media that present narratives in an attempt to attract readers, viewers, or listeners.
Book covers and movie posters feature characters who play important roles in the narrative. They often serve as markers for identifying elements within the story. For example, if you had only seen Harry Potter's face on the cover of the first book in the series, you might assume it was a children's book. But because of Ron Weasley's appearance on the cover of Harry Potter and the Sorcerer's Stone, you know this story is about something more than just kids going up against Voldemort!
Narrative writing includes a tale, characters, conflict, and other fundamental elements of a story. A tale is frequently associated with narrative writing. However, if you're writing a story, you're doing narrative writing, in which a narrator tells the story. Stories are often referred to as narratives.
Narratives can be divided into different types or categories: fictional, personal, historical, scientific, etc. The type of narrative you are writing determines how much information should be provided by the writer and how much will be inferred by the reader. For example, if you were writing a fictional novel, you would provide details about your characters' lives that aren't necessarily revealed until later in the story. On the other hand, if you were writing a history book, you would only include facts that could be verified through research. The choice of what details to include is up to you as a writer.
In English class, you may be asked to write narratives about real-life events. For example, one might be required to write a short story about someone's first encounter with racism. In this case, the teacher would like you to use your imagination to fill in the gaps in the event by providing your own details. You should always try to provide as much detail about a topic as possible without boring your readers. Remember that people want to know how stories end so they can determine whether or not they want to read further. | https://authorscast.com/what-is-narrative-prose-and-descriptive-prose |
"Every period of human development has had its own particular type of human conflict---its own variety of problem that, apparently, could be settled only by force. And each time, frustratingly enough, force never really settled the problem. Instead, it persisted through a series of conflicts, then vanished of itself---what's the expression---ah, yes, 'not with a bang, but a whimper,' as the economic and social environment changed. And then, new problems, and a new series of wars."
~Isaac Asimov (I, Robot)
When I first started thinking about this topic, I considered approaching it from a basic level--discussing the main types of conflict (Man vs. Man, Man vs. Himself, etc.) that usually get touched on in high school and college literature classes. (If you need a quick review of conflict types, there's a Wikipedia entry you can view by clicking here.) Instead, I want to discuss how you can take this information and practically apply it to your own book or series.
1) The degree you use a certain type of conflict may depend a lot on your genre.
For example, romance novels tend to deal heavily with internal and interpersonal conflicts while thriller/disaster books rely more on characters responding to external forces they can't control. One thing I love about writing sci-fi is the opportunity to use a wide range of conflicts, and something that can add uniqueness to your work is introducing conflicts readers may not typically expect.
2) Draw upon your characters and setting for initial conflict ideas.
If you have well-developed characters with varying personalities, you can find opportunities to create tension between them even when they're allies. What are your main characters' greatest strengths and flaws? How could their traits impact the overall story?
With setting/world development, you can have both natural conflicts (storms, earthquakes, etc.) and cultural conflicts (political and social issues). Like characters, setting can be a lot more interesting when conditions aren't perfect.
3) Use history and current events as prompts.
Most of us are bombarded by news on a daily basis, but this can give you an opportunity to take topics that interest you and explore the related conflicts in a fictional environment.
I've honestly just started doing this myself, but keeping an ongoing reference file with headlines and news story links could be a helpful brainstorming tool.
4) Stage your story's conflicts before you begin writing.
With my own series, I started out with the major conflicts (ones that carry across several books) and worked my way down to smaller conflicts that may be resolved over the course of one or two chapters. You can do this in an outline or even a storyboard--the main point is to have an organized plan on where you are and where you're heading.
Beyond keeping a reader entertained, conflicts have purposes. You can reveal what your characters are like under pressure and contrast them with other characters. Internal conflicts give you an opportunity to show a character's personality through action and his/her thoughts as opposed to just telling the reader about them.
5) Chapter structure impacts pacing and tension.
Once I establish the conflicts in a particular book, I alternate between them--ending a chapter on a moment of tension then picking up the next chapter with a different set of characters and conflict. Even as readers are focused on reading about one conflict, they're concerned about the others.
6) Give readers a reason to care.
Have you ever watched a movie with great special effects but no character development? The moments of tension are lost in the fact that you could walk out of the theater without the slightest concern on how it ends. The same concept applies to books. If you open your story with an action scene (which isn't a bad thing), follow it with an opportunity to know your characters better. There is a balance to it, and you can learn a lot from observing both stories that do it right and those that do it wrong.
--------------------------------------------------------------------------------------------------------------------------------------------
Sci-fi novelist Patricia Gilliam is the author of the Hannaria Series: Out of the Gray (April 2009), Legacy (Nov 2009), and No Good Deed Goes Unpunished (June 2010). Beginning her career as an online content writer, she has written over 1,000 non-fiction articles and 40 fiction short stories since 2006. She has been a preferred author on Writing.com since 2007, offering free help and resources to the site's community.
Outside of writing, she and her husband Cory are broadcast camera operators for the Christian television show Power of the Word in the Knoxville, TN area. In 2009, they adopted a rescue greyhound (racing name Lucius Malfoy) and are active volunteers for the local adoption group.
Book 4 of the Hannaria Series, Something Like the Truth, is in progress with an expected release in early 2012.
-------------------------------------------------------------------------------------------------------------------------------------------------
| https://www.writing.com/main/view_item/item_id/1232078-Conflict--Building-Suspense |
Purpose
The dilemma framework enables the group to shift their orientation from either/or thinking (which won’t work in the given situation) to both/and (which can work) and use Dilemma Resolution Thinking and Generative Thinking to produce new options for action which integrate the two apparently conflicting values in innovative ways.
When to Use?
This method is especially useful for projects requiring transformative innovation. In trying to make progress, especially on an innovation or change project, we often encounter pairs of conflicting values and demands. These can appear as tensions, polarities and even conflicts. Sometimes these are options and we can make a choice, but sometimes both are essential for success even though they are mutually exclusive or incompatible. The situation can then become stuck in conflict. The dilemma method is a way to reframe the conflict and tension into a process of learning and discovery that can take you beyond the impasse.
NOTE: As well as application in general situations, this has a specific use in Three Horizons (see Three Horizon Mapping Guide) to frame H2 as a dilemma between H1 and H3 in order to construct possibilities for H2+ innovations, and in conjunction with Navigational Scenarios to support a continuous action-learning cycle in the face of uncertainty as the project moves into H3.
Set Up
- One flip chart stand, and wall space for two flip charts fixed side by side
- Three colours of sticky hexagons
- Participants seated in a U facing the work area, with no table or other obstruction, so everyone can stand in front of the visual work area
Output
The simplest way to work with dilemma resolution is to use a representation of the dilemma as a space, created by putting the two conflicting or contrasting values on orthogonal axes instead of thinking of them as polar opposites. The diagram then provides a way of framing the results of the process.
In the diagram, the vertical axis represents the more fixed values of the dilemma (hence the rock symbol). The horizontal axis represents the more fluid values of the dilemma (hence the whirlpool symbol). A good outcome needs both values to be fully expressed in the resolution. The pathway to resolution is represented by the diagonal wavy line. The line is wavy because the process is dynamic and needs to be constantly adjusted from feedback. This feedback needs to tell us whether we are heading too strongly towards rigidity or too strongly towards vagueness. The pathway needs to avoid taking the 'easy way out' of compromise; it needs to navigate through the tensions of the conflict zone; it needs to generate self-organising guidance into the resolution zone; and it needs to arrive at a shared vision of a transformed situation.
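Since the original diagram is not reproduced here, the sketch below gives a rough plain-text impression of the dilemma space. The axes follow the description above; the placement of the zones and traps is inferred from the trap descriptions in Step 5, so treat it as an aid rather than a faithful copy of the original diagram.

    Rock value (fixed)
    ^
    |  Dinosaur Trap                     Resolution Zone
    |  (too rigid)                       (Flight of the Eagle)
    |                            ~~~
    |                       ~~~
    |        Conflict   ~~~
    |        Zone    ~~~
    |            ~~~
    |  Compromise Zone                   Unicorn Trap
    |  (Ostrich Trap)                    (too vague)
    +---------------------------------------> Whirlpool value (fluid)

The wavy line is the pathway of resolution, constantly adjusted by feedback between the two values.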
Steps
The main steps in facilitating a dilemma resolution exercise are shown below. Although they are presented in a sequence through time, they should also be viewed as a whole, in that the steps are highly interdependent. In a complex task it may be necessary to reiterate steps. For example, it can happen that halfway through an exercise a clearer dilemma is recognised that is better to work with than the initial one. Facilitating a powerful dilemma resolution requires you to "ride the bull without falling off".
Step 0. Scope the challenge
- Scope the dilemma discussion according to the context of the task. Use a trigger question appropriate to the scope and process context: “In moving towards our transformative goal, what are we experiencing as the main tensions between opposing forces, needs and values?” List the answers in two columns on a flip chart without processing them.
- Take each answer and through discussion restate it as a pair of seemingly incompatible factors, a polarity, as currently experienced. Do not get into problem solving at this stage. Write each side of the polarity on a hexagon and place the two on either side of a double-headed arrow on a flip chart. Note: as far as possible the two factors should NOT be opposite ends of a scale but opposing factors
- NOTE: In the context of three horizons there is a special scoping method for setting up and developing H2+ innovation opportunities. (coming soon)
Step 1. Construct the dilemma
- On two flip charts side by side, label one Rock and the other Whirlpool. Through discussion, invite choice of the most dominant 'hard' issue. Develop an understanding of it as a dilemma 'rock' quality, write it on a hexagon, and place it on the Rock flip chart. Take the dominant 'soft' issue (it can be from a different polarity, but make sure it is in opposition to the chosen hard issue), develop it as a 'whirlpool' quality, and place it on the Whirlpool flip chart. Build up the Rock and Whirlpool clusters by taking hexagons one at a time from the polarities, restating them as qualities as needed, and placing them next to the appropriate ones by discussion.
- Create a short summary description for each of the completed Rock and Whirlpool clusters, stated as a quality that needs to be respected in navigating to the future. Ensure these are as contrasting as possible.
- Draw up the dilemma space axes.
- Write up the rock and whirlpool issues on their respective axes.
Step 2. Clarify the compromise and conflict zones
The Compromise Zone
- Put the questions: How do we try to avoid the tension by avoiding the issues? What do we pretend that we have reached as a resolution when it is evident that it cannot endure? (Sweeping under the carpet; avoiding discomfort)
- Collect post-its on some of the typical compromises that sweep the tension between rock and whirlpool values under the carpet and place them in the compromise zone.
- Create a brief summary statement: The tempting compromise we must avoid is ……………
The Conflict Zone
- Put the questions: What are the obvious points of conflict or tension in the situation? Where is most pain in the dilemma being felt? (facing the realities; riding the bull)
- Collect post-its on how the tensions between the rock and whirlpool values can break out into conflict, and place these in the conflict zone.
- Create a brief summary statement: The difficulty we must endure in order to emerge into positive resolution is …………….
Step 3. Express the desirables
- Place a blank flip chart between the rock and whirlpool charts, and divide between top and bottom to capture rock and whirlpool desirables.
- Put the question: From the rock value perspective what is most desirable in an ideal resolution of this dilemma?
- Put the question: From the whirlpool value perspective what is most desirable in an ideal resolution of this dilemma?
- Capture these two contrasting desires in the central areas.
Step 4. Make offers and requests
- Set up a 2×2 frame on two flip charts as shown on the diagram above.
- Put these questions to the group:
- From the rock position what is ideally required from the whirlpool?
- From the whirlpool position what is ideally required from the rock?
In the spirit of creative resolution:
- What is rock willing to offer whirlpool?
- What is whirlpool willing to offer rock?
Capture in the appropriate boxes.
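As the diagram referred to in this step is not shown either, the 2×2 frame can be reconstructed directly from the four questions above. One possible plain-text layout:

                           Ideally requires...           Willing to offer...
    From Rock position   | what Rock requires          | what Rock offers
                         | from Whirlpool              | to Whirlpool
    From Whirlpool       | what Whirlpool requires     | what Whirlpool offers
    position             | from Rock                   | to Rock

Capture the group's answers in the corresponding boxes before moving to Step 5.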
Step 5. Review the resolution idea
Share reflections on how you think this idea will transcend the conflict zone and suggest a way of navigating the dilemma into the future. You can consider its strengths, its weaknesses and what feels creative or unusual about it.
- Is it weighted too heavily towards the rock value and therefore top‐heavy? → Dinosaur Trap
- Is it weighted too heavily towards the whirlpool value and therefore lop‐sided? → Unicorn Trap
- Are you still going to end up heavily in the conflict zone? → Push-me‐pull-you Trap
- Have you avoided really confronting the issue and come with a wishy-washy compromise? → Ostrich Trap
- If you feel you have the basis for a transformative resolution you are flying free → Flight of the Eagle
You can repeat this with other ideas from Step 3 to find the best one.
Why this is an improvement
The process allows the people in the room to move from polarised opposition into a generative dialogue. The process respects the importance of opposing values and allows them both to contribute into a collective enquiry into creative opportunities.
The process exposes that there are four failure modes in the situation and only one success mode, making visible that if one party sticks to their own position nobody will get a solution.
Tips
- A well-stated dilemma is the easiest to work with; take time to get it really clear and simple in the initial steps. It is important to make clear the distinction between a tension arising from there being different options and a tension between clashing but equally necessary values.
- Putting the dilemma on axes causes a mild cognitive shock if done without preparation. The important step is the switch from either/or to both/and thinking. It is worth spending few minutes explaining what is happening to give people time to respond to it and start thinking in this new space. The easiest way is to introduce the idea of dilemma before starting on this method, illustrating it with something from everyday experience that will be familiar to them (e.g. in democratic societies we often find there is a dilemma between civil liberties and security).
- Many dilemmas arise between people, and so a 'rock' camp emerges alongside a 'whirlpool' camp. The group dynamics then tend to develop in a conflict-resolution mode. However, in a dilemma we are not seeking compromise (one of the ways to fail) but a transformative innovation.
- A significant group of people will find great difficulty in looking at the peaks of the horns of the dilemma rather than at their wildest implications, especially where they are personally involved in one or other of the situations. They have to be helped to break down the situation and look at its parts. Making a list of the parts and then ranking them in terms of their strength of effect in that area helps the individual to let go of the least important parts without having to let go of the whole.
- Generative Thinking (1+1=3) is a challenge but it can also be exciting and fun. Try and keep the mood upbeat and ready to have a go without falling into hasty judgement. In creative thinking we often need to create ‘stepping stones’ on the way to our destination.
How to Use this License
CC BY-NC-ND
This license allows anyone to download and share this method as it is shown here with others as long as H3Uni or H3Uni.org is given credit. The methods cannot be changed in any way or used commercially without prior consent.
| https://www.h3uni.org/project/facilitate-dilemma-resolution/ |
Characters Who Jump Off the Page
Readers engage with characters who have personalities.
It doesn’t matter how tall or short they are, or what their hair color is. Their personality – filled with quirks, strengths, and shortcomings – brings them to life on the page.
A character who is all good or all bad won’t feel human to a reader. These one-dimensional characters are flat, the opposite of the three-dimensional characters you want to create. Your reader has no feelings about their success or failure.
By contrast, a three-dimensional character feels like a real person to your reader. If your protagonist, your antagonist, and supporting characters appear well-rounded, they’ll work to involve your reader in your story because they feel real.
Creating characters filled with humanity requires you, the writer, to know more than you may show on the page. You’ll know how each character reacts under pressure or when relaxed. You’ll know how they speak and act. As you write your scenes, you’ll know how your characters will act in any situation. You’ll pull out the bits and pieces of their character backgrounds so they speak and interact realistically.
What You Need to Know About Your Characters
Building a background for your main and supporting characters helps you bring them alive in each scene. Use a character bible to list each character in your novel and store their background information.
You need to know the many facets of a character to use just one fact when it fits a scene. Former private investigator and author David Corbett said in his book The Art of Character:
Developing a character with genuine depth requires a focus on not just desire but how the character deals with frustration of her desires, as well as her vulnerabilities, her secrets, and especially her contradictions. This development needs to be forged in scenes, the better to employ your intuition rather than your intellect.
List all your characters. You will spend most of your time working on the main characters, but make sure you include every character. This is helpful for minor characters who appear briefly several chapters apart. You’ll be able to reference their details as you write, instead of having to remember whether the scar was on the left or right palm.
Character Information
The time you spend creating details and background for your characters helps you understand how they work, how they interact with other characters, how their flaws hold them back, and how they create conflict for the protagonist. Details bring characters to life.
To make your characters realistic and keep them from being stereotypes, you need to work on their background. The more important your character’s role in your novel, the more detail you should add.
The problem with many pre-formatted character studies is that they either don’t focus on the details enough, or they go into a long list of unnecessary details without getting to the heart of understanding your character. Your focus should be on understanding your character’s personality rather than their height, weight, eye color, hair color, distinguishing marks, etc. You could change those physical details and still have the same personality that drives your character into conflict and helps them find resolution.
You may not use all the information you record, but the more you know, the easier you’ll find writing about your character in a variety of situations. You need to know your character physically, emotionally, and socially.
Habits and Mannerisms
Details like mannerisms and habits help readers connect with your character. Vivid and realistic behavior pulls your reader into the story. A shy character may look down at the ground or turn their head away when speaking. An excellent method for getting in touch with your character is to walk them through a typical day—work habits, meals, recreation, friends.
Also, list speech patterns, favorite sayings, or repeated phrases that reveal how they respond to events.
Physical Details
Yes, each character has physical details. Although you want to skip the scene where the character looks in the mirror, you can reveal your protagonist’s physical details through interactions with other characters. She may tower over another character, or the love interest may place his hand over her tiny brown hand.
In your character bible, list the physical details for every character so you keep them straight as you write. In your novel, know the details and sprinkle them in rather than giving a long list.
Character Interactions
How a character reacts to other characters reveals their personality. When they pat someone on the back or avoid them on the street, you give your reader clues about the relationship. For example, if your main character speaks kindly to their neighbor in person and then gossips viciously behind their back, it tells readers something about their trustworthiness.
List any relationships like: familial (uncle, parent, child, etc.), work connections, friends, and enemies. Know who supports them and who doesn’t. Use these details in dialogue and action to reveal your characters to your reader.
Conflict Responses
Know how your character normally responds to conflict. Build your novel on conflict and create many plot elements to thwart your characters’ plans. Introduce your reader to the normal responses early on, then put your character where their normal response doesn’t work.
This works for antagonists, opponents, and even love interests as well as the protagonist. Spend time with this. Conflict keeps your readers reading.
Character Backstory
Characters don’t enter your story as a blank slate. Make your reader believe each character has lived a life filled with incidents large and small that impact how they behave and react in the story.
Create vivid and impactful backstories for your characters. Hint at previous events early in the story then expand on the impact later. Knowing your character’s backstory gives you the rich details you can add to deepen readers’ interest and understanding.
How to Use Backstory
Once you have a sound background for your character, you need to show your reader their facets, the dimensions that make them human.
The key is to balance what you know about your character with the tidbits of information you place in the story. You won’t share everything about your character. You will share the information that moves the story forward.
Character information is like research: you use about 20% of what you know. What you share illustrates your character’s dimensions. Share contrasts between characters to build tension.
Strengths and Weaknesses
Contrasts add dimension to characters. You can take your characters from flat to interesting by contrasting vulnerabilities with strengths. These contrasts work whether your character is a hero or a villain. Show the reader their skills, then show their flaws.
- How they excel
- What they don’t do well
- Vulnerabilities
- Flaws
- Insecurities
The best way to show the facets of a character is to bring them up as the character meets various obstacles in the story. If a character has a known insecurity, force them to use their skills to meet the insecurity head on.
Alternating strengths and weaknesses gives your reader a sense of your character as human. When you bring up their traits in a scene, use those that apply to the scene. Readers will empathize and remember.
Avoid listing traits. Your reader has no emotional connection to a list, and it will distance your reader from the character and the story. Integrate traits into scenes as you tell the story to show readers how those traits affect the character’s behavior.
Trust Your Readers
Show character traits through action and dialogue. If a temptress tries to seduce your protagonist and he declines, your reader will know he lives by a moral code and his values, and that he is morally strong. That same trait may display in other situations. You don’t need to spell it out.
If your protagonist is strong-minded but lacks self-esteem in one scene and later overcomes her imposter syndrome, let your reader notice the change. Limit other characters’ comments. A sidekick may raise an eyebrow, but let your reader put the pieces together. They’ll remember the change.
When you over-explain a character’s accomplishment or failure, your reader feels as though you don’t trust them enough to make the connection.
Reveal Your Characters
Show your characters’ traits through action and dialogue. Display their strengths and weaknesses in scenes that pivot around action and decision. You give your reader a glimpse of the character trait. You’ll build curiosity and suspense when the reader wonders how a character will reconcile two disparate traits.
Choose scenes that illustrate a character’s differing traits to build a sense of humanity. Readers engage with, sympathize with, and root for three-dimensional characters.
Know your character, know their actions, know their vulnerabilities. You’ll avoid flat characters, bring your novel to life, and create memorable people that live in your reader’s mind long after they have finished your novel. | https://prowritingaid.com/art/1453/write-a-three-dimensional-character.aspx |
‘The Good Liar’ is at first presented as a con film, with the morally corrupt Roy Courtnay (Ian McKellen) weaving his way into the life of the seemingly innocent Betty McLeish (Helen Mirren), a lonely widow who wishes for love. I will say this now: the many twists and turns of the film are well interwoven into the character arcs of these two, and to truly analyse ‘The Good Liar’ would therefore require giving away some details. Of course, I won’t do this, but rather urge you to see the film for yourself, whilst I attempt to best articulate my thoughts without giving away any of the secrets that the film invites you to unfold.
Though at first a film that seems to explore the loneliness that can come with growing old, we are shown from the trailer that Roy has more villainous reasons for growing close to Betty: to get his hands on the small fortune that she has saved throughout her life. The inauspicious reasons for their meeting are quickly addressed in the opening scenes, and from here the film goes on to explore various other themes, such as how the actions of our past can sometimes be what narrates our future. Both children of a generation that grew up in the conflict of World War 2, the connotations of a post-war lifestyle weigh heavily on Betty and Roy’s story. Though this is a film that can entertain all ages, it is clear that the director, Bill Condon, wishes to create a sense of empathy between an elderly viewer and the characters seen on screen, as the fallout from a conflict as great as WWII is entirely personal, and often the greatest comfort for those affected is to be shown that they aren’t alone. This is brilliantly personified by Mirren and McKellen, performers who may at times be unfairly constrained by being seen as figureheads for a certain era of cinema, but who show in ‘The Good Liar’ that they can still take on the challenges of modern life, just as the viewer can in their own personal life.
Many have compared ‘The Good Liar’ to a jigsaw puzzle, and I agree with their comparison. The key elements of the story are slowly and surely placed throughout the film, and it isn’t until you take a moment to look back on what is being created that you begin to interpret the story’s message. That is, of course, until the creators wipe the puzzle off the table in the closing scenes. Though I was drawn in by the final unravelling of the story, I did find that the actual build-up to it was often slow and lacked significant creative ambition from the director. The music, cinematography and editing were at times basic, with the occasional flash of inspiration drawing audiences back into the story. However, I do understand that this is a film which is largely dependent on the success of the overarching story, as well as the performances of our film’s leads. Both of these elements were well delivered, and did justice to the clever articulation of the conclusion.
My other criticism was one which was only partially addressed within the final act: where was Helen Mirren? With such a prestigious actress, I expected the tension between her and Ian McKellen to be electrifying, especially considering the treacherous surroundings of their relationship. However, throughout most of the film, the creators seem to have been inspired by the presentation of the performers in ‘Chinatown,’ where the focus is very much on Jack Nicholson as the lead and rarely strays from his own personal interaction with the events of the plot. Despite this, the scenes where both performers were together on screen felt fairly lacklustre. Save for the conclusion, few of the opportunities for tension within the script were capitalised on, which ultimately left the film devoid of any great feeling of risk or thrill. I would pin this on the lines themselves, rather than on how they were delivered.
This brings me to my final point – the script. The concept of the film is brilliant, and the way that many of the more audacious events of the film are portrayed is exciting and intriguing, but the script itself lacks anything unique. There were many opportunities for the relationship between our leads to be developed, but instead the film spends its time over-emphasising clues for later in the story, as well as excessive amounts of time ensuring that we understand the character traits of our two leads, when we are already invested because of the status that McKellen and Mirren hold as performers within the industry.
To conclude, I do believe that ‘The Good Liar’ is still an important watch, as the elements of the story guide the viewer to an ending which is entirely unexpected. There are some great moments of excitement, but ultimately I don’t believe that Bill Condon contributes anything unique to the world of film in his most recent creation. | https://activespectator.com/2019/11/07/the-good-liar-review/ |
Hello, and welcome back to my Playing God series of posts, where I explore all things world-building for the fantasy genre. It feels like AGES since I last wrote about this topic—and it has indeed been a good seven months—and today I would like to discuss the intricacies of developing the concepts of race and culture within your WIP’s world. But first, here are some definitions to keep in mind!
👥 Race
According to the Merriam-Webster online dictionary (see here), the term “race” can be defined as the following, among other things:
- a family, tribe, people or nation belonging to the same stock
- a class or kind of people unified by shared interests, habits, or characteristics
- a category of humankind that shares certain distinctive physical traits
I’m not going to consider one definition of “race” to be more valid than another. When I conduct world-building for a fantasy WIP, I think of it as a term that describes distinct groupings of people (be they human or non-human) based on shared physical, social or other cultural qualities.
🎭 Culture
Once again, according to the Merriam-Webster online dictionary (see here), the term “culture” can be defined as the following, among other things:
- the customary beliefs, social forms, and material traits of a racial, religious, or social group
- the characteristic features of everyday existence (such as diversions or a way of life) shared by people in a place or time
- the set of shared attitudes, values, goals, and practices that characterises an institution or organisation
- the set of values, conventions, or social practices associated with a particular field, activity, or societal characteristic
I see culture as a very broad term, and one that is sometimes difficult to define. However, when world-building for a fantasy project, I think of it as something that encompasses the behaviour, norms, knowledge, beliefs, arts, customs, and habits of a particular group of people.
What is the significance of race and culture, and why is it important to consider them when developing your fantasy world? Both play a vital role in shaping society and human identity in our own world, so it makes sense that the same would hold for the worlds we create. Here are a few things to think about:
🌳 Realism
Unless there is a specific plot-related or historical reason otherwise, chances are that not everyone in your world belongs to the same race or has the same culture. Keep in mind: cultural differences can develop even in very small geographic areas, and groups of people who live in different climates or environments are likely to adapt over time by developing different physical traits.
👩🏽🤝🧑🏼 Diversity & richness
It would be boring if everyone was the same, right? When race and culture are conceptualised well, they lend such richness not only to your world, but to the story itself. Readers appreciate diversity because it keeps them interested and engaged, and given your audience WILL be multicultural, it enables more readers to see themselves reflected in your work. If your world’s society is relatively uniform, then it is important to make sure its culture is well-developed!
⏳ History
Race and culture are also closely tied to history. If there are significant historical events that have occurred in your world, consider what impact these are likely to have had on the culture of each given society. Depending on how deep you want to take your world-building, it is also worth thinking about how the concept of “race” came to be, and when / how each group of people became distinct from each other. This all feeds back into realism.
📚 Plot
Close to 100% of the time, something about your world’s culture(s) will tie into the plot. Whether this is some kind of conflict, a practice or belief your characters adhere to that leads to an important decision, or something else entirely, it is a reality that can’t be ignored. Your MC’s race (or perhaps species, if your world involves non-humans) will also influence their identity, so it should be considered to ensure they are properly fleshed out.
There are many directions in which you can take this aspect of world-building, and many decisions that need to be made. Everyone will go about doing this differently, and will explore race and culture to varying degrees. Here are some of the key considerations that I think are important:
😀 Appearance
Readers will always wonder what your characters look like. Your characters are likely to take note of what other characters look like on more than one occasion. It’s therefore likely you’ll need to consider the appearance of people in your world, and to what extent race and appearance are linked. But BE VERY CAREFUL ABOUT THE LANGUAGE YOU USE to describe the appearance of different racial groups. Certain language is hurtful to real people in our world—and could therefore be hurtful to your readers. When in doubt, use a sensitivity reader to help you.
🤝🏼 Norms and traditions
At the very heart of culture lies a set of defining norms and traditions. What defines the different cultures within your WIP’s world? There are a number of things to consider here, from accepted standards of behaviour and belief systems that may be religious or secular, to traditions that can be as simple or complex as you like. Does “marriage” exist in your world, for example? Are there specific holidays or celebrations? What are the values that the people think are important? The options are endless!
🎨 Language, art, music & writing
There are a number of other cultural elements that I find are often undeveloped in the fantasy genre. Things that aren’t necessarily vital to the story, but can take the world-building to the next level. These include concepts such as language, and the “cultural universals” that are found in all societies (e.g. art, music and literature). Don’t underestimate the significance of the role these play in your characters’ lives. I will explore them in more detail in future posts, but they’re definitely something to consider!
⚔ Conflict
One of the biggest decisions you will have to make when shaping race and culture is the degree to which racial and cultural differences generate tension and conflict. Maybe there are inter-species conflicts you need to account for as well. Consider where the conflict comes from, why it has developed, and the impacts this has on both the characters and the plot. Also BE VERY AWARE OF ANY PARALLELS TO OUR WORLD, because these need to be approached with sensitivity to ensure you are not harming or alienating certain groups among your readers.
⚠ Sensitivity
As I have mentioned on a few occasions above, if you are ever concerned about the way you’ve represented race and culture in your WIP, seek out feedback from a sensitivity reader. Actually, even if you aren’t concerned it’s still a good idea to have someone from a different background to you run their eyes over your work. You never know what things they might pick up that you weren’t aware of, and, ultimately, it’s the responsible thing to do!
I found that developing the concepts of race and culture within my WIP fantasy series Graceborn was simultaneously one of the most exciting and most terrifying aspects of world-building! I’m always on the lookout for ways to include more diversity in my work, but I’m also always worried about doing it in an appropriate and sensitive manner. Some of the key concepts to note are:
Species: Aside from humans (and other animals), there are four non-human species known as the Ancient Races. Two are humanoid, but have their own unique physical characteristics, and their cultural norms and values often differ vastly as well. The other two Ancients are all but gone from the world.
Races: There are around 6-8 different racial groups amongst the humans, depending on how you classify them. The way each group developed is almost entirely connected to their geographic distribution, which resulted from migration after the species first evolved. And the supernatural may have been involved once or twice…
Traditions: Each different culture has a variety of their own traditions, especially when it comes to music and dance, but there are just as many unifying traditions as well. There is only one religion, which has two subsets (the First Faith and the New Faith), and each nation follows a set of high-level laws set out in the Treaty of Volund.
Conflict: Racial and cultural conflict is not a significant part of my world. That’s not to say it doesn’t happen, and certain nations’ governance structures lead to discrimination more than others. But because of the Graceborn, who work to keep the peace, racial or cultural suppression hasn’t really been a thing in ~1300 years!
Sensitivity: The first book in the Graceborn series is currently with beta readers, and I’m fortunate enough to have an honest and diverse group. They’ve already helped immensely in pointing out a few things for me to take another look at on this topic, and I’m very grateful for that!
This guide is adapted from the content of Good Society: Expanded Acquaintance. If you want even more detail about how to hack Good Society, you can grab the book or PDF for the full guide.
In Part 1, we talked about the process of hacking Good Society. In this post, we’ll be talking about the elements of Good Society that exist within its framework, which you can alter to fit your essential experience. You can use these changes to tell different kinds of stories without changing the flow and feel of playing Good Society.
The changes you make to create your hack can either involve working within the framework of Good Society, or altering or changing that framework. We recommend that you start by making changes within the framework, and only alter it if needed.
The Elements of Good Society
Here are the elements of Good Society that you can change to create a completely different feel for your hack without altering the game’s framework.
- Setting changes
- Collaboration changes
- Creating new character roles
- Creating new family backgrounds
- Creating new desires
- Creating new relationships
- Creating a new phase
Setting
Setting refers to the context in which your game is set. Your setting could be as broad as the Edwardian era, as narrow as a single space station, or as specific as the world of Elfhame (The Fae Courts expansion setting).
Collaboration
Collaboration is one of the most important tools in Good Society, so take the time to consider how it affects your game. There are two aspects of Collaboration you’ll need to think about:
- Do I need to change the default Collaboration options to reflect the essential experience I’m aiming for?
- Do I need a new Collaboration section that lets players define important aspects of the setting?
Character roles
Character roles are the archetypical character types found in the stories you want to tell through your game. When creating new character roles, think about the role they play in the story and the drama—not merely their label.
The best character roles have both external and internal tension built into them. External tension exists between that character and those around them, while internal tension focuses on the conflict that rages within.
A good example of this is the Good Society character role of the Hedonist. The external tension is that the Hedonist wants to enjoy the pleasures of life, while those around them want them to take responsibility. The internal tension is that the Hedonist battles with the guilt that their self-indulgence hurts others.
Family Backgrounds
Family backgrounds illustrate a character’s place in society. It can determine their status and importance (or lack thereof), but it also determines the expectations that society puts upon them. The best family backgrounds guide the kind of characters that players create, and also put the weight of expectation on characters during the game.
For example, a character from Humble Origins will be created with their family’s lack of wealth in mind. During the game, they will also face pressures to appease those of higher wealth and rank than themselves.
Remember, family backgrounds describe a character’s history and circumstances (and not the character themselves).
Desires
Creating new desires for Good Society requires extra care and attention. It is often hard to tell if a desire will work before you playtest it—but as a useful exercise, imagine what a player with this desire might do to pursue it and what drama it may cause.
Desires should be:
- Suitably dramatic
- Hard to accomplish, but not impossible
- Connected to at least one major character (including the character who holds the desire)
- Something that can be achieved through the actions of the major character who holds the desire
The best desires also:
- Involve multiple player characters (whether directly or by necessary implication).
- Contain an inherent conflict, or at least the potential for conflict. Desires that require something difficult or significant from another major character will always fall into this category.
- Affect the push and pull between the needs of the different player characters. This can often be as much about the playset you’re designing as it is about the desire itself.
Relationships
Creating new relationships is fairly straightforward, as they will often be an obvious consequence of a desire. For example, the desire disinherit your older sibling requires the Sibling relationship to operate.
New relationships can also help evoke the essential experience of your game. For example, Downstairs at the Abbey has the relationship Superior & Junior. Superior & Junior isn’t tied in to any particular desire, but it’s an important part of exploring and understanding the relationships between the downstairs characters.
Phases of Play
Creating a new phase for the cycle of play can be a great way to highlight an important element of your essential experience. In fact, almost all of Good Society’s expansions have additional phases for this reason.
Creating a new phase involves two questions:
- What is the focus of this phase? What do I want to see play out? (e.g. rooftop duels, servant downtime, the passing of time)
- Can this phase work the same way as a novel chapter, but with additional context or framing (e.g. rooftop phase, Sunday phase), or does it need its own mechanics to achieve the desired focus (e.g. passage of time phase, interview phase)?
Hopefully this guide has helped inspire ideas and paved the way for you to create your own hack of Good Society! Enjoy, and please do let us know about your creations!
Human beings are social creatures. We’re made to be with others, meaning our lives are built on a series of interwoven relationships. Deep or superficial, good or bad, these relationships are incredibly important. They contribute to our feelings about ourselves, influence our decisions and actions, and teach us how to get along with others. As such, they’re formative, and even the most introverted among us can’t live without them.
The same is true for our characters. For us to know them, we need to know the relationships that are important to them, and why.
But when it comes to storytelling, relationships can accomplish so much more. They offer support in the form of allies that the protagonist will need in order to achieve their goal. People within the protagonist’s relationships act as mirrors and foils, providing reflective opportunities that can lead to the internal change that is key to character arc. And every relationship, good and bad, can shore up story structure in the form of natural conflict that can be infused into each scene.
Because of the many ways relationships can be used to enhance a well-told story, Angela and I have decided to make this our next thesaurus topic. We’ll explore a variety of relationships, such as Soulmates, Co-Workers, Rivals, Exes, and Parent and Child, along with the aspects of those relationships that could be tailored to fit a story. Here are a few of the features each entry will cover:
Relationship Dynamics. Some of your character’s relationships will be encouraging and supportive while others are characterized by dysfunction. Many, like the people involved, are a mixed bag. We’ll be brainstorming all the variations to give you ideas on which one might be best for your character and story.
Clashing Personality Traits. Even the most loving and supportive relationships should have some tension. But authors often make the mistake of making the “good” ones too good. And without that tension, they fall flat. A natural way to spice up a boring relationship is to give the players opposing traits, and voilà: instant conflict.
Conflicting Desires. Another way to add sparks is to give the people in the relationship opposing goals and desires. Alice and her parents may have a healthy and positive relationship, but if she wants to go away to a prestigious university while her parents want her to stay nearby (and attend the local state school), sparks are going to fly. Whether you’re looking at a supportive or toxic relationship for your character, conflicting needs and wants not only add conflict but can make your character question their own desires, seeding doubt and insecurity.
Positive Influences and Change. Characters undergoing a change arc will need to be pushed in the right direction. This influence often comes from the people around them: friends, rivals, family members, the doorman in the character’s building—literally any relationship can be used to solicit the change needed to get your protagonist where they need to go. We’ll delve into the various ways each relationship can help in this area.
Themes That Can Be Enhanced. Central story ideas are important for setting your story apart and adding depth, but writing them can be tricky. Relationships can naturally tie into certain themes and provide a subtle vehicle for exploring those ideas. So whether you’ve got a theme in mind or one naturally emerges as you write and you need to flesh it out, we’ll be looking at different relationships and highlighting the themes that can be emphasized with each.
Whatever genre you write, relationships will figure largely into your story. And they should be as complex, compelling, and layered as they are in real life. Our hope is that this thesaurus will encourage you to fine-tune and develop your character’s relationships until they do exactly what you need them to do in your story.
The first entry will be coming your way next Saturday, so stay tuned!
Becca Puglisi is an international speaker, writing coach, and bestselling author of The Emotion Thesaurus and its sequels. Her books are available in five languages, are sourced by US universities, and are used by novelists, screenwriters, editors, and psychologists around the world. She is passionate about learning and sharing her knowledge with others through her Writers Helping Writers blog and via One Stop For Writers—a powerhouse online library created to help writers elevate their storytelling.
"#Alive" was thrilling, exhilarating, and heart-pounding from end-to-end. While it doesn't present anything new into the zombie horror genre, the primal tension and fear it generates makes this a very enjoyable albeit forgettable experience.
A mysterious virus outbreak suddenly spreads throughout Seoul, causing people to eat each other uncontrollably; it rapidly grows out of control due to the tight confines of the urban city. Oh Jun-u (Yoo Ah-in) and Kim Yu-bin (Park Shin-hye) struggle to survive in an apartment complex from those infected. Trapped without access to text, calls, or even the internet, they have no idea what is happening in the outside world. The pair, with clashing personalities, must work together to survive and stay alive.
Intense. That's the best way to describe "#Alive". From its very first scene, the film doesn't relent when it comes to its zombie outbreak, and it perfectly captures the chaos and panic that such an outbreak might bring if it happened in real life. Not only the impact of losing the data-heavy technology we take for granted daily, but also the distrust, the fear, and the confusion of an outbreak - especially in the eyes and mind of young people like our lead characters - were perfectly translated to the big screen. It also helps that the film was well-acted by both leads. Each had their own specific characteristics and mannerisms that were performed naturally and admirably. While you could say there's a certain romantic aspect between the two characters, we liked that the film doesn't make it its primary objective; in fact, it was so downplayed that it could easily be assumed to be non-existent. Unfortunately, the film fails to bring anything new to the zombie narrative. The eventual twists and turns are predictable. We also didn't like that certain aspects of the film just didn't make any sense - like mobile phones and gadgets never running out of battery. In the end, on entertainment factor alone, "#Alive" is a definite banger.
Rating: 3 and half reels
Why you should watch it:
- thrilling and intense from first minute to last
Why you shouldn't watch it:
Shipping to: United Kingdom
If you have placed an order and believe that your shipment has been delayed, please check how many days have elapsed since receiving confirmation of your order.
Please remember that shipping normally takes 5/6 days. We, therefore, recommend that you count the number of days elapsed starting from the day on which the products were shipped. You can find further information here.
If you wish to contact our couriers, you will find their contact details here or Track your order.
- Up to 35% increase or decrease, acceleration of flowering is the key for stable production -
The National Agriculture and Food Research Organization (NARO) and the Tokyo University of Agriculture have revealed that weather in the five-day period including the flowering day and two days before and after the flowering day has greatly influenced the wheat yield in major Hokkaido production areas with "Kitahonami" as the main cultivar in recent years. It was estimated that the yield would be reduced in case of cloudy or rainy weather, with a difference of up to 35% when compared to that of clear weather. This research outcome will be useful for developing stabilization technology of wheat yield.
Overview
Hokkaido is a major production area, accounting for two-thirds of domestic wheat production. The eastern part of Hokkaido in particular, including the Tokachi and Okhotsk regions, is the main wheat production area. The yield per unit area of wheat in these regions has been increasing year by year on average due to the change of major cultivars and improved cultivation techniques associated with breed improvement. However, the annual yield has fluctuated so much that the yield in one year could drop to 30-50% of that in the previous year. It has therefore been necessary to elucidate the causes of such large yield fluctuations at the production sites.
To date, the research group of NARO and other institutes have clarified that high temperature and cloudy weather in June-July are the greatest meteorological factors for the yield decrease in major production areas in Hokkaido. However, there have been years in which we could not identify the cause of the yield reduction even taking into account the above-mentioned meteorological factors. Therefore, we focused on the short-term weather conditions for a few days to investigate the relationship between weather and wheat yield.
Wheat yield gap (simulated potential yield minus actual on-farm yield) was analyzed in the major production areas of Hokkaido over the 1984-2020 period with a general-purpose crop growth analysis model, to investigate the relationship between the yield gap and six different short-term meteorological conditions spanning a few days. The results showed that weather in the five days comprising the flowering day and the two days before and after it had the greatest impact on yield in the years after 2011. In the case of cloudy or rainy weather during the flowering period, wheat yield decreased by up to 35% in comparison with clear weather. This relationship is characteristic of the current major cultivar "Kitahonami"; there was a different trend before 2011, when "Chihoku-komugi" or "Hokushin" were the major cultivars.
From the above-mentioned results, it was found that, for the current major cultivar "Kitahonami", it is most important to reach the flowering period in fine weather and to pollinate reliably in order to achieve a high yield. Winter wheat in Hokkaido often flowers in mid-June, when a long spell of rain is likely. The risk of flowering during the rainy season increases the more flowering is delayed. It has been suggested that accelerating flowering with growth-promoting techniques, such as sowing at appropriate times or spraying snowmelt material on the field, may lead to stabilization of the yield.
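For readers who want the shape of the calculation, the sketch below reproduces the yield-gap definition used above (simulated potential yield minus actual on-farm yield) and compares it with weather in the five-day window centred on flowering. All values, column names and the sunshine-hours variable are synthetic placeholders, not the study's data:

```python
import numpy as np
import pandas as pd

# Synthetic stand-ins: yearly crop-model output, on-farm yields (t/ha)
# and flowering dates. All values are invented for illustration.
df = pd.DataFrame({
    "year":           [2017, 2018, 2019, 2020],
    "potential_t_ha": [8.2, 8.0, 8.4, 8.1],
    "actual_t_ha":    [6.9, 5.4, 7.8, 6.6],
    "flowering":      pd.to_datetime(["2017-06-14", "2018-06-18",
                                      "2019-06-12", "2020-06-16"]),
})

# Yield gap as defined above: simulated potential minus actual on-farm yield.
df["yield_gap_t_ha"] = df["potential_t_ha"] - df["actual_t_ha"]

# Synthetic daily weather record with a sunshine-hours column.
dates = pd.date_range("2017-01-01", "2020-12-31", freq="D")
weather = pd.DataFrame({
    "date": dates,
    "sunshine_h": np.random.default_rng(0).uniform(0, 12, len(dates)),
})

def flowering_window_mean(day, column="sunshine_h"):
    """Mean of a weather variable over the flowering day +/- 2 days (5 days)."""
    mask = (weather["date"] >= day - pd.Timedelta(days=2)) & \
           (weather["date"] <= day + pd.Timedelta(days=2))
    return weather.loc[mask, column].mean()

df["sunshine_flowering"] = df["flowering"].apply(flowering_window_mean)

# Cloudier five-day flowering windows should coincide with larger yield gaps.
print(df[["yield_gap_t_ha", "sunshine_flowering"]].corr())
```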
What if I complain after the 7 days return time-frame has elapsed?
Items not eligible for Returns
Requirements for Returns
Return Instructions
STEP 1:
Send a Complaint
Also include pictures of the product that was delivered to [email protected] as a means of evidence.
STEP 2:
Return after Authorization
Once your claim is validated, we will provide information on the most suitable means of getting the item from you.
STEP 3:
Track Return Status
We will send you a return identification number, this will enable you to track the status of your return.
STEP 4:
Get Resolution
Once the item has been received by the seller, we will ensure you get a resolution (A replacement, exchange or refund).
What if I complain after the 7 days return time-frame has elapsed?
For product complaints outside 7 days in relation to defective items, you may:
Contact the seller of the product who will be in a better position to provide a resolution.
Contact us so we can assist in getting a resolution from the seller.
Please note: You will be responsible for the shipping cost and the cost of repair (where the defect is not covered by the manufacturer’s warranty). Maxflix will not replace or issue a refund for items that fall into this category.
Items not eligible for Returns
Products that have been altered from their original state or opened without authorization.
Products with tampered or missing serial or Universal Product Code (UPC) numbers.
Perishable goods cannot be returned unless a valid reason is raised at the point of delivery, with affirmation from the dispatcher.
Products damaged due to misuse.
Products in beauty, health and personal care category.
Jewelry, inner-wear, bed sheets, lingerie and socks.
Abstract:
Fruits are the most important source of polyphenols - substances that have a positive effect on human health. Modern technologies for the industrial processing of fruits into juice are aimed at preserving the useful components of the raw material in it. The issue of the content of polyphenols in industrial juice products and especially the change in their concentration over time is important for understanding the nutritional value of juice products and requires further study.
The purpose of the work is to study the total content of polyphenols depending on the type of juice products and the time elapsed since the product was manufactured.
Material and methods. The total content of polyphenols in terms of gallic acid was determined by the Folin-Ciocalteu method in 4 popular types of juice products (orange, grapefruit and apple juices, cherry nectar) of various brands and with different production dates. The results of the determination of polyphenols in 60 product samples taken from Russian retail chains were analyzed.
Results. Polyphenols are found in all types of products in significant quantities: in orange juices - from 678 to 870 mg/kg, in grapefruit juices - from 447 to 798 mg/kg, in apple juices - from 264 to 1320 mg/kg, in cherry nectars - from 696 to 1090 mg/kg. The highest average content was found in cherry nectars (859±106 mg/kg), followed by orange (781±54 mg/kg) and grapefruit juices (634±91 mg/kg). In apple juices, there is a significant variation in the content of polyphenols depending on the method of juice production - the highest content of polyphenols was found in straight-pressed apple juices (1119±124 mg/kg). The content of polyphenols in products stored for six months or more does not show any significant differences from the content in fresher products.
Conclusion. The study showed the presence of high concentrations of common polyphenols in juice products.
No dependence of the polyphenol content on the time elapsed since the production of the product was found. Industrially produced juice products can make a significant contribution to the intake of polyphenols in the human body.
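For context, Folin-Ciocalteu results like those above are read off a gallic acid calibration curve. The sketch below shows the usual arithmetic; the standard concentrations, absorbances, dilution factor and juice density are invented for illustration and are not from the study:

```python
import numpy as np

# Hypothetical gallic acid calibration standards (mg/L) and their
# absorbance readings (Folin-Ciocalteu colour is usually read near 765 nm).
std_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
std_abs  = np.array([0.02, 0.18, 0.35, 0.68, 1.33])

# Fit a straight calibration line: absorbance = slope * conc + intercept.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def total_polyphenols_mg_per_kg(sample_abs, dilution=10.0, density_kg_per_l=1.05):
    """Sample absorbance -> mg gallic acid equivalents per kg of juice.

    The dilution factor and juice density are assumed sample-prep values.
    """
    conc_mg_per_l = (sample_abs - intercept) / slope * dilution
    return conc_mg_per_l / density_kg_per_l

print(f"{total_polyphenols_mg_per_kg(0.42):.0f} mg/kg GAE")
```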
What does elapsed time mean?
time elapsed=final time taken - initial time taken
elapsed time ~ n. measured time of an event
Take the elapsed time away from the end time.
Elapsed time
add start + elapsed = end time
what is the elapsed time between 4:30 and 5:10
Elapsed time 11:45 pm to 3:30 am
The elapsed time would be 18 minutes
The elapsed time between 3:35 and 6:09 is 2:34.
6 hours have elapsed
I do it in two parts. I calculate how much time has elapsed between the AM and noon, then how much time has elapsed between noon and the PM, then add them together.
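The two-part method in the last answer is easy to mechanize; here is a minimal Python sketch that also handles spans crossing midnight (the example times are taken from the questions above):

```python
from datetime import datetime, timedelta

def elapsed(start: str, end: str) -> timedelta:
    """Elapsed time between two clock times; rolls past midnight if needed."""
    fmt = "%I:%M %p"                      # e.g. "11:45 PM"
    t0 = datetime.strptime(start, fmt)
    t1 = datetime.strptime(end, fmt)
    if t1 <= t0:                          # end time is on the next day
        t1 += timedelta(days=1)
    return t1 - t0

print(elapsed("5:16 AM", "8:00 AM"))   # 2:44:00
print(elapsed("11:45 PM", "3:30 AM"))  # 3:45:00
```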
Results 1 - 10 of 97,184
Table 1: Fitting results.
2007
"... In PAGE 4: ... These inter-contact times are found to follow a log-normal distribution. Table1 presents, for each data set, the proportion of pairs for which the distribution of inter-contact times fits an exponential, a Pareto, and a log-normal distribution. We also show the proportion of pairs that were rejected for all three hypothetical distributions.... ..."
Cited by 1
Table 6.7: Average values of contact and inter-contact duration for the different mobility models in the Urban Scenario.
in Supervisor:
2007
Table 1. A comparison of the different studies of peer-to-peer contact pattern traces between Bluetooth devices in terms of scope, duration, and amount of data collected.
2007
"... In PAGE 2: ... The University of Toronto study , and the Haggle studies , gave out between 8 and 41 Bluetooth enabled devices over a course of a few days and analyzed contact and inter contact times. Table1 shows the difference between our studies and the others. It is clear that the size and scope of our data is much larger to the ones that have been obtained in the other studies.... ..."
Cited by 2
Table 2. Results of using crude initial schedules generated by simple heuristics.
Table 2. Normalized Running Time of the Four Methodsa
2006
"... In PAGE 6: ...Table2 for the normalized running time on examples of three small molecules and two proteins.) 5.... ..."
Cited by 1
Table 2. Part of the operational profile of a mobile phone model obtained through continuous PPC reporting.
2003
"... In PAGE 32: ... 2000). Operational profiles give information about how products are used ( Table2 ). This helps to focus and substantially improve the efficiency of both development and testing (Musa 1999).... In PAGE 131: ... The ranking list is updated monthly. Now, this ranking is used to prioritize error correction (see for reference Table2 on page 30). The most important features and the most annoying bugs for customers get corrected first (see also the threshold of complaint in section 8.... ..."
TABLE 1. REQUIREMENTS FOR A CLAUSE TO BE FINITE IN TWO TEMPORAL MARKING SYSTEMS*
Table 2: Raw Performance Breakdown (in seconds)
"... In PAGE 8: ... 5.3 Performance Breakdown Table2 provides a performance breakdown detailing the areas where the benchmark application spends its time in each of the configurations. The total time is broken down into the time spent reading, writing, sorting and (for the asynchronous algorithm) waiting at synchronization points.... ..."
Table 5: The compile time and run time results of the benchmarking.
1993
"... In PAGE 5: ... The LML and Haskell implementations o er several di erent garbage collectors and it would be interesting for a future study to repeat the measurements so as to determine which garbage collector performs best for a given program with a particular compiler. Table5 shows compile time and run time performance measurements. The compilation speed is reported in lines per minute real time, where the number of lines of the orig- inal Miranda program (as shown in column 2 of Table 4) determines the size of the program.... In PAGE 5: ... Fixing the heap size to one and the same value for all ex- periments shows somewhat larger execution times, but the relative ranking of the compilers does not change. Each row in Table5... ..."
Cited by 35
Table 3: Average connection time, elapsed time and ratio of connection time to elapsed time
1998
"... In PAGE 4: ... Table 3 gives average connection time, average elapsed time and ratio of connection time to elapsed time for the four workloads without a proxy. Table3 shows that average connection time ranged from a low of 0.27 to a high of 0.... ..." | http://citeseer.ist.psu.edu/search?q=inter-contact%20time&t=table&sort=rlv |
Supply chain strategies are based on corporate strategies. A firm decides on the type of product, its marketing mix, its market and the way ahead. The firm needs to decide on the strategy it would prefer to pursue. The three generic strategies are cost-leadership, differentiation and focus. Correspondingly, products are functional, innovative or personalized, and the matching operations strategies are "make-to-stock", "make-to-order" and "engineer-to-order" respectively. The supply chain strategy would be push (for forecast-based supply chains), pull (for order-based supply chains, as in engineer-to-order) or push-pull (actual-sales-based supply chains for innovative products). The supply chain manager has to trade off between fill rate and inventory turnover ratio, bulk purchase and obsolescence, and EOQ and seasonal availability, and ends up measuring performance on all these parameters, including the cash-to-cash cycle (the time elapsed between procurement and realization of sales of the final product). The manager has to continuously adapt supply chain strategies and decisions based on performance and changing circumstances. The supply chain function involves 7 stages, shown in Figure 1.
Stages in supply chain management
Figure 2 illustrates the implementation cycle of supply chain management strategy. It suggests that supply chain strategy is dynamic and continuous re-alignment is necessary to keep up the performance.
Stages in supply chain implementation
In all three approaches (make-to-stock, make-to-order and engineer-to-order) the common element is inventory, but inventory decisions will vary across product lines. In the case of functional products, a firm can, based on forecasts, hold inventory at all stages of the supply chain: raw materials and inputs to production in the upstream supply chain, work-in-progress (WIP) in the internal supply chain, and finished goods in the downstream supply chain. The extent of inventory held at the different stages varies between functional and innovative products. For innovative products, a firm may not hold inventory of finished goods, as it implements a pull strategy in which products are manufactured based on actual sales (make-to-order or assemble-to-order). Here the firm may hold WIP and postpone manufacture of the final product until orders are received. In this type of supply chain, different combinations of assemblies or components make different product types; such firms keep the assemblies ready and postpone final assembly while awaiting final orders.
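Two of the quantitative levers mentioned above can be stated precisely. The sketch below computes the cash-to-cash cycle using one common operationalization (days of inventory plus days of receivables minus days of payables) and the classic EOQ formula; all input figures are invented for illustration:

```python
from math import sqrt

def cash_to_cash(dio: float, dso: float, dpo: float) -> float:
    """Cash-to-cash cycle: days inventory + days receivables - days payables."""
    return dio + dso - dpo

def eoq(annual_demand: float, order_cost: float, holding_cost: float) -> float:
    """Economic order quantity: sqrt(2 * D * S / H)."""
    return sqrt(2 * annual_demand * order_cost / holding_cost)

print(cash_to_cash(dio=60, dso=35, dpo=45))  # 50 days of cash tied up
print(round(eoq(annual_demand=12_000, order_cost=80, holding_cost=3.5)))  # units per order
```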
Important Statement 05/10/2020
Dear Valued customer,
We regret to inform you that after over 35 years of trading with Hornby, with immediate effect we are unable to fulfil any orders for forthcoming release or out of stock Hornby products.
Therefore over the coming days we will be cancelling pre orders and informing customers personally. We apologise unreservedly for the inconvenience this will cause you.
We would like to make it clear that this only affects Hornby products that are on pre order or out of stock. All our other extensive range of brands are unaffected.
Relativity says that time dilation occurs as velocity approaches c. The faster a particle moves, the slower Time flows for it. In the extreme case of photons, time does not flow at all, and photons “experience” no Time at all. Since there are no two particles in the Universe that have identical motion histories, no two particles need to agree on the amount of time that has elapsed after any particular event; for the purposes of this question, this would be the Big Bang.
So: if no two particles need to agree on elapsed time, how is it meaningful to state that the Universe is 13.5 billion years old? Is it valid to say, “13.5 billion years have elapsed on Earth, because, by virtue of being gravitationally co-located for billions of years, we have largely smoothed out an earlier chaotic phase of motion, and therefore the Age largely holds (on Earth/Solar System/local Universe).”
But this would be a weak statement, other areas of the Universe may have a radically different measure for the age of the Universe (and yet be right).
TIA for setting me straight!
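For concreteness, the dilation the poster describes is governed by the Lorentz factor: a clock moving at constant speed v accumulates proper time equal to coordinate time divided by γ = 1/√(1 − v²/c²) (the constant-speed case only; a real particle history would need an integral over its worldline). A quick sketch:

```python
from math import sqrt

C = 299_792_458.0  # speed of light, m/s

def lorentz_gamma(v: float) -> float:
    """Lorentz factor for speed v (m/s)."""
    return 1.0 / sqrt(1.0 - (v / C) ** 2)

# Proper time experienced by a clock moving at constant speed v while
# 13.5 billion years of coordinate time elapse in some chosen frame.
coordinate_years = 13.5e9
for fraction in (0.1, 0.9, 0.999):
    v = fraction * C
    print(f"v = {fraction:g}c: {coordinate_years / lorentz_gamma(v):.3g} yr of proper time")
```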
ABUJA (NBMA Report) – The Director General of the National Biosafety Management Agency (NBMA), Rufus Ebegba (Dr), has issued a warning to super-store operators in Nigeria, asking them to withdraw all Genetically Modified (GM) products from their shelves within the next 7 days or face sanctions from the regulatory body.
He gave this warning during a meeting with representatives of super-store operators in Abuja recently.
Ebegba explained that the warning became imperative because most of the super-stores get their supplies from countries that have long adopted the production and sales of GM foods.
He said, “There is a law in place, we will not want any segment of the society out of ignorance to act in manners that will infringe on the existing law, the Act establishing the NBMA empowers the agency to regulate the activities of modern biotechnology and the use of Genetically Modified Organisms (GMOs).”
The meeting, aimed at creating awareness amongst the operators of the Biosafety Regulations guiding the importation of GM products into the country, also warned that the consequences of continuing to sell GM products after the expiration of the 7-day ultimatum would be very dire, as the warning is not without legal backing.
Ebegba noted that the idea that Nigerian laws are not enforced and implemented by government agencies should be completely ruled out because the NBMA will not hesitate to shut down any super-store that contravenes the Act.
He therefore called on the operators to formalize their dealings by obtaining the necessary permits.
Two isolates of the fungal entomopathogen Beauveria bassiana (Balsamo) Vuillemin (Hypocreales: Clavicipitaceae) were grown on cooked rice using diphasic liquid-solid fermentation in plastic bags to produce and harvest spore powder. The cultures were dried, and significant differences were found for isolates and time of harvest. The spores were harvested manually and mechanically after the cultures were dried for nine days, when moisture content was near 10%. After harvesting, spores were submitted to quality control to assess concentration, germination, purity, moisture content, particle size and pathogenicity to the coffee berry borer, Hypothenemus hampei (Ferrari) (Coleoptera: Curculionidae). Spore productivity on cooked rice was less than 1 × 10^10 spores/g using both manual and mechanical harvesting methodologies. Germination at 24 hours was over 75% and pathogenicity against H. hampei was over 92.5%. This methodology is suitable for laboratory and field studies, but not for industrial production, when a high concentration of spores is required for formulation and field applications.
Introduction
The coffee berry borer, Hypothenemus hampei (Ferrari) (Coleoptera: Curculionidae), poses a serious threat to coffee production throughout the world due to its destruction of the coffee seed. Among the various biocontrol methods used against this insect, the fungal entomopathogen Beauveria bassiana (Balsamo) Vuillemin (Hypocreales: Clavicipitaceae) is highly promising. The fungus has been shown to cause high mortality in various countries where the coffee berry borer is present (Fernández et al. 1985; Lazo 1990; Mendez 1990; Barrios 1992; Gonzalez et al. 1993; Sponagel 1994). However, strains that yield high spore production, high pathogenicity and adequate shelf life remain a challenge for mass production and field application.
Production of B. bassiana spores can be achieved using different methodologies, which can be classified into low-input and industrial technologies. However, most production of fungal spores worldwide is carried out using simple technologies that demand low inputs (Ferron 1978; Hussey and Tinsley 1981; Alves and Pereira 1989; Antía et al. 1992; Jones and Burges 1997). Most of the production of B. bassiana spores in Colombia for coffee berry borer biocontrol is done using a simple sterilization technique based on cooked rice placed inside bottles. The spores are mainly used for field spray applications (Posada 1993; Bustillo and Posada 1996). The spores are harvested by washing them out from the rice media with a 1% oil-water suspension (Antía et al. 1992). This aqueous spore suspension must be used immediately after preparation to avoid spore germination. Additionally, spore longevity is short if the spores are kept in the bottles, because the high moisture content causes rapid loss of spore viability. Harvesting the spores from the bottles is time consuming, and if the cultures produced in the bottles are going to be used to produce a spore powder, drying the rice with spores is difficult (Posada 1993; Gonzalez et al. 1993).
Another methodology for spore production involves the use of fermenters and artificial media. The advantages of this technology are that spores are easily harvested and can be used to prepare formulations. In Colombia, some private companies have been trying to develop B. bassiana formulations as wettable powders and dispersible granules (Morales and Knauf 1994; Marín et al. 2000). The quality controls over those formulations are not very consistent, based on tests conducted by CENICAFÉ, Colombia's National Coffee Research Centre. Any B. bassiana product that is going to be used by coffee growers against H. hampei is continuously evaluated for quality, using CENICAFÉ-approved methodology (Vélez et al. 1997; Marín et al. 2000).
Jenkins (1995) reviewed spore yields for 13 fungi produced on semi-solid media and found only one product that yielded concentrations per gram higher than 1 × 10^10 spores. The product, based on B. bassiana, that is sold by Laverlam International Corporation www.laverlamintl.com (previously known as Mycotech) was reported to have 5.8 × 10^10 spores per gram of substrate. This implies that the conversion of substrate to spores was high, and that at least some formulations already on the market should be able to increase yield while remaining economically viable to produce.
The development of mycoinsecticides to be sprayed with controlled droplet application technology must consider the production and formulation system. The spores produced for oil formulation must be lipophilic; this means that they should be produced using surface liquid media, solid substrates, or a combination of these methodologies called diphasic liquid-solid fermentation (Jenkins and Goettel 1997; Lomer et al. 1997). The spores that are produced on the media after it has dried can be harvested as a dry powder. Harvesting through fine sieve meshes keeps out clumped spores that can block nozzles and cause difficulties in obtaining an even distribution of the spores in the droplets. Also, harvesting with sieves allows the user to know the particle size, which is an important parameter in quality control.
The aim of this study was to produce B. bassiana spores using the diphasic liquid-solid fermentation technique developed for the LUBILOSA (Lutte Biologique contre les Locustes et Sauteriaux, www.lubilosa.org) project to produce Metarhizium flavoviride (Lomer et al. 1997). The production was carried out using two B. bassiana isolates that showed high virulence to the coffee berry borer. All the production steps were closely followed to record the variables related to time of harvest, substrate moisture content, method of harvest, particle size and quality control parameters such as concentration, germination, pathogenicity, powder spore moisture content and purity. The spores harvested were used in subsequent formulation and field application experiments.
Materials and Methods
Isolates
Two isolates of B. bassiana were prepared: (1) B. bassiana 9002 was isolated from the coffee berry borer H. hampei (Coleoptera: Curculionidae) collected in Ancuya (Nariño) in a coffee plantation and (2) B. bassiana 9205 was isolated from the sugar cane borer, Diatraea saccharalis (Fabricius) (Lepidoptera: Pyralidae), collected in Palmira (Valle del Cauca, Colombia) in a sugar cane plantation. Both isolates were selected as the most virulent coffee berry borer strains in the CENICAFÉ B. bassiana collection.
Production
Five trials were carried out, each one with a batch of 30 kg of B. bassiana cultured on rice using LUBILOSA's diphasic liquid-solid fermentation methodology (Jenkins, 1995) in plastic bags. Each bag contained 200 g of cooked rice inoculated with B. bassiana inoculum grown on liquid media. After the fungus culture had grown, spores were harvested as powder and then sieved through a set of three sieves with mesh sizes of 18 (1 mm), 35 (500 µm), and 60 (250 µm). The spore powder passing through all three sieves was used for subsequent research, such as spore storage in oil, spore oil formulation and the feasibility of spraying using controlled droplet application technology.
Experimental design
The experiment was set up as a completely randomized design with a factorial arrangement (2 × 4). The treatments were two isolates and four different times of harvest, 15, 25, 35 and 45 days after the inoculation and incubation. Each treatment combination was replicated five times and each replicate consisted of one bag of 200 g. The data were analyzed using one-way analysis of variance (ANOVA: SAS Institute 2003)
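As an aside for readers reproducing this kind of analysis, the 2 × 4 factorial design described above (two isolates × four harvest times, five replicate bags) can be analyzed with modern open tools as well as SAS. The sketch below is illustrative only: the data frame values are random placeholders, not the study's measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)

# Placeholder data: 2 isolates x 4 harvest days x 5 replicate bags.
df = pd.DataFrame([
    {"isolate": iso, "harvest_day": day, "dry_weight": rng.normal(80, 5)}
    for iso in ("Bb9002", "Bb9205")
    for day in (15, 25, 35, 45)
    for _ in range(5)
])

# Two-way factorial ANOVA with interaction, mirroring the 2 x 4 arrangement.
model = smf.ols("dry_weight ~ C(isolate) * C(harvest_day)", data=df).fit()
print(anova_lm(model, typ=2))
```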
Spore harvesting and drying
The spores were harvested at four different time intervals, 15, 25, 35 and 45 days after inoculation, to evaluate whether harvest time had any effect on spore yield and to obtain an estimate of the optimal harvesting period. To harvest the spores as powder it was necessary to dry the fungus to reduce the moisture content and allow the spores to separate from the rice substrate. The spores harvested following this procedure can be preserved for a long time without loss of germinative power or pathogenicity (Bateman 1995).
Table 1.
Weight (g) of the Beauveria bassiana culture and rice at different times of drying and before the spores were harvested.
Treatments were selected at random to be dried at each harvest time. To dry the cultures, the plastic bags were opened in a room with a temperature of 15 ± 4°C and an average relative humidity of 55 ± 7% and allowed to air dry. Harvesting was done both mechanically and manually for 20 minutes. The mechanical harvest involved the use of a Ro-Tap Sieve Shaker (W.S. Tyler Inc., www.wstyler.com/) that uses a horizontal circular motion and a vertical tapping motion to stratify and screen the particles. The manual harvest consisted of back and forth movements of the sieve. The samples were then put through 3 sieves as described above. The spore powder collected after sieving was weighed and kept in separate sterile vials for further assessments, such as moisture content and quality assessment (see below), and stored for later use inside a sealed silica gel vacuum desiccator kept in a cool room under the conditions described above.
Culture moisture content assessment
In order to monitor air drying of the rice cultures, each sample was weighed every day until the weight was stable. At the same time, from independent samples kept under the experimental conditions, subsamples were taken daily to assess the moisture content by oven drying, in which the subsamples were dried for 24 hours at 105°C (Rao et al. 2006). The aim of this procedure was to find out how many days it would take for the fungus to be ready to harvest and also to determine the moisture content of the rice-fungus culture.
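The oven-drying determination reduces to a weight difference. A minimal sketch, assuming moisture is expressed on a wet-weight basis (the paper does not state the basis explicitly):

```python
def moisture_content_wet_basis(wet_g: float, dry_g: float) -> float:
    """Percent moisture from weights before and after 24 h at 105 °C."""
    return (wet_g - dry_g) / wet_g * 100.0

# Example: a 50 g subsample that weighs 45 g after oven drying.
print(f"{moisture_content_wet_basis(50.0, 45.0):.1f}% moisture")  # 10.0%
```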
Quality control of spores
The harvested spores were evaluated using a quality control methodology developed at CENICAFÉ which assessed spore concentration, germination, pathogenicity, purity, moisture content and particle size (Vélez et al. 1997).
Results
Moisture content lost and dry weight of the B. bassiana culture on rice
Table 1 shows the moisture loss during air drying of the rice substrate used to culture the two B. bassiana isolates for spore production at the four harvest times, together with the final dry weight. Initially, the weight of the rice substrate plus the B. bassiana liquid inoculum used to start the fungus culture was 200 g wet weight per plastic bag, and the weight was recorded just prior to the first harvest. The daily moisture loss of the B. bassiana culture and rice showed a similar tendency for both isolates and was directly related to the harvest time: the longer the time before the spores were harvested, the greater the moisture loss. The moisture loss factor at each spore harvest was determined by dividing the wet culture weight by the dry culture weight. The values obtained were 2.4, 2.4, 3.2 and 3.8 at 15, 25, 35 and 45 days respectively. These data allow estimation of the amount of cooked rice and dry rice that needs to be produced and harvested to obtain a given amount of spore powder; such estimates were necessary so that appropriate quantities of spores could be produced for the field experiments. The wet-to-dry factor was the same for the 15- and 25-day harvests and higher for the 35- and 45-day harvests. The analysis of variance of isolate, time of harvest and culture drying time showed no statistically significant interaction between the factors (df = 24, F = 0.95, P = 0.5358). However, the ANOVA of the independent factors (Table 2) showed significant differences: isolate and period of harvest differed significantly over the days it took to dry the cultures (df = 81, P = 0.0001), mainly during the first five days of drying.
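Because the factor is simply the ratio of wet to dry weight, it can be inverted for production planning. A small sketch using the factors reported above; the target quantity and helper function are illustrative only:

```python
# Wet-to-dry factors reported for harvests at 15, 25, 35 and 45 days.
factors = {15: 2.4, 25: 2.4, 35: 3.2, 45: 3.8}

def cooked_rice_needed(dry_culture_g: float, harvest_day: int) -> float:
    """Cooked rice (g) to prepare in order to obtain a given dry culture weight."""
    return dry_culture_g * factors[harvest_day]

# e.g. to end up with 1 kg of dry culture harvested at 35 days:
print(f"{cooked_rice_needed(1000, 35):.0f} g of cooked rice")  # 3200 g
```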
Table 2.
Analysis of variance for dry weight of Beauveria bassiana culture on rice to harvest the spores.
Moisture content assessment of B. bassiana culture and rice before harvesting the spores
Figure 1 presents the moisture loss of the B. bassiana culture and rice during the air drying process before the spores were harvested. The B. bassiana culture and rice in each treatment were dried for nine days, at which point the moisture content was stable (Figure 2). The moisture content fell rapidly during the first four days, with variation between days for the different isolates and times of harvest (Table 1, Figure 2). After five days the moisture loss was relatively small and remained steady at around 10% until the ninth day. This final moisture content was determined and monitored by drying culture and rice subsamples daily in an oven, in order to harvest the spores at the lowest moisture content.
Quality control of harvested spore powder
Spore powder weight
Figure 3 shows the spore powder weight harvested from B. bassiana isolates using manual and mechanical methods at different times of harvest. The analysis of variance of isolate, method and time of harvest showed significant differences for factor interactions (df = 3, F = 24.7, P = 0.0001).
Spore powder production showed considerable variation. Production of both isolates was higher using the manual method compared with the mechanical method. The overall spore production by isolate shows that B. bassiana 9205 was slightly more productive than B. bassiana 9002 and the highest production of both isolates was obtained by manually harvesting the spores 15 days after inoculation.
Spore powder production: concentration
Figure 4 shows the spore concentration per gram of B. bassiana spore powder produced by both isolates and harvesting methods. The spore concentration was highly variable between harvest methods and between isolates. The spore concentration was higher for the mechanical method. This result was the opposite of that for spore powder weight, where a higher yield was obtained using the manual method.
Spore concentration was higher at earlier harvests than at later ones. It was observed that, after harvesting, B. bassiana spores remained on the rice cultures. Following sieving, these cultures were washed with water plus 1% emulsifiable oil to extract the remaining spores for counting and to calculate the total concentration for each treatment.
Table 3 shows, by isolate, time and method of harvest, the total production of spores estimated from the concentration of the spore powder and the spores remaining in the rice after harvest. The extraction of the remaining spores from the rice cultures showed that a high number of spores remain in the rice after sieving; the number of spores that remained on the rice was greater than the number harvested as powder. The data show that B. bassiana 9205 is more productive than B. bassiana 9002. Production of B. bassiana 9205 was highest 15 days after inoculation and thereafter declined gradually to 45 days, resembling a typical microbial growth curve. In contrast, B. bassiana 9002 did not show this pattern of spore production.
Although the total spore production of isolate B. bassiana 9205 was higher than that of B. bassiana 9002 (Figure 4, Table 3), the spore powder weight was higher for B. bassiana 9002, which was not desirable for formulation preparation because a high proportion of the powder was starch.
Germination test
The germination test was conducted at 24 and 48 hours. The test was done for all harvest periods and isolates. Overall, the germination for all treatments was over 75%, and germination at 24 hours was over 80%. Germination counts at 48 hours were difficult to perform because the samples were overgrown with mycelia and the risk of mistakes was higher. Germination at 24 hours for B. bassiana 9205 was highest for the 25-day harvest, while for B. bassiana 9002 it was highest for the 35- and 45-day harvests.
Pathogenicity test
Table 4 shows the pathogenicity results of B. bassiana spores using the CENICAFÉ bioassay methodology (Vélez et al. 1997). The mortality of H. hampei was not significantly different between isolates and days of harvest (df = 3, F = 1.5, P = 0.2089) and was over 92% for both isolates and for all spore harvest times.
The mean mortality time (MMT in Table 4) in days of H. hampei treated with the spores harvested in this experiment was not significantly different between isolates and days of harvest (df = 3, F = 0.97, P = 0.4077). Isolate B. bassiana 9002 at harvest times of 15, 25 and 45 days did not show a significant difference in mean mortality time, but between 15 days and the spores harvested 35 days after inoculation there was a significant difference (df = 156, F = 4.4, P = 0.0056). Isolate B. bassiana 9205 harvested at 15 and 25 days showed significant differences in mean mortality time compared with the spores harvested at 35 and 45 days. For both isolates, spores harvested at 15 days were more pathogenic to H. hampei than spores harvested after 35 days. The spores harvested at days 35 and 45 showed no significant differences between them (df = 156, F = 6.3, P = 0.0004).
Purity
The presence of contaminant microorganisms was low, less than 2% of colony-forming units for both isolates, and occurred only in the spores harvested on day 25. Only contaminant fungi were detected; these were identified as Penicillium spp. and Cladosporium spp.
Spore powder moisture content
The moisture content of B. bassiana spore powder harvested at different times showed a tendency to be lower for isolate 9002 and was higher for the spores harvest after 15 days. At this time 9205 had moisture content of 13 ± 3.1 % compared to 11.4 ± 0.5 % for isolate 9002. After harvest at 25 and 35 days the moisture content was similar for both isolates. At 45 days the moisture content dropped dramatically and was 6.3 ± 0.1 % for 9205 and 5.9 ± 0.9 % for 9002.
Discussion
Moisture loss was a result of the metabolic activity of the fungus, transpiration and diffusion while the cultures were developing. Moisture was also lost when the cultures were dried for nine days in a cool room with 55 ± 7 % average relative humidity. The information obtained from drying suggests that the cultures can be dried in five days following the same process as in this study, which would save time and resources.
The lowest moisture content achieved from harvested spore powder was around 10%, for the best harvest period (15 days). This moisture content, obtained under controlled conditions and after nine days of air drying, was still too high to store the spores and preserve their viability. Ideally, a moisture content of 5% would have been preferable (Bateman 1995). One way to decrease the moisture content would be to keep the spore powder with a desiccant such as silica gel, which would help to achieve the objective of spore storage with low moisture content (Moore and Caudwell 1997). The use of silica gel to maintain the spore moisture at a low level would not pose a technical problem even for larger scale production.
Table 3.
Concentration of spores of two strains of B. bassiana harvested as powder from rice cultures, spores remaining attached to dried rice after harvest and total estimate of spores production per gram.
Moisture content can influence the weight of the spores harvested. Although the spores with the highest weight also had high moisture content, this did not mean that they had high spore concentrations, because of the presence of starch from the rice. Starch could block the nozzles of application equipment. These characteristics need to be evaluated if fungal production is to be scaled up, to avoid application problems in field situations. The powder harvested from B. bassiana cultures on rice is a mixture of starch and spores. The high weight of powder harvested from B. bassiana 9002, which had a low spore concentration, means that there is a difference between B. bassiana isolates in the ratio of starch to fungal spores, and this needs to be considered in mass production.
Table 4.
Pathogenicity of two strains of Beauveria bassiana against the coffee berry borer evaluated with spores harvested at four different times.
The harvest methods used showed differences in the amount of powder extracted from the cultures and the concentration of spores per gram of powder. A possible reason for this result is that the mechanical method shook at a constant frequency while the cultures were being sieved, whereas in the manual method the rhythm of shaking changed more often as the operator became tired. Manual shaking could also be stronger than the mechanical method, allowing more rice starch to pass through the sieve mesh and thereby adding more weight to the spore powder.
It is not clear what happens to spore numbers when the inoculation-to-harvest period is extended. The spores may germinate and become mycelia, causing a loss of biomass; alternatively, the cultures may begin to degrade because there are not enough resources to continue growing. To avoid a loss of spore production, the harvest needs to be done early. Further experiments at 10–20 day intervals should be carried out to determine the optimum harvest period.
The results of this study show that if a B. bassiana spore production system is going to be based on rice, a better method of harvesting the spores will be required to make the process more efficient, because B. bassiana spores are strongly retained on the dry rice. Spore production may also vary between strains, for example through different patterns of spore production among isolates.
The mechanical and manual harvesting methods had low efficiency in extracting the spores from the rice grain. This result, combined with the low spore productivity (less than 1 × 10^10 spores/g using the production methods evaluated in this study), requires looking for better methods of B. bassiana mass production and spore harvesting if the fungus is going to be used as a mycoinsecticide (Ye et al. 2006; Bateman 2007). Although spore production was improved over that obtained from culture in bottles (Posada 1993), yields below 1 × 10^10 spores/g remain one of the major constraints on the production of reliable mycoinsecticidal products (Jenkins 1995). If industrial production of entomopathogenic fungal spores as a mycoinsecticide is planned, it will be necessary to adapt or change production methods to increase spore yield and to harvest with greater efficiency (Ye et al. 2006; Bateman 2007). In the case of B. bassiana products based on rice, it will be necessary to improve the extraction method to remove the spores retained on the rice, which amounted to twice the spores recovered as powder.
However, the production method showed that high quality B. bassiana spores can be obtained according to the standard of purity established by CENICAFÉ (Vélez et al. 1997) and that these can be used in oil-based formulations for controlled droplet application, even though this application technology requires a very high spore concentration (Bateman 1995).
Additionally, the germination and pathogenicity of the spores, which are the most important quality control parameters, showed high values, resulting in rapid mortality of the coffee berry borer. Spore viability assessed by germination at 24 hours was more accurate than when assayed at 48 hours, implying that the germination assay should be carried out within 24 hours, as also found by Bateman (1995). Proper evaluation of germination is an important part of determining the potential of spores for field use. The germination value obtained at this time is also a parameter that needs to be considered independently of spore survival following application.
The pathogenicity test showed mortality levels over 92.5% for both isolates across all harvest times. The use of high quality standards for spores increases the likelihood of success when they are applied in the field. The mean mortality time was shorter for spores harvested early than for spores harvested later. A possible reason is that early harvested spores could contain a higher proportion of viable spores and therefore infected the beetles more promptly, causing more rapid mortality than spores harvested later; however, there is no evidence to support this hypothesis. The mean mortality times for both isolates at the 15-day harvest were the shortest, and both isolates caused higher mortality in H. hampei than spores from the other harvest times.
The high spore quality, germination, and pathogenicity obtained with these methods could be due to proper drying and harvesting of the spores under controlled low temperature and relative humidity, since uncontrolled temperature and humidity are recognized to cause rapid degradation of spore viability (Bateman 1995).
The diphasic liquid-solid fermentation methodology is not suitable for B. bassiana spore powder production on an industrial scale because of its low efficiency and massive rice substrate requirement. This is illustrated in Table 5: to apply 5 × 10^13 spores per hectare of 5,000 coffee trees, 1 × 10^10 spores per tree are needed. This would require 39.2 kg of dry cultures and 92.3 kg of cooked rice (wet weight). To spray 1,000 ha would require 92,336 kg of cooked rice, a clearly impractical quantity. Such a figure is also quite difficult to manage and scale up to industrial production, even in developing countries where labor is relatively inexpensive.
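The field-scale arithmetic above can be reproduced from two quantities: the target dose per tree and the spore yield per gram of dry culture implied by the figures quoted. In the sketch below, the yield constant (about 1.28 × 10^9 spores per g of dry culture) and the wet-to-dry rice ratio are back-calculated from the numbers in the paragraph, so they are assumptions for illustration rather than independently reported measurements; the paper's 92,336 kg figure for 1,000 ha reflects unrounded intermediate values.

```python
SPORES_PER_TREE = 1e10                 # target dose per coffee tree (quoted above)
TREES_PER_HA = 5000
SPORES_PER_G_DRY = 5e13 / 39_200       # ~1.28e9, implied by 39.2 kg dry culture per ha
WET_TO_DRY_RATIO = 92.3 / 39.2         # ~2.35 kg cooked rice per kg dry culture

spores_per_ha = SPORES_PER_TREE * TREES_PER_HA            # 5e13 spores
dry_culture_kg = spores_per_ha / SPORES_PER_G_DRY / 1000  # grams -> kilograms
cooked_rice_kg = dry_culture_kg * WET_TO_DRY_RATIO

print(f"dry culture per ha:   {dry_culture_kg:.1f} kg")          # 39.2 kg
print(f"cooked rice per ha:   {cooked_rice_kg:.1f} kg")          # ~92.3 kg
print(f"cooked rice, 1000 ha: {cooked_rice_kg * 1000:,.0f} kg")  # ~92,300 kg
```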
Carrying out this production also requires laboratory facilities for cooking and sterilizing the rice, inoculum media, packing material and labor, a place to keep the cultures while they develop, and space and labor for spore harvesting, in addition to the hazard posed to workers exposed to airborne spores.
B. bassiana spore production needs to be highly efficient and productive to make a successful, inexpensive mycopesticide based solely on B. bassiana spores. Current diphasic liquid-solid fermentation technologies are incapable of yielding concentrations higher than 1 × 10^10 spores/g (Jenkins 1995). In addition to production constraints, entomopathogenic fungi have to compete against chemical insecticides and require several applications during the crop cycle (Yendol and Roberts 1970). They also need to be comparable in terms of efficacy, achieving mortality rates higher than 80% under field conditions (Thomas et al. 1997).
The methodology evaluated in this study allows for the production of high quality B. bassiana spores, but the quantities produced are only suitable for small-scale laboratory and field trials. The constraints analyzed above do not imply that the use of B. bassiana against the coffee berry borer should be abandoned. Beyond this fungus being the most important natural enemy of the coffee berry borer (Ruíz 1996), field studies using B. bassiana have shown that it is a promising strategy and a key component of integrated pest management of the coffee berry borer, even though such studies are in their initial stages (Posada 1998; De la Rosa et al. 2000; Haraprasad et al. 2001; Posada et al. 2004). The use of entomopathogenic fungi is a technology that is still being developed, and improvements in production, formulation and field application are needed. There are several alternatives for using the fungus against the coffee berry borer, such as "inoculum-introduction" or "augmentative-inoculative" strategies (Hajek 1993), which can contribute to an epizootic condition and create a permanent mortality factor in the field.
Table 5.
Relationship between time of harvest, spores produced by Beauveria bassiana 9002 cultured on wet rice, spore concentration, and the amount of wet and dry rice necessary to treat a given number of coffee trees and hectares.
Acknowledgments
Thanks to the British government through ODA, IIBC, the British Council and to the Federación Nacional de Cafeteros de Colombia through Cenicafé for financing this research. Thanks to Monica Pava-Ripoll, Tony Little, Marion Seier, Valerie Walters, Adrian Leach, Carlos Quintero, Eduardo Osorio and Patricia Marín for their help. Fernando Vega and Ann Simpkins provided helpful review and comments on an early version of the manuscript.
The decision of the Central Bank of Nigeria to deny importers of 41 items access to the nation's foreign exchange markets through Deposit Money Banks is paying off and is a step in the right direction, a financial expert has said.
Two months after the controversial decision, the Chief Executive Officer, Enterprise Stockbrokers Plc, Mr. Rotimi Fakayejo, said the policy had helped to ensure some stability for the naira and led to an increase in the country’s foreign reserves.
He said, “To a great extent, since that policy was announced, we have seen the first stability in the value of the naira, and I think that is a very good point for everybody to see.
"Also, we have seen the foreign reserves increasing; I think we are back to $31bn now – for the reserves that were dipping to a low of $29bn."
Fakayejo stressed that the decision would only lead to some level of hardship in the short term, with greater benefits expected in the medium to long term.
He said, “If you look at that policy, it is definitely geared towards conservation of our foreign reserves and reducing pressure on the exchange rate.”
He explained that news that the nation’s refineries had started refining crude oil would also lead to a reduction in pressure on the naira.
“If that be the case, it is definitely going to reduce importation of petroleum products by about 40 per cent. The average utilisation of foreign exchange by importers of petroleum products as given by the CBN is about 35 per cent.
“Now, if the dependence is reduced by 40 per cent, then the utilisation of foreign reserves by importers of petroleum products will cascade down to about 15 per cent. That means we are going to have a conservation of about 15 per cent (on the foreign reserves).”
On the argument that Nigeria is not self-sufficient in rice production, Fakayejo said, “There is no country that will make a developmental change without having to at least ooze out some sweat before it comes out of that and start to smile.
“Countries have had to pay prices for adjustments they have had to make that are favourable in the medium to long-term development of the economy.”
He added that it was important for Nigerians to note that the imported rice is sometimes five to 10 years old and only suited for feed mills, but is being brought to Nigeria for human consumption.
He therefore urged Nigerians to pay the price of future growth and development by "maybe buying local products at a higher price," thereby conserving the country's foreign exchange and enhancing local production, so that more people could be gainfully employed and benefit from the value chain attached to the process.
“I believe it is a price that Nigerians must be ready to pay and I expect that everybody that has the future of this country at heart should support that policy by the CBN,” he said, adding that the Federal Government had through the CBN made funds available to farmers at single-digit interest rate.
The CBN had in June stopped the sale of foreign exchange to importers of rice, private jets, textiles, tomato paste, poultry products and 36 other items in order to shore up the country's falling external reserves.
The decision had generated debate, with many, including the Lagos Chamber of Commerce, criticising it and warning that it would lead to factory closures, job losses and higher inflation.
Low water on the Rhine does not only directly affect inland navigation. Industrial production may also suffer, economists warn, especially against the background of existing supply problems.
According to economist Prof. Stefan Kooths of the Kiel Institute for the World Economy, the economic consequences of the low Rhine water level are severe. "Calculations of the consequences of the low Rhine water level in 2018 show that industrial production drops by about 1 percent if the water level at the Kaub measurement point stays below the critical level of 78 centimeters over a period of 30 days," Kooths explained.
At its peak, industrial production fell by about 1.5 percent in 2018, Kooths continued. Over the course of a year, low water levels are likely to cost about 0.4 percent of economic output. "However, the situation at that time cannot be transferred one-to-one to the present day," the scientist explained. German industrial production then had much further to fall.
For industry, however, the negative effects of supply bottlenecks are much worse: "Until recently, due to supply bottlenecks, industrial production was 7 percent below the level that would be expected considering incoming orders," said the vice president and economic director of the Kiel Institute.
"What matters now, however, is that low-water obstacles are hitting supply chains that are already very tight," he said. Inland navigation is also an important means of transporting energy resources. Based on the experience of 2018, however, companies should be better prepared for emergencies in inland navigation, for example by using other types of ships.
Any additional stress factor, however, weakens economic momentum. And each additional hurdle to production raises prices as the mismatch between supply and demand widens. "From an inflation standpoint, it's not just about the consequences of rising transportation costs," explained Kooths.
According to the figures, only a small share of goods transported in Germany moves by inland waterway: in 2017 it was 6 percent. For individual groups of goods, however, such as coal, crude oil and natural gas, coking-plant and petroleum products, and chemical products, inland shipping accounted for 10 to 30 percent of transport volume. "These goods are at the beginning of many production chains, so failure to transport them can lead to production bottlenecks downstream."
A shock to this small sector (inland shipping's share of gross value added in Germany is below 0.2 percent) could therefore have a significant impact on other sectors.
The level at Kaub in Rhineland-Palatinate, which is important for navigation on the Rhine, dropped further on Friday. According to the Waterways and Shipping Administration, it stood at 42 centimeters in the morning, about 5 centimeters lower than at the same time the previous day. According to the office's forecasts, the level may fall below the 40-centimeter mark on Friday.
The navigation channel depth at Kaub, crucial for shipping, was just 1.59 meters on Thursday, less than on any other stretch of the middle and lower Rhine. Kaub, in the Rhein-Lahn district, is therefore considered an important reference point for inland navigation on the Rhine.
The Federal Institute of Hydrology (BfG) recently announced that shallow-draft inland waterway vessels can still navigate the middle Rhine down to a water level of approximately 30 to 35 centimeters at the Kaub gauge. However, forecasts put water levels near 30 centimeters at Kaub by the beginning of next week. Beyond that point, Rhine navigation in this area is likely to come to a standstill.
SUBMIT BY: NOVEMBER 25TH, 2019
The recent rise of the platform economy (encompassing "crowd work" and "on-demand" work) has led to new forms of work and labour markets, emerging business models, and structural economic shifts. East and Southeast Europe, home to some of the largest European pools of digital platform workers, have a strong interest in exploiting the potential of digitalization in the global economy.
The Reshaping Work Conference East and Southeast European (ESE) Edition, to be hosted by the Public Policy Research Center from Serbia, aims to facilitate regional discussions and to anticipate the impacts of the platform economy on workers, businesses, and societies in ESE. The conference seeks to discuss current trends and debates in the region and to understand how the platform economy in its different forms unfolds, for instance in terms of value creation and strategies to prevent brain drain.
While disciplines such as management, economics, politics, and communications have developed extensive bodies of literature on the platform economy, it is still predominantly studied as a global phenomenon. As a next step, it is necessary to enrich those perspectives by analysing local, national and regional contexts, including geographically specific characteristics and implications of the platform economy and platform work.
Therefore, we invite the submission of high quality and timely research contributions from different backgrounds, grouped as economics, business & technology, sociology, and law & public policy, that primarily focus on regional perspectives. Submissions may, however, take a wider outlook toward comparative EU and/or global perspectives. We welcome contributions from academics, independent researchers, non-profits, social activists, policy-makers, institutional representatives, businesses, start-ups, unions, platform companies, platform workers and other stakeholders that address the topics this conference focuses on.
The East and Southeast European Edition will focus on the following countries: Albania, Bosnia & Herzegovina, Bulgaria, Croatia, Macedonia, Hungary, Montenegro, Romania, Serbia, Slovenia, Ukraine and Belarus, yet contributions discussing the impact of the platform economy in other countries from this broadly defined region are not excluded from consideration.
The platform economy plays an important role throughout the economy by minimizing transaction costs between entities that can benefit from coming together. In these businesses, pricing and other strategies are strongly affected by the indirect network effects between the two sides of the platform. Digital platforms are efficient and effective technological infrastructures for matching demand and supply, and they are increasingly used by firms to access human capital, allowing more varied and flexible types of work for job seekers. Digital platforms affect productivity, flexibility, and interconnectivity in the economy. They change the nature of employment, its trends and characteristics. We invite contributions from the field of economics that may examine, among others, the following aspects:
Digital platforms promise to create opportunities and enable new business models. Platforms have the potential to allow for more varied and flexible types of work organization for large enterprises, startups and individual entrepreneurs alike. Meanwhile, the platform economy brings about newly configured relationalities and responsibilities (e.g., platform companies vs. supply chain firms, gig workers vs. the regular workforce). Accordingly, new and ever more complex managerial and policy challenges arise, concerning work routines, distributed value creation, knowledge flows, supply chains, human resource management, corporate governance, accountability, etc. Entrepreneurs, workers, investors and also researchers become connected in new ways, at local, national, regional, and global scales. It remains an unresolved question how workers, businesses, governments, consumers, intermediaries, and the environment in East and Southeast Europe will be affected by these transformations. We invite contributions exploring this question, especially from management, information systems, entrepreneurship, innovation studies and economic geography. Contributions may examine the following aspects:
Digital platforms and online marketplaces for goods and services operate across a wide variety of labour markets and socio-economic contexts. As current trends show, the participation of workers, users and prosumers from the ESE region (particularly from Ukraine, Serbia, Hungary and Romania) in global digital platforms is significant, yet the analysis of their position, mechanisms of inclusion, identities, and social and human capital in relation to platform work has been widely neglected. The focus of this call is on the features and agency of those working and living within digital platforms, under the key hypothesis that in the digital age connectivity makes the distinction between work and activity even more uncertain.
We invite contributions from the social sciences that address, among others, the following topics focused on regional and comparative perspectives:
Legal and policy solutions for new forms of work stemming from the platform economy, such as "crowdwork" and "on-demand work", are in their infancy in almost all countries in the region. Yet such solutions are important for nurturing the platform economy and securing regulated working conditions for platform workers. In the field of law and public policy we therefore welcome papers that explore possible approaches to the regulation of platform work, including soft law, local or regional pilots, the 'right to challenge', and self-regulation. We also welcome papers addressing the international regulation of platform work, including data transparency regulations. Papers reflecting empirical legal studies linked to one of the suggested topics are most welcome.
Those wishing to participate in the conference by presenting a research paper or report are requested to submit an extended abstract (around 800 words) by NOVEMBER 25th. Applicants should include their title and institutional affiliation, and indicate the division to which their work belongs (economics, business & technology, sociology, or law & public policy).
Abstracts should be sent to [email protected]. You may also contact Branka Andjelkovic, Head of the Committee at [email protected] with any questions you may have about scholarly contributions to this conference. Notification of acceptance will be sent by December 9th.
The Reshaping Work Conference East and Southeast European Edition will take place on February 27th & 28th, 2020 at University of Novi Sad, Novi Sad, Serbia. For practical information, please consult our website: novisad.reshapingwork.net. Early-bird tickets go on sale on November 15th. To ensure you stay up-to-date, please sign up for our newsletter or follow us on Facebook & Twitter. | https://novisad.reshapingwork.net/call-for-papers/ |
SEC Brings Enforcement Action Against Space SPAC for Alleged Misleading Disclosure and Due Diligence Failures
The U.S. Securities and Exchange Commission (“SEC”) has brought an enforcement action against a special purpose acquisition company (“SPAC”) and its major participants, highlighting enhanced regulatory scrutiny of SPACs and underscoring the importance of following appropriate diligence and other practices in the de-SPAC process.
On July 13, 2021, the U.S. Securities and Exchange Commission announced that it had brought an enforcement action against Stable Road Acquisition Company (“Stable Road”), its sponsor, SRC-NI (“SPAC Sponsor”), its CEO Brian Kabot, Stable Road’s proposed merger target, Momentus Inc., and Momentus’ founder and former CEO Mikhail Kokorich for their involvement in a SPAC business combination. Stable Road is a SPAC that completed its initial public offering of 17,250,000 units at a price of $10.00 per unit, generating gross proceeds of $172.5 million, on November 13, 2019. Momentus, a Delaware corporation, is a privately held space transportation company that plans to offer in-space infrastructure services. The two companies announced a business combination in October 2020 that would result in Momentus becoming a public company.
The SEC alleged that, ahead of a proposed business combination, (i) the respondents made materially misleading statements in their public disclosures surrounding (a) Momentus’ technology and (b) national security risks associated with Kokorich; and that (ii) Stable Road and the SPAC Sponsor made misleading disclosures compounded by the SPAC Sponsor’s insufficient due diligence. All parties except Kokorich settled with the SEC, with total penalties of over $8 million, the SPAC Sponsor’s forfeiture of its founder shares, an undertaking to give PIPE investors the ability to terminate their subscription agreements prior to the shareholder vote to approve the merger, and tailored investor protection undertakings. The SEC has filed a complaint against Kokorich in federal court based on related conduct.
The SEC further alleged that Momentus and Kokorich repeatedly told public investors that they had “successfully tested” Momentus’ key technology in space when, in fact, the test had failed to achieve its primary mission and did not even meet Momentus’ own public and internal pre-launch criteria for success. In addition, the SEC claimed that Momentus and Kokorich made false claims regarding U.S. government concerns about national security and foreign ownership risks posed by Kokorich, a Russian citizen residing in Switzerland, and that they concealed doubts about Momentus’ ability to secure essential governmental licenses.
Stable Road, the SPAC Sponsor and Kabot are accused of repeating these alleged material misrepresentations in their own public filings while also failing to review the in-space test and to follow up on red flags concerning the national security risks raised during their due diligence. SEC Chair Gary Gensler stated in his remarks that: “The fact that Momentus lied to Stable Road does not absolve Stable Road of its failure to undertake adequate due diligence.”
The SEC’s order asserts violations of antifraud provisions of the federal securities laws, including scienter-based charges against Momentus for fraud under the Securities Act and Exchange Act. It asserts negligence-based charges of fraud and violations of reporting and proxy solicitation provisions by the SPAC itself (Stable Road), and that Kabot and the SPAC Sponsor caused Stable Road’s violation of the antifraud “scheme” liability provision (Section 17(a)(3) of the Securities Act). The complaint filed against Kokorich asserts that he violated the antifraud provisions with scienter, among other claims.
As noted above, this action is one of the first of an expected series of potential enforcement actions related to SPACs. Given the rapid growth in this sector over the past few years, the SEC’s Enforcement Division has a working group focused on SPACs, and we expect more actions to come. Activity from the Enforcement Division follows staff guidance and remarks earlier this year on SPACs relating to the use of projections, accounting methodologies and celebrity involvement with SPACs. Future enforcement actions may focus on disclosures in public filings, including those relating to risks regarding conflicts of interest in SPAC transactions, with a general focus on protecting investors and in particular retail investors. With this in mind, we offer a few practice considerations:
- Establish and Execute upon an Appropriate Diligence Process – Parties to a SPAC business combination transaction should carefully consider and implement an appropriate and tailored diligence process. Though SPAC business combinations operate on a different timeline than traditional IPOs, parties should avoid shortcuts in the due diligence process to accommodate compressed timelines. The SEC's enforcement action shows that the SEC will scrutinize the due diligence of private company targets by SPACs, their sponsors and other transaction participants.
- Thoroughly Investigate Core Business and "Red Flag" Issues – The SEC pointed to two alleged diligence failures in the Momentus action. Transaction parties are reminded to appropriately diligence core business and operational issues that inform the target company's prospects, as well as "red flag" issues such as those that may arise in connection with management background investigations or company regulatory matters. Given previous SEC staff guidance on the topic, diligence of forward-looking information, including a target's financial projections and the underlying assumptions, should be a high-priority item for all transaction participants.
- Take Action to Address Areas of Concern – In the recent enforcement action, the SEC notably took action even before the proxy statement/prospectus was finalized and sent to shareholders. SPACs and their sponsors should recognize that the SEC expects them to act on the findings that arise from the due diligence process. Not only must public disclosures be materially accurate, but SPACs and sponsors may need to consider other more significant actions based on their findings, which could include management or operational changes at the target company or walking away from the deal altogether.
- Penalties Will Be Tailored to SPAC Transactions – The remedies in the settlement of this action were tailored to the workings of a SPAC transaction. The forfeiture of founder shares by the Sponsor and the ability of PIPE investors to terminate their subscription agreements were both substantive steps designed to address the SEC's analysis of the realities of SPAC transactions, including the economic incentives of the various transaction participants. In addition, the requirement that the target company establish an independent board committee and engage an independent consultant to conduct a comprehensive ethics and compliance program assessment relating to disclosure practices underscores that the SEC will focus on whether the target company is prepared to become a public company.
We believe that, from the SEC’s perspective, cases like this serve to force a better alignment of incentives of parties to a SPAC transaction with the interests of investors and improve public disclosures that investors rely on when making investment and voting decisions. From the perspective of SPAC transaction participants, this case serves as a forceful reminder to thoroughly conduct due diligence, take seriously the findings of the due diligence process, and consider the implications of such findings in light of required disclosures to investors and their investment decision-making process.
Copies of the SEC’s public announcement, order and complaint are available here. | https://www.natlawreview.com/article/sec-brings-enforcement-action-against-space-spac-alleged-misleading-disclosure-and |
- Generally define & describe the policy issue to be addressed.
- Identify the purpose of analysis, the targeted level of policy (i.e., clinical practice, health care systems, or public/social health) & significance of topic.
- Identify questions the policy analysis is intended to address.
- Provide details of the issue or problem, including its nature/scope, relevant literature & history, & the context within which the issue exists.
- Describe existing policy addressing the issue, if any.
- Discuss strengths & shortcomings in existing policy.
- Identify & describe key stakeholders (individuals & groups) that are or will be affected by the policy & why.
- Identify alternative policies to achieve objectives.
- Establish/identify criteria that will be used for selection of “best” policy.
- Evaluate each alternative & its potential impact relative to the healthcare & patient outcomes.
- Assess the trade-offs between alternatives.
- Based on the analysis, identify the “best” alternative to address the current issue & policy situation.
- Provide rationale for selection.
- Describe possible strategies to implement selected alternative.
- Identify barriers to implementation of selected alternative.
- Describe methods to evaluate policy implementation.
- Discuss analysis & recommendations relative to the original questions identified, & the level of policy it is intended to address (i.e., clinical practice, health care systems, or public).
- Identify limitations of analysis.
- Discuss implications for practice, education, research, & policy-making.
- Summarize findings & recommendations of analysis
- Identify questions to be addressed in future studies or policy analyses.
- List all references cited in paper. Must be completed in APA format.
- Table displaying results of analysis, including, for example, a list of alternatives & the degree to which each alternative may be most effective; one way to populate such a table is sketched below. Other tables & appendices as needed to support analysis.
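One common way to populate the results table described above is a weighted-criteria scoring matrix: each alternative is scored against each criterion, and the scores are combined using criterion weights. The sketch below is a minimal illustration; the criteria, weights, scores, and alternative names are hypothetical placeholders, not a prescribed rubric.

```python
# Hypothetical criterion weights (summing to 1) and 0-10 scores per alternative.
weights = {"effectiveness": 0.4, "cost": 0.3, "feasibility": 0.2, "equity": 0.1}
alternatives = {
    "Status quo":      {"effectiveness": 3, "cost": 8, "feasibility": 9, "equity": 4},
    "Expand coverage": {"effectiveness": 8, "cost": 4, "feasibility": 6, "equity": 9},
    "Pilot program":   {"effectiveness": 6, "cost": 6, "feasibility": 8, "equity": 6},
}

def weighted_score(scores: dict) -> float:
    """Combine criterion scores into one number using the weights above."""
    return sum(weights[criterion] * scores[criterion] for criterion in weights)

# Rank alternatives from highest to lowest weighted score.
for name, scores in sorted(alternatives.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name:16s} {weighted_score(scores):.1f}")
```

Reporting the per-criterion scores alongside the weighted totals also makes the trade-offs between alternatives explicit, which supports the trade-off assessment step above.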
In a newly issued Report, the OIG has expressed concern regarding CMS’s lack of oversight of P&T Committee conflicts of interest. As the entities responsible for making Medicare Part D formulary decisions, P&T Committees must ensure that their decisions are made based on scientific evidence and not based on the personal financial interests of committee members.
Federal regulations require that Medicare Part D sponsors follow their P&T Committee’s decisions regarding which drugs to place on formulary. However, sponsors ultimately can determine the tier placement of such drugs based on P&T Committee recommendations. With respect to conflicts of interest, federal laws and regulations stipulate that at least one physician and at least one pharmacist on the P&T Committee must be free of conflict relative to the Part D sponsor, Part D plan, and pharmaceutical manufacturers.
OIG's Findings
The OIG’s Report concludes that CMS has failed to ensure that P&T Committee decisions are free from conflicts of interest. Specifically, the OIG identified the following:
- Limited Definitions of Conflict of Interest – In surveying sponsors, the OIG found that more than half of the P&T Committees’ definitions did not address conflicts of interest prohibited by federal regulations. In addition, P&T Committees’ definitions do not always address relationships with other entities, such as PBMs, that could benefit from formulary decisions. Finally, more than two-thirds of P&T Committees’ definitions do not view employment by the entity that maintains the committee as a potential conflict.
- Committee Members Allowed to Determine and Manage Own Conflicts of Interest – The majority of P&T Committees allow members to determine their own conflicts and manage them through recusal. Many P&T Committees do not even have a process for determining conflicts or for collecting financial interest information from members.
- Inadequate CMS Oversight of P&T Committee Conflict of Interest Compliance – The OIG determined that CMS does not monitor conflicts of interest on P&T Committees or review P&T Committee conflict of interest information reported by sponsors and PBMs. Additionally, during 2010 CMS did not have audit protocols to audit P&T Committee member conflicts of interest. In 2012, CMS added an optional review of P&T Committee documentation to determine compliance with federal conflict of interest requirements, but less than 10 percent of its audits included these elements.
OIG's Recommendations
To minimize the possibility that conflicts of interest may influence formulary decisions, the OIG recommends that CMS improve its oversight of P&T Committee conflicts and set minimum standards for sponsor oversight. OIG recommends that CMS:
- Specify that P&T Committee conflict of interest requirements extend to PBMs;
- Require sponsors to maintain policies and safeguards applicable to P&T Committee members who are employed by the entity that maintains the Committee;
- Require sponsors to use objective processes to determine if P&T Committee members’ disclosed financial interests do in fact constitute a conflict of interest;
- Require sponsors to use objective processes to manage disclosed P&T Committee members’ conflicts of interest, including specifying when the member must be recused from discussion and/or voting; and
- Oversee compliance with P&T conflict of interest procedures, including auditing both plan sponsor P&T Committee conflict of interest determinations and management policies.
Part D plan sponsors and their contracted PBMs should review the adequacy of their existing P&T Committee policies and procedures in light of the concerns highlighted in the OIG Report. Sponsors and PBMs should also be prepared for additional P&T audits and enhanced requirements with respect to the reporting of P&T Committee membership data in the Health Plan Management System.
Click here to hear a podcast of Ann Maxwell, Regional Inspector General in Chicago for the Office of Evaluation and Inspections, discussing the Report’s findings and the OIG’s concerns regarding CMS’s lack of P&T Committee oversight. | https://www.mintz.com/insights-center/viewpoints/2146/2013-03-oig-report-critical-pt-committee-oversight |
The strategic evaluation & use of alternative investments in defined benefit pension plans
Alternative investments have become a more prevalent aspect of multi-asset investing. Moreover, defined benefit (DB) plan sponsors are increasingly using alternatives to help address some of the key challenges they face in managing a pension plan such as funding benefits, reducing volatility of funded status and diversifying overall risk at the total portfolio level.
However, alternatives are not a one-size-fits-all solution. An effective alternatives strategy will depend on many factors such as funded status, liability profile, liquidity needs, risk/return objectives and investment beliefs.
Given such considerations, our intent in this blog post is to:
- Highlight the available opportunity set across alternative investment strategies, their fundamental characteristics and their potential role in the context of DB pension plans.
- Describe how alternatives can be used to address the challenges and issues plan sponsors face in their quest to deliver on the pension promise.
- Provide examples of how different alternative investment allocations impact key metrics associated with pension plan management.¹
Understanding the spectrum of alternative investments
PRIVATE CAPITAL
Private capital (which includes private equity, private debt and private real assets) is a broad descriptor of investments in either the equity or debt of privately held companies.
Across the private equity universe, different categories are represented by companies at various stages of their lifecycle and include venture capital, growth equity, buyouts and distressed. Within private credit, there is also a broad range of investable opportunities across corporate (cash flows from operating businesses) and asset-backed (cash flows from physical assets such as real estate).
Investors participate in private capital investments via closed-end vehicles that typically have lifespans on the order of 10 years, in the case of private equity, or five to eight years, in the case of private debt. That said, it is important to understand that, given the cash flow pattern of private markets funds, investors do not have to wait until the end of the fund's life to receive money back: distributions naturally occur as underlying investments are realized (i.e., when a portfolio company is sold or a loan is paid off).
The strategic case for including private capital in a DB pension plan portfolio includes the opportunity to generate returns that are superior to those in the public markets and to reduce overall portfolio volatility. For example, over the trailing 20-year period to Sept. 30, 2020, private equity outperformed the S&P 500 index by 350 basis points.²
HEDGE FUNDS
Hedge funds are a diverse category of investment strategies (such as equity hedge, event-driven, relative value and tactical trading) where the investment returns are expected to be created by the skill and expertise of the manager. Importantly, hedge funds utilize a broader array of investment techniques relative to traditional equity and fixed income managers such as short selling, leverage and actively managing market risk through hedging.
Liquidity terms for hedge fund investment vehicles vary depending on the underlying strategy. For more liquid strategies such as equity hedge, redemptions may be permitted on a quarterly basis with 30 days' notice while more illiquid strategies, such as event-driven, may offer annual liquidity with 90 days' notice.
Given that equity beta is the dominant risk factor in a return-seeking portfolio, hedge funds in aggregate offer several benefits. These include generating returns that are independent of (or at least have a lower correlation with) the general direction of markets. By having the ability to deliver absolute returns through time, while preserving capital when markets are experiencing selloffs, hedge funds are an important tool to consider for reducing funded status volatility.
PRIVATE REAL ESTATE
Core private real estate can be characterized by high quality assets located in urban areas that are leased to credit-worthy tenants. Property types include industrial, office, apartments, retail, self-storage, single family homes and senior housing.
There are two main components which underpin commercial real estate returns. The first component is the stable, bond-like income yield that comes from rents based on contractual property leases, which accounts for approximately 70%-80% of the total expected return. The second is a capital appreciation component that is linked to growth in cash flows and an increase in property values.
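As a rough illustration of that decomposition, the sketch below splits a hypothetical total return into its income and appreciation components; the 5.0% income yield and 1.5% appreciation figures are assumptions chosen for illustration, not forecasts from this post.

```python
income_yield = 0.050   # hypothetical contractual rent yield
appreciation = 0.015   # hypothetical growth in cash flows and property values
total_return = income_yield + appreciation

income_share = income_yield / total_return
print(f"expected total return: {total_return:.1%}")  # 6.5%
print(f"income share of total: {income_share:.0%}")  # ~77%, within the 70%-80% range above
```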
Core private real estate funds typically offer quarterly liquidity.
Over its history and relative to other asset classes, real estate has provided strong risk-adjusted returns. Over the long term, core private real estate is expected to generate a return between that of public equities and bonds. There are several reasons to include core private real estate in a DB plan, including: low volatility relative to return-seeking asset classes, risk diversification and income generation.
Challenges and issues
In practice, plan sponsors are faced with multiple challenges and issues in delivering on the pension promise—i.e., providing benefits to participants. These include:
Growth
Ultimately, benefit payments need to be funded by contributions or investment returns. Absent the former, pension plans need to generate the required growth in assets to fund the economic value of all benefits that will ever be earned by current (and future) participants. Each year, liabilities are expected to increase as interest accrues on the obligation (and further if interest rates fall), while assets flow out of the plan to pay benefit payments, reducing the size of assets that can generate return. With the exception of frozen plans, new benefit accruals also add to liabilities each year. And for underfunded plans, additional returns will be required to reduce the funding deficit. Other DB-related factors may compound the need for additional returns, such as paying plan expenses and service providers out of plan assets. Underfunded plans also need to pay additional premiums to the PBGC.³
Avoidance of negative outcomes
The risk decomposition of typical DB plans shows us that liability-relative interest rate risk, and the volatility of public equity exposures, dominate. As a result, aside from an effective liability-driven investing (LDI) program to mitigate interest rate risks, it is also important to diversify return-seeking allocations. This is particularly noticeable in scenarios where equity markets fall materially in value, precipitating large declines in funded status.
Given the unique investment attributes inherent across the range of alternatives, plan sponsors may utilize alternatives to mitigate investment risks inherent in delivering on the pension promise.
Impact of adding alternative investments
When considering the addition of alternative investments in a portfolio, it is important for plan sponsors to understand the impact on key metrics such as total portfolio expected return, surplus volatility and contributions. In the examples below, we summarize the impact of adding alternatives for three pension plans, each with different circumstances.
Example 1 – Frozen plan ($511 million total assets)
In this case, the plan is frozen and the ultimate goal is to achieve a fully funded plan and evolve the asset allocation mix to maintain funded status. The plan's funded status is currently just over 90%, though ultimately a funded status of 105% is required to progress to the end of the desired de-risking glidepath. The plan sponsor wanted to improve funded status without making additional contributions and, while conscious of future liquidity implications, decided to make a 5% allocation to private markets. The addition of private markets resulted in an improvement in the efficiency of the portfolio, including a reduction in surplus volatility from 9.4% to 8.7%, a $4.3 million reduction in the present value of cumulative contributions and a $7.3 million reduction in contributions in the event of a worst-case downside scenario over a 10-year horizon.
Example 2 – Closed and accruing plan ($180 million total assets)
In this instance, the plan is closed to new participants, though still accruing benefits for existing participants. As an ongoing plan, an increase in returns is required in order to keep pace with service costs. The plan's funded status is approximately 64%, so closing the deficit also weighs on return requirements. As a result, the plan sponsor sought to increase returns by implementing a 5% allocation to core private real estate and hedge funds. The addition of these alternative investments to the asset mix resulted in a reduction in surplus volatility from 10.2% to 8.3% and a $6.2 million reduction in contributions in the event of a worst-case downside scenario over 10 years.
Example 3 – Open and accruing plan ($77 million total assets)
Here, the plan is open and ongoing, though underfunded. In order to generate higher return potential and improve funded status, the plan sponsor wanted to allocate 8% of total assets to private markets. The addition of private markets resulted in an increase in expected returns from 6.2% to 6.8%, a $1.3 million reduction in the present value of cumulative contributions and a $3.6 million reduction in contributions in the event of a worst-case downside scenario over a 10-year horizon.
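The metrics in these examples can be approximated with standard portfolio arithmetic: expected return as an allocation-weighted average, and volatility from the component volatilities and their correlation. The sketch below is a simplified two-asset illustration with hypothetical inputs; it is not the asset-liability model behind the figures above, which also reflects liability hedging, contribution policy, and downside scenarios.

```python
import math

# Hypothetical inputs: a core return-seeking sleeve plus a 5% alternatives carve-out.
w_alt = 0.05
r_core, r_alt = 0.060, 0.070     # expected returns (assumed)
vol_core, vol_alt = 0.16, 0.10   # surplus-relative volatilities (assumed)
corr = 0.5                       # correlation between the two sleeves (assumed)

expected_return = (1 - w_alt) * r_core + w_alt * r_alt

variance = ((1 - w_alt) ** 2 * vol_core ** 2
            + w_alt ** 2 * vol_alt ** 2
            + 2 * w_alt * (1 - w_alt) * corr * vol_core * vol_alt)
surplus_vol = math.sqrt(variance)

print(f"expected return:    {expected_return:.2%}")  # 6.05%
print(f"surplus volatility: {surplus_vol:.2%}")      # ~15.5%, below the 16% all-core case
```

Even this toy version shows the mechanism at work in the examples: a small allocation to a lower-volatility, imperfectly correlated asset can raise expected return while reducing surplus volatility.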
The bottom line
As plan sponsors seek to address the challenges of delivering on the pension promise—including generating returns to fund liabilities, improving/protecting funded status and mitigating downside equity risks via broader diversification—they are increasingly looking to utilize alternative investments in their asset allocation.
Ultimately, while each DB plan's circumstances are different, and there is no one-size-fits-all solution, we believe it is a worthwhile exercise for plan sponsors to consider various alternative allocations to improve potential outcomes. | https://russellinvestments.com/us/blog/alternatives-strategic-evaluation |
Unlike what many people tend to believe, managing conflict is a very important part of business management. Given the close interactions among commercial parties, as well as their interactions with workers, regulators and other stakeholders, conflicts with any of them pose significant risk and the potential to cause significant harm. Dealing with this risk requires an appreciation of what can work and the skill to convert objectives into action.
Conflicts are part and parcel of all social activities and organizations. Often, the ability to manage conflict can be the difference between a functional and a dysfunctional unit. Different approaches may suit different situations. It is equally important to understand and appreciate the origins of conflict, and to attempt to address them even before conflicts arise.
Conflicts can exist at different levels and in different forms. We not only need to resolve them, we also need to prevent the damage arising from them, and lastly, we need to learn to live with them. This is why we need Conflict Management skills.
In our own sphere of work, each one of us has to find ways and means for managing conflict.
Especially when the stakes are high, as in the case of a business organization dealing with important stakeholders, or at the international level, those responsible for managing these conflicts need to know the pros and cons of adopting different strategies for conflict management.
Conflicts arise because of differences in the interests of two or more agents, irrespective of whether these agents are individuals, organizations, groups of people or nations. As the interests of one party can never be exactly the same as those of another, conflicts are inevitable. The same is true of an organization, where differences will always exist between managers and workers, directors and shareholders, and the organization and its business partners and agents. Differences can also exist between sections of the same organization, as well as among managers or even workers. Each of these differences leads to a 'conflict' that needs to be managed, with the aim of preventing it from becoming an obstacle to the functioning of the organization.
It is important to remember that conflict management does not necessarily mean resolution of conflict. In fact, while at times it is not possible to resolve a conflict, at other times it is not desirable to resolve the conflict because of the cost involved in terms of time, effort and attention. Thus, the first basic principle of conflict management is that there are many alternative ways of managing a conflict, and selecting the appropriate alternative is perhaps one of the most important steps in conflict management.
Different approaches to managing a conflict include:
1. FORCING: using available authority or power to force a solution that satisfies your concerns without regard to the concerns of the other party. This approach may be appropriate when it is impossible to satisfy the concerns of the other party, e.g., in the case of a legal dispute with a party threatening illegal use of force.
2. ACCOMMODATING: satisfying the concerns of the other party while neglecting your own concerns. This may be the best approach when the cost of satisfying the other's concerns is far smaller than the loss that may result from the continuation of the conflict, e.g., a minor modification of services at the request of a very important client.
3. AVOIDING: not giving any attention to the conflict and continuing the normal functioning as if the conflict does not exist. This is a common approach, which is frequently adopted by organizations as well as managers, but is appropriate only for minor and inconsequential conflicts which pose little threat of escalation.
4. COMPROMISING: an approach that aims to partly satisfy the concerns of both parties, but does not fully satisfy either. It is the preferable approach for resolving conflicts between two equal partners, or two parties of equivalent strength, where persistence of the conflict can lead to recurrent losses for both.
5. COLLABORATING: an approach that consists of expressing a clear intention to resolve the conflict, keeping the channels of communication open and working together to find a mutually acceptable solution, while continuing to work with the other party. 'Listening with understanding' is a form of this approach, applicable when managing conflicts with individuals, where the other person's frustration and emotions need to be calmed before finding the actual solution.
It is important to remember that the solution for each conflict needs to be selected on the basis of several factors. These include the power equation between the two parties, the comparative importance of each party to the other, their respective dependence on each other, and the importance of the issues and values involved. The cost of satisfying the concerns of the other party needs to be balanced against the costs that may be saved by resolving the conflict. While deciding how to approach the conflict, one also needs to take into account the message that the decision will send to other parties and how it may affect their approach to managing conflict with you.
Negotiation is a very important tool in conflict management, and it is one art that needs to be mastered to keep the cost of conflict management down, especially while adopting the 'compromise' approach, which is one of the most common approaches adopted among equal partners. It is an art that keeps the expectations down while keeping the various options open, thereby leading to the best possible compromise deal for the party.
Conflicts can be minor or major. They can be legal or violent. Irrespective of the form in which one faces them, the basic principle of managing them remains the same. However, knowing the principles is the easiest part. Applying them effectively in a given situation is the real art!
T-Squared: Trib Transparency, Continued
Since the beginning of the year, we've been looking for ways to disclose more about the inner workings of the Tribune. Today we've taken the next few steps along a continuing path.
Over the last few months, Trib watchers and watchdogs alike have wondered out loud (some louder than others) how we keep our journalism separate from our business operations — how we raise money to support the good and important work we do without giving corporate sponsors, donors, foundations or members any say over what we report and how we report it.
The answer, for us and for every other media organization out there, nonprofit or otherwise, is a shared belief — held by both our journalists and our business team — that if we don’t ensure our reporting is completely free of outside influence, we’ve got nothing.
But we can’t expect our readers to take our word for it, which is why we’re big advocates for disclosure. When you’re asking for people’s trust, being as transparent as possible is always the right thing to do — whether you’re an elected official or a public media organization. They say sunlight is the best disinfectant, and we’ve shined a lot of rays on the inner workings of the Trib so everyone can see not just the work we produce but who pays our bills.
We have long published lists of every person and every institution, charitable or corporate, that supports us. In the case of individuals and foundations, we've disclosed amounts of their donations in real time or close to it. For corporate sponsors, we've published a comprehensive list of every entity that has contributed to us. We've also posted the Form 990s we file annually with the IRS, which, per federal regulations, include the specific amounts of those sponsors that have given at least $5,000. We upload those to our site as soon as they're completed and routinely post our audited financials, too. This degree of transparency has allowed readers to see for themselves why we are worthy of their trust — and what we're doing to maintain it.
At the same time, there are always ways to improve — and some of the best ideas have come from our toughest critics. Since the beginning of the year, we’ve been talking to stakeholders who care about the future of the Trib, seeking feedback on ways to make ourselves and our work even more transparent. In January, we published a formal ethics policy and created a stand-alone corrections page. Today we take the next steps toward more robust disclosure:
- We’ve posted on our site the specific dollar amounts associated with all corporate sponsors, regardless of the size of their contributions, by year and over time, broken down by type of revenue (cash or in-kind) and whether the funding was for digital sponsorship or for events.
- We’ve provided details on our donor page on whether foundations helped support specific areas of editorial coverage.
- Going forward, we’re reverting to our previous policy of appending disclosures to every story indicating whether specific subjects or institutions named within are Texas Tribune donors or corporate sponsors; we'll list anyone who has contributed $1,000 or more. In May 2012, we began publishing a blanket disclosure paragraph at the bottom of stories referring readers to our disclosure pages. The idea was to keep the details of our business out of our newsroom while still providing readers a way to see for themselves whether potential conflicts might exist. But that paragraph didn’t follow our copy when our stories were picked up by other news organizations, and even for readers of Tribune stories on the Trib’s own site, we were not making it as easy as it could be to ferret out funders on the page — so it’s back to the future.
- We’ve added language into our event descriptions stating what has always been true: that sponsors do not have any role in selecting topics, panels or panelists. And we’re making sure that any financial relationships between the Trib and panelists on given programs are disclosed.
We’re not done. There are other tweaks to our disclosure policies that we’re contemplating, so stay tuned. And we’re always actively pursuing good ideas and suggestions aimed at improving transparency. We’ll continue to evaluate our policies and consider other efforts to shine still more rays of sunlight on our operations and our work.
We hope these new standards, on top of what we've already been doing, will make us the most transparently funded news organization in the country.
We’re proud of The Texas Tribune. We’re producing innovative, ambitious, aggressive and — best of all — independent public interest journalism. We’re honored by your trust and support, and we will always strive to be good custodians of it. | https://www.texastribune.org/2014/02/28/t-squared-ethics-and-us/ |