Hi there,

Bit of a novice Excel user here. I feel like what I'm trying to do is very possible but I'm not sure how to make it happen. I manage meeting rooms in an office tower and am tracking issues that arise in these rooms. My spreadsheet consists of:

- Column A: floor number
- Column B: room number
- Columns C, D, E: each respective column for a different recurring issue

The idea is that when an issue is discovered, a value of 1 is added to the appropriate column in the row for that particular room. I then want to sum all of the values for a given floor, to get the total issues on a floor. I can do this by manually selecting ranges using the SUM function, but I would have to use this function 50 times (once per floor). In my mind I want Excel to do the following: IF the value in column A is "5", THEN add the sum of columns C:E in that row together with all the other rows that also have the value "5" in column A. Thoughts?

Aug 08 2022 10:49 AM

@Zblahout
=SUMPRODUCT(($A$2:$A$25=G7)*$C$2:$E$25)
You can try this formula for the data layout of the example. Because the formula compares $A$2:$A$25 against G7, you can list the 50 floor numbers down column G and fill the formula down to get every floor's total at once.
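For readers who want the same conditional-sum logic outside Excel, here is a minimal pandas sketch. The column names and sample values are invented for illustration; nothing here comes from the original thread.

```python
# Minimal sketch: sum the issue columns for every row whose floor matches,
# mirroring =SUMPRODUCT(($A$2:$A$25=G7)*$C$2:$E$25). Column names are made up.
import pandas as pd

df = pd.DataFrame({
    "floor":   [5, 5, 6],
    "room":    [501, 502, 601],
    "issue_1": [1, 0, 1],
    "issue_2": [0, 1, 0],
    "issue_3": [1, 1, 0],
})

issue_cols = ["issue_1", "issue_2", "issue_3"]

# Per-floor totals across all issue columns (floor 5 -> 4, floor 6 -> 1 here).
per_floor = df.groupby("floor")[issue_cols].sum().sum(axis=1)
print(per_floor)
```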
https://techcommunity.microsoft.com/t5/excel/sum-range-of-cells-based-on-a-common-value-in-a-column/td-p/3594116
Q: We frequently need to select samples of populations for tests of transaction work when auditing or reviewing client financial data. Currently, each staff member uses his or her own sampling method, some of which seem to produce skewed results. We would like to deploy a consistent sampling method using Excel, but we are frequently faced with producing a single sample from multiple populations of different sizes, and we want to produce unbiased samples. Can you tell us how we might accomplish this in Excel?

A: It is possible to produce an unbiased sample from multiple population ranges in Excel using either the Rand or Randbetween functions, as follows. For both examples, let us assume that you want to select 30 random invoices from the following three population ranges: invoices numbered 2232 through 3723; 4000 through 5709; and 6400 through 8727.

Rand Method

Start by listing all invoice numbers in a single column using Excel's Fill tool, as follows. Enter the value 2232 into Excel (cell A4 in this example), and then from the Home tab, select Fill, Series, and in the Series dialog box select the Columns radio button, enter 3723 as the Stop value, and then click OK. This process will list all invoices from 2232 to 3723. Position your cursor in the next available cell in this column (cell A1496 in this example), enter the next invoice number in your population (4000 in this example), and repeat the Fill tool process until you have listed all three ranges of invoice numbers to be sampled.

In the adjacent column enter the function =RAND() (cell B4 in this example), and copy it downward so that each invoice number has its own corresponding random number, as pictured below. To prevent the values from recalculating further, convert the random numbers to values by selecting them, then copying and pasting them to the same location as Values. Sort the data according to the random numbers and select the first 30 numbers from the top of the list as your random sample (shown below).

Randbetween Method

For an approach that is a little more complicated but takes far less Excel screen real estate, use the Randbetween function as follows. (Note: To use Randbetween in Excel 2003, you must first activate the Analysis ToolPak add-in by selecting Tools, Add-ins, checking the box labeled Analysis ToolPak, and then clicking OK. Because Randbetween is included as a standard function in later editions of Excel, the Analysis ToolPak does not need to be activated.)

List your various population ranges, calculate the population sizes for each range and in total, and then calculate the percentage of the total population for each range, as pictured. Next, create a columnar table for summarizing the 30 random invoice numbers, and in the first column (cell H3 in this example) enter the formula =RANDBETWEEN(1,100)/100, and then copy it downward to fill 30 rows, as suggested in the image below. These formulas will generate 30 random values between 1% and 100%, to be used in the next step as a basis for randomly selecting invoice numbers from the three populations, according to each range's respective size within the total population. (The general idea, for example, is that values from a range representing just 10% of the total population will have only a 10% chance of being selected.) In the adjacent column (cell I3 in this example), enter the following formula: =IF(H3<C9,RANDBETWEEN($C$4,$C$5),IF(H3>($C$8+$D$8),RANDBETWEEN($E$4,$E$5),RANDBETWEEN($D$4,$D$5))).
While this formula may appear intimidating, it simply contains three Randbetween functions, each designed to generate random invoice numbers from one of the three population ranges, depending on the random numbers (from 1% to 100%) generated in column H. For example, as shown below, the formula in cell H3 produces the random percentage of 8%. Based on this result, the Randbetween formula in cell I3 generates a random invoice number from within the first population range (2232 to 3723), because 8% falls within the 1% to 27% range. In theory, cell H3 can be expected to generate random numbers between 1% and 27% twenty-seven percent of the time. This means there is a 27% likelihood that the random invoice number generated in cell I3 will be generated from within the first range of invoice numbers (which represents 27% of the total population). Likewise, the formula ensures that invoice numbers will be generated from within the second and third ranges of data 31% and 42% of the time, respectively; hence, all values within the total population have an equal chance of being selected.

Copy the Randbetween formula downward to complete the table (shown below), and then convert the entire table of random numbers to values by selecting them, then copying and pasting them to a new location as values. Although this second method is more complicated, once it is created, the Excel template can be used repeatedly by entering the population ranges in the yellow highlighted cells. Download this workbook at carltoncollins.com/random.xlsx.

Note: To select a sample of random items from a list of text values (e.g., customer names) or a list of nonsequentially numbered items, list the population in Excel and number them sequentially in an adjacent column, then apply one of the methods described above to the numbered column.
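As a cross-check of the article's approach, here is a short Python sketch of what the Rand method accomplishes: list every invoice in the three example ranges once, then draw 30 without replacement, so each invoice has an equal chance of selection. (The Randbetween method instead samples with replacement, weighting each range by its share of the population; this sketch is not that variant.)

```python
# Sketch of the Rand method's end result: enumerate the three example invoice
# ranges from the article, then take an unbiased sample of 30 without
# replacement. The seed is only here to make the demo repeatable.
import random

ranges = [(2232, 3723), (4000, 5709), (6400, 8727)]
population = [n for lo, hi in ranges for n in range(lo, hi + 1)]

random.seed(0)                          # optional: reproducible demo
sample = random.sample(population, 30)  # every invoice equally likely
print(sorted(sample))
```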
https://www.journalofaccountancy.com/issues/2013/aug/sample-this.html
The loose ends in science are at least as interesting to me as what we (think we) know. Consider the Higgs boson, the elusive quarry of the biggest experiment in science: the Large Hadron Collider (LHC). The Higgs boson is the one remaining particle predicted by the highly successful standard model of particle physics -- and yet to be detected. A very weird thing, the Higgs boson, so weird/elusive/mysterious that it's developed the nickname of "The God Particle," from the book of the same name. The Higgs boson is said to give the attribute of mass to some subatomic particles. Because the LHC's latest round of experiments has yet to detect the Higgs boson, its mass is statistically likely to be above 207 GeV/c^2. (GeV = giga [billion] electron volts.) That division by c squared, where c = the speed of light in a vacuum, comes from the mass/energy equivalence relationship, the famous E = mc^2. For reference, a proton's mass is about 938 MeV/c^2 (MeV = million electron volts). A neutron is slightly more massive, at 940 MeV/c^2. An electron's mass is a mere 511 keV/c^2 (keV = thousand electron volts). These particles must get their mass from other mechanisms (e.g., from the so-called strong force) and/or only partially from the Higgs. As I say, weird. How weird is it that physicists don't know the mass of the particle for which they're looking? Here's the not-quite secret of the standard model: it relies upon about twenty parameters that are, as far as present theory knows, completely arbitrary. These parameters are determined by measurement. So in the latest round of LHC experiments, physicists have just about convinced themselves that the Higgs boson is not to be found within the mass range of 144 - 207 GeV/c^2. Imagine a detective saying, "I've about convinced myself that the suspect probably doesn't weigh between 144 and 207 pounds." It wouldn't sound like said detective knew who he was after, now would it? In The Trouble with Physics (highly recommended, by the way), physicist Lee Smolin names five grand challenges that have stymied theoreticians for decades. Among the five is "Explain how the values of the free constants in the standard model are chosen in nature." As in: why do particles have the masses that they do, and why are forces between particles the strengths that they are? Basic stuff that we can measure but in no way understand. Don't get me wrong. I think the standard model has been a remarkable bit of experimental science. It has guided many searches for subatomic detail. But if we don't soon find the Higgs, for sure something will have been proven incomplete in the standard model. Just as Newtonian gravitation, though it answered all scientific needs for centuries, was eventually supplanted by the more complete Einsteinian model of gravity (aka General Relativity), so, too, the standard model may have to give way to a more complete, more basic, model of subatomic reality. And won't that be exciting ...

Tuesday, March 29, 2011. "What we don't know," posted by Edward M. Lerner at 11:40 AM. Labels: lhc, physics, science
https://blog.edwardmlerner.com/2011/03/what-we-dont-know.html
Higgs Boson Will Destroy The Universe Eventually

Astonishing news about the end of the universe--and this is NO joke--was announced today: the Higgs boson particle is unstable and will one day cause the destruction of our whole universe by swallowing it. Ongoing calculations by scientists about the Higgs boson particle that was spectacularly discovered last year by the LHC at CERN in Switzerland indicate that it's not looking good for the future of the universe, they said Monday. "If you use all the physics that we know now and you do what you think is a straightforward calculation, it's bad news," Joseph Lykken, a theoretical physicist with the Fermi National Accelerator Laboratory in Batavia, Illinois, told reporters at the American Association for the Advancement of Science meeting in Boston. "It may be that the universe we live in is inherently unstable and at some point billions of years from now it's all going to get wiped out," said Lykken, who is a member of the science team at Europe's LHC, the world's largest and highest-energy particle accelerator. The discovery of the Higgs boson, which still needs to be confirmed by extensive calculations and further experiments, would help to answer a key question about how the universe came into existence some 13.7 billion years ago - and likewise how it will ultimately end. If it wasn't for this unfortunate (for the universe, that is) and surprising problem with the Higgs boson, the universe could exist forever, because it was shown (2011 Nobel Prize) that its expansion is actually accelerating! But says Lykken: "This [my] calculation tells you that many tens of billions of years from now, there'll be a catastrophe." "A little bubble of what you might think of as an 'alternative' universe will appear somewhere and then it will expand out and destroy us," Lykken said, adding that the event will unfold at the speed of light. The question about the long-term stability of our universe was already standing in the room before the Higgs discovery, but scientists could get the actual calculations going once the boson's mass began settling in at around 126 billion electron volts - a critical number, it turns out, for figuring out the fate of the universe. The calculation requires knowing the mass of the Higgs to within one percent, as well as the precise mass of other related subatomic particles. "You change any of these parameters to the Standard Model (of particle physics) by a tiny bit and you get a different end of the universe," Lykken said.
http://www.scienceworldreport.com/articles/5038/20130219/higgs-boson-instability-will-destroy-universe-eventually.htm
Knowing the mass of the Higgs is important because it tells us which of our theories is on the right track. For example, a very large Higgs would rule out huge branches of string theory, almost killing it. Not finding it at all would rule out supersymmetry and would destroy the standard model, with nothing left to stand in its place. The 'worst' case is that we find the Higgs exactly where we expect it to be, confirming what we pretty much knew already, without adding any new real information.

"The 'worst' case is that we find the Higgs exactly where we expect it to be, confirming what we pretty much knew already, without adding any new real information." Why is that the worst case? Science is the search for truth. Nature and reality don't change based on what we wish. That's the difference between science and magic/religion. We shouldn't care which theory wins out or what we gain from the knowledge. We should only care about which model most resembles what is real and measurable. Since we're talking about deductive reasoning, if we find that what we already know is correct, that still invalidates/eliminates entire other branches of enquiry. That means we don't have to waste time on those branches (unless there are other reasons to do so - and intellectual curiosity and the possibility of finding the unexpected might be reason enough - or we want further confirmation). What I'm trying to say is that any definite result is a good result, and we shouldn't let our emotional biases get in the way of actually doing the science.

"We shouldn't care which theory wins out or what we gain from the knowledge. We should only care about which model most resembles what is real and measurable." Yes, that's what scientists should care about. But if you've built a life and well-known career based on something that appears to just have been invalidated, the typical human reaction isn't to accept it and say, "oh well, time to cancel all my grants, give up my professorship, and start over, even though I'm 50 and have spent 1/2 my life 'studying' string theory". A great philosopher described that best:

"Alright!" bawled Vroomfondel, banging on a nearby desk. "I am Vroomfondel, and that is not a demand, that is a solid fact! What we demand is solid facts!" "No we don't!" exclaimed Majikthise in irritation. "That is precisely what we don't demand!" Scarcely pausing for breath, Vroomfondel shouted, "We don't demand solid facts! What we demand is a total absence of solid facts. I demand that I may or may not be Vroomfondel!" "But who the devil are you?" exclaimed an outraged Fook. "We," said Majikthise, "are Philosophers." "Though we may not be," said Vroomfondel, waving a warning finger at the programmers. "Yes we are," insisted Majikthise. "We are quite definitely here as representatives of the Amalgamated Union of Philosophers, Sages, Luminaries and Other Thinking Persons, and we want this machine off, and we want it off now!" "What's the problem?" said Lunkwill. "I'll tell you what the problem is mate," said Majikthise, "demarcation, that's the problem!" "We demand," yelled Vroomfondel, "that demarcation may or may not be the problem!" "You just let the machines get on with the adding up," warned Majikthise, "and we'll take care of the eternal verities thank you very much. You want to check your legal position you do mate. Under law the Quest for Ultimate Truth is quite clearly the inalienable prerogative of your working thinkers. Any bloody machine goes and actually finds it and we're straight out of a job aren't we?
I mean what's the use of our sitting up half the night arguing that there may or may not be a God if this machine only goes and gives us his bleeding phone number the next morning?" "That's right!" shouted Vroomfondel, "we demand rigidly defined areas of doubt and uncertainty!"

Sorry to bust your cynical bubble, but I've seen too many scientists close the book on a lifetime of research when the tests don't pan out. But you live in your little world where scientists rub their hands together and join in a global conspiracy to keep the truth hidden.

You're missing the point. This isn't about merely discovering random facts. Yes, it will be nice to know the facts, no matter what, but science is more than a random collection of unanalyzed facts. Some results will do more than merely give us another random truth to add to our collection; some results will allow us to falsify certain theories and not waste time on them any more, which is better than a result that leaves us just as confused as we are now. And in response to Nutria, who also commented: you have it exactly backwards. A result which eliminates more theories is a better result from a scientific POV. If this were about scientists clinging to their pet theories, then a result which left more theories open would be better (since it would allow more scientists to cling to their favorites), but that's pretty much the opposite of what JohnFluxx was suggesting.

No no no, the technocracy would like you to think that nature and reality are immutable, but as any of the other orders will tell you, the technocracy is just better at convincing the majority that they are right.

It's the worst case in the sense that it's not all that "interesting": it spurs no new thinking, suggests no departure from the theory stew we have now, etc. That says nothing about accepting or rejecting the data; it's just not as "fun".

It's a 'worst' case because it doesn't add anything new. Meaning, it won't be that interesting. If you find a safe in your backyard, the worst case scenario is that it's empty. It doesn't mean you don't want the truth, only that if it was filled with precious gems* it would be more exciting. *rolled from the AD&D DMG 1st ed.

"Not finding it at all would rule out supersymmetry and would destroy the standard model." It would destroy the SM but would not necessarily rule out Supersymmetry. Existing SUSY models only require two Higgs doublets because we think the Higgs is the way the particles gain their masses, and given that assumption SUSY will need at least two of them (though more are not excluded). If the Higgs mechanism is not the way the universe works, then who says the new mechanism, whatever it is, will preclude the existence of SUSY? The main argument for SUSY (to explain a light Higgs) may be gone but there a

Would someone explain why mass is expressed in GeV? GeV sounds like a measure of electrical field strength.

The electron volt [wikipedia.org] is a measure of energy. It is the energy gained by an electron accelerating through an electric field potential of one volt. And since energy and mass are equivalent [wikipedia.org], this minuscule measure of energy also makes for a useful minuscule measure of mass.

It's a unit of energy that particle physicists use instead of mass. One eV is an electron-volt, which is equal to the energy gained by an electron after being sent through a one volt potential. You can use E = m c^2 to convert between energies and masses.
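To make the conversion in the comment above concrete, here is a small sketch using standard CODATA constants; the particle values are the ones quoted elsewhere in the thread.

```python
# m = E / c^2: convert a mass quoted in eV/c^2 to kilograms.
EV = 1.602176634e-19   # joules per electron volt
C = 2.99792458e8       # speed of light in m/s

def ev_to_kg(energy_ev):
    return energy_ev * EV / C**2

print(ev_to_kg(0.511e6))  # electron: ~9.1e-31 kg
print(ev_to_kg(938e6))    # proton:   ~1.67e-27 kg
print(ev_to_kg(1e9))      # 1 GeV/c^2: ~1.78e-27 kg (the figure quoted below)
```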
eV is a measure of "energy", the E in E=mc^2. 1 GeV = 1.783×10^-27 kg. When you're dealing with things that are really tiny, it's easier to use GeVs than 10^-27 kgs. GeV = giga electron volt = 10^9 eV. The electron volt (eV) is the amount of energy gained by an electron accelerated by a 1 volt potential. Finally, E=m c^2, so we generally interchange mass and energy as convenient. Strictly, we should write masses in units of GeV/c^2. However, we generally set c=1 so there is no difference between mass and energy. Obviously, in engineering units mass and energy are not the same. However, one can always take a mass, and multiply by the speed of light (in whatever units

In case anyone else is as confused about this as I was, apparently "by mass-energy equivalence, the electron volt is also a unit of mass. It is common in particle physics, where mass and energy are often interchanged, to use eV/c, or more commonly simply eV with c set to 1, as a unit of mass." And "1 GeV = 1.783×1027 kg." At least according to: http://en.wikipedia.org/wiki/Electron_volt [wikipedia.org]

And 1 GeV = 1.783×1027 kg

Slashdot ate your formatting it looks like. I'll write it as 1.783E-27 kg to get around it. That would be 10^-27 kg, a very small number, not 1027 kg.

Turn in your geek card. How could you be confused! Energy-mass equivalence is only described by the most well-known formula in history. E=mc^2

Not to diminish the importance of the work done at Fermilab, but the headline is very misleading. On Slashdot? Never!

The headline is VERY misleading. There was no mention at all of what I learned this past Sunday. The minister stood right up and said at the beginning of his sermon that if the Higgs particle was 120GeV or less, that meant that Allah was god and the Muslims were right. If the mass was greater than 120GeV, then that meant that the resurrection and divinity of Jesus was right. He did say that the latest Fermi results ruled out ENTIRELY the Catholic view that the communion wafers actually turn into the body of C

Okay, I only have a 4 year degree in Physics so maybe someone can help me out on this. If this particle gives the property of mass then shouldn't it have a mass less than that of the lightest particles? According to a quick Google calculation [google.com] this thing out-masses an electron by 5 orders of magnitude. WTF? I took the entire undergrad QM sequence at my school, we covered Liboff cover to cover so I know a little. I am aware that the electron is not the least massive particle, however it is the least massive particle that I know of Google having built into its calculator function.

You missed the point. His point was that they are saying the elementary mass particle has more mass than a non-elementary mass particle. If a Higgs boson has more mass than an electron, what gives the electron its mass?

Isn't the proton a hadron? Yes, but if it doesn't 'decay' within 4 hours, it must seek medical attention immediately.

Having a degree in physics means nothing if you didn't do anything in this branch of physics. That seems a bit strong. A physics degree does mean that you can reasonably expect an explanation to be understood without too much effort on your part.

First off, the electron is not the lightest particle. Strictly speaking, the electron neutrino weighs in at less than 2.2 eV, whereas the electron weighs in at 0.511 MeV. Then you have the tau neutrino, which weighs in at 15.5 MeV. Then you have the proton, which weighs 938 MeV. After that we have the tauon, which has a mass of 1.7 GeV.
All of which, so far, are leptons. I can see where you're going, but you made a careless error. The proton is not a lepton. In the standard model, leptons and quarks are fundamental particles. Leptons and quarks are reflections of each other through a certain symmetry. But a quark never appears by itself. A quark-antiquark pair is called a meson (which is a boson because it has whole-integer quantum spin), and a triplet of quarks, like a proton or neutron, is called a baryon (which is a fermion because it has half-integer quantum spin). A hadron is any particle that interacts through the strong force; this includes mesons and baryons but not leptons.

Sorry, your post contains several errors. There are three neutrinos corresponding to electron, muon, and tau, and all three of them weigh less than 1 eV. Furthermore, they all mix with each other, so there are three states, but each is a mixture of electron, muon, and tau-type neutrinos. The W and Z bosons weigh 80 GeV and 90 GeV respectively. The top quark weighs 172 GeV. The theory would be consistent with a Higgs of any mass below about 200 GeV. We have searched in many experiments at lower energie

You clearly know more about this than I do, but calling the different neutrinos a mixture of the three flavours of neutrino seems a little lacking. e, muon and tau neutrinos undergo flavour oscillation, i.e. change type, but they appear still to be different particles - the particles comprise a mixture of flavour eigenstates that interfere through a mismatch in the mass eigenstates. No, I don't understand it fully, but a simple mental-model analog might be beat frequencies produced by sound waves in constructiv

The three physical (mass eigenstate) neutrinos [nu_1, nu_2, nu_3] are mixtures of the three interaction states [nu_e, nu_mu, nu_tau] and are related by a rotation matrix R called the MNS matrix. It's just a matrix rotation. Today we do believe we understand the "solar neutrino problem" in terms of mixing of the three states. For the solar neutrinos, in fact, mixing due to matter is dominant (rather than mixing due to the masses). There are numerous neutrino experiments going on today, but so far they have

Thanks - I didn't know about Double CHOOZ or Daya Bay, seems I'm a bit out of touch! One more question - do you view the 3 neutrino mass eigenstates as 3 distinct neutrino types, or as 3 representations of one neutrino with an internal mix of flavours, or something else? This is where my internal model of particle-wave duality comes a bit unstuck. Like visualising hyperspatial forms (hypercubes or whatever), never could quite lock it down.

In these theories, mass arises from interactions with the Higgs boson. Thus, the Higgs being massive doesn't exclude less massive particles.

Thanks for that hint, I've now found the Higgs mechanism [wikipedia.org], which is currently in the process of giving me a headache.

There's a great analogy for this which will probably help: http://www.hep.ucl.ac.uk/~djm/higgsa.html [ucl.ac.uk]. IIRC this was the result of a competition by Physics World (the magazine of the Inst. of Phys.).

The link s/he posted has an extremely nice analogy. Unfortunately, I have no idea if it is a correct analogy, but it is definitely easy to understand and makes sense too. Maybe someone who understands the math behind the Higgs mechanism can comment on the aptness of the analogy.
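As an aside on the neutrino subthread above: the mixing relation the commenter describes is conventionally written as (standard textbook notation, not taken from the thread)

$$ \nu_\alpha = \sum_{i=1}^{3} U_{\alpha i}\, \nu_i, \qquad \alpha \in \{e, \mu, \tau\}, $$

where each flavor state is a superposition of the three mass eigenstates and U is the unitary MNS (now usually called PMNS) matrix, the "rotation matrix R" mentioned above.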
it made it into Physics World, the mag of the IoP (body for professional UK physicists) - http://physicsworldarchive.iop.org/index.cfm?action=summary&doc=6%2F9%2Fphwv6i9a26%40pwa-xml&qt= [iop.org] and the New Scientist - http://www.newscientist.com/article/mg13918902.000----with-a-boson-at-the-tories-cocktail-party-.html [newscientist.com] and is cited at least once at arxiv.org - http://arxiv.org/abs/hep-ex/0103023v2 [arxiv.org]

That is eye-opening. The Higgs boson is a rumour being spread around. Forget this LHC, have they checked Snopes?

"Okay, I only have a 4 year degree in Physics" More than I do. I transferred out in my third year, as I realized that I wanted a job instead of, not after, a PhD. "If this particle gives the property of mass then shouldn't it have a mass less than that of the lightest particles?" The reported mass of the Higgs is the rest mass of a real Higgs particle. Mass, according to the theory, comes from interaction with a field of virtual Higgs particles, not a real Higgs merging with a real particle. Thus, if anything

But have they raised Cthulhu?

Can someone explain to me why we need something to give mass to something? Can't it just be that matter warps space-time? Since mass and energy are equivalent, why can't it just be that energy/mass warps space-time, and that mass is simply the effect we observe in the three-dimensional universe of this warping? Occam's Razor says the whole concept of the Higgs Boson and the Higgs Field are wrong, much like String Theory.

Occam's Razor would indeed say that, if it wasn't the case that the Standard Model is a very well tested model for particle physics. The Higgs mechanism is part of the Standard Model. One of the predictions of this Model is that the quantum of the Higgs field, the Higgs boson, exists. Unfortunately, if it doesn't, it means something has gone seriously wrong with the model, because it's been successful in explaining a great many things.

Am I the only one who sees a problem with the circular logic of saying, "We need some particle to give particles mass -- wait, what gives mass to the particle that gives particles mass?" Either mass is an intrinsic value of matter, perhaps based on the total potential energy bound up in the matter, or, according to the standard model, mass is imbued to particles by a special particle which imbues them with mass. Whence then comes the original mass?

Are you suggesting that mass is some kind of magical property of matter that acts magically? That doesn't really fly with science. No, there needs to be some kind of mechanism for the effect of a particle's mass to interact with other particles' masses. Without the mechanism, "mass" has no meaning, since it doesn't do anything. But clearly mass does something, so there must be some mechanism for masses to interact, either with the "space-time continuum" or directly with other masses. Also there needs to be expl

"Am I the only one who sees a problem with the circular logic of saying, 'We need some particle to give particles mass -- wait, what gives mass to the particle that gives particles mass?'" It doesn't mean that the Higgs gives mass to all particles, only to some of them. The standard model requires some particles (gauge bosons) to be massless, otherwise the whole theory leads to inconsistent results. For photons and gluons, this is fine, as current experimental results are consistent with these particles being massless. However, W and Z bosons are anything but massless -- they are even 100 times heavier than protons!
To fix this inconsistency, some very smart physicists came up with the idea to int

> Am I the only one who sees a problem with the circular logic

Apparently, yes. For one thing, it's "extra mass", not "mass". The mass of the electron is fully accounted for by its self-energy. If you integrate the EM field energy over the electron's field, then apply E=mc^2 to that result, you get the right answer. Higgs is only needed for particles that do not follow this rule, like quarks. Quarks are heavier than their otherwise obvious self-energy can explain. So we postulate another form of "charge" (

Why does mass/energy warp space-time? What is space-time anyway? Furthermore, why does mass/energy resist being accelerated? Why is the resistance to acceleration always in the exact same proportion to the gravitational effects? For that matter, why is the universe expansion accelerating? Why does it appear to have undergone the process we call inflation?
https://science.slashdot.org/story/08/08/05/2035231/first-definitive-higgs-result-in-7-years?sdsrc=rel
Geneva, March 2015. During the 50th session of "Rencontres de Moriond" in La Thuile, Italy, ATLAS and CMS presented for the first time a combination of their results on the mass of the Higgs boson. The combined mass of the Higgs boson is mH = 125.09 ± 0.24 (0.21 stat. ± 0.11 syst.) GeV, which corresponds to a measurement precision of better than 0.2%. The Higgs boson is an essential ingredient of the Standard Model of particle physics, the theory that describes all known elementary particles and their interactions. The Brout-Englert-Higgs mechanism, through which the existence of the Higgs boson was predicted, is believed to give mass to all elementary particles. It is the most precise measurement of the Higgs boson mass yet and among the most precise measurements performed at the LHC to date. The Higgs boson decays into various different particles. For this measurement, results on the two decay channels that best reveal the mass of the Higgs boson have been combined: the Higgs boson decaying to two photons, and the Higgs boson decaying to four leptons, where the leptons are electrons or muons. Each experiment has found a few hundred events in the Higgs-to-photons channel and a few tens in the Higgs-to-leptons channel, using the data collected at the LHC in 2011 and 2012 at centre-of-mass energies of 7 and 8 TeV, having examined about 4000 trillion proton-proton collisions. The two collaborations worked together and reviewed the analyses and their combination. Experts of the analyses and of the different parts of the detectors that play a major role in this measurement were closely involved. "The Higgs boson was discovered at the LHC in 2012 and the study of its properties has just begun. By sharing efforts between ATLAS and CMS, we are going to understand this fascinating particle in more detail and study its behaviour," said CMS spokesperson Tiziano Camporesi. "CMS and ATLAS use different detector technologies and different detailed analyses to determine the Higgs mass. The measurements made by the experiments are quite consistent, and we have learnt a lot by working together, which stands us in good stead for further combinations," said ATLAS spokesperson Dave Charlton. The Standard Model does not predict the mass of the Higgs boson itself, and therefore it must be measured experimentally. However, once supplied with a Higgs mass, the Standard Model does make predictions for all the other properties of the Higgs boson, which can then be tested by the experiments. This mass combination represents the first step towards a combination of other measurements of Higgs boson properties, which will also involve the other decays. "While we are just getting ready to restart the LHC, it is admirable to notice the precision already achieved by the two experiments and the compatibility of their results. This is very promising for LHC Run 2," said CERN Director of Research Sergio Bertolucci. This result was achieved by bringing together physicists of the ATLAS and CMS collaborations, representing together more than 5,000 scientists from over 50 different countries. Up to now, increasingly precise measurements from the two experiments have established that all observed properties of the Higgs boson, including its spin, parity and interactions with other particles, are consistent with the Standard Model Higgs boson.
With the upcoming combination of other Run 1 Higgs results from the two experiments, and with higher energy and more collisions to come during LHC Run 2, physicists expect to further improve the precision of the Higgs boson mass and to explore the particle's properties in more detail. During Run 2, they will be able to combine their results promptly and thus increase the LHC's sensitivity to effects that could hint at new physics beyond the Standard Model.
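As a quick arithmetic check of the quoted uncertainty (a sketch using only the numbers in the release): the total ±0.24 GeV follows from combining the statistical and systematic components in quadrature, and the relative precision indeed comes out below 0.2%.

```python
# Combine the quoted statistical and systematic uncertainties in quadrature.
import math

stat, syst = 0.21, 0.11         # GeV, from the press release
total = math.sqrt(stat**2 + syst**2)
print(round(total, 2))          # 0.24 GeV, matching the quoted total
print(f"{total / 125.09:.2%}")  # ~0.19%, i.e. better than 0.2%
```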
http://www.italoeuropeo.co.uk/2015/03/17/lhc-experiments-join-forces-to-zoom-in-on-the-higgs-boson/
Scientists at Europe's CERN research centre have found a new subatomic particle, a basic building block of the universe, which appears to be the boson imagined and named half a century ago by theoretical physicist Peter Higgs. "We have reached a milestone in our understanding of nature," CERN director general Rolf Heuer told a gathering of scientists and the world's media near Geneva on Wednesday. "The discovery of a particle consistent with the Higgs boson opens the way to more detailed studies, requiring larger statistics, which will pin down the new particle's properties, and is likely to shed light on other mysteries of our universe." Two independent studies of data produced by smashing proton particles together at CERN's Large Hadron Collider produced a convergent near-certainty on the existence of the new particle. It is unclear that it is exactly the boson Higgs foresaw, which by bestowing mass on other matter helps explain the way the universe was ordered after the chaos of the Big Bang. But addressing scientists assembled in the CERN auditorium, Heuer posed them a question: "As a layman, I would say I think we have it. Would you agree?" A roar of applause said they did. For some, there was no doubt the Higgs boson had been found: "It's the Higgs," said Jim Al-Khalili of Surrey University, a British physicist and popular broadcaster. "The announcement from CERN is even more definitive and clear-cut than most of us expected. Nobel prizes all round." Higgs, now 83, from Edinburgh University was among six theorists who in the early 1960s proposed the existence of a mechanism by which matter in the universe gained mass. Higgs himself argued that if there were an invisible field responsible for the process, it must be made up of particles. He and some of the others were at CERN to welcome news of what, to the embarrassment of many scientists, some commentators have labelled the "God particle," for its role in turning the Big Bang into an ordered universe. Clearly overwhelmed, his eyes welling up, Higgs told the symposium of fellow researchers: "It is an incredible thing that it has happened in my lifetime." Scientists see confirmation of his theory as accelerating investigations into the still unexplained dark matter they believe pervades the universe, and into the possibility of a fourth or more dimensions, or of parallel universes. It may help in resolving contradictions between their model of how the world works at the subatomic level and Einstein's theory of gravity. "It is very satisfying," Higgs told Reuters. "For me personally it's just the confirmation of something I did 48 years ago," he said of the achievement of the thousands who laboured on the practical experimental work which had, finally, confirmed what he and others had described with mathematics. "I had no expectation that I would still be alive when it happened," he said of the speed with which they found evidence. "For physics, in one way, it is the end of an era in that it completes the Standard Model," he said of the basic theory physicists currently use to describe what they understand so far of a cosmos built from 12 fundamental particles and four forces. CERN's Large Hadron Collider is the world's biggest and most powerful particle accelerator. Two beams of protons are fired in opposite directions around the 27-km (17-mile) looped pipe built under the Swiss-French border before smashing into each other.
The collisions, which mimic the moments just after the Big Bang, throw off debris signals picked up by a vast complex of detectors, and the data is examined by banks of computers. The two separate CERN teams worked independently through that data, hunting for tiny divergences which might betray the existence of the new boson, a class of particle that includes the photon, associated with light. The class is named in honour of Albert Einstein's Indian collaborator Satyendra Nath Bose. Both teams found strong signals of the new particle at around 125 to 126 gigaelectron volts (GeV) - a unit of mass-energy. That makes it some 130-140 times heavier than a proton. Scientists struggling to explain the theory have likened Higgs particles to a throng of paparazzi photographers: the greater the celebrity of a passing particle, the more the Higgs bosons get in its way and slow it down, imparting it mass; but a particle such as a photon of light is of no interest to the paparazzi and passes through easily - a photon has no mass. Presenting the results, Joe Incandela at CERN showed off two peaks on a graph of debris hitting the detectors, which he said revealed the hitherto unseen presence of the enigmatic particle. "That is what we are sure is the Higgs," a CERN scientist said.

hurrah science :) now can someone find a use for subatomic particles? (Jul 05th, 2012 - 07:01 am)

@1: the Higgs boson being the basis of all mass and energy in the universe could be considered a tiny little bit useful. Also, since the electron is a subatomic particle, one might consider that newfangled electricity to be a little bit useful... or the photons that carry the signals from my exchange to the server for MercoPress over optical fibres... More seriously: the prediction of the Higgs boson was another fantastic scientific achievement by the UK's academics and universities, adding to a long list including the theories of gravity and optics, the theory of evolution, the discovery of the structure of DNA, not to mention the numerous engineering breakthroughs from the steam engine to radar and the programmable computer. Great to see a joint European programme succeeding in finding the particle, surely an example of international co-operation at its best. To turn this back to Mercosur and the Americas: what are the joint scientific programmes similar to CERN that should be followed in this region? (serious question: no snide comments please)

@2 Richfe :) lol ok. I should have said: does anyone know any uses for the Higgs boson, apart from completing the Standard Model of particle physics? It is weird how much this country adds to scientific research & engineering.
https://en.mercopress.com/2012/07/04/discovery-of-the-higgs-boson-a-key-to-open-the-mysteries-of-the-universe
“You can do it quickly, you can do it cheaply, or you can do it right. We did it right.” These were some of the opening remarks from David Toback, leader of the Collider Detector at Fermilab, as he announced the results of a decadelong experiment to measure the mass of a particle called the W boson. I am a high energy particle physicist, and I am part of the team of hundreds of scientists that built and ran the Collider Detector at Fermilab in Illinois – known as CDF. After trillions of collisions and years of data collection and number crunching, the CDF team found that the W boson has slightly more mass than expected. Though the discrepancy is tiny, the results, described in a paper published in Science on April 7, 2022, have electrified the particle physics world. If the measurement is correct, it is yet another strong signal that there are missing pieces to the physics puzzle of how the universe works. A particle that carries the weak force The Standard Model of particle physics is science’s current best framework for the basic laws of the universe and describes three basic forces: the electromagnetic force, the weak force and the strong force. The strong force holds atomic nuclei together. But some nuclei are unstable and undergo radioactive decay, slowly releasing energy by emitting particles. This process is driven by the weak force, and since the early 1900s, physicists sought an explanation for why and how atoms decay. According to the Standard Model, forces are transmitted by particles. In the 1960s, a series of theoretical and experimental breakthroughs proposed that the weak force is transmitted by particles called W and Z bosons. It also postulated that a third particle, the Higgs boson, is what gives all other particles – including W and Z bosons – mass. Since the advent of the Standard Model in the 1960s, scientists have been working their way down the list of predicted yet undiscovered particles and measuring their properties. In 1983, two experiments at CERN in Geneva, Switzerland, captured the first evidence of the existence of the W boson. It appeared to have the mass of roughly a medium-sized atom such as bromine. By the 2000s, there was just one piece missing to complete the Standard Model and tie everything together: the Higgs boson. I helped search for the Higgs boson on three successive experiments, and at last we discovered it in 2012 at the Large Hadron Collider at CERN. The Standard Model was complete, and all the measurements we made hung together beautifully with the predictions. Measuring W bosons Testing the Standard Model is fun – you just smash particles together at very high energies. These collisions briefly produce heavier particles that then decay back into lighter ones. Physicists use huge and very sensitive detectors at places like Fermilab and CERN to measure the properties and interactions of the particles produced in these collisions. In CDF, W bosons are produced about one out of every 10 million times when a proton and an antiproton collide. Antiprotons are the antimatter version of protons, with exactly the same mass but opposite charge. Protons are made of smaller fundamental particles called quarks, and antiprotons are made of antiquarks. It is the collision between quarks and antiquarks that creates W bosons. W bosons decay so fast that they are impossible to measure directly.
So physicists track the energy produced from their decay to measure the mass of W bosons. In the 40 years since scientists first detected evidence of the W boson, successive experiments have attained ever more precise measurements of its mass. But it is only since the measurement of the Higgs boson – since it gives mass to all other particles – that researchers could check the measured mass of W bosons against the mass predicted by the Standard Model. The prediction and the experiments always matched up – until now. Unexpectedly heavy The CDF detector at Fermilab is excellent at accurately measuring W bosons. From 2001 to 2011, the accelerator collided protons with antiprotons trillions of times, producing millions of W bosons and recording as much data as possible from each collision. The Fermilab team published initial results using a fraction of the data in 2012. We found the mass to be slightly off, but close to the prediction. The team then spent a decade painstakingly analyzing the full data set. The process included numerous internal cross-checks and required years of computer simulations. To avoid any bias creeping into the analysis, nobody could see any results until the full calculation was complete. When the physics world finally saw the result on April 7, 2022, we were all surprised. Physicists measure elementary particle masses in units of millions of electron volts – shortened to MeV. The W boson’s mass came out to be 80,433 MeV – 70 MeV higher than what the Standard Model predicts it should be. This may seem like a tiny excess, but the measurement is accurate to within 9 MeV. This is a deviation of nearly eight times the margin of error. When my colleagues and I saw the result, our reaction was a resounding “wow!” What this means for the Standard Model The fact that the measured mass of the W boson doesn’t match the predicted mass within the Standard Model could mean three things. Either the math is wrong, the measurement is wrong or there is something missing from the Standard Model. First, the math. In order to calculate the W boson’s mass, physicists use the mass of the Higgs boson. CERN experiments have allowed physicists to measure the Higgs boson mass to within a quarter-percent. Additionally, theoretical physicists have been working on the W boson mass calculations for decades. While the math is sophisticated, the prediction is solid and not likely to change. The next possibility is a flaw in the experiment or analysis. Physicists all over the world are already reviewing the result to try to poke holes in it. Additionally, future experiments at CERN may eventually achieve a more precise result that will either confirm or refute the Fermilab mass. But in my opinion, the experiment is as good a measurement as is currently possible. That leaves the last option: There are unexplained particles or forces causing the upward shift in the W boson’s mass. Even before this measurement, some theorists had proposed potential new particles or forces that would result in the observed deviation. In the coming months and years, I expect a raft of new papers seeking to explain the puzzling mass of W bosons. As a particle physicist, I am confident in saying that there must be more physics waiting to be discovered beyond the Standard Model. If this new result holds up, it will be the latest in a series of findings showing that the Standard Model and real-world measurements often don’t quite match. 
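(A back-of-envelope check of the "nearly eight times the margin of error" claim, using only the two numbers quoted above; the collaboration's full statistical treatment is of course far more involved.)

```python
excess = 70             # MeV above the Standard Model prediction, as quoted
sigma = 9               # MeV measurement accuracy, as quoted
print(excess / sigma)   # ~7.8, i.e. nearly eight standard deviations
```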
It is these mysteries that give physicists new clues and new reasons to keep searching for fuller understanding of matter, energy, space and time. Article by John Conway, Professor of Physics, University of California, Davis This article is republished from The Conversation under a Creative Commons license. Read the original article.
https://thenextweb.com/news/w-boson-is-bigger-than-we-thought-a-physicist-explains
This initial phase is used to obtain an understanding of the existing and projected computing environment of the organization. This enables the project team to: refine the scope of the project and the associated work program; develop project schedules; and identify and address any issues that could have an impact on the delivery and the success of the project. During this phase it is recommended that a Steering Committee be established. The committee should have the overall responsibility for providing direction and guidance to the Project Team. The committee should also make all decisions related to the recovery planning effort. The Project Manager should work with the Steering Committee in finalizing the detailed work plan and developing interview schedules for conducting the Security Assessment and the Business Impact Analysis. Two other key deliverables of this phase are: the development of a policy to support the recovery programs; and an awareness program to educate management and senior individuals who will be required to participate in the project.

Security and control within an organization is a continuing concern. It is preferable, from an economic and business strategy perspective, to concentrate on activities that have the effect of reducing the possibility of disaster occurrence, rather than concentrating primarily on minimizing the impact of an actual disaster. This phase addresses measures to reduce the probability of occurrence. It includes a thorough Security Assessment of the computing and communications environment, covering personnel practices; physical security; operating procedures; backup and contingency planning; systems development and maintenance; database security; data and voice communications security; systems and access control software security; insurance; security planning and administration; application controls; and personal computers. The Security Assessment will enable the project team to improve any existing emergency plans and disaster prevention measures, and to implement required emergency plans and disaster prevention measures where none exist. A thorough Infrastructure Assessment will evaluate networks, communication systems, and host and server systems for their resilience to failure and their fail-over and recovery capability. Other activities in this phase are to: present findings and recommendations resulting from the Security Assessment to the Steering Committee so that corrective actions can be initiated in a timely manner; define the scope of the planning effort; analyze, recommend and purchase recovery planning and maintenance software required to support the development of the plans and to keep the plans current following implementation; and assemble the Project Team and conduct awareness sessions.

A Business Impact Assessment (BIA) of all business units that are part of the business environment enables the project team to: identify critical systems, processes and functions; assess the economic impact of incidents and disasters that result in a denial of access to systems, services and other facilities; and assess the Maximum Allowable Outage, that is, the length of time business units can survive without access to systems, services and facilities. The BIA Report should be presented to the Steering Committee. This report identifies critical service functions and the timeframes in which they must be recovered after interruption; a sketch of how these outputs might be recorded follows.
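To make the BIA deliverable concrete, here is a hypothetical sketch of how its outputs might be captured as structured records. The field names and figures are invented for illustration and are not part of the methodology itself.

```python
# Illustrative BIA record: one critical function per row, with its Maximum
# Allowable Outage (MAO) and an estimated economic impact. All values invented.
from dataclasses import dataclass

@dataclass
class BiaRecord:
    business_unit: str
    critical_function: str
    max_allowable_outage_hours: float  # longest tolerable denial of access
    daily_outage_cost: float           # estimated economic impact per day

records = [
    BiaRecord("Treasury", "Wire transfers", 4, 250_000.0),
    BiaRecord("HR", "Payroll processing", 72, 20_000.0),
]

# Sorting by MAO yields the order in which functions must be recovered.
for r in sorted(records, key=lambda r: r.max_allowable_outage_hours):
    print(r.business_unit, r.critical_function, r.max_allowable_outage_hours)
```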
The BIA Report should then be used as a basis for identifying the systems and resources required to support the critical services provided by information processing and other services and facilities. During this phase, a profile of recovery requirements is developed. This profile is to be used as a basis for analyzing alternative recovery strategies. The profile is developed by identifying the resources required to support the critical functions identified in Phase 3. This profile should include hardware (mainframe, data and voice communications and personal computers), software (vendor supplied, in-house developed, etc.), documentation (IS, user, procedures), outside support (public networks, IS services, etc.), facilities (office space, office equipment, etc.) and personnel for each business unit. Recovery Strategies will be based on short term, intermediate term and long term outages. Another key deliverable of this phase is the definition of the plan scope, objectives and assumptions.

During this phase, recovery plan components are defined and plans are documented. This phase also includes the implementation of changes to user procedures, the upgrading of existing Information System operating procedures required to support selected recovery strategies and alternatives, vendor contract negotiations (with suppliers of recovery services) and the definition of Recovery Teams, their roles and responsibilities. Recovery standards are also developed during this phase.

The plan Testing/Exercising Program is developed during this phase. Testing/exercising goals are established and alternative testing strategies are evaluated. Testing strategies tailored to the environment should be selected, and an on-going testing program should be established.

Maintenance of the plans is critical to the success of an actual recovery. The plans must reflect changes to the environments that are supported by the plans. It is critical that existing change management processes are revised to take recovery plan maintenance into account. In areas where change management does not exist, change management procedures will be recommended and implemented. Many recovery software products take this requirement into account.

Once plans are developed, initial tests of the plans are conducted and any necessary modifications are made based on an analysis of the test results. The approach taken to test the plans depends, in large part, on the recovery strategies selected to meet the recovery requirements of the organization. As the recovery strategies are defined, specific testing procedures should be developed to ensure that the written plans are comprehensive and accurate.
http://www.teamingenuity.com/Default.asp?ID=50&pg=Project+Methodology
Slide 1: Access, Engagement, and Retention - Recovery Oriented Systems of Care (OETAS Fall 2009)

Slide 2: Need for Engagement - Early intervention into chronic diseases can shorten the duration and intensity of the disorder. Neurological impairments compromise choice-making abilities during active addiction and early recovery, increasing relapse risks.

Slide 3: Need for Engagement - The primary responsibility for initiating motivation for recovery, and sustaining motivation during the earliest stages of recovery, lies with the treatment staff, not the client. Hope is the key.

Slide 4: Need for Engagement - Client motivation ebbs and flows and must be actively managed. Transformational change - recovery that is unplanned, sudden, positive and permanent - is possible among clients with even the poorest prognoses.

Slide 5: Need for Engagement - Engagement strategies must be refined for historically marginalized populations.

Slide 6: Factors Influencing Engagement and Retention - Over 40% of dropouts prior to admission can be re-engaged by a follow-up phone call procedure. Individual characteristics of treatment dropouts are less significant than program differences.

Slide 7: Factors Influencing Engagement and Retention - The single best predictor of retention and dropout is the quality of the therapeutic relationship between counselor and client. A strong therapeutic relationship can overcome low motivation for treatment and recovery.

Slide 8: Factors Influencing Engagement and Retention - A positive therapeutic alliance is more important to long term recovery outcome for clients with low motivation than for highly motivated clients. Culture, gender, and age specific programs are associated with higher completion rates, as is family involvement in treatment.

Slide 9: Contributors to Client Dropout - Lengthy and repeated assessment processes; multiple appointments before treatment begins; failure to give clients the treatment they requested; inadequate methadone doses; mixing clients at differing stages of readiness for change.

Slide 10: Nature of Clinical Relationship - Collaboration/partnership/consultant role. Focus is to enhance client self-efficacy, improve problem solving skills, and empower the client as the expert in how self-management strategies can be refined to fit his or her lifestyle.

Slide 11: Nature of Clinical Relationship - The burden of disease management shifts to the client and his or her family. The professional acts as an ongoing consultant, along with peers who have achieved self-management success.

Slide 12: Nature of Clinical Relationship - Clients who are more active in their treatment rate their experience more positively, remain in treatment longer, and achieve better post-treatment outcomes.

Slide 13: Connecticut Department of Mental Health and Addiction Services - Practice Guidelines for Recovery-Oriented Behavioral Health Care

Slide 14: 1. Care is offered where people are - designed around the needs, characteristics and preferences of the people receiving services. 2. A "no wrong door" approach.

Slide 15: 3. Clinical services are also responsive to pressing social, housing, employment and spiritual needs. 4. Interventions incorporate motivational enhancement strategies - meeting each client where he or she is.

Slide 16: 5. Address barriers to care before concluding that a person is non-compliant or unmotivated. 6. "Zero reject" policy. 7. "Open case" policy.

Slide 17: 8. Reimbursement for pre-treatment and recovery management supports. 9.
Outpatient counselors are paired with outreach workers to facilitate access 18 10.Mental health professionals, addiction specialists, and people in recovery are placed in critical locales to assist in the early stages of engagement 11.The agency employs staff in recovery 19 12.Housing and support options are available for those who are not ready for detoxification 13.The availability of sober housing is expanded 20 Evidence Based Practices Network for the Improvement of Addiction Treatment (NIATx) Workbook - Promising Practices Network for the Improvement of Addiction Treatment (NIATx) Workbook - Promising Practices 21 Get the client to first appointment quickly Address barriers clients face in attending assessment appointment Clearly explain to the client what he/she can expect at first appointment Model communication with the client upon Motivational Interviewing techniques Reducing No-Shows 22 Increasing Continuation Scheduling Connect the patient to a counselor and other support staff within 24 hours of admission. Build a therapeutic alliance immediately. Make it as easy as possible for patients to remember appointments and continue in treatment. 23 Scheduling Issues Treatment schedule is inconvenient Patients forget appointment times Patients have limited ability to choose treatment schedule Sessions are scheduled too far apart for patients to maintain momentum 24 Scheduling Suggestions Adjust staff schedules so that sessions are available at times most convenient for patients. Make reminder calls to help patients keep track of their appointments, and provide patients with appointment cards that list the next four treatment sessions 25 Increasing Continuation: Orientation to Treatment Provide a welcoming live or video orientation Establish clear two-way expectations Schedule, attendance, participation requirements, how to progress through phases of care Assign a peer buddy 26 Increasing Continuation On an ongoing basis, identify patients at risk of leaving and barriers to continuing in treatment. Resolve barriers to continuing in treatment. Maintain counselor resiliency with staff collaboration and personal care/development. 27 Increasing Continuation Tailor treatment to patient’s individual circumstances and needs; use individual client-driven treatment plans Avoid fixed lengths of stay in any level of care, so that patient movement occurs as soon as they are ready 28 Increasing Continuation Along with a variety of education and treatment activities, have fun. Reinforces message that sobriety is more enjoyable than using drugs Offer positive reinforcements for continuing in treatment. Contingency management programs, incentives 29 The Role of Clinical Supervision in Recovery-Oriented Systems of Behavioral Healthcare. White, Schwartz &The Philadelphia Clinical Supervision Workgroup Recovery Management and Recovery-Oriented Systems of Care: Scientific Rationale and Promising Practices. White, 2008. Connecticut Department of Mental Health and Addiction Services Practice Guidelines for Recovery-Oriented Behavioral Health Care Network for the Improvement of Addiction Treatment (NIATx) Workbook - Promising Practices References Similar presentations © 2022 SlidePlayer.com Inc. All rights reserved.
https://slideplayer.com/slide/6227241/
Welcome to the SNI Forward, our quarterly snapshot of the transformation efforts underway at California's public health care systems, and the work of the California Health Care Safety Net Institute (SNI).

Recently, SNI and CAPH staff reflected on the core competencies we value and nurture internally to deliver work that best supports our members. One competency that stood out as critical is dealing with ambiguity. Developing policy and programs in a shifting political and policy landscape, evolving metric specifications for performance measurement, and awaiting federal approval for programs all keep us on our toes. As a team, we also reflected on how our members, California's public health care systems, face similar tensions daily in preparing for and acting on changes with incomplete information. The Value-Based Strategies program, described below, exemplifies this challenge. While trends reveal a growing movement toward value-based care, members continue to operate in fee-for-service and capitation at the same time, and are strategizing about when to explore new payment arrangements and implement changes. There is no one clear answer, but members can learn successful practices in the venues SNI creates. While we acknowledge the challenges of a world demanding change with imperfect information, SNI continues to be there to help members prepare for and act on new requirements in delivery system transformation. We hope you enjoy this edition of the SNI Forward.

Giovanna Giuliani
Executive Director
California Health Care Safety Net Institute

Program Highlight: Value-Based Strategies

Catalyzed by the alternate payment methodology (APM) requirement in the Public Hospital Redesign and Incentives in Medi-Cal (PRIME) program and growing interest in value-based payment policies and programs, SNI launched a Value-Based Strategies initiative, bringing together members and experts to learn from one another and improve their organizations' value-based capabilities and arrangements. CAPH/SNI hosted a kickoff meeting on March 13 in Oakland for 40 interdisciplinary leaders representing 15 CAPH member systems, with lively discussion, peer questioning, and learning from key partners. The day began with presentations from Jonathan Freedman (Health Management Associates), Giovanna Giuliani (SNI), and Rich Rubinstein (CAPH), who gave a bird's-eye view of national and California market trends, financial and environmental considerations, and implications of the current waiver. Then, leaders from health plan partners (Dr. Brad Gilbert of Inland Empire Health Plan, Amy Shin of San Joaquin Health Plan, and Neal Jarecki and Ngoc Bui-Tong of Santa Clara Family Health Plan) shared their perspectives on successful partnerships with public health care systems. All three leaders stressed data as the key to improving collaborations and building organizational capacity for taking on additional risk. In the afternoon, three system leaders (Tangerine Brigham from Alameda Health System, Ron Boatman from Arrowhead Regional Medical Center, and Reena Gupta, MD, from San Francisco Health Network) shared lessons learned in implementing the APM requirement and in evolving their systems' value-based care capabilities over time. The speakers emphasized educating staff on managed care principles and practices, the importance of sharing data and setting up dashboards to track progress, and strengthening population health management.
By the end of the convening, attendees were asked to reflect on the day and rank topics for further exploration. Trends emerged around risk stratification, cost-of-care data, and developing internal managed care expertise, which SNI will incorporate in building 2018 program activities. For more information, contact Giovanna Giuliani at [email protected].

By the Numbers: Reducing Health Disparities through PRIME

As part of PRIME, public health care systems have made strides in addressing health disparities. As a first step, systems improved the collection of granular Race, Ethnicity and Language (REAL) data and began collecting sexual orientation and gender identity data for the first time. Using that data as a foundation, systems analyzed REAL data to identify a specific metric (e.g., blood sugar control, comprehensive diabetes care) and a target population for which a health disparity existed. Then, systems created a plan to address the disparity and are now actively working to reduce it. The graph shows the number of systems targeting each PRIME metric for improvement. With PRIME, public health care systems are designing, testing and spreading successful practices to strengthen care delivery for specific populations. This work will directly improve the health of the patient populations identified in these disparity reduction plans, and public health care systems can leverage the investments long into the future, improving the health of entire communities. Read the CAPH/SNI Reducing Health Disparities Brief for additional information, including themes and summaries of members' disparity reduction plans.

Member Profile: Ventura's Behavioral Health Integration Efforts

Based on member input, SNI has prioritized Behavioral Health Integration (BHI) as a focus area for ambulatory care support in 2018. On March 1, 13 member systems convened for a BHI Leadership Roundtable, where leaders shared approaches to integrating behavioral health services into primary care settings and reaching targets for PRIME Project 1.1. Topics covered throughout the day included models of care integration, behavioral health-primary care team ratios and roles, approaches to financing integration, and strategies to advance a culture of quality improvement among integrated care teams. Dr. Marc Avery, Clinical Professor, University of Washington School of Medicine, and Hunter Gatewood, LCSW, Signal Key Consulting, joined the session to provide content expertise and help facilitate the workshop. Ventura Health Care Services Agency shared the steps it is taking to strengthen behavioral health-primary care integration. Lucy Marrero, Quality Improvement Manager, discussed Ventura's "Healthy [Whole] People" approach, which emphasizes universal screenings, early interventions, and brief onsite treatment as needed. The integrated team uses "hot hand-offs" to quickly refer patients to case management and peer support. Additionally, at registration Ventura electronically administers a survey that includes the PHQ-9 and other behavioral health items. The survey can be completed at home via the patient portal or in the clinic waiting room, using tablets and a mobile application. As a result of these efforts, Ventura saw a 32% increase in screening for clinical depression and follow-up between 2016 and 2017, achieving its target for PRIME Project 1.1. Going forward, the Ventura team will focus on clarifying roles and streamlining the flow of information between integrated care team members.
For more information about the BHI Roundtable or SNI's ambulatory care offerings, please contact Amanda Clarke at [email protected].

What's Next: Highlights from SNI Programs in the Upcoming Quarter

Public Hospitals & Health Plans: Improving Data Quality Together. SNI is collaborating with the Local Health Plans of California (LHPC) and the California Association of Health Plans (CAHP) to host an in-person meeting on April 18 in Oakland. Health plans and public health care systems will come together and problem-solve to identify ways to collectively improve the quality and exchange of encounter and claims data. For more information, contact David Lown at [email protected].

Data Sharing and Whole Person Care. At the WPC Data Sharing Convening on May 22 in Oakland, attendees will work through the legal, technological, and cultural barriers to sharing information across sectors. CAPH/SNI is co-hosting the convening with the County Health Executives Association of California (CHEAC) and the California State Association of Counties (CSAC), with support from the California Health Care Foundation (CHCF). For more information, contact Amanda Clarke at [email protected].

Quality Incentive Program. The Quality Incentive Program (QIP) is a new pay-for-performance program for California's public health care systems that converts funding from previously existing supplemental payments into a value-based structure, designed to bring systems into compliance with the federal Medicaid Managed Care Rule. SNI is collaborating closely with the State, and with NCQA, to develop the reporting infrastructure for QIP, and will provide improvement support through its Ambulatory Care Redesign work. For more information, contact Dana Pong at [email protected].

Ambulatory Care Redesign. SNI continues PRIME performance improvement support, with spring webinars on sexual orientation and gender identity data collection and on prenatal and postpartum care improvement, as well as a webinar on the Collaborative Care model to support foundational capabilities. SNI is also planning an in-person meeting on addressing social determinants of health in summer 2018. For more information, contact Amanda Clarke at [email protected].

Opportunity: CHCF Health Care Leadership Program

Applications are open for the CHCF Health Care Leadership Program, administered by Healthforce Center at UCSF. The two-year, part-time program is widely recognized as a transformative experience, helping clinicians of all disciplines better lead change in turbulent times. Alumni of the program are now found among both the CAPH and SNI Boards, the Board's Clinical Advisory Committee, SNI, and the leadership and senior management teams across the membership. The part-time program is for clinically trained health care professionals with at least five years of leadership experience who live and work in California. The program seeks diversity across disciplines, organizations, geography and ethnicity. CHCF covers most program costs, but fellows' home organizations must pay tuition. As with past cohorts, the ability to pay is not a consideration in the selection process. Applications are due June 8. For more information and to apply, click here.

Learn More

To learn more, visit our website at www.safetynetinstitute.org. CAPH/SNI members can access program materials through SNI Link, our members-only program portal.
https://safetynetinstitute.org/publications/sni-forward-april-2018/
There are three compelling reasons for developing effective interventions for children and adolescents. First, the mental health of children and adolescents can be influenced by a variety of factors: risk factors increase the probability of mental health problems, while protective factors moderate the effects of risk exposure, so policies, plans, and specific interventions should be designed in a way that reduces risk factors and enhances protective factors. Second, without guidance for developing child and adolescent mental health policies, plans, and treatment programs, there is a danger that systems of care will be fragmented, ineffective, expensive and inaccessible. Third, several different systems of care (e.g., education, welfare, health) may need to be involved to ensure that services for youth are effective.

An overriding consideration is that the child's developmental stage can influence his or her degree of vulnerability to disorders, how a disorder is expressed, and how treatment should best be approached. Thus a developmental perspective is needed for an understanding of all mental disorders and for designing an appropriate mental health policy. The development of a child and adolescent mental health policy requires an understanding of the prevalence of mental health problems among children and adolescents; their needs are inextricably linked with their developmental stages. It is also important to identify the existing financial and human resources available, the existing service organizations, and the views and attitudes of mental health workers in addressing child and adolescent mental health issues. Pilot projects can provide information about successful interventions as well as about why certain programs may have failed. When evaluating pilot projects and studies in the literature, it is important to consider the distinction between efficacy (an intervention's ability to achieve a desired effect under highly controlled conditions) and effectiveness (an intervention's ability to achieve a desired effect within the context of a larger, non-controlled setting). The findings from a study using a well-defined population group under highly controlled conditions may not necessarily be replicable in a "real life" setting; therefore, caution is needed in directly applying findings from clinical trials to real-life settings without appropriate consideration of implementation issues. Nonetheless, there are a number of studies that evaluate effectiveness using adequate methodology, with findings strong enough to adopt on a broader scale. IMHC plans to hold consultations with colleagues and nongovernmental organizations from other districts, states, counties or regions when deciding upon the appropriateness of programming models that meet reasonable standards of effectiveness, for incorporation into policy.

While consensus building and negotiation are important at every stage of the policy planning cycle, effective policy-makers will use the initial information gathering as an opportunity to begin building consensus. There are three reasons why it is important to hold consultations with a wide range of stakeholders: (i) the social ecology of children and adolescents is such that their interests and needs should be met in a range of settings; (ii) a consultation process can increase the buy-in of crucial stakeholders; and (iii) involvement in a policy development process may increase stakeholders' insight into the potential contributions of their sector to the mental health of children and adolescents.
Exchange with significant players. Consultations at different levels (i.e., with other resources, regions, and states) can make an important contribution to policy development, especially when the consultants have experience in several other arenas that are similar in terms of level of economic development, health system organization and governmental arrangements. National and international professional organizations can be instrumental in providing support and promoting networking.

In this step, IMHC develops the core of the policy, using the outputs of the first four steps. The vision usually sets high but realistic expectations for child and adolescent mental health, identifying what is desirable for the agency. This is normally associated with a number of values and related principles, which then form the basis of the policy objectives. Many policy-makers believe it is important to address the promotion of healthy development and the prevention of illness along with the treatment of child and adolescent mental disorders, although the emphasis on each differs across countries. In developing a mental health policy for children and adolescents, IMHC will coordinate actions in several areas to maximize the impact of any mental health policy. It is essential that all stakeholders and sectors have a clear understanding of their responsibilities, and all those who were involved in the consultation process should be considered.

Once the mental health policy has been completed, the next step is to develop a plan for its implementation. The development of such a plan builds on the process already established for policy development, as outlined above. Information about the population's needs, evidence gathering, and consensus building are important in the formulation of such a plan. A plan consists of a series of strategies, which represent the lines of action that have the highest probability of achieving the policy objectives in a specific population. In developing and setting priorities for a set of strategies, it is often useful to conduct a SWOT analysis, in which the Strengths, Weaknesses, Opportunities, and Threats of the current situation are identified. Following a SWOT analysis, a series of actions should be taken to develop and prioritize a set of strategies: (i) create a comprehensive list of potentially useful proposals for each of the areas of action developed during the policy formulation phase; (ii) brainstorm with key players to develop a set of strategies for implementing each of the proposals; (iii) revise and modify the strategies based on a second round of input from key players, so that there are two or three strategies for each area of action; (iv) establish a time frame for each strategy; and (v) develop details of how each strategy will be implemented, including setting indicators and targets, outlining the major activities, determining costs, identifying available resources, and creating a budget. Each strategy should be accompanied by one or more targets, which represent the desired outcome of the strategy; indicators enable an assessment of the extent to which a target has been met. The next step is to determine the actual activities necessary for each strategy. Each activity should be accompanied by a set of questions: Who is responsible? How long will it take? What are the outputs? What are the potential obstacles? The budget is the product of an assessment of costs in the context of available resources.
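To make the relationship among strategies, targets, indicators, activities, and budgets concrete, here is a minimal illustrative sketch in Python. It is not part of the IMHC policy; every name, figure, and field in it is a hypothetical example of how such planning records might be structured.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    """One concrete activity under a strategy, answering the four planning questions."""
    name: str
    responsible: str       # Who is responsible?
    duration_months: int   # How long will it take?
    outputs: list          # What are the outputs?
    obstacles: list        # What are the potential obstacles?
    cost: float            # estimated cost, which feeds the budget

@dataclass
class Strategy:
    """A line of action with its target (desired outcome) and indicator (measure)."""
    area_of_action: str
    description: str
    target: str
    indicator: str
    activities: list = field(default_factory=list)

    def total_cost(self) -> float:
        # The budget check compares planned costs against available resources.
        return sum(a.cost for a in self.activities)

# Hypothetical example: one strategy in a "school mental health" area of action.
s = Strategy(
    area_of_action="School mental health",
    description="Train teachers to recognize early signs of emotional distress",
    target="80% of district schools have at least one trained teacher within 2 years",
    indicator="Percentage of schools with at least one trained teacher",
)
s.activities.append(Activity(
    name="Develop and pilot a teacher training curriculum",
    responsible="IMHC training workgroup",
    duration_months=6,
    outputs=["curriculum manual", "pilot evaluation report"],
    obstacles=["trainer availability", "school scheduling constraints"],
    cost=12_000.0,
))

available_resources = 15_000.0  # assumed figure, for illustration only
print(f"Planned cost ${s.total_cost():,.0f}; "
      f"within available resources: {s.total_cost() <= available_resources}")
```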
Formulated policies must be disseminated to the appropriate mental health offices (e.g., DHS) and other partner agencies, and, within those agencies, to individuals. The success of the dissemination of a policy, plan or program will be maximized if children, adolescents and their families are reached at a variety of locations, such as schools, places of worship, urban and rural areas, and workplaces. No policy or plan, no matter how well conceived and well researched, has a chance of success without political support and a level of funding commensurate with its objectives. Because young people are often dependent on others to advocate on their behalf, advocates for child and adolescent mental health should seek to ensure the political and financial viability of a plan independently of the persistent advocacy of the service users themselves. Advocates for mental health policy within a ministry of health will need to identify stakeholders in other parts of the government, and in the community, state or county at large.

The implementation of a child and adolescent mental health policy plan requires the participation of a number of individuals with a wide range of expertise. Individuals whose training or experience applies mainly to adults may have to be assisted by other appropriate specialists to make planning applicable to children and adolescents. Pilot projects in demonstration areas, where policies and plans can be implemented relatively rapidly, can serve several useful functions: they can be evaluated more effectively and completely; they can provide empirical support for the initiative through their demonstration of both feasibility and short- and long-term efficacy; they can produce advocates from the ranks of those who participated in the demonstration area; and they can educate colleagues from the health and other sectors in how to develop policies, plans, and programs.

The chances of successfully implementing an intervention will be enhanced if service providers are sufficiently empowered and supported in terms of information, skills, ongoing support, and human and financial resources. A first step in this process is to identify which individuals, teams or organizations in the health or other sectors will be responsible for implementing the program. All sectors have a stake in both the present and future physical and mental well-being of young people. Collaboration (including cost-sharing) around mental health initiatives produces win-win situations for everyone, most importantly for the young people involved. In addition to intersectoral collaboration, other stakeholders (such as officials in the areas of education and justice) need to interact on an ongoing basis to maintain support for, and ensure the smooth delivery of, mental health services.
http://www.iroquoismhc.com/services/goals-plans-and-strategies/
Improve business process performance and deliver results associated with customer satisfaction, shareholder value and employee satisfaction. Deliver performance improvements with continuous improvement best practices. Successfully manage key business initiatives and lead cross-functional teams.

Responsibilities
- Manage the Continuous Improvement function and assigned key initiatives, using continuous improvement best practices
- Lead the Continuous Improvement managers and technical writers in support of key business improvements
- Utilize lean principles and systems thinking to identify complex interrelated processes that contribute to the successes and failures of an organization
- Facilitate complex accelerated improvement workshops for value stream analysis, standard work, improved throughput, quality improvement, cost reduction, workplace organization and other objectives
- Partner with the leadership team to identify the best countermeasures and combined solutions to address root causes
- Provide expert advice and leadership in implementing action plans and developing sustainment/control plans for solutions
- Define and evolve the corporate continuous improvement strategy to best support the business
- Act as an internal consultant to the business by identifying, prioritizing and leading improvement projects within and across functions
- Implement transformational strategies to respond to internal and external drivers, challenge existing practices, and partner with stakeholders to identify innovative ways of achieving strategic KPIs and goals
- Develop systems and execute strategies to implement a continuous improvement program and culture that delivers substantive productivity and/or profitability gains year over year
- Apply analytical techniques and quality methodologies to identify projects that would benefit from continuous improvement support and best practices
- Manage, train and develop Continuous Improvement team members
- Provide oversight to continuous improvement initiatives
- Be a change agent and utilize strategic planning and change management skills to implement and sustain change efforts within the continuous improvement knowledge space
- Advocate for and partner with IT to implement digitized processes and metrics that support improved project management and completion
- Author, maintain and communicate continuous improvement guidelines
- For all efforts, identify, understand and maintain focus on customer requirements to ensure satisfactory outcomes are achieved

Qualifications:
- College degree (BA/BS)
- 10+ years of experience leading teams in a corporate environment
- Six Sigma Black Belt certification
- Lean certification
- 5+ years of experience as a Black Belt / Lean subject matter expert
- Preferred: MBA or other post-graduate education
- Preferred: 3+ years of experience in Operations / Logistics / the Airline Industry
- Preferred: 3+ years of experience in Statistical Analysis or Management Consulting
- Preferred: Project Management Professional (PMP)

Special demands:
- Travel 10-20% of the time may be required
- Ability to travel as activities require, both domestically and internationally
- 24-hour global operations environment

Skills:
https://www.isixsigma.com/job/director-continuous-improvement-2/
Program Description: This three-day certification workshop presents rehabilitation and therapy protocols to manage identifiable vestibular and balance system disorders, including cortical and labyrinthine concussion. The content is evidence-based to ensure therapists and clinicians produce excellent outcomes.

Includes:
- An overview of vestibular anatomy and physiology
- Understanding sensory integration of equilibrium
- Disorders affecting vestibular function
- Cortical and Labyrinthine Concussion Differentiation
- Interactive Concussion Management Strategies
- Extensive training materials for therapy programs
- Neurophysiology of Central Compensation
- VRT protocols: adaptation, habituation, and substitution for patient-centered therapy
- Psychogenic factors affecting VRT outcomes
- BPPV diagnosis & treatment: Canalith Repositioning Maneuvers (CRM) with manual training

Certification:
- AIB's 2-year certification is included in the price of the tuition.
- At the conclusion of the workshop, you will be given a take-home examination. The manual portion, CRM practice, will be performed during the workshop while at AIB.

Learning Objectives:
- Differentiate vestibular test abnormalities that identify patients who are "appropriate" candidates for therapy.
- Use diagnosis-based strategies for designing and implementing therapy.
- Apply specific therapy protocols within individualized programs for patients.
- Initiate and implement a comprehensive vestibular rehabilitation program within their scope of practice.
- Develop Return-To-Play decisions based on your assessment and management of concussion patients.

To register, click the learn more link below and make sure to select the March 31-Apr 02, 2017 (Mesa, AZ) option.

Time: March 31 (Friday) 8:30 am - April 2 (Sunday) 4:30 pm MST
Location: ATSU - Mesa Campus, 5850 E Still Circle, Mesa, AZ 85206
Organizer: Audiology
https://ce.atsu.edu/events/vestibular-rehabilitation-certification-workshop-3-day-program/
AESS units engage in an annual process to identify and examine ways to improve the overall student experience by assessing support outcomes and/or student learning outcomes. The AESS assessment process at York is internally driven and designed to:
- be meaningful and manageable;
- be transparent;
- lead to data-driven decision-making;
- maximize the effective use of institutional resources;
- ensure continuous improvement; and
- be relevant to the unit.

Assessment is cyclical, and activities recur to ensure data is used for continuous and sustained improvement of measurable outcomes. In connection with the unit mission and goals, there are six critical activities units will undertake in the assessment process:
- Identify outcomes for each goal
- Identify measures for each outcome
- Collect and organize data
- Analyze and interpret data, and identify recommended actions
- Develop and execute an action plan
- Use results to make improvements ("closing the loop")

These specific activities unify assessment practice across AESS units and establish standards for engaging in continuous improvement.

How it Works

The AESS unit assessment cycle is governed by an annual timeline that requires units to engage in specific assessment activities throughout every year. Units use available templates and other resources to help them complete assessment activities.
- The 5-Year Annual Assessment Plan template helps units draft long-term plans based on their mission, goals and identified outcomes.
- The annual plan helps units draft a short-term plan for the current academic year based on their mission, goals, outcomes and the five-year plan, which continues to serve as a source document. The annual plan allows units the flexibility to modify the plan based on the assessment findings of the previous year and/or other changes.
- Each year, units are required to use data to assess at least one goal and two measurable outcomes connected to that goal. Due to additional program requirements, some units (e.g., SEEK, Male Initiative Program) are required to assess all goals every year.
- The Annual Assessment Findings Report requires units to communicate information about assessment activities completed during the previous year. Units describe in detail the outcomes assessed, the measures used to collect data, the target of success, and how data was analyzed and interpreted. They also explain how actionable data was used to create action plans and make improvements. If applicable, units also share results mapped to student ILO(s).
- The report provides units with the opportunity to discuss factors that may have impeded or accelerated assessment efforts, celebrate assessment accomplishments, and concisely communicate outcomes for the following year.

The AESSAC uses the information in plans and reports to document and provide feedback to AESS units. After assessment plans have been vetted and finalized by the AESSAC, units implement their plans. The feedback provided helps ensure that assessment activities are executable and that actionable data from the previous year's Annual Assessment Findings Report is used to inform decision making.
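As a purely illustrative aside, the annual "closing the loop" check described above can be pictured in a few lines of Python. The unit goals, outcomes, targets, and observed values below are hypothetical and are not drawn from any AESS plan or report.

```python
# Hypothetical outcome records: (goal, measurable outcome, target %, observed %).
# None of these values come from actual AESS assessment data.
outcomes = [
    ("Improve advising access", "Students seen within 5 business days", 85, 78),
    ("Improve advising access", "Satisfaction with advising services", 90, 93),
]

for goal, outcome, target, observed in outcomes:
    met = observed >= target
    print(f"{goal} / {outcome}: target {target}%, observed {observed}% -> "
          f"{'met' if met else 'NOT met'}")
    if not met:
        # An unmet target generates an action plan item that feeds the next
        # annual plan, which is the "closing the loop" step in the cycle above.
        print(f"  action item: plan improvements for '{outcome}' next year")
```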
https://sun3.york.cuny.edu/president/institutional-effectiveness/institutional-assessment-1/non-academic/process
* Note that each of the participating CRAN Institutes has its own comprehensive and detailed Strategic Plan that can be found on its respective website: www.niaaa.nih.gov, www.drugabuse.gov, www.nci.nih.gov. Additionally, the National Institutes of Health Office of AIDS Research has developed a strategic plan for HIV research that can be found at http://www.oar.nih.gov/strategicplan.

The CRAN Strategic Plan is intended to inform the research and other communities about opportunities for common areas of research for the three Institutes. These include opportunities emerging from recent scientific advances and policy changes, as well as the original areas identified by the Scientific Management Review Board in 2010.

Mission

The mission of the National Institutes of Health (NIH) partnership, Collaborative Research on Addiction at NIH (CRAN), is to provide a strong collaborative framework to enable the National Institute on Alcohol Abuse and Alcoholism (NIAAA), the National Institute on Drug Abuse (NIDA), and the National Cancer Institute (NCI) to leverage resources and expertise to advance substance use, abuse, and addiction research and improve public health outcomes. The CRAN framework was created to foster synergies in addiction science, provide new research opportunities, and meet public health needs by broadening the research focus of the participating Institutes to better address poly- or multiple substance use, abuse, and addiction. Its main priorities are to elucidate the common, specific, and interacting causes and consequences of substance exposure, and to develop effective prevention and treatment interventions. By recognizing that there are common connections among substances of abuse, approaches taken to understand risk factors, mechanisms of action, prevention, treatment, consequences, and service delivery can be improved. To this end, CRAN seeks to fill gaps in research by implementing the following: 1) expansion of research training to increase scientific expertise in multi-substance research; 2) creation of core technical facilities and common databases for such research; and 3) facilitation of a paradigm shift in addiction research culture both within NIH and across the research community. In this way, research related to one drug will be better leveraged to inform research on others, and help achieve the common goal of reducing the enormous toll of all substance use on patients, families, and society.

Background and Overview

Goals and Objectives

Strategies and Actions

A first step toward eliminating any disease is understanding its nature, incidence, prevalence, and etiology. CRAN will support research to better understand the factors that predispose a person to initiate use of substances of abuse and engage in behaviors ranging from experimentation to harmful use. These factors need to be taken into account to prevent addiction and other associated negative consequences. CRAN will support research to develop successful treatments for multi-substance addiction and improve treatment accessibility and implementation. Additionally, CRAN will leverage data systems and share resources among the CRAN Institutes and others, as appropriate. Examples of research efforts include, but are not limited to, the following:
- Develop animal models of multiple or polysubstance use.
- Identify new targets (e.g., genes, epigenetic marks, biochemical pathways, neural circuits) to serve as biomarkers or leads for medications development to treat multiple addictions.
- Determine the efficacy/utility of coordinated efforts to screen for multiple substances.
- Implement multi-armed clinical trials to assess outcomes for various substances alone and in combination, to maximize resources and speed the development of medications and other treatments.
- Develop innovative and integrative approaches for treating multi-substance addictions and other comorbid conditions (e.g., mental illness, chronic pain, HIV).
- Focus on poly-substance use in vulnerable populations (e.g., pregnant women, the elderly, people with medical comorbidities such as HIV or HCV).
- Identify and test strategies to overcome barriers and disincentives to the adoption of research-based treatments for multiple addictions in the community, criminal justice system, and healthcare systems.
- Conduct implementation research to improve treatment accessibility in various populations, leveraging provisions in the Affordable Care Act that increase access to care.
- Promote research on the adverse health consequences of poly-substance use.
- Support research to develop more effective program, policy, and communication interventions targeting substance use and substance use disorders, and to disseminate and implement existing evidence-based interventions more effectively.
- Develop Funding Opportunity Announcements in areas of shared interest.
- Coordinate substance use and addiction training programs across ICs.
- Encourage CRAN-related presentations.

Highlights
- Identify individual developmental trajectories (e.g., brain, cognitive, emotional, academic) and the factors that can affect them.
- Develop national standards of normal brain development in youth.
- Understand the role of genetic versus environmental factors in development, enriched by comparisons of twin participants.
- Examine the effects of physical activity, sleep, and sports and other injuries on brain development and other outcomes.
- Study the onset and progression of mental disorders, the factors that influence their course or severity, and the relationship between mental disorders and substance use.
- Determine how exposure to different substances such as alcohol, marijuana, tobacco, caffeine, and others, individually or in combination, affects various developmental outcomes and vice versa.
https://www.addictionresearch.nih.gov/cran-strategic-plan-2016-2021
Abstract

Background: Women are the fastest-growing segment of Veterans Healthcare Administration (VA) utilizers. Although men and women Veterans both report high rates of chronic pain, rates are higher in women, and addressing their unique needs is a priority. VA has placed renewed emphasis on promoting self-management for pain. Despite having a widely supported program for doing so, cognitive behavioral therapy for chronic pain (CBT-CP), several barriers to accessing this care and engaging optimally with its recommendations are noted, and these may be particularly salient for women. They include logistical, healthcare delivery, and psychosocial barriers. Patient-centered efforts to address these barriers in the context of evidence-based pain interventions, like CBT-CP, may translate to improved treatment access, engagement, and adherence, and more optimal outcomes for women Veterans. Accordingly, a home-based intervention integrating an evidence-based CBT-CP program with reciprocal peer support (RPS) has been developed (CONNECT) and is currently being pre-piloted. Results are promising, but substantial refinement and feasibility testing are needed before a full-scale trial is warranted. This proposal will optimize the feasibility and acceptability of CONNECT and examine the potential feasibility of candidate control conditions for a future randomized trial.

Significance/Impact: Because CONNECT is less resource-intensive than CBT-CP and because it is home-based, it may reduce costs and improve access to behavioral pain care, and its success may have implications for male Veterans with pain. It targets previously unaddressed and potentially modifiable factors (e.g., social support) thought to be relevant for adjustment and uptake of pain self-management among women Veterans.

Innovation: CONNECT examines an alternate method for promoting CBT-CP that is potentially scalable, cost-effective and transportable.

Specific Aims:
- Aim 1a: Solicit Veteran feedback on the refined recruitment strategies, treatment components and materials, duration/content, engagement strategies, peer-matching, and data collection methods.
- Aim 1b: Evaluate the feasibility (retention, adherence, assessment methods, recruitment rate) and acceptability (credibility, satisfaction) of a refined 8-week RPS pain self-management intervention (CONNECT) in a sample of 30 women Veterans with chronic musculoskeletal pain.
- Aim 1c: Conduct a responder analysis to classify the percentage of women Veterans who evidence clinically meaningful improvements in pain intensity/interference and depressive symptoms.
- Aim 2: Use qualitative methodology to (a) examine participant perceptions regarding satisfaction/acceptability of CONNECT and of specific components, and (b) examine participant perceptions of underlying mechanisms.
- Aim 3: In preparation for a future randomized controlled trial (RCT), conduct a feasibility analysis to determine preferences for treatment using a prospective preference assessment, which includes (a) qualitative interviews to query motivations for, concerns about, and factors influencing participation in a future RCT, as well as survey measures to assess (b) willingness to be randomized to candidate control conditions, and (c) factors influencing that willingness.

Methodology: A single-arm pilot design and an analogue study to examine the feasibility of randomization to candidate control conditions.

Next Steps/Implementation:
If CONNECT is feasible, a Hybrid Type 1 trial will be warranted to determine whether providers may confidently recommend CONNECT to women Veterans and to examine implementation factors.

NIH Reporter Project Information: https://reporter.nih.gov/project-details/9833202
DRA: Health Systems, Musculoskeletal Disorders
DRE: Technology Development and Assessment, TRL - Applied/Translational
Keywords: Disparities
MeSH Terms: None at this time.
https://www.hsrd.research.va.gov/research/cda_abstracts.cfm?Project_ID=2141707413
OVERVIEW: This position will lead in developing strategic goals, objectives, and efforts for R3 that are consistent and comparable with the national goals established in the National Hunting and Shooting Sports Action Plan: Strategies for Recruiting, Retaining, and Reactivating Hunting and Shooting Sports Participants, and in the Georgia Hunting Action Plan. It is imperative that the process for achieving R3 goals be data-driven; thus, the coordinator position requires skills in requesting, analyzing, and interpreting data, as well as in leading efforts to adapt current programs or develop new ones based on those findings. Evaluating R3 efforts is another critical component of success, and the position must focus on strategies to track and evaluate R3 program sponsors, mentors, and participants. Another important part of this position is developing and maintaining a strong, productive stakeholder alliance or workgroup and coordinating mentor/mentee opportunities. The coordinator will work with the Georgia Department of Natural Resources, the Georgia Chapter of Safari Club International, the Quality Deer Management Association, and the National Wild Turkey Federation and its stakeholders within Georgia to continue, and modify as necessary, the state-specific strategic plan. The R3 plan development process will identify and assess current Georgia R3 efforts; map those efforts on the Outdoor Recreation Adoption Model; identify current gaps in Georgia's delivery of R3; and develop prioritized strategies and action items that adapt, improve, and expand efforts to meet specific measurable R3 goals. The position will be located at the Georgia Wildlife Federation's Alcovy Conservation Center in Covington, GA; however, remote work can be accommodated.

EXPECTED RESULTS:
- Improve statewide R3 strategy, program design, and outcome tracking efforts
- Improve sources of information and assistance for new participants
- Improve skills and training for participants based upon known barriers to hunting participation
- Create a network of mentors/volunteers and promote mentor/mentee opportunities
- Improve participation at shooting sports facilities and increase access to hunting and shooting sports opportunities
- Improve planning and cooperation among R3 partners
- Engage current hunters and shooting sports participants
- Improve cultural acceptance of hunting and shooting sports
- Improve social media outreach to non-hunting participants
- Work with steering committee members in support of R3 initiatives

DUTIES AND RESPONSIBILITIES:

Coordinate Recruitment, Retention and Reactivation (R3) Efforts of Hunting and Shooting Sports Programs in Georgia
The coordinator will develop prioritized strategic goals, objectives, and programs for R3 that align with the national goals established in the National Hunting and Shooting Sports Action Plan: Strategies for Recruiting, Retaining and Reactivating Hunting and Shooting Sports Participants, the Georgia Hunting Action Plan, and other R3 best-practices resources.

Evaluate R3 Efforts
- Request, gather, analyze, and interpret data to evaluate R3 efforts in Georgia
- Lead efforts to adapt programs or develop new ones based on findings
- Track and evaluate Georgia R3 program sponsors, mentors, and participants
- Develop and maintain strong, productive stakeholder relationships

Evaluating the Georgia Plan
The coordinator will work with Georgia Department of Natural Resources staff, the Georgia R3 steering committee, and Georgia stakeholders to evaluate and modify the
state-specific strategic plan as necessary. This plan should continue to identify current Georgia R3 programs; assess program effectiveness; inventory programs within the Outdoor Recreation Adoption Model (ORAM); identify gaps in the ORAM; and develop action items that adapt, improve, and expand programs to meet specific measurable goals.

Facilitate Organizational Partner Mentoring Efforts
- Identify organizations from traditional and non-traditional backgrounds for R3 hunting and shooting sports
- Act as a resource and guide for agency and R3 partners
- Work closely with R3 partners through phone contact, personal conferences, and group meetings

Building Collaborative Partnerships
- Develop and maintain relationships with members of Georgia sportsmen's and sportswomen's groups; the hunting, conservation, and other outdoors-related industries; and community development organizations, for the purpose of fostering collaborative projects to recruit, retain and reactivate hunters and shooters
- Assist in developing and coordinating communication plans designed to maintain positive relationships with the public, program stakeholders, other Georgia agencies/organizations, and related service programs
- Develop partnership strategies and collaborations that leverage program funding, resources, and/or operational capacity
- Solve problems by coordinating actions across organizational lines and agency departments
- Supervise three Academics Afield Coordinators, evaluate the program, and, based on that evaluation, look to expand the program with other states, colleges, and universities
- Host steering committee meetings as required
- Host an annual Georgia R3 Summit for agency, NGO, and industry stakeholders

Creating and Maintaining a High-Performance Environment
- Define goals and/or required results at the beginning of each performance period and gain acceptance of ideas by creating a shared vision
- Communicate regularly with the supervisor and stakeholders on progress toward defined goals and/or required results, providing specific feedback and initiating corrective action when defined goals and/or required results are not met
- Initiate discussions on a regular basis with the supervisor and stakeholders to review the relations climate, identify issues, and suggest solutions
- Recognize contributions and celebrate accomplishments
- Motivate others to improve the quantity and quality of work performed, and provide training and development opportunities as appropriate

KNOWLEDGE, SKILLS, AND ABILITIES:
- Avid outdoorsperson with hunting/shooting experience
- Knowledge of hunting and shooting sports programs, training theory, instructional design and program management
- Knowledge of media production, communication, and dissemination techniques and methods, including alternative ways to inform and entertain via written, oral, and visual media (e.g., articles, blog posts, social media, podcasts, video)
- Knowledge of business and management principles involved in strategic planning, resource allocation, human resources modeling, leadership technique, production methods, and coordination of people and resources
- Ability to evaluate training and instructional materials; write reports; develop procedures and monitor budgets
- Ability to create, organize, and conduct presentations, seminars, training workshops, and promotional events
- Experience using MS Outlook, Word, Excel, PowerPoint, and other related software
- Ability to stay on task, meet goals, and keep morale high
- Ability to develop realistic goals and measure progress to stay on track
- Ability to plan, organize, and coordinate work assignments that often involve multiple staff, including those from other organizations
- Ability to communicate effectively in oral and written forms
- Ability to establish and maintain effective working relationships with peers from a variety of different organizations and backgrounds

Preference will be given to applicants with marketing and communication experience in addition to the above skills and abilities. Salary commensurate with experience and background. To apply, mail or email a cover letter, resume, and references to Mike Worley by November 30, 2020.
https://gwf.org/r3jobpost/
In-vehicle communication has become an integral part of today's driving environment, considering the growing add-ons of sensor-centric communication and computing devices inside a vehicle for a range of purposes, including vehicle monitoring, physical wiring reduction, and driving efficiency. However, related literature on cyber security for in-vehic...

In recent decades, networked smart devices and cutting-edge technology have been exploited in many applications for the improvement of agriculture. The deployment of smart sensors and intelligent farming techniques supports real-time information gathering for the agriculture sector and decreases the burden on farmers. Many solutions have been prese...

In the last few years, the Internet of Things (IoT) has gained attention in developing various smart city applications such as smart healthcare, smart supply chains, smart homes, smart grids, etc. The existing literature focuses on the smart healthcare system as a public emergency service (PES) to provide timely treatment to the patient. Howe...

Citation: Jha, S.K.; Prakash, S.; Rathore, R.S.; Mahmud, M.; Kaiwartya, O.; Lloret, J. Quality-of-Service-Centric Design and Analysis of Unmanned Aerial Vehicles. Sensors 2022, 22, 5477.

Storm sewerages are crucial infrastructures in water management. In this paper, a system was developed to detect blockage of the sewerage and the presence of illegal spills in storm sewerages. Different nodes with sensors measuring the water level, turbidity, conductivity, and oil presence are scattered in the sewerage. These nodes are connecte...

Farming activity near rivers and coastal areas sometimes implies spills of chemical and fertilizer products into aquifers and rivers. These spills strongly affect the water quality at river mouths and on beaches close to those rivers. The presence of these elements can worsen the water quality for its normal use, and even for recreation. When this p...

Neighbor discovery is an important first step after the deployment of ad hoc wireless networks, since they are a type of network that does not provide a communications infrastructure right after deployment, the devices have radio transceivers that provide a limited transmission range, and there is a lack of knowledge of the potential neighbors...

An identity management system is essential in any organisation to provide quality services to each authenticated user. A smart healthcare system should use reliable identity management to ensure timely service to authorised users. Traditional healthcare uses a paper-based identity system, which is converted into centralised identity management in...

IoT provides applications and possibilities to improve people's daily lives and business environments. However, most of these technologies have not been exploited in the field of emotions. With the amount of data that can be collected through IoT, emotions could be detected and anticipated. Since the study of related works indicates a lack of metho...

The Internet of Things (IoT) has introduced new applications and environments. The smart home provides new ways of communication and service consumption. In addition, Artificial Intelligence (AI) and deep learning have improved different services and tasks by automating them. In this field, reinforcement learning (RL) provides an unsupervised way to lear...
With the growing popularity of immersive interaction applications, e.g., industrial teleoperation and remote surgery, the service demand on communication networks is switching from packet delivery to remote-control-based communication. The Tactile Internet (TI) is a promising paradigm of remote-control-based wireless communication service, which ena...

With the advent of the Internet of Things (IoT), various industries have made considerable progress, including agriculture, utilities, manufacturing, and retail. IoT solutions help to increase productivity and efficiency in factories and workplaces. Meanwhile, in smart cities, interconnected traffic lights and parking lots are established throu...

The IEEE 802.15.4 standard is one of the widely adopted specifications for realizing different applications of the Internet of Things. It defines several physical layer options and a Medium Access Control (MAC) sub-layer for devices operating at low power and low data rates. As devices implementing this standard are primarily battery-powered, minimi...

Clustering is a promising technique for optimizing energy consumption in sensor-enabled Internet of Things (IoT) networks. Uneven distribution of cluster heads (CHs) across the network, repeatedly choosing the same IoT nodes as CHs, and identifying cluster heads in the communication range of other CHs are the major problems leading to higher energy...

Turfgrass phenotyping is a potential tool in grass breeding programs. The traditional methods for turfgrass drought phenotyping in the field are time-consuming and labor-intensive. However, remote sensing techniques emerge as effective, rapid and easy approaches to optimize turfgrass selection under water stress. Remote sensing approaches are...

Wireless technology offers numerous avenues of growth for developing communication systems. The Internet of Things (IoT) is combined with the sensing ecosystem to transfer and process data about the physical environment. Recently, IoT devices have collaborated with wireless devices to improve embedded medical applications. Many solutions are proposed to decrease the po...

The Internet of Things (IoT) connects heterogeneous physical objects to collect observation data and eases the development of smart transmission systems. Vehicular ad hoc networks (VANETs) offer many smart services for emerging vehicle-to-vehicle communication systems using sensors. Although geographical routing solutions have improved the proces...

Flying Ad Hoc Networks (FANETs) are gaining popularity due to their extraordinary features in the avionics and electronics domain. FANETs are also considered powerful assets in military as well as civil security applications. Due to their infrastructure-less design and wireless network nature, some security challenges arise that sh...

Wireless networks and the Internet of Things (IoT) have shown rapid growth in the development and management of smart environments. These technologies are applied in numerous research fields, such as security surveillance, the Internet of Vehicles, medical systems, etc. Sensor technologies and IoT devices are cooperative and allow the collection o...

Mixed crops are one of the fundamental pillars of agroecological practices. Row intercropping is one of the mixed cropping options, based on the combination of two or more species to reduce their impacts. Nonetheless, from a monitoring perspective, the coexistence of different species with different characteristics complicates some processes, requir...
Irrigation is an essential input for grassland sustainability, especially in seasons where rainfall is irregular and insufficient. The scarcity of water in many regions of the world decreases the timing and quantity of irrigation and affects the quality of grasslands. The use of remote sensing techniques as precise methods to control the efficie...

The late detection of abnormalities in humans is one of the main problems in facing many pathologies. In this context, non-invasive techniques, such as urine tests, have currently become one of the most interesting tools. In particular, urine is one of the easiest and most accessible biological fluids, and it provides relevant informatio...

Every single day, a massive amount of data is generated by different medical data sources. Processing this wealth of data is a daunting task, and it forces us to adopt smart and scalable computational strategies, including machine intelligence, big data analytics, and data classification. The authors can use Big Data analysis for effecti...

Nowadays, it is common for applications to require servers to run constantly and aim as close as possible to zero downtime. The slightest failure might cause significant financial losses and sometimes even lives. For this reason, security and management measures against network threats are fundamental and have been researched for years. Software-de...

In gardening, particularly on golf courses, soil moisture management is critical for maximizing water efficiency. Remote sensing has been used to estimate soil moisture in recent years with relatively low accuracies. In this paper, we aim to use remote sensing and wireless sensor networks to generate soil moisture indexes for a golf course. In the...

In this paper, a novel solution to avoid new infections is presented. Instead of tracing users' locations, the presence of individuals is detected by analysing voices, and people's faces are detected by the camera. To do this, two different Android applications were implemented. The first one uses the camera to detect people's faces whenever th...

Software Defined Networking (SDN) simplifies network management and significantly reduces operational costs. SDN removes the control plane from forwarding devices (e.g., routers and switches) and centralizes this plane in a controller, enabling the management of the network's forwarding decisions by programming the control plane with a high-level lan...

In irrigation ponds, an excess of nutrients can cause eutrophication, a massive growth of microscopic algae. It can cause different problems in the irrigation infrastructure and should be monitored. In this paper, we present a low-cost sensor based on optical absorption to determine the concentration of algae in irrigation ponds. The se...

In modern technologies, the industrial Internet of Things (IIoT) has achieved rapid growth in the fields of medicine, transportation, and engineering. It consists of a self-governing configuration cooperating with sensors to collect, process, and analyze the processes of a real-time system. In the medical domain, healthcare IIoT (HIIoT) provides an...

Neighbor discovery is a crucial first step after the deployment of wireless ad hoc networks, which do not have a communications infrastructure. In this paper we present analytical models of randomized neighbor discovery protocols for static one-hop environments: CDPRR (Collision Detection Probabilistic Round Robin) and CDH (Collision Detection Hell...
The use of precision agriculture is becoming more and more necessary to provide food for the world’s growing population, as well as to reduce environmental impact and enhance the usage of limited natural resources. One of the main drawbacks that hinder the use of precision agriculture is the cost of technological immersion in the sector. For farmer... In modern years, network edges have been explored by many applications to lower communication and management costs. They are also integrated with the internet of things (IoT) to achieve network design, in terms of scalability and heterogeneous services for multimedia applications. Many proposed solutions are performing a vital role in the developme... Nowadays, the Internet of Things (IoT) performs robust services for real‐time applications in monitoring communication systems and generating meaningful information. The ZigBee devices offer low latency and manageable costs for wireless communication and support the process of physical data collection. Some biosensing systems comprise IoT‐based Zig... The remote location of agricultural fields leads to the difficulty of deploying Precision Agriculture (PA) systems as there is no Internet access in those areas. Therefore, the use of long-range wireless technologies such as LoRa can provide connectivity to rural areas and allow monitoring PA systems remotely. In this paper, a heterogeneous archite... The role of agriculture in society is vital due to factors such as providing food for the population, is a major source of employment worldwide, and one of the most important sources of revenue for countries. Furthermore, in recent years, the interest in optimizing the use of water resources has increased due to aspects such as climate change. This... Internet of Things (IoT) is a developing technology for supporting heterogeneous physical objects into smart things and improving the individuals living using wireless communication systems. Recently, many smart healthcare systems are based on the Internet of Medical Things (IoMT) to collect and analyze the data for infectious diseases, i.e., body... The advantage of computational resources in edge computing near the data source has kindled growing interest in delay-sensitive Internet of Things (IoT) applications. However, the benefit of the edge server is limited by the uploading and downloading links between end-users and edge servers when these end-users seek computational resources from edg... The advantage of computational resources in edge computing near the data source has kindled growing interest in delay-sensitive Internet of Things (IoT) applications. However, the benefit of the edge server is limited by the uploading and downloading links between end-users and edge servers when these end-users seek computational resources from edg... In recent times, health applications have been gaining rapid popularity in smart cities using the Internet of Medical Things (IoMT). Many real-time solutions are giving benefits to both patients and professionals for remote data accessibility and suitable actions. However, timely medical decisions and efficient management of big data using IoT-base... The IEEE 802.15.4 standard is one of the widely adopted networking specification for realizing different applications of Internet of Things (IoT). It defines several physical layer options and Medium Access Control (MAC) sub-layer protocols for low-power devices supporting low-data rates. One such MAC protocol is the Deterministic and Synchronous M... 
Soil moisture control is crucial to assess irrigation efficiency in green areas and agriculture. In this paper, we propose the design and calibration of a sensor based on inductive coils and electromagnetic fields. The proposed prototypes should meet a series of requirements such as low power consumption, low relative error, and a high voltage diff... The use of drones in agriculture is becoming a valuable tool for crop monitoring. There are some critical moments for crop success; the establishment is one of those. In this paper, we present an initial approximation of a methodology that uses RGB images gathered from drones to evaluate the establishment success in legumes based on matrixes operat... The COVID-19 pandemic has been a worldwide catastrophe. Its impact, not only economically, but also socially and in terms of human lives, was unexpected. Each of the many mechanisms to fight the contagiousness of the illness has been proven to be extremely important. One of the most important mechanisms is the use of facemasks. However, the wearing... The Internet of Things (IoT) is an emerging technology and provides connectivity among physical objects with the support of 5G communication. In recent decades, there have been a lot of applications based on IoT technology for the sustainability of smart cities, such as farming, e‐healthcare, education, smart homes, weather monitoring, etc. These a... Uncontrolled dumping linked to agricultural vehicles causes an increase in the incorporation of oils into the irrigation system. In this paper, we propose a system based on an optical sensor to monitor oil concentration in the irrigation ditches. Our prototype is based on the absorption and dispersion of light. As a light source, we use Light Emitt...
https://www.researchgate.net/profile/Jaime-Lloret
This article examines the current state of soil and water resources and farmland in the Azerbaijan Republic; the problem of progressive water and wind soil degradation; the need to organize agriculture around automated irrigation control systems using water-saving technology and the associated hardware; the characteristics and lessons of measures implemented to stabilize the ecological and drainage condition of agriculture in the country's areas of insufficient moisture; and the basic aspects of developing an environmentally balanced reclamation approach, with rational use of crop rotation and cropping systems, taking into account the requirements of economic development and environmental management. Keywords: Irrigation; Technology; Degradation; Automated Management of Low-Intensity Zones; Agriculture

The main directions of the Republic's economic and social development are characterized by the intensification of agricultural production. A powerful tool for this intensification, alongside specialization, is irrigation. In areas of insufficient moisture (especially typical for mountainous areas), irrigation is one of the decisive factors in cultivating high and stable yields of agricultural crops.

The purpose of the study: This requires the development of new technical solutions and the introduction of automated low-intensity crop irrigation systems that meet ecological and environmental requirements, in order to improve the condition of irrigated lands, reduce water consumption per unit of product, and increase the productivity of crops on the irrigated field. Irrigated soil in Azerbaijan covers 1.45 thousand hectares. Among the factors directly affecting crop yield and productivity per hectare of arable and agricultural land, at minimal cost of labor and funds, is the application of automation. Automated irrigation increases the effectiveness of all intensification factors: chemical inputs, integrated mechanization, intensive technology upgrades, etc. It allows the creation of large zones of guaranteed crop production.

Objects of study: The object of the study is to explore and create correct methods for regulating water use and the supply of water to plants by means of irrigation, regardless of weather conditions. To this end, we developed and introduced into production automated control systems for low-intensity pulsed micro-sprinkler irrigation of self-oscillating action, which successfully passed endurance testing on soils under an orchard in the Guba-Khachmaz area, in the foothills at an altitude of 600 meters above sea level with a terrain slope of 0.02 (see the schematic of the impulse self-oscillating sprinkler system with automated controls, Figure 1).

Construction and functional description of the system: For operational monitoring of the weather conditions in the region, needed for planning and operational management of crop-field irrigation, measurement sensors with probes for telemetric measurement are installed at the local hydrometeorological point, reading the main parameters: a) wind speed V, an analog telemetry signal (TIT) with parameter values recorded on a 30-minute cycle; b) air temperature t_v, an analog telemetry signal (TIT) with parameter values recorded on a 30-minute cycle;
c) air humidity W_b, an analog telemetry signal (TIT) with parameter values recorded on a 30-minute cycle.

Figure 1: Schematic diagram of the impulse self-oscillating sprinkler system with automated controls.

The parameter values, in telemetric code, are read by an intelligent object controller (OC) installed at the transformer point, which communicates with the sensor-converters via a radio channel. The telemetry signal codes read by the OC undergo preliminary processing and averaging and are written to main memory, where they are stored until read by the communications controller (CC) installed in the operational process-control room (the ASMO operator station).

For monitoring and control of electricity supply facilities and power-consumption accounting, transmitters are installed at the ASMO transformer point (TP) (see the structural diagram of the process-control system for irrigation): i. input voltage measurement at the TP, U (analog signal, TIT); ii. consumer load measurement, I (analog signal, TIT); iii. electricity metering, Wh (integrated discrete signal, TII); iv. switch position control (enabling/disabling consumers), SS (discrete position signal). The parameter values in telemetric code are read by the intelligent object controller over a local wire line and, after initial processing and averaging, are written into RAM.

For monitoring and control of the technological processes of water abstraction, clarifiers (wastewater treatment plants), and pumping stations (devices boosting the water pressure in the pipes), the transmitters listed in the structural-functional schema are installed: a. water turbidity in the ponds, M (analog signal TIT, read on a 30-minute cycle); b. water level in the pond chambers, H (analog signal TIT, read on a 30-minute cycle); water pressure, P, installed on pump discharges and on modular and distribution reservoirs (analog signal TIT, read on a 30-minute cycle); c. load of the electric motors, I (analog signal TIT, read on a 30-minute cycle); d. positions of the valves, PZ (discrete signal, read on a 1-second cycle); e. positions of the power switches, VP (discrete signal, read on a 1-second cycle); f. alarms, AU (discrete alarm signal, read on a 1-second cycle with priority); g. metering of water pumped and supplied into the distribution pipeline, Q (integrated signal TII, processed on a 1-hour cycle).

Soil monitoring and control of the irrigation process are carried out for each irrigated field based on measurements from agrophysical and technological sensor-converters: I. soil moisture (analog signal TIT, 30-minute cycle); II. evaporation from the soil surface (analog signal TIT, 30-minute cycle); III. soil temperature t° (analog signal TIT, 30-minute cycle); IV. irrigation water consumption on the plot's distribution pipeline, Q (integrated signal, 30-minute cycle); V. activation of the sprinkler units (discrete signal, read on a 30-second cycle); VI. positions of the switching valves (discrete signal, read on a 30-second cycle). The telemetric signal codes are read by the intelligent field object controller over radio and, after initial processing and averaging, are written by the processor into RAM.

The data recorded in the memory of the object controllers are transferred programmatically, via radio and wire links, to the communications controller (CC) connected to the control-room computer (see the circuit diagram of the low-intensity irrigation system with automated controls), according to the specified rules, and written into its memory in the structure of the telemetry files (see the information-provision section).
The computer's exchange programs read the data from the CC's RAM, transcode it, and write it into a database, from which it is displayed in real time on mimic diagrams; after linearization and averaging, the data are programmatically recorded, by their codes, into a cumulative database, forming the data bank for the ASMO task complex [1-3]. Before being written to the data bank, the stream of measurement data is analyzed by the specified algorithms; when the analysis shows deviations from the values specified in the rule settings, operational control (OCU) of the process is invoked. The operational control base is programmatically scanned, on the cycle specified in the rules, by the management module for each technology direction, and if there are deviations in the data records for an activity, it generates a control signal to the appropriate executive body (actuator).

A. Data interchange on the work of the irrigation system is carried out via the Internet: To do this, one must connect the computer through a modem to the telephone network and obtain Internet access through an Internet service provider. This requirement applies to each subscriber. If these conditions are met, the center's computer can communicate with the computers at the irrigation sections of the districts of Azerbaijan and of other states.

B. The irrigation system's website, where visitors see the latest system-state data and interactive pages created with PHP technology, provides rapid exchange of data and messages in real time.

C. Using Skype, users can talk by phone, see each other when using cameras, and, with streaming video, view the status of the site.

When measuring the parameters, it is necessary to take into account the dispersion of the measured values. The number of measurement repetitions whose result can be taken as the actual value with probability 0.8 is determined by the formula:

n_0.8 = 1.64 * 0.001 * σ_B * (W_HB / (10 * h))² + 2.27   (1)

where n_0.8 is the number of measurement repetitions meeting probability 0.8; m_0.8 is the measurement accuracy, in mm; σ_B is the standard error of measurement, in %; and W_HB is the moisture reserve, in mm, at humidity β_HB in the control layer h, in m.

The deficit of the soil moisture reserve in the active layer, relative to the reserve at the lowest water capacity, is defined by the formula:

ΔW_HB = W_τ - W_HB   (2)

where h_a is the active soil layer, in m (it is assumed that the active layer of the soil is divided into layers of 0.20-0.30 m); γ is the average density of the soil layer, in t/m³ (recorded in the program code as gamma_sr); and β_τ is the soil moisture at the field station, in % of the mass of dry soil (recorded in the program code as Beta_tau).

For the automated determination of the starting soil moisture reserve, the value β_τ is measured on a section of the field with n_0.8 measurements (written in the code as n_0.8Ex). The measured values are automatically written to the parameter file DataPar.dbf of the data bank under the N_code of the element to which the parameter belongs (see the special section "Data ware") [3,4]. To specify the conditions for the calculation, the values of the required variables are written in the job form (see the ZADANIE_3 information). The starting (original) soil moisture deficit is then determined programmatically from the moisture reserves and the relevant rules. The results of the solution are written to the output document DOC_3 and plotted on the graph.
Searching for values in the database (from the information-management section): Parameter values are read automatically from the file DataPar.dbf by the N_code of the element to which the parameter belongs; the value of the N_code element is read from the file ELEM.dbf by the key SL_SYST + SSYST + SL_MODYLE + SL_GROUP + SL_VID + SL_TYPE. The lookup key for N_code is formed as follows (see the instructions to the operator):

a) from the file SL_SYST.dbf, select the system to which the parameter element belongs;
b) from the file SL_SSYST.dbf, select the subsystem;
c) from the file SL_MODYLE.dbf, the module;
d) from the file SL_GROUP.dbf, the group to which the measured parameter's element belongs;
e) from the file SL_VID.dbf, the kind of the measured parameter's element;
f) from the file SL_TYPE.dbf, the type of the measured parameter's element.

If several elements are identified by the coupling (see ZADANIE_3), then each of them is assigned a position number: the item number is appended to the name through the separator [_] (NAME_1, ...). For the formed coupling, the N_code is taken from TLS_X.dbf. From DataPar.dbf, by N_code + Z_date and the parameter name in ZADANIE_3, the value (ZNACH) is obtained programmatically for each field. The obtained parameter values, i.e. the moisture content β_τ at the specified date, or the moisture reserve W_τ at the specified date, for each section of a field, are written to the output document DOC_3 (see the output-document layouts, "Supply of moisture on irrigation fields"). After identifying the moisture β_τ or the soil moisture reserve W_τ, the moisture deficit or the moisture reserve deficit is determined [2-8].

Determination of the moisture deficit and the moisture reserve deficit for a section of the field: If, per ZADANIE_3, the humidity β_τ is determined and its value is found in DataPar.dbf, the moisture deficit relative to the humidity at the lowest water capacity β_HB is [2, 4, 6, 8]:

Δβ_HB = β_HB - β_τ   (3)

where β_HB is taken from SF_Plot.dbf and ConSoil.dbf, and β_τ from step 5.2.4. The moisture deficit values are automatically written to the output document DOC_3. If, per ZADANIE_3, the moisture reserve W_τ is determined and its value is found in DataPar.dbf, the moisture reserve deficit relative to the lowest water capacity is:

ΔW_HB = W_τ - W_HB   (4)

where W_HB is taken from SF_Plot.dbf and ConSoil.dbf, and W_τ from step 5.2.4.

After the data for each of the specified field sections have been identified, the average humidity β_AV or the average soil moisture reserve W_AV over the whole field is determined:

β_AV = (1/n) Σ (β_τ)_i   (5)

where n is the number of plots involved, from entry 4 of ZADANIE_3, and (β_τ)_i is the soil moisture relative to dry soil, from step 5.2, for each plot; if W_τ is determined, the average soil moisture reserve of the entire field is computed analogously. The average moisture deficit over the field is:

Δβ_AV = (1/n) Σ (Δβ_τ)_i   (6)

and the average soil moisture reserve deficit of the field is:

ΔW_AV = (1/n) Σ (ΔW_HB)_i   (7)

The values calculated in step 5.2.4 are automatically written to the line <averaging field ...>.

i. The values defined in items 4, 5 and 6 of DOC_3 are displayed as a bar chart of the parameter values, "Supply of moisture on the irrigation field".
ii. After viewing DOC_3, the prompt <Will you solve the problem for other fields on this date?> appears, with options <Yes> and <No>. On <Yes>, the message <Type the name of the field and farm in ZADANIE_3> appears and ZADANIE is displayed for data entry. If the parameter values specified in ZADANIE_3 are missing from the database, the message <The parameter values specified in ZADANIE are missing from the database. Will you measure these parameters?> appears, with options <Yes> and <No>. If <Yes>, go to step 5.2.1.
If <No>, the solution of the problem is complete and the program exits to the menu.

Before starting measurements, the number of measurements at each site that gives the computed value a probability of at least 0.8 at minimum labor cost, n_0.8, is determined:

n_0.8 = 1.64 * 0.001 * σ_B * (W_HB / (10 * h))² + 2.27   (8)

where σ_B is the set value of the standard error, in percent; β_HB is taken from ZADANIE_3; W_HB is the moisture reserve of the soil, in mm, at humidity β_HB, from SF_Plot.dbf; and h is the depth of the soil layer, in mm, in which the measurement is to be taken.

Perform n_0.8 measurements of the parameter specified in row 2 of ZADANIE_3 at each site and write them to DataPar.dbf under N_code, Z_date and Z_time. Then calculate the mean value of the measurements (making a selection from DataPar.dbf by N_code + Z_date). The average soil moisture reserve W_AV is:

W_AV = (1/n_0.8) Σ (W_0.8)_i   (mm)   (9)

where (W_0.8)_i is the moisture reserve value of each measurement selected in step 4.2.6. If the soil moisture β_0.8 is measured instead, the average humidity β_AV is likewise:

β_AV = (1/n_0.8) Σ (β_0.8)_i   (%)   (10)

where (β_0.8)_i is the soil moisture value of each measurement. The computed values are assigned as follows: a) W_AV := W_τ; b) β_AV := β_τ; and written to the output document DOC_3, as in steps 5.2.1 and 5.2.3. The completed DOC_3 is written to the folder to be sent through the Internet channels. The program codes are shown in a separate annex.

The study identified the possibility of operationally solving the complex of tasks involved in the operational determination of soil-conservation settings.
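To make the bookkeeping in equations (1)-(10) easier to follow, here is a minimal sketch in Python. It is an illustration only: the reading of equations (1)/(8), whose original typography is ambiguous, follows the reconstruction given above, and the function and variable names are ours, not those of the ASMO program codes, which are not reproduced here.

```python
import math

# Sketch of the moisture bookkeeping of equations (1)-(10); names and the
# exact reading of eq. (1)/(8) are our reconstruction, not the ASMO code.

def repeat_count(sigma_b: float, w_hb: float, h: float) -> int:
    """Eq. (1)/(8): measurements needed for probability 0.8.
    sigma_b: standard error of measurement, %; w_hb: moisture reserve at the
    lowest water capacity, mm; h: depth of the control layer, m."""
    n = 1.64 * 0.001 * sigma_b * (w_hb / (10.0 * h)) ** 2 + 2.27
    return math.ceil(n)

def reserve_deficit(w_tau: float, w_hb: float) -> float:
    """Eqs. (2)/(4): reserve deficit dW = W_tau - W_HB, mm (negative = dry)."""
    return w_tau - w_hb

def field_average(values: list[float]) -> float:
    """Eqs. (5)-(7), (9), (10): plain mean over plots or repeat measurements."""
    return sum(values) / len(values)

# Example: three plots of one field; reserve at the lowest water capacity 135 mm.
w_hb = 135.0
plots_w_tau = [118.0, 122.5, 120.2]                   # measured reserves, mm
print(repeat_count(sigma_b=5.0, w_hb=w_hb, h=0.3))    # -> 19 measurements
print(field_average([reserve_deficit(w, w_hb) for w in plots_w_tau]))
```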
https://lupinepublishers.com/environmental-soil-science-journal/fulltext/intellectual-irrigation-management-in-mining-frozen-farming-in-azerbaijan.ID.000113.php
Abstract: Water is essential for our daily life and is at the core of sustainable development. It is inextricably linked to climate change, agriculture, food security, health, equality, gender and education. The water supply crisis has consistently been recognized as one of the greatest global risks by the World Economic Forum. Population growth is the major factor behind the global water supply crisis. Water management is not a trivial concern, especially as food and water are inextricably linked. Agricultural irrigation consumes about 70% of global freshwater withdrawals. As population growth continues, 60% more food will be needed to satisfy the demand of more than 9 billion people worldwide by 2050. However, in many regions, the water allocated to irrigation is largely capped. Irrigation water-use efficiency worldwide is low (around 50% to 60%). New technologies for more efficient irrigation need to be developed; otherwise, water scarcity will become a global issue in the near future. In current agricultural irrigation practice for large-scale fields, the amount of water to be applied and the time to apply it are determined in advance based on the irrigator's knowledge. The actual conditions in the field are generally not considered in determining the irrigation amount and timing. From a process systems engineering (PSE) perspective, current irrigation practice is an "open-loop" decision-making process. It is well recognized in process control that open-loop control is not precise. A "smarter" approach to agricultural irrigation is to close the decision-making loop to form "closed-loop" irrigation. In the closed-loop system, sensing instruments (e.g., soil moisture sensors, evapotranspiration (ET) gauges, thermal cameras) are used to collect various real-time field information (e.g., soil moisture, ET, temperature) regularly. This field information is then fused together to obtain estimates of the entire field's conditions. The estimated field conditions are fed back to an adaptive control system, which calculates the best irrigation commands for the next few hours or days based on a field model, the estimated field conditions, the local weather forecast, and other pre-specified irrigation requirements. Due to significant nonlinearities, uncertainties, and the very large size of the fields, there are many great challenges that need to be addressed. We have been working towards this closed-loop smart irrigation vision in collaboration with different partners, including a sensor manufacturer, farmers, and a government agency. I will share my views on the role of process systems engineering in this closed-loop smart irrigation vision and the great challenges and opportunities in the modelling, sensing, and control of irrigation systems, and introduce some of our recent progress.

Biography: Dr. Jinfeng Liu is currently an Associate Professor in the Department of Chemical and Materials Engineering at the University of Alberta. He received his PhD in Chemical Engineering from UCLA, and his MSc and BSc from Zhejiang University. His research interests are in the general area of process systems and control engineering, with a current focus on the development of enabling modeling, estimation and control methods to address the great challenges in closed-loop smart agricultural irrigation for water sustainability. Based on his work on closed-loop irrigation, Dr.
Liu was recognized as an Emerging Leader at the 69th Canadian Chemical Engineering Conference. Dr. Liu has published 3 books and 1 edited special issue, along with more than 100 journal and conference papers. His research has received over 4,000 citations, and his books have received over 27,000 chapter downloads. Two papers published in the AIChE Journal were recognized as most-cited articles, and one paper published in Computers and Chemical Engineering was identified as a most-downloaded paper. Dr. Liu currently serves as an associate editor for the IFAC Journal of Process Control, Control Engineering Practice, the International Journal of Systems Science, and the MDPI journal Mathematics.
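As a toy illustration of the closed-loop idea described in the abstract, the sketch below closes the loop around a one-bucket soil moisture balance: a noisy sensor reading is smoothed by an exponential filter (a crude stand-in for the state-estimation step) and a proportional controller (a crude stand-in for the adaptive or model predictive controller the talk describes) computes the next irrigation amount. The model, gains and numbers are invented for illustration and are not taken from Dr. Liu's work.

```python
import random

# Toy closed-loop irrigation: the field is treated as a single water bucket.
# theta is volumetric soil moisture; all constants below are invented.
TARGET = 0.30        # desired soil moisture (m3/m3)
ET_LOSS = 0.01       # assumed constant daily evapotranspiration loss (m3/m3)
KP = 0.8             # proportional controller gain
ALPHA = 0.5          # exponential-filter weight (stand-in for state estimation)

theta_true = 0.22    # actual field state
theta_hat = 0.22     # estimated field state

for day in range(10):
    measurement = theta_true + random.gauss(0.0, 0.01)          # 1. sense (noisy)
    theta_hat = ALPHA * measurement + (1 - ALPHA) * theta_hat   # 2. estimate
    irrigation = max(0.0, KP * (TARGET - theta_hat))            # 3. decide
    theta_true += irrigation - ET_LOSS                          # 4. act; field dries via ET
    print(f"day {day}: estimate={theta_hat:.3f} irrigation={irrigation:.3f}")
```

In the research programme itself, the estimation step is an information-fusion problem over large, spatially distributed fields and the decision step is an optimization-based controller, which is where the nonlinearity, uncertainty and scale challenges mentioned in the abstract arise.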
http://iot.jiangnan.edu.cn/info/1054/4720.htm
GSR did an installation on a macadamia plantation. Contact David for access to the demo site and see for yourself the value of irrigation scheduling. Save water and electricity whilst increasing crop quality. It's a win-win situation!

Sensorian potted nodes in action 29/12/2017
Enjoy the slides! Hopefully you will come to love Sensorian Telemetry as much as we do; not only are they extremely photogenic, but they are tough!

Water usage 5/11/2017
South Africa is classified as a semi-arid country with 465 mm of average annual rainfall. This is below the global average, estimated at 860 mm per annum. Roughly 20% of South Africa receives less than 200 mm and 47% receives less than 400 mm yearly. The figure below indicates the rainfall distribution. [Figure 1: South Africa's Mean Annual Precipitation (MAP) (Source: Schulze 2011)] Water use for irrigation agriculture takes place on an estimated 1.1% of South Africa's total land surface area, often in low-rainfall areas where supplementary irrigation needs are very high or total irrigation is practiced. A relatively inefficient mode of irrigation was found to be in use in most cases. In 2000, eleven of the 19 catchments listed in the table below indicated a negative water balance. In South Africa's major catchments, local water demands exceed the reliable local yields. [Table 1: Reconciliation of water availability and requirements for 2000 (million m³/annum) (Source: DWAF 2000); * refers to the amount that can be reliably provided 98 years out of 100, with ecological reserve requirements already subtracted] The figure below indicates South Africa's water usage by sector. Irrigation comprises nearly 60% of total water consumption. Irrigation water usage in agriculture was 6,907 million m³ in 2002. This was 87% of the total water allocation for irrigation purposes (7,920 million m³), as reported by the Department of Water Affairs and Forestry. [Figure 2: SA water use by sector (Data Source: DWAF 2000)]

Levubu Macadamia Study Group 14/10/2017
We are looking forward to the Levubu Macadamia Study Group on 19 October 2017. We will be giving a lecture on irrigation scheduling and automation technology and look forward to meeting all the mac growers. Compaction and water saturation of soils are the main barriers to soil oxygen transport, water being the more effective barrier (Papendick and Runkles, 1965; Moldrup et al., 2000a; Neale et al., 2000). The diffusion of gases in water is slower than their diffusion in air by a factor of 10⁴ (Call, 1957; Moldrup et al., 2000a; 2004; Thorbjorn et al., 2008). Suzanne DeJohn (2017) summarizes this problem adequately: "Soil that's too wet can also cause wilting, as excess water pushes air out of the soil and suffocates the roots." Plants need oxygen to absorb water and nutrients from the soil. Waterlogged soil essentially "drowns" the root zone, impeding the biological and chemical processes necessary for healthy plant growth and crop production. As indicated above, irrigation scheduling reduces waterlogging problems, thereby assisting with soil aeration by minimizing the most significant barrier to soil oxygen transport, i.e. too much water. Farmer's Weekly posed an interesting question in one of their September 2017 articles: essentially, "How many harvests are left in your soil?" Maria-Helena Semedo, speaking at World Soil Day (2016), stated that the world's topsoil could be gone within 60 years should the current degradation continue.
Some of the main causes of soil degradation include chemical-heavy farming methods, deforestation, erosion and global warming. Professor Raj Patel points to runoff water from farms, often contaminated with high volumes of fertilizer and other chemicals, as a culprit. "The story of industrial agriculture is all about externalizing costs and exploiting nature," Patel states. South African farmers are already taking the lead by becoming more ecologically accountable, incorporating green farming practices and turning to sustainable farming methods and water conservation. One of the methods South Africans employ is irrigation scheduling, or the precise control of irrigation. The question now remains: how do irrigation scheduling and sustainable farming form a beneficial symbiotic relationship?

The Journal of Experimental Botany describes irrigation scheduling as "conventionally aimed to achieve an optimum water supply for productivity, with soil water content being maintained close to field capacity. In many ways irrigation scheduling can be regarded as a mature research field which has moved from innovative science into the realms of use, or at most the refinement, of existing practical applications. Nevertheless, in recent years there has been a wide range of proposed novel approaches to irrigation scheduling which have not yet been widely adopted; many of these are based on sensing the plant response to water deficits rather than sensing the soil moisture status directly (Jones, 1990a)."

"Irrigation scheduling is conventionally based either on 'soil water measurement', where the soil moisture status (whether in terms of water content or water potential) is measured directly to determine the need for irrigation, or on 'soil water balance calculations', where the soil moisture status is estimated by calculation using a water balance approach in which the change in soil moisture (Δθ) over a period is given by the difference between the inputs (irrigation plus precipitation) and the losses (runoff plus drainage plus evapotranspiration). Soil moisture measurement techniques have been the subject of many texts and reviews (Smith and Mullins, 2000; Dane and Topp, 2002)."

The former category, relying on direct soil moisture measurement, is the more reliable method, especially with hourly measurements, since continuously current data removes the need for additional estimation. Continuous data adapts and reacts to changing weather and other variables that would otherwise make precision irrigation scheduling a guessing game. The Journal of Experimental Botany indicates that "the water balance approach is not very accurate". With increasing water restrictions and unpredictable dry spells, irrigation scheduling is fast becoming a crucial tool for sustainable farming. A farmer is able to accurately control their water usage through irrigation scheduling and thereby contribute to sustainable farming practices. The farmer therefore cannot afford to use inaccurate methods of irrigation scheduling, as this would impact crop yield and soil health.
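The soil water balance calculation quoted above reduces to simple arithmetic, so a worked example may help. The sketch below is a generic Python illustration of Δθ = (irrigation + precipitation) - (runoff + drainage + evapotranspiration); the numbers are invented, and it implements no particular crop or drainage model.

```python
def soil_moisture_change(irrigation: float, precipitation: float,
                         runoff: float, drainage: float,
                         evapotranspiration: float) -> float:
    """Soil water balance: change in soil moisture over the period (all in mm)."""
    inputs = irrigation + precipitation
    losses = runoff + drainage + evapotranspiration
    return inputs - losses

# Example week: 20 mm irrigation, 5 mm rain, 2 mm runoff,
# 4 mm drainage and 28 mm evapotranspiration.
delta_theta = soil_moisture_change(20.0, 5.0, 2.0, 4.0, 28.0)
print(f"soil moisture change: {delta_theta:+.1f} mm")  # -9.0 mm, i.e. the soil dried
```

Every term on the loss side must itself be estimated rather than measured, which is exactly why the quote above calls the balance approach "not very accurate" compared with direct soil moisture measurement.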
According to Agriculture and Agricultural Science Procedia, "Water is considered as the most critical resource for sustainable agricultural development worldwide. Irrigated areas will increase in forthcoming years, while fresh water supplies will be diverted from agriculture to meet the increasing demand of domestic use and industry. Furthermore, the efficiency of irrigation is very low, since less than 65% of the applied water is actually used by the crops. The sustainable use of irrigation water is a priority for agriculture in arid areas. So, under scarcity conditions and climate change, considerable effort has been devoted over time to introduce policies aiming to increase water efficiency, based on the assertion that more can be achieved with less water through better management" (Konstantinos Chartzoulakis, 2015). Precision irrigation scheduling, therefore, becomes a critical tool for farmers. Precision-engineered equipment (probes, nodes, valve actuators and web-based software) are vital tools of the trade that allow the farmer to essentially "see" into the soil and therefore make informed decisions about watering needs.

Irrigation scheduling has several advantages; apart from the obvious decrease in water usage, farmers report a range of further benefits. Adding valve actuation/automation to irrigation scheduling will further enhance the benefits, as probe data can determine the length of irrigation required and shut off valves in real time. The importance of irrigation scheduling is that it enables the farmer to apply the exact amount of water. This increases irrigation efficiency. A critical element is accurate measurement of the volume of water applied or the depth of application. A farmer cannot manage water to maximum efficiency without knowing how much was applied. This is where valve automation becomes critical. As the units are linked on the same software platform, real-time data can determine the irrigation requirements, thus maximizing the usage of available water.

AGFO Expo 2017 16/9/2017
The AGFO Expo came to a close on Saturday, 16 September. The E-Irrigation team had a ton of fun and met a treasured new friend and reseller, David Guthrie. From here E-Irrigation will head to the Western Cape. We haven't forgotten about our "Plaas Besoek" event and look forward to meeting the farmers of the Olifants River Valley who invited us. Enjoy the photos from the expo!

Welcome GSR 16/9/2017
We are proud to welcome David Guthrie to our team! As our official reseller in the Lowveld, we could not be more excited! GSR has a wide range of innovative products, the Hog, featured below, being one of them. Once again, welcome!

AGFO Expo and other news 9/9/2017
Electronic Irrigation Solutions will be at AGFO from 14 to 16 September. We will be teaming up with GSR at the event and look forward to meeting the colorful people of Mpumalanga. Macadamia farmers, be sure to take a peek at our exhibition; we will change the way you think about irrigation scheduling, guaranteed. After the AGFO Expo we will be heading to the Western Cape on a little tour we like to call "Plaas Besoek". We invite all farmers in the Western Cape to drop us an invitation to visit their farm and introduce them to Sensorian and the best form of irrigation scheduling and automation. We decided to enlarge our social media footprint by creating Twitter, Instagram and Google+ accounts. Please bear with us as these accounts gain momentum, since our Facebook page is still the main means of transmitting information. Search for us and add us! Twitter: @e_irrigation Instagram: electronicirrigation Google+: Electronic Irrigation Facebook: Electronic Irrigation Solutions. Happy farming and see you at the AGFO Expo!

The ultimate node 4/9/2017

Pecan Conference 22/3/2017
The Pecan Conference was a huge success, with many potential clients asking about Sensorian. A very special thank you goes out to Sensorian for sponsoring the launch at the conference. Our ad in the Sandvelder looks stunning.
The online edition can be viewed at the following link: www.sandvelder.com/sandvelder-uitgawe-21/

Advertising sneak peek 28/2/2017
The South African National Pecan Conference is a few days away; advertising material will be rife at SA Pecans in Hartswater, where the conference will be held. We want to show our customers and visitors what our sexy little ad will look like. Evergreen Fairway is undergoing a name change (Electronic Irrigation Solutions). Keep your eyes peeled for Sensorian. We will knock the proverbial socks off of irrigation scheduling and automation.

Sneak peek at the launch 14/2/2017
We are getting our gear ready for the South African Pecan Conference in Hartswater that is taking place next month. Enjoy some sneak peeks at our sassy display, sponsored by Sensorian. Guests will be able to see our automation system (AVC) in full action with a special display case.

The South African Pecan Conference 25/1/2017
14 to 15 March 2017. Electronic Irrigation Solutions will be attending the South African Pecan Conference on 14 and 15 March 2017 in order to launch an exciting new irrigation scheduling platform! Sensorian, developed by Donix, will be the ultimate irrigation scheduling platform that allows one to automate and use irrigation scheduling on one sleek platform. Everything is web-based. We can't wait to introduce Sensorian to the world...

Wireless without the contract 17/1/2017
This gateway unit (currently in development and to be released in March/April 2017) will connect directly to household WiFi, allowing the Router Gateway to utilize the client's current WiFi router to communicate with the scheduling software. Essentially it means clients gain more independence and control over their irrigation scheduling. Data usage of the Router Gateway is minimal (up to 8 MB a month) and would not cause lag in the client's internet connection, nor can the client's WiFi password be hacked via the gateway unit. The Router Gateway and system software are protected with strong encryption; that same encryption would be present in the new development to protect the client's privacy and preserve data integrity. This new gateway unit would truly upgrade the Router Gateway system to a full-blooded WiFi system.

GSM (stand-alone unit) 12/1/2017
Ideal for smaller operations, the GSM unit communicates directly with the scheduling software. The unit functions independently and connects directly to the probe. Various add-ons can turn the GSM unit into a mini weather station, affording the user more precision over their irrigation scheduling and watering needs. Technical specifications of the GSM:
- SDI-12 compatible
- Rechargeable battery with solar panel
- GPS add-on option
- Support for Davis rain gauge, digital temperature and humidity sensors
- IP65 enclosure

Wireless system 12/1/2017
The wireless system allows the farmer to effectively use the irrigation scheduling program on a large scale. The AVC integrates seamlessly with this system, giving the user a unique platform where irrigation scheduling and automation are married. Technical specifications of the Router Gateway:
- One router supports up to 20 probes and up to 50 AVC units
- Converts 6LoWPAN communication to GSM
- Supports up to 20 mesh devices
- Over-the-air firmware update support
- Solar/battery powered
- SMS features

The Router Gateway is responsible for communicating probe data to the software program; however, it is the nodes that communicate probe data to the router.
The wireless mesh network allows the nodes to search for the shortest path to the router, ensuring effortless communication. Technical specifications of the nodes:
- Connection up to 700 m distance with a clear line of sight
- SDI-12 compatible
- 6LoWPAN network communication
- Battery/solar powered
- Onboard temperature sensor
- 14 days of logging for 5-sensor probes at one-hour sampling intervals
- Over-the-air firmware update support
- IP65 enclosure

AVC... a special piece of tech 10/1/2017
The Automated Valve Control is certainly a special piece of technology. Developed by Sensorian, this little box has a few surprises hidden beneath the sleek exterior. Some technical specifications:
- Drives 2 or 3 latching solenoids
- Solenoid output power up to 24 W
- Adjustable solenoid drive pulse output
- On/off 24-hour schedule per valve output
- Forms part of a mesh telemetry network
- Flow pulse count input
- Pressure transducer input
- Continuously battery powered with solar panel and MPPT charger
- NEMA 4, IP65 enclosure
- 6LoWPAN network communication
- 5-cycle, 24-hour schedule
- Over-the-air firmware update support

The AVC paired with the wireless system makes it the only automation and irrigation scheduling system offered on one platform. That makes for some effective irrigation scheduling. The AVC has a sleeker design for those concerned with issues of theft. The KZN build, as pictured above, has been specifically designed to discourage solar panel theft and damage, whilst retaining all the features. Paired with a pump-start unit, the AVC will operate on virtually any pump system.

About us 10/1/2017
We all need a good introduction! Our Italian blood demanded that we take matters into our own hands regarding irrigation scheduling and valve actuation/automation. Agriculture is experiencing a technological makeover, and e-Irrigation, partnering with Sensorian and Donix, is taking the lead in providing some sweet technology. Our main focus is ease of use and convenience. Who wants to struggle with complicated graphs, wasting precious time trying to understand an overly complicated program, when your attention is needed somewhere else? We understand that your time is precious, and that the challenges you face in your orchard are unique. That is why we pride ourselves on being problem solvers. It makes sense, after all, that irrigation scheduling should embrace the digital era.

Welcome! Here we address interesting topics regarding irrigation scheduling and equipment development, as well as have some fun.
https://electronicirrigation.weebly.com/e-irrigation-blog
IOT BASED SMART PRECISION AGRICULTURE IN RURAL AREAS
European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 4, Pages 1443-1451

Abstract: The purpose of this project is to give the farmer a complete irrigation system using the Internet of Things. Designing a cost-efficient automatic irrigation system that reduces water waste is a challenge, and several criteria must be taken into account in order to decide the appropriate amount of water for the plants. The suggested scheme consists of different kinds of low-cost, low-power sensors, for example a soil moisture sensor and a temperature sensor. A Raspberry Pi is connected to the sensors to control the opening of the irrigation valve, and a phone is used for remote control. All sensors communicate with the Raspberry Pi. A sound module with a buzzer is used to produce animal-deterrent sounds and to notify the designated individual, connected via LoRa. A PIR sensor is used for human detection. The soil moisture sensor measures the soil moisture level, which is transmitted over LoRa and shown on an LCD. Additionally, the moisture level value is sent to the user's mobile device through a web page.
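To make the described sensor-to-valve flow concrete, here is a minimal sketch of the control loop the abstract implies: read the soil moisture, open or close the valve against a threshold, and sound the buzzer when the PIR sensor reports motion. The hardware access is stubbed out with simulated read/set functions, and the threshold is an assumed value; the paper's actual pin wiring, LoRa link and web page are not specified here.

```python
import random
import time

MOISTURE_THRESHOLD = 40.0   # percent; assumed trigger level, not from the paper

# Simulated stand-ins for hardware access -- on the real system these would
# wrap an ADC driver (moisture sensor) and GPIO calls (valve, PIR, buzzer).
def read_soil_moisture() -> float:
    return random.uniform(20.0, 60.0)      # simulated sensor reading, %

def read_pir() -> bool:
    return random.random() < 0.05          # simulated occasional motion

def set_valve(open_valve: bool) -> None:
    print("valve", "OPEN" if open_valve else "CLOSED")

def set_buzzer(on: bool) -> None:
    if on:
        print("buzzer ON (motion detected)")

for _ in range(5):                         # a few cycles instead of an endless loop
    moisture = read_soil_moisture()
    set_valve(moisture < MOISTURE_THRESHOLD)   # irrigate only when the soil is dry
    set_buzzer(read_pir())                     # deter intruders/animals on motion
    time.sleep(1)
```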
https://ejmcm.com/article_1842.html
Two-thirds of Africans are currently dependent on farming for their livelihoods, 80% of them smallholders. African soil is known to contain fewer essential nutrients than soil on other continents and has therefore historically been considered of lower quality. The average yield of cereal crops in Africa, for example, is about 1,265 kg/ha, whereas the average in the United States is about 7,340 kg/ha. This poor soil quality leads to decreased crop yields and underutilisation of farmland on the continent; 55% of the total land area in Africa is unsuitable for cultivation. Africa thus needs to return to fundamentals and pursue development initiatives that address poor soil quality.

Fast and accurate soil information
With most African countries only producing agriculture at the subsistence level, efforts regarding the collection and management of soil information are minimal. However, collection of real-time data on soil nutrients, pH, temperature, water content, and organic matter is key to helping farmers make informed decisions. This will ensure proper soil conservation while sustainably improving productivity. Technology will leapfrog the speed of soil data collection and analysis to inform the industry. One such technology is the UjuziKit developed by UjuziKilimo Solutions. The soil-based sensor collects data on soil pH, moisture levels, electrical conductivity, and nitrate, phosphate and potassium content. The Ujuzi software then analyses the data and, through an algorithm, matches it with the optimum conditions for different crops to boost crop production. The system sends recommendations on suitable seeds, soil treatment methods, water level requirements and fertilisation to farmers' phones via short messages. The farmers then use this information to make fast and accurate decisions on the best crops to plant, the required fertiliser rates and general crop husbandry practices to attain maximum productivity.

Innovating for irrigation
Irrigation in Africa has the potential to boost agricultural productivity by at least 50%, but food production on the continent is almost entirely rainfed. The area equipped for irrigation, currently slightly more than 13 million ha, makes up just 6% of the total cultivated area. Water scarcity for agriculture is a major problem, resulting in a large percentage of soils becoming unproductive. Efficient use of the available water for irrigation is therefore very important. Technological innovations, such as soil moisture sensors, measure the volumetric water content in the soil and ensure precise irrigation, sustainably improving farm productivity to meet food demands. Sustainable soil fertility monitoring and precise irrigation are also expected to reduce the quantity of fertilisers and water used for irrigation by more than 30%, whilst increasing production by 40% within the first year. Within 3 years, production is expected to increase by up to 200%. Thus, as Africa races to feed its fast-growing population, the key to achieving food security lies in the ability of farmers to switch to smart farming solutions, which can sustainably boost farm yields without putting an irreparable strain on the continent's soil.
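As an illustration of how sensor readings can be matched against per-crop optima in the way described above, the sketch below scores a soil reading against optimum ranges and emits an SMS-style recommendation. The crops and ranges are invented placeholders; UjuziKilimo's actual algorithm and thresholds are not public and are not reproduced here.

```python
# Hypothetical optimum ranges per crop: (min, max) for pH and moisture (%).
CROP_OPTIMA = {
    "maize":   {"ph": (5.8, 7.0), "moisture": (50.0, 70.0)},
    "beans":   {"ph": (6.0, 7.5), "moisture": (40.0, 60.0)},
    "sorghum": {"ph": (5.5, 8.5), "moisture": (30.0, 50.0)},
}

def score(reading: dict, optima: dict) -> int:
    """Count how many measured parameters fall inside the crop's range."""
    return sum(lo <= reading[k] <= hi for k, (lo, hi) in optima.items())

def recommend(reading: dict) -> str:
    """Pick the crop whose optima best match the reading (SMS-style message)."""
    best = max(CROP_OPTIMA, key=lambda crop: score(reading, CROP_OPTIMA[crop]))
    return f"Recommended crop: {best} (pH {reading['ph']}, moisture {reading['moisture']}%)"

# Example field reading from the soil sensor.
print(recommend({"ph": 6.2, "moisture": 65.0}))  # -> maize: only crop with both in range
```

A production system would of course weigh many more parameters (N, P, K, conductivity, temperature) and encode regional agronomic knowledge, but the match-and-recommend structure stays the same.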
https://spore.cta.int/en/opinions/article/a-switch-to-smart-farming-is-needed-for-africa-s-soils-sid0be2d4bc3-85a7-4bfc-aaac-e18086c88ddd
This project is based on a moisture sensor used to measure the moisture content of the soil. The design mainly involves a Global System for Mobile Communications (GSM) link and control circuitry built around a microcontroller. The project used software such as BASIC for programming the microcontroller's application software and Visual Basic for interfacing the hardware with the mobile phone. Protel or Workbench schematic software was used for designing the circuit diagram, and Express PCB software for designing the printed circuit board (PCB); since PCB fabrication is a lengthy process involving a number of expensive machines, it was outsourced. The DTMF 8870 IC acts as an interface between the user and the system, as it is a receiver that links to the GSM network; the PIC16F73 microcontroller contains the software that determines the conditions of the system, which can be displayed on a liquid crystal display and transmitted via mobile phone to the dual-tone multi-frequency (DTMF) receiver that is part of the control system on the farm. New technologies help to increase productivity with less manpower, while conserving water in the process. Keywords: Network, Microcontroller, Conservation, Global system for mobile communication.

CHAPTER 1
1.0.0 INTRODUCTION
Irrigation systems are grouped into two major categories, namely pressurized and gravitational. Pressurized systems involve the use of drip irrigation and sprinklers, while gravitational systems utilize furrows and canals. These methods, when used in an irrigation system, consume a lot of water and therefore contribute to wastage of a precious resource without which mankind cannot survive. Automation of irrigation offers control of water that allows the use of small quantities of water without affecting the overall production yield of a farm; its major aim is to optimize and efficiently manage the water going to an irrigation field. This eliminates the need for the continual presence of an operator to control water during the irrigation period and thus reduces human error to a minimum. All decisions are made by the microcontroller depending on the conditions obtained from the moisture sensor.

Food supply and demand is a challenge to the government of Kenya due to an ever-increasing population, which is directly proportional to food production; this implies that water, the main ingredient in crop production, will continue to diminish. According to the FAO, to maintain food stability for the next three decades it is advisable to increase the acreage of irrigated land by 34% in all developing countries, and 14% more water should be extracted for farming. Kenya is classified as a water-deficit country according to the World Bank (2007), the Institute of Economic Affairs (2006), and Clark and King (2004). By 2009 the Kenyan population was 37 million according to the Kenya National Bureau of Statistics (2009), and it was growing annually by almost a million, which means that by the year 2014 the population of Kenya was anticipated to be around 43 million. Kenya is endowed with a water resource of 595 cubic metres per capita, far below the global annual water-poverty line of 1,000 cubic metres per capita. According to NEMA (2003), this very important resource may recede to 250 cubic metres per capita in the next twelve years, when compared to other changes occurring in sub-Saharan Africa.
According to Fereres and Connor (2004), food production obtained through irrigation globally accounts for 40% of the total, while only about 17% of the land set aside for food production is irrigated. The single biggest global water problem is scarcity, as asserted by Jury and Vaux (2003). From the above statements, there is no doubt that in the future there will be a crisis of water supply if proper utilization techniques are not adhered to. Water is life and is indispensable in any society, since without it agriculture will not flourish. Global climate change will affect both the quantity and quality of life for mankind; proper utilization of water is a matter of national importance to the government of Kenya, and to achieve this goal technology should be adopted to control water in any irrigation scheme. It is for this reason that I consider developing a solution to enhance efficiency in food production. The core objective is to manage irrigation for optimum food production while also reporting data remotely on every occurrence in the field with the help of a mobile phone and a DTMF receiver. The user will be able to switch the irrigation system ON and OFF.

Features of the intended system
The system sends signals via a mobile phone to the DTMF receiver; if there is a problem, the operator of the farm can be notified and action taken to restore the system to normalcy. The system also detects the water level, so that measures can be taken immediately in case there is no water in the reservoir. The system is applicable in isolated areas with large farms, where operators would otherwise be required to be in the fields most of the time to monitor the irrigation; this technology will no doubt be of great help to farmers, as it requires few operators in the field and saves on the water used for irrigation.

The commonly used automation systems are:
- Time-based systems
- Open-loop systems
- Computer-based irrigation systems
- Real-time feedback systems
- Volume-based systems
- Closed-loop systems

A time-based system makes use of controllers to gauge the amount of water to be applied in an irrigation system. According to Cardenas-Lailhacar (2006), timers and controllers can give wrong information, which may lead to over- or under-irrigation, and therefore require proper programming and testing. An open-loop system makes use of a schedule, which Bomen et al. (2006) describe as the timing of the irrigation process over a period, using either volume of water or time as the control function. A computer-based irrigation control system is an interface of hardware and software sections that acts as the intelligent part of the system, whose function is to monitor changes in the irrigation system via a computer; it can alert the user if there is a problem in the system. A real-time feedback system is driven by plant requirements and specific set parameters; Rajakumar (2008) describes sensors as a means of providing feedback to the controller to enable it to effect operation. A volume-based system uses a predetermined volume of water that can be applied to the field at once, obtained by using valves with meters that enable control. Lastly, a closed-loop system makes use of feedback from either a single sensor or several sensors, providing the irrigation decisions to be carried out based on the data obtained.

1.0.1 JUSTIFICATION
The study was necessitated by the looming scarcity of water for industrial applications, human consumption and agriculture.
The identification of irrigation as one of the drivers of Vision 2030 justifies the need for efficient utilization of water in irrigation systems. Secondly, increasing food productivity through irrigation will put a lot of pressure on water supplies; the study will therefore help in management, to guard the future against effects arising from poor usage of water. Thirdly, with only 19.6% of Kenyan land under irrigation according to the NIB (2008), yet using half of the available water, this is a worrying trend that needs urgent mitigation if more land is to be put under irrigation. Manual irrigation consumes a lot of time, apart from being labour-intensive, and needs frequent monitoring, whereas automatic systems can be programmed to turn the system ON and OFF depending on the parameter to be controlled. The irrigation control methods used in Kenya are mostly manual, and a lot of water is wasted during irrigation. This project seeks to help minimize water usage, hence enhancing conservation.

1.0.2 RESEARCH QUESTION
In order to achieve the objectives of this project, the following research questions were formulated for designing an automatic irrigation control system that optimizes water use in an irrigation system:
1. What modern methods can be used to determine water saving in irrigation systems?
2. What are the possible methods of controlling irrigation remotely?
3. What are the ways of cutting costs in an irrigation system?

1.0.3 OBJECTIVE
Main objective
The main objective of this project was to design, construct and test an automatic irrigation control system.
General objectives
1. Recognize the need for water saving in irrigation systems
2. Use a mobile phone to control an irrigation system
3. Reduce the workforce needed on the farm

1.0.4 SCOPE
This project is a paradigm shift from manual irrigation to automatic irrigation. Sensors are used to monitor the humidity level in the soil and the water level in the tank, which are processed by the microcontroller to indicate the ON or OFF condition of the system.

1.0.5 PROBLEM STATEMENT
Irrigation was identified as one of the pillars of achieving Vision 2030 by the government of Kenya in 2007. In its manifesto, dubbed Kenya Vision 2030, the country aims to conserve water and to start new ways of harvesting and using rain and underground water to promote agricultural productivity. Kenya is a water-scarce country according to the World Food Programme. The Vision 2030 initiative proposes intensified application of science, technology and innovation to raise productivity and efficiency, but irrigation expansion is likely to increase the scarcity of water, which will lead to competition for the available water among irrigators, industries and pastoralists. It recognizes the critical role played by research and development in accelerating economic development in all the newly industrialized countries of the world. Recently launched irrigation schemes, e.g. the one-million-acre Galana-Kulalu irrigation scheme, still embrace the use of manual irrigation, which involves using more water since there is no control; this project therefore proposes the use of an automatic irrigation control system.

1.0.6 SIGNIFICANCE AND CONTRIBUTION OF THE PROJECT
The automatic irrigation control system (AICS) is a method that utilizes automation, and the use of a microcontroller makes it cheap in terms of cost and maintenance.
Soil moisture content is used in various fields: in agriculture to determine favorable conditions for growing crops, and in environmental monitoring, where biological changes can be observed. In this project, AICS will control the amount of water used in an irrigation system by discharging the right amount at the right time.

1.0.7 AIM

The aim of this project is to critically assess the automation of an irrigation system using a moisture sensor and a microcontroller as the main brain of control. It is evident, when compared with theoretical knowledge, that there are inherent problems associated with RF signals due to interference, and that moisture sensors give different readings at different depths and in different soil samples.

1.0.8 ASSUMPTIONS

The following assumptions were made in this project:
1. Temperature was assumed to have no effect on moisture readings.
2. Interviews were limited to people living around the Tana River Irrigation Scheme.

CHAPTER 2

2.0.0 RESEARCH METHODOLOGY

This chapter discusses the methods used to collect and analyze data. It explains the research design, sampling techniques and data collection methods used, and describes how the data collected from the research was analyzed. Research methodology is the means of carrying out a research process: it starts with a research question, proceeds through ways of answering that question, and may yield one or several answers that present conclusive evidence to the audience, contributing new knowledge to an existing system or a new invention. This project employed a qualitative research concept consisting of three methods: literature review, case study and project design. To investigate the research area in depth and answer the research questions, the literature review and interviews were used to collect qualitative data, whereas questionnaires were used to collect both qualitative and quantitative data. To ensure data quality, reliability and validity must be tested.

2.0.1 CASE STUDY

A case study is one of the most important research methods; it uses observation to learn about, and collect data on, a particular area. In this project, irrigation methods that tend to conserve water in the irrigation schemes of the Tana River delta were investigated, providing a detailed picture of how irrigation is carried out in the area. Case studies are by definition carried out in the real world and thus possess a high degree of realism; according to Seaman (1999), case studies are based mostly on qualitative data complemented by quantitative data, which together provide a better understanding of the studied phenomenon. A case study may also borrow features of other research methods: for example, a survey may be carried out, or archival records may form part of the data collection.

2.0.2 LITERATURE REVIEW

A literature review is an inevitable part of research, since it gives concrete knowledge of the area in which one intends to work (see Chapter 3: Literature Review). The literature review was carried out to identify research bottlenecks and to refine the research topic; this is achieved by learning from previous work done by other researchers in the same field.
2.0.3 PROJECT DESIGN

A project design is a plan that guides the investigator through collecting, analyzing and interpreting observations; it is a logical model of proof that allows the researcher to draw inferences concerning causal relationships among the variables under investigation (Yin, 1994). The design covers the sampling techniques as well as the data collection methods used. Design here means the creation of an artifact, a prototype of the automatic irrigation control system. The design stage is crucial because it is where the hardware components are determined, involving circuit drawings that guide the actual fabrication of the artifact. A software program is also designed to interface the system, since the hardware cannot work without communication between the various components. The researcher intends this work to contribute to existing knowledge, drawing on the data obtained from the project design, literature review and case study. The following aspects were considered in arriving at the design solution:

Water utilization and saving
Human interaction
Power consumption
Reliability
Future improvement

2.0.4 PROCEDURES FOLLOWED IN DESIGNING THE SYSTEM

Three general procedures were followed to select an appropriate control system for measuring soil moisture on the farm.

2.0.5 IDENTIFICATION OF MEASURABLE VARIABLES

It is very important to identify precisely which parameters the microcontroller data acquisition interface will measure, and how. The variables typically used in agricultural farming are:

temperature, which affects the plant's metabolic functions
humidity, which affects transpiration as well as the plant's thermal control mechanisms
soil moisture, which affects the salinity and pH of the irrigation water

A sensor for measuring a variable must be readily available, accurate and low in cost; if no such sensor is available, the variable cannot be incorporated into the control circuitry. Variables that cannot be measured continuously can be controlled by the system only in a limited way; for example, the nutrient value of the soil is very difficult to measure continuously over a period of time.

2.0.6 IDENTIFICATION OF THE HARDWARE AND SOFTWARE

In any control system, the functions are specified before deciding what hardware or software to use. The model chosen must:

provide a flexible, easy-to-use interface
ensure a high level of precision and the ability to resist noise
allow for expansion to meet the needs of future growth

Control strategy

The control strategy is an important element of any control system. The simplest strategy uses sensors with threshold values, such as a moisture sensor that directly changes the state of the system, actuating devices at a given minimum or maximum threshold.

2.0.7 MOISTURE PRESET LEVELS

The most important factor in any soil moisture measurement is the moisture content expressed as water fraction by volume, as illustrated by the table below.

[Table 1: Maximum depletion of various soil textures — illustration not visible in this excerpt]

In this project the sensor voltage ranges from 0 VDC to 5 VDC, and the moisture sensor operates within this range. The system is set such that a moisture level of 15% corresponds to a reading of 0.75 VDC, the minimum level that triggers the system ON. At 40% the reading is 2 VDC, the maximum level at which the system remains ON; any value above this triggers the system OFF. A minimal sketch of this mapping is given below.
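The sketch below encodes the section 2.0.7 mapping. The linear scale of 0.05 V per percent moisture is inferred from the two stated calibration points (15% at 0.75 V, 40% at 2.0 V); a real probe's curve may be nonlinear and should be calibrated against soil samples at the working depth.

```python
# Assumed linear voltage-to-moisture scale inferred from the stated points.
V_PER_PERCENT = 0.05     # 0.75 V / 15 % == 2.0 V / 40 % == 0.05 V per %
V_ON, V_OFF = 0.75, 2.0  # trigger-ON and trigger-OFF voltages

def voltage_to_moisture(volts: float) -> float:
    """Convert a 0-5 VDC sensor reading to percent moisture."""
    return volts / V_PER_PERCENT

def next_pump_state(volts: float, pump_is_on: bool) -> bool:
    """Hysteresis band: ON at/below 0.75 V (15%), OFF at/above 2.0 V (40%)."""
    if volts <= V_ON:
        return True
    if volts >= V_OFF:
        return False
    return pump_is_on  # inside the band, keep the current state

# Example: a 1.1 V reading corresponds to 22% moisture and leaves the pump
# in whatever state it was already in.
print(voltage_to_moisture(1.1))    # 22.0
print(next_pump_state(1.1, True))  # True
```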
2.0.8 OVERVIEW OF THE TANA RIVER REGION

The Tana River delta straddles Lamu and Tana River counties; its core delta covers an area of 130,000 ha, mainly in Tana River district. It is classified as one of the largest wetlands in Kenya, with a rich diversity of flora and fauna, and it is home to farmers, pastoralists and fishermen. The agriculture sector employs about 60% of the population, while 40% work in the livestock sector. Commercial rice irrigation by TARDA is the only major modern farming ever started in the delta, but there are now also model farms for maize, beans, bananas and horticulture, intended to be rolled out in the mega irrigation scheme of Galana-Kulalu.

[Figure 1: Map of Tana River — illustration not visible in this excerpt]
[Figure 2: Sample of land for irrigation — illustration not visible in this excerpt]
[Table 2: Statistics of irrigation schemes in Tana River — illustration not visible in this excerpt]

2.0.9 DATA COLLECTION METHODS

A multi-strategy approach, also referred to as triangulation, was used to collect data; it involves using more than one source of data in the study and then cross-checking the findings (Bryman, 2001). Glazier and Powell (1996) recommend this approach because it tends to reflect and explain issues more accurately than any single measure. Moreover, triangulation gives a researcher greater confidence in the research findings than a single method does (Clark and Dawson, 1999). The methods applied to achieve this triangulation effect are discussed below.

(a) Questionnaires

A questionnaire is a data collection technique in which people are asked to respond to the same set of questions in a predetermined order (Gray, 2004). Besides allowing wide coverage, questionnaires save a lot of time and effort, since a single set of questions is duplicated and sent to many respondents. According to Gray (2004) and Bryman (2001), questionnaires are less costly and allow respondents to complete them at a time and place that suits them, thereby limiting any interference and bias that the presence of the researcher could cause. Several disadvantages are associated with this data collection technique:

(i) Low response rate
(ii) Difficulty in probing respondents, since personal contact is lost
(iii) No allowance for respondents to ask questions should clarification be needed
(iv) Greater risk of missing data

Some of these drawbacks led the researcher to adopt interview questions instead. This was prompted by the literacy levels of people in Tana River County: a questionnaire would have required explanations to give the respondents insight into the study and elicit relevant and useful data. Questionnaires were therefore not found suitable for this study.

[...]

Quote paper: Stephen Kipkebut (Author), 2014, Automatic irrigation control system, Munich, GRIN Verlag, https://www.grin.com/document/286712
https://www.grin.com/document/286712
Irrigation is an important topic because agriculture needs a lot of water, and the proof is in the numbers. A 2012 USDA report, drawing on U.S. Geological Survey (USGS) data, provides some eye-opening figures about the U.S. agriculture sector's irrigation and water usage: According to the survey, which estimates water use by economic sector, agriculture accounts for approximately 80 to 90 percent of U.S. consumptive water use. The USGS survey also noted that roughly 56 million acres, or 7.6 percent of all U.S. cropland and pastureland, are irrigated in some way (that year, nearly three-quarters of irrigated acres were in the 17 western-most states, excluding Hawaii). Irrigation usage and irrigated areas change over time, as do irrigated acreage shares by state, in response to:
- local & regional water supply/demand and agronomic conditions
- economic and domestic/export crop-market considerations
- long-term climate change.
But the fact remains: good stewardship of water resources and advancing the efficiency of irrigation techniques are critical not only to the future profitability of agriculture but also to the survival of our land, wildlife, and human population.
Dr. Wesley Porter, an Extension Precision Ag and Irrigation Specialist with the University of Georgia, is one of many experts across the country studying solutions and providing information on technologies and techniques to producers, all to help them better manage their farming operations and profitability as they amend their water management practices. "My research is focused mainly on practical, quick-result trials that can be rapidly implemented by producers to make near-immediate impacts and changes on their farms."

Irrigation – What's Now, What's Next

There are a whole host of irrigation options available to producers these days, and some fascinating research is being done to improve current systems and explore new ones. Of the irrigation types in use in 2010, USGS estimated that:
- 31.6 million acres (51 percent) were irrigated with sprinkler systems (primarily center pivot)
- 26.2 million acres utilized surface (flood) irrigation
- 4.61 million acres employed micro-irrigation systems
The national average application rate for 2010 was 2.07 acre-feet per acre.
As for research, this year the Kansas Water Office (KWO) embarked on a three-year study to evaluate how the new Dragon Line™ precision mobile drip irrigation system, in combination with in-ground moisture probes, can let farmers "visualize" water usage as deep as three feet down, so they can safely cut back on daily water use and move away from the more common practices of blanket-scheduling water applications or plain old eyeballing.
Porter says that while the concept of water sensors is exciting, recent surveys show that currently only about 10% of growers nationally implement some sort of soil moisture sensor. "Cost and how to interpret and use the data continue to be roadblocks," explains Porter. "It's becoming simpler to collect and interpret data, to be sure, but adopting new technology is a culture switch that some farmers are reluctant to make. It's easier for them to just run their systems on a calendar schedule and consistently stick to it."
Porter believes the biggest hurdle standing in the way of broad adoption of sensors and other water-saving irrigation technologies is proof.
"Hard data such as increased yields, reductions in money spent on energy to pump the water, and similar data is what will be required to get more of this technology adopted," says Porter. "Saving water does not mean much if there's no additional benefit to crop yield or profitability."
Tom Willis, owner of T&O Farms in Liberal, Kansas, one of three operations in the state's three-year Dragon Line irrigation study, is pleased with the early results he's seen using water sensors. "Using Dragon Line, 'the water never sees the sun' is how my son describes it," chuckles Willis. "So far, the probe sensors and the drip irrigation have done the job they promised. They have allowed us to not put water down just based on what we see at ground level. Where we've used Dragon Lines, there's also better root development; the roots go farther down."

Current Irrigation Options – What's Best?

In choosing an irrigation system, Porter says producers need to consider several important factors, including:
- type of crop
- fuel cost and availability
- initial cost
- labor requirements
- size and shape of the field(s)
- available water source
He adds that in some situations there may be additional considerations, such as whether the land is owned or leased, since some systems involve installation of ground or underground piping. Farmers should also explore whether some of their existing equipment, such as a well or pumping unit, could be adapted into the system to help minimize costs.
Of the irrigation options currently in use, the most popular method continues to be the center pivot system, used in both circular and square crop fields. Porter notes that even though it may seem a lateral irrigation system would fit many "square" fields better than a circle, the labor requirements, along with the challenge of providing a mobile water and energy source, make it too cumbersome for most producers to implement.
"Center pivots have become very advanced, with many types of technology that make their operation much easier for the user," says Porter. "Some of these technologies include remote operation, both starting and stopping, as well as continuous monitoring and warnings of problems or failures. Many also feature auto-shutdown or warnings if rainfall is detected or there are problems at the pump. All commercially available systems also offer the option (along with a few aftermarket companies) of Variable Rate Irrigation, or VRI."
Drip irrigation, like the system being used in the Kansas water study, is considered one of the most efficient types of irrigation, as the water can be delivered very close to the crop roots with little opportunity for evaporation. "Drip lines used in fruit and vegetable production usually lay either on the surface, just under the surface, or under plastic beds," says Porter. "And thicker-walled drip tubing can be installed sub-surface (known as Sub-Surface Drip Irrigation, or SDI) and can be utilized in row crops." Porter adds that "more producers are considering SDI on row crops in dry-land corners or outside of the pivot circles, or in smaller, irregularly shaped fields where overhead irrigation systems don't work very well."
A related method, seepage or sub-irrigation, is a process of artificially maintaining the water table. The water table is kept between 18 and 24 inches deep by delivering water from pumps, or by gravity and gate systems, to irrigation furrows (shallow open ditches).
It's especially common in the "high water table/muck soil" fields of south Florida, where tomatoes, peppers and sugarcane are grown. Seepage systems are easy to build, operate and maintain; the drawback is inefficiency. It can be challenging to hold the water table just below the crop root zone without oversaturation, and without rainfall to help out, the practice also risks failing to properly wet the soil surface. While drip irrigation has been gaining popularity over the last twenty years, seepage irrigation remains a very common production system in Florida.
Furrow or surface irrigation is an even more basic method of irrigating fields. In one of the oldest techniques, the farmer simply creates small parallel channels along the field in the direction of a slope (either natural or created by the farmer). Gravity then delivers the water down the furrows using gated pipe, siphon and head ditch, or a bank-less system.
Micro-drip irrigation is also gaining popularity, and its "cousin," micro-spray irrigation, used to irrigate tree orchards, is popular because of its ability to administer more precise applications of water at lower volumes.
One other commonly used option on many U.S. farms is the traveling gun. Attached to a mobile irrigation system, it uses a high-pressure big gun to send water across pastures or row crops. "Farmers looking for cheaper solutions, or options for smaller fields where larger irrigation devices won't fit, often choose the traveling gun," says Porter. "Unfortunately, it's not efficient... about 60% of the efficiency of, say, the center pivot... meaning if you set it to apply one inch of water, only 0.6 inch will reach the crop. However, if this is the only option when compared to dry land, it's certainly a valid one."

Word of Warning

Porter offers one critical caveat for producers choosing new or replacement irrigation systems: not hitting the target irrigation amounts or intensities for specific soil types can lead to runoff issues. "It's important to know what soil types you have under your irrigation systems and how they respond to rainfall and irrigation," advises Porter. "Research both the soil water holding capacities (SWHC) and infiltration rates of your soils, as this will help you apply the optimum amounts your soils can handle, and also help you estimate approximately when your soil water will be depleted." A rough sketch of that depletion arithmetic, and of the gross-up implied by the traveling-gun efficiency figure above, follows.
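The helpers below are back-of-the-envelope illustrations of the two calculations just mentioned. The example numbers are assumptions for illustration, not measurements from the article.

```python
def days_until_depletion(swhc_in_per_ft: float, root_depth_ft: float,
                         allowed_depletion: float,
                         crop_et_in_per_day: float) -> float:
    """Days of readily available water left, assuming a full soil profile.
    swhc: soil water holding capacity; allowed_depletion: the fraction of
    stored water management allows the crop to use before irrigating."""
    readily_available_in = swhc_in_per_ft * root_depth_ft * allowed_depletion
    return readily_available_in / crop_et_in_per_day

def gross_application(net_inches: float, efficiency: float) -> float:
    """Water the system must apply so that net_inches reaches the crop."""
    return net_inches / efficiency

# A loam holding 2.0 in/ft, 2 ft of roots, 50% allowed depletion, 0.25 in/day ET:
print(days_until_depletion(2.0, 2.0, 0.50, 0.25))  # 8.0 days
# A traveling gun at ~60% efficiency needs ~1.67 in gross per 1 in net:
print(round(gross_application(1.0, 0.60), 2))      # 1.67
```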
High-Tech Control Advances

Asked whether he could offer a glimpse into what's next in the world of irrigation, Porter pointed to the dramatic rise in remote data acquisition for gathering information and controlling devices like sensors and irrigation systems. "Now a farmer can access and control virtually all of their irrigation systems remotely from anywhere they can get internet access," says Porter. "Connected farms are no longer an idea of the future, but are up and running everywhere across the nation." The concurrent rise of irrigation scheduling tools such as soil moisture sensors, smartphone scheduling apps and web-based schedulers is already making it easier for early-adopter producers to be more efficient with their water usage, and Porter says that at UGA, "we are also working on some interesting advancements on automating the VRI process and hope to have some good information on that out soon."

Keep Learning, Keep An Open Mind

Water-saving irrigation techniques, like so many other facets of agriculture, are evolving in exciting and important ways. Producers owe it to their businesses and their families, as well as to their communities and neighbors, not only to stay informed but also to proactively adopt the best irrigation practices available.
https://agamerica.com/blog/choices-irrigation-understanding-latest-options/
Permanent Link: https://ufdc.ufl.edu/UFE0047240/00001

Material Information
Title: Development of a Precision Irrigation Control System for Horticultural Food Crops in Tanzania
Creator: Fue, Kadeghe Goodluck
Place of Publication: Gainesville, Fla.
Publisher: University of Florida
Publication Date: 2014
Language: English
Physical Description: 1 online resource (92 p.)

Thesis/Dissertation Information
Degree: Master's (M.S.)
Degree Grantor: University of Florida
Degree Discipline: Agricultural and Biological Engineering
Committee Chair: Schueller, John Kenneth
Committee Co-Chair: Schumann, Arnold Walter
Committee Members: Lee, Won Suk; Tumbo, Siza Donald
Graduation Date: 8/9/2014

Subjects/Keywords: agriculture; crops; irrigation; irrigation management; irrigation scheduling; irrigation systems; irrigation water; sensors; soil moisture; soils; controller; horticulture; precision; solar-powered; wireless
Genre: Electronic Thesis or Dissertation (born digital)

Notes: In the series University of Florida Digital Collections. Includes vita and bibliographical references. Description based on online resource; title from PDF title page. This bibliographic record is available under the Creative Commons CC0 public domain dedication; the University of Florida Libraries, as creator of the record, has waived all rights to it worldwide to the extent allowed by law. The abstract is reproduced in the thesis text below.
Thesis: Thesis (M.S.)--University of Florida, 2014.
Adviser: Schueller, John Kenneth.
Co-adviser: Schumann, Arnold Walter.
Statement of Responsibility: by Kadeghe Goodluck Fue.
Source Institution: UFRGP
Rights Management: Copyright Fue, Kadeghe Goodluck. Permission granted to the University of Florida to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Classification: LD1780 2014 (lcc)

DEVELOPMENT OF A PRECISION IRRIGATION CONTROL SYSTEM FOR HORTICULTURAL FOOD CROPS IN TANZANIA

By KADEGHE GOODLUCK FUE

A thesis presented to the Graduate School of the University of Florida in partial fulfillment of the requirements for the Master of Science degree, University of Florida, 2014

© 2014 Kadeghe Goodluck Fue

To my dear mom Flora and my siblings Ibrahim and Elizabeth: you are always making my life good.

ACKNOWLEDGMENTS

I praise and thank almighty God for giving me his blessings and granting me a healthy life to do all my research activities. My sincere appreciation goes to the USAID iAGRI (United States Agency for International Development Innovative Agriculture Research Initiative) for financing my master's studies at the University of Florida (UF). I am happy to thank the USAID Feed the Future program, through the Norman E. Borlaug Leadership Enhancement in Agriculture Program (Borlaug LEAP) at the University of California, Davis, for awarding me an outstanding research fellowship to support my research. My special gratitude goes to the Sokoine University of Agriculture (SUA) for supporting me throughout the two years of my studies as their employee. I extend my heartfelt gratitude to Prof. John K. Schueller, Prof. Arnold W. Schumann and Prof. Siza D. Tumbo for all their support and advice, which facilitated the completion of this study. I also thank my graduate committee member Prof. Won Suk Lee for his support and guidance in this research. Last but not least, I send my special thanks and appreciation to Mrs. Catherine Kilasara at SUA and Mr. Kelvin Hostler at UF for all their technical assistance in laboratory matters of electronics and instrumentation.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
LIST OF ABBREVIATIONS
ABSTRACT
1 INTRODUCTION
  Background
  Purpose of the Research
  Statement of the Problem
  General Research Hypotheses
  Objectives
2 LITERATURE REVIEW
3 MATERIALS AND METHODS
  Design Criteria for the New Irrigation Controller
  Design Procedures Taken
  Development Procedures
    Phase 1 System Development
    Phase 2 Advanced System Development
    Phase 3 System Deployment
  Design of the Irrigation Control System
    Sensors Module
    Humidity Sensor
    Pressure Sensor
    Rain Sensor
    Soil Moisture Transducer
  Bill of Materials
  Precision Irrigation Control System (PICS) Design and Implementation
  PICS Software Flowchart
  The PICS Commands
  Windows-based Application (Tiny Bootloader)
  Android-based Software for Precision Irrigation Control System
  Testing of the Irrigation Controller (Lab Experiments)
  Testing of the Field Application (Field Experiments)
    Study Site
    Okra Crops Used for Testing
    Tank Elevation, Irrigation Installations and Settings
    Data Collection
    Sensor Calibration Tests
4 RESULTS AND DISCUSSIONS
  Sensor Calibration Tests
  Precision Irrigation Control System (PICS) Field Deployment
5 CHALLENGES AND EXPERIENCES GAINED IN THIS RESEARCH
6 CONCLUSIONS AND RECOMMENDATIONS
LIST OF REFERENCES
BIOGRAPHICAL SKETCH

LIST OF TABLES

3-1 Bill of materials
3-2 The PICS commands and their meanings
4-1 Correlation table for sensor 4 and the others in the first experiment
4-2 Sensor readings at different depths at different times
6-1 Cost approximation for the PICS

LIST OF FIGURES
(All photographs by Kadeghe G. Fue unless otherwise noted.)

3-1 Conceptual diagram of the controller (01 June 2014, Morogoro, Tanzania)
3-2 Advanced conceptual diagram of the controller (01 June 2014, Morogoro, Tanzania)
3-3 Advanced conceptual diagram of the controller with voltage booster (01 June 2014, Morogoro, Tanzania)
3-4 RHT03 recommended circuit connections (reprinted with permission from MaxDetect Technology Co. Ltd, https://www.sparkfun.com, June 20, 2014)
3-5 The MSP 300 pressure transducer (reprinted with permission from Measurement Specialties, http://www.meas-spec.com, June 20, 2014)
3-6 Location of the rain sensor and humidity sensor in the field (09 November 2013, Morogoro, Tanzania)
3-7 SDI-12 soil moisture transducer at the field experiment treatment (26 November 2013, Morogoro, Tanzania)
3-8 The PICS breadboard at the laboratory in CREC (10 June 2013, Lake Alfred, Florida)
3-9 Upper part of the PICS PCB (30 June 2013, Lake Alfred, Florida)
3-10 The full PICS PCB (30 June 2013, Lake Alfred, Florida)
3-11 PCB plate received from the PCB123 company (10 July 2013, Lake Alfred, Florida)
3-12 PCB with all other electronic chips soldered on it (23 July 2013, Lake Alfred, Florida)
3-13 PICS with other modules in a weather protection box (25 July 2013, Lake Alfred, Florida)
3-14 PICS installed at the crop museum (23 September 2014, Morogoro, Tanzania)
3-15 Flowchart of the PICS software (25 June 2014, Morogoro, Tanzania)
3-16 The Tiny Bootloader used to load the program to the PICS MC microcontroller (26 June 2014, Morogoro, Tanzania)
3-17 Virtual serial ports software (20 June 2014, Morogoro, Tanzania)
3-18 Screenshot of the PICS Android-based software (26 March 2014, Morogoro, Tanzania)
3-19 Field experiment site at Sokoine University of Agriculture, latitude 6.84, longitude 37.65, downloaded from maps.google.com (23 May 2014, Morogoro, Tanzania)
3-20 Experimental plot layout (29 January 2014, Morogoro, Tanzania)
3-21 Plots of the two treatments with three replications (10 January 2014, Morogoro, Tanzania)
3-22 Tower elevation and tank installations (03 February 2014, Morogoro, Tanzania)
3-23 Straight pipe installations at the site (23 February 2014, Morogoro, Tanzania)
3-24 First experiment to test similarity of the sensors (11 March 2014, Morogoro, Tanzania)
3-25 Sensor water percolation investigation layout (12 March 2014, Morogoro, Tanzania)
LIST OF ABBREVIATIONS

CREC: Citrus Research and Education Center
DC: Direct current
EM: Electromagnetic
ETO: Evapotranspiration potential
HBC: H-bridge chip
IC: Irrigation controller
LCD: Liquid crystal display
LEAP: Leadership Enhancement in Agriculture Program
MC: Master controller (main controller)
PCB: Printed circuit board
PIC: Peripheral interface controller
PICS: Precision irrigation control system
SDI: Serial digital interface
SUA: Sokoine University of Agriculture
TAHA: Tanzania Horticultural Association
TAPP: Tanzania Agricultural Productivity Program
TBL: Tiny Bootloader
TDR: Time domain reflectometer
TDT: Time domain tensiometer
UF: University of Florida
USAID: United States Agency for International Development
WC: Water controller (slave controller)

Abstract of thesis presented to the Graduate School of the University of Florida in partial fulfillment of the requirements for the Master of Science degree

DEVELOPMENT OF A PRECISION IRRIGATION CONTROL SYSTEM FOR HORTICULTURAL FOOD CROPS IN TANZANIA

By Kadeghe Goodluck Fue
August 2014
Chair: John Schueller
Co-chair: Arnold Schumann
Major: Agricultural and Biological Engineering

As the Tanzanian population grows, scarcity of resources such as water and electricity will pose a great risk to horticultural crop production and society's activities. Precision irrigation presents a great opportunity to save water and energy in agriculture. The current off-the-shelf machines used to control water are incompatible with Tanzanian conditions, very expensive, delicate, or do not have enough features. This study proposed a precision irrigation control system (PICS) prototype design and implementation that can be used in African countries, particularly in Tanzania. The PICS is a solar-powered control system that uses low-cost electronic devices to automate drip irrigation and to determine when and how much to irrigate. The hardware and software of the PICS were designed, integrated, and constructed first in the USA and then in Tanzania. Under testing, the PICS worked properly and achieved a high level of reliability and maintainability. The controller is a high-tech tool that can be programmed wirelessly using a laptop, and its data can be downloaded using any Android-based smartphone. The wireless technology incorporated can be used to transfer instant data on rainfall and soil moisture content. The system updates its data every four minutes (but can be reprogrammed), automates data cleaning while collecting instant information on soil moisture, temperature, humidity and rainfall, and can store data for at least seven days.

CHAPTER 1
INTRODUCTION

Precision farming provides tools and techniques that can be utilized for the modern development of farming activities in Tanzania. Tanzania has to prove itself competitive, with high-yield and high-quality crops, using the modern technology provided through precision farming. Precision agriculture is the management of crop production according to localized conditions; Schueller (1992) notes that this management approach is variously known as spatially variable, site-specific, soil-specific, precision, or prescription crop production. Precision agriculture can be profitable, avoid waste and protect the surroundings by using agricultural inputs appropriately.
Precision agriculture provides services in almost all sectors of agriculture, but this study concentrates on precision irrigation. In most semi-arid places, water is now a scarce factor for crop production. Precision farming can provide instant information regarding field conditions; this information is necessary for instant variable-rate application or offline application, and it supports decision management, even future decisions made without the proposed control system. Casadesus et al. (2012) reported that, to support precise, low-labor management of irrigation, they proposed an algorithm that coordinates seven tasks which can be automated: (1) estimation of irrigation needs, (2) adaptation to a particular irrigation setup, (3) execution of the schedule, (4) soil and/or plant monitoring, (5) interpretation of sensor data, (6) reaction to occasional events, and (7) tuning the algorithm to irrigation needs. Casadesus et al. (2012) also proposed an approach to automated irrigation scheduling that combines a feed-forward estimate of irrigation needs, by the water balance method, with a tuning mechanism based on feedback from soil or plant sensors; it provides a common basis that can be configured to support different irrigation strategies and user preferences. Khriji et al. (2014) reported that the challenge is always to create an automated irrigation system which simultaneously reduces water waste and remains cost effective. In this study, the feed-forward estimation is done using real-time control based on feedback from soil sensors: the water balance method is replaced, and soil moisture readings from the sensors estimate the irrigation. Software can be updated, or apps downloaded, on programmable devices such as phones to extend capabilities, fix errors or improve performance. This irrigation control system has such a capability: a farmer can update the software and change the algorithm to an up-to-date one while using the same device. The PICS software, which is Android based, uses Wi-Fi technology to connect to the phone. It displays soil moisture content and control information, and updates of the Android software can be downloaded directly from the internet and installed on the phone. The sketch below arranges the seven Casadesus tasks as one scheduling cycle.
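The skeleton below arranges the seven automatable tasks listed by Casadesus et al. (2012) as a single scheduling cycle. Every function body is a stub: it illustrates the coordination structure only, not the authors' algorithm or the PICS firmware.

```python
def estimate_irrigation_needs():   # (1) water balance or sensor readings
    return {}

def adapt_to_setup(needs):         # (2) map needs onto valves and emitters
    return needs

def execute_schedule(plan):        # (3) open valves, run pumps
    pass

def monitor_soil_and_plant():      # (4) read soil/plant sensors
    return {}

def interpret_sensor_data(raw):    # (5) clean and convert raw readings
    return raw

def react_to_events(state):        # (6) rain, faults, pressure loss
    pass

def tune_algorithm(state):         # (7) feed sensor state back into (1)
    pass

def irrigation_cycle():
    needs = estimate_irrigation_needs()
    plan = adapt_to_setup(needs)
    execute_schedule(plan)
    state = interpret_sensor_data(monitor_soil_and_plant())
    react_to_events(state)
    tune_algorithm(state)
```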
Background

Approximately 70% of the people in Tanzania earn their living either directly or indirectly from agriculture. However, most of these farmers face great challenges due to the lack of reliable markets, dependency on seasonal rains, poor infrastructure, the cost of farm inputs and, finally, ignorance of modern farming skills such as precision agriculture. These challenges greatly affect horticultural crop production in the country. Mashindano et al. (2013) reported on the horticultural market in Tanzania: horticulture market value chain development is one of the specific positioning choices made in Tanzania's national policy frameworks. According to these agriculture-related frameworks, Tanzania intends to promote the horticulture value chain so as to contribute sustainably to increased production, employment and income generation, in order to resolve poverty. Horticultural products have the potential of creating a strong industry in Tanzania, but the industry has been given less attention, which is disquieting given the existing potential in production and the growing world market demand. Horticultural products are among the main export-oriented crops, coming mainly from the northern zone of Tanzania (the Arusha and Kilimanjaro regions), where the field survey of their study was conducted, but also from many other regions such as Coast, Morogoro (where the main experiments of this study took place), Iringa, Mbeya, Manyara and Tanga. Horticultural crops grown in the eastern, southern and northern zone regions, which include Arusha, Kilimanjaro, Tanga, Morogoro, Dodoma and Iringa, include onions, beetroots, okra, carrots, tomatoes, sweet peppers, potatoes, spinach, shallots, leeks, cabbages, watermelons, hot peppers and others. Since it is a big industry, the Tanzania Horticultural Association (TAHA) was formed to develop and promote the horticultural industry in Tanzania, making it more profitable and sustainable and helping it participate more effectively in the development of horticultural crop production. Sergeant (2004) reported that Tanzania has exported a range of traditional agricultural plantation crops, for example coffee, tea, cashew and sisal, for many years. The Tanzanian horticultural export sector is generally regarded as having started in the 1950s with the production of bean seed for sale in Europe, mainly through the Netherlands. Perishable horticultural exports to Europe started in the 1970s; in the 1980s a cut-flower industry was established, followed by the development of a cuttings industry based on chrysanthemums. More recently, there have been a number of more specialized investments, for example in the propagation of hybrid vegetable seeds, higher-value fruits and cut flowers other than roses. Tanzania therefore has a relatively broad-based, if still small, horticultural and floricultural industry focused on supplying the European market. Since the late 1980s and early 1990s there has also been an increasingly important horticultural trade with neighboring countries, most prominently sales of onions, tomatoes, potatoes and oranges to neighbors whose inability to satisfy rapidly increasing market demand from their own production has been made up by imports from Tanzania. The horticultural industry continues to grow: the foreign exchange it generates increased from USD 46.7 million per annum in 2006/07 to USD 112.6 million in 2008/09 and USD 127.7 million in 2010/11 (Mashindano et al., 2013; MAFSC, 2012), and it continued to increase over 2011-2014. Furthermore, in Tanzania there is a program, the Tanzania Agriculture Productivity Program (TAPP) funded by USAID, that assists farmers with farming techniques so as to increase production yield. Considering drip irrigation techniques more effective, TAPP decided to install such systems in Tanzania: about 100 drip irrigation systems were installed in most parts of the country, and more will be installed in future. These irrigation systems use pre-planned scheduling based on the evapotranspiration rate of the zone, the state of development of the crop and the type of soil. As for water usage, for 1 acre (assume a 22 m by 184 m plot) of drip irrigation with a single line per row and 1 litre/emitter/hour, 37 lines of length 184 m will be required for planting at 60 cm by 60 cm spacing. This means approximately 11,346 plants will be connected to the drip lines; hence 11,346 litres/hour will be required for all plants, which demands approximately 11 m3/hour of water applied per acre of land (the sketch below reproduces this arithmetic).
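The short script below makes the text's one-acre assumptions explicit: a 22 m by 184 m plot, 0.6 m by 0.6 m spacing, one 1 L/h emitter per plant. Rounding conventions differ slightly from the text's plant count.

```python
# One-acre drip flow requirement, reproducing the figures quoted above.
FIELD_WIDTH_M, FIELD_LENGTH_M = 22.0, 184.0
SPACING_M = 0.6
EMITTER_L_PER_H = 1.0

lines = round(FIELD_WIDTH_M / SPACING_M)             # 37 drip laterals
plants_per_line = round(FIELD_LENGTH_M / SPACING_M)  # ~307 plants per line
total_plants = lines * plants_per_line               # 11,359 (text: 11,346)
flow_m3_per_h = total_plants * EMITTER_L_PER_H / 1000.0

print(lines, plants_per_line, total_plants)
print(round(flow_m3_per_h, 1), "m^3/h per acre")     # ~11.4 (text: ~11)
```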
Alternatively, the required irrigation can be determined by the TAPP method based on ETO, which uses evapotranspiration calculated from a trusted meteorological station, a determined crop factor (KC) and the flow of the drip lines. From experience, farmers are recommended to apply about 22,000 litres over two hours, or 33,000 litres over three hours, per acre. The above irrigation specification applies to most of the horticultural crops planted in Tanzania. It also needs a technician or farmer to control the irrigation manually, which increases the cost of production. These services are provided by TAPP, and later the farmer is left to continue his or her farming activities alone. In TAPP Production Manual volume 3 (TAPP/USAID), the importance of good irrigation is emphasized: with poor irrigation, poor yields will be the result, but with excellent irrigation and adequate fertilization, very good crop yields will be achieved. Good irrigation is more important than fertilization, and that is why time should be dedicated to proper irrigation. Therefore, this study concentrated on irrigation and the development of an automatic control system specifically for irrigation. Basically, farmers of horticultural crops need a more advanced, hassle-free, controlled irrigation system to protect them from loss of quality due to lack of water, or from diseases due to high humidity. Nevertheless, to my knowledge there are no commercially available automatic irrigation controllers with all the necessary features for horticultural crop production in Tanzania which are also energy efficient, cheap and durable enough for harsh, remote rural environments.

Purpose of the Research

It was proposed to design an advanced irrigation controller for precision irrigation in Tanzania. The system is designed to monitor soil moisture using sensors, data loggers and software algorithms, and then to control drip irrigation. In this research, an attempt was made to develop a precision irrigation control system that can control water using whatever algorithms are programmed into it. It can be programmed to meet precision irrigation parameters such as the amount of water delivered to the field, the uniformity of water distribution, and any shortfall or excess. Ortega et al. (2005) indicated that the right amount of daily irrigation supply, monitored at the right time within the discrete irrigation unit, is quite essential to improve irrigation water management. Proper irrigation scheduling can reduce irrigation demand and increase productivity. A large number of tools are available to support field irrigation scheduling, from in-field and remote sensors to simulation models. Irrigation scheduling models are particularly useful for supporting individual farmers and irrigation advisory services. Using the information provided by several sensors, it is possible to calculate the right amount of daily irrigation supply; in the result of this research, controllers use this technique to calculate irrigation requirements and then initiate irrigation. Culibrk et al. (2014) discuss alternative methods of precision irrigation using satellites; the satellite data of interest for precision irrigation is primarily data relevant to the water cycle, hydrology and meteorology, provided by missions aimed at gathering such data.
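Returning to the TAPP scheduling rule described at the start of this section, the sketch below applies the standard FAO-56 relation ETc = Kc × ETo to get a daily per-acre volume. The Kc and ETo values are illustrative assumptions, not TAPP's published figures.

```python
ACRE_M2 = 4047.0  # one acre in square metres

def daily_volume_litres(eto_mm_per_day: float, kc: float,
                        area_m2: float = ACRE_M2) -> float:
    etc_mm = kc * eto_mm_per_day  # crop evapotranspiration, mm/day
    return etc_mm * area_m2       # 1 mm of water over 1 m^2 is 1 litre

# A mid-season vegetable crop (Kc ~ 1.05) on a 5 mm/day reference-ET day:
print(round(daily_volume_litres(5.0, 1.05)))  # ~21,247 L/acre/day
```

Under these assumed values the result lands in the same range as the roughly 22,000 litres per acre per day quoted above.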
Statement of the Problem

The horticultural food crops industry faces a competitive world market that demands quality products for export across the continents. Africa in general has a lot of land that needs water, which is now quite scarce, and many horticultural crops will have difficulty surviving water scarcity. Generally, the following facts can be presented:
1. Horticultural food crop yield in Tanzania is primarily limited by insufficient water and nutrients for optimum growth.
2. Horticultural food crop quality is significantly affected by the traditional and cultural methods used by horticultural farms in Tanzania.
3. Poor production in Tanzania has been attributed to unfavorable weather conditions, especially poor rainfall in most growing areas.
4. Increasing the quality of Tanzanian horticultural food crops would have a great impact on the world market compared to the current situation.
5. Drip irrigation has already proved very successful for optimum production of high-value horticultural crops such as bell peppers, grapes and tomatoes in Tanzania.
6. Current methods and instruments of precision agriculture may not be suitable for Tanzania, or may need special modification for use in Africa.
7. Therefore, smallholder farmers of horticultural food crops need an affordable and effective method to deploy on their farms.

Mafuta et al. (2013) discussed the cost and effectiveness of off-the-shelf controllers: they are usually expensive and not effective in managing scarce water resources.

General Research Hypotheses

The following are assumed to apply. First, poverty can be reduced and the quality of life improved in Tanzania by improving horticultural food crop production. Secondly, the quantity, quality and profitability of horticultural food crop production can be significantly improved through improved irrigation techniques. Last but not least, computerized precision agriculture and drip irrigation can significantly improve irrigation and the production of horticultural food crops.

Objectives

The main goal of this research project is to develop and implement a precision irrigation control system in Tanzania. The specific objectives are: first, to develop a robust computerized irrigation controller that automatically supplies drip-irrigated horticultural crops with the optimum amount and frequency (timing) of water; secondly, to develop software algorithms for the irrigation controller that automatically adapt to changing soil conditions, in order to minimize user intervention and maximize resource-use efficiency and horticultural crop yields; and lastly, to implement and evaluate the performance of the irrigation controller on a real-world horticultural crop in Tanzania. Satisfying these objectives will demonstrate achievement of the goal of developing and implementing a precision irrigation control system in Tanzania.

CHAPTER 2
LITERATURE REVIEW

Precision farming, also known as site-specific farming, aims to manage production inputs over many small management zones rather than over large zones. It is difficult to manage inputs at extremely fine scales, especially in the horticultural cropping systems of Tanzania, where most farms are smaller than 5 acres. However, site-specific irrigation management can potentially improve overall water management in comparison to conventional irrigated areas of thousands of acres.
A critical element of irrigation scheduling is the accurate estimation of irrigation supplies and their proper allocation to the actual planted areas. All irrigation scheduling procedures consist of monitoring indicators that determine the need for irrigation; the final decision depends on the irrigation criterion, strategy and goal. Irrigation scheduling is the decision of when, and how much, water to apply to a field. Whelan and McBratney (2000) and Hedley and Yule (2009) explain that spatial variability must be correctly characterized for effective site-specific management. If this is not possible, then the null hypothesis of precision agriculture applies, i.e., uniform-rate application is more appropriate than variable-rate application. Spatial differences in soil moisture are likely to be one controlling factor influencing yield, but if they cannot be realistically modelled they cannot be addressed. Water availability is the major constraint to crop production in different parts of the world. Owing to water demand for rapid industrialization and high population growth, the share of water for agriculture is going to be reduced in the coming decades. The growing scarcity of irrigation water for crop production should therefore be checked, to sustain the food supply through efficient water conservation and management practices even in high-rainfall areas (Panigrahi et al., 2012; Panda et al., 2004). The amount of water applied is determined by using a criterion to determine irrigation need and a strategy to prescribe how much water to apply in any situation. The right amount of daily irrigation supply, monitored at the right time within the discrete irrigation unit, is essential to improve the irrigation water management of a scheme (Rowson and Amin, 2010). Irrigation can then be scheduled whenever the soil water content is depleted to a management-allowed level (a previously set threshold value). Alternatively, a soil water potential sensor can be used to schedule irrigation whenever the soil water potential reaches a previously set threshold. The use of soil water content sensors is gaining vast support in developed countries like the United States of America: the U.S. Department of Agriculture in 2009 awarded the White River Irrigation District in Arkansas $4.45 million to install water measurement and monitoring technology, which includes soil water content sensors (Varble et al., 2011; NRCS, 2009). The governments of developing countries are being urged to support precision irrigation, as it has proved very successful in developed countries. Fortes et al. (2004) discuss the many computerized tools that have been used for scheduling irrigation deliveries and improving irrigation project management. The possibility of easily creating and changing scenarios allows the consideration of multiple alternatives for irrigation scheduling, including the adoption of crop-specific irrigation management options. Scenarios may include different irrigation scheduling options inside the same project area, applied to selected fields, crops, or sub-areas corresponding to irrigation sectors. This allows tailoring irrigation management according to identified requirements. The irrigation scheduling alternatives are evaluated from the relative yield loss produced when crop evapotranspiration falls below its potential level (Fortes et al., 2004). Ortega et al. (2005) reported that irrigation scheduling is the farmer's decision event: it requires knowledge of crop water requirements and yield responses to water, the constraints specific to the irrigation method and the respective on-farm delivery systems, the limitations of the water supply system relative to the delivery schedules applied, and the financial and economic implications of the irrigation practice (Smith et al., 1996). Irrigation scheduling models are particularly useful for supporting individual farmers and irrigation advisory services. Irrigation scheduling is considered a vital component of water management for achieving higher irrigation efficiency under any irrigation system, as excessive and sub-optimum irrigation both have detrimental effects on the productivity parameters of many crops, including okra (Aiyelaagbe and Ogbonnaya, 1996). Moreover, scheduling irrigation is influenced by many complex factors such as soil, crop, environment, water supply and cultivation practices; it is thus essential to develop efficient irrigation scheduling under prevailing local conditions. Various methods, based on estimated crop evapotranspiration rate (Jaikumaran and Nandini, 2001), the ratio of irrigation water to cumulative pan evaporation (Aiyelaagbe and Ogbonnaya, 1996; Batra et al., 2000), open pan evaporation rate (Singh, 1987; Manjunath et al., 1994) and soil moisture depletion (Home et al., 2000; Aiyelaagbe and Ogbonnaya, 1996), are widely used for scheduling irrigation in okra. This study used okra for field testing and evaluations. Although soil water status can be determined by direct (soil sampling) and indirect (soil moisture sensing) methods, direct methods of monitoring soil moisture are not commonly used for irrigation scheduling because they are intrusive and labor intensive and cannot provide immediate feedback. Soil moisture sensors can be permanently installed at representative points in an agricultural field to provide repeated moisture readings over time that can be used for irrigation management. Special care is needed when using soil moisture devices in coarse soils, since most devices require close contact with the soil matrix, which is sometimes difficult to achieve in these soils; in addition, the fast soil water changes typical of such soils are sometimes not properly captured by some types of sensors (Irmak and Haman, 2001; Muñoz-Carpena et al., 2002; Muñoz-Carpena et al., 2005). Meron et al. (2001) discussed the use of tensiometers to automatically irrigate apple trees, noting that spatial variability was problematic when the tensiometers were installed 30 cm from the drip irrigation emitters. Smajstrla and Koo (1986) discussed the problems associated with using tensiometers to initiate irrigation events in Florida, including entrapped air in the tensiometers, organic growth on the ceramic cups, and the need for re-calibration. Torre-Neto and Schueller (2000) successfully used instrumented tensiometers in a precision agriculture system to irrigate groups of four or five citrus trees. Muñoz-Carpena et al. (2005) found that both tensiometer- and GMS-controlled drip irrigation systems for tomatoes saved water when compared to typical farmer practices. Dukes et al. (2003) used a commercially available dielectric sensor for lawns and gardens to control irrigation of green bell pepper (Capsicum annuum L.).
It requires knowledge of crop water requirements and yield responses to water, the constraints specific to the irrigation method and the respective on farm delivery systems, the limitations of the water supply system relative to the delivery schedules applied, and the financial and economic implications of the irrigation practice (Smith et al., 1996). Irrigation scheduling models are particularly useful to support individual farmers and irrigation advisory services. Irrigation scheduling is considered a vital component of water management to produce higher irrigation efficiency under any irrigation system, as excessive or sub optimum irrigation both have detrimental effects on productivity parameters of many crops, including okra (Aiyelaagbe and Ogbonnaya, 1996). Moreover, scheduling irrigation is influenced by many complex factors such as soil, crop, environment, water supply, and cultivation practices. Thus, it is essential to develop efficient irrigation scheduling under prevailing local conditions. Various methods based on estimated crop evapotranspiration rate (Jaikumaran and Nandini, 2001), the ratio of irrigation water to cumulative pan evaporation (Aiyelaagbe and Ogbonnaya, 1996; Batra et al., 2000), open pan evaporation rate (Singh, 1987; Manjunath et al., 1994), and soil moisture depletion (Home et al., 2000; Aiyelaagbe and Ogbonnaya, 1996) are widely used for scheduling irrigation in okra. This study used okra for field testing and evaluations. Although soil water status can be determined by direct (soil sampling) and indirect (soil moisture sensing) methods, direct methods of monitoring soil moisture are not commonly used for irrigation scheduling because they are intrusive and labor intensive and cannot provide immediate feedback. Soil moisture sensors can be permanently installed at representative points in an agricultural field to provide repeated moisture readings over time that can be used for irrigation management. Special care is needed when using soil moisture devices in coarse soils, since most devices require close contact with the soil matrix that is sometimes difficult to achieve in these soils. In addition, the fast soil water changes typical of these soils are sometimes not properly captured by some types of sensors (Irmak and Haman, 2001; Muñoz Carpena et al., 2002; Muñoz Carpena et al., 2005). Meron et al. (2001) discussed the use of tensiometers to automatically irrigate apple trees. It was noted that spatial variability was problematic when the tensiometers were installed 30 cm from the drip irrigation emitters. Smajstrla and Koo (1986) discussed the problems associated with using tensiometers to initiate irrigation events in Florida. Problems included entrapped air in the tensiometers, organic growth on the ceramic cups, and the need for re calibration. Torre Neto and Schueller (2000) successfully used instrumented tensiometers in a precision agriculture system to irrigate groups of four or five citrus trees. Muñoz Carpena et al. (2005) found that both tensiometer and GMS controlled drip irrigation systems for tomatoes saved water when compared to typical farmer practices. Dukes et al. (2003) used a commercially available dielectric sensor for lawns and gardens to control irrigation on green bell pepper (Capsicum annuum L.).
They found a 50% reduction in water use with soil water based automatically irrigated bell peppers when compared to once daily manually irrigated treatments that had similar yields; however, maximum yields and water use were on the farmer treatment that was irrigated 1-2 times each day. Blonquist Jr. et al. (2006) discussed how recent advances in electromagnetic (EM) sensor technology have made automated irrigation scheduling a reality using state of the art soil moisture sensing. Estimates of water content based on electromagnetic (EM) measurement provide real time, in situ measurements at a relatively affordable cost. Estimation of water content using EM sensors is based on the ability of sensors to measure the real part of the dielectric permittivity (ε), or an EM signal property directly relating to ε. ε directly relates to volumetric soil water content (θ) owing to the ε contrast of soil constituents: εa = 1, εs = 2-9, and εw = 80, where the subscripts a, s, and w represent air, solids, and water, respectively. Several studies (Blonquist Jr. et al., 2006; Qualls et al., 2001; Paul, 2002; Leib et al., 2003) demonstrated the potential of EM sensors in irrigation scheduling. The ACCLIMA sensors used in this study are based on the EM measurement technique of time domain transmissometry (TDT). Muñoz Carpena et al. (2005) stated that water supplies are becoming scarce and polluted; therefore, there is a need to irrigate more efficiently in order to minimize water use and chemical leaching. Recent advances in soil water sensors make the commercial use of advanced technology possible to automate irrigation management for vegetable production. However, research indicates that different sensor types may not perform alike under all conditions. Reductions in water use range as high as 70% compared to farmer practices with no negative impact on crop yields. Simonne et al. (2008) explained the importance of using drip irrigation in farming activities, depicting both advantages and disadvantages. Advantages include reduced water use, joint management of irrigation and fertilization, reduced pest problems, simplicity, low pumping needs, automation (this research utilized this drip irrigation advantage), adaptation, and production advantages. Some demerits mentioned include substantial economic investment, required maintenance, the need for high quality water (water filters should be used), the requirement that the water application pattern match the planting pattern, safety concerns, leak repair costs, and drip tape disposal, which causes extra cleanup costs after harvest. Hedley and Yule (2009) reported that in order to increase the practical functionality of precision irrigation, real time monitoring, decision, and control systems must be developed. This research implements real time monitoring and scheduling using irrigation algorithms and data loggers.

CHAPTER 3
MATERIALS AND METHODS

Design Criteria for the New Irrigation Controller

An irrigation controller (IC) was designed and implemented using several design criteria. The design criteria affected what was built, what was incorporated, and how the system was to be operated. The IC is low cost, small, and durable, with low energy requirements. It is self adjusting, using sensors, loop feedback algorithms, and fuzzy logic, and it requires very minimal attention from farmers.
The main function of the IC is to measure the daily water requirements of the crop and respond by applying the correct amount of irrigation water regardless of changing environmental conditions. The IC created is capable of optimally supplying water to the crop using existing harsh infrastructure and resources (spring, river, or lake water, local fertilizers, and available energy sources). The IC is also capable of adapting to the existing resources of the farm without any significant modification or change. Basic data logging and history reporting features have been incorporated in the IC design. A simple, user friendly interface of an LCD display and buttons is used to configure the IC, with remote wireless control and monitoring features accessible through an existing Wi-Fi capable Android smartphone or a PC.

Design Procedures Taken

To achieve the goals of this project, the following design procedures were included in the project tasks. Firstly, an information gathering survey was conducted. This included baseline data collection: baseline data from horticultural crop farms, existing infrastructure, energy, water, and fertilizer resources, field identification, and descriptions of existing horticultural crop production methods were gathered. Secondly, conceptual design of controller functions and requirements, identification of suitable hardware, and design of the prototype software/firmware and enclosure were done. Then, a schematic circuit design and testing phase on a breadboard platform (a solderless working electronic prototype) was carried out to identify incompatibilities and assess the durability of hardware components. After that, the first soldered prototype electronic circuit board was carefully designed and built using PCB123 software and tools. The PCB was then ordered from the PCB123 company. Installation of the rugged enclosure design with the LCD display, Wi-Fi connection, and user input buttons was finally done. Finally, repetitive and randomly modified continuous (24/7) tests were conducted to evaluate the accuracy and reliability of the PICS. This rigorous rapid testing helped to identify problems early on for debugging and durability improvement. Android based software to control the irrigation system was developed for farmers to control the PICS.

Development Procedures

The development started in May 2013 and was carried out for a whole year, until June 2014. The development procedures were divided into three phases:

Phase 1 System Development
1. Identification and purchase of electronic components for the IC was done, including software, latching solenoid valves, tipping bucket rain gauges, flow meters, pressure gauges, soil moisture sensors, temperature/humidity and light sensors, etc.
2. Simple programs for the Microchip PIC MCU to read a keyboard, display information on an LCD, and log sensor data were developed.
3. Individual part testing continued for all parts/modules of the IC, including:
a) Real time clock
b) Serial port for in circuit programming/debugging
c) Serial LCD display and 12 button keypad
d) EEPROM memory for data storage: Microchip 24AA1025 (128 Kbytes, I2C protocol)
e) Soil moisture sensors interfaced with the SDI-12 half duplex serial protocol
f) Design of oscillator frequency and sleep modes for lowest power use
g) A/D interface for reading the analog voltage of the solar panel
h) Pulse counting with I/O pins to read the rain gauge and flow meter
i) Solar panel and rechargeable battery power management algorithm
j) Solar panel system design and in circuit control for charging

Phase 2 Advanced System Development
4. Development of a DC latching valve control driver with H-bridge chips was done.
5. Development of simple feedback control and fuzzy logic methods for the Microchip PIC was done. These methods were incorporated into the IC for closed loop control of irrigation for optimal soil moisture regulation.
6. Construction of the first version of the IC printed circuit board was done. Up to this point, only a breadboard had been used for development.

Phase 3 System Deployment
7. Addition of Wi-Fi and smartphone capabilities was done and both were tested.
8. Installation of the IC and all other components to automate the irrigation for the okra plants was done.
9. Testing and tuning of the IC in the okra irrigation system in Tanzania was finally done. Also, adjustment and tuning of the feedback control and fuzzy logic algorithms for the okra irrigation system was done continuously until fine control was obtained.

Design of the Irrigation Control System

The control system is based on a feedback control loop. The water is balanced in the depletion zone near the upper limit of water availability. The water is provided in a controlled manner that allows the plants to always have enough, but not excess, water. It should be noted that this control system is an electronic device that requires some knowledge of electronics and programming. It has two main parts: the main controller (MC) and the water controller (WC). See Figure 3-1. The MC is the master controller that is connected to all the sensors, while the water controller, or slave controller, is connected only to the solenoid valves. This allows the two major activities of the IC to be controlled differently. Also, this simplified approach allows easy debugging of any problems that arise. The main controller (MC) consists of all the other subparts and acts as the master in the master-slave configuration of the two main PIC microcontrollers. The moisture, flow, rain gauge, humidity, and temperature sensors report to it. Since the solenoid valves need to be controlled based upon message feeds from the main controller, the slave needs to be off all the time except when it is requested to turn the latching solenoid valves on or off. This design reduces power usage. The wireless module is connected to the main microcontroller for smart communication of the IC with smart devices such as laptops and Android phones. It is crucial that this IC can send the information it has collected to the smart devices, which allows further processing of the information for post decision management. The main controller is connected to a timekeeper module and a memory unit. The timekeeper module is a separate part from the main controller but sends time information to it. The MC needs the time to schedule the irrigation.
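As a concrete illustration, the following is a minimal sketch (not the author's actual firmware) of how the MC can gate its control checks on the DS1302 timekeeping chip listed in the bill of materials. Here ds1302_read_byte() is a hypothetical helper for the chip's 3-wire interface, and the 4 minute interval follows the control interval reported in Chapter 4.

    /* Minimal sketch, assuming a hypothetical ds1302_read_byte() helper
       for the DS1302 3-wire bus. The DS1302 stores time in packed BCD,
       so register bytes must be decoded before use. */
    unsigned char ds1302_read_byte(unsigned char cmd);  /* assumed helper */

    static unsigned char bcd_to_dec(unsigned char bcd)
    {
        return (unsigned char)(((bcd >> 4) * 10) + (bcd & 0x0F));
    }

    /* Returns 1 when a soil moisture check is due; here a check runs
       every 4 minutes, matching the control interval used in the field. */
    unsigned char time_to_check(void)
    {
        unsigned char minutes = bcd_to_dec(ds1302_read_byte(0x83)); /* minutes register, read command */
        return (unsigned char)((minutes % 4) == 0);
    }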
The memory unit is used to keep information needed for further processing. It is crucial that data can be downloaded from the IC for analysis. Also, the system is connected to a solar panel system that provides full power to the system. This is essential since the system will be deployed in fields where no electrical power is provided. See Figure 3-2 for a more advanced conceptual diagram of the IC. The water controller (WC) is the slave unit of the master MC. It acts upon requests from the MC and is always OFF unless turned on by the MC. The WC is the unit that listens and answers to the MC. It then controls the H-bridge chips (HBC). The HBC chips can direct the voltage in either direction to open or close the latching valves. The latching solenoid valves need to be controlled using 12 volts, and they draw significant current to open/close the valves, about 135 mA. The PICS operates on a 5V regulated voltage. Hence, this voltage is not enough to operate the valves, and some means of boosting the voltage is required. If the supplying solar cells operate at 12V then no boosting is needed, but if the system is connected to economical solar panels that produce less than 12V, boosting is required to operate the valves. Experiments have shown that the latching valves can actually operate at as low as 7V, but the drawn current is significant. In case the voltage is not enough, a voltage booster was connected (see Figure 3-3) to uplift the voltage from 7V to 12V. The input voltage is provided through the MC connected to a MOSFET and a 5V input. This connection allows the voltage booster to be on only when it is required to switch the latching valves on or off.

Sensors Module

The MC has ports for digital sensors such as flow, pressure, light, humidity, rain gauge, and soil moisture sensors. Each sensor port was built into the system using the requirements needed for the specific sensor, such as resistors, capacitors, diodes, and power supply.

Humidity Sensor

A digital relative humidity and temperature sensor, the RHT03, was selected to be incorporated in the system. According to the manual (http://dlnmh9ip6v2uc.cloudfront.net/datasheets/Sensors/Weather/RHT03.pdf accessed on 28 May 2014), the RHT03 output is a calibrated digital signal. It applies an exclusive digital signal collecting technique and humidity sensing technology, assuring its reliability and stability. Its sensing elements are connected to an 8-bit single chip microcontroller. The Maxdetect manual states that every sensor of this model is high precision, capacitive type, temperature compensated, and calibrated, with the calibration coefficient saved in one time programmable memory. Its small size, low power consumption, and long transmission distance (100 m) enable the RHT03 to be used in many applications. Figure 3-4 shows how the RHT03 is connected. The power supplied should be between 3.3 and 6V DC. When power is first supplied to the sensor, no instructions should be sent to it within the first second, to allow it to pass the unstable status. The user is only required to connect the wires to the available ports (note: MCU is the main control unit, the humidity sensor is the RHT03, power +5V is Vcc, and ground is GND).

Pressure Sensor

The MSP 300 series pressure transducer is used for the IC.
According to the MSP300 manual from Measurement Specialties, Inc. (the manufacturer) (http://www1.futureelectronics.com/doc/MEASUREMENT%20SPECIALTIES/200149 03.pdf accessed on 28 May 2014), the MSP300 is suitable for measurement of liquid or gas pressure, even for difficult media such as contaminated water, steam, and mildly corrosive fluids. See Figure 3-5. The transducer pressure cavity is machined from a solid piece of 17-4 PH stainless steel. The standard version includes a 1/4 NPT pipe thread allowing a leak proof, all metal sealed system. There are no O-rings, welds, or organics exposed to the pressure media, and the durability is supposedly excellent. It measures pressure up to 10k psi (700 bar) and gives output in the form of mV or amplified voltage output. It provides high accuracy output over its maximum temperature range with a standard cable, from -20°C to +105°C. However, in this experiment this sensor was observed to rust very quickly in the field. The rust makes it stick permanently to the pipes, which suggests that these sensors are not very suitable for outdoor applications.

Rain Sensor

A model TX32U rain sensor (http://www.lacrossetechnology.com/tx32/manual.pdf accessed on 29 May 2014) measures the rainfall and sends the information to the MC. According to the manufacturer, La Crosse Technology, for best results the rain sensor should be securely mounted onto a horizontal surface about 1 m or higher above the ground and in an open area away from trees or other coverings where rainfall may be reduced, causing inaccurate readings. Leaves and other debris must be removed from the rain sensor. Excess rain must not collect and pool at the base of the unit but must be able to flow out between the base and the mounting surface. The sensor was placed more than 1.5 m above the ground and in a free space area to avoid any external disturbances. See Figure 3-6. This allows correct rainfall collection in the field. Because the field is small, rainfall measurement accuracy will not be affected by rainfall spatial variability. The amount of rain is determined by counting bucket tips. The calibration showed that each count corresponds to 0.02 in (0.508 mm) of rain; I verified this with laboratory tests. Detections of bucket tips go to the MC, which interprets and stores the information.

Soil Moisture Transducer

An Acclima SDI-12 Soil Moisture Transducer was used. The Series SDI-12 transducer is a Digital Time Domain Transmissometer (TDT) that measures the permittivity of soils by determining the propagation time of an electromagnetic wave transmitted along a waveguide through the soil. Chavez et al. (2011) reported that under laboratory and field conditions, the factory based calibrations for the TDT sensor accurately measured volumetric soil water content. Therefore, the use of the TDT sensor for irrigation water management seems very promising. Using a Watermark sensor was also considered. Laboratory tests indicated that a linear calibration for the TDT sensor and a logarithmic calibration for the Watermark sensor improved on the factory recommended calibrations; the TDT performed very well, with errors less than 1.2±3.9%. In the case of the Watermark (electrical resistance) sensor, the factory recommended equation, evaluated with measured soil water content from an irrigated corn field, on average overestimated soil water content by 11.2±12.6%.
Blonquist Jr. et al. (2006) concluded that the Acclima Digital TDT sensor is an EM based water content sensor that, when compared to other EM (electromagnetic) sensors, was shown to provide exceptional apparent permittivity (Ka) measurement accuracy at a reduced cost. The sensor can be employed to schedule irrigation via connection to custom irrigation controllers and conventional irrigation timers. Blonquist Jr. et al. (2005) reported that the Acclima Digital TDT sensor has the potential to offer a more affordable alternative to other sensors (Tektronix and Campbell). The Acclima Digital TDT Sensor frequency bandwidth and permittivity estimates based on travel time measurements compare quite well to those of the Tektronix TDR and Campbell Scientific TDR100. The Acclima Digital TDT Sensor has the advantage over TDR in that the signal transmitting and sampling hardware is located in the sensor head, negating cable losses. TDT is also advantageous in that one way travel time reduces signal attenuation in the sample (assuming sensor rods are the same length). Blonquist Jr. et al. (2005) said that although the Acclima Digital TDT Sensor is presently geared for closed loop irrigation control, where excavation is necessary for installation, refinement of the rod geometry for insertion (and perhaps conversion to a TDR measurement) will likely rank this TDT method alongside its TDR counterpart as an accepted laboratory and field standard for determining soil water content. Marble et al. (2011) evaluated the performance of three soil water content sensors (CS616/625, Campbell Scientific, Inc., Logan, UT; TDT, Acclima, Inc., Meridian, ID; 5TE, Decagon Devices, Inc., Pullman, WA) and a soil water potential sensor (Watermark 200SS, Irrometer Company, Inc., Riverside, CA) in laboratory and field conditions. The CS616/625, 5TE, and Watermark sensors were strongly influenced by fluctuations in soil temperature, while the TDT sensor was not. The TDT sensor stayed steady while taking readings even under relatively high fluctuations in soil temperature. When irrigating, temperature fluctuations are common, hence steadiness is quite an important feature. The TDT sensor used in this research is produced by Acclima. According to the Acclima user manual (2008) (http://acclima.com/wd/acclimadocs/agriculture/SDI 12%20Sensor%20User%20Manual.pdf accessed on 28 May 2014), the Acclima Series SDI-12 soil moisture transducer uses the industry standard SDI-12 interface for communicating with a data recorder or other SDI-12 equipped controlling device. The SDI-12 communications standard is a digital serial data communications hardware and protocol standard based on 1200 baud, ASCII character communications over a three wire bus. The SDI-12 Series is compliant with Version 1.3 of the SDI-12 standard. The user manual states that the absolute moisture content of the soil is calculated from the permittivity using the Topp equation (Topp et al., 1980). The transducer can be commanded to produce both the bulk permittivity and the moisture content of the soil. The accuracy and stability of the Series SDI-12 is obtained through a patented hardware and firmware system that digitizes the returned waveform and then uses proprietary digital signal analysis algorithms to extract the real propagation time and distortion parameters of the returned wave.
High accuracy is achieved over a wide range of soil temperatures and electrical conductivities. In the SDI-12 series, the resolution of the digitized waveform is 5 picoseconds, permitting a small transducer to report very high resolution data. From the extracted distortion parameters the transducer calculates and reports the electrical conductivity of the soil. The permittivity and soil moisture measurements are compensated for temperature. The transducer also reports soil temperature. Figure 3-7 shows an SDI-12 Soil Moisture Transducer installed at the SUA field site. The sensor is buried in the irrigation root zone of the crop. The sensor will read more accurately when it is near the emitter that delivers water to the plant, and it will sense the water available to the plant if placed near the stem of the crop. Many horticultural crops such as okra develop short roots with a limited root zone. Placing the sensor further away might hinder prompt control of the irrigation system, as the signals will consequently deviate from plant water requirements. The placement distance and pattern should be the same for all the placed sensors to obtain consistent and reliable control. Variations in soil type and condition, adjacent land use, shading, and drainage patterns can all cause a sensor to read an unrepresentative soil moisture content, which can result in too much or too little irrigation. If one area does not represent the average condition of the soil moisture sensor (SMS) controlled area, SMS placement in that location should be avoided. If the sensor probe (such as the TDT) is long, flat, and has exposed rounded waveguides (steel rods), it should be installed horizontally and with the wide side facing up (St. Johns River Water Management District, 2008). Varble et al. (2011) state that the digital TDT and the 5TE (which measures soil moisture, soil temperature, and bulk electrical conductivity, EC) sensors performed better at each treatment at their site. They conducted an experiment to compare different sensors against the TDT, and used four statistical measures to arrive at their conclusion. The four statistics were computed to compare and evaluate each equation's predicted (P) volumetric water content against the observed (O) value derived from gravimetric soil samples taken from the field and laboratory soils. These statistics are defined by Willmott (1982) and include the coefficient of determination (R2), mean bias error (MBE), root mean square error (RMSE), and the index of agreement (d). Installation of the sensor probe is basically the heart of the automatic control system. Variations will eventually produce differences that may hurt control or bring wrong information that will induce irregularities within the irrigation root zone. A high supply of water will bring problems to the plants, since high moisture increases the risks of diseases and pests; it also wastes precious water. A low supply of water will wilt the plant, and wilting crops that are in the flowering stage will suffer adverse impacts to their yields. This SDI-12 sensor allows easy integration with the microcontroller, which is the main controller (MC). Requests using SDI-12 commands can be interpreted by both the MC and the sensor. This mutual protocol allows correct information from the sensor to be transmitted to the MC. The received string has all the information, but the MC trims it and allows only the percentage moisture content to be stored in the system.
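To make this trimming step concrete, the sketch below is illustrative only: the reply format shown ("3+34.56+25.10", i.e., sensor ID followed by signed values) is an assumption for illustration, not Acclima's documented frame. The second function is the published Topp et al. (1980) polynomial relating bulk permittivity Ka to volumetric water content, which the transducer applies internally when asked for moisture rather than permittivity.

    /* Sketch of the response trimming described above. The reply format
       "3+34.56+25.10" (sensor ID, then signed values) is an assumed
       example, not the documented Acclima frame. */
    #include <stdlib.h>

    /* Store the first value field (taken here to be percent moisture)
       if the reply came from the expected sensor ID; returns 1 on success. */
    int parse_sdi12_reply(const char *reply, char expected_id, float *moisture)
    {
        if (reply == 0 || reply[0] != expected_id)
            return 0;                       /* wrong sensor or corrupted frame */
        *moisture = (float)atof(reply + 1); /* atof stops at the next '+' sign */
        return 1;
    }

    /* Topp et al. (1980) polynomial: volumetric water content (m3/m3)
       as a cubic in the bulk dielectric permittivity Ka. */
    float topp_vwc(float ka)
    {
        return -5.3e-2f + 2.92e-2f * ka
               - 5.5e-4f * ka * ka
               + 4.3e-6f * ka * ka * ka;
    }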
The SDI-12 sensors respond using their specific identification numbers (IDs). This is a crucial design feature, since it allows only one pin of the PIC microcontroller to communicate with several sensors. In this research, experiments were carried out using six sensors, all connected to one pin of the MC. Each sensor is called using its ID (1, 2, 3, 4, 5, or 6) and replies starting with its ID. Hence the MC can manage all of them on a single line. Sometimes, these sensors might not produce information on time as requested. They can easily be corrupted by disturbances due to electricity supply irregularities. This is common for devices that may experience electrical variations that halt the chip's performance. Any disturbed message becomes unreadable, or if readable, is more error prone. The control of water is dependent on correct information coming from the sensors at the correct time. Whenever the information is not delivered on time, the MC misses that particular reading. Each treatment is based on 3 sensors, so if readings from all three are missing then no decision is undertaken by the microcontroller. The good news is that observation shows that more than 90% of the readings were correctly collected; this is analyzed further in the Discussion chapter of this document. Another good feature of this particular sensor is that it works over a wide range of input voltages, from 5V to 15V. This range provides a good opportunity at the design level and creates room for a wide range of control. This provides great security to this piece of equipment, which is the most expensive in this study, followed by the pressure sensor. Damage to this sensor would cause a great loss to the farmer. In real field applications, fewer moisture sensors are recommended in the field, and the placement of the sensors for high quality results becomes a great challenge to the grower.

Bill of Materials

This section presents the bill of materials that was used to construct the precision irrigation control system. See Table 3-1.

Table 3-1. Bill of materials
Type of component | Model | Manufacturer/retailer | Units
Pressure transducer | MSP300 | Measurement Specialties | 1
Digital TDT sensor | SDI-12 | Acclima | 6
Rain sensor | TX32U | La Crosse Technology | 1
Humidity sensor | RHT03 | Maxdetect | 1
Light sensor | TSL235R | Texas Advanced Optoelectronic Solutions | 1
Flow sensor | FS-4400H | Savant Electronics | 1
Latching solenoid valves | 3/4 Inch NPT Jar Top Valve | Orbit Sprinkler System | 2
WiFly wireless module | RN-XV | Roving Networks | 1
Voltage regulator | LM317TG | Sparkfun Electronics | 2
Master controller | PIC18F4585 | Microchip Technology Inc. | 1
Slave controller | PIC16F883 | Microchip Technology Inc. | 1
P-channel MOSFET | P-channel MOSFET 60V 30A | Sparkfun Electronics | 3
N-channel MOSFET | N-channel MOSFET 60V 30A | Sparkfun Electronics | 1
Standby battery | Maxell 32032 | Hitachi Maxell Ltd | 1
Battery holder | Maxell 32032 holder | Hitachi Maxell Ltd | 1
Trickle charge timekeeping chip | DS1302 | Maxim Integrated | 1
Serial EEPROM chip | 24LC256 | Microchip Technology Inc. | 1
Voltage booster | LM2577 | Sunkee Electronics | 1
H-bridge motor driver | SN754410 | Texas Instruments | 2
General purpose transistors (NPN) | 2N3904 | Sparkfun Electronics | 5
General purpose transistors (PNP) | 2N3906 | Sparkfun Electronics | 6
Ceramic capacitors | 101, 102, 103, 104 | Sparkfun Electronics | 8
Printed circuit board | Custom design | PCB123 company | 1
Resistors | 1k, 4.7k, 10k | Sparkfun Electronics | 15
Keypad | 16 button 4x4 keypad | Sparkfun Electronics | 1
Liquid crystal display (serial enabled) | 20x4 LCD | Newhaven company | 1
LED super bright white | COM-00531 | Sparkfun Electronics | 1
Mini push button switch | COM-00097 | Sparkfun Electronics | 1
Polyethylene water tank | 1000L model | Chemi and Cotex Industries Limited | 1
Drip irrigation pack | 250 m2 Netafim unit | Balton Ltd and Netafim company | 1

Precision Irrigation Control System (PICS) Design and Implementation

In this section, the design and implementation of the PICS are discussed. The design is the formulation of the printed circuit board (PCB), and the implementation focused on the incorporation of all parts of the PICS. Before this formulation, a breadboard design was made in the laboratory. This is the testing design, in which each item was installed on a breadboard and then tested for functional capabilities. Figure 3-8 shows the breadboard design. The breadboard design was then formulated in PCB software. All the parts with the circuits discussed above were incorporated to design a printed circuit board. The design was done using professional software, the PCB123 version 4 design suite. The red lines indicate upper connections and the blue lines indicate bottom connections. See Figures 3-9 and 3-10. The labeling is done within the PCB123 software. The PCB is 4.5 inches by 6 inches in size. Two boards were ordered, which cost 75.04 USD each. Figure 3-11 shows a received board. The PICS PCB was populated with all the devices needed, as shown in Figure 3-12. A soldering machine was used to solder each part carefully. After soldering the PICS system, the arrangement of the PICS with other parts, such as the voltage booster, battery pack, keyboard, and LCD display, was made. Figure 3-13 shows the PCB with all items mounted in a small box for weather protection. The PICS was taken for testing to a small field at the crop museum on the main campus of Sokoine University of Agriculture. Figure 3-14 shows it installed in the field. At that location it was connected with all the sensors and valves.

PICS Software Flowchart

The PICS software is installed inside the master controller. The software is designed to control all the processes required to execute feedback control of the control system.
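As an illustration of the feedback logic the flowchart encodes, the sketch below uses the two-limit hysteresis band applied in the field tests (35% ± 4%, i.e., 31%-39%). The names are illustrative: valve_pulse_open() and valve_pulse_close() stand in for the MC-to-WC requests that pulse the H-bridge, and are not functions from the actual firmware.

    /* Minimal sketch of one closed loop control step; thresholds follow
       the 35% +/- 4% band used in the field experiments. */
    #define LOWER_LIMIT 31.0f   /* start irrigating below this moisture (%) */
    #define UPPER_LIMIT 39.0f   /* stop irrigating above this moisture (%)  */

    void valve_pulse_open(void);   /* assumed helper: pulse H-bridge one way   */
    void valve_pulse_close(void);  /* assumed helper: pulse H-bridge the other */

    static int valve_open = 0;     /* remembered state of the latching valve */

    void control_step(float soil_moisture)
    {
        if (!valve_open && soil_moisture < LOWER_LIMIT) {
            valve_pulse_open();
            valve_open = 1;
        } else if (valve_open && soil_moisture > UPPER_LIMIT) {
            valve_pulse_close();
            valve_open = 0;
        }
        /* Between the limits the state is held: this hysteresis prevents
           rapid valve chatter, which saves both water and battery power. */
    }

Because the valve is a latching type, pulsing it only on a state change keeps the slave controller asleep most of the time, consistent with the low power design described above.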
Symbols seen in the flowchart in Figure 3-15 are elaborated in the following section as the PICS commands.

The PICS Commands

The PICS is instructed using commands. The software program inside the MC communicates with all the sensors and the slave microcontroller. This design was essential, as it allows any third party designer to design a wireless controlling program through any device of choice, whether Windows based, iOS based, or Android based; any operating software application can be developed to work with the PICS. These commands act as an interface between the PICS and connected wireless devices like phones and laptops. When these commands execute, they return a message that can be interpreted by the external devices. For example, when * is sent to the PICS, it replies that the machine is alive and running, and it sends the current date and time to the wireless device. This allows the farmer to make a change if the time is not correct. Collection of data would be merely useless if all the information were recorded with the wrong time. See Table 3-2.

Table 3-2. The PICS commands and their meanings
PICS command | Meaning | Remark
* | PICS is alive and working | Working machine
# | Reset | Resets the machine and restarts it
0 | Reset the memory record | Initiated after pressing #; makes the system start recording data over previously used memory. Usually initiated after R has been utilized
S | Settings request | The machine displays the settings and allows the farmer to make any changes
R | Recorded data retrieval | Requests the machine to display all the data recorded since the last reset instruction was sent
F | Sensor 1 readings | Requests readings from soil moisture sensor number 1; two readings are retrieved: soil temperature and moisture
G | Sensor 2 readings | Requests readings from soil moisture sensor number 2; two readings are retrieved: soil temperature and moisture
H | Sensor 3 readings | Requests readings from soil moisture sensor number 3; two readings are retrieved: soil temperature and moisture
J | Sensor 4 readings | Requests readings from soil moisture sensor number 4; two readings are retrieved: soil temperature and moisture
K | Sensor 5 readings | Requests readings from soil moisture sensor number 5; two readings are retrieved: soil temperature and moisture
L | Sensor 6 readings | Requests readings from soil moisture sensor number 6; two readings are retrieved: soil temperature and moisture

Windows based Application (Tiny Bootloader)

Programming the microcontrollers needs a compiler and a programmer. The PIC microcontroller here is programmed wirelessly, so it does not need the programmer, which is expensive. Instead, the compiler is used to compile the program, which is then loaded to the microcontroller wirelessly. This process works seamlessly within the PICS environment, allowing upgrades of the PICS software to be carried out easily.
According to the Tiny PIC Bootloader website (http://www.etc.ugal.ro/cchiculita/software/picbootloader.htm accessed on 21 May 2014), a bootloader is a program that stays in the microcontroller and communicates with the PC (usually through the serial interface). In this case, the PICS is connected through Wi-Fi, but the PC still needs to be configured with a virtual port for the Wi-Fi link so that it can present the serial interface needed for bootloader operations. The bootloader receives a user program from the PC and writes it into the flash memory, then launches this program in execution. Bootloaders can only be used with those microcontrollers, like the 18F series, that can write to their flash memory through software. Most PIC microcontrollers are capable of writing to their own flash memory. The PICS system uses the PIC18F4585 as the main controller (MC). The bootloader itself must be written into the flash memory with an external programmer; the programmer used in this research was the meLabs U2 Programmer. This needs to be done only once, but if the bootloader becomes corrupted then the operation needs to be executed again. In order for the bootloader to be launched after each reset, a "goto bootloader" instruction must exist somewhere in the first four instructions. There are two types of bootloaders: some require that the user relocate his code, and others by themselves relocate the first four instructions of the user program to another location and execute them when the bootloader exits. The PIC18F4585 microcontroller can easily be loaded with the bootloader and programmed seamlessly using the Wi-Fi connection to the microcontroller. The microcontroller is connected to the wireless module using the RX and TX pins of the PIC microcontroller. These pins were used because they can communicate synchronously and asynchronously with ease. Using the Tiny Bootloader (TBL) software, as demonstrated in Figure 3-16, the MC was loaded with the program. The loading involves turning off (resetting) pin 1 of the 18F4585 for a few seconds and releasing it so that the program can be loaded after the bootloader code. In order to work wirelessly, the TBL needs to connect to a serial port. The VirtualSerialPorts software (see Figure 3-17) was used to create a virtual serial port for the wireless connection to the system. The COM2 port is connected through Internet Protocol (IP) address 10.251.25.199 (the Wi-Fi module is programmed to use this IP address; it can be changed) using the Transmission Control Protocol (TCP). The baud rate is set to 9600 bits per second.

Android based Software for the Precision Irrigation Control System

The PICS works very well with the PC, but not all farmers will be using a PC at all times. A cellphone app that works with the PICS was therefore developed as a tool. The app needs the wireless capabilities of an Android phone. This allows a seamless connection between the phone and the PICS. However, this connection is limited to the distance covered by the Wi-Fi network (less than 100 meters). Longer distances may be covered by using a Wi-Fi extender, but at increased cost to the farmer. The app (Figure 3-18) was developed with the necessary features, like reset, collection of data, sensor requests, and instant display of the PICS messages while processing the data. These features are quite necessary for farmers in real time applications and for seeing whether any problem is occurring on the farm.
Critical operations such as valve operation can be attended to instantly. The farmer has the capability to assess the operations and report or attend to any problem. While starting, the app first searches for a Wi-Fi connection to the PICS. If there is no connection, the app will halt. This behavior is quite necessary: the farmer cannot press a button to test connectivity, which could otherwise send a false message to the PICS if it were accidentally reconnected. The app was developed to fit the 127 mm (5 in) screen of the Sony Xperia Z running Android Jelly Bean; a phone with a different screen size may not display it properly unless the layout is changed. The app fit the screen well. The Basic4android integrated development environment was used to develop the WaterControl software. It is quite user friendly for programmers who know the BASIC programming language and who want to program Android apps, which are Java based. Java is the primary programming language for Android application development. Changes can be made easily using the Basic4android software used to develop the app. The phone and PC need to synchronize using either a wireless connection or Bluetooth. The connection gives a live display between the two devices and provides an opportunity to simulate the software using the phone or PC. This feature speeds up the development task. The app was used to collect data from the field. The data was then stored automatically in the phone's default directory for the app, located at android/data/b4a.example/files. The collected data can then be analyzed using the PC. Instead of requiring a laptop to be carried to the field, the phone saves the data collected from the PICS. The file created is named with the date and hour that it was recorded, in the format (Data on date_month_hour_minutes.txt) for quick reference. The data collected is very small in size and will not exceed 22 KB for 1 week of data collection.

Testing of the Irrigation Controller (Lab Experiments)

The PICS was connected to test the control loop at the CREC, Lake Alfred, Florida, USA. A bucket was prepared and filled with soil, and the Acclima moisture sensor was buried inside the bucket. A flask was filled with water and connected to a solenoid valve. The PICS was allowed to control the water instantly as it drained into the bucket. The system was able to control the amount of water applied to maintain 15% soil moisture content within the bucket.

Testing of the Field Application (Field Experiments)

Study Site

The experiments were carried out in the Morogoro region of Tanzania. Morogoro municipality is located in eastern Tanzania, about 210 km inland, west of Dar es Salaam city. Experiments were conducted at the crop museum farms (Sokoine University of Agriculture), latitude 6.84, longitude 37.65, at an altitude of 400 meters above mean sea level. Figure 3-19, downloaded from Google Maps, shows the site location at the crop museum. These farms are coordinated by the crop science department at SUA. During the study period, different planting dates were used within the 2013-2014 year due to environmental conditions at planting time. The first experiment ran from 26 October 2013 to 20 December 2013, the second from 06 January 2014 to 05 March 2014, the third from 11 March 2014 to 30 April 2014, and the last one from 28 May 2014 to the present. The first experiment failed due to instrument malfunctioning and power problems.
The second experiment failed because aphid pests killed all the plants, and hence the experiment was reestablished on 11 March 2014. This third experiment was successful until the rainy season, which started in mid April, interfered. The controller was unable to collect all the required information as expected, and hence the experiment was cancelled to allow the rainy season to finish. The experimental design used is a split plot with six plots arranged in a randomized complete block design. Two treatments were established, each with three replications, as shown in Figure 3-20, numbered from 1 to 6. One treatment is the conventional irrigation control treatment and the other is the PICS controlled treatment. The basis of this experiment is to make a comparison between the two treatments. The yellow plots are conventionally irrigated while the blue plots are PICS irrigated. The brown plots are bare land and were used to randomize the experiment. In each plot, 4 rows of okra plants were established for evaluation. See Figure 3-21. One drip line was used for each row. Kigalu et al. (2008) reported that the design with one drip line per row produced higher yields than one line per two rows of tea. In particular, the data presented in that paper clearly demonstrated that drip treatment I2, with one line per row of tea, produced the highest yields compared with applying either more or less water, and saved water compared to 75% and 100% replacement of the soil water deficit. Although Kigalu's research was in tea farms, these findings can be considered to apply to okra and other horticultural crops too. This is the standard drip irrigation layout in Tanzania.

Okra Crops Used for Testing

Due to the high number of pests that affect other horticultural crops in Morogoro, okra was chosen, since it can withstand some challenges from aphids. This experiment needed a plant that would respond to water use rather than diseases (http://homeguides.sfgate.com/treatment aphids okra 35015.html accessed on 24 May 2014). Okra, a cousin to other members of the mallow (Malvaceae) family such as cotton, hollyhock, and hibiscus, is a tall growing, single stemmed plant that produces edible seed pods. Grown as an annual in home vegetable gardens, okra is started from seed in full sun in well drained soil. Varieties of okra for the home garden include spineless varieties such as Annie Oakley II, Clemson Spineless, Emerald, Lee, and Dwarf Green Longpod. Okra matures in approximately 50 to 60 days; the pods grow on short stalks from the main stem at leaf axils. Both leaves and fruit of okra can be attacked by aphids. Cultural, organic, and chemical controls can be used to manage this insect pest. In this experiment, the Clemson Spineless variety of okra was used for testing.

Tank Elevation, Irrigation Installations and Settings

The drip irrigation system was set up. The system provides water to the plants effectively and efficiently, thereby reducing water loss through evaporation. It is also effective in reducing crop diseases such as black spot and powdery mildew. Before installation was done, the water source pressure was determined and found to be too low to provide the more than 15 psi (103 kPa) required by the latching solenoid valve, and the supply was not reliable enough for continuous control. The tank was installed 15 feet (4.6 m) high (see Figure 3-22) to solve this problem and provide more than 18 psi (124 kPa) of pressure. This pressure was enough to supply the whole field.
The tower was created using local wood that is available in Tanzania. The wood was not treated against wood pests. This simple installation is quite familiar to most farmers in Tanzania. The design is well built and can withstand the weak winds of Morogoro. It was observed that the flow meters experienced turbulent flow caused by the valves upstream, which adversely affected the meter readings. The valves and flow meters had been installed close together. This type of installation is quite common in Tanzania, but it can produce very wrong readings at the flow meters. The flow meters and the latching solenoid valves need a minimum length of straight pipe before and after the meter to keep the water in a laminar flow pattern. Standard good engineering practice is to require ten pipe diameters upstream and five diameters downstream, so for the PICS system the 20 mm meter size required 200 mm upstream and 100 mm downstream. The piping was modified to meet the length requirements, and the flow sensors then indicated correctly. Figure 3-23 demonstrates the change that allowed water to flow and be read by the manual flow meters.

Data Collection

The most important data collected are the water used for both treatments and the percentage of soil moisture. These data are used to compare the performance of the conventional method and the PICS controlled method. The data and graphs derived are used to evaluate the supply of water to the plants and analyze water usage by the plants. The PICS is expected to maintain well balanced soil moisture control within the field.

Sensor Calibration Tests

An experiment was set up at the field to test the sensors for performance and similarity. Varble et al. (2011) emphasized the importance of sensor calibrations. It is apparent that each individual sensor requires unique calibrations for the soil and conditions in which it will operate. It is recommended that field based calibrations be developed, over laboratory based calibrations, since field data are more representative of the conditions in which the sensors will operate. This should be done every season to obtain correct readings. As stated above in the literature review, the Acclima TDT sensors are expected to be accurate, but testing was conducted to verify their accuracy in real installations. Figure 3-24 shows the first experiment, which tested the similarity of the sensor readings. The soil was filled with water to saturation and then the PICS collected data every 10 minutes. Another test, as shown in Figure 3-25, was done to observe the pattern of water percolation in the soil. This test indicates the operating moisture of the soil; by operating moisture, I refer to the soil moisture available at each depth, determined after water percolation.

Figure 3-1. Kadeghe G Fue. Conceptual diagram of the controller. 01 June 2014. Morogoro, Tanzania
Figure 3-2. Kadeghe G Fue. Advanced conceptual diagram of the controller. 01 June 2014. Morogoro, Tanzania
Figure 3-3. Kadeghe G Fue. Advanced conceptual diagram of the controller with voltage booster. 01 June 2014. Morogoro, Tanzania
Figure 3-4. RHT03 recommended circuit connections. Reprinted with permission from Max Detect Technology Co. Ltd, https://www.sparkfun.com (June 20, 2014)
Figure 3-5. The MSP 300 pressure transducer.
Reprinted with permission from Measurement Specialties, http://www.meas spec.com (June 20, 2014)
Figure 3-6. Kadeghe G Fue. Location of rain sensor and humidity sensor in the field. 09 November 2013. Morogoro, Tanzania
Figure 3-7. Kadeghe G Fue. SDI-12 Soil Moisture Transducer at the field experiment treatment. 26 November 2013. Morogoro, Tanzania
Figure 3-8. Kadeghe G Fue. The PICS breadboard at the laboratory in CREC. 10 June 2013. Lake Alfred, Florida
Figure 3-9. Kadeghe G Fue. Upper part of the PICS PCB. 30 June 2013. Lake Alfred, Florida
Figure 3-10. Kadeghe G Fue. The full PICS PCB. 30 June 2013. Lake Alfred, Florida
Figure 3-11. Kadeghe G Fue. PCB plate received from PCB123 Company. 10 July 2013. Lake Alfred, Florida
Figure 3-12. Kadeghe G Fue. PCB with all other electronic chips soldered on it. 23 July 2013. Lake Alfred, Florida
Figure 3-13. Kadeghe G Fue. PICS with other modules in a weather protection box. 25 July 2013. Lake Alfred, Florida
Figure 3-14. Kadeghe G Fue. PICS installed at crop museum. 23 September 2014. Morogoro, Tanzania
Figure 3-15. Kadeghe G Fue. Flowchart of the PICS software. 25 June 2014. Morogoro, Tanzania
Figure 3-16. Kadeghe G Fue. The Tiny Bootloader used to load the program to the PICS MC microcontroller. 26 June 2014. Morogoro, Tanzania
Figure 3-17. Kadeghe G Fue. Virtual serial ports software. 20 June 2014. Morogoro, Tanzania
Figure 3-18. Kadeghe G Fue. Screenshot of the PICS Android based software. 26 March 2014. Morogoro, Tanzania
Figure 3-19. Kadeghe G Fue. Field experiment site at Sokoine University of Agriculture, latitude 6.84, longitude 37.65, downloaded from maps.google.com. 23 May 2014. Morogoro, Tanzania
Figure 3-20. Kadeghe G Fue. Experimental plot layout. 29 January 2014. Morogoro, Tanzania
Figure 3-21. Kadeghe G Fue. Two treatments with three replication plots. 10 January 2014. Morogoro, Tanzania
Figure 3-22. Kadeghe G Fue. Tower elevation and tank installations. 03 February 2014. Morogoro, Tanzania
Figure 3-23. Kadeghe G Fue. Straight pipe installations at the site. 23 February 2014. Morogoro, Tanzania
Figure 3-24. Kadeghe G Fue. First experiment to test similarity of the sensors. 11 March 2014. Morogoro, Tanzania
Figure 3-25. Kadeghe G Fue. Sensor water percolation investigation layout. 12 March 2014. Morogoro, Tanzania

CHAPTER 4
RESULTS AND DISCUSSIONS

Sensor Calibration Tests

The data were collected and recorded every ten minutes over a 24 hour period using the layout in Figure 3-24. Figure 4-1 shows the readings, which were taken at 8 inches (200 mm) depth in the soil profile. The readings show that the sensors were recording consistently, except sensor number 4, which had a significant offset indicating a problem. This observation indicates that the TDT sensors are accurate in the sense stated before. The conclusion that sensor number 4 is defective might be proved wrong by the second experiment below, where sensor number 4 was kept together with sensor number 3 at a depth of 150 mm in the soil profile. From the layout in Figure 3-24, 925 sensor readings (data points) were collected for all the sensors. The readings taken over time were each correlated to sensor 4. Correlation represents the degree of linear association between two measured sensor readings.
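For reference, the coefficients reported below can be computed with the standard Pearson formula; the sketch that follows is simply that definition in code, applied to paired readings from two sensors (any spreadsheet or statistics package gives the same values).

    /* Pearson correlation coefficient between two equal-length series of
       sensor readings; returns a value in [-1, +1]. */
    #include <math.h>

    double pearson_r(const double *x, const double *y, int n)
    {
        double sx = 0.0, sy = 0.0, sxy = 0.0, sxx = 0.0, syy = 0.0;
        for (int i = 0; i < n; i++) {
            sx  += x[i];
            sy  += y[i];
            sxy += x[i] * y[i];
            sxx += x[i] * x[i];
            syy += y[i] * y[i];
        }
        double num = (double)n * sxy - sx * sy;
        double den = sqrt((double)n * sxx - sx * sx)
                   * sqrt((double)n * syy - sy * sy);
        return (den > 0.0) ? num / den : 0.0;  /* guard against zero variance */
    }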
The correlation table verifies that the sensors are strongly correlated, since all values are above +0.5. See Table 4-1.

Table 4-1. Correlation table for sensor 4 and the others in the 1st experiment
Sensor pair | Correlation
Sensor 1, Sensor 4 | 0.93
Sensor 2, Sensor 4 | 0.78
Sensor 3, Sensor 4 | 0.79
Sensor 5, Sensor 4 | 0.81
Sensor 6, Sensor 4 | 0.69

In the usual sense, 0 indicates no correlation and +1 indicates strong correlation. That means sensor 1 and sensor 4 are strongly correlated, while sensor 6 and sensor 4 are more weakly correlated. Taylor (1990) defined the correlation coefficient as representing the degree of linear association between two measured variables. This shows that sensor number 4 was responding like the others but was reading lower values compared to them. Furthermore, Figure 4-2 shows the sensor water percolation experiment to a depth of 350 mm in the soil profile, using the layout from Figure 3-25. In it, 2998 sensor readings were collected by the PICS. Sensor 3 and sensor 4 were kept together, since they were only moderately correlated compared to the others, as represented in Table 4-1. Sensor 4 read the same as sensor 3, which indicates that the first experiment may have given a wrong impression about the accuracy of sensor number 4. The wrong impression may have been caused by wrong sensor placement; the sensor should always be placed carefully when buried in the soil profile. In Figure 4-2, the peak value on 13 March 2014 from 12:55 pm to 2:35 pm was caused by slight rainfall. It was not heavy enough to disrupt the experiment. Readings were taken at different times to investigate the speed of water percolation in the soil profile. Sensor 1 readings at 5 cm, sensor 3 readings at 15 cm, sensor 5 readings at 25 cm, and sensor 2 readings at 35 cm were used to generate Table 4-2.

Table 4-2. Sensor readings (% soil moisture) at different depths at different times
Depth (cm) | 12-03-14 16:05 | 12-03-14 18:25 | 13-03-14 0:55 | 13-03-14 8:05 | 13-03-14 12:45 | 14-03-14 5:05
5 | 40.1 | 37.1 | 35.5 | 35.4 | 37.4 | 34
15 | 42.9 | 40.4 | 38.8 | 38.2 | 39.1 | 37
25 | 53.4 | 44.5 | 42.3 | 41.5 | 41.8 | 39.2
35 | 53.4 | 53 | 43.8 | 44.8 | 44.8 | 43.1

Table 4-2 represents data selected from Figure 4-2 to analyze the optimum soil moisture of the soil. Figure 4-3 represents the graph derived from Table 4-2. The red and green straight lines represent the optimum soil moisture obtained on the third day, 14 March 2014, at 5:05 am. The water drains more uniformly above the 20 cm mark than below it; the lower parts of the soil retain water for a long time. Field capacity is determined when saturated soil has been left to drain for 2 to 3 days; at that point we assume that the remaining soil water content represents field capacity. In this case the field capacity is between 35% and 40%. The soil texture is clay loam, so this range is acceptable to a certain extent. The PICS should maintain that level. Determining the optimum soil sensor position and soil moisture percentage in the soil profile is a very challenging task for researchers and farmers alike. Muñoz Carpena et al. (2005) discussed that, due to the soil's natural variability, the location and number of soil water sensors may be crucial, and future research should include optimization of sensor placement. Mafuta et al. (2013) discussed sensor positioning in the root zone for drip irrigated farmland. Sensor positioning in the root zone of the plant is crucial, because it determines the amount of water to be applied during each irrigation event.
A sensor placed very deep in the soil allows the irrigation system to apply water down to that depth, beyond the plant roots; the water below the plant roots is lost through deep percolation. On the other hand, a very shallow sensor promotes light irrigation, consequently failing to apply water into the root zone and therefore stressing the plants. The results in Figure 4-2 indicate the optimum depth and the percentage of water at that depth at different times. In their study, the Texas Water Development Board (2004) described maize as a deep-rooted crop, and Morris (2006) discussed the conclusion that about 70% of water uptake by crops takes place at this sensor depth, which defines the effective root zone in that case. Based on those conclusions, the sensors in the subsequent field experiments were placed at about 20 cm depth. The effective root zone of most okra plants grows to 35-40 cm in the first 3-4 weeks and then extends further to 2 feet or more. Based on the graph, at the 20 cm depth the field capacity is more than 35% but less than 40%. This level is set to control moisture based on the average root growth.

Precision Irrigation Control System (PICS) Field Deployment

After the development of the PICS, field deployment was conducted to observe the performance of the system. This experiment is crucial since the performance of the PICS in a harsh environment is of central importance. Recordings of the sensor testing were taken using the PICS; although this was not a true calibration test, it demonstrated how the system can be designed to calibrate the sensors. The water balance control was set at 35% soil moisture at a depth of 20 cm, and the readings were recorded every 20 minutes. At this stage, the machine attempts a control action every 4 minutes. The balancing of the limits and water uptake should stay near the predetermined lower limit. As the water balance control mark is increased, more water is required to maintain it and more irrigation is commanded. Hence, correct determination of the lower and upper limits is of great importance for the farmer to obtain maximum plant growth performance while saving water.

Control of water using two limits, a lower limit and an upper limit, is of great importance as it reduces the time used to switch the valves on or off and hence saves water. This control is, however, fairly coarse when examined closely in operation. Figure 4-4 shows control of the water supply to the plants using the upper and lower limits. The system irrigates until it detects 39% soil moisture and then stops irrigating until depletion falls below 31% (31% is not a permanent wilting point). The observations in Table 4-2 show that drip-irrigated soil using emitters of 0.5 litres per hour takes only about 20-30 minutes to gain more than 10% soil moisture once roughly 250 ml of water has been applied near the sensors. This observation leads to the conclusion that if we irrigate for only 4 minutes, less than 3% soil moisture will be added to the soil profile.
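The two-limit behaviour just described is a classic hysteresis (bang-bang) controller. The sketch below is a minimal illustration of that logic with the 31%/39% limits from Figure 4-4; it is not the PICS firmware, and the moisture readings are placeholders.

    LOWER, UPPER = 31.0, 39.0  # % soil moisture limits from Figure 4-4

    def control_step(moisture_pct, valve_open):
        """Return the new valve state for one 4-minute control cycle."""
        if valve_open:
            return moisture_pct < UPPER   # keep irrigating until 39% is reached
        return moisture_pct < LOWER       # start only once depletion passes 31%

    valve = False
    for reading in [35.2, 33.0, 30.8, 33.5, 36.9, 39.2, 38.5]:  # placeholder %
        valve = control_step(reading, valve)
        print(f"moisture={reading:.1f}%  valve={'OPEN' if valve else 'CLOSED'}")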
Figure 4-1. Kadeghe G Fue. Sensor similarity test results. 14 June 2014. Morogoro, Tanzania. [Plot: % soil moisture versus date/time, 11-12 March 2014, for sensors 1-6 at 200 mm depth.]

Figure 4-2. Kadeghe G Fue. Water percolation results for experiment 2. 14 June 2014. Morogoro, Tanzania. [Plot: % soil moisture versus date/time, 12-14 March 2014, for sensors 1-6.]

Figure 4-3. Kadeghe G Fue. Water percolation graph with respect to time and soil profile depth. 14 June 2014. Morogoro, Tanzania. [Plot: soil profile depth (cm) versus % soil moisture at six times between 12 and 14 March 2014.]

Figure 4-4. Kadeghe G Fue. Water balancing at 35% with +4% upper and -4% lower limits (31%-39%). 14 June 2014. Morogoro, Tanzania. [Plot: % soil moisture versus date/time for sensors 4, 5 and 6.]

CHAPTER 5
CHALLENGES AND EXPERIENCES GAINED IN THIS RESEARCH

The research produced many experiences and challenges that should be considered in advance for future work. The challenges occurred in the laboratory, at the field site, and as system and sensor failures. The system was designed to be fully wirelessly controlled, which posed challenges whenever the wireless module suffered power failures while running on the low-power solar panel. A low-power solar panel gives an economic advantage in developing-country applications, but in this system optimum power was difficult to achieve without random failures. The PICS sometimes overheated its components, leading to module failure; it was twice returned to the laboratory for repair and maintenance. This affected the data collection activity, which needs to be continuous without gaps. The required water pressure, 15 psi (103 kPa), was not available in the initial design. Raising the tank was not initially planned but became necessary during the experiment: the initial tower height of 6 feet (2 metres) was not enough to obtain the minimum required pressure, so the height was increased to 15 feet (5 metres). Also, water availability is a great challenge for precision irrigation technologies, which need water at all times for deficit irrigation control. Aphid pests were present and threatened to kill many of the plants.
These pests induced irregularity among the plants and hence influenced the results and the plants' water consumption. Experiment number 2 essentially failed because all the small plants were attacked by the aphids; chemical control of the aphids proved successful in the other experiments compared with experiment number 2.

Mafuta et al. (2013) discussed a very common challenge: sensor disturbance. It was also observed that there is a high possibility of disturbing the sensors and wires during field work, for example during weeding and other farm activities. In this study the wires were disturbed, and even the drippers were disturbed. This eventually induces irregularity in data collection and in the general control action. The short period of one year to complete the research limited further testing and modification of the system. Also, some of the experiments that were disturbed by the high rainfall of 2014 were not fully carried out to validate results obtained in previous experiments. This also prevented further research into consumer acceptance of the system.

Sensor placement is a great challenge for spatially variable irrigation. It is difficult to obtain consistent data that could trigger regular control of irrigation. Placing the sensors in different locations that are irrigated from the same water source yields different readings; this indicates that drip irrigation often irrigates the field irregularly. The PICS faces the challenge of comparing the sensor readings and deciding when and how much to irrigate. Figure 5-1 shows what happened with sensor placement variations: water from the same source irrigates differently across the field. While sensors 5 and 6 appear to become more stable on 28 March 2014, sensor 4 fluctuates with the irrigation control. However, this might be due to either sensor placement or field soil variability. The high peaks in Figure 5-1 were caused by heavy rainfall during the night of 28 March 2014.

The design of the PICS had some problems. The connection to the PICS is wireless only; no cable connection capability was provided. This caused problems, especially when the Wi-Fi module failed. A new design should consider including an RS232 or USB connection. The system can use a GSM module to connect, but incorporating newer technologies such as 4G and LTE would increase its communication capability, and remote updates of the database would add flexibility. The need to follow field information over Wi-Fi or by SMS always increases costs to the farmer.

Figure 5-1. Kadeghe G Fue. Sensors 4, 5 and 6 take readings from the same water supply but in different parts of the field; irrigation irregularities while providing water from the same source. 14 June 2014. Morogoro, Tanzania.
[Figure 5-1 plot: % soil moisture versus date/time, 26-28 March 2014, for sensors 4, 5 and 6.]

CHAPTER 6
CONCLUSIONS AND RECOMMENDATIONS

The PICS was successfully built using considerably low-cost technologies, demonstrating that a low-cost control system is possible for developing countries. This new technology prototype can be used for farming in developing countries, and the modularity of the PICS, built from low-cost devices, allows easy replacement of parts. Results of the experiments showed that the PICS controls irrigation using signals from the Acclima TDT moisture sensors. This type of automation achieved relatively reliable control of moisture at a depth of 20 cm and 35% soil moisture. The machine was able to collect all the data regarding rainfall, humidity and temperature. The data collected had some problems, but at least 80% of the data were correct and allowed system control. The experiments showed that effective control is achieved when the system is allowed to act every 4 minutes rather than every 20 minutes or every hour; more precise control comes from reducing the time taken to collect sensor information.

The precision irrigation control system has shown great potential for applicability on African farms. Systems like this will need to be replicated for demonstration use on African farms. The cost of replicating one unit is still very high, at more than approximately 300 USD; including one soil moisture sensor brings the cost to about 545 USD (see Table 6-1). This cost is still considerably high for Tanzanian farmers, but it can be afforded in larger-scale production, such as grape or flower growing, where it may prove more useful. More research is needed to determine and reduce the economic cost of precision agriculture tools; most of those on the market cost more than 545 USD for a full set, and they are very delicate and easily damaged.

A farmer who cultivates horticultural crops can save from 50% to 70% of water per acre of cultivation. Most horticultural crops are watered for three hours a day, using approximately 33,000 litres per acre. Over three months of cultivation, at least 40 days are used to water the plants; 50% of the water saved over 40 days of irrigation is about 660 cubic metres. In Tanzania, most farmers use water from rivers and other alternative sources. Where they use water supplied by government authorities, each cubic metre costs about 750 Tshs (at current exchange rates approximately 50 US cents), and the charge keeps increasing. That means they would save up to about 330 USD per acre. A farmer who uses the PICS on more than 2 acres will see a profit, and even a farmer who uses the PICS on one acre but cultivates for more than two seasons will profit.
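For transparency, the seasonal-saving arithmetic above can be written out directly. The snippet below simply reproduces the text's own assumed figures, which are estimates rather than measurements.

    # Worked version of the savings arithmetic in the paragraph above.
    litres_per_day = 33_000        # water used per acre per irrigation day
    irrigation_days = 40           # watering days over a three-month season
    saving_fraction = 0.5          # low end of the claimed 50-70% saving
    price_usd_per_m3 = 0.50        # ~750 Tshs per cubic metre

    saved_m3 = litres_per_day * irrigation_days * saving_fraction / 1000
    print(saved_m3, "cubic metres saved")                          # 660.0
    print(saved_m3 * price_usd_per_m3, "USD saved per acre/season")  # 330.0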
The instrument performed very well in the field. The shield used was able to withstand the dry and wet seasons, and the system was not damaged. The instrument's wireless connection performed well: the system was reachable at all times using both the laptop and the phone. The soil moisture sensors performed very well in the soil, and no sensor failed during any of the tests. The system's performance also depended on the rainfall and humidity sensors.

The use of mobile smartphones is still very low in Tanzania, but growth is significant and signals future opportunities; for now, low usage will limit the adoption of precision irrigation equipment in Tanzania. About three quarters of Tanzanians can read and write, but they are not reading agricultural reports, which is another limit to the system's adoption in Tanzania. Awareness of the importance of commercial horticultural farming is of great significance in Tanzania; most farmers will adopt precision agriculture if they find more world markets that require quality products.

More research should focus on determining the minimum cost farmers could incur to purchase precision agriculture tools. The cost of a technology generally decreases with the number of users; hence, the more users require the instrument, the cheaper it becomes. The government of Tanzania, through the Commission for Science and Technology (COSTECH), should modify the policy of technology transfer to farmers and conduct a national search for new technologies that can be improved for smallholder farmers. Also, the National Fund for Advancement of Science and Technology (NFAST), organized by COSTECH, should include advancing existing technologies or technologies that were developed in the universities. The government and non-governmental organizations should sponsor practical testing of technologies for farmers, make sure that technology is transferred to farmers, and encourage farmers to use new technologies. The PICS prototype provides technology advancement in Tanzania; local production of such systems would give Tanzania an opportunity to be a leading producer of irrigation controls in Africa.

I recommend that new designs focus on reducing electricity demand, so as to reduce the cost of the solar panel and the size of the system installed in the field, and on determining which sensors are effective but cheap for Tanzania. Research should also identify a profitable horticultural crop that needs close attention to crop water use; it has been reported that grapes need more attention to water supply to maintain fruit quality. I recommend that new designs include data analysis using a smartphone, particularly Android-based, because the system already has an Android app. Also, online updating of the PICS software would allow hassle-free updates.

Table 6-1. Cost approximation for the PICS

Particulars                                         Cost ($)
PICS components                                     150
PCB costs (4 pairs)                                 50
Wireless module (RN-XV WiFly)                       30
PICS weather enclosure                              25
Orbit valve and Orbit solenoid (battery operated)   60
Solar panel and system battery                      35
Soil sensors per acre (1 sensor)                    175
La Crosse rain gauge                                20
Total cost of PICS                                  545

LIST OF REFERENCES

Adnan, M., Singleton, A., and Longley, P. (2010). Developing efficient web-based GIS applications. Centre for Advanced Spatial Analysis Working Papers Series, Paper 153.

Aiyelaagbe, I., and Ogbonnaya, F.
(1996). Growth, fruit production and seed yield of okra (Abelmoschus esculentus L.) in response to irrigation and mulching. Research Bulletin No. 18, National Horticultural Research Institute, Ibadan. 13 pp.

Allen, R., Pereira, L., Raes, D., and Smith, M. (1998). Crop evapotranspiration: Guidelines for computing crop water requirements. FAO Irrigation and Drainage Paper 56. FAO, Rome, 300, 6541.

Batra, B., Inder, M., Arora, S., and Mohan, I. (2000). Effect of irrigation and nitrogen levels on dry matter production by okra (Abelmoschus esculentus L. Moench). Haryana Journal of Horticultural Science, 29(4): 239-241.

Blonquist Jr, J., Jones, S., and Robinson, D. (2006). Precise irrigation scheduling for turfgrass using a subsurface electromagnetic soil moisture sensor. Agricultural Water Management, 84(1): 153-165.

Blonquist Jr, J., Jones, S., and Robinson, D. (2005). A time domain transmission sensor with TDR performance characteristics. Journal of Hydrology, 314(1): 235-245.

Campos, A., Pereira, L., Goncalves, J., Fabiao, M., Liu, Y., Li, Y., Mao, Z., and Dong, B. (2003). Water saving in the Yellow River Basin, China. 1. Irrigation demand scheduling. Agricultural Engineering International, Vol. V (www.cigrejournal.tamu.edu).

Casadesús, J., Mata, M., Marsal, J., and Girona, J. (2012). A general algorithm for automated scheduling of drip irrigation in tree crops. Computers and Electronics in Agriculture, 83: 11-20.

Fernandez, M., Osuna, J., and Crnojevic, V. (2014). Sources of remote sensing data for precision irrigation. In Sensing Technologies for Precision Irrigation (pp. 53-67).

Fortes, P., Platonov, A., and Pereira, L. (2005). GISAREG: A GIS-based irrigation scheduling simulation model to support improved water use. Agricultural Water Management, 77: 159-179.

Hedley, C., and Yule, I. (2009). A method for spatial prediction of daily soil water status for precise irrigation scheduling. Agricultural Water Management, 96(12): 1737-1745.

Home, P., Kar, S., and Panda, R. (2000). Effect of irrigation scheduling on water and nitrogen balances in the crop root zone. Zeitschrift fur Bewasserungswirtschaft, 35(2): 223-235.

Jaikumaran, U., and Nandini, K. (2001). Development of mulch cum drip irrigation for vegetables. South Indian Horticulture, 49: 373-375.

Joosten, F. (2007). Development strategy for the export-oriented horticulture in Ethiopia. Wageningen, Netherlands, 52.

TAHA (2008). Tanzania Horticulture Association (TAHA) annual publication, Arusha.

Khriji, S., Houssaini, D., Jmal, M., Viehweger, C., Abid, M., and Kanoun, O. (2014). Precision irrigation based on wireless sensor network. IET Science, Measurement & Technology, 8(3): 98-106.

Kingalu, J. (2008). Drip irrigation and fertigation of tea. Developing Agricultural and Agri-Business Innovation in Africa. (http://info.worldbank.org/etools/docs/library/243684/session2aTzCaseStudiesTeaIrrigationMFPs.pdf)

Kigalu, J., Kimambo, E., Msite, I., and Gembe, M. (2008). Drip irrigation of tea. Agricultural Water Management, 95(11): 1253-1260.

Leib, B., Jabro, J., and Matthews, G. (2003). Field evaluation and performance comparison of soil moisture sensors. Soil Science, 168: 396-408.

Mafuta, M., Zennaro, M., Bagula, A., Ault, G., Gombachika, H., and Chadza, T. (2013). Successful deployment of a wireless sensor network for precision agriculture in Malawi. International Journal of Distributed Sensor Networks.

Mahajan, A., Moghaddam, M., Entekhabi, D., Goykhman, Y., Li, K., and Liu, M.
(2010). ...based optimal observations. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 3(4): 522-535.

Mashindano, O., Kazi, V., and Mashauri, S. (2013). Tapping export opportunities for horticulture products in Tanzania: Do we have supporting policies and institutional frameworks? ESRF Policy Brief. (http://esrf.or.tz/docs/horticulture_products_in_tanzania.pdf, revised 28 May 2014)

Morris, M. (2006). Soil moisture monitoring: low-cost tools and methods. National Center for Appropriate Technology (NCAT). Available online at: http://www.attra.ncat.org/attrapub/PDF/soil_moisture.pdf

Möller, M. (2003). Drip irrigation of tea in the Southern Highlands of Tanzania. Unpublished M.Sc. thesis, Cranfield University, Silsoe College, UK.

Möller, M., and Weatherhead, E. (2007). Evaluating drip irrigation in commercial tea production in Tanzania. Irrigation and Drainage Systems, 21(1): 17-34.

Muñoz-Carpena, R., Shukla, S., and Morgan, K. (2004). Field devices for monitoring soil water content. University of Florida Cooperative Extension Service, Institute of Food and Agricultural Sciences, EDIS.

Muñoz-Carpena, R., and Dukes, M. (2005). Automatic irrigation based on soil moisture for vegetable crops. ABE356.

Muñoz-Carpena, R., Li, Y., Klassen, W., and Dukes, M. (2005). Field comparison of tensiometer and granular matrix sensor automatic drip irrigation on tomato. HortTechnology, 15(3): 584-590.

Official Online Gateway of the United Republic of Tanzania. http://www.tanzania.go.tz/agriculture.html

Ortega, J., De Juan, J., and Tarjuelo, J. (2005). Improving water management: the irrigation advisory service of Castilla-La Mancha (Spain). Agricultural Water Management, 77: 37-58.

Paul, W. (2002). Prospects for controlled application of water and fertiliser, based on sensing permittivity of soil. Computers and Electronics in Agriculture, 36: 151-163.

Pereira, L., Teodoro, P., Rodrigues, P., and Teixeira, J. (2003). Irrigation scheduling simulation: the model ISAREG. In: Rossi, G., Cancelliere, A., Pereira, L.S., Oweis, T., Shatanawi, M., Zairi, A. (Eds.), Tools for Drought Mitigation in Mediterranean Regions. Kluwer, Dordrecht, pp. 161-180.

Qualls, R., Scott, J., and De Oreo, W. (2001). Soil moisture sensors for urban landscape irrigation: effectiveness and reliability. Journal of the American Water Resources Association, 37(3): 547-559.

Rowshon, M., and Amin, M. (2010). GIS-based irrigation water management for precision farming of rice. International Journal of Agricultural and Biological Engineering, Vol. III, No. 3 (http://www.ijabe.org).

Schueller, J. (1992). A review and integrating analysis of spatially variable control of crop production. Fertilizer Research, 33(1): 1-34.

Sergeant, A. (2004). Horticultural and floricultural exports: constraints, potential and an agenda for support. For the Tanzania Diagnostic Trade Integration Study. Available online at: http://www.tzdpg.or.tz/index.php?eID=tx_nawsecuredl&u=0&file=uploads/media/HortiFloriRpt3_1_.pdf&t=1462190503&hash=0007bf8d45b37148171e7257f52837cb294d51f5

Shuman, D., Mahajan, A., Liu, M., and Moghaddam, M. (2010). Measurement scheduling for soil moisture sensing: from physical models to optimal control. Proceedings of the IEEE, 98(11): 1918-1933.

Smajstrla, A., and Koo, R. (1986). Use of tensiometers for scheduling of citrus irrigation. Proceedings of the Florida State Horticultural Society, 99: 51-56.

Smith, M., Pereira, L., Berengena, J., Itier, B., Goussard, J., Ragab, R., Tollefson, L., and Van Hoffwegen, P.
(1996). Irrigation Scheduling: From Theory to Practice. FAO Water Report 8, FAO, Rome, 384 pp.

Taylor, R. (1990). Interpretation of the correlation coefficient: a basic review. Journal of Diagnostic Medical Sonography, 6(1): 35-39.

Teixeira, J., and Pereira, L. (1992). ISAREG, an irrigation scheduling model. ICID Bulletin, 41(2): 29-48.

Teixeira, J., Paulo, A., and Pereira, L. (1996). Simulation of irrigation demand at sector level. Irrigation and Drainage Systems, 10: 159-178.

Texas Water Development Board (2004). Agricultural water conservation practices.

Topp, G., Davis, J., and Annan, A. (1980). Electromagnetic determination of soil water content: Measurements in coaxial transmission lines. Water Resources Research, 16(3): 574-582.

Torre Neto, A., Schueller, J., and Haman, D. (2000). Networked sensing and valve actuation for spatially variable microsprinkler irrigation. In 2000 ASAE Annual International Meeting, Milwaukee, Wisconsin, USA, 9-12 July 2000 (pp. 1-17). American Society of Agricultural Engineers.

URT (United Republic of Tanzania) (2001). Rural Development Strategy (RDS), Prime Minister's Office - Regional Administration and Local Government (PMO-RALG), Dar es Salaam.

URT (United Republic of Tanzania) (2002). Agricultural Sector Development Programme (ASDP), Ministry of Agriculture, Food Security and Cooperatives (MAFC), Dar es Salaam.

URT (United Republic of Tanzania) (2011). The Tanzania Five Year Development Plan 2011/12, President's Office - Planning Commission (PO-PC), Dar es Salaam.

Varble, J., and Chávez, J. (2011). Performance evaluation and calibration of soil water content and potential sensors for agricultural soils in eastern Colorado. Agricultural Water Management, 101(1): 93-106.

Whelan, B., and McBratney, A. (2000). The "null hypothesis" of precision agriculture management. Precision Agriculture, 2(3): 265-279.

Willmott, C. (1982). Some comments on the evaluation of model performance. Bulletin of the American Meteorological Society, 63: 1309-1313.

BIOGRAPHICAL SKETCH

Kadeghe Fue received his Master of Science in agricultural and biological engineering from the University of Florida, USA, in the summer of 2014, where he studied precision agriculture, information systems and automation. He was sponsored by iAGRI under the USAID Feed the Future program. He received the Borlaug LEAP fellowship award in the fall of 2013, and then received the Pan African Conference on Science, Computing and Telecommunications (PACT) 2014 best student paper award in Arusha, Tanzania, for a paper on a solar-powered, Wi-Fi re-programmable precision irrigation controller, in July 2014. Before joining the University of Florida, he received a Bachelor of Science in computer engineering and information technology, attaining an honours degree, from the University of Dar es Salaam, Tanzania, in October 2011. He then joined Sokoine University of Agriculture in 2011 as academic staff. He teaches several courses in computer programming languages, networking, microcomputer systems and database management systems. He has supervised more than ten undergraduate students on their special projects in information systems development and others in precision agriculture. He is a graduate engineer member of the Institution of Engineers Tanzania and the American Society of Agricultural and Biological Engineers. He has specialized in applications of computers and electronics in agriculture, especially in the areas of precision agriculture, e-agriculture and software systems engineering. He has published scientific papers in ICT and applications of computers in agriculture.
Also, he does consultancy in software systems development and electronic control systems development for public and private companies.
https://ufdc.ufl.edu/UFE0047240/00001
Hunter Smarter Farming: Irrigating for Profit

The Hunter Smarter Farming: Irrigating for Profit project aims to improve the capabilities of the Hunter's dairy irrigators to increase profits by optimising dry matter (DM) production and utilisation throughout the irrigation season, concentrating on efforts to start irrigation at the right time and rate to avoid an ongoing seasonal soil moisture deficit.

Research undertaken by the Tasmanian Institute of Agriculture (TIA), as part of the national Smarter Irrigation for Profit project (2015-2018), determined that for each day irrigation start-up was delayed, pasture utilisation loss was equivalent to 105 kgDM/ha/day. Potential pasture not grown is pasture not utilised and milk not produced! TIA researchers have termed this period of under-watering at start-up "The Green Drought".

Figure 1 (below) is based upon data from a Tasmanian farm; however, it is highly typical of irrigation management practices across the industry. By delaying irrigation start-up, this farm quickly dropped DM production under this irrigator. Once irrigation was applied, soil moisture had fallen below the root zone of the plant, and subsequent irrigation applications, applied at the system's maximum capacity, were not enough to raise soil moisture into the optimal zone for plant requirements for a prolonged period.

Figure 1: Starting up irrigation too late will lead to Green Drought. Source: Dr James Hills, TIA, 2018

Please watch Dr James Hills' Dairy Research Foundation video here.

At present, usual practice on many irrigating dairy farms in the Hunter region is to use visual inspection or historic management to make scheduling decisions. But in today's variable seasonal conditions, are there better ways to apply a more informed approach? There is also a tendency to delay irrigation to postpone costs associated with water, energy and labour use, or to avoid maintenance outlay, resulting in systems not being prepared to operate when needed, or operating inefficiently. But what is the true cost of not applying water earlier? By not monitoring soil moisture early and throughout the season, irrigators become limited by the capacity of their systems. The water applied once start-up is delayed may be ineffective if the system cannot apply irrigation at the rate required to increase, as opposed to simply maintain, soil moisture within the range that is optimal for plant growth (called the Readily Available Water (RAW) range). Is the irrigation season ahead simply a battle to catch up on the irrigation missed? The most expensive water may be the water that was never applied.

How is the project assisting dairy irrigators? The project is:
- Continuously updating a resource kit of system maintenance and efficiency checklists for the main irrigation systems used in the region.
This is improving system preparedness and increasing water and energy use efficiency across the region;
- Sign-posting dairy irrigators to other resources to improve system preparedness and increase water and energy use efficiency;
- Preparing e-learning opportunities for farmers, primarily as webinars, podcasts and short videos, to increase knowledge and understanding of the factors which influence irrigation scheduling decisions, including key terminology and interpretation of water balance information;
- Extending seasonally relevant information from irrigation sites established to demonstrate the use of two irrigation scheduling fundamentals:
- Soil moisture monitoring using sensors to indicate the depth of effective water (via rainfall and irrigation), logged in real time to a web-based platform accessible via an app.
- Use of a freely available weather-based irrigation scheduling tool so that a general water balance can be determined and maintained. This is a trial app called Scheduling Irrigation Diary for Dairy, developed by the National Centre for Engineering in Agriculture, University of Southern Queensland, and Dairy Australia; and
- Communicating project learnings and undertaking basic economic analysis.

Tocal Dairy Irrigation Site

Tocal dairy is operated by NSW DPI. In early 2018 a major development of the irrigation system was undertaken, resulting in the installation of three centre pivots, and there are future plans to convert more of the irrigation system to overhead watering. The dairy has partnered with the Hunter Smarter Farming: Irrigating for Profit project to assist with improved irrigation scheduling decisions and monitoring of production and profit impacts.

Hear from Tocal Dairy's managers themselves here. Read about Tocal Dairy's plans in working with the project here.

Gloucester Irrigation Sites

The two Gloucester farms involved in the project are new to soil moisture monitoring. Their location and a timeline of their activities can be followed on Google Maps by clicking here. EM38 mapping and soil characterisation surveys were conducted to inform the most appropriate locations for soil moisture monitoring equipment. Both sites have been installed with 80 cm EnviroPro capacitance probes with sensors located at 15 cm, 40 cm and 80 cm depths; each sensor samples a sphere of about 100 mm, though 95% of the measurement comes from within 50 mm. Capacitance probes measure volumetric soil moisture content (mm); a minimal sketch of how such readings can be rolled up into a root-zone total appears below. Each property has chosen a different telemetry system (logging of data) to meet its particular requirements.

Tom Middlebrook - Bowman Farm, Barrington NSW
Milking herd: 450
Dairy farm area: 450 ha
Milking platform: 200 ha
Irrigated area: 100 ha
Irrigation system: 2 centre pivots, 1 lateral
Smarter Farming site: 15 ha centre pivot, 5 span (270m) 1900

Figure 2: Bowman Farm: EM38 mapping identified variability in soil type to be considered in locating the probe.
Figure 3: Soil samples taken from 1 m depth cores were used to characterise the various soils mapped by the EM38 survey.

Moisture monitoring equipment: Single 80 cm EnviroPro capacitance probe with 3 sensor depths, Adcon NextG telemetry logger. The system has capacity to accommodate multiple probes, but these must be located within 60 m of the logger. An automatic tipping-bucket rain gauge was also installed.

Hear from Tom himself by watching this short video.
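Following the forward reference above, here is a hedged sketch of that rollup: converting volumetric readings at the three sensor depths into a root-zone water total (mm) and checking it against a refill point at the bottom of the RAW range. The layer spans, refill value and readings are illustrative assumptions, not the EnviroPro or telemetry vendors' software.

    # depth of sensor (cm) -> soil layer (top, bottom) it represents, in cm
    layers_cm = {15: (0, 30), 40: (30, 60), 80: (60, 90)}
    readings_pct = {15: 22.0, 40: 26.5, 80: 30.0}   # placeholder volumetric %

    total_mm = 0.0
    for depth, (top, bottom) in layers_cm.items():
        thickness_mm = (bottom - top) * 10           # cm of soil -> mm of soil
        total_mm += readings_pct[depth] / 100 * thickness_mm

    REFILL_MM = 240.0  # assumed bottom of the RAW range for this soil
    print(f"root-zone water: {total_mm:.0f} mm;",
          "irrigate" if total_mm <= REFILL_MM else "hold off")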
Adam Forbes - Kywong Flat, Barrington, NSW
Milking herd: 700
Dairy farm area: 700 ha
Milking platform: 180 ha
Irrigated area: 210 ha
Irrigation system: 2 centre pivots, bike-shift, soft-hose & hard-hose
Smarter Farming site: 50 ha centre pivot, 7 span (450m) 1900

Figure 4: Kywong Flat: EM38 survey of one paddock under the pivot area identifies heavier and lighter soils.

Moisture monitoring equipment: Two 80 cm EnviroPro capacitance probes with 3 sensor depths, located in two separate paddocks with field telemetry loggers. The loggers feed into a LoRa gateway base station located in a nearby office. This system allows multiple probes to be located around the farm. An automatic tipping-bucket rain gauge was also installed.

Hear from Adam himself by watching this short video.

Irrigation Reports

Follow the story of the two Gloucester Smarter Farming farms as they make critical irrigation decisions to optimise the production and profitability of their study sites. Gain an insight into how they are using the available information and weighing up the compromises.

July-September 2019 Irrigation Report: Hear from Adam Forbes about his irrigation decisions this period at Kywong Flat.

April-June 2019 Irrigation Report: Listen to a podcast of Tom Middlebrook talking about his irrigation decisions at Bowman Farm April-June 2019. Listen to a podcast of Adam Forbes talking about his irrigation decisions at Kywong Flat April-June 2019. Both podcasts also provide a short discussion of the results and recommendations of a recent irrigation system evaluation.

February-March 2019 Irrigation Report

Summer Season Irrigation Report (November 2018 - January 2019): Hear from Tom Middlebrook on his decisions at Bowman Farm: Click here. Hear from Adam Forbes on his decisions at Kywong Flat: Click here.

Smarter Farming Resources

The Hunter Smarter Farming Project produces local resources to assist Hunter dairy irrigators to improve irrigation start-up and mitigate "Green Drought" production losses. These resources increase knowledge, understanding and preparedness to forward-plan irrigation scheduling (especially timing), which will maximise yield potential under irrigation.

Videos & podcasts

Irrigation System Evaluations - get your system operating efficiently!

Avoiding "The Green Drought" - a 101 video for all irrigators! Tasmanian Institute of Agriculture (TIA) dairy researchers, Dr James Hills and Dr Mark Freeman, explain the Green Drought phenomenon and provide practical information on avoiding lost production by commencing irrigation on time. This is a 30-minute presentation which will be sure to change your approach to timing irrigation start-up. Click here to avoid The Green Drought.

Preparing your irrigation system to make sure you are ready to start! A quick list for preparing your system. Click here for a quick overhead irrigation check.

Pumping Irrigation Water - efficient operation pays! A Tamworth dairy farm was assisted by an external irrigation consultant to identify ways to pump water more efficiently and reduce costs. Click here to reduce costs associated with moving water.

Soil Moisture Monitoring: The Hunter Smarter Farming Project uses soil moisture monitoring at the Gloucester and Hunter Valley demonstration farms to better gauge when irrigation should commence. In this video, Brian Thomson of Porosity Agricultural Services explains why soil moisture monitoring is important in making irrigation scheduling decisions and provides a good insight into choosing and installing soil moisture monitoring probes.
Click here to learn about Soil Moisture Monitoring.

Management and System Checklists

Hunter Smarter Farming Checklist: Bike Shift
Hunter Smarter Farming Checklist: CPLM
Hunter Smarter Farming Checklist: K-Line
Hunter Smarter Farming Checklist: Travellers
Hunter Smarter Farming: Irrigation Development Decisions
Hunter Smarter Farming: Soil Moisture Monitoring Check

Listen to a 15-minute podcast interview conducted by Hunter Smarter Farming Project Manager, Marguerite White, with irrigation system consultant Peter Smith (Sapphire Irrigation Consulting). Peter outlines the main priorities when it comes to having irrigation systems ready for start-up and operating efficiently during the irrigation season. After listening, download the checklists relevant to your irrigation system above - and get started, smarter!

Project Baseline Reports

SIP Soils Report 'Kywong Flat'
SIP Soils Report 'Bowman Farm'

Useful sign-posts to external resources

DairyNZ Irrigation Website - Click here. Practical guides, checklists and tips for maintaining and operating your irrigation system, as well as basic soil science and scheduling concepts explained simply. A good site to get you started, especially after watching The Green Drought presentation above.

Irrigation New Zealand Practical Resources - Click here. This site provides essential foundational irrigation information - from need-to-know soil science basics to detailed considerations when exploring advanced technologies in monitoring and application equipment.

Need to know more about soil moisture monitoring options? Go straight to this Irrigation New Zealand publication - Click here for the SMM Guide (Book 11). Dairy Australia has a new short publication on options - Click here for the SMM Information Sheet.

Contact

To find out anything further on the project, please contact Marguerite White, Project Manager, ICD Project Services at [email protected] or phone: 0447 500 415.
https://hunter.lls.nsw.gov.au/our-region/projects-and-programs/smarter-irrigation-for-profit-hunter-starting-smarter-project
A soil moisture sensor measures the water content in soil. A soil moisture probe is made up of multiple soil moisture sensors. One common type of soil moisture sensor in commercial use is the frequency domain sensor, such as the capacitance sensor. One of the major manufacturers of soil moisture sensors and probes is [http://www.aquaspy.com AquaSpy]. Measuring soil moisture is important in agriculture to help farmers manage their irrigation systems more efficiently. Not only can farmers generally use less water to grow a crop, they can also increase yields and the quality of the crop through better management of soil moisture during critical plant growth stages. Besides agriculture, many other disciplines use soil moisture sensors. Golf courses now use sensors to increase the efficiency of their irrigation systems and prevent over-watering and the leaching of fertilizers and other chemicals offsite. In urban areas, landscapes and residential lawns use soil moisture sensors that interface with an irrigation controller. Connecting a soil moisture sensor to a simple irrigation clock converts it into a "smart" irrigation controller that prevents an irrigation cycle when the soil is wet. An example of a controlling soil moisture sensor is the [http://www.aquaspy.com/default.cfm?id=201 AquaBlu].
https://en.academic.ru/dic.nsf/enwiki/10923751
George W. Bush Presidential Library - Dallas, Texas

The George W. Bush Presidential Center, located on a prominent 23-acre site on the Southern Methodist University campus in Dallas, has a goal of being one of the most restorative landscapes in the nation. Visitors to the Center experience a native Texas landscape in a 15-acre urban park, which serves as a commitment to environmental conservation and restoration. A network of paths takes visitors through restored Texas environments such as Blackland Prairie, Post Oak Savannah, and Cross Timbers Forest. The highly disturbed, 25-acre urban site required complete ecological restoration. Under a mosaic of vegetated ecological zones, Jeffrey L. Bruce & Company provided soil recommendations based on self-sustaining biological processes. Integrated water harvesting will provide over 60 percent of the water needs of the landscape by capturing runoff, rainwater, condensate, and cooling tower process water. Water quality is maintained by routing harvested water through bioswales and wetland polishing cells, which contain an elaborate palette of native vegetation. JBC's innovative water management strategy for this LEED Platinum certified facility utilizes leading-edge "smart" irrigation technology that is automated and regulated through both soil moisture sensing and real-time evapotranspiration (ETo) rates. The system contains a 400,000-gallon underground open-cell cistern which, when full, discharges polished water into a groundwater infiltration system. Jeffrey L. Bruce & Company is responsible for water resource management, irrigation engineering, subsurface drainage and agronomic soil recommendations.
http://www.jlbruce.com/george-w-bush-presidential-library
Editor's note: This article is the latest in a series discussing water consumption and use from a supply perspective and as it relates to watershed management concepts. This series is produced in connection with the Nebraska Water Balance Alliance (NEWBA) and several of its associates.

It was management consultant Peter Drucker who was first credited with the quote, "If you can't measure it, you can't improve it." Like any business, that statement rings true for anyone involved in production agriculture, and probably now more than ever. As Sutherland area farmer and NEWBA member Roric Paulman notes, "We're going to be challenged from every aspect to add value to the water we use and consume."

In previous articles in this series, we've explored water consumption and use, and possible methods to address unnecessary consumption at the watershed level. But what are the primary drivers of consumption and use, and how can we measure them to help make more informed decisions to benefit the watershed? Fortunately, growers, Natural Resources Districts (NRDs) and other stakeholders now have a range of tools at their disposal to achieve these goals. NEWBA's primary focus is to identify new and emerging technologies that measure and apply processes to help understand and adopt water balance concepts that can improve water utilization for production agriculture and other water needs. The following is not a comprehensive list of technologies, but a look at some of the latest tools available in the water management toolbox, and how using them can help landowners, growers and watershed managers add value to the water that's used and consumed.

It starts with rain

It starts with precipitation, the fundamental driver of water supply. "If you think about water balance, you have to know what's coming into the system to have any idea of what the outputs are in terms of evapotranspiration, streamflow or changes in storage. Being able to identify that number is critical," says Trenton Franz, hydrogeophysicist at the University of Nebraska-Lincoln.

A number of weather station networks through the Automated Weather Data Network (AWDN) and the National Oceanic and Atmospheric Administration (NOAA) provide precipitation data and forecasts to help growers make decisions based on how much rainfall they've already received and, in the case of models, how much rainfall they may receive. However, as Franz notes, these networks can be supplemented by local real-time sensor data. This is where Arable's Mark comes in. The Mark measures rainfall using an acoustic sensor, which also allows it to distinguish the acoustic signature of rain from hail. Growers can use this to ground-truth rainfall against weather forecasts and models, and to calibrate those models based on previous weather events, improving their accuracy for a specific field.

PLUG-AND-PLAY: One of Arable's Mark sensors is mounted on a center pivot near Gothenburg. The Mark brings everything together into a single device that's easy to install. This includes a sensor that distinguishes the acoustic signature of rain from hail. It also features a barometer, radiometer and spectrometer. (Photo by Adam Wolf)

Designed to include a number of measuring technologies in a single, portable device, the Mark includes a barometer, radiometer and spectrometer to measure humidity, estimate evapotranspiration (ET) and calculate a crop water stress index (CWSI). It also features an auxiliary plug to connect with soil moisture probes.
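One simple way to "ground-truth" a forecast against local gauge data, in the spirit described above, is a multiplicative bias correction fitted on past events. The sketch below is illustrative only; it is not Arable's method, and the event pairs are placeholders.

    # (forecast mm, gauge-observed mm) for past rainfall events
    pairs = [(5.0, 6.2), (10.0, 12.5), (3.0, 3.4)]
    bias = sum(obs for _, obs in pairs) / sum(fc for fc, _ in pairs)

    def corrected(forecast_mm):
        # scale future forecasts by the historical observed/forecast ratio
        return forecast_mm * bias

    print(f"bias factor: {bias:.2f}")
    print(f"corrected 8 mm forecast: {corrected(8.0):.1f} mm")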
Franz is collaborating with Adam Wolf, CEO at Arable, to deploy 15 to 20 Marks on farms in Nebraska this spring to establish a ground-based network, and to work with remote sensing and radar networks to ground-truth existing radar products, including AWDN and NOAA stations. And Wolf notes precipitation data itself is valuable in making informed decisions on actionable items. "For example, say I don't need to irrigate if it's raining on Thursday. That will save me on pumping costs as well as my allocation," he says. "Someone sees the data, understands the implications, and if they act on it, they receive the value. The question is, 'Do I find value in those systems?'"

What's in the bucket?

Many growers are familiar with and often use capacitance probes to monitor soil moisture. However, one of the more underutilized measurements is soil water-holding capacity, notes Paulman. "We haven't done a good enough job of understanding the water-holding capacity in our fields," he says. "What we expect to do more often is, when we do that soil sample, we put that into the soil moisture triangle, measure the percentage of clay and sand, and get a map of the field's water-holding capacity."

Measuring water-holding capacity starts with identifying the major soil type and then using a soil moisture probe placed precisely in that location and in the right topography, notes Nick Emanuel, founder of CropMetrics, a company that develops software and hardware to bring irrigation management tools, including capacitance probes, to a centralized platform.

Often, irrigation is triggered when the lightest soil in the field, the soil with the lowest water-holding capacity, shows water stress. However, watering consistently based on the lightest soil means other soils get overwatered. "If we overwater the majority soil type, we are minimizing opportunities for maximum yield and profitability on that soil type," says Emanuel. "That's what water optimizing really comes down to: first and foremost, optimizing for the majority soil type and its water-holding capacity."

The gold standard for soil moisture measurement is the neutron probe. The cosmic-ray neutron probes designed by Franz at UNL work similarly, but instead of using a radioactive source to emit neutrons, they use "fast neutrons" already in the atmosphere that collide with hydrogen atoms in the air and soil, bouncing in and out and forming a cloud of "slow neutrons" used to measure hydrogen. "You can think about the soil as a bucket. You need to know the inputs and how large your bucket is, and you need to know the status of the bucket," says Franz. "Having that spatial information is very critical in designing an irrigation prescription map. The cosmic-ray probe can help measure the size of the bucket and how much water's in the bucket, providing a cost-effective solution for producers."
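The "bucket" framing Franz describes is easy to make concrete. The sketch below tracks root-zone water as a bucket whose capacity is the soil's plant-available water-holding capacity; all numbers are illustrative assumptions, not data from the article.

    CAPACITY_MM = 150.0   # bucket size: plant-available water-holding capacity

    def step(storage_mm, rain_mm, irrigation_mm, et_mm):
        """One day of bookkeeping; overflow is lost as drainage/runoff."""
        storage_mm += rain_mm + irrigation_mm - et_mm
        return max(0.0, min(storage_mm, CAPACITY_MM))

    storage = 120.0  # today's status of the bucket (mm)
    for day, (rain, irr, et) in enumerate([(0, 0, 6), (12, 0, 5), (0, 25, 7)], 1):
        storage = step(storage, rain, irr, et)
        print(f"day {day}: {storage:.0f} mm of {CAPACITY_MM:.0f} mm")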
Li-Cor's flux towers simultaneously measure the wind velocity, gas concentration and temperature of those eddies as they transport carbon dioxide and water vapor from the atmosphere to the surface, and from the surface to the atmosphere, measuring the "flux" of water vapor and carbon dioxide that's leaving and returning. Measuring the water vapor that's leaving gives an accurate measurement of ET. The flux tower is stationary and records data every tenth of a second to provide actual ET measurements every half hour. Most towers stand 2 to 3 meters above the ground, and the footprint of a tower is generally a hundred times its height. Measuring the water input, meaning the precipitation and the surface water and groundwater, as well as measuring ET, the majority of the output, gives landowners and watershed managers a look at the entire water balance, notes McDermitt. "Anything we can do to manage ET, whether it's in crop selection or residue management, can have a very substantial impact on the residual liquid water that's available for maintaining the aquifer and ecosystem services in waterways," says McDermitt. "Most of the time, ET is the biggest consumptive component."
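The half-hourly quantity an eddy-covariance tower reports for evaporation is a latent-heat flux in W/m^2, which converts directly to a depth of water. The sketch below shows that standard conversion; the flux values are placeholders, not Li-Cor data.

    LAMBDA_J_PER_KG = 2.45e6   # latent heat of vaporisation of water (J/kg)
    INTERVAL_S = 1800          # half-hour averaging period (s)

    def le_to_mm(le_w_m2):
        # W/m^2 * s = J/m^2; divide by J/kg -> kg/m^2, and 1 kg/m^2 = 1 mm
        return le_w_m2 * INTERVAL_S / LAMBDA_J_PER_KG

    half_hourly_le = [150, 220, 310, 280]   # placeholder fluxes, W/m^2
    total_et = sum(le_to_mm(le) for le in half_hourly_le)
    print(f"ET over these periods: {total_et:.2f} mm")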
https://www.farmprogress.com/water/adding-value-water-through-more-informed-decisions?ag_brand=nebraskafarmer.com
Title: Will be Updated Soon....
Title: Performance of soybean (Glycine max) cultivars in Virginia field sites: identifying shoot phenotypes through manual and automated image analysis methods to predict yield
Title: The Digitalization of Crops & The Future of Agriculture
Title: Experimental investigation of a microwave pilot plant
Title: Feasibility study for the introduction of renewable energies in the organic cacao supply chain in Ecuador
Title: A critical appraisal of the implications of big data application and information communication technology in smallholder farming in Africa
Title: Genotype x environment interactions and stability analysis for yield and sucrose of some promising sugarcane clones
Title: Evaluation of the vigour of roselle crop (Hibiscus sabdariffa L.) using rhizobacteria for phytoremediation purposes under conditions of stress caused by copper sulfate (CuSO4)
Title: Paludiculture as a win-win option for future agricultural use of peatlands
Title: The use of attractants and biofortification with nutrients in seed production of white clover (Trifolium repens L.)
Title: Influence of an innovative biopreparation on soil microbial activity and lettuce yielding
Title: Key competitiveness and quality requirements for successful small fresh produce farmers in emerging countries
Title: Effect of Agri-mat and Grass Mulch on Soil Water Regime, Temperature and Crop Yield in Sandy Loam and Loam Soils
Title: Integration of semi-transparent photovoltaics in greenhouse systems
Title: Assessment of the apparent electrical conductivity in soils of a dried lake in Greece using an electromagnetic induction sensor
Title: Integration of plant species indigenous to the deserts of the UAE into public urban greenery of the Gulf Region: Evaluation of the potential to save irrigation water and improve soil properties
Title: Climate resilient water management approaches in Mediterranean agriculture
Title: The Role of Veterinarians from Soil to Table in Food Safety
Title: Applications of hybrid artificial intelligence models in hydrology
Title: Do rural women participate in homestead plant biodiversity conservation? A case of Homna upazila under Cumilla district
Title: Screening and biochemical responses of tomato (Lycopersicum esculentum L) genotypes for salt tolerance
Title: A Mobile-App-Geospatial-Ecosystem-Technology of Ginger-Biomedicines-Physiology Prevent Future-Pandemic: Improved Agriculture-Horticulture-Biodiversity-Wildlife-Conservation-Environment
Title: ICT Prognoses for Easing Water Scarcity and Its Energy Efficient Management: An Outline
Title: Participatory testing of precision fertilizer management technologies in the mid-hills of Nepal
Title: Large role of small relief elements in precision agriculture
Title: Applying Integrated Pest Management strategy as an alternative method for Tuta absoluta control in Zira settlement, Azerbaijan
Title: Fish consumption behaviour and perception of fish food security of low-income households in urban areas of Ghana
Title: OsFBK4, a Novel GA Insensitive Gene Positively Regulates Plant Height in Rice (Oryza sativa L.)
Title: Screening of different wheat (Triticum aestivum) varieties under a soil-less fodder production system as a climate smart strategy
Title: Modelling soil moisture dynamics for smart irrigation scheduling
Title: Weight gain and mortality rate of broiler birds fed with neem, ginger and garlic additives
Title: Agricultural Technology: Applications of Artificial Intelligence and Digital Twins in Vertical Farming (Controlled Environment Agriculture)
Title: Perspective of using Burkholderia species in Agriculture: Is it friend or enemy?
https://agri-conferences.com/speakers/2022
Abstract: There are many resources available in nature. Among these resources, water plays an important role in our day-to-day activities and in agriculture, but nowadays water has become a major problem in many parts of the world. Day by day, as the population increases, the requirement for water resources rapidly increases, so many new approaches are being implemented to conserve water resources. One way of conserving water resources is to reduce the usage of water in irrigation, and many ideas have been proposed and implemented in attempts to reduce water usage. The existing model, "Automated irrigation system using weather prediction for efficient usage of water resources" (AISWP), lacks efficiency in precise weather prediction and is also designed to work for only one particular crop. So, for better efficiency, a method called the "Irrigation Monitoring and Controlling System" (IMCS) has been proposed. In this method, the shortcomings of the AISWP method are rectified by watering the crops based on the moisture content of the soil and the weather prediction for the surroundings, obtained using a web scraper. The web scraper can also be used to adjust watering based on the type of crop being grown. The model also provides an additional feature to calculate the pH of the soil so that fertilizers can be used accordingly. Using this method, the watering of plants can be precisely monitored based on the climatic conditions and the type of crop being harvested, and the pH value of the soil can be checked to avoid soil erosion.

Cite this Research Publication: N. Nishitha, R. Vasuda, M. Poojith, and T. K. Ramesh, "Irrigation Monitoring and Controlling System", in 2020 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 2020.
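A hedged sketch of the decision logic the abstract describes is given below. It is not the authors' code: the crop thresholds, the 1 mm rain cutoff and the pH bands are illustrative assumptions, and the forecast value stands in for whatever the web scraper returns.

    CROP_THRESHOLDS = {"okra": 35.0, "tomato": 30.0}   # assumed % moisture

    def should_irrigate(crop, moisture_pct, rain_forecast_mm):
        # water only if the soil is dry for this crop and no rain is expected
        dry = moisture_pct < CROP_THRESHOLDS[crop]
        return dry and rain_forecast_mm < 1.0

    def ph_hint(ph):
        # separate advisory channel for the soil-pH feature
        if ph < 6.0:
            return "acidic soil: consider a liming amendment"
        if ph > 7.5:
            return "alkaline soil: consider an acidifying fertilizer"
        return "pH in a typical range for most crops"

    print(should_irrigate("tomato", 26.4, rain_forecast_mm=0.0))  # True
    print(ph_hint(5.4))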
https://www.amrita.edu/publication/irrigation-monitoring-and-controlling-system/
The environmental conditions inside a greenhouse are fundamental to keeping plants alive and healthy. In most cases, these greenhouses can modify the internal temperature and humidity through active systems (e.g. dehumidifiers, air conditioners, boilers, fans, water sprinklers) or through passive systems, for example the mechanized opening of air passages in roofs or facades. By deploying actuators and sensors to measure parameters such as temperature and humidity inside and outside the greenhouse, soil and/or leaf moisture, etc., a set of rules can be configured within the Smart Data System platform to control all existing active or passive systems and ensure optimal environmental conditions. Additionally, the platform can be used to reduce and control costs associated with greenhouse management, such as litres of water consumed for irrigation, kilos of fertilizer used, and quantity of products harvested. This information can then be used to gain a clear view of overall expenditure and help take actions to optimize the production process and maximize profitability.
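To make the rule-based idea concrete, here is a generic sketch of the kind of sensor-to-actuator rules described above. It is illustrative pseudologic, not the Smart Data System platform's actual API, and the thresholds are assumptions.

    def evaluate(sensors):
        # map current sensor readings to actuator actions
        actions = []
        if sensors["temp_in"] > 30.0:
            actions.append("open roof vents")
        if sensors["temp_in"] < 15.0:
            actions.append("run boiler")
        if sensors["humidity_in"] > 85.0:
            actions.append("run dehumidifier")
        if sensors["soil_moisture"] < 25.0:
            actions.append("start irrigation sprinklers")
        return actions or ["hold current state"]

    print(evaluate({"temp_in": 31.2, "humidity_in": 88.0, "soil_moisture": 24.0}))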
http://www.smartdatasystem.es/agriculture/
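As a rough illustration of the rule set this description implies, the sketch below maps sensor readings to actuator commands. The sensor names, thresholds, and rule format are invented for the example; the actual Smart Data System rule configuration is not documented here.

```python
# Hypothetical rule table: (sensor, comparator, threshold) -> actuator command.
RULES = [
    ("temp_inside_c", ">", 30.0, ("fan", "on")),
    ("temp_inside_c", "<", 16.0, ("boiler", "on")),
    ("humidity_pct",  ">", 85.0, ("roof_vent", "open")),
    ("soil_moisture", "<", 25.0, ("sprinkler", "on")),
]

def evaluate(readings: dict) -> list:
    """Return the actuator commands triggered by the current readings."""
    commands = []
    for sensor, op, threshold, command in RULES:
        value = readings.get(sensor)
        if value is None:
            continue  # sensor offline: skip the rule rather than guess
        if (op == ">" and value > threshold) or (op == "<" and value < threshold):
            commands.append(command)
    return commands

print(evaluate({"temp_inside_c": 31.2, "humidity_pct": 88.0, "soil_moisture": 40.0}))
# -> [('fan', 'on'), ('roof_vent', 'open')]
```

The same table could also drive the cost tracking the page mentions, by logging each triggered command (sprinkler minutes, fertilizer doses) with a timestamp for later aggregation.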
Politicians: politicians need communication skills because they are often delivering bad news such as scandals, failures and targets not being met. Judgement will always be cast on the way they communicate because they are constantly in the public eye, and the way they handle situations can damage their public image and/or their political party's. Politicians will often use someone to write speeches on difficult topics to try to control the damage done by bad situations, finding the positives in the situation and someone else to blame so that they aren't the ones in the public firing line. Doctors and health professionals: doctors and health professionals need to communicate good and bad news to service users, their families and each other, whether that be a bad diagnosis, the death of a family member or a handover. These professionals go through training to learn the right communication and how to communicate more effectively and with empathy. Police and law enforcement: police officers need to communicate effectively with victims, criminals and their families. They need effective communication because the way they speak to a victim can either sound like victim blaming or make it seem they aren't interested, whereas they must keep their opinions to themselves until a side is proven with evidence. Much like doctors, they are given extensive training in speaking and listening. Managers: a manager needs good communication skills as they are the team leader. They're the person everyone goes to when there is a problem or a complaint, whether that is between two colleagues or customers. Managers need to delegate jobs and responsibilities within the workplace while making sure each staff member understands what they're doing and when they're meant to be doing it, sometimes having to repeat themselves in several different ways throughout a shift. A major part of a manager's job is dealing with customers, whether general enquiries or complaints; they have to use excellent speaking and listening skills to help with enquiries, and if there is a complaint against a member of staff they have to remove that staff member from the situation so they can listen to the customer's side and then find the staff member to hear theirs. After doing so, they have the tough decision of whether to take it further and how to resolve it between themselves, the staff member and the customer. Servers: like a manager, a server needs excellent communication skills, as they have a more one-to-one customer assistance role within the workplace. Servers need to communicate with guests about both the drinks and food menus while listening to and answering questions about each menu and everything on it. Servers also need to communicate not only with their own team but with the other teams, such as the kitchen, bar and management teams, whether over orders that were miscommunicated or changed or over a complaint. Servers are also the first people customers see: they not only serve but also host, greeting and seating guests before any orders are taken. Bartenders: a bartender's communication is important because, like a server, they deal with customer requests and questions about drinks and menus. Listening skills are extremely important as they must deal with allergies and be responsible enough to stop service if a customer is slurring or showing any other signs of intoxication.
They too must work and communicate with other teams: if they serve a customer first and are told about an allergy, they must communicate with the several teams in the restaurant to inform them of the allergy, with a description of the guest or by discreetly pointing the guest out to each team. While doing all of this, they must be able to listen to those above them, such as the supervisors or managers, about the job they are doing, taking on criticism of their customer service and close-downs and putting it into practice.
https://eduzaurus.com/free-essay-samples/speaking-and-listening-skills/
In any business organization, common procedures occur in sequence; they are linear. In addition, some procedures repeat over time. The organization needs to identify such linear and repeating procedures and compile them into sets of Standard Operating Procedures (SOPs). These procedures, compiled step by step, can be an excellent learning resource for training newly joined staff in a short period of time. Let us learn about a few SOPs followed in the front office department. SOP for Handling Guest Luggage This procedure is followed by the bell desk staff at the time of the guest's arrival and departure. It goes as follows − Handling Luggage on Guest Arrival - As a bellboy, look out for newly arriving guests. - The guest vehicle stops at the hotel entrance. - Go ahead and open the vehicle door. - Greet the guest: "Welcome to (hotel_name), I am (own_name). Do you need any help with your luggage?" - Help elderly/disabled guests to get out of the vehicle if required. - Take charge of the luggage and ensure that nothing is left in the vehicle. - Ask the guest's name politely: "May I know your name, Sir/Madam?" - Tag the luggage with the guest's name. - Ask if anything fragile or perishable is in the luggage. - Add this information to the luggage tag. - Inform the guest that their luggage is with you. - Escort the guest to the hotel reception. - Inform the guest that you will be taking care of their luggage. - With the other front office staff, find out the accommodation number allotted to the guest. - Write the accommodation number on the luggage tag. - Confirm that the guest registration formality is complete. - If the room is ready, take the luggage to the room by the staff elevator. - Place the luggage on the luggage rack. - If the room is not ready, take the luggage to the store room. - Record the luggage details in the Daily Luggage Register. Handling Luggage on Guest Departure Inform the guest that you are going to the guest's accommodation to collect the luggage. Have an informal conversation with the guest: "Mr./Ms. (Guest_Name), I hope you enjoyed your stay with us. Do you need airport transport?" Collect the luggage from the guest room. If the guest needs to store the luggage long term, tag the luggage with the guest's name, accommodation number, date and time of collection, and contact number, and obtain the guest's signature on the long-term luggage request form. Confirm with the guest that nothing perishable is in the luggage. Store the luggage in the designated departure area. If the guest is leaving the hotel immediately after check-out, bring the luggage to the lobby. If a transport vehicle is ready to go, place the luggage in the vehicle. Request the guest to verify the loaded luggage. Update the departure luggage movement in the Daily Luggage Movement Register. SOP for Handling Reservation Requests The SOP goes as follows − Pick up the incoming call within three rings. Greet the guest in an audible voice, introduce yourself, and ask how you can help: "Good (morning/evening), this is Mr./Ms. own_name. How may I help you?" Wait for the guest to respond. The guest says that he/she needs accommodation in your hotel. Tell the guest that it's your pleasure. Take a new reservation form. Inform the guest about the types of accommodation in your hotel and their respective charges. Ask for the guest's name, contact number, and the type of accommodation the guest wants. Ask for the guest's dates of arrival and departure.
Check the availability of the accommodation for those dates. Briefly describe the amenities the hotel provides to its guests. If the accommodation is available, inform the guest. If exactly the same kind of accommodation is not available, ask the guest if he/she would care for another type of accommodation. Note down the guest's requirements related to the accommodation. Ask the guest if an airport pickup/drop service is required. Ask how the guest would like to settle the bill: by cash, credit, or direct billing. If the guest prefers cash or card, request part payment in cash in advance against the booking charges, or the guest's credit card details. Confirm the reservation with the guest's name, contact number, accommodation type required, payment method, and confirmation number. Conclude the conversation: "Thank you for calling hotel_name. Have a nice day!" SOP for Guest Check-in The SOP goes as follows − - Upon the guest's arrival, greet the guest. - Ask the guest for his/her name politely. - Search for the reservation record in the PMS. - Generate and print a registration card (GRC). - Hand over the GRC to the guest to verify the printed details. - Request the guest to show an ID card from an authorized institute. - Request to see the passport and visa in the case of a foreign guest. - Request the guest to fill in the following details on the GRC − - Salutation - Designation - Organization - Business or residence address with city and ZIP code - Purpose of visit - Contact number in case of emergency - Passport details - Visa details - Inform the guest about any early/late check-out policies. - Request the guest to sign the GRC. - Counter-sign the GRC. - Update the details in the guest record. - Create a guest account. - Prepare copies of the driving license/passport and visa. - Attach them to the GRC and file the entire set. SOP for Handling Wake-up Calls There are manual and automatic wake-up calls. Handling a Wake-up Call Manually The guest can request a wake-up call at the front office directly or by calling from his/her own accommodation. Ask the guest for the wake-up time and any immediate special request after getting up. Open the Wakeup Call Register and enter the following information − Salutation Guest name Accommodation number Wakeup date Wakeup time Any special immediate request, such as tea/coffee, etc. Conclude the conversation by greeting the guest again. Pass any special request for tea/coffee to the room service staff. At the time of the wake-up call, follow the given steps − Confirm the current time. Call the guest's accommodation number on the telephone. Greet the guest as per the time and inform him/her of the current time and the progress on the guest's special request. Handling a Wake-up Call Automatically Most hotels enable their guests to set an automatic wake-up call using their phones or televisions. The housekeeper must ensure that printed instructions for setting an automatic call are kept handy and visible. The guest can set an automatic call, which is notified to the PBX system and the PMS system. Even if the guest has set up an automatic call, it is the responsibility of the front office staff to give a manual wake-up call to the guest to avoid any chance of inconvenience. SOP for Guest Check-out The check-out process is generally initiated by the guest, who calls up the front office and asks to have the bill ready. - The guest arrives at the front desk. - Greet the guest. - Print a copy of the guest folio. - Hand it over to the guest for verification.
- If there is any discrepancy, assure the guest that it will be resolved. - Resolve the discrepancy immediately. - Apologize to the guest for the inconvenience. - From the guest database, confirm the guest's preferred payment method and recite it to the guest. - Settle the guest account. - Print the receipt and give it to the guest. - Ask the guest if he/she needs any assistance with luggage. - Ask the guest if transport to the airport is required. - Thank the guest for the opportunity to serve: "Hope you enjoyed your stay with us. Thank you. Good (morning/afternoon/night)." SOP for Processing Cancellation Requests The guest initiates the cancellation of the reserved accommodation. The SOP goes as follows − Request the guest's full name and reservation number. Search the guest database for the given name and reservation number. Recite the guest's name, accommodation details and the date of reservation. Ask the guest if he/she would like to postpone instead. Ask the guest for the reason behind the cancellation. Record the reason in the PMS. If the cancellation is being done by a person other than the guest, record the person's name, contact number, and relation to the guest for information. Inform the caller about any cancellation charges applicable according to the hotel policies. Cancel the reservation in the PMS. Inform the guest that the cancellation charges will be confirmed by e-mail. Send the cancellation charges plus the cancellation number to the guest by e-mail. SOP for Controlling Guest Room Keys The front office staff needs to manage at least two sets of keys. The number of sets may vary according to the guest policy. Accommodation numbers are not written on the keys, which creates problems when keys are misplaced within or around the premises. Giving the Accommodation Key to the Guest Request the guest's last name and accommodation number. Check the information given by the guest against the record in the PMS. If there is any deviation, request the guest to provide a photo ID card. Do not give away the accommodation key without proper authentication. If doubt about the guest arises and the guest refuses to cooperate, inform the front office manager immediately. If any other senior front office staff member recognizes the guest, then you can give away the duplicate key. If the guest has lost the key and needs a new one, first ascertain that the guest has indeed lost it. In that case, program a new key with the same code. Present the newly created key to the guest. You must not issue accommodation keys to any person who claims to have been sent by the guest to collect them. However, you can give a key to a non-guest if the guest has sent the person with a written authority letter addressed to the front office team. In such a case, confirm by calling the guest and accompany the non-guest to the accommodation. Giving an Occupied Accommodation's Key to Staff Authorized staff on duty are allowed to access an occupied guest accommodation for the purpose of professional work. For example, keys can be given to staff preparing vacated accommodation, laundry staff, mini-bar staff, and the bell-boy taking out the guest luggage. SOP for Turning Away Reservation Requests One thing is for sure: always try to solve the guest's accommodation problem as far as possible. Try to sell the hotel's services by giving options rather than plainly denying what the guest wants. There are a number of reasons why reservation staff may need to turn down a reservation request.
These are a few important ones − - The hotel is fully booked during busy seasons. - The guest is not interested in reserving after learning the rates. - The type of accommodation the guest desires is not available. This is how you turn down a reservation gracefully − When the guest calls to enquire, answer the call: "Good (morning/evening), this is own_name from reservations. How may I help you?" The guest says he/she would like to reserve an accommodation. Reply: "Certainly (Sir/Madam). May I have your name, mobile number and email ID, please?" The guest provides them. Further ask, "And your company/travel agency name is?" The guest replies, "I am from (Company/TA name)." Ask the guest for the check-in and check-out dates required for the reservation. Request the guest to hold the line while you search for availability of the desired accommodation. Inform the guest approximately how much time it will take. Put the call on hold and check availability. Convey the non-availability of the desired type of accommodation politely: "Sorry, sir/madam, (all the accommodations are occupied/the desired type of accommodation is not available)." Suggest a nearby sister-concern hotel, if any. Suggest the guest take another similar kind of accommodation by describing its amenities. If the guest doesn't agree, turn the request away politely: "Sorry, sir, then we don't have any other available accommodation." Record the guest data in the PMS along with the 'Turn away' reason.
https://tutorialspoint.com/front_office_management/front_office_management_sops.htm
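The Wakeup Call Register fields listed in the SOP translate naturally into a small record type. The sketch below is a hypothetical model of how a front office system might log those entries and surface due calls; it is not part of the original tutorial.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class WakeupCall:
    # Field names follow the SOP's Wakeup Call Register; the class itself
    # is an illustrative model, not from the tutorial.
    salutation: str
    guest_name: str
    accommodation_number: str
    wakeup_at: datetime
    special_request: str = ""   # e.g. "tea", "coffee"
    completed: bool = False

def due_calls(register: list, now: datetime) -> list:
    """Return the calls that should be placed at (or before) 'now'."""
    return [c for c in register if not c.completed and c.wakeup_at <= now]

register = [WakeupCall("Mr.", "Rao", "1204", datetime(2024, 5, 3, 6, 30), "coffee")]
for call in due_calls(register, datetime(2024, 5, 3, 6, 30)):
    print(f"Call room {call.accommodation_number}: greet {call.salutation} "
          f"{call.guest_name}; special request: {call.special_request or 'none'}")
    call.completed = True
```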
Step by Step: our Ibogaine Treatment Intake Process Step One: Making Contact After you make contact with Clear Sky Recovery, a case manager on our intake team will discuss your situation with you, answer whatever questions you may have, and then make an appointment for a longer telephone interview. During the telephone assessment we'll provide you with a detailed explanation of what Clear Sky Recovery's ibogaine treatment program can do for you (or for your loved ones and family members, if you are calling on behalf of somebody else), and we'll answer whatever questions you may have about being treated with ibogaine. This first extended interview usually lasts about half an hour (30 minutes), but it can take considerably longer in some cases. Please plan ahead and try to schedule the appointment for a time when you'll have at least an hour to talk with us, so that we have the chance to cover everything, answer all the questions you might have, and get a clear picture of the overall situation. We'll need your general medical history, detailed information regarding the current drug-related issues you are seeking treatment for, and a complete list of any medications you may presently be taking. You'll also need to make payment arrangements with us before continuing to the next step. We'll remain in contact with you after the initial assessment and follow up on any additional questions that may arise as your treatment date gets closer. We're very aware that this is a stressful time for you and everybody involved; we will make ourselves available for additional conference calls, or to answer any questions that arise in the process. Step Two: Making a Reservation If you are calling on behalf of someone else and making arrangements for them, our staff will need to speak with the individual who will be receiving the actual treatment before we can reserve a placement at our ibogaine detox facility. In many cases people are taking various medications that can interfere with ibogaine, and it's very important that the patient understands how and why these must be discontinued prior to arrival in Cancun and treatment with ibogaine. Our program typically lasts for one week (7 days), but a longer treatment period may be required if you are heavily dependent on opioid drugs that have a long half-life and are unable to switch to short-acting opioids (SAOs) before arriving. We usually arrange for arrivals to take place on Monday through Wednesday. Once we've received payment and you have a confirmed reservation at our treatment facility, you'll need to book a flight to Cancun International Airport (CUN). Step Three: Making Travel Arrangements After we have your flight information, we'll arrange for our driver and your case manager to meet you at the airport upon arrival in Cancun. You need a valid passport to travel to Cancun. If you do not have current ID, a company many of our clients have had positive experiences with is Easy Passports and Visas. They can be reached at [+1] 866.487.3279 and provide expedited services for obtaining whatever travel documents you need. We strongly suggest that patients travel on their own. If it's important that family members or other loved ones travel with a patient, you're more than welcome to visit our ibogaine treatment clinic upon arrival in Cancun; however, you will need to make arrangements for a rental car, or contact a limo service or taxi to take you to your hotel.
We are unable to provide limo services for family members or friends of the patient. Step Four: Arrival in Cancun After arriving in Cancun you’ll need to go through customs and immigration. Please do not travel with drugs, weapons, or any other illegal contraband. If you are opioid-dependent our doctors will provide you with morphine to stabilize you upon reaching our treatment facility. You will not be dopesick. You will not be going through withdrawal. You’ll need to fill out a customs form before exiting the plane. For fastest processing we strongly suggest writing tourism as your motivation for visiting Cancun. The address you are staying at is: Casa del Mar Carretera Punta Sam Km 4 num 17. After being processed through customs and immigration, you’ll exit the airport terminal, and make your way to street level. You’re going to see a large variety of taxis and tour operators. Almost all U.S. airlines will arrive through Terminal #3. Look over to your left and there will be a door with a sign saying, “Family and Friends.” Go through the door and we will be waiting for you with a sign.
https://clearskyibogaine.com/step-by-step-intake-process/
-On every hour, and between every shift, staff will sanitize common-touch surfaces including, but not limited to: door handles/push bars, keyboards, counters, sink knobs, bathroom stall latches, liquid chalk dispensers, telephones, computer mice, and hand sanitizer pumps/bottles -Upon starting a shift, or after coughing, sneezing, or eating, an employee must wash their hands with warm water and soap for at least 20 seconds. -Employees are required to wear a cloth mask with at least two layers of material during their shift. Rock Box Bouldering has ordered masks but their arrival date is unknown. It is ultimately the responsibility of the employee to procure a mask. -Employees will monitor the liquid chalk dispensers for adequate levels of alcohol saturation, at least once per shift -Employees will enforce the reservation system and ask those without reserved time to leave. -When stripping routes, holds will be quarantined for at least 80 hours to inactivate the virus before being pressure washed. -After pressure washing, holds will be submerged in a bleach solution for sterilization. -Carpet vacuuming at the end of the night is now only required in zones where powdered chalk was used (either from setting or from customers requesting a hold chalked up) -Employees are required to report if they have or have had a confirmed case of COVID19, including the date the test was performed. -In good weather, doors should be kept open to increase ventilation Part 2: New procedures for customers -Masks will be required for customers. We have masks on order but their arrival date is currently unknown. It is ultimately the responsibility of the customer to procure a mask. -If you feel sick, please don't come to the gym. -Occupancy will be reduced to a number in accordance with local, state, and federal guidelines -We are eliminating morning hours during this time- on weekdays we will always open at 1 PM. -During the first phase of Rock Box Reopening, only members will be allowed to climb. This phase will last for at least two weeks. We want to make sure our members have the ability to climb at a time that isn't too inconvenient given the limited occupancy. -All climbing will have reserved timeslots in our calendar system, in chunks of one hour (a short validation sketch follows this notice). If you come to the gym without making a reservation you may get lucky and find an open spot, but we highly suggest reserving time. Slots may be reserved up to a week in advance, and you can only reserve two hours of time per day. Customers are limited to three visits a week. -If you are in the gym past your reserved time you may be asked at any point to leave if someone comes in to claim the last reserved spot. -All classes and events are cancelled. -Constant Climber card punches are reduced to 7 per month to complete a card. Double punch Fridays are suspended. Rollover is still 3 to 1. -Customers will be asked to use hand sanitizer or wash their hands upon entering the gym -Powder chalk is banned- liquid chalk only. If a customer would like a specific hold dusted with powder chalk please alert the staff and we can do it for you. Part 3: Confirmed case procedures -Upon discovery of a customer or employee with a confirmed case of COVID19, the gym will close for three days for cleaning and route resetting. -If over 50% of employees are in quarantine due to symptoms (but have not been tested due to lack of availability of tests), we will treat this as a confirmed case.
-The person(s) with a confirmed case will have their shifts/reservations from the past two weeks looked up, and those who were present while they were working/climbing will be notified.
https://www.rockboxbouldering.com/covid19-procedures
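The reservation limits above (one-hour slots, booking at most a week ahead, two hours per day, three visits per week) amount to a small validation policy. Here is a hedged sketch of that check; the data model and function name are invented, and occupancy caps are assumed to be enforced separately.

```python
from datetime import date, datetime, timedelta

MAX_HOURS_PER_DAY = 2
MAX_VISITS_PER_WEEK = 3
MAX_ADVANCE_DAYS = 7

def can_reserve(existing: list, requested: datetime, today: date) -> bool:
    """existing: start times of the member's already-booked 1-hour slots."""
    if not (today <= requested.date() <= today + timedelta(days=MAX_ADVANCE_DAYS)):
        return False  # outside the one-week booking window
    same_day = [s for s in existing if s.date() == requested.date()]
    if len(same_day) >= MAX_HOURS_PER_DAY:
        return False  # already holds two hours that day
    week_start = requested.date() - timedelta(days=requested.weekday())
    visit_days = {s.date() for s in existing
                  if week_start <= s.date() < week_start + timedelta(days=7)}
    if requested.date() not in visit_days and len(visit_days) >= MAX_VISITS_PER_WEEK:
        return False  # would be a fourth distinct visit in the same week
    return True

booked = [datetime(2020, 6, 1, 13), datetime(2020, 6, 2, 13), datetime(2020, 6, 3, 13)]
print(can_reserve(booked, datetime(2020, 6, 4, 14), date(2020, 6, 1)))  # False
```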
Public parking is available at a location nearby (reservation is not possible) and charges may apply. Internet - No internet access available. Kitchen - Oven - Kitchen Bathroom Living Area - Dining area - Sitting area Pets - Pets are not allowed. Campania Highlights Pristine beaches of Capri Island(7.5 miles) This picturesque and upmarket island is home to the charming Blue Grotto sea cave and to countless pristine beaches. The Path of the Gods(7.8 miles) The Path of the Gods takes hiking and outdoor lovers from Agerola to Positano, accompanied by breathtaking views of the Amalfi Coast. Pompeii(11.4 miles) A trip to Italy is not complete without visiting the impactful archeological ruins of Pompei, declared a UNESCO World Heritage Site. Check-in 4:00 PM - 7:00 PM Check-out 8:00 AM - 10:00 AM Cancellation/ prepayment Cancellation and prepayment policies vary according to room type. Please enter the dates of your stay and check what conditions apply to your preferred room. Children and Extra Beds All children are welcome. Free! All children under 2 years stay free of charge when using existing beds. There is no capacity for extra beds in the room. Pets Pets are not allowed. The Fine Print Please contact the Keyholder 1 week before arrival to communicate your expected arrival time Please note that the full amount of the reservation is due before arrival. Interhome will send a confirmation with detailed payment information. After full payment is taken, the property's details, including the address and where to pick up the keys, will be sent to you by email. A security deposit of EUR 200 is required upon arrival for incidental charges. This deposit is fully refundable upon check-out and subject to a damage inspection of the accommodation. Bed linens and towels are not included in the room rate. Guests can rent them at the property for an additional charge of EUR 12 per person or bring their own. Review score Based on 7 reviews - Cleanliness 5.8 - Comfort 6.3 - Location 6.1 - Facilities 7.1 - Staff 8.3 - Value for money 6.7 - Breakfast 7.5
http://www.villas.com/italy/campania/sorrento/limoneto-a-priora.html
Dear Guest, The following conditions govern the contractual relationship between you and the hostel. If you place your booking through an information or reservation system, or any other tourist information location - hereafter called the "Reservation Location" - the accommodation is booked through an agency in accordance with the current booking offer. The contractual relationship is directly between you and the hostel. The following conditions and content shall be effective between you and the hostel and come into effect upon receipt of your reservation confirmation (i.e., the accommodation contract). Please read these terms and conditions carefully. 1. Completion of the accommodation contract, Location of exchange 1.1 Bookings, which may be made orally, in writing, by phone, by fax, over the Internet or by e-mail by the guest of the hostel, which may be represented by a reservation location as a mediator of the accommodation, are contractually binding. 1.2 The accommodation contract is concluded upon receipt of the hostel's booking confirmation, which a reservation location may issue as representative of the hostel. It requires no particular additional form. 1.3 The booking is made by the guest, who is the official legal representative of all persons listed in the booking and is responsible for the contractual obligations of his/her guest(s), provided that (s)he has taken on this obligation in an explicit and separate written declaration. 1.4 The reservation location acts solely as an agent of the booked accommodation service. 1.5 Bookings can be made only by legally competent persons. Booking by minors requires the prior written consent of the legal representative. 2. Reservations 2.1 A non-binding reservation, from which the guest is entitled to withdraw freely, exists only with the express agreement of the hostel or of the reservation location as a possible representative of the hostel. If such an agreement was not made, the reservation is contractually binding for the hostel and the guest as stated in Paragraphs 1.1 and 1.2. 2.2 If a non-binding reservation agreement has been made, the guest is obligated to inform the hostel or the reservation location, by the agreed deadline, whether the reservation is to be treated as binding. If the guest does not inform the hostel or the reservation location by the previously agreed deadline, the reservation is terminated accordingly (as stated in Section 1.2). 3. Services and Prices 3.1 The hostel services owed arise solely from the service description in the booking documents (brochure, accommodation list, offer letters, website) in accordance with all information and explanations contained therein. 3.2 The stated prices are final and include all additional costs, unless otherwise specifically stated or agreed. Additional costs are primarily consumption-related costs (e.g., in holiday apartments or holiday homes) as well as fees for additional services booked by the guest. 3.3 The total amount, less any advance payments, is payable upon arrival together with a detailed bill, unless otherwise agreed. 3.4 During the customer's stay in the hostel, the hostel is also entitled to request payment of accrued claims by issuing an interim invoice at any time and to require immediate payment. 3.5 For groups of 9 persons or more, unless otherwise agreed, a 10% (ten percent) deposit of the total price is required, at the latest 4 weeks after confirmation of booking.
The balance, unless otherwise agreed, is due without additional fee no later than the day before arrival. For short-term group bookings made within 8 weeks before arrival, no advance payment is required, but the bill must be paid in full immediately upon confirmation of booking. 4. Payment 4.1 Unless otherwise explicitly agreed, for bookings made at least 14 days before the arrival date, the guest who has made the contract (receipt of booking confirmation from the hostel or reservation location in oral, written or electronic form) is required to pay, within 7 days of booking, a deposit of the first night only, directly to the hostel and not to the reservation location. 4.2 For bookings made less than 14 days prior to arrival, the deposit must be paid immediately. 4.3 If the hostel is willing and able to provide the contractual services, a guest who fails to deliver the required payment upon arrival, as previously stated, can lay no claim to the contracted services. 4.4 The total residual payment, including incidental and consumption-related costs, is to be paid to the hostel on the day of departure. 4.5 For stays lasting longer than 7 days, the hostel is entitled to present interim statements for additional - especially locally - booked or unused services or consumption-related costs in accordance with the contractual arrangements. These costs are to be paid immediately. 5. Cancellation by the Guest 5.1 It should be noted that the guest - regardless of the nature of the booking and/or the length of stay - does not have a legal right to free termination or revocation of the accommodation contract. Even illness, professional reasons or transportation problems (e.g., car breakdown) do not release the guest from payment of the agreed room rate. 5.2 Unless otherwise agreed in individual cases, the guest can cancel the reservation up to 3 days before the date of arrival without any charge. The cancellation must be submitted in accordance with Section 5.6 to the hostel or the reservation location. Otherwise, in the event of cancellation or other non-use of the booked accommodation (in whole or in part), the hostel will insist on payment of the agreed total price, including the food portion. 5.3 It is acknowledged that the expenses saved by the hostel can be stated as follows: overnight stay only, 10%; bed and breakfast, 20%; half-board, 30%; full-board, 40% of the total price. 5.4 The guest is free to prove to the hostel that higher expenses were saved. In this case, the guest is only responsible for payment of the lower amount. 5.5 Travel cancellation and travel interruption insurance is strongly recommended. 5.6 Alternative 1: bookings through the reservation location. The cancellation, for accounting reasons, should be sent only to the reservation location, not to the hostel, and should be in written form. Alternative 2: bookings made directly with the hostel. The cancellation shall be addressed, for accounting reasons, only to the hostel, not to local tourism agencies, travel agencies or other agencies, and should be in written form. 5.7 Bookings for groups of 10 persons or more: reservations based on multi-bed rooms can be cancelled free of charge up to 60 days before the date of arrival. For groups based on single and double rooms, the free cancellation period ends 45 days before the date of arrival. These deadlines also apply if the contract was concluded within this period.
In case of a late cancellation within this period, the customer is obligated to pay cancellation charges according to the following conditions (a short calculation sketch follows this section): - From 59 days (44 days for bookings based on single and double rooms) to 30 days before arrival, 30 percent of the agreed total price is payable. - From 29 to 10 days before arrival, 50 percent of the agreed total price is payable. - From 9 to 1 day(s) before arrival, 80 percent of the agreed total price is payable. - In case of cancellation on the day of arrival, or of a no-show, 100 percent of the agreed total price will be charged. If the number of persons is reduced within this period by ten percent or more, the above cancellation fees apply to the cancelled places; a reduction of less than ten percent of the number of people is free of charge until 1 day before arrival. 5.8 Booked meals can be cancelled up to 8 days prior to arrival; after that, a cancellation fee of 100 percent of the agreed total price will be charged. 5.9 The customer is free to prove that the hostel suffered no loss, or a loss lower than the required compensation. 5.10 The above provisions on cancellation deadlines and fees apply accordingly, provided that no separate contractual arrangements were agreed (e.g., for bookings on special dates such as holidays, fairs, etc.). 6. Cancellation by the Hostel 6.1 If, pursuant to Section 4, the agreed or required advance payment or deposit is not made within the set deadline, the hostel has the right to cancel the reservation contract. 6.2 Furthermore, the hostel is entitled to withdraw from the contract for a serious reason. 6.3 The hostel will inform the customer of the contract cancellation in writing without delay. 6.4 In case of such cancellation, the customer is not entitled to claim damages. 7. Obligations of the guest 7.1 The guest is obliged to inform the hostel of deficiencies in the accommodation or other contractual services without delay and to allow a remedy to be sought. 7.2 The notice of defect is to be directed solely to the hostel, not to the reservation location. 7.3 A withdrawal and/or termination by the guest is only allowed if there are substantial defects and the hostel has not provided a reasonable remedy within a reasonable time set by the guest. 7.4 Claims of the guest are valid only when the defects and damage were not caused by the guest and the hostel was not able or willing to provide a remedy. 7.5 The accommodation may only be occupied by the persons agreed with the hostel. Overcrowding entitles the hostel to immediate termination of the contract and/or reasonable additional compensation. 7.6 The guest is required to do everything reasonable to remedy defects or performance problems that occur and to limit possible damage as much as possible. 7.7 Pets of any kind are forbidden, unless agreed with the hostel, and in the event of such an agreement only on the basis of the information provided on the nature and size of the animal. 8. Liability of the hostel and the reservation location 8.1 The contractual liability of the hostel for damages other than personal injury (including damage due to breach of pre-contractual, collateral and post-contractual obligations) is limited to three times the price of the stay, a) if the damage to the guest was caused by the hostel neither intentionally nor through gross negligence, or b) where the hostel is responsible for damage suffered by the guest solely through the fault of an agent. 8.2 Any liability of the hostel for items brought in under §§ 701 ff. BGB remains unaffected by this provision.
8.3 The hostel is not liable for any default in connection with services that are arranged as external services (such as sporting events, theater, exhibitions, etc.) and which are identified as external services. 8.4 The reservation location shall be liable for any errors made by it and its assistants in the mediation. For the provision of the booked service and for any defects in the service provision, the hostel is solely responsible. 8.5 The customer is liable under the law for damage caused to the inventory. 8.6 The hostel reserves the right to ask for a deposit of 10.00 EUR per person, up to a maximum of 500.00 EUR per group, upon the group's arrival, which is refunded on departure provided the group has not caused any damage to the hostel. Damages which exceed the deposit amount must be paid locally. 8.7 Should disruptions or defects in the performance of the hostel occur, the hostel will attempt to remedy the situation. If the customer fails to notify the hostel of a defect, a claim to reduce the contractually agreed remuneration is not applicable. 8.8 If the customer rents a parking space in the garage of the hostel or in a parking lot of the hostel, even if a fee is charged, the guest must abide by the statutory provisions of the City of Berlin. 8.9 Wake-up calls are executed with the utmost care by the hostel. Claims for damages, except in cases of gross negligence or willful intent, are not applicable. 8.10 News, mail and merchandise deliveries for guests are handled with care. The hostel will deliver, hold, and - upon request and for a fee - forward such items, including lost property that may be delivered to the mailing address of the hostel or to the hostel's corporate address. Claims for damages, except in cases of gross negligence or willful intent, are not applicable. The hostel has the right, no later than one month after a storage period and upon charging a reasonable fee, to give the aforementioned objects to the local lost property office. 8.11 Claims are limited to two years from the date on which the customer becomes aware of the damage or, without regard to such knowledge, three years from the date of the damage-causing event. This does not apply to liability for damages arising from injury to life, limb or health, or for other damages based on an intentional or grossly negligent breach of duty by the hostel, a legal representative or an agent of the hostel. 9. Arrival and departure times 9.1 Unless otherwise agreed, the reserved accommodation is available from 2:00 p.m. on the day of arrival. The customer has no claim to enter the reserved room earlier. 9.2 If the guest will arrive after this time, the guest is obliged to inform the hostel in due time. If the hostel is not informed within a reasonable amount of time, it is entitled to offer the room to another guest. For reservations of only one evening, the room can and will be offered to new guests two hours after the scheduled arrival time. For reservations of two days or more, the room can be offered to other guests as of 12:00 p.m. the following day. 9.3 Unless otherwise agreed, the accommodation must be vacated by 11:00 a.m. on the day of departure. 9.4 The customer is not entitled to be provided with specific rooms, unless the hostel has confirmed the provision of specific rooms in writing. 9.5 For group bookings of 9 people or more with accommodation in shared rooms, the hostel has the right to determine which guest will be placed in which room. 9.6 Reserved rooms must be occupied by the customer no later than 6:00 p.m.
on the day of arrival. If the reservation has not been guaranteed by pre-payment or a security deposit, the hostel has the right to offer the accommodation to another guest without providing alternative accommodation for the original guest. The hostel has the right to terminate contracts on the basis of late arrival. 9.7 On the agreed day of departure, the rooms must be vacated no later than 11:00 o'clock. After that, the hostel can charge the guest 100% of the list price for an additional night's stay. Contractual claims do not arise from this. The customer is free to prove to the hostel that no charge, or a reduced charge, is warranted for the use of the room beyond the checkout time stated here. 9.8 For groups of 9 persons or more, the hostel is to be handed, at the latest upon arrival, a list of all guests with full names and dates of birth. 9.9 If the total number of guests exceeds the number of persons stated in the reservation, the additional guest(s) are not entitled to accommodation within the hostel. 9.10 Accommodation of persons under the age of 18 is not permitted. Minors must be accompanied by an adult or, at least, have the written consent of a parent or guardian, including a copy of that person's identity card, in order to stay. This rule does not apply to groups of friends accompanied by a parent or a legal guardian. 9.11 For reservations with breakfast included, breakfast is provided after the first night. 10. Choice of Law and Jurisdiction 10.1 The guest can only sue the hostel at the place of accommodation (company headquarters). 10.2 The overall legal and contractual relationship between the hostel and guests who have no general place of residence or place of business in Germany is governed by German law. 10.3 If it is agreed that the guest may pay the entire accommodation price at the end of the stay, the guest must be prepared to pay the entire cost at the hostel before vacating. Jurisdiction for claims for payment of the total costs lies at the headquarters of the hostel within Germany. 10.4 Otherwise, actions brought by the hostel against the guest will take place within the guest's country of origin, unless the action is directed against merchants, legal persons under public or private law, or persons who have their domicile or residence abroad, or whose residence at the time of the legal action is not known. In these cases, the official location of the hostel is decisive. 23.08.2012
https://www.2a-hostel.de/en/agb
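The group cancellation schedule in clauses 5.7 and 5.8 is effectively a step function of days before arrival. The sketch below encodes the published percentages; the function name is invented, and the two-window reading (60 days for multi-bed groups, 45 for single/double rooms) follows clause 5.7 as reconstructed above.

```python
def group_cancellation_fee(total_price: float, days_before_arrival: int,
                           multi_bed: bool = True) -> float:
    """Cancellation fee for group bookings per clause 5.7 (illustrative)."""
    free_until = 60 if multi_bed else 45   # free-cancellation windows
    if days_before_arrival >= free_until:
        rate = 0.0
    elif days_before_arrival >= 30:
        rate = 0.30
    elif days_before_arrival >= 10:
        rate = 0.50
    elif days_before_arrival >= 1:
        rate = 0.80
    else:                                  # day of arrival or no-show
        rate = 1.00
    return round(total_price * rate, 2)

print(group_cancellation_fee(2000.0, 25))  # 1000.0 (50% band)
print(group_cancellation_fee(2000.0, 0))   # 2000.0 (no-show)
```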
Membership in the Chaine des Rotisseurs is obtained through a nomination process. A new member must be nominated by an existing member in good standing. A letter from the nominating member must be submitted to the Bailli Regional for consideration by the Council. Maximum enrollment may restrict acceptance of new members. Dress at Dinner - Gentlemen members will wear a dinner jacket (tuxedo) throughout the evening, unless the invitation advises otherwise. Dinner jackets may be either white (summer) or black. - Gentlemen guests are required to dress in a fashion similar to that of their host. - Ladies, whether members, escorts or guests, are expected to be in appropriate formal evening attire of a style and standard suited to the formal wear of the gentlemen. - It is incumbent upon members to be sure their guests are aware of these dress rules so that the high standard of decorum is maintained and the risk of embarrassment to guests is reduced. Dinner - There will be no speeches during dinner. - It is felt that Chaine members and their guests should enjoy the dishes seasoned exactly as prepared by the Chef. So that no external influence distracts from the intended taste and flavour of these dishes, salt and pepper will not be available during service. - Chaine members and their guests are expected to remain seated throughout dinner in order to avoid any disruption in the flow and movement of the service brigade. - Unless otherwise advised, all decorations are the property of the establishment. Under no circumstances are these decorations to be removed from the tables, damaged or defaced. This includes decorations made from food products, floral arrangements and china. - Chaine rules suggest that food is to be eaten when served, even if the others at the table have not yet been served, as this is the moment of peak quality and a sign of respect for, and appreciation of, the labours of the Chef and kitchen staff. - The Chaine does not expect the establishment hosting dinner to make substitutions of food courses for our members because of allergies, dislikes or medical conditions. If there is a concern, it is appropriate to inquire with the establishment as to the ingredients used in the meal or course and not indulge; in other words, pass on that particular course. A replacement dish will not be made for that course. - Chains of Office members must wear their ribbons at all functions, unless otherwise advised. Dinner Reservations for Members - An invitation is extended to a Chaine member and one accompanying person of his or her choice. - Venue capacity may restrict the number of spaces available to members. In the unlikely situation that the number of members requesting reservations exceeds the number of places at an event, reservations will be accepted in the order in which payment was received by the Argentier. - Members whose dues are in arrears will not be granted a reservation until all outstanding dues have been paid in full. - The earlier you place your reservation, the more likely you are to have a place at the table. Once you receive the invitation you are welcome to subscribe for the dinner. - Under no circumstances should members contact the host establishment to secure reservations for any event. Dinner Reservations for Guests - Guests are welcome, contingent upon the space limitations of the host establishment. - New members are not permitted the privilege of inviting guests in their first year following Intronization.
- Members may inform the Argentier of their intention to bring guests at the time they confirm their own attendance at a particular function; however, space for guests will be confirmed only after the members' reservation cut-off date. - Requests for reservations for guests will be accepted in the order in which payment was received by the Argentier. - Guests will be accommodated according to the following protocol (sketched in code below): - Chaine members from other Bailliages - Out-of-town immediate family of members - Out-of-town house guests of members - Prospective members approved by council - Other guests - It must be remembered that la Chaine is not a service club. It is not the role of la Chaine to be an entertainment arm for staff, associates or employees. The Accolade - The Accolade given after the dinner is not a critique, but rather an acknowledgement to the establishment.
https://chainecalgary.ca/membership-guidelines/
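The guest-accommodation protocol above is an ordered ranking with payment date as the tie-breaker, which can be captured as a simple sort key. The category labels below are paraphrased; only the ranking mechanism is the point of the sketch.

```python
# Priority order from the guidelines, highest first (labels paraphrased).
PRIORITY = [
    "chaine_member_other_bailliage",
    "out_of_town_family",
    "out_of_town_house_guest",
    "prospective_member_approved",
    "other_guest",
]
RANK = {category: i for i, category in enumerate(PRIORITY)}

def seat_order(requests):
    """Sort (name, category, payment_order) requests by protocol rank,
    breaking ties by when payment was received."""
    return sorted(requests, key=lambda g: (RANK[g[1]], g[2]))

requests = [("A", "other_guest", 2), ("B", "out_of_town_family", 3),
            ("C", "other_guest", 1)]
print([name for name, _, _ in seat_order(requests)])  # ['B', 'C', 'A']
```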
The parties declare that this rental does not relate to premises rented for use as a main dwelling or for mixed professional use and main dwelling. Consequently, they agree that their respective rights and obligations will be governed by the provisions of this contract, by the decree of December 28, 1976 as modified, and failing this by the provisions of the Civil Code. The premises covered by this contract are rented furnished on a seasonal basis. 2. DURATION OF THE RENTAL AGREEMENT This reservation is made and accepted for the dates selected, with check-in from 16:00 at the earliest, and the Lessee expressly agrees to have vacated the premises by 10:00 at the latest on the selected departure date. Arrival and departure times may be modified by written agreement and will incur additional costs. At the start of the rental, the Lessor will give the Lessee the keys and/or access codes and the instructions relating to the accommodation. 3. RENTAL PRICES AND CHARGES The Parties have agreed to the tariff fixed in the details of the reservation for the entire duration of the rental described in paragraph 2. The above rent includes, for the entire duration of the rental, the payment of the rental charges and supplies listed below: • City water* • Electricity* • Heating* • Air conditioning* • High-speed wifi Internet access • Television access Not included in the rent: • Sheets and towels are at the charge of the lessee: €23 per person (or €30 per person depending on the origin of the reservation) • Exit cleaning is extra: the price differs according to the apartment. • The tourist tax *except if a special rate including these additional charges has been agreed 4. RESERVATION TERMS A 50% deposit is required when booking; the balance must be paid at least one month before arrival. If the arrival is scheduled less than a month after booking, 100% will be charged upon booking. The reservation of the apartment becomes effective upon receipt of the deposit. There are two payment options: by credit card or by bank transfer. The customer has 1 day to make the transfer and send confirmation of the transfer. Without the deposit or full payment (according to the terms), the reservation cannot be confirmed. Payment of the site fees alone will not be sufficient to maintain the reservation if the deposit or full payment (according to the terms) remains unpaid; we retain the right to cancel the reservation regardless of the customer's reason. Remote payment by bank card will incur additional costs of 2.85% of the amount of the transaction. In the case of a reservation on the website, these costs will be charged afterwards, following the payment of the reservation. In order to make a transfer, here is the bank account information: Bank name: CAISSE D'EPARGNE Account name: SARL LRA Bank address: 5 Rue des Belges, 06400 CANNES Bank phone: 0 826 08 36 59 Account number: 08007002921 IBAN: FR76 1831 5100 0008 0070 0292 151 SWIFT: CEPAFRPP831 5. SECURITY DEPOSIT At the latest when entering the premises, the Lessee will give the Lessor a security deposit fixed between €1000 and €3000 depending on the apartment. It is intended to cover damage and/or deterioration of the accommodation and of the furniture and objects furnishing the accommodation caused by the Lessee. A pre-authorization is taken by credit card; the amount will be blocked and will be released within a maximum of 30 days after departure, after deduction, where appropriate, of the sums covering any damage and/or deterioration of the accommodation.
This payment can in no case be considered an advance payment of rent. The card holder must be present, accompanied by his identity document. This sum will be released as soon as possible. If this security deposit proves to be insufficient, the Lessee undertakes to supplement it. This guarantee cannot in any case make "LRA" responsible for any rental or supply from a third party, even if the services have been provided at the request of the tenant. In no case will the pre-authorization be released on the day of departure. 6. ASSIGNMENT AND SUBLETTING This reservation is concluded intuitu personae for the benefit of the sole lessee identified at the start of the contract. Any assignment of this reservation, any total or partial subletting, and any making-available, even free of charge, is strictly prohibited. The Lessee may not leave the premises at the disposal of a person outside his household, even free of charge and/or on loan. (If the rental is made on a professional basis, the Lessee may make the apartment available to his employees or partners.) 7. INVENTORY OF FIXTURES The furniture and movable property will be the subject of an inventory when the Lessee checks in and checks out. However, due to the large number of arrivals and departures at the same time, the Lessor may be unable to take the inventory of fixtures in the presence of the Lessee, who agrees that this inventory of fixtures will nonetheless be binding on him. The furniture and movable property must suffer only the wear and tear arising from the normal use for which they are intended. Any items which are missing or out of use when the present rental expires will be paid for or replaced by the Lessee. The same applies to the paintings, wallpaper, curtains, etc. The complete cleaning of the premises and, if needed, the cleaning of carpets, bedding, blankets, mattresses, etc., will be at the Lessee's expense, over and above the rent. An inventory will be given to the Lessee on the day of arrival. At check-out, the cleaner will check everything in the apartment, and the Lessor will have 30 days to let the Lessee know if something is wrong with the apartment. 8. LESSEE'S OBLIGATIONS - Upon arrival, the Lessee must prove his identity and submit the security deposit. If he cannot prove his identity and transmit the deposit on the day of arrival, the reservation may be automatically cancelled without any possibility of a refund. - The Lessee will peacefully use the rented accommodation, furniture and equipment according to the purpose given to them by the lease and will be liable for damage and losses that may occur during the term of the contract in the premises of which he has exclusive enjoyment. - The Lessee will maintain the rented accommodation and return it in a good state of cleanliness and rental repair at the end of the contract. If items in the inventory are broken or damaged, the Lessor will claim their replacement value. - The Lessee should avoid any noise likely to annoy the neighbours, especially noise emitted by radios, televisions and other devices. - The Lessee may not exercise any recourse against the Lessor in the event of theft or damage in the rented premises. - The Lessee will respect the maximum number of people according to the maximum bed capacity, in accordance with the description given to him. - The Lessee may not oppose a visit to the premises if the Lessor or his agent so requests; in case of emergency, entry is possible without warning the Lessee.
- Regarding the exit cleaning included, the kitchen and the bathroom must nevertheless be left clean and the garbage must be thrown away in the containers for the building. - If the Lessee does not arrive on the booked arrival date and does not inform the Lessor in writing (e.g. by SMS), the Lessor may rightfully try to re-let the accommodation while retaining the right to claim against the Lessee. - The Lessee cannot demand any compensation following a malfunction or works in the common parts of the building. - Late check-in will be charged €50 after 8 p.m., €80 after 10 p.m., or €100 after midnight (see the fee sketch after these conditions). - The Lessee must use the property for residential purposes only. If the Lessee decides to organize an event or to invite more people than the maximum bed capacity, the Lessee must notify the Lessor; additional charges will mandatorily apply. The Lessor will invoice compensation of at least €3000 for parties or any events (cocktails, shoots of any kind (films, video clips, etc.), private celebrations or others) if he has not been informed in writing (e.g. by SMS) and has not given permission. - Occasionally artists' paintings or posters are displayed in the apartment. In this case, these paintings or posters are entirely your responsibility; any damage will give rise to a charge. - Some apartments have toilets equipped with a motor: it is strictly forbidden to throw anything other than toilet paper into them, otherwise the toilet will be put out of use. Failure to comply with these conditions will result in significant water damage, which will be your sole responsibility and will require the intervention of a technician at a minimum cost of €350. - Smoking is strictly prohibited in the apartments: €350 will be deducted from the deposit in the event of non-compliance with this instruction. - For any late departure: release of the apartment between 10:00 and 12:00, 30% of the nightly rate will be applied; after 12:00, 90% of the nightly rate will be applied. - Concerning the outdoor jacuzzi present in some apartments, the following instructions apply: it is mandatory to take a shower before bathing in the jacuzzi. If any grains of sand are found inside the jacuzzi, they may damage the motor, and fees can be taken from the deposit. The cover must be stowed back on top of the jacuzzi, with the straps, after each use. The water quality and condition will be checked at check-out, and extra cleaning fees can be applied. If the intervention of a specialist is needed, its cost will be your responsibility. When using the jacuzzi, avoid an electrical shutdown by not using too many electrical appliances at the same time, and turn the jacuzzi off after use. If you need us to change the water and clean the jacuzzi during your stay, it will cost €150.
In no event will the lessee hold the lessor responsible in case of theft committed in the rented premises. 11. TERMINATION BY OPERATION OF LAW In the event of a breach by the Lessee of one of the contractual obligations, this lease will be terminated automatically. This termination will take effect 24 hours after a simple demand, sent by registered letter or delivered by hand, has remained unanswered. 12. ELECTION OF DOMICILE For the execution hereof, the lessor and the lessee elect domicile in their respective homes. However, in case of dispute, the home court of the lessor shall have sole jurisdiction. This agreement and its consequences are subject to French law. 13. EXTRAS We offer different extras, which you can choose and pay for during your stay. It is preferable to request extras before arrival to guarantee availability. 14. TOURIST TAX The company is subject to the tourist tax: on arrival, approximately €2.30 per night per person must be paid (free for children under 18). The amount of the tourist tax is fixed by the city and can vary between booking and arrival.
https://www.lra-cannes.com/rentals/verCondicionesGenerales.php?bk=bk_lracannes&Idioma=EN
Located in Soraga in the Trentino Alto Adige region, Locazione turistica Casa Pederiva (SOF743) has a balcony. The property is 14 km from Canazei, and private parking is featured. The apartment features 2 bedrooms, a TV with satellite channels, an equipped kitchen with a dishwasher and a microwave, and 1 bathroom with a shower. Bolzano is 42 km from the apartment, while Ortisei is 45 km away. Please inform Locazione Turistica Pederiva - SOF743 in advance of your expected arrival time. You can use the Special Requests box when booking, or contact the property directly with the contact details provided in your confirmation. A security deposit of EUR 100 is required upon arrival for incidentals. This deposit is fully refundable upon check-out and subject to a damage inspection of the accommodation. Towels can be rented (reservation needed) at EUR 4.00 per person per stay or guests can bring their own. 1 Babycot available, free of charge. 1 Extrabed(s) available, charges apply.
https://only-apartments.com/soraga-di-fassa/locazione-turistica-pederiva-sof743-prdRE5YM8
Located in Himmelpfort in the Brandenburg region, Ferienhaus Himmelpfort 102S has a garden. The accommodation is 33 km from Feldberg. The chalet features 2 bedrooms, a TV with satellite channels, an equipped kitchen with a dishwasher and a microwave, and 1 bathroom with a shower. Rheinsberg is 33 km from the chalet, while Templin is 31 km away. The nearest airport is Berlin Tegel Airport, 84 km from Ferienhaus Himmelpfort 102S. Please inform Chalet Waldhäuser - HIM101-2 in advance of your expected arrival time. You can use the Special Requests box when booking, or contact the property directly with the contact details provided in your confirmation. A security deposit of EUR 100 is required upon arrival for incidentals. This deposit is fully refundable upon check-out and subject to a damage inspection of the accommodation. Bedlinen can be rented (reservation needed) at EUR 18.00 per person per stay or guests can bring their own. 1 Babycot available, free of charge.
https://only-apartments.com/fuerstenberg-havel/chalet-waldhaeuser-him101-2-prdWVV6GZ
Free! Free public parking is available at a location nearby (reservation is not needed). Internet - Free! WiFi is available in all areas and is free of charge. Kitchen - Dining table - Coffee machine - Cleaning products - Stovetop - Oven - Kitchenware - Electric kettle - Kitchen - Washing machine - Microwave - Refrigerator Bedroom - Wardrobe/Closet - Alarm clock Bathroom - Towels - Free toiletries - Hairdryer - Linens - Bathtub or shower Living Area - Dining area - Sofa - Sitting area - Desk Media & Technology - Flat-screen TV - Cable channels - Satellite channels - CD Player - DVD Player - TV - Computer - Radio Room Amenities - Clothes rack Pets - Pets are not allowed. Accessibility - Upper floors accessible by elevator Outdoors - Balcony General - Iron - Air conditioning - All Spaces Non-Smoking (public and private) - Heating - Tile/Marble floor - Elevator - Family Rooms - Ironing facilities - Non-smoking Rooms - Safe Services - Grocery Deliveries - Airport Shuttle Building characteristics - Private apartment in building Languages Spoken - Italian - English Lazio Highlights Rome – the Eternal City (2.5 miles) Home to the Coliseum, Trevi Fountain and lots more: the only thing you'll lack in Rome is time to experience it all! Atmospheric sunsets from Rome's Palatine Hill (2.9 miles) Experience the twilight hours on Rome's founding spot - Palatine Hill boasts splendid views over the Roman Forum, Circus Maximus and the Coliseum. Ostia Antica (11.8 miles) The ancient theatre, round temple and forum are amongst the Roman remains that make up the popular archeological site of Ostia Antica. Check-in 2:00 PM - 10:00 PM Check-out 6:00 AM - 10:00 AM Cancellation/prepayment Cancellation and prepayment policies vary according to apartment type. Please enter the dates of your stay and check what conditions apply to your preferred room. Children and Extra Beds All children are welcome. All children under 2 years are charged EUR 15 per night in a crib. All children under 16 years are charged EUR 10 per night for extra beds. The maximum number of extra beds in a room is 1. Any type of extra bed or crib is upon request and needs to be confirmed by management. Additional fees are not calculated automatically in the total cost and will have to be paid for separately during your stay. Pets Pets are not allowed. The Fine Print Please inform Vatican Apartment Holiday of your expected arrival time in advance. You can use the Special Requests box when booking, or contact the property directly using the contact details provided in your confirmation. A security deposit of EUR 100 is required upon arrival for incidental charges. This deposit is fully refundable upon check-out and subject to a damage inspection of the accommodation. Payment before arrival by bank transfer is required. The property will contact you after you book to provide instructions. Review score Based on 32 reviews - Cleanliness 9.3 - Comfort 8.9 - Location 8.8 - Facilities 8.7 - Staff 9.4 - Value for money 9.3 - Free WiFi 9.6
http://www.villas.com/italy/lazio/rome/vatican-apartment-holiday.html
What do I need to do if I'm running late? Sometimes a situation may occur where you simply run behind schedule, for instance if your car breaks down on the way to the parking location or airport. In this case, it is important that you telephone the parking provider to inform them of your delay. Their staff will have your expected arrival time in their planning and will be able to change it for you. You can find the parking provider's telephone number in your confirmation email. You have made a reservation through Parkos, but your flight times have changed? If you know of this change before you depart, you can amend your booking via Manage my booking; you are able to change your reservation up to 24 hours prior to your arrival, free of charge. Are you notified of a change in your flight time at a later stage? Then please telephone the parking provider directly; you can find their telephone number in your confirmation email. Do you have any questions? Contact our customer service, available 5 days a week.
https://parkos.com.au/frequently-asked-questions/running-late.html
Offering free WiFi and free private parking, Appartamento Lucia is set in Àrbatax, just 1.7 km from The Red Rocks Beach. The property was built in 2016 and features air-conditioned accommodation with a terrace. The apartment comes with 2 bedrooms, 1 bathroom, bed linen, towels, a flat-screen TV with satellite channels, a dining area, a fully equipped kitchen, and a balcony with mountain views. This property will not accommodate hen, stag or similar parties. Please inform Appartamento Lucia in advance of your expected arrival time. You can use the Special Requests box when booking, or contact the property directly with the contact details provided in your confirmation. In response to Coronavirus (COVID-19), additional safety and sanitation measures are in effect at this property. Managed by a private host Children of any age are allowed. Children up to and including 3 years old stay for free when using an available cot. Children up to and including 5 years old stay for free when using an existing bed. No extra beds are available. Any type of extra bed or child's cot/crib is upon request and needs to be confirmed by management. WiFi is available in all areas and is free of charge. Free private parking is possible at a location nearby (reservation is not needed). Pets are not allowed.
https://www.hotel-sardinia.com/hotel/a-rbatax-appartamento-lucia_4999713-1
Located in Santa Margherita di Pula in the Sardinia region, VILLA ALBA has a patio. This villa features a garden, barbecue facilities, free WiFi and free private parking. The villa includes 3 bedrooms, a kitchen with a dishwasher and a microwave, as well as a kettle. A children's playground is available for guests at the villa to use. Santa Margherita di Pula Beach is 200 metres from VILLA ALBA. The nearest airport is Cagliari Elmas Airport, 39 km from the accommodation. Please inform VILLA ALBA in advance of your expected arrival time. You can use the Special Requests box when booking, or contact the property directly with the contact details provided in your confirmation. This property will not accommodate hen, stag or similar parties. A damage deposit of EUR 500 is required. The host charges this 7 days before arrival. This will be collected by bank transfer. You should be reimbursed within 7 days of check-out. Your deposit will be refunded in full via bank transfer, subject to an inspection of the property. Children of any age are allowed. Children up to and including 1 year old stay for free when using an existing bed. No extra beds are available. Any type of extra bed or child's cot/crib is upon request and needs to be confirmed by management. WiFi is available in all areas and is free of charge. Free private parking is possible on site (reservation is not possible). Pets are not allowed.
https://www.cagliarilastminute.com/hotel/santa-margherita-di-pula-villa-alba_5971608-1
Located in Budoni, 1.3 km from Spiaggia di Porto Ainu and 1.6 km from Spiaggia Capannizza, Appartamenti Budoni provides accommodation with free WiFi and a garden with a barbecue and garden views. Featuring a kitchen with a fridge and a stovetop, each unit also comes with a satellite flat-screen TV, ironing facilities, a wardrobe and a seating area with a sofa. There is a fully equipped private bathroom with a bidet and a hairdryer. The apartment offers a terrace. Baia Sant'Anna Beach is 1.9 km from Appartamenti Budoni. The nearest airport is Olbia Costa Smeralda Airport, 39 km from the accommodation. Please inform Appartamenti Budoni in advance of your expected arrival time. You can use the Special Requests box when booking, or contact the property directly with the contact details provided in your confirmation. This property will not accommodate hen, stag or similar parties. Children of any age are allowed. Guests of any age stay for free when using an available extra bed. No cots are available. Any type of extra bed or child's cot/crib is upon request and needs to be confirmed by management. WiFi is available in all areas and is free of charge. Free public parking is possible at a location nearby (reservation is not needed). Pets are allowed on request. Charges may be applicable.
https://www.hotel-sardinia.com/hotel/budoni-appartamenti-budoni_1440338-1
Rue Guillaume Onfroy, 35400 Saint-Malo, France. Located 3.6 km from Ferry Terminal du Naye, 3.8 km from Solidor Tower and 4 km from Palais du Grand Large, Holiday Home La Hulotais provides accommodation situated in Saint Malo. With free private parking, the property is 2.5 km from Sablons Beach and 3 km from Sillon Beach. The air-conditioned holiday home consists of 4 bedrooms, a kitchen with dining area, and 2 bathrooms with shower. A TV is provided. Cale de Dinan Ferry is 5 km from the holiday home, while Casino Barrière Saint-Malo is 5 km away. The nearest airport is Pleurtuit Airport, 12 km from Holiday Home La Hulotais. A security deposit of EUR 400 is required upon arrival for incidentals. This deposit is fully refundable upon check-out and subject to a damage inspection of the accommodation. Please inform Holiday Home La Hulotais in advance of your expected arrival time. You can use the Special Requests box when booking, or contact the property directly with the contact details provided in your confirmation. Please note that the full amount of the reservation is due before arrival. Interhome will send a confirmation with detailed payment information. After full payment is taken, the property's details, including the address and where to collect keys, will be emailed to you. This property will not accommodate hen, stag or similar parties. Towels can be rented (reservation needed) at EUR 5.00 or guests can bring their own.
https://only-apartments.com/saint-malo/holiday-home-la-hulotais-prd0266GK
Free! Free public parking is available at a location nearby (reservation is not possible). Internet - Free! WiFi is available in all areas and is free of charge. Kitchen - Dining table - Coffee machine - Stovetop - Oven - Kitchenware - Kitchen - Washing machine - Refrigerator Bedroom - Wardrobe/Closet Bathroom - Towels - Bidet - Hairdryer - Shower - Walk-in Shower - Toilet paper - Linens - Bathtub or shower - Toilet Living Area - Dining area - Sitting area - Desk Room Amenities - Trash cans - Clothes rack Pets - Free! Pets are allowed. No extra charges. Accessibility - Upper floors accessible by stairs only Building characteristics - Private apartment in building Miscellaneous - Air Conditioning - Heating - Family Rooms Languages Spoken - Italian - English Area Info Closest Landmarks - Guinigi Tower 0.4 miles - Lucca Cathedral 0.5 miles - Basilica of San Frediano 0.6 miles - Lucca Centrale Railway Station 0.6 miles - Villa Reale 4 miles Most Popular Landmarks - San Domenico 9.5 miles - Leaning Tower of Pisa 10.2 miles - Pisa Cathedral 10.2 miles - Piazza dei Miracoli 10.3 miles - Botanical Gardens of Pisa 10.4 miles Restaurant: Gli Orti di Via Elisa (0.3 miles) Supermarket: CONAD (0.1 miles) Market: Via dei Bacchettoni (0.1 miles) Ski lift: Abetone (22.4 miles) Check-in 2:00 PM - 8:00 PM Check-out Until 10:30 AM Cancellation/prepayment Cancellation and prepayment policies vary according to apartment type. Please enter the dates of your stay and check what conditions apply to your preferred room. Children and Extra Beds Children cannot be accommodated at the hotel. Any additional children or adults are charged EUR 10 per night for extra beds. The maximum number of extra beds in a room is 1. Any type of extra bed or crib is upon request and needs to be confirmed by management. Additional fees are not calculated automatically in the total cost and will have to be paid for separately during your stay. Pets Free! Pets are allowed. No extra charges. Cash only This property only accepts cash payments. Please inform Lidia Guest House of your expected arrival time in advance. You can use the Special Requests box when booking, or contact the property directly using the contact details provided in your confirmation. A security deposit of EUR 100 is required upon arrival for incidental charges. This deposit is fully refundable upon check-out and subject to a damage inspection of the accommodation. Payment before arrival by bank transfer is required. The property will contact you after you book to provide instructions.
http://www.booking.com/hotel/it/lidia-guest-house.html
Route des Bains, 1911 Leytron, Wallis. Apartment Les Chalets de Marie A20 is set in Ovronnaz. Featuring a terrace, the apartment is in an area where guests can engage in activities such as skiing and cycling. Boasting a DVD player, the apartment has a kitchen with a dishwasher, a microwave and a fridge, a living room, a dining area, 1 bedroom, and 1 bathroom with a shower. A TV with cable channels is offered. A security deposit of CHF 300 is required upon arrival for incidentals. This deposit is fully refundable upon check-out and subject to a damage inspection of the accommodation. Please note that the full amount of the reservation is due before arrival. Interhome will send a confirmation with detailed payment information. After full payment is taken, the property's details, including the address and where to collect keys, will be emailed to you. Please inform Apartment Les Chalets de Marie A20 in advance of your expected arrival time. You can use the Special Requests box when booking, or contact the property directly with the contact details provided in your confirmation. This property will not accommodate hen, stag or similar parties. The office will be closed on Sundays. For Sunday arrivals please contact the office well in advance to arrange arrival details.
https://only-apartments.com/leytron/apartment-les-chalets-de-marie-a20-prdLV8MXN
Free! Free public parking is available on site (reservation is not needed). Internet Kitchen - Stovetop - Kitchenware - Electric kettle - Microwave - Refrigerator - Kitchenette Bedroom - Wardrobe/Closet Bathroom - Hairdryer Living Area - Dining area - Sofa - Sitting area Media & Technology - Flat-screen TV - Cable channels - Radio Pets - Pets are not allowed. General - Iron - All Spaces Non-Smoking (public and private) - Hardwood/Parquet floors - Heating - Family Rooms - Non-smoking Rooms Services - Airport Shuttle (surcharge) - Wake-up service - 24-Hour Front Desk View - City view - Landmark view Languages Spoken - Russian - Lithuanian - English - German Check-in From 2:00 PM Check-out Until 12:00 PM Cancellation/prepayment Cancellation and prepayment policies vary according to apartment type. Please enter the dates of your stay and check what conditions apply to your preferred room. Children and Extra Beds All children are welcome. Any additional children or adults are charged EUR 10 per night for extra beds. The maximum number of extra beds in a room is 1. Any type of extra bed or crib is upon request and needs to be confirmed by management. Additional fees are not calculated automatically in the total cost and will have to be paid for separately during your stay. Pets Pets are not allowed. Cash only This property only accepts cash payments. The Fine Print A deposit via bank transfer is required to secure your reservation. The property will contact you with instructions after booking. Please inform Jolando Apartment of your expected arrival time in advance. You can use the Special Requests box when booking, or contact the property directly using the contact details provided in your confirmation. Review score Based on 31 reviews - Cleanliness 9.4 - Comfort 8.6 - Location 9.4 - Facilities 8.9 - Staff 9.8 - Value for money 8.9 - Free WiFi 9.2 - Breakfast 10
http://www.villas.com/lithuania/kaunas-county/kaunas/jolando-apartment.html
"Good" 7.5 /10 2 guests reviews Apartment Front de Mer 14390 CABOURG 1 apartment, 25 m² 4 people 1 bedroom 1 bathroom 50-72 €/night 1 adult 2 adults 3 adults 4 adults 5 adults 6 adults 7 adults 8 adults 9 adults 10 adults 11 adults 12 adults 13 adults 14 adults 15 adults 16 adults 18 adults 20 adults 25 adults 30 adults 40 adults 50 adults 100 adults 0 child 1 child 2 children 3 children 4 children 5 children 6 children 7 children 8 children 9 children 10 children Age of children Close Please wait... we are looking for availabilities We speak Instant confirmation rental Cabourg No booking or credit card fees Health information This establishment has taken additional health and hygiene measures to ensure you have a peaceful stay. Safety features Staff follow all safety protocols as directed by local authorities Cleaning Linens, towels and laundry washed in accordance with local authority guidelines Guest accommodation is disinfected between stays Physical distancing Physical distancing rules Screens or physical barriers placed between staff and guests in appropriate areas Contactless check-in/check-out General information Check-in : 16:00 - 18:00 Check-out : 10:00 WiFi available for purchase Lift Non smoking Private car park Beachfront Television Activities Horse riding Located 550 metres from Ecole de Voile, less than 1 km from Central Beach and a 14-minute walk from Cabourg Casino, Apartment Front de Mer.8 provides accommodation situated in Cabourg. This beachfront property offers access to a balcony. This apartment includes 1 bedroom, a living room and a TV, an equipped kitchenette with a dining area, and 1 bathroom with a shower. Private parking is available at the apartment. Cabourg Beach is 1.2 km from Apartment Front de Mer.8, while Cabourg Raccourse is 2.5 km away. The nearest airport is Deauville - Normandie Airport, 29 km from the accommodation. Availability and prices Apartment (4 people) Apartment of 25 m² Balcony Television Toilets Kitchen Dining area Kitchenette Coffee machine Microwave Stovetop Fridge Oven Kitchenware Bedding Bedroom 1 : 1 double bed Bedroom 2 : 1 sofa bed Bathroom Shower This apartment features a microwave, dining area and stovetop. WiFi is available in all areas and charges are applicable. Private parking is possible at a location nearby (reservation is not possible) and charges may be applicable. Pets are not allowed. All children are welcome. There is no capacity for extra beds in the room. The maximum number of total guests in a room is 4. There is no capacity for cots in the room. Cards accepted Visa Mastercard A security deposit of EUR 300 is required upon arrival for incidentals. This deposit is fully refundable upon check-out and subject to a damage inspection of the accommodation. Please inform Apartment Front de Mer in advance of your expected arrival time. You can use the Special Requests box when booking, or contact the property directly with the contact details provided in your confirmation. Please note that the full amount of the reservation is due before arrival. Interhome will send a confirmation with detailed payment information. After full payment is taken, the property's details, including the address and where to collect keys, will be emailed to you. Towels can be rented (reservation needed) at EUR 5.00 per person per stay or guests can bring their own. Bedlinen can be rented (reservation needed) at EUR 10.00 per person per stay or guests can bring their own. 
Apartment Front de Mer 14390 CABOURG (15 km from Deauville) GPS coordinates: 49.28728, -0.12517 Train station Gare de Dives Cabourg Gare de Dives sur Mer Port Guillaume Port Port Guillaume Tourist office Office de Tourisme de Cabourg Office de Tourisme de Varaville Office de Tourisme d'Houlgate Office de Tourisme Campagne et Baie de l'Orne Beach Plage du Menhir Plage des Romantiques Plage du Home Plage Armengaud Franceville-Plage Cinema Cinéma Le Normandie Cinéma Casino Casino Grand Casino de Cabourg Casino d'Houlgate Golf Golf public de Cabourg Golf Club de Cabourg Le Home Activities nearby Golf course (within 3 km)
https://www.gites.fr/gites_apartment-front-de-mer_cabourg_h3028990_en.htm
Tengia, 6760 Faido, Ticino, Switzerland. Holiday Home Rustico Panorama is located in Lavorgo. Private parking is available on site. The holiday home is equipped with 2 bedrooms, 1 bathroom, a flat-screen TV with satellite channels, a dining area, a fully equipped kitchen, and a terrace with mountain views. If you would like to discover the area, cycling is possible in the surroundings. Andermatt is 49 km from the holiday home, while Bellinzona is 48 km away. The nearest airport is Lugano Airport, 75 km from Holiday Home Rustico Panorama. A security deposit of CHF 400 is required upon arrival for incidentals. This deposit is fully refundable upon check-out and subject to a damage inspection of the accommodation. Please inform Holiday Home Rustico Panorama in advance of your expected arrival time. You can use the Special Requests box when booking, or contact the property directly with the contact details provided in your confirmation. Please note that the full amount of the reservation is due before arrival. Interhome will send a confirmation with detailed payment information. After full payment is taken, the property's details, including the address and where to collect keys, will be emailed to you. 1 Babycot available, charges apply.
https://only-apartments.com/faido/holiday-home-rustico-panorama-prdNQY530
Route de Moléson, 1663 Gruyères, Fribourg, Switzerland. Situated in Moleson in the Canton of Fribourg region, Apartment Grevire features a balcony. The apartment has mountain views and is 32 km from Lausanne. The apartment is fitted with 1 separate bedroom, 1 bathroom, a fully equipped kitchen with a dining area, and a TV. If you would like to discover the area, skiing and cycling are possible in the surroundings. Leukerbad is 49 km from the apartment, while Montreux is 17 km from the property. The nearest airport is Belp Airport, 53 km from Apartment Grevire. A security deposit of CHF 300 is required upon arrival for incidentals. This deposit is fully refundable upon check-out and subject to a damage inspection of the accommodation. Please inform Apartment Grevire in advance of your expected arrival time. You can use the Special Requests box when booking, or contact the property directly with the contact details provided in your confirmation. Please note that the full amount of the reservation is due before arrival. Interhome will send a confirmation with detailed payment information. After full payment is taken, the property's details, including the address and where to collect keys, will be emailed to you.
https://only-apartments.com/gruyeres/apartment-grevire-prdELLZWV
Introduction Any computer vision enthusiast has surely heard of the YOLO models for object detection. Ever since YOLOv1 was introduced in 2015, it has enjoyed immense popularity within the computer vision community. Subsequently, multiple versions (YOLOv2, YOLOv3, YOLOv4, and YOLOv5) have been released, albeit by different authors. In this article, we give a brief background on all the object detection models of the YOLO family, from YOLOv1 to YOLOv5. Basic Working of YOLO Object Detector Models As with every ML-based model, precision and recall are very important for judging accuracy and robustness. Thus the creators of YOLO strove to design an object detection model that maximizes mAP (mean average precision). - Recall is the ratio of true positives to all ground-truth positives. - Precision is the ratio of true positives to all positive predictions (correct or incorrect). - The mean of the average precision over all classes is called mean average precision (mAP). Besides this, the architecture of all the YOLO models follows a similar theme of components, as outlined below. - Backbone: A convolutional neural network that accumulates and produces visual features of different shapes and sizes. Classification models like ResNet, VGG, and EfficientNet are used as feature extractors. - Neck: A set of layers that integrate and blend features before passing them on to the prediction layer. Examples: feature pyramid network (FPN), path aggregation network (PAN) and Bi-FPN. - Head: Takes in features from the neck and produces the bounding box predictions. Performs classification along with regression on the features and bounding box coordinates to complete the detection process. Outputs four values, generally the x, y coordinates along with width and height. YOLOv1 – The Beginning The first YOLO model was introduced by Joseph Redmon et al. in their 2015 paper titled "You Only Look Once: Unified, Real-Time Object Detection". Until that time, RCNN models were the most sought-after models for object detection. Although the RCNN family of models was accurate, it was relatively slow, because detection was a multi-step process: finding proposed regions for bounding boxes, classifying those regions, and finally post-processing to refine the output. YOLO was created with the goal of doing away with multiple stages and performing object detection in a single stage, thus reducing inference time. Performance YOLOv1 sported a 63.4 mAP with an inference speed of 45 frames per second (22 ms per image). At that time, it was a huge improvement in speed over the RCNN family, for which inference times ranged from 143 ms to 20 seconds. Technical Improvements The basic working of the YOLO model relies upon its unified detection technique, which groups together the different components of object detection into a single feed-forward neural network. The model divides an incoming image into numerous grid cells and calculates the probability that an object resides inside each cell. This is done for all the cells that the image is divided into. After that, the algorithm groups nearby high-probability cells as a single object. Low-value predictions are discarded using a technique called non-max suppression (NMS), sketched below. The model is trained in a similar fashion, where the center of each detected object is compared with the ground truth in order to check whether the model is correct and to adjust the weights accordingly.
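To make the NMS step concrete, here is a minimal sketch of greedy IoU-based non-max suppression in NumPy. The [x1, y1, x2, y2] box layout and the 0.5 overlap threshold are illustrative assumptions, not details taken from the article.

```python
import numpy as np

def iou(box, boxes):
    # Intersection-over-union between one box and an array of boxes,
    # all given as [x1, y1, x2, y2].
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    areas_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + areas_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy NMS: repeatedly keep the highest-scoring box and drop the
    # remaining boxes that overlap it by more than iou_thresh.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        overlaps = iou(boxes[i], boxes[order[1:]])
        order = order[1:][overlaps < iou_thresh]
    return keep
```

A detector's raw output (boxes plus confidence scores) would be filtered with `keep = nms(boxes, scores)` before the surviving boxes are reported as detections.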
YOLOv2 – Better, Faster, Stronger YOLOv2 was released by Joseph Redmon and Ali Farhadi in 2016 in their paper titled "YOLO9000: Better, Faster, Stronger". The 9000 signified that YOLOv2 was able to detect over 9000 categories of objects. This version had various improvements over the previous version, YOLOv1. Performance YOLOv2 registered 78.6 mAP on the VOC 2012 dataset, performing very well compared to other object detection models. Technical Improvements YOLOv2 introduced the concept of anchor boxes. Anchor boxes are nothing but predefined areas of an image that illustrate the idealized positions of the objects to be detected. We calculate the intersection over union (IoU) of the predicted bounding box and the predefined anchor box. The IoU value acts as a threshold to decide whether the probability of the detected object is sufficient to make a prediction or not. In the case of YOLO, however, anchor boxes are not chosen at random. Instead, the YOLO algorithm examines the training data and performs clustering on it (dimension clusters), as sketched at the end of this section. All this is done to ensure that the anchor boxes represent the data on which the model will be trained, which considerably enhances accuracy. Additional Improvements - In order to adapt to different aspect ratios, the YOLOv2 model is randomly resized throughout the training process (this is called multi-scale training). - To make the model robust, YOLOv2 was trained on a combination of the COCO dataset (80 classes with bounding boxes) and the ImageNet dataset (22k classes without bounding boxes). When the model processes an image with box labels, both the detection and classification errors are calculated, whereas when it sees a box-less image, only the classification error is backpropagated. This structure is called the WordTree. - Inference speeds of up to 200 FPS and an mAP of 75.3 were achieved using a classification network architecture called Darknet-19 (the backbone of YOLO).
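Here is a minimal sketch of the dimension-clustering idea described above: k-means over ground-truth box shapes with 1 - IoU as the distance, so that the cluster centers become the anchors. The function names and the choice of k are illustrative assumptions.

```python
import numpy as np

def wh_iou(wh, anchors):
    # IoU between (width, height) pairs and anchors, assuming boxes share
    # a common corner (only the shape matters for anchor selection).
    inter = np.minimum(wh[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(wh[:, None, 1], anchors[None, :, 1])
    union = (wh[:, 0] * wh[:, 1])[:, None] + anchors[:, 0] * anchors[:, 1] - inter
    return inter / union

def kmeans_anchors(wh, k=5, iters=100):
    # Cluster box shapes with distance d = 1 - IoU ("dimension clusters").
    wh = np.asarray(wh, dtype=float)
    anchors = wh[np.random.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(wh_iou(wh, anchors), axis=1)  # nearest = highest IoU
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = np.median(wh[assign == j], axis=0)
    return anchors
```

Run over all the training-set box shapes, the k cluster centers returned here would serve as the predefined anchors.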
YOLOv3: An Incremental Improvement In 2018, Joseph Redmon and Ali Farhadi introduced the third version in their paper "YOLOv3: An Incremental Improvement". This model was a little bigger than the earlier ones, but more accurate and still fast enough. Performance YOLOv3-320 has an mAP of 28.2 with an inference time of 22 milliseconds (on the COCO dataset). This is 3 times faster than the SSD object detection technique, with similar accuracy. Technical Improvements YOLOv3 consisted of 75 convolutional layers without fully connected or pooling layers, which greatly reduced the model size and weight. It provided the best of both worlds: residual blocks (from the ResNet model) for multi-level feature learning with a feature pyramid network (FPN), while maintaining minimal inference times. - A feature pyramid network is a feature extractor that extracts different types/forms/sizes of features for a single image. It concatenates all the features so that the model can learn both local and general features. - By employing logistic classifiers and activations, the class predictions of YOLOv3 go above and beyond RetinaNet-50 and RetinaNet-101 in terms of accuracy. - As the backbone, the YOLOv3 model uses the Darknet-53 architecture. YOLOv4 – Optimal Speed and Accuracy of Object Detection YOLOv4 was not released by Joseph Redmon but by Alexey Bochkovskiy et al. in their 2020 paper "YOLOv4: Optimal Speed and Accuracy of Object Detection". Performance The YOLOv4 model stands atop other detection models like EfficientDet and ResNeXt50. It has the Darknet-53 backbone (the same as YOLOv3). Technical Improvements YOLOv4 introduced the concepts of the bag of freebies (techniques that improve model performance without increasing the inference cost) and the bag of specials (techniques that increase accuracy while increasing the computation cost). It has a speed of 62 frames per second with an mAP of 43.5 percent on the COCO dataset. Bag of Freebies (BoF) - Data augmentation techniques: CutMix (cut and mix multiple images containing objects that we want to detect), MixUp (random mixing of images), Cutout, and Mosaic data augmentation. - Bounding box regression loss: experimentation with different types of bounding box regression losses, for example MSE, IoU, CIoU, DIoU. - Regularization: different regularization techniques like Dropout, DropPath, Spatial Dropout, DropBlock. - Normalization: introduced cross mini-batch normalization, which has been shown to increase accuracy, along with techniques like iteration-batch normalization and GPU normalization. Bag of Specials (BoS) - Spatial attention modules (SAM): generate feature maps by utilizing the inter-spatial feature relationships. They help increase accuracy but increase training time. - Non-max suppression (NMS): in the case of objects that are grouped together, we get multiple bounding boxes as predictions; non-max suppression removes false/excess boxes. - Non-linear activation functions: different activation functions were tested with the YOLOv4 model, for example ReLU, SELU, Leaky ReLU, Swish, Mish. - Skip connections like weighted residual connections (WRC) or cross-stage partial connections (CSP). YOLOv5: Latest YOLO? YOLOv5 is supposedly the next member of the YOLO family, released in 2020 by the company Ultralytics just a few days after YOLOv4. No paper has been released, and there is a debate in the community over whether it justifies the YOLO branding, as it started out as a PyTorch implementation of YOLOv3. - Also Read – Introduction to YOLOv5 Object Detection with Tutorial - Also Read – Tutorial – YOLOv5 Custom Object Detection in Colab Performance The authenticity of its performance cannot be guaranteed, as there is no official paper yet. It achieves the same if not better accuracy (mAP of 55.6) than the other YOLO models while requiring less computation power. Technical Improvements - Better data augmentation and loss calculations (now that the base of the model has shifted from C to PyTorch). - Auto-learning of anchor boxes (they no longer need to be added manually). - Use of cross-stage partial connections (CSP) in the backbone. - Use of a path aggregation network (PAN) in the neck of the model. - Easier framework to train and test (PyTorch). - Ease of use and installation. - Instead of CFG files, the new version supports YAML files, which greatly improves the layout and readability of model configuration files. A minimal usage sketch is shown below.
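As a usage sketch, YOLOv5 can be loaded through PyTorch Hub; the repository and model names below follow Ultralytics' published entry points at the time of writing, and the image URL is just an example.

```python
import torch

# Load a small pretrained YOLOv5 model from the Ultralytics repository.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

# Run inference; a file path, URL, PIL image or numpy array is accepted.
results = model('https://ultralytics.com/images/zidane.jpg')

results.print()          # summary of detected classes and inference speed
print(results.xyxy[0])   # per-detection rows: [x1, y1, x2, y2, confidence, class]
```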
https://machinelearningknowledge.ai/a-brief-history-of-yolo-object-detection-models/
Python | Document field detection using Template Matching Template matching is an image processing technique used to find the location of a small part/template within a larger image. This technique is widely used in object detection projects, such as product quality inspection, vehicle tracking, robotics, etc. In this article, we will learn how to use template matching to detect the relevant fields in a document image. Solution: The above task can be achieved using template matching. Clip out the field images and apply template matching using the clipped field images and the document image. The algorithm is simple yet extensible to more complex versions that solve the problem of field detection and localization for document images belonging to specific domains. Approach: - Clip/crop field images from the main document and use them as separate templates. - Define/tune thresholds for the different fields. - Apply template matching for each cropped field template using the OpenCV function cv2.matchTemplate(). - Draw bounding boxes using the coordinates of the rectangles fetched from template matching. - Optional: augment the field templates and fine-tune the thresholds to improve results for different document images. (A Python sketch of this pipeline is given after the lists below.) Advantages of using template matching: - Computationally inexpensive. - Easy to use and to modify for different use cases. - Gives good results when document data is scarce. Disadvantages: - Results are not as accurate as those of segmentation techniques using deep learning. - Cannot resolve overlapping patterns.
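As the article's code listing is not reproduced here, the following is a minimal sketch of the described pipeline: load the document, run cv2.matchTemplate for each cropped field template with its own threshold, and draw bounding boxes. The file names and threshold values are placeholder assumptions.

```python
import cv2
import numpy as np

# Placeholder file names: the scanned document and the cropped field templates.
doc = cv2.imread('document.png')
gray = cv2.cvtColor(doc, cv2.COLOR_BGR2GRAY)

# One tuned threshold per field template, as described in the approach above.
templates = {'invoice_number_field.png': 0.80, 'date_field.png': 0.75}

for path, thresh in templates.items():
    template = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    h, w = template.shape
    # Normalized cross-correlation: a higher response means a better match.
    res = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(res >= thresh)
    for x, y in zip(xs, ys):
        cv2.rectangle(doc, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite('output.png', doc)
```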
https://www.geeksforgeeks.org/python-document-field-detection-using-template-matching/?ref=rp
that apply classifiers on handcrafted features [5, 25, 4] extracted at all possible locations and scales of images. Recently, fully convolutional neural network (FCN) based methods [34, 8, 29] have brought a revolution to the field of object detection. These FCN frameworks also follow a sliding-window fashion, but their end-to-end approach of learning model parameters and image features from scratch significantly improves detection performance. R-CNN [15, 14] further improves the accuracy of object detection beyond FCN based methods. Conceptually, R-CNN contains two phases. The first phase uses region proposal methods to generate all the potential bounding box candidates in the image. The second phase then applies a CNN classifier to distinguish different objects for every proposal. Although R-CNN has become the new state-of-the-art system for general object detection [9, 33], it is very hard for it to detect small objects such as human faces and far-away cars, since the low resolution and lack of context in each candidate box significantly decrease the classification accuracy on them. Moreover, the two different stages in the R-CNN pipeline cannot be optimized jointly, which makes end-to-end training of R-CNN troublesome. In this work, we focus on one question: to what extent can a one-stage FCN perform on object detection? To this end, we present a novel FCN based object detector, DenseBox, that does not require proposal generation and can be optimized end-to-end during training. Although similar to many existing sliding-window FCN detection frameworks [34, 8, 29], DenseBox is more carefully designed to detect objects at small scales and under heavy occlusion. We train DenseBox with careful hard negative mining techniques to bootstrap the detection performance. To make it even better, we further integrate landmark localization into the system through joint multi-task learning. To verify the usefulness of landmark localization, we manually annotate a set of keypoints for the KITTI car detection dataset and will release the annotations afterward. Our contribution is two-fold. First, we demonstrate that a single fully convolutional neural network, if designed and optimized carefully, can detect objects at different scales and under heavy occlusion extremely accurately and efficiently. Second, we show that when incorporating landmark localization through multi-task learning, DenseBox further improves object detection accuracy. We present experimental results on public benchmark datasets, including MALF (Multi-Attribute Labelled Faces) face detection and KITTI car detection, which indicate that our DenseBox is the state-of-the-art system for face detection and car detection. 2 Related Work The literature on object detection is vast. Before the success of deep convolutional neural networks, widely used detection systems were based on a combination of independent components. First, handcrafted image features such as HOG [5, 45, 44], SIFT, and Fisher Vectors are extracted at every location and scale of an image. Second, object models such as the pictorial structure model (PSM) and the deformable part-based model (DPM) [11, 46, 43] allow object parts (e.g. the head, torso, arms and legs of a human) to deform subject to geometric constraints. Finally, a classifier such as boosting methods, linear SVM, latent SVM, or random forests decides whether a candidate window shall be detected as containing an object.
The application of neural networks to detection tasks such as face detection also has a long history. The first work may date back to 1994, when Vaillant et al. proposed training a convolutional neural network to detect faces in an image window. Later, in 1996 and 1998, Rowley et al. [31, 32] presented neural network based face detection systems that detect upright frontal faces in an image pyramid. There is no way to compare the performance of those early detectors with today's detection systems on face detection benchmarks. Even so, they are still worth revisiting, as we find many similarities in design with our DenseBox. Recently, several papers have proposed algorithms using deep convolutional neural networks for locating objects [34, 8, 29]. OverFeat trains a convolutional layer to predict box coordinates for multiple class-specific objects from an image pyramid. MultiBox generates region proposals from a network whose output layer simultaneously predicts multiple boxes, which are used for R-CNN object detection. YOLO also predicts bounding boxes and class probabilities directly from full images in one evaluation. All these methods use shared computation of convolutions [34, 16, 30, 24], which has been attracting increasing attention for efficient, yet accurate, visual recognition. However, most state-of-the-art object detection approaches [26, 20, 8, 14, 41] rely on R-CNN, which divides detection into two steps: salient object proposal generation and region proposal classification. Several recent works, such as YOLO and Faster R-CNN, have joined region proposal generation with the classifier in one or two stages. It has been pointed out that R-CNN with general proposal methods designed for general object detection can result in inferior performance on detection tasks such as face detection, due to lost recall for small faces and faces with complex appearance variations. These methods share similarities with ours, and we discuss them in more detail in a later section. Object detection is often coupled with multi-task learning such as landmark localization, pose estimation and semantic segmentation. Zhu et al. propose a tree structure model for joint face detection, pose estimation and landmark localization. Deep net based object detection systems are also natural candidates for integrating multi-task learning. Devries et al. learn facial landmarks and expressions simultaneously through deep neural networks. Sijin et al. simultaneously learn a pose joint regressor and a sliding-window body part detector in a deep network architecture. 3 DenseBox for Detection The whole detection system is illustrated in Fig 1. A single convolutional network simultaneously outputs multiple predicted bounding boxes and class confidences. All components of object detection in DenseBox are modeled as a fully convolutional network except the non-maximum suppression step, so region proposal generation is unnecessary. At test time, the system takes an image as input and outputs a feature map with 5 channels. If we define the left-top and right-bottom points of the target bounding box in output coordinate space as $p_t = (x_t, y_t)$ and $p_b = (x_b, y_b)$ respectively, then each pixel located at $(x_i, y_i)$ in the output feature map describes a bounding box with a 5-dimensional vector $\hat{t}_i = \{\hat{s}, \hat{d}_{x^t} = x_i - x_t, \hat{d}_{y^t} = y_i - y_t, \hat{d}_{x^b} = x_i - x_b, \hat{d}_{y^b} = y_i - y_b\}$, where $\hat{s}$ is the confidence score of being an object and the four distances denote the offsets between the output pixel location and the boundaries of the target bounding box.
Finally, every pixel in the output map is converted to a bounding box with a score, and non-maximum suppression is applied to those boxes whose scores pass the threshold. 3.1 Ground Truth Generation It is unnecessary to feed the whole image into the network for training, because that would spend most of the computation on convolving over background. A wiser strategy is to crop large patches containing faces and sufficient background information for training. In this paper, we train our network on a single scale and apply it to multiple scales for evaluation. Generally speaking, our proposed network is trained in a segmentation-like way. In training, patches are cropped and resized so that the face in the center roughly has a height of 50 pixels. The output ground truth in training is a 5-channel map with a down-sampling factor of 4. The positively labeled region in the first channel of the ground truth map is a filled circle with radius $r_c$, located at the center of a face bounding box. The radius is proportional to the bounding box size, and its scaling factor is set to 0.3 times the box size in output coordinate space, as shown in Fig 2. The remaining 4 channels are filled with the distances between the pixel location in the output map and the left-top and right-bottom corners of the nearest bounding box. Note that if multiple faces occur in one patch, we keep those faces as positive if they fall within a scale range (e.g. 0.8 to 1.25 in our setting) relative to the face in the patch center. Other faces are treated as negative samples. The pixels of the first channel, which denote the confidence score of the class, are initialized with 0 in the ground truth map, and set to 1 if they lie within the positive label region. We also find our ground truth generation quite similar to the segmentation work by Pinheiro et al. In their method, the pixel label is decided by the location of the object in the patch, while in DenseBox the pixel label is determined by the receptive field. Specifically, an output pixel is labeled 1 if its receptive field contains an object roughly in the center and at a given scale. Each pixel can be treated as one sample, since every 5-channel pixel describes a bounding box.
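To make the ground-truth construction of Section 3.1 concrete, here is a minimal NumPy sketch. It fills the score circle and, as a simplification of the paper's scheme, writes the four corner offsets only inside the positive region; the box format and helper name are assumptions for illustration.

```python
import numpy as np

def make_ground_truth(out_h, out_w, boxes, down=4, r_scale=0.3):
    # Build a 5-channel DenseBox-style target map.
    # Channel 0: filled circle of positives at each box center.
    # Channels 1-4: offsets from the pixel to the box corners.
    gt = np.zeros((5, out_h, out_w), dtype=np.float32)
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    for x1, y1, x2, y2 in boxes:  # boxes in input-image coordinates
        bx1, by1, bx2, by2 = [v / down for v in (x1, y1, x2, y2)]
        cx, cy = (bx1 + bx2) / 2, (by1 + by2) / 2
        r = r_scale * (by2 - by1)           # radius proportional to box height
        circle = (xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2
        gt[0][circle] = 1.0                 # confidence score channel
        gt[1][circle] = (xs - bx1)[circle]  # distance to left edge
        gt[2][circle] = (ys - by1)[circle]  # distance to top edge
        gt[3][circle] = (xs - bx2)[circle]  # distance to right edge
        gt[4][circle] = (ys - by2)[circle]  # distance to bottom edge
    return gt
```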
3.2 Model Design Our network architecture, illustrated in Fig 3, is derived from the VGG 19 model used for image classification. The whole network has 16 convolution layers, with the first 12 convolution layers initialized by the VGG 19 model. The output of conv4_4 is fed into four convolution layers, where the first two output a 1-channel map for the class score, and the second two predict the relative position of the bounding box via a 4-channel map. The last convolution layers act as fully connected layers in a sliding-window fashion. Multi-Level Feature Fusion. Recent works [2, 22] indicate that using features from different convolution layers can enhance performance in tasks such as edge detection and segmentation. Part-level features focus on local details of objects to find discriminative appearance parts, while object-level or high-level features usually have a larger receptive field in order to recognize objects. The larger receptive field also brings in context information to predict more accurate results. In our implementation, we concatenate the feature maps from conv3_4 and conv4_4. The receptive field (or sliding-window size) of conv3_4 is almost the same as the size of the faces in training, while conv4_4 has a much larger receptive field, which can utilize global textures and context for detection. Note that the feature map size of conv4_4 is half that of the map generated by conv3_4, hence we use a bilinear up-sampling layer to transform them to the same resolution. 3.3 Multi-Task Training. We use the ImageNet pre-trained VGG 19 network to initialize DenseBox. In initialization, we only keep the first 12 convolution layers (from conv1_1 to conv4_4), and the other layers in VGG 19 are replaced by four new convolution layers with "xavier" initialization. Like Fast R-CNN, our network has two sibling output branches. The first outputs the confidence score $\hat{y}$ (per pixel in the output map) of being a target object. Given the ground truth label $y^* \in \{0, 1\}$, the classification loss can be defined as

$$\mathcal{L}_{cls}(\hat{y}, y^*) = \|\hat{y} - y^*\|^2. \tag{1}$$

Here we use the L2 loss in both the face and car detection tasks. We did not try other loss functions such as the hinge loss or cross-entropy loss, which might seem more appropriate choices, as we find the simple L2 loss works well in our task. The second branch outputs the bounding-box regression loss, denoted as $\mathcal{L}_{loc}$. It targets minimizing the loss between the predicted location offsets $\hat{d} = (\hat{d}_{x^t}, \hat{d}_{y^t}, \hat{d}_{x^b}, \hat{d}_{y^b})$ and the targets $d^*$, formulated as

$$\mathcal{L}_{loc}(\hat{d}, d^*) = \sum_{i \in \{x^t, y^t, x^b, y^b\}} \|\hat{d}_i - d^*_i\|^2. \tag{2}$$

3.3.1 Balance Sampling. The process of selecting negative samples is one of the crucial parts of learning. Simply using all negative samples in a mini-batch would bias prediction towards negative samples, as they dominate among all samples. In addition, the detector will degrade if we penalize the loss on those samples lying on the margin between the positive and negative regions. Here we use a binary mask for each output pixel to indicate whether it is selected in training. Ignoring the Gray Zone. The gray zone is defined on the margin between the positive and negative regions. It should not be considered positive or negative, and its loss weight should be set to 0. For each non-positively labeled pixel in the output coordinate space, its ignore flag $f_{ign}$ is set to 1 only if there is a positively labeled pixel within a short pixel distance. Hard Negative Mining. Analogous to the hard-negative mining procedure in SVMs, we make learning more efficient by searching for badly predicted samples rather than random samples. After negative mining, the badly predicted samples are very likely to be selected, so that gradient descent learning on those samples leads to more robust prediction with less noise. Specifically, negative mining can be performed efficiently by online bootstrapping. In the forward propagation phase, we sort the loss (Eq 1) of the output pixels in descending order and assign the top 1% to be hard-negative. In all experiments, we keep all positively labeled pixels (samples) and a positive-to-negative ratio of 1:1. Among all negative samples, half are sampled from the hard-negative samples, and the remaining half are selected randomly from the non-hard negatives. For convenience, we set a flag $f_{sel} = 1$ for those pixels (samples) selected in a mini-batch. Loss with Mask. Now we can define the mask $M(\hat{t}_i)$ for each sample $\hat{t}_i$ as a function of the flags mentioned above:

$$M(\hat{t}_i) = \begin{cases} 0, & f^i_{ign} = 1 \text{ or } f^i_{sel} = 0, \\ 1, & \text{otherwise.} \end{cases} \tag{3}$$

Then, combining the classification (Eq 1) and bounding box regression (Eq 2) losses with the masks, our full multi-task loss can be represented as

$$\mathcal{L}_{det}(\theta) = \sum_i \Big( M(\hat{t}_i)\,\mathcal{L}_{cls}(\hat{y}_i, y^*_i) + \lambda_{loc}\,[y^*_i > 0]\,M(\hat{t}_i)\,\mathcal{L}_{loc}(\hat{d}_i, d^*_i) \Big), \tag{4}$$

where $\theta$ denotes the set of parameters in the network, and the Iverson bracket function $[y^*_i > 0]$ is activated only if the ground truth score $y^*_i$ is positive.
It is obvious that the bounding box regression loss should be ignored for negative samples (background), since there is no regression target for them. The balance between the classification and regression tasks is controlled by the parameter $\lambda_{loc}$. In our experiments, we normalize the regression target by dividing by the standard object height in the ground truth map, which works well in all experiments under this normalization. Other Implementation Details. In training, an input patch is considered a "positive patch" if it contains an object roughly at its center at a specific scale. These patches contain only negative samples around the positive ones. To fully explore the negative samples in the whole dataset, we also randomly crop patches at random scales from training images and resize them to the same size before feeding them to the network. We call this kind of patch a "random patch", and the ratio of "positive patches" to "random patches" in training is 1:1. In addition, to further increase the robustness of our model, we randomly jitter every patch before feeding it into the network. Specifically, we apply left-right flips, translation shifts (of 25 pixels), and scale deformations. We use mini-batch SGD in training with a batch size of 10. The loss and output gradients must be scaled by the number of contributing pixels, so that both loss and output gradients are comparable in multi-task learning. The global learning rate starts at 0.001 and is reduced by a factor of 10 every 100K iterations. We use the default momentum term weight of 0.9 and a weight decay factor of 0.0005.
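Pulling the pieces of Section 3.3 together, here is a compact PyTorch-style sketch of the masked detection loss of Eq. (4) with online hard negative mining. It simplifies the paper's sampling (no ignore zone or random-negative half is modeled), and the value of lambda_loc is an assumption.

```python
import torch

def densebox_det_loss(pred, target, lambda_loc=1.0):
    # pred, target: (B, 5, H, W) maps; channel 0 = score, channels 1-4 = offsets.
    score_p, score_t = pred[:, 0], target[:, 0]
    cls_loss = (score_p - score_t) ** 2            # Eq. (1), per output pixel

    pos = score_t > 0
    neg_losses = cls_loss[score_t == 0]
    # Online bootstrapping: keep only the top 1% worst-predicted negatives.
    k = max(1, int(0.01 * neg_losses.numel()))
    hard_neg = (torch.topk(neg_losses, k).values.sum()
                if neg_losses.numel() else cls_loss.new_zeros(()))

    loc_loss = ((pred[:, 1:] - target[:, 1:]) ** 2).sum(dim=1)  # Eq. (2), per pixel
    loc = loc_loss[pos].sum()                      # regression on positives only

    return cls_loss[pos].sum() + hard_neg + lambda_loc * loc
```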
3.4 Refine with Landmark Localization. In this part, we show that landmark localization can be achieved in DenseBox just by stacking a few extra layers, owing to the fully convolutional architecture. Moreover, we can refine detection results through the fusion of landmark heatmaps and the face score map. As shown in Fig 4, we incorporate another sibling branch output for landmark localization. Suppose there are $N$ landmarks; the landmark localization branch outputs $N$ response maps, with each pixel representing the confidence score of there being a landmark at that location. The ground-truth maps used for this task are similar in appearance to the ground truth for detection. For the $i$-th instance of landmark $k$, its ground truth is a positively labeled region located at the corresponding location on the $k$-th response map in the output coordinate space. Note that the radius should be relatively small to prevent loss of accuracy. Similar to the classification task, the landmark localization loss $\mathcal{L}_{lm}$ is defined as an L2 loss between the predicted values and the labels, and we still apply the negative mining and ignore regions discussed in the previous section. The final output refine branch, taking the classification score map and landmark localization maps as input, targets refining the detection results. An appropriate solution could be to use a high-level spatial model to learn the constraints between landmark confidence and bounding box score, to further increase detection performance. Tompson et al. proposed an MRF-like model using modified convolutions (SoftPlus convolutions) with non-negative output to connect the distributions of spatial locations for each body part. However, their model also includes additional stages that make it difficult to train. In our implementation, we use convolutions with ReLU activation to approximate the spatial model. If we denote the refine detection loss as $\mathcal{L}_{rf}$, which is almost the same as the classification loss $\mathcal{L}_{cls}$ except that the predicted map comes from the refine branch, the full loss becomes

$$\mathcal{L}_{full}(\theta) = \lambda_{det}\,\mathcal{L}_{det}(\theta) + \lambda_{lm}\,\mathcal{L}_{lm}(\theta) + \mathcal{L}_{rf}(\theta), \tag{5}$$

where $\lambda_{det}$ and $\lambda_{lm}$ control the balance among the three tasks; they are assigned to 1 and 0.5 respectively in our experiments. 3.5 Comparison The highlight of DenseBox is that it frames object detection as a regression problem and provides an end-to-end detection framework. Several recent works such as YOLO and Faster R-CNN have joined region proposal generation with the classifier. Here we compare DenseBox to other related detection systems, pointing out the key similarities and differences. Traditional NN-based Face Detectors. The neural network-based face detectors refer to those face detection systems using neural networks before the recent breakthrough results of CNNs for image classification. Applying neural networks to face detection has a long history, and the early works date back to the 1990s. Rowley et al. train neural network-based detectors which are activated only on faces of a specific size, and apply the detectors on an image pyramid in a sliding-window fashion. Our DenseBox is very similar to them in the detection pipeline, except that we use modern CNNs as detectors; hence DenseBox could be called a "modern NN-based detector" in one sense. OverFeat. OverFeat, designed by Sermanet et al., might be the first work to train a convolutional neural network to perform classification and localization together after the successful application of deep CNNs to image classification. It also applies a fully convolutional network at test time, an equivalent but much more efficient way to perform sliding-window detection. However, it still separates classification and localization in training, and needs complex post-processing to produce detection results. Our method is very similar to OverFeat, but is a multi-task, jointly learned, end-to-end detection network. Deep Dense Face Detector (DDFD). The DDFD, proposed by Farfade et al., is a face detection system based on convolutional neural networks. It claims superior performance over R-CNN in face detection because proposal generation in R-CNN may miss some face regions. Although the DDFD is a complete detection pipeline, it is not an end-to-end framework, since it separates class probability prediction and bounding box localization into two tasks and two stages. Our DenseBox can be optimized directly for detection, and can easily be improved by incorporating landmark information. Faster R-CNN. Faster R-CNN still uses region proposals to find objects in an image. Unlike its former variants, the region proposals in Faster R-CNN are produced by region proposal networks (RPNs) sharing convolutional feature computation with the classifiers in the second stage. The RPN shares many similarities with our DenseBox. However, the RPN needs predefined anchors while ours does not, and the RPN is trained on multi-scale objects while the DenseBox presented in this paper is trained at one scale with jitter augmentation, which means our method needs to evaluate features at multiple scales. Moreover, the training schemes of DenseBox and the RPN are quite different. MultiBox. MultiBox trains a convolutional neural network to generate proposals instead of using selective search. Both DenseBox and MultiBox are trained to predict bounding boxes in an image, but they generate bounding boxes in different ways.
By comparison, the MultiBox method generates 800 non-translation-invariant anchors, whereas our DenseBox outputs translation-invariant bounding boxes like the RPN. As the down-sampling factor of the output map is 4, DenseBox densely generates one scored bounding box at every 4 pixels. YOLO. Redmon et al. propose a unified object detection pipeline called YOLO. Both DenseBox and YOLO can be trained end-to-end from images, but the model designs differ in the output layers. The YOLO system takes a 448 × 448 image as input and outputs a 7 × 7 grid of cells, only 49 bounding boxes per image. Our DenseBox uses up-sampling layers to keep a relatively high-resolution output, with a down-sampling scale factor of 4 in our model. This enables our network to detect very small objects and highly overlapped objects, which YOLO is unable to deal with. 4 Experiments. In this section, we demonstrate the performance of DenseBox on the MALF (Multi-Attribute Labelled Faces) dataset and the KITTI car detection task. We also evaluate our method on those tasks with and without the help of landmark annotation, showing that multi-task learning with landmark localization can significantly boost performance. We compare our results with the current state-of-the-art systems, which shows that our method achieves competitive results on object detection tasks. Note that we do not compare the performance of DenseBox with the original R-CNN directly on those tasks; instead, we highlight the performance of other methods that claim to use R-CNN or that have already compared themselves to R-CNN. 4.1 MALF Detection Task. The MALF detection test dataset contains 5,000 images collected from the Internet. Unlike the widely used FDDB face detection benchmark, which is collected from news photos where the pose tends to be frontal, the face images in MALF have much larger diversity, making MALF closer to real-world applications than FDDB. Training and Testing. We train the two models described in section 3 on 31,337 Internet-collected images with 81,024 faces annotated with the 72 landmarks illustrated in Fig 5. One model uses only bounding box information, while the other utilizes both bounding box and landmark information, for comparison. Both are initialized with the ImageNet pre-trained VGG19 model. The faces in training are roughly scaled to 50 pixels in height, with the scale jitter range described in section 3.3. In testing, we first selectively down-sample images so that the longest side of each image does not exceed 800 pixels. Then we test our model on each image at several scales, chosen so that our models can detect faces from 20 pixels to 400 pixels in height. A fixed non-maximum suppression IOU threshold is used in face detection. Under this configuration, it takes several seconds to process one image of the MALF dataset on an Nvidia K40 GPU. Results. We illustrate the results of three versions of DenseBox on the MALF dataset. "DenseBoxNoLandmark" denotes DenseBox trained without landmarks. "DenseBoxLandmark" is the model incorporating landmark localization, and "DenseBoxEnsemble" is the result of ensembling 10 DenseBox models with landmarks from different batch iterations. As shown in Fig 6, landmark localization gives a significant performance boost on face detection. We also notice that models trained for different numbers of batch iterations still have high diversity, since another significant boost is seen after model ensembling.
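Both testing pipelines above end with non-maximum suppression at a fixed IOU threshold. For readers who want to reproduce that step, here is a standard greedy NMS sketch in NumPy; the default threshold is a placeholder, not the value used in the paper.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression. boxes: (N, 4) as (x1, y1, x2, y2)."""
    order = scores.argsort()[::-1]          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IOU of the top-scoring box against all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        # Keep only boxes whose overlap with the winner is below threshold.
        order = order[1:][iou < iou_thresh]
    return keep
```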
Then we compare our best model with other state-of-the-art methods on MALF. Surprisingly, our model achieves the best performance in mean recall rate, clearly outperforming DDFD, which claims better performance than R-CNN on the face detection task. 4.2 KITTI Car Detection Task. The KITTI object detection benchmark consists of 7,481 training images and 7,518 test images. The total number of objects in training sums to 51,867, of which cars alone account for 28,742. The key difficulty of the KITTI car detection task is that a great number of cars are small (height < 40 pixels) and occluded. To overcome this difficulty, some previous works need careful part and occlusion modeling. Training and Testing. As in the face detection task, we train two models (one without landmarks, the other with landmarks) on the KITTI object detection training set. Since KITTI does not provide landmarks for cars, we selectively annotate 8 landmarks for large cars (height > 50 pixels). The landmarks of a car are shown in Fig 5; we finally annotate 7,790 cars, roughly 27% of the total. The testing procedure is the same as in face detection, except that we do not down-sample car images. The evaluation metric of the KITTI car detection task differs from general object detection: KITTI requires an overlap of 70% for a true positive bounding box, while other tasks such as face detection only require 50% overlap. This strict criterion demands highly accurate car localization. On KITTI, we set the non-maximum suppression IOU threshold to 0.75. Results. Table 1 shows the results of DenseBox and other methods. We can see that partially annotated landmark information (27%) can still boost detection performance. On average, the model with landmark localization slightly outperforms the no-landmark model, by 0.9% in average precision. The improvement is not as great as in face detection; the reason could be that the landmark information is insufficient, both in amount (27% for cars versus 100% for faces) and in quality (8 landmarks per car versus 74 per face). Compared with other methods, DenseBox still achieves competitive results. DenseBox beats traditional detection systems such as Regionlets and spCov by a large margin. Our average precision on moderate cars is 85.74%, slightly better than DeepInsight, which uses the R-CNN framework with an ImageNet pre-trained GoogLeNet. Our model was ranked top 1 for 4 months, until an anonymous submission titled "NIPS ID 331", which uses stereo information for training and testing. Recently a method named "DJML" has overtaken all other methods. 5 Conclusion. We have presented DenseBox, a unified end-to-end pipeline for object detection, whose performance can be boosted easily by incorporating landmark information. We have also analyzed our method against other related object detection systems, highlighting the differences and the contributions of DenseBox. DenseBox achieves impressive performance on both face detection and car detection tasks, demonstrating that it is highly suitable for situations where proposal generation might fail. The key problem of DenseBox is speed: the original DenseBox presented in this paper needs several seconds to process one image. This has been addressed in a later version; we will present another paper describing a real-time detection system on KITTI and face detection, called DenseBox2. References - Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives.
Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(8):1798–1828, 2013. - G. Bertasius, J. Shi, and L. Torresani. Deepedge: A multi-scale bifurcated deep network for top-down contour detection. arXiv preprint arXiv:1412.1123, 2014. - X. Chen, K. Kundu, Y. Zhu, A. Berneshawi, H. Ma, S. Fidler, and R. Urtasun. 3d object proposals for accurate object class detection. In NIPS, 2015. - R. G. Cinbis, J. Verbeek, and C. Schmid. Segmentation driven object detection with fisher vectors. In Computer Vision (ICCV), 2013 IEEE International Conference on, pages 2968–2975. IEEE, 2013. - N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, pages 886–893. IEEE, 2005. - T. Devries, K. Biswaranjan, and G. W. Taylor. Multi-task learning of facial landmarks and expression. In Computer and Robot Vision (CRV), 2014 Canadian Conference on, pages 98–103. IEEE, 2014. - P. Dollár, R. Appel, and W. Kienzle. Crosstalk cascades for frame-rate pedestrian detection. In Computer Vision–ECCV 2012, pages 645–659. Springer, 2012. - D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov. Scalable object detection using deep neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 2155–2162. IEEE, 2014. - M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. International journal of computer vision, 88(2):303–338, 2010. - S. S. Farfade, M. Saberian, and L.-J. Li. Multi-view face detection using deep convolutional neural networks. arXiv preprint arXiv:1502.02766, 2015. - P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 32(9):1627–1645, 2010. - P. F. Felzenszwalb and D. P. Huttenlocher. Pictorial structures for object recognition. International Journal of Computer Vision, 61(1):55–79, 2005. - A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In Conference on Computer Vision and Pattern Recognition (CVPR), 2012. - R. Girshick. Fast r-cnn. arXiv preprint arXiv:1504.08083, 2015. - R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 580–587. IEEE, 2014. - K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In Computer Vision–ECCV 2014, pages 346–361. Springer, 2014. - V. Jain and E. G. Learned-Miller. Fddb: A benchmark for face detection in unconstrained settings. UMass Amherst Technical Report, 2010. - A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. - B. Li, T. Wu, and S.-C. Zhu. Integrating context and occlusion for car detection by hierarchical and-or model. In Computer Vision–ECCV 2014, pages 652–667. Springer, 2014. - H. Li, Z. Lin, X. Shen, J. Brandt, and G. Hua. A convolutional neural network cascade for face detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5325–5334, 2015. - S. Li, Z.-Q. Liu, and A. B. Chan. 
Heterogeneous multi-task learning for human pose estimation with deep convolutional neural network. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on, pages 488–495. IEEE, 2014. - W. Liu, A. Rabinovich, and A. C. Berg. Parsenet: Looking wider to see better. arXiv preprint arXiv:1506.04579, 2015. - C. Long, X. Wang, G. Hua, M. Yang, and Y. Lin. Accurate object detection with location relaxation and regionlets re-localization. In Computer Vision–ACCV 2014, pages 260–275. Springer, 2015. - J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015. - D. G. Lowe. Distinctive image features from scale-invariant keypoints. International journal of computer vision, 60(2):91–110, 2004. - W. Ouyang, P. Luo, X. Zeng, S. Qiu, Y. Tian, H. Li, S. Yang, Z. Wang, Y. Xiong, C. Qian, et al. Deepid-net: multi-stage and deformable deep convolutional neural networks for object detection. arXiv preprint arXiv:1409.3505, 2014. - B. Pepik, R. Benenson, T. Ritschel, and B. Schiele. What is holding back convnets for detection? arXiv preprint arXiv:1508.02844, 2015. - P. O. Pinheiro, R. Collobert, and P. Dollar. Learning to segment object candidates. arXiv preprint arXiv:1506.06204, 2015. - J. Redmon, S. K. Divvala, R. B. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection. arXiv preprint, abs/1506.02640, 2015. - S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. arXiv preprint arXiv:1506.01497, 2015. - H. Rowley, S. Baluja, T. Kanade, et al. Neural network-based face detection. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 20(1):23–38, 1998. - H. Rowley, S. Baluja, T. Kanade, et al. Rotation invariant neural network-based face detection. In Computer Vision and Pattern Recognition, 1998. Proceedings. 1998 IEEE Computer Society Conference on, pages 38–44. IEEE, 1998. - O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, pages 1–42, 2014. - P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229, 2013. - K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. - C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014. - J. J. Tompson, A. Jain, Y. LeCun, and C. Bregler. Joint training of a convolutional network and a graphical model for human pose estimation. In Advances in Neural Information Processing Systems, pages 1799–1807, 2014. - R. Vaillant, C. Monrocq, and Y. Le Cun. Original approach for the localisation of objects in images. IEE Proceedings-Vision, Image and Signal Processing, 141(4):245–250, 1994. - P. Viola and M. J. Jones. Robust real-time face detection. International journal of computer vision, 57(2):137–154, 2004. - Y. Xiang, W. Choi, Y. Lin, and S. Savarese. Data-driven 3d voxel patterns for object category recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1903–1911, 2015. - J. 
Yan, Y. Yu, X. Zhu, Z. Lei, and S. Z. Li. Object detection by labeling superpixels. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5107–5116, 2015. - B. Yang, J. Yan, Z. Lei, and S. Z. Li. Fine-grained evaluation on face detection in the wild. In Automatic Face and Gesture Recognition (FG), 11th IEEE International Conference on. IEEE, 2015. - Y. Yang and D. Ramanan. Articulated human detection with flexible mixtures of parts. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(12):2878–2890, 2013. - Y. Yu, J. Zhang, Y. Huang, S. Zheng, W. Ren, C. Wang, K. Huang, and T. Tan. Object detection by context and boosted hog-lbp. In VOC Workshop Talk, page 104, 2010. - J. Zhang, K. Huang, Y. Yu, and T. Tan. Boosted local structured hog-lbp for object localization. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1393–1400. IEEE, 2011. - X. Zhu and D. Ramanan. Face detection, pose estimation, and landmark localization in the wild. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2879–2886. IEEE, 2012.
https://deepai.org/publication/densebox-unifying-landmark-localization-with-end-to-end-object-detection
The YOLO algorithm is primarily different in its unification of the detection and localization algorithms. Here, we'll look at the algorithm itself.

S x S grid

The input image is divided into an S x S grid. If the center of an object falls within a grid cell, that cell is responsible for predicting the bounding box for that object. With an S x S grid, you can predict multiple bounding boxes on the same image.

B bounding boxes

Each grid cell predicts B bounding boxes and confidence scores. The confidence score is Pr(Object) * IOU(truth, pred). The IOU term is interesting because you only want to predict with confidence that which your grid cell is able to see. Thus, if you are able to see the center, in general you should be able to predict the class most confidently.

Predictions

Each bounding box consists of 5 predictions: x, y, w, h, and confidence. (x, y) is the center of the box, and w, h are relative to the whole image. The confidence is, as shown before, an IOU between the predicted box and any ground truth box.

C conditional classes

Each grid cell also predicts C conditional class probabilities. They are conditioned on the grid cell containing an object. Figure 2 shows this process:
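To make the output layout concrete, here is a small NumPy sketch of reading one grid cell of a YOLO-style prediction tensor, assuming S = 7, B = 2, and C = 20 as in the original paper; the tensor ordering shown is illustrative rather than the exact layout of any particular implementation.

```python
import numpy as np

S, B, C = 7, 2, 20                       # grid size, boxes per cell, classes
pred = np.random.rand(S, S, B * 5 + C)   # stand-in for a network output

def decode_cell(pred, row, col):
    """Read one grid cell: B boxes of (x, y, w, h, confidence) + C class probs."""
    cell = pred[row, col]
    boxes = cell[:B * 5].reshape(B, 5)    # each row: x, y, w, h, confidence
    class_probs = cell[B * 5:]            # Pr(Class_i | Object), shared per cell
    # Class-specific confidence = Pr(Class_i | Object) * Pr(Object) * IOU,
    # i.e. each box's confidence scales the shared conditional class scores.
    scores = boxes[:, 4:5] * class_probs[None, :]   # shape (B, C)
    return boxes, scores

boxes, scores = decode_cell(pred, 3, 3)   # inspect the center cell
```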
https://www.apaperaday.com/01/03/2020/6yolo-2algorithm/
Models, code, and papers for "Minghui Liao": Scene text detection is an important step in a scene text recognition system and also a challenging problem. Different from general object detection, the main challenges of scene text detection lie in the arbitrary orientations, small sizes, and widely varying aspect ratios of text in natural images. In this paper, we present an end-to-end trainable fast scene text detector, named TextBoxes++, which detects arbitrary-oriented scene text with both high accuracy and efficiency in a single network forward pass. No post-processing other than an efficient non-maximum suppression is involved. We have evaluated the proposed TextBoxes++ on four public datasets. In all experiments, TextBoxes++ outperforms competing methods in terms of text localization accuracy and runtime. More specifically, TextBoxes++ achieves an f-measure of 0.817 at 11.6fps for 1024*1024 ICDAR 2015 Incidental text images, and an f-measure of 0.5591 at 19.8fps for 768*768 COCO-Text images. Furthermore, combined with a text recognizer, TextBoxes++ significantly outperforms the state-of-the-art approaches for word spotting and end-to-end text recognition tasks on popular benchmarks. Code is available at: https://github.com/MhLiao/TextBoxes_plusplus Recently, segmentation-based methods have become quite popular in scene text detection, as the segmentation results can more accurately describe scene text of various shapes, such as curved text. However, the post-processing of binarization is essential for segmentation-based detection; it converts the probability maps produced by a segmentation method into bounding boxes/regions of text. In this paper, we propose a module named Differentiable Binarization (DB), which can perform the binarization process inside a segmentation network. Optimized along with a DB module, a segmentation network can adaptively set the thresholds for binarization, which not only simplifies the post-processing but also enhances the performance of text detection. Based on a simple segmentation network, we validate the performance improvements of DB on five benchmark datasets, consistently achieving state-of-the-art results in terms of both detection accuracy and speed. In particular, with a light-weight backbone, the performance improvements by DB are significant, making an ideal tradeoff between detection accuracy and efficiency possible. Specifically, with a backbone of ResNet-18, our detector achieves an F-measure of 82.8, running at 62 FPS, on the MSRA-TD500 dataset. Code is available at: https://github.com/MhLiao/DB Recently, models based on deep neural networks have dominated the fields of scene text detection and recognition. In this paper, we investigate the problem of scene text spotting, which aims at simultaneous text detection and recognition in natural images. An end-to-end trainable neural network model for scene text spotting is proposed. The proposed model, named Mask TextSpotter, is inspired by the newly published work Mask R-CNN. Different from previous methods that also accomplish text spotting with end-to-end trainable deep neural networks, Mask TextSpotter takes advantage of a simple and smooth end-to-end learning procedure, in which precise text detection and recognition are acquired via semantic segmentation. Moreover, it is superior to previous methods in handling text instances of irregular shapes, for example, curved text.
Experiments on ICDAR2013, ICDAR2015 and Total-Text demonstrate that the proposed method achieves state-of-the-art results in both scene text detection and end-to-end text recognition tasks. This paper presents an end-to-end trainable fast scene text detector, named TextBoxes, which detects scene text with both high accuracy and efficiency in a single network forward pass, involving no post-processing except for a standard non-maximum suppression. TextBoxes outperforms competing methods in terms of text localization accuracy and is much faster, taking only 0.09s per image in a fast implementation. Furthermore, combined with a text recognizer, TextBoxes significantly outperforms state-of-the-art approaches on word spotting and end-to-end text recognition tasks. With the development of deep neural networks, the demand for a significant amount of annotated training data has become a performance bottleneck in many fields of research and applications. Image synthesis can generate annotated images automatically and freely, and has gained increasing attention recently. In this paper, we propose to synthesize scene text images from 3D virtual worlds, where precise descriptions of scenes, editable illumination/visibility, and realistic physics are provided. Different from previous methods, which paste rendered text onto static 2D images, our method can render the 3D virtual scene and text instances as an entirety. In this way, complex perspective transforms, various illuminations, and occlusions can be realized in our synthesized scene text images. Moreover, the same text instances can be produced from various viewpoints by randomly moving and rotating the virtual camera, which acts as the human eye. Experiments on the standard scene text detection benchmarks using the generated synthetic data demonstrate the effectiveness and superiority of the proposed method. The code and synthetic data will be made available at https://github.com/MhLiao/SynthText3D Text in natural images has arbitrary orientations, requiring detection in terms of oriented bounding boxes. Normally, a multi-oriented text detector involves two key tasks: 1) text presence detection, which is a classification problem disregarding text orientation; 2) oriented bounding box regression, which concerns text orientation. Previous methods rely on shared features for both tasks, resulting in degraded performance due to the incompatibility of the two tasks. To address this issue, we propose to perform classification and regression on features of different characteristics, extracted by two network branches of different designs. Concretely, the regression branch extracts rotation-sensitive features by actively rotating the convolutional filters, while the classification branch extracts rotation-invariant features by pooling the rotation-sensitive features. The proposed method, named Rotation-sensitive Regression Detector (RRD), achieves state-of-the-art performance on several oriented scene text benchmark datasets, including ICDAR 2015, MSRA-TD500, RCTW-17 and COCO-Text. Furthermore, RRD achieves a significant improvement on a ship collection dataset, demonstrating its generality on oriented object detection. Reading text in the wild is a very challenging task due to the diversity of text instances and the complexity of natural scenes. Recently, the community has paid increasing attention to the problem of recognizing text instances with irregular shapes.
One intuitive and effective way to handle this problem is to rectify irregular text to a canonical form before recognition. However, such rectification methods might struggle when dealing with highly curved or distorted text instances. To tackle this issue, we propose in this paper a Symmetry-constrained Rectification Network (ScRN) based on local attributes of text instances, such as center line, scale and orientation. Such constraints, together with an accurate description of text shape, enable ScRN to generate better rectification results than existing methods and thus lead to higher recognition accuracy. Our method achieves state-of-the-art performance on text with both regular and irregular shapes. Specifically, the system outperforms existing algorithms by a large margin on datasets that contain a large proportion of irregular text instances, e.g., ICDAR 2015, SVT-Perspective and CUTE80. Inspired by speech recognition, recent state-of-the-art algorithms mostly consider scene text recognition as a sequence prediction problem. Though achieving excellent performance, these methods usually neglect the important fact that text in images is actually distributed in two-dimensional space. This nature is quite different from that of speech, which is essentially a one-dimensional signal. In principle, directly compressing features of text into a one-dimensional form may lose useful information and introduce extra noise. In this paper, we approach scene text recognition from a two-dimensional perspective. A simple yet effective model, called Character Attention Fully Convolutional Network (CA-FCN), is devised for recognizing text of arbitrary shapes. Scene text recognition is realized with a semantic segmentation network, where an attention mechanism for characters is adopted. Combined with a word formation module, CA-FCN can simultaneously recognize the script and predict the position of each character. Experiments demonstrate that the proposed algorithm outperforms previous methods on both regular and irregular text datasets. Moreover, it is proven to be more robust to imprecise localizations in the text detection phase, which are very common in practice.
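Of the methods summarized above, the Differentiable Binarization module is compact enough to sketch: it replaces hard thresholding with a steep sigmoid so that the binarization step remains differentiable and can be trained jointly with the segmentation network. The sketch below follows the formula from the DB paper; the variable names are illustrative.

```python
import numpy as np

def differentiable_binarization(prob_map, thresh_map, k=50.0):
    """Approximate binarization: B = 1 / (1 + exp(-k * (P - T))).
    P is the per-pixel text probability map and T the learned threshold map.
    A large amplification factor k (the paper uses 50) makes the output nearly
    binary while keeping useful gradients near the threshold."""
    return 1.0 / (1.0 + np.exp(-k * (prob_map - thresh_map)))
```

At inference time the same formula (or a plain comparison of P against T) yields the approximate binary map that is then grouped into text regions.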
https://www.profillic.com/search?query=Minghui%20Liao
Browsing SUNY Polytechnic Institute by Subject "YOLO (You Only Look Once)" - Text Detection from an Image. Recently, a variety of real-world applications have triggered a huge demand for techniques that can extract textual information from images and videos. Therefore, image text detection and recognition have become active research topics in computer vision. The current trend in object detection and localization is to learn predictions with high-capacity deep neural networks trained on a very large amount of annotated data and using a large amount of processing power. In this project, I have built an approach for text detection using an object detection technique. Our approach is to treat the text as objects. We use an object detection method, YOLO (You Only Look Once), to detect the text in images. We frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. YOLO is a single neural network that predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. The MobileNet pre-trained deep learning model architecture was used and modified in different ways to find the best performing model. The goal is to achieve high accuracy in text spotting. Experiments on the standard ICDAR 2015 dataset demonstrate that the proposed algorithm significantly outperforms existing methods in terms of both accuracy and efficiency.
https://soar.suny.edu/handle/20.500.12648/9/browse?type=subject&value=YOLO+%28You+Only+Look+Once%29
Visual saliency computation is about detecting and understanding salient regions and elements in a visual scene. Algorithms for visual saliency computation can give clues to where people will look in images, what objects are visually prominent in a scene, etc. Such algorithms could be useful in a wide range of applications in computer vision and graphics. In this thesis, we study the following visual saliency computation problems. 1) Eye Fixation Prediction. Eye fixation prediction aims to predict where people look in a visual scene. For this problem, we propose a Boolean Map Saliency (BMS) model which leverages the global surroundedness cue using a Boolean map representation. We draw a theoretic connection between BMS and the Minimum Barrier Distance (MBD) transform to provide insight into our algorithm. Experimental results show that BMS compares favorably with state-of-the-art methods on seven benchmark datasets. 2) Salient Region Detection. Salient region detection entails computing a saliency map that highlights the regions of dominant objects in a scene. We propose a salient region detection method based on the Minimum Barrier Distance (MBD) transform. We present a fast approximate MBD transform algorithm with an error bound analysis. Powered by this fast MBD transform algorithm, our method can run at about 80 FPS and achieve state-of-the-art performance on four benchmark datasets. 3) Salient Object Detection. Salient object detection aims at localizing each salient object instance in an image. We propose a method using a Convolutional Neural Network (CNN) model for proposal generation and a novel subset optimization formulation for bounding box filtering. In experiments, our subset optimization formulation consistently outperforms heuristic bounding box filtering baselines, such as Non-maximum Suppression, and our method substantially outperforms previous methods on three challenging datasets. 4) Salient Object Subitizing. We propose a new visual saliency computation task, called Salient Object Subitizing, which is to predict the existence and the number of salient objects in an image using holistic cues. To this end, we present an image dataset of about 14K everyday images which are annotated using an online crowdsourcing marketplace. We show that an end-to-end trained CNN subitizing model can achieve promising performance without requiring any localization process. A method is proposed to further improve the training of the CNN subitizing model by leveraging synthetic images. 5) Top-down Saliency Detection. Unlike the aforementioned tasks, top-down saliency detection entails generating task-specific saliency maps. We propose a weakly supervised top-down saliency detection approach by modeling the top-down attention of a CNN image classifier. We propose Excitation Backprop and the concept of contrastive attention to generate highly discriminative top-down saliency maps. Our top-down saliency detection method achieves superior performance in weakly supervised localization tasks on challenging datasets. The usefulness of our method is further validated in the text-to-region association task, where our method provides state-of-the-art performance using only weakly labeled web images for training.
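As a rough illustration of the surroundedness cue behind the Boolean Map Saliency model described above, here is a toy NumPy/SciPy sketch; it is a loose approximation for intuition only, not the thesis's actual algorithm (which includes additional normalization and post-processing).

```python
import numpy as np
from scipy.ndimage import label

def boolean_map_saliency(gray, thresholds=range(8, 256, 8)):
    """Toy sketch of the surroundedness cue: threshold the image at many
    levels and accumulate regions not connected to the image border."""
    acc = np.zeros_like(gray, dtype=np.float64)
    for t in thresholds:
        for bmap in (gray > t, gray <= t):     # a Boolean map and its complement
            labels, _ = label(bmap)
            # Connected components touching the border are not "surrounded".
            border = np.unique(np.concatenate([labels[0], labels[-1],
                                               labels[:, 0], labels[:, -1]]))
            surrounded = bmap & ~np.isin(labels, border)
            acc += surrounded
    return acc / acc.max() if acc.max() > 0 else acc
```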
https://open.bu.edu/handle/2144/19587
Object detection serves as a significant step in improving the performance of complex downstream computer vision tasks. It has been extensively studied for many years, and current state-of-the-art 2D object detection techniques deliver excellent results even on complex images. In this chapter, we discuss the geometry-based pioneering works in object detection, followed by the recent breakthroughs that employ deep learning. Some of these use a monolithic architecture that takes an RGB image as input and passes it to a feed-forward ConvNet or vision Transformer, thereby predicting class probabilities and bounding-box coordinates in a single unified pipeline. Two-stage architectures, on the other hand, first generate region proposals and then feed them to a CNN to extract features and predict the object category and bounding box. We also elaborate upon the applications of object detection in video event recognition, to achieve better fine-grained video classification performance. Further, we highlight recent datasets for 2D object detection both in images and videos, and present a comparative performance summary of various state-of-the-art object detection techniques. Object detection with Transformers (DETR) has achieved competitive performance over traditional detectors such as Faster R-CNN. However, the potential of DETR remains largely unexplored for the more challenging task of arbitrary-oriented object detection. We provide the first attempt and implement Oriented Object DEtection with TRansformer (O2DETR) based on an end-to-end network. The contributions of O2DETR include: 1) we provide a new insight into oriented object detection, by applying the Transformer to directly and efficiently localize objects without the tedious process of rotated anchors used by conventional detectors; 2) we design a simple but highly efficient encoder for the Transformer by replacing the attention mechanism with depthwise separable convolution, which can significantly reduce the memory and computational cost of using multi-scale features in the original Transformer; 3) our O2DETR can serve as a new benchmark in the field of oriented object detection, achieving up to a 3.85 mAP improvement over Faster R-CNN and RetinaNet. We simply fine-tune the head mounted on O2DETR in a cascaded architecture and achieve competitive performance over the SOTA on the DOTA dataset. Human-Object Interaction (HOI) detection, inferring the relationships between humans and objects from images/videos, is a fundamental task for high-level scene understanding. However, HOI detection usually suffers from the open long-tailed nature of interactions with objects, while humans have an extremely powerful compositional perception ability to recognize rare or unseen HOI samples. Inspired by this, we devise a novel HOI compositional learning framework, termed Fabricated Compositional Learning (FCL), to address the problem of open long-tailed HOI detection. Specifically, we introduce an object fabricator to generate effective object representations, and then combine verbs and fabricated objects to compose new HOI samples. With the proposed object fabricator, we are able to generate large-scale HOI samples for rare and unseen categories to alleviate the open long-tailed issues in HOI detection.
Extensive experiments on the most popular HOI detection dataset, HICO-DET, demonstrate the effectiveness of the proposed method for imbalanced HOI detection and show significant improvements over the state-of-the-art performance on rare and unseen HOI categories. Code is available at https://github.com/zhihou7/HOI-CL. Video salient object detection aims at discovering the most visually distinctive objects in a video. How to effectively take object motion into consideration during video salient object detection is a critical issue. Existing state-of-the-art methods either do not explicitly model and harvest motion cues or ignore spatial contexts within optical flow images. In this paper, we develop a multi-task motion-guided video salient object detection network, which learns to accomplish two sub-tasks using two sub-networks: one sub-network for salient object detection in still images and the other for motion saliency detection in optical flow images. We further introduce a series of novel motion-guided attention modules, which utilize the motion saliency sub-network to attend to and enhance the sub-network for still images. These two sub-networks learn to adapt to each other through end-to-end training. Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on a wide range of benchmarks. We hope our simple and effective approach will serve as a solid baseline and help ease future research in video salient object detection. Code and models will be made available. Understanding interactions between humans and objects is one of the fundamental problems in visual classification and an essential step towards detailed scene understanding. Human-object interaction (HOI) detection strives to localize both the human and an object as well as to identify the complex interactions between them. Most existing HOI detection approaches are instance-centric, where interactions between all possible human-object pairs are predicted based on appearance features and coarse spatial information. We argue that appearance features alone are insufficient to capture complex human-object interactions. In this paper, we therefore propose a novel fully convolutional approach that directly detects the interactions between human-object pairs. Our network predicts interaction points, which directly localize and classify the interaction. Paired with the densely predicted interaction vectors, the interactions are associated with human and object detections to obtain final predictions. To the best of our knowledge, we are the first to propose an approach where HOI detection is posed as a keypoint detection and grouping problem. Experiments are performed on two popular benchmarks: V-COCO and HICO-DET. Our approach sets a new state-of-the-art on both datasets. Code is available at https://github.com/vaesl/IP-Net. 3D object detection has been widely studied in recent years, especially for robot perception systems. However, existing 3D object detection operates under a closed-set condition, meaning that the network can only output boxes of trained classes. Unfortunately, this closed-set condition is not robust enough for practical use, as it will mistakenly identify unknown objects as known. Therefore, in this paper, we propose an open-set 3D object detector, which aims to (1) identify known objects, as in closed-set detection, and (2) identify unknown objects and give their accurate bounding boxes.
Specifically, we divide the open-set 3D object detection problem into two steps: (1) finding the regions that contain unknown objects with high probability and (2) enclosing the points of these regions with proper bounding boxes. The first step is addressed by the observation that unknown objects are often classified as known objects with low confidence, and we show that a Euclidean distance sum based on metric learning is a better confidence score than the naive softmax probability for differentiating unknown objects from known objects. On this basis, unsupervised clustering is used to refine the bounding boxes of unknown objects. The proposed method combining metric learning and unsupervised clustering is called the MLUC network. Our experiments show that our MLUC network achieves state-of-the-art performance and can identify both known and unknown objects as expected. Detecting small objects in video streams of head-worn augmented reality devices in near real time is a huge challenge: training data is typically scarce, the input video stream can be of limited quality, and small objects are notoriously hard to detect. In industrial scenarios, however, it is often possible to leverage contextual knowledge for the detection of small objects. Furthermore, CAD data of objects are typically available and can be used to generate synthetic training data. We describe a near real-time small object detection pipeline for egocentric perception in a manual assembly scenario: we generate a training data set based on CAD data and realistic backgrounds in Unity. We then train a YOLOv4 model for a two-stage detection process: first, the context is recognized, then the small object of interest is detected. We evaluate our pipeline on the augmented reality device Microsoft HoloLens 2. This paper revisits human-object interaction (HOI) recognition at the image level without using supervision of object locations or human pose. We name it detection-free HOI recognition, in contrast to the existing detection-supervised approaches, which rely on object and keypoint detections to achieve state of the art. With our method, not only is detection supervision avoidable, but superior performance can be achieved by properly using image-text pre-training (such as CLIP) and the proposed Log-Sum-Exp Sign (LSE-Sign) loss function. Specifically, using text embeddings of class labels to initialize the linear classifier is essential for leveraging the CLIP pre-trained image encoder. In addition, the LSE-Sign loss facilitates learning from multiple labels on an imbalanced dataset by normalizing gradients over all classes in a softmax format. Surprisingly, our detection-free solution achieves 60.5 mAP on the HICO dataset, outperforming the detection-supervised state of the art by 13.4 mAP. Automatic detection of firearms is important for enhancing the security and safety of people; however, it is a challenging task owing to the wide variations in the shape, size and appearance of firearms. To handle these challenges we propose an Orientation Aware Object Detector (OAOD), which achieves improved firearm detection and localization performance. The proposed detector has two phases. In Phase-1 it predicts the orientation of the object, which is used to rotate the object proposal. Maximum-area rectangles are cropped from the rotated object proposals, which are again classified and localized in Phase-2 of the algorithm.
The oriented object proposals are mapped back to the original coordinates, resulting in oriented bounding boxes which localize the weapons much better than axis-aligned bounding boxes. Being orientation aware, our non-maximum suppression is able to avoid multiple detections of the same object and can better resolve objects which lie in close proximity to each other. This two-phase system leverages OAOD to predict object-oriented bounding boxes while being trained only on the axis-aligned boxes in the ground truth. In order to train object detectors for firearm detection, a dataset consisting of around eleven thousand firearm images was collected from the internet and manually annotated. The proposed ITU Firearm (ITUF) dataset contains a wide range of guns and rifles. The OAOD algorithm is evaluated on the ITUF dataset and compared with current state-of-the-art object detectors. Our experiments demonstrate the excellent performance of the proposed detector for the task of firearm detection.
https://www.catalyzex.com/search?query=Object%20Detection&with_code=false&page=18
True to the impressive title of the article "YOLO9000: Better, Faster, Stronger", YOLOv2 inherits and develops from YOLOv1 with a series of changes and improvements that produce an upgraded version that is better, faster, and more powerful. These changes include reusing previous work as well as creating new methods. The enhanced YOLOv2 model achieves SOTA results on the PASCAL VOC and COCO datasets, outperforming other methods such as Faster R-CNN + ResNet and SSD while still being much faster: - At 67 FPS, YOLOv2 has an accuracy of 76.8 mAP on the VOC 2007 test dataset. - At 40 FPS, YOLOv2 has an accuracy of 78.6 mAP. Next, the authors propose a method to train YOLOv2 simultaneously on detection and classification datasets. With this method, the model is trained simultaneously on the COCO (detection) and ImageNet (classification) datasets, resulting in the YOLO9000 version with the ability to detect more than 9000 different objects, all in real time. 2. Algorithm details 2.1. Better. YOLOv1 had some disadvantages compared to the leading detection systems at the time: - YOLO has quite high localization error – it has difficulty locating objects accurately. - YOLO also has rather low recall compared to region proposal methods. Therefore, YOLOv2 mainly focuses on improving recall and localization while maintaining classification accuracy, thereby improving the accuracy of the model. These changes include reusing previous work as well as generating new ideas, and are listed below: - Batch Normalization: Using batch normalization greatly improves convergence without the need for other forms of regularization. By adding batch normalization to all convolutional layers, performance improves by 2% mAP. It also helps regularize the model, allowing dropout to be removed without overfitting. - Using a High-Resolution Classifier: YOLOv1 trains the classifier network at a resolution of 224 × 224 and then increases the resolution for detection. YOLOv2 solves this drawback: first, the classification network is fine-tuned at 448 × 448 resolution for 10 epochs on the ImageNet dataset. This gives the network time to adjust its filters to work better on high-resolution input images. After that, the network is fine-tuned for detection. This high-resolution classification network increases mAP by almost 4%. - Using Anchor Boxes to predict Bounding Boxes: YOLOv1 directly predicts the coordinates of the bounding boxes using the fully connected layers immediately following the convolutional feature extractor. YOLOv2 improves on this by reusing the anchor box idea from Faster R-CNN, which makes it easier for the network to predict bounding boxes. YOLOv2 discards the last two fully connected layers of YOLOv1, because predicting the bounding boxes and confidence scores from the anchor boxes requires only convolutional layers. YOLOv2 also removes a pooling layer so that the output of the convolutional layers has a higher resolution. Another advantage of using anchor boxes is that we eliminate the constraint that each cell can only predict one object (class) as in YOLOv1. Instead, the class and objectness are predicted for every anchor box. This increases the number of detected objects, since each cell predicts more objects.
In addition, YOLOv2 adjusts the network to predict on an input image size of 416 × 416 instead of 448 × 448, so that after downsampling the output feature map has an odd size (13 × 13) with a single center cell. Similar to YOLOv1, predicting objectness still means predicting the IOU of the ground truth and proposed box, and predicting the class is still predicting the conditional probability of the class given that an object exists: Pr(Class_i) = Pr(Class_i | Object) × Pr(Object). Using anchor boxes in YOLOv2 increases the number of predicted bounding boxes to more than 1000 boxes per image (much more than YOLOv1 with only 98 boxes per image). This reduces accuracy by a small amount. Specifically: - YOLOv1 reached 69.5 mAP, recall = 81%. - With anchor boxes, YOLOv2 reached 69.2 mAP, recall = 88%. Thus, although the mAP of YOLOv2 decreased slightly, recall increased significantly. - Estimating Anchor Boxes: When using anchor boxes in YOLOv2, two problems arise. The first problem is that the initial sizes of the anchor boxes are chosen by hand. Although the network can learn to adjust the boxes appropriately, if the chosen anchor boxes have good initial sizes, the network's learning becomes easier and it thus produces better detections. It turns out that in most datasets, bounding boxes usually follow certain proportions and sizes. For example, the bounding box of a standing person typically has an aspect ratio (width/height) of about 1:3, and the bounding box of a car viewed from the front often has an aspect ratio of about 1:1. So, instead of choosing the initial sizes of the anchor boxes by hand, we run the K-Means clustering algorithm on the set of bounding boxes of the training set to automatically find anchor box sizes that represent the common bounding box sizes in the training set. The mechanism of the k-means algorithm for selecting anchor boxes is as follows (a code sketch of this procedure appears below): - Initially, we randomly initialize k anchor boxes as the k centroids (cluster centers). - For each bounding box, we calculate its IOU with each centroid. - Because we want anchor boxes with good IOU against the bounding boxes, we define the distance metric as d(box, centroid) = 1 − IOU(box, centroid); since 0 ≤ IOU ≤ 1, a bounding box with higher IOU to a centroid has a smaller distance to it. - After calculating d(box, centroid) for every pair, we assign each bounding box to its nearest centroid and then update the k centroids. - Repeat the above steps until the algorithm converges. An illustration in the article shows the K-Means clustering of anchor boxes with k = 5: each cluster is drawn in a different color, matching the color of its center (which is the size of an anchor box). Another figure shows anchor box clustering on the VOC and COCO datasets: the left panel plots the average IOU for different values of k, and k = 5 was chosen as a good tradeoff between high recall and model complexity; the right panel shows the cluster centers of the two datasets, where COCO has larger size variation than VOC. After performing K-Means, the cluster centers (which are the anchor box sizes) are significantly different from manually selected anchor boxes.
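Here is the promised sketch of anchor clustering with the 1 − IOU distance, operating on (width, height) pairs as if all boxes were corner-aligned; initialization and convergence handling are deliberately simplified.

```python
import numpy as np

def wh_iou(wh, centroids):
    """IOU between (w, h) boxes and centroids, treating both as corner-aligned."""
    inter = np.minimum(wh[:, None, 0], centroids[None, :, 0]) * \
            np.minimum(wh[:, None, 1], centroids[None, :, 1])
    union = wh[:, 0] * wh[:, 1]
    union = union[:, None] + centroids[:, 0] * centroids[:, 1] - inter
    return inter / union                                  # shape (N, k)

def kmeans_anchors(wh, k=5, iters=100):
    """Cluster ground-truth box sizes into k anchor sizes with d = 1 - IOU."""
    centroids = wh[np.random.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        assign = (1.0 - wh_iou(wh, centroids)).argmin(axis=1)
        new = np.array([wh[assign == j].mean(axis=0) if np.any(assign == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):                   # converged
            break
        centroids = new
    return centroids
```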
A comparison of the average IOU with the closest anchor box on the VOC 2007 dataset (shown in a table in the article) confirms the benefit: with only k = 5 anchor boxes, the Cluster IOU (second row) gave results comparable to 9 manually selected anchor boxes (third row). - Directly predicting the center coordinates of the bounding box: This is the second problem encountered when using anchor boxes: model instability, especially in the first iterations. That instability mainly comes from predicting the center coordinates (x, y) of the bounding boxes. Recall that in Faster R-CNN's Region Proposal Network (RPN), the network predicts offsets t_x and t_y, and the center coordinates are computed as x = t_x · w_a + x_a and y = t_y · h_a + y_a, where (x_a, y_a) and (w_a, h_a) are the anchor's center and size. The problem is that this formulation has no constraints at all. For example, t_x = 1 shifts the bounding box to the right by an interval equal to the anchor's width, so any anchor can end up anywhere in the image regardless of which location predicted it. So, instead of predicting the center coordinates of the bounding box relative to the anchor box, YOLOv2 uses the same approach as YOLOv1: it directly predicts the center coordinates of the bounding box relative to the position of each grid cell in the feature map. This constrains the bounding box's center coordinates to the interval [0, 1] within the cell, using the logistic (sigmoid) activation function. An illustration in the article shows this: the size (width and height) of the bounding box (light blue) is predicted relative to the anchor box (dotted rectangle) obtained from the clustering algorithm above, while the center coordinates are predicted relative to the grid cell via the sigmoid. YOLOv2 predicts 5 bounding boxes per cell of the feature map, and the network predicts 5 values for each bounding box: t_x, t_y, t_w, t_h, t_o. If the cell is offset from the top-left corner of the image by (c_x, c_y) and the anchor (prior) box has width and height p_w, p_h, then the predictions correspond to b_x = σ(t_x) + c_x, b_y = σ(t_y) + c_y, b_w = p_w · e^{t_w}, b_h = p_h · e^{t_h}, and σ(t_o) gives the confidence score of the bounding box b. Since the position of the predicted bounding box is constrained, learning becomes simpler, making the network more stable. Using anchor box clustering together with directly predicting bounding box coordinates relative to grid cells increases mAP by almost 5% compared to the version that predicts bounding box locations relative to anchor positions. - Use more fine-grained features: YOLOv2 predicts detections on a 13 × 13 feature map – enough for large objects. Moreover, it overcomes a drawback of YOLOv1 – the difficulty of predicting small objects – by using feature maps of different sizes (inspired by Faster R-CNN and SSD), which helps improve the prediction of small objects from finer-grained features. In particular, YOLOv2 adds a passthrough layer that concatenates the 13 × 13 feature map with an earlier, higher-resolution 26 × 26 feature map (a code sketch of this rearrangement appears right below).
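Here is a minimal NumPy sketch of the space-to-depth (Reorg) rearrangement that makes this concatenation possible; a stride of 2 matches the 26 × 26 → 13 × 13 reduction, and the detailed explanation follows below.

```python
import numpy as np

def reorg(x, stride=2):
    """Space-to-depth: (H, W, C) -> (H/stride, W/stride, C*stride**2).
    Each stride x stride spatial block is moved into the channel dimension."""
    h, w, c = x.shape
    x = x.reshape(h // stride, stride, w // stride, stride, c)
    x = x.transpose(0, 2, 1, 3, 4)          # group the block dims together
    return x.reshape(h // stride, w // stride, c * stride * stride)

features = np.zeros((26, 26, 512))
print(reorg(features).shape)                # (13, 13, 2048)
```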
This turns the 26 × 26 × 512 26 times 26 times 512 feature map into a Through research, the above feature map resizing technique is called Reorg . This is essentially just a technique to reorganize the memory to turn the feature map n × n × c first n times n times c_1 Fort The main idea of this technique is as follows. Suppose we want to reduce the length and width of each side 2 2 times the number of channels must be increased 4 4 times. This transformation is not at all like the resize operation in image processing. For easy visualization, you can see the figure below: The image above is a size feature map 4 × 4 4 times 4 . To bring the feature map to size Thus, using the Reorg technique has turned the feature map 26 × 26 × 512 26 times 26 times 512 to feature map - Training on different sized images : Since YOLOv2 uses only Convolutional Layers and Pooling Layers, it can resize input images during the algorithm run. Therefore, YOLOv2 can adapt well to many different sized input images. The author trained the network on many different image sizes to increase the adaptability of YOLOv2 to various image sizes. This means that YOLOv2 can make predictions at different resolutions. The above table compares detection systems on the PASCAL VOC 2007 test dataset. We can see that YOLOv2 is both faster and more accurate than previous detection systems. YOLOv2 will run faster with small sized images. At the highest resolution, YOLOv2 achieves the greatest accuracy with 78.6 mAP while still achieving speeds greater than real time (40 FPS). Moreover, it can run at different resolutions without too much trade-off between speed and accuracy. 2.2. Faster YOLOv2 gives more accurate prediction results, and its speed is also faster. To do that, its network architecture has changed significantly compared to YOLOv1. Instead of using a custom network based on the GoogLeNet architecture, YOLOv2 uses a new classification model as the base network, named Darknet-19 . It includes 19 Convolutional Layers and 5 Maxpooling Layers. The following figure depicts the specific architecture of Darknet-19. Darknet-19 only needs 5.58 billion operations to process an image, while with YOLOv1’s architecture is 8.52 billion operations, while still achieving 72.9% top-1 accuracy and 91.2% top-5 accuracy on the dataset. whether ImageNet. Thus, it can be seen that the speed of YOLOv2 is significantly increased (34%) compared to YOLOv1. We summarize the ideas in the table below: Most of the ideas listed in the table above increase mAP significantly, except for switching to a Fully Convolutional Network with Anchor Box and using a new backbone network. Switching to Anchor Box increased recall while keeping mAP almost the same (from 69.5 to 69.2), while using a new backbone network reduced computational costs by 34%. (mAP from 69.2 to 69.6). 2.3. Stronger Although detection systems are getting faster and more accurate, they are still constrained by a small set of objects. The datasets for object detection are very limited compared to other tasks like classification and tagging. The most common detection dataset contains thousands to hundreds of image views, with the number of labels ranging from tens to hundreds. The classification data sets are much more extensive, consisting of millions of images with tens or hundreds of thousands of classes. 
Increasing the size and number of classes of a detection dataset is not simple at all, because labeling images for detection is much more expensive than labeling for classification or tagging (in addition to the class label, we also have to assign exact bounding box coordinates, which is extremely time consuming). Therefore, it is unlikely that detection datasets will become as large as classification datasets in the near future. The author therefore proposes the following two solutions: - Propose a method to exploit the large amount of existing classification data, using it to expand the scope of object recognition for the detection system. This method makes it possible to combine different datasets together. - Propose a joint training algorithm that makes it possible to train object detectors on both classification and detection datasets. This method uses: - Images labeled for detection to learn detection-specific information: predicting bounding box coordinates to accurately localize the object, objectness (whether an object exists), and how to classify common objects. - Images labeled for classification to increase the vocabulary – expanding the number of predictable classes – thereby making YOLOv2 even more powerful. To accomplish this, during training the algorithm mixes images from the classification and detection datasets together: - When the network sees an image labeled for detection, it backpropagates the error through the entire loss function. - When the network sees an image labeled for classification, it backpropagates the error only from the classification components of the loss function (a sketch of this rule is given below). At this point, a new problem emerges: detection datasets only contain common objects and general labels, such as "dog", "person", "boat". For example, the COCO dataset, with 80 classes, uses general labels of this kind. Meanwhile, classification datasets have both more labels and more depth. For example, the ImageNet dataset (22k classes) has more than one hundred dog breeds, such as "Norfolk terrier", "Yorkshire terrier", and "Bedlington terrier". So we need a way to merge the most closely related labels in order to train jointly on both kinds of datasets. Most classification methods use a softmax layer to classify objects, which assumes each image has only one label. However, we cannot apply that function to the merged label set, because an image can then have more than one label, such as "Norfolk terrier" and "dog". We could use multi-label models to solve this problem, but that is a disadvantage on the COCO detection dataset, because the images in this dataset have only one label per object. Thus, we can see that there is a mismatch between the detection and classification datasets. 2.3.1. Hierarchical classification. The labels of the ImageNet dataset are obtained from WordNet – a language database used to structure concepts and their relationships to each other. In WordNet, "Norfolk terrier" and "Yorkshire terrier" are both hyponyms (words with a more specific meaning, contained within the meaning of another word) of "terrier" – which is a category within "hunting dog" – which is in turn a category of "dog". Most classification methods assume a flat structure – words have equal, independent, separate meanings, with no word depth (no word is contained within the meaning of another). However, to be able to combine the datasets, we need to build a structure over the classes.
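Before turning to that label structure, here is a schematic sketch of the mixed-batch rule above; the dictionary keys and L2 stand-in losses are illustrative pseudocode, not the paper's actual implementation.

```python
import numpy as np

def joint_training_loss(pred, target, has_boxes):
    """Sketch of YOLO9000-style mixed training: detection images backpropagate
    the full loss; classification-only images backpropagate just the class term."""
    cls_loss = np.mean((pred["cls"] - target["cls"]) ** 2)   # class error
    if not has_boxes:
        return cls_loss               # classification image: class term only
    obj_loss = np.mean((pred["obj"] - target["obj"]) ** 2)   # objectness error
    box_loss = np.mean((pred["box"] - target["box"]) ** 2)   # coordinate error
    return cls_loss + obj_loss + box_loss                    # detection image
```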
WordNet is structured as a directed graph, not a tree, because language is complex. For example, "dog" belongs to both "canine family" and "domestic animal", meaning it belongs to two different branches. We rely on the structure of WordNet to build a hierarchical tree from the concepts in the ImageNet dataset, using only visual nouns, with the root node being "physical object". The construction works as follows:

- First, we add the branches that have a single path from the root node.
- For the remaining concepts, we add the paths that grow the tree as little as possible: if a concept has two paths from the root, one adding three edges to the tree and the other adding only one, we choose the shorter path.

We call the result the WordTree – a hierarchical model of visual concepts. To perform classification on the WordTree, we predict the probability of an object as the product of the conditional probabilities along the path from that node up to the root node (we assume the image contains an object, so Pr(physical object) = 1). For example:

Pr(Norfolk terrier) = Pr(Norfolk terrier | terrier) · Pr(terrier | hunting dog) · … · Pr(animal | physical object)
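The same rule in a few lines of Python; the hierarchy and the conditional probabilities below are toy values, invented purely for illustration:

```python
# Hypothetical toy hierarchy: each node maps to its parent (root has parent None).
PARENT = {
    "Norfolk terrier": "terrier",
    "terrier": "hunting dog",
    "hunting dog": "dog",
    "dog": "physical object",
    "physical object": None,
}

# Hypothetical conditional probabilities Pr(node | parent) output by the network.
COND = {
    "Norfolk terrier": 0.6,
    "terrier": 0.8,
    "hunting dog": 0.9,
    "dog": 0.95,
}

def absolute_probability(node: str) -> float:
    """Multiply conditionals along the path to the root; Pr(physical object) = 1."""
    p = 1.0
    while PARENT[node] is not None:
        p *= COND[node]
        node = PARENT[node]
    return p

print(absolute_probability("Norfolk terrier"))  # 0.6 * 0.8 * 0.9 * 0.95 = 0.4104
```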
3. Conclusion

We have introduced and detailed YOLOv2 and YOLO9000 and their improvements over the first YOLO version. YOLOv2 gives better results at higher speed than other recognition systems on different detection datasets. Moreover, it can process images of different sizes without trading off much between speed and accuracy. YOLOv2 is further improved by concurrent training on the detection and classification datasets, producing the YOLO9000 version with the ability to predict more than 9000 objects. WordTree is used to combine data from many different sources, together with techniques to train simultaneously on the ImageNet and COCO datasets. YOLO9000 marks a big step forward in bridging the gap between detection and classification datasets.

https://itzone.com.vn/en/article/yolov2-better-faster-and-more-powerful/
In this post, we will cover Faster R-CNN object detection with PyTorch. We will learn how object detection evolved from R-CNN to Fast R-CNN to Faster R-CNN. This post is part of our PyTorch for Beginners series.

1. Image Classification vs. Object Detection

Image classification is a problem where we assign a class label to an input image. For example, given an input image of a cat, the output of an image classification algorithm is the label "Cat". In object detection, we are not only interested in which objects are in the input image, but also in where they are located. The figure above illustrates the difference between image classification and object detection.

1.1. Image Classification vs Object Detection: Which one to use?

Typically, image classification is used in applications where there is only one object in the image. There could be multiple classes (e.g. cats, dogs, etc.), but usually there is only one instance of a class in the image. In most applications with more than one object in the input image, we need to find the locations of the objects and then classify them; we use an object detection algorithm in such cases. Object detection can be hundreds of times slower than image classification, so in applications where the location of the object in the image is not important, we use image classification.

2. Object Detection

We can think of object detection as a two-step process:

- Find bounding boxes containing objects, such that each bounding box contains only one object.
- Classify the image inside each bounding box and assign it a label.

In the next few sections, we will cover the steps that led to the development of the Faster R-CNN object detection architecture.

2.1 Sliding Window Approach

Most classical computer vision techniques for object detection, like HAAR cascades and HOG + SVM, use a sliding window approach. A sliding window is moved over the image, and all the pixels inside the window are cropped out and sent to an image classifier. If the classifier identifies a known object, the bounding box and the class label are stored; otherwise, the next window is evaluated. The sliding window approach is computationally very expensive, because to detect objects in an input image, windows at different scales and aspect ratios need to be evaluated at every pixel. As a result, sliding windows are used only when detecting a single object class with a fixed aspect ratio. For example, the HOG + SVM or HAAR based face detector in OpenCV uses a sliding window approach; in the case of a face detector, the complexity is manageable because only square bounding boxes are evaluated at different scales.

2.2. R-CNN Object Detector

Convolutional Neural Network (CNN) based image classifiers became popular after a CNN based method won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2012. Because every object detector has an image classifier at its heart, the invention of a CNN based object detector became inevitable. Two problems needed to be overcome, though:

- CNN based image classifiers were computationally very expensive compared to traditional techniques like HOG + SVM or HAAR cascades.
- The computer vision community was growing more ambitious. People wanted to build a multi-class object detector that could handle different aspect ratios in addition to different scales. Therefore, a sliding window based approach for object detection was ruled out.
It was just too expensive.

Researchers started working on the idea of training a machine learning model that could propose the locations of bounding boxes likely to contain objects. These bounding boxes were called Region Proposals or Object Proposals. A region proposal algorithm outputs a list of a few hundred bounding boxes at different locations, scales, and aspect ratios, each with a small probability of containing an object; it does not know or care which object is contained in a box, and most of the proposed boxes do NOT contain any object.

Why are region proposals still useful? Evaluating the image classifier at a few hundred bounding boxes proposed by the region proposal algorithm is much cheaper than evaluating it at the hundreds of thousands or even millions of boxes required by the sliding window approach.

One of the first approaches that used region proposals was R-CNN (short for Regions with CNN features) by Ross Girshick et al. It used an algorithm called Selective Search to generate 2000 region proposals and ran a CNN + SVM based image classifier on these 2000 bounding boxes. The accuracy of R-CNN was state of the art at the time, but the speed was still very slow (18-20 seconds per image on a GPU).

2.3 Fast R-CNN Object Detector

In R-CNN, each bounding box was independently classified by the image classifier: there were 2000 region proposals, and the image classifier calculated a feature map for each of them. This process was expensive. In follow-up work, Ross Girshick proposed a method called Fast R-CNN that significantly sped up object detection. The idea was to calculate a single feature map for the entire image instead of 2000 feature maps for 2000 region proposals. For each region proposal, a region of interest (RoI) pooling layer extracts a fixed-length feature vector from the feature map. Each feature vector is then used for two purposes:

- Classify the region into one of the classes (e.g. dog, cat, background).
- Improve the accuracy of the original bounding box using a bounding box regressor.

2.4 Faster R-CNN Object Detector

In Fast R-CNN, even though the computation for classifying 2000 region proposals was shared, the part of the algorithm that generated the region proposals did not share any computation with the part that performed image classification. In the follow-up work called Faster R-CNN, the main insight was that the two parts (calculating region proposals and image classification) could use the same feature map and therefore share the computational load. A Convolutional Neural Network produces a feature map of the image that is used simultaneously for training a region proposal network and an image classifier. Because of this shared computation, there was a significant improvement in the speed of object detection.

3. Object Detection with PyTorch [code]

In this section, we will learn how to use the Faster R-CNN object detector with PyTorch. We will use the pre-trained model included with torchvision; all the pre-trained models in PyTorch can be found in torchvision.models. If you want to learn more about these models and about many more applications and concepts of Deep Learning and Computer Vision in detail, check out the official Deep Learning and Computer Vision courses by OpenCV.org.
3.1. Input and Output

The pretrained Faster R-CNN ResNet-50 model that we are going to use expects the input image tensor to be of the form [n, c, h, w], with a minimum size of 800 px, where:

- n is the number of images
- c is the number of channels; for RGB images it is 3
- h is the height of the image
- w is the width of the image

The model returns:

- Bounding boxes [x0, y0, x1, y1] of all predicted objects, with shape (N, 4), where N is the number of objects the model predicts to be present in the image.
- Labels of all predicted classes.
- Scores of each predicted label.

3.2. Pretrained Model

Download the pretrained model from torchvision with:

```python
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()
```

The second line downloads a Faster R-CNN model with a ResNet-50 backbone and pretrained weights, and model.eval() puts it in inference mode.

Define the class names given by PyTorch's official docs:

```python
COCO_INSTANCE_CATEGORY_NAMES = [
    '__background__', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
    'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'N/A', 'stop sign',
    'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
    'elephant', 'bear', 'zebra', 'giraffe', 'N/A', 'backpack', 'umbrella', 'N/A',
    'N/A', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard',
    'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard',
    'surfboard', 'tennis racket', 'bottle', 'N/A', 'wine glass', 'cup', 'fork',
    'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli',
    'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
    'potted plant', 'bed', 'N/A', 'dining table', 'N/A', 'N/A', 'toilet', 'N/A',
    'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave',
    'oven', 'toaster', 'sink', 'refrigerator', 'N/A', 'book', 'clock', 'vase',
    'scissors', 'teddy bear', 'hair drier', 'toothbrush'
]
```

We can see some N/A's in the list, as a few classes were removed in later papers. We will go with the list given by PyTorch.

3.3. Prediction of the model

Let's define a function that takes an image path and returns the model's prediction for the image:

```python
from PIL import Image
import torchvision.transforms as T

def get_prediction(img_path, threshold):
    img = Image.open(img_path)             # Load the image
    transform = T.Compose([T.ToTensor()])  # Define a PyTorch transform
    img = transform(img)                   # Apply the transform to the image
    pred = model([img])[0]                 # Pass the image to the model
    pred_class = [COCO_INSTANCE_CATEGORY_NAMES[i] for i in list(pred['labels'].numpy())]       # Class names
    pred_boxes = [[(b[0], b[1]), (b[2], b[3])] for b in list(pred['boxes'].detach().numpy())]  # Bounding boxes
    pred_score = list(pred['scores'].detach().numpy())                                         # Prediction scores
    pred_t = [pred_score.index(x) for x in pred_score if x > threshold][-1]  # Last index with score above threshold
    pred_boxes = pred_boxes[:pred_t + 1]
    pred_class = pred_class[:pred_t + 1]
    return pred_boxes, pred_class
```

- The image is obtained from the image path.
- The image is converted to an image tensor using PyTorch's transforms.
- The image is passed through the model to get the predictions.
- Classes and box coordinates are obtained, but only predictions with a score greater than the threshold are kept.

3.4. Pipeline for Object Detection

Next we will define a pipeline that takes an image path and displays the output image.
```python
import cv2
import matplotlib.pyplot as plt

def object_detection_api(img_path, threshold=0.5, rect_th=3, text_size=3, text_th=3):
    boxes, pred_cls = get_prediction(img_path, threshold)  # Get predictions
    img = cv2.imread(img_path)                  # Read image with cv2
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # Convert from BGR to RGB
    for i in range(len(boxes)):
        pt1 = tuple(int(v) for v in boxes[i][0])  # Top-left corner
        pt2 = tuple(int(v) for v in boxes[i][1])  # Bottom-right corner
        cv2.rectangle(img, pt1, pt2, color=(0, 255, 0), thickness=rect_th)  # Draw rectangle with the box coordinates
        cv2.putText(img, pred_cls[i], pt1, cv2.FONT_HERSHEY_SIMPLEX, text_size, (0, 255, 0), thickness=text_th)  # Write the predicted class
    plt.figure(figsize=(20, 30))  # Display the output image
    plt.imshow(img)
    plt.xticks([])
    plt.yticks([])
    plt.show()
```

- The prediction is obtained from the get_prediction method.
- For each prediction, the bounding box is drawn and the class name is written with OpenCV.
- The final image is displayed.

3.5. Inference

Now let's use the API pipeline we built to detect objects in some images. The pretrained model takes around 8 seconds for inference on a CPU and 0.15 seconds on an NVIDIA GTX 1080 Ti GPU.

Example 3.5.1. Download an image for inference:

```
!wget https://www.wsha.org/wp-content/uploads/banner-diverse-group-of-people-2.jpg -O people.jpg
```

Then use the image with the API function to display the output, as shown below.
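The call itself looks like this; the threshold of 0.8 is a typical choice for a crowded scene, not a value prescribed by the model:

```python
object_detection_api('people.jpg', threshold=0.8)
```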
https://www.learnopencv.com/faster-r-cnn-object-detection-with-pytorch/
Object detectors have hugely profited from moving towards an end-to-end learning paradigm: proposals, features, and the classifier becoming one neural network improved results two-fold on general object detection. One indispensable component is non-maximum suppression (NMS), a post-processing algorithm responsible for merging all detections that belong to the same object. We propose a new network architecture designed to perform non-maximum suppression (NMS) using only the boxes and their scores. We report experiments for person detection on PETS and for general object categories on the COCO dataset.

Cityscapes Dataset

We present a new large-scale dataset that contains a diverse set of stereo video sequences recorded in street scenes from 50 different cities, with high-quality pixel-level annotations of 5000 frames, in addition to a larger set of weakly annotated frames. The dataset is thus an order of magnitude larger than similar previous attempts. Details on annotated classes, example images, and more are available at this webpage.

What makes for effective detection proposals?

Detection proposals make it possible to avoid exhaustive sliding-window search across images while keeping high detection quality. We provide an in-depth analysis of proposal methods regarding recall, repeatability, and impact on DPM, R-CNN, and Fast R-CNN detector performance. We introduce a novel metric, the average recall (AR), which rewards both high recall and good localisation and correlates surprisingly well with detector performance. Our findings show common strengths and weaknesses of existing methods, and provide insights and metrics for selecting and tuning proposal methods.

See the Difference: Direct Pre-Image Reconstruction and Pose Estimation by Differentiating HOG

We exploit the piece-wise differentiability of the HOG descriptor to facilitate differentiable vision pipelines. We present our implementation of ∇HOG based on the auto-differentiation toolbox Chumpy and show applications to pre-image visualization and pose estimation which extend the existing differentiable renderer OpenDR pipeline.

Object Disambiguation for Augmented Reality Applications

In this project we propose a novel object recognition system that fuses state-of-the-art 2D detection with 3D context. We focus on assisting a maintenance worker by providing an augmented reality overlay that identifies and disambiguates potentially repetitive machine parts. In addition, we provide an annotated dataset that can be used to quantify the success rate of a variety of 2D and 3D systems for object detection and disambiguation.

Scalable Multitask Representation Learning for Scene Classification

We propose a multitask learning approach (MTL-SDCA) which scales to high-dimensional image descriptors (Fisher Vectors) and consistently outperforms the state of the art on the SUN397 scene classification benchmark with varying amounts of training data and varying K when classification performance is measured via top-K accuracy.

Learning Using Privileged Information: SVM+ and Weighted SVM

We relate privileged information to importance weighting and show that the prior knowledge expressible with privileged features can also be encoded by instance weights. Moreover, we argue that if a vast amount of data is available for model selection (e.g. a validation set), it can be used to learn instance weights that allow the Weighted SVM to outperform SVM+.
Learning Smooth Pooling Regions for Visual Recognition

In this project we investigated the pooling stage in Spatial Pyramid Matching (SPM) architectures. In order to preserve some spatial information, SPM methods typically divide the image into sub-regions in a fixed manner according to some division template (for instance, by splitting the image into 2-by-2 non-overlapping sub-regions) and then aggregate statistics separately over these sub-regions. In this work, we question such an arbitrary division and propose a method that discriminatively learns the optimal division together with the classifier's parameters. We show experimentally that such an optimized pooling stage boosts the overall accuracy of the visual recognition task and therefore should not be left as an arbitrary choice.

Recognizing Materials from Virtual Examples

In this project, we investigate if and how appearance descriptors can be transferred from the virtual world to real examples. We study two popular appearance descriptors on the task of material categorization, as it is a purely appearance-driven task. Beyond this initial study, we also investigate different approaches to combining and adapting virtual and real data in order to bridge the gap between rendered and real data. Our study is carried out using a new database of virtual materials, MPI-VIPS, which complements the existing KTH-TIPS material database.

MPII Multi-Kinect Dataset

In this project we explore the benefit of using multiple depth cameras (Microsoft Kinect) for object localization. We provide the MPII Multi-Kinect Dataset, a novel dataset collected with four Kinect cameras simultaneously in a kitchen environment.

Teaching 3D Geometry to Deformable Part Models

State-of-the-art object detectors nowadays typically target 2D bounding box localization of objects in images. While this is sufficient for object detection itself, important computer vision problems like 3D scene understanding and autonomous driving would benefit much more from object detectors capable of outputting richer object hypotheses (viewpoints of objects, correspondences across views, fine-grained categories, etc.). In this work we aim at narrowing the representational gap between the standard object detector output and the ideal input of a high-level vision task, like 3D scene understanding.

Cross-Modal Stereo by Using Kinect

We complement the depth estimate within the Kinect with a cross-modal stereo path obtained from disparity matching between the Kinect's IR and RGB sensors. We investigate the physical characteristics of the Kinect sensors: how the RGB channels can be combined optimally in order to mimic the image response of the IR sensor. Adapting RGB in the frequency domain to mimic an IR image did not yield improved performance. We further propose a more general method that learns optimal filters for cross-modal stereo under projected patterns. Our combination method produces depth maps that include sufficient evidence for reflective and transparent objects while preserving textureless objects, such as tables or walls.

Image Warping For Face Recognition

In this project we develop novel image warping algorithms for full 2D pixel-grid deformations, with application to face recognition. We propose several methods with different optimization complexity, depending on the type of mutual dependencies between neighboring pixels in the image lattice.
We evaluate the presented warping approaches on four challenging face recognition tasks in highly variable domains.

Addressing scalability in object recognition

While current object class recognition systems achieve remarkable recognition performance for individual classes, the simultaneous recognition of multiple classes remains a major challenge: building reliable object class models requires a sufficiently high number of representative training examples, often in the form of manually annotated images. Since manual annotation is costly, our research aims at reducing the number of training examples and the amount of manual annotation required for building object class models, thereby increasing scalability. We explore three different ways of achieving this goal.

Monocular Scene Understanding from Moving Platforms

This project combines state-of-the-art object detectors, semantic scene segmentation, and the notion of tracklets to perform 3D scene understanding from a moving platform with a monocular camera. Improved results are presented for pedestrians, cars, and trucks.
https://www.mpi-inf.mpg.de/departments/computer-vision-and-machine-learning/research/object-recognition-and-scene-understanding
Object detection and localization is one of the key challenges in autonomous driving (Geiger et al., 2012) and robotics (Wolf et al., 2016). Compared to the performance of 2D object detection in images, the results in 3D object detection lag behind considerably, mainly because of the increased difficulty of the localization task. Instead of fitting axis-aligned rectangles around the parts of objects that are visible in the image plane, the main challenge in 3D detection and localization is amodal oriented 3D bounding box prediction in 3D space, which includes the occluded or truncated parts of objects. Despite the recent advancement of image-only 3D detectors (Wang et al., 2019; Li et al., 2019), point cloud information from LiDAR sensors remains a requirement for highly accurate 3D localization. Unfortunately, LiDAR data is represented as an unordered, sparse set of points in a continuous 3D space and cannot be directly processed by standard convolutional neural networks (CNNs). Deep CNNs operating on multiple discretized 2D Bird's Eye View (BEV) maps achieved early success in both 3D (Chen et al., 2017; Ku et al., 2018) and BEV (Yang et al., 2018) detection, but simply discretizing point clouds is inevitably linked to a loss of information. Current state-of-the-art 3D object detection is largely based on the seminal work PointNet (Qi et al., 2017). PointNets are used in combination with 2D image detectors to perform 3D localization on point cloud subsets (Qi et al., 2018; Shin et al., 2019). Two-stage approaches (Shi et al., 2019) use PointNets in an initial segmentation stage and a subsequent localization stage. Voxelnet (Zhou and Tuzel, 2018) introduces Voxel Feature Encoding (VFE) layers, which utilize PointNet to learn an embedding of the local geometry within voxels that can then be processed by 3D and 2D convolutional stages. Others (Yan et al., 2018; Lang et al., 2019) add further improvements to Voxelnet.

Motivation

Voxelnet is a single-stage model; as such, it applies VFE encoding with a uniform resolution to the whole scene, although such a resolution is only necessary at locations that contain objects. As a consequence, it is severely limited by memory constraints, especially during training, where only a batch size of 2 can be processed by a GPU with 11 GB of memory. Given the typical sparseness of objects within a LiDAR scene, only a few subsets contain viable information to train regression (Engelcke et al., 2017). But in the case of Voxelnet, Batch Normalization layers (Ioffe and Szegedy, 2015) break this local independence.

Patch Refinement

To be able to train a small detector focused on 3D bounding box regression, we decompose the task into a preliminary BEV detection step and a local 3D detection step, similar to the two-stage approach of R-CNN (Girshick et al., 2014). Object sizes are bounded and unaffected by the distance to the sensor. We construct a Local Refinement Network (LRN) that operates on small subsets of points within a fixed-size cuboid, which we term "patches". The RPN does not have to perform warping, and independence of the LRN can be achieved by training with some positional noise to account for proposed locations that are slightly offset. We favor an independent approach because it enables additional augmentation options, because the higher-resolution features have to be calculated in any case, and because it allows us to evaluate the regression ability of the LRN without the influence of an RPN.
Our work comprises the following key contributions:

- We demonstrate that it is viable to decompose the 3D object detection task in autonomous driving scenarios into a preliminary BEV detection followed by a local 3D detection, via two independently trained networks.
- We show that it takes only a few simple modifications to utilize the Voxelnet (Zhou and Tuzel, 2018) architecture either as an efficient Region Proposal Network or as a Local Refinement Network that is capable of highly accurate 3D bounding box regression and is not limited to a fixed input space.
- We report the beneficial effect of adding the corner bounding box parametrization of AVOD (Ku et al., 2018) as auxiliary regression targets, even without applying the proposed decoding algorithm.

2 Related Work

MV3D (Chen et al., 2017) and AVOD (Ku et al., 2018) are two-stage models that fuse image and point cloud information and perform regression on 2D BEV feature maps built from point cloud projections and camera information. While this enables the use of 2D CNNs, these approaches cannot capture the full geometric complexity of the 3D input scene, due to the information loss caused by discretization.

Frustum PointNets (Qi et al., 2018) projects the proposals of an image-based 2D detector onto the point cloud. The resulting frustum is then further processed by a sequence of PointNets. It demonstrates that accurate amodal bounding box prediction can be performed without context information, which is actively removed by a segmentation PointNet. Noticeably, detection scores are calculated without taking the LiDAR representation into account.

Voxelnet (Zhou and Tuzel, 2018) applies a 3D grid to divide the input space into voxels, followed by a sparse voxel-wise input encoding via a sequence of PointNet-based VFE layers. This enables the network to learn structures within voxels and to embed the point cloud into a structured representation while retaining the most important geometric information; the result is subsequently processed by 3D and 2D CNNs. Both our RPN and LRN are based on Voxelnet, modified to better accomplish their respective tasks.

SECOND (Yan et al., 2018) modifies Voxelnet by replacing the costly dense 3D convolutions with efficient sparse convolutions, reducing both run-time and memory consumption considerably. Furthermore, they propose ground truth sampling, an augmentation technique that populates training scenes with additional objects from other scenes. Besides speeding up training by increasing the average number of objects per frame, this augmentation provides strong protection against overfitting on context information (Shi et al., 2019; Lang et al., 2019).

PointPillars (Lang et al., 2019) proposes a Voxelnet-based model without 3D convolutional middle layers. Instead, they apply a voxel grid with a vertical resolution of one, and the encoding of the vertical information is performed solely within the VFE layers. Further optimized towards speed, the model achieves the highest frame rates within the pool of current 3D object detection models. Our RPN follows a similar design, but we use a vertical resolution of two and concatenation.

PointRCNN (Shi et al., 2019) is a two-stage approach utilizing PointNets that introduces a novel LiDAR-only bottom-up 3D proposal generation first stage, followed by a second stage that refines the predictions. Similar to our model, it follows the R-CNN approach and pools the relevant subset of the input point cloud for each proposal.
Unlike our LRN, the second stage of PointRCNN reuses higher-level features and relies on the RPN to transform the proposals into a canonical representation.

3 Method

Figure 2 depicts the inference procedure, which follows the original R-CNN (Girshick et al., 2014) approach.

3.1 Local Refinement Network

We follow the Voxelnet approach and apply a 3D voxel grid to the input, grouping points into voxels. This is followed by a VFE stage and then by a reduction step from 3D to 2D BEV feature maps, which are then processed by our 2D convolutional backbone network.

Grouping Points into Voxels

We use the efficient sparse encoding algorithm of Voxelnet, which processes only non-empty voxels. Although we train only on small regions of the input scene, we preserve absolute (global) point coordinates. This way our model can learn that objects farther from the sensor are typically represented by fewer measurements than nearby objects.
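A generic sketch of this bucketing step follows; it is not the paper's implementation, and the voxel size is illustrative, while the 35-point cap mirrors the Voxelnet default mentioned below:

```python
import numpy as np
from collections import defaultdict

def group_points_into_voxels(points, voxel_size=(0.2, 0.2, 0.4), max_points=35):
    """Bucket an (N, 3) point cloud into a dict of non-empty voxels."""
    voxels = defaultdict(list)
    # Integer voxel index of each point along x, y, z (absolute coordinates preserved).
    indices = np.floor(points / np.asarray(voxel_size)).astype(np.int64)
    for idx, point in zip(map(tuple, indices), points):
        if len(voxels[idx]) < max_points:  # Cap the number of points per voxel
            voxels[idx].append(point)
    return voxels

# Synthetic stand-in for a LiDAR scan.
cloud = np.random.uniform(0.0, 40.0, size=(1000, 3))
print(len(group_points_into_voxels(cloud)))  # Number of non-empty voxels
```

Only the non-empty buckets are kept, which is what makes the encoding sparse and memory-friendly for scenes where most of the space contains no points.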
Voxel Feature Encoding

While the grouping algorithm of Voxelnet processes only non-empty voxels, the input to the VFE layers consists of dense tensors with a large proportion of padded zeros (roughly 90% with the default setting of at most 35 points per voxel). As it has a regularizing effect on the running mean and variance, this zero-padding alleviates the use of Batch Normalization (BN) (Ioffe and Szegedy, 2015). We relinquish the use of zero padding and remove the BN in our VFE stage; instead, we apply per-sample normalization. Overall, our modified VFE layer requires less memory while also increasing the predictive performance of our model.

Reduction Network

The activation tensor resulting from the VFE stage is still three-dimensional. Via multiple 3D convolutional layers, the vertical resolution of the activation tensor is reduced to one, resulting in 2D feature maps. The result is a 2D representation in BEV with the vertical information encoded locally.

Backbone Network

Figure 3 depicts the architecture of our backbone network. It uses multiple blocks of 2D convolutional layers to generate intermediate feature maps of different resolutions. We then apply transposed convolutional layers to obtain feature maps of equal dimensions before the output layers, where the feature maps are concatenated and combined with convolutions at each anchor position (the resulting feature maps are labeled in Figure 3). For the regression head, the receptive field of one of these maps has been chosen to cover the majority of objects, while the receptive field of the other is slightly larger to cover outliers. Restricting the receptive field of the regression head reduces distractions from the environment when predicting exact bounding boxes. In contrast, we perform the detection based on higher-level feature maps that have a larger receptive field.

Loss

We use a variant of the residual box parametrization with a direction classifier (Yan et al., 2018) and sine and cosine encoding for the orientation. From the vector (x, y, z, h, w, l, θ), representing the three box-center coordinates, the height, width, and length, and the yaw around the z-axis of an oriented 3D bounding box, we calculate residual regression targets between a ground truth vector (superscript g) and an anchor vector (superscript a) from the same vector space; the subscripts b and t denote the bottom and top of a box, i.e. the vertical extent is encoded via z_b = z − h/2 and z_t = z + h/2 rather than via center and height. In the SECOND-style encoding this variant builds on, the residuals of Equation (1) take the form

Δx = (x^g − x^a) / d^a,  Δy = (y^g − y^a) / d^a,
Δw = log(w^g / w^a),  Δl = log(l^g / l^a),   (1)

with d^a = sqrt((w^a)² + (l^a)²) the diagonal of the anchor base, the vertical residuals normalized by the anchor height, and the orientation residual encoded via sin(θ^g − θ^a) and cos(θ^g − θ^a). Additionally, we use the box parametrization of AVOD (Ku et al., 2018) as auxiliary regression targets.

We transform ground truth boxes and anchor boxes into a 2D corner representation in BEV, taking the x- and y-coordinates of each corner and resulting in eight variables, with the subscripts enumerating the corners; the corner targets are then calculated as the differences between the ground truth and anchor corner coordinates. While we do not use these additional regression parameters during inference, the added training task improves performance considerably.

We use axis-aligned 2D Intersection over Union (IoU) in BEV as the similarity measure between the ground truth boxes, rotated to the nearest axis, and the corresponding anchor type. For detection, positive anchors have to surpass an upper IoU threshold, while anchors with an IoU below a lower threshold are treated as negatives; for regression, a separate positive threshold is used. Our default choice for training the detection head is balanced sampling of anchors with a ratio of 3 to 1 between negative and positive anchors. For detection and direction we use the binary cross-entropy loss, denoted L_det and L_dir, and for the regression targets we use the smooth L1 loss (also known as Huber loss), denoted L_reg. Balancing is achieved via the hyperparameters α, β, and γ, with default values of 1, 1, and 2, respectively. Let pos and neg denote the sampled positive and negative detection anchors, let p_i denote the sigmoid activation of the classification output, and let reg denote the positive regression anchors, with N_reg the total number of positive regression anchors. The overall loss of Equation (2) is then the weighted sum

L = α L_det + β L_dir + γ L_reg,   (2)

where the detection term is computed over the sampled anchors pos ∪ neg, and the direction and regression terms are averaged over the N_reg positive regression anchors.

3.2 Patch Construction

The independence of the LRN enables us to construct patches from ground truth annotations.

Surface Sampling

Inspired by the beneficial effect of ground truth sampling in SECOND, we try to achieve protection against overfitting on context information in a similar way. First, we create lists of objects and related surrounding areas (surfaces) in the training set. We then remove points within the slightly enlarged bounding boxes of the objects present in the respective surface, and rotate each surface to align its center with the depth axis in front of the sensor car. Finally, we sort both the object and surface lists by absolute distance to the sensor. During training, we combine objects with surfaces that appear at a similar distance to the sensor. To ensure that the object lines up with the surface, we look up the vertical coordinate of the original object and place the augmented object at the corresponding height. We apply surface sampling based on the difficulty levels easy, moderate, and hard, with fixed respective probabilities.

Global & Per-Object Augmentation

We then proceed with standard augmentation techniques: scaling and mirroring of the patch and of the object individually. Contrary to Zhou and Tuzel (2018) and Yan et al. (2018), we do not apply per-object rotation or vertical translation, as per-object rotation introduces self-occlusion artifacts and vertical translation creates unreasonable patches in which objects appear below the ground or float above it. Especially in the context of self-driving, the training data for car objects is heavily biased, with a strong peak for orientations parallel to the ego-vehicle. A major benefit of working on a per-object basis without a fixed input space is the ability to augment with the full range of rotations in the forward view around the global z-axis. Therefore, every object is learned in every possible orientation inside the patch, respecting the global position in which the rotated object would appear.
Consequently, both the perpendicular and the parallel anchor types are trained equally well.

Random Cropping - LRN Detection Objective

At this stage, we have an augmented object placed somewhere upon a surface. To achieve robustness against imperfect proposals from an actual RPN, and to create a training task for the detection head, we sample positional noise from a circular area. Finally, we crop the patch at the BEV location determined by the object center plus the sampled noise offset. The objective of the detection head of the LRN is thus to revert this offset and determine the correct object center within the small anchor grid, a task closely related to regression and designed to achieve a correlation between the detection score and the ability to accurately regress an object.

3.3 Region Proposal Network

Our RPN is a slim version of the LRN; the main source of simplification is the circumvention of the vertical reduction stage. Instead of multiple 3D convolutional layers, we use a vertical voxel resolution of two and concatenate the two resulting feature planes. While this change reduces the 3D detection results considerably, the BEV detection results remain nearly unchanged.

3.4 Inference

During inference, we take the proposals of an RPN to extract patches. To increase the similarity with our training task, we remove points within the slightly enlarged bounding boxes of additional proposals falling within the patch. Similar to training, we rotate the patch onto the depth axis (an improvement of +0.15 AP).

4 Experiments

Data Set

We evaluate our method on the KITTI 3D object detection benchmark (Geiger et al., 2012), which provides samples of LiDAR point clouds as well as RGB camera images. The data set consists of 7,481 training samples and 7,518 test samples. Detections are evaluated on three difficulty levels (easy, moderate, hard). Average precision (AP) is used as the metric; successful detections for the class car require an IoU of at least 70%.

Experimental Setup

We subdivide the original training samples into a train set of 3,712 samples and a validation set of 3,769 samples, as described in Chen et al. (2015), which we use for all our experiments and for the submission to the test server. Furthermore, we use a non-maximum suppression threshold of 0.01. We train our model on a single 1080 Ti GPU.

4.1 Implementation Details

Region Proposal Network

For our RPN we use the input space and BEV resolution of Voxelnet, a vertical resolution of 2, the loss described above, and a backbone configuration of ABC/AX. To compensate for the smaller receptive fields due to the circumvented 3D convolutions, we add 2 layers to convolutional block 1. We first pre-train the RPN on patches with the local objective, and only afterward train it for its final task as an RPN, simply by changing its input to whole point clouds, decreasing the learning rate by a factor of 0.1, and reducing the batch size to 4. The RPN adapts to the new input within only a few epochs. Due to the increased imbalance between foreground and background present when processing whole point clouds, we choose Focal Loss (Lin et al., 2017) with default parameters to train the detection head.

Local Refinement Network

For the width and depth dimensions we chose fixed voxel and patch sizes, and for the vertical dimension a fixed voxel height (all in meters).
To reduce the vertical dimension from 19 to 1, we use a sequence of four 3D convolutional layers with a kernel size of 3, vertical strides of (2, 1, 2, 1), and no padding. To compensate for the smaller receptive fields due to the increased resolution, we add 5 layers to convolutional block 1 and 2 layers to convolutional block 2. The convolutional block related to the extended feature maps X consists of six 3×3 convolutions without initial down-sampling. The voxel resolution in the encoding stage is chosen accordingly. The regression and detection heads operate on an anchor grid. We train with a batch size of 32 for 5 million samples with the Adam optimizer; after the first million samples, the initial learning rate is decayed multiplicatively at fixed intervals.

4.2 Evaluation on the KITTI Test Set

Table 1 presents the results of our method on the KITTI test set using the Average Precision (AP) metric. Consuming only LiDAR data and 50% of the training data, our submission outperformed all previous methods for 3D object detection on cars. Noticeably, the effect of the local training objective of the detection head is most prominent on the easy difficulty. A comparison of the precision-recall curves provided by the KITTI benchmark showed that our model has a distinct advantage in avoiding high-ranked false positive detections that do not pass the 70% IoU threshold.

4.3 Ablation Studies on the KITTI Validation Set

VFE layers

We trained our network with VFE layers as proposed in Voxelnet, with zero-padding and BN. In this case, we observe a large performance drop over all difficulty levels. We hypothesize that the reason is that the batch statistics of those features are typically highly variable, due to the low number of points and the varying input space. Additionally, we applied BN without zero-padding; this performs poorly on whole scenes as well as on patches.

Augmentation

We analyze the effectiveness of our augmentation strategies: surface sampling and additional global rotation. Table 2 shows two strong performance drops when either of them is not employed. For surface sampling, this drop is expected, since we fully rely on the construction of artificial patches to keep the model from overfitting on context information. The drop caused by reduced global rotation could be related to an increased orientation bias: the data set comprises largely objects parallel to the sensor car and few objects in the perpendicular orientation, and as we rotate all objects onto the depth axis and sample rotations from a limited range, the number of perpendicular objects is further reduced. Overall, both augmentation strategies are vital components of the training procedure.

Auxiliary Regression Targets

In our experiments without auxiliary regression targets, we observe slower, more unstable learning. With auxiliary regression targets, the model becomes more decisive in rejecting false positives and preserves the level of recall with fewer proposals. Table 2 shows a decrease in performance if the auxiliary regression targets are omitted.

Backbone Modification

We validate our architecture design against variants (see Figure 3) in which we modified the connections of the regression head and the detection head to the feature maps A, B, C, and X. In the standard variant, the detection head is connected to B and C, while the regression head is connected to A and X. We denote this backbone as BC/AX.
As a first variant, the backbone ABC/AX uses an additional feature map for detection, which leads to earlier overfitting of the detection head. A comparison of the backbone variants C/AX and B/AX, each using only one feature map for detection, suggests that the higher-level map C is of greater importance. The backbone variant BC/A relinquishes the additional regression map, which has a negative effect on the performance at the easy and moderate difficulty levels. The backbone variant "BC/AX red." uses fewer layers in convolutional blocks 1 and 2 and achieves almost identical performance. Table 2 shows the decrease in performance for the different backbone variants.

Patch Extraction

We study the influence of additional objects present in a patch. We calculate thresholds based on percentiles of the detection scores over the validation set, and then remove additional objects only if their respective detection score exceeds a given threshold. We observe that removing the objects where the RPN is most confident matters most, and that the detection of objects of difficulty hard is affected the most by additional objects within a patch. We conclude that the effect is caused by distraction from additional objects with distinct features.

4.4 Additional Experiments

Refinement of Other Models

We study how our LRN performs when the region proposals are provided by other detection models. To this end, we construct validation patches based on the predictions generated by two other models. Our experiments show only marginal differences between our RPN and two state-of-the-art detection models, namely SECOND v1.5 and PointRCNN (see Table 3). The results underline that the regression capability of the RPN is of low importance. Additionally, we compare our RPN to proposals created from ground truth labels. The gap increases with difficulty and suggests that further improvement can be achieved with an RPN that is more capable of distinguishing objects described by a low number of LiDAR points, e.g. a fusion-based RPN.

Domain Adaptation

We further investigate the option of a two-phase training procedure for our lightweight RPN. As the patches occur at their original distance to the sensor, we can train a detector that operates on whole scenes: first we pre-train on the domain of patches only, then we switch to the domain of whole scenes. The pre-trained RPN surpasses the moderate 3D scores of Voxelnet after only one additional epoch.

5 Conclusion

We have proposed Patch Refinement, a two-stage model for 3D object detection from sparse LiDAR point clouds. We demonstrate that a modified Voxelnet is capable of highly accurate local bounding box regression and that a simplified Voxelnet is an adequate choice for an RPN to complement such an LRN. As the LRN operates only on local point cloud subsets, it can refine the proposals of an arbitrary RPN on demand. Further accuracy improvements may be attainable by using a ground plane estimation algorithm and by integrating image information in the RPN stage.

References

- Chen et al. (2015) 3D object proposals for accurate object class detection. In Advances in Neural Information Processing Systems 28, pp. 424–432.
- Chen et al. (2017) Multi-view 3D object detection network for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1907–1915.
- Engelcke et al. (2017) Vote3Deep: fast object detection in 3D point clouds using efficient convolutional neural networks. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pp. 1355–1361.
- Geiger et al. (2012) Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3354–3361.
- Girshick et al. (2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587.
- Ioffe and Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning (ICML'15), pp. 448–456.
- Ku et al. (2018) Joint 3D proposal generation and object detection from view aggregation. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1–8.
- Lang et al. (2019) PointPillars: fast encoders for object detection from point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 12697–12705.
- Li et al. (2019) Stereo R-CNN based 3D object detection for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7644–7652.
- Liang et al. (2019) Multi-task multi-sensor fusion for 3D object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7345–7353.
- Lin et al. (2017) Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2999–3007.
- Qi et al. (2018) Frustum PointNets for 3D object detection from RGB-D data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 918–927.
- Qi et al. (2017) PointNet: deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 652–660.
- Shi et al. (2019) PointRCNN: 3D object proposal generation and detection from point cloud. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–779.
- Shin et al. (2019) RoarNet: a robust 3D object detection based on region approximation refinement. In 2019 IEEE Intelligent Vehicles Symposium (IV), pp. 2510–2515.
- Wang et al. (2019) Pseudo-LiDAR from visual depth estimation: bridging the gap in 3D object detection for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8445–8453.
- Wolf et al. (2016) Enhancing semantic segmentation for robotics: the power of 3-D entangled forests. IEEE Robotics and Automation Letters 1(1), pp. 49–56.
- Yan et al. (2018) SECOND: sparsely embedded convolutional detection. Sensors 18(10), p. 3337.
- Yang et al. (2018) PIXOR: real-time 3D object detection from point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7652–7660.
- Zhou and Tuzel (2018) VoxelNet: end-to-end learning for point cloud based 3D object detection.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4490–4499.
https://deepai.org/publication/patch-refinement-localized-3d-object-detection
In terms of deep learning, object detection refers to the process of having a deep neural network recognize different objects within an image. This can be done in several different ways, but no matter how the task is carried out, object detection is critical for applications like autonomous driving, robot item sorting, and facial recognition.

So what is deep learning, exactly? It is an extension of machine learning, and machine learning is the study of techniques that let machines carry out tasks without being explicitly programmed to do so. Machine learning systems have three principal components: an input, a node/neuron, and an output. The input is the data being fed into the system, the node/neuron represents a mathematical algorithm that manipulates this information, and the output is the network's decision or inference about the data after it has been manipulated by the algorithm. These three components represent a simple neural network. A deep neural network is the term applied to many simple neural networks linked together: deep neural networks have multiple layers, where the output of the first layer becomes the input of the second layer, the output of the second layer becomes the input of a third layer, and so on. A neural network operates by analyzing the relevant features of the input data, detecting patterns in that data, and then making predictions about that data or similar data. The deeper a neural network is (the more layers it has), the more complex the patterns it can distinguish. The term "deep learning" simply refers to using deep neural networks to analyze data, discern the relevant patterns in the data, and make predictions about the data.

Object detection is a technique used in the field of computer vision, and computer vision refers to getting a computer to analyze an image and distinguish its content in a manner similar to humans: humans look at an image and recognize the individual objects and regions within it. Without object detection, computers would not be able to carry out the incredibly complex tasks that relate to computer vision. They would not be able to recognize people, animals, or other objects in an image, and therefore we wouldn't be able to do things like create autonomous vehicles, carry out facial recognition, or have robots interact with objects.

There are three steps to object detection in machine learning computer vision: pattern recognition, feature extraction, and classification. Pattern recognition is the stage where the deep neural network analyzes the entire image; its goal is to discern the relevant patterns within the image and memorize them so that feature extraction can be done. Feature extraction refers to breaking the general patterns that have been found down into distinct features: the larger patterns and shapes within the image are broken up into smaller regions and features, and the network focuses on the features or patterns it believes are important, ignoring other parts of the image. The final step is classification. After the notable features of an image have been extracted, the network joins these features together into the shape of an object, or multiple objects. Once this representation of an object has been created, the amalgamation of edges and shapes is compared against the network's knowledge of objects.
As an example, if the network is provided with an image of a cat, features like the shape of the head, the whiskers, the eyes, and the fur will be used to recognize the subject as a cat, by comparing it to already known images of cats.

When doing object detection with deep learning techniques, one of two approaches is typically used: either a custom object detection system is created from scratch, or a pre-trained object detector is used and tweaked slightly for the user's needs. The advantage of creating a custom object detection network is that, compared to pre-trained object detectors, it is typically more accurate, resulting in better detection and classification. The drawback is that a very large labeled dataset is necessary to train the custom network, and the process of labeling the data, as well as manually selecting parameters for the deep neural network, can be very time intensive. In contrast, a pre-trained deep neural network can be used, an instance of transfer learning. Pretrained networks come with a considerable advantage: the architecture, as well as most of the weights, has already been selected and trained. This dramatically reduces the time needed to build an object detection system, but performance can be weaker compared to a fully customized deep neural network.

There are two types of networks used to build object detection systems: Two-Stage networks and Single-Stage networks. R-CNN and its variations are two-stage object detection networks. Two-Stage networks operate by first detecting region proposals, sections of the image that can potentially hold an object; this is the first stage of the network, while the second stage is responsible for classifying the objects found within the proposed regions. The advantage of Two-Stage networks is that they are typically much more accurate than Single-Stage networks; however, they are also noticeably slower.

In contrast, Single-Stage networks make their predictions for the entire image, rather than splitting the image up into discrete sub-regions. Single-Stage networks tend to utilize versions of the You Only Look Once (YOLO) algorithm, and they use tools called anchor boxes to accomplish this: predefined reference boxes tiled over the image, relative to which bounding boxes are predicted and then compared against the ground-truth boxes for each object (a minimal sketch of this box comparison appears at the end of this article). The advantage of Single-Stage networks is that they are typically much faster than Two-Stage networks, but they are often less accurate and can have trouble detecting small objects in particular.

It's also possible to do object detection using standard machine learning algorithms instead of deep neural networks. Some of the most commonly used machine learning algorithms for object detection include Aggregate Channel Features, the Viola-Jones algorithm (primarily used for detecting humans, specifically the upper body and face), and SVM classification using histograms of oriented gradient (HOG) features. Much like with deep learning object detection frameworks, you'll begin with one of two methods: creating a customized object detector or using a pre-trained object detector.
While deep learning object detection frameworks offer support for automatic feature selection, with a plain machine learning object detection framework you'll need to select the identifying features yourself. Deep learning object detection systems typically perform better than regular machine learning approaches; however, if processing power is a substantial limiting factor, or if you don't have many labeled training images to work with, you may wish to use a machine learning approach instead.
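The comparison between predicted and ground-truth boxes mentioned above is almost always scored with Intersection over Union (IoU). A minimal, self-contained sketch:

```python
def iou(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix1, iy1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)  # Overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```

A prediction typically counts as correct only if its IoU with a ground-truth box exceeds some threshold (0.5 is a common choice in benchmarks).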
https://www.imageannotation.ai/blog/what-is-object-detection-in-deep-learning
Due to the heterogeneous nature of handwritten text, it is much more difficult to recognise automatically than printed text (roughly two orders of magnitude difference in error rate; compare Table 3 with Table 5 in the cited works). Recent successes in handwriting recognition can be attributed to developments in deep neural networks. However, due to large computational costs, such systems are usually limited to recognising characters, words, and lines. We propose a full page offline handwriting recognition framework that is less computationally expensive than existing frameworks.

II. Related work

II-A. Text localisation

Text localisation is an essential component of document layout analysis, and accurate text localisation is crucial for handwriting recognition. Handcrafted features that utilise blob detection, clustering, edge detection, and histogram projections dominate the traditional techniques. More recently, data-driven techniques are becoming more prominent with the growth of neural networks. Such techniques can be categorised by the way the position of the text is defined: lines, bounding boxes, or areas containing "text pixels". In this paper, we predict bounding boxes around the text using deep learning techniques for object detection. Given an image that contains multiple objects, object detection identifies bounding boxes that encompass the objects, along with the confidence of the class of each object. In this work, the Single Shot MultiBox Detector (SSD) framework was applied to text localisation.

II-B. Text recognition

Handwritten digit recognition with the MNIST dataset was among the first work in deep learning. However, the learning problem was limited to images of single-digit characters. Significant advances in handwritten text recognition were realised by the description of multidimensional recurrent neural networks (MD-RNN) in Graves et al. and the Connectionist Temporal Classification (CTC) loss. A number of advances based on the MD-RNN were reported, including attribute embeddings, dropout, and Tucker decomposition. Recent work by Puigcerver suggests that the multidimensional aspects of the MD-RNN can be replaced by feeding image features (from a CNN) into a one-dimensional LSTM, significantly reducing the memory requirements of such systems. The described methods are limited to single words or single lines of handwritten text. Bluche et al. [12, 13] described an end-to-end system that uses an MD-RNN along with an LSTM to encode multiple lines of text. Although the described system shows promise for automatically recognising multiple lines, it may not be a practical solution, as it requires a large amount of computational power. Wigington et al. utilised a region proposal network to find the starting positions of text lines, and a line-follower network was trained to trace each line of text. This was followed by a CNN-LSTM approach to recognise the characters.

II-C. Approach overview

Previous works showed that the MD-RNN requires a large amount of computational power when used to recognise multiple lines of handwritten text. A less computationally expensive alternative can be realised if recognition of multiple lines of handwriting is not performed directly. To achieve this, our framework comprises two major components: text localisation and text recognition. Text localisation identifies the positions of handwritten text given an image of the full page.
Once a passage of handwritten text is identified, segmentation is conducted to locate each line of text. Text recognition refers to converting an image of a line of handwritten text into a string of the corresponding characters and denoising the string with a language model. By limiting handwriting recognition to single lines, the computational costs associated with this framework can be dramatically reduced compared to previous works that utilise the MD-RNN.

III. Methods

Rather than designing an end-to-end network, we took a modular approach consistent with components described in the literature. This principle allows components of the framework to be easily replaced and tested against different ones. An overview of our system is provided in Figure 1.

III-A. Text localisation

The purpose of text localisation is to identify bounding boxes for each line of text given an image containing both printed and handwritten text. The procedure consists of two stages: passage identification and line segmentation.

III-A1. Passage identification

The goal of passage identification is to predict the location of the handwritten passage (a bounding box given by its x, y coordinates, width, and height as percentages of the page size). To simplify this step, we assume that there is one passage of printed text and one passage of handwritten text (using the IAM dataset; see Section IV-A for more details). This was achieved by extracting image features from a pre-trained, truncated 34-layer residual network (ResNet34) trained on ImageNet. In the ResNet34, the weights of the first convolutional layer were averaged into one channel to accommodate greyscale images. The features were then fed into three fully connected layers: two layers with 64 units and a ReLU activation, and one layer with 4 units and a sigmoid activation. The four sigmoid units correspond to the x, y coordinates, width, and height of the bounding box in percentages. The network was trained to minimise the mean squared error.

III-A2. Line segmentation

Given an image containing only handwritten text, this component predicts bounding boxes surrounding each line of text. We modelled this as an object detection problem: detect words, then use a clustering algorithm to combine words into lines. A two-stage approach was taken because early experiments showed that the network was prone to missing objects when identifying handwritten text; by detecting individual words, the chance of the network missing an entire line of words was reduced. In our implementation, the SSD architecture was used to predict bounding boxes relative to anchor points and to predict the probability that each bounding box encompasses a word. The downsampler consists of two convolutional layers, a batch normalisation layer, and a ReLU activation function. The class and bounding box predictor consists of a single convolutional layer with 6 output channels (4 positional + 2 for classes). To adapt the SSD to our requirements, image features were extracted with a network similar to the one described in Section III-A1 (ResNet34). Furthermore, the anchor boxes were adapted to resemble words (only squares and rectangles with widths greater than their heights). The SSD was trained to minimise the cross-entropy loss for the class (handwriting or not handwriting) and the L1 loss for the bounding box. Non-maximum suppression was performed to filter out overlapping bounding boxes.
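The non-maximum suppression step at the end of the line-segmentation stage can be sketched as follows. This is a generic NMS implementation rather than the authors' code, assuming boxes as NumPy (x1, y1, x2, y2) rows with one confidence score each; the 0.5 IoU threshold is an illustrative assumption.

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """boxes: (N, 4) array of (x1, y1, x2, y2); scores: (N,) confidences."""
    order = np.argsort(scores)[::-1]  # highest-confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the kept box with every remaining box.
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        # Drop boxes that overlap the kept box too much.
        order = order[1:][iou <= iou_threshold]
    return keep
```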
After the bounding boxes of words were detected, a greedy algorithm was used to cluster the words into line proposals based on their overlap in the y-direction (see Algorithm 1). The following heuristics were used to evaluate the line proposals:

- Lines must have a minimum area
- Lines that exceed the boundaries of the page are removed
- Lines (excluding the last line) that are substantially shorter than the median width of the lines are removed
- Lines that are much taller than the median height are split into 2 lines (this accounts for double lines)
- Lines whose starting positions significantly deviate from the other lines are removed
- Lines that greatly overlap with other lines are removed

Lines that are not eliminated by the heuristics are used as the output of the text localisation stage.

III-B. Text recognition

Text recognition takes images containing single lines of handwritten text and recognises the corresponding characters. Our approach performs handwriting recognition and then denoises the output with a language model.

III-B1. Handwriting recognition

Following a similar scheme to prior work, we implemented a CNN-biLSTM network. It makes use of a multi-scale CNN for image feature extraction; the features are then fed into a bidirectional LSTM. The network was trained to optimise the CTC loss (shown in Figure 2). Intuitively, the CNN generates image features that are spatially aligned with the input image. The image features are then sliced along the direction of the text to generate a fixed number of "timesteps" and sequentially fed into an LSTM. The CNN used to generate image features was identical to the residual network described in Section III-A1 (ResNet34, Figure 2-a). In order to account for varying sizes of the input image (e.g., lines that contain only one word compared to lines that contain seven words), multiple downsampled versions of the image features are provided (Figure 2-b, identical to the downsampler in the SSD used in Section III-A2). The image features and downsampled image features were each fed into separate biLSTMs. The outputs of the biLSTMs were concatenated along the time dimension and decoded into an N × C array, where N is the maximum length of the sequence and C is the number of unique characters (Figure 2-c). This array is fed into the language model denoiser.

III-B2. Language model denoiser

The output of the CNN-biLSTM needs to be transformed into the output string. As the output contains probabilities corresponding to each character of the sequence, a naive (greedy) solution is to take the maximum probability (argmax) of each of the slices and collapse the characters using the CTC collapsing function. A beam search approach can alleviate the shortcomings of greedy decoding by combining multiple decoding paths to generate candidate strings, and a language model can be included in the beam search decoding to weigh the proposals by their likelihood. However, the beam search approach required substantially more computational power; our early experiments revealed a large increase in computational time compared to the greedy solution. In this paper, a language denoiser network was developed instead. Given a noisy input string, the network denoises the string in a sequence-to-sequence configuration. A previous approach encodes the noisy input at the character level and decodes the clean output at the word level to ensure that the output is composed only of in-vocabulary words. This has proved relatively effective; however, it falls apart for out-of-vocabulary words like names and places.
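The greedy argmax decoding described above can be sketched in a few lines, assuming index 0 is the CTC blank; this is a generic illustration, not the paper's implementation, and the alphabet is an assumed example.

```python
import numpy as np

ALPHABET = "-abcdefghijklmnopqrstuvwxyz "  # assumed charset; index 0 = CTC blank

def ctc_greedy_decode(probs):
    """probs: (timesteps, num_characters) array of per-slice probabilities."""
    best = probs.argmax(axis=1)  # most probable character per timestep
    # CTC collapse: merge consecutive repeats, then drop the blank symbol.
    collapsed = [best[0]] + [c for prev, c in zip(best, best[1:]) if c != prev]
    return "".join(ALPHABET[c] for c in collapsed if c != 0)
```

Note that this greedy decoder commits to a single path per timestep, and, as the passage above observes, word-level decoding schemes additionally fail on out-of-vocabulary words such as names and places.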
To circumvent this issue, a character-to-character encoding/decoding scheme based on the Transformer architecture was used. The denoiser was trained on sentences from an external database of public domain novels. Characters are randomly inserted into and deleted from the sentences (with a uniform distribution). Also, characters are replaced with visually similar counterparts (e.g., 'd' can be replaced with 'c' and 'l') in an attempt to model the real noise distribution of the handwriting recognition model. The generated noisy sentences are used to predict their original counterparts. During inference, the output of the trained denoiser is fed into a beam search algorithm to generate candidate strings. We use the following heuristics to rank them:

- Pick the candidate strings with the highest proportion of in-vocabulary words.
- Pick the candidate strings with the lowest Levenshtein distance.
- Pick the candidate string with the lowest perplexity score using an off-the-shelf pre-trained language model.

IV. Experiment

IV-A. Evaluation

The system was evaluated with the IAM dataset. The IAM dataset contains 1539 pages of scanned documents. Each scanned document contains printed text, and 657 writers were asked to write the contents of the printed text in the space provided. The dataset was split into train and test data, where the test dataset includes the validation 1, validation 2, and test splits designated by the authors of the dataset. We evaluated the system both qualitatively, by visually inspecting the transcription of examples, and quantitatively, by computing the character error rate (CER). Furthermore, we conducted a comparative analysis of memory and timing. The CER was calculated with SCLITE, and the effects of the following components were evaluated: 1) no line heuristics, 2) no language model (argmax algorithm), 3) with beam search, and 4) with the denoiser described in Section III-B2. The predicted and actual text were aligned and the average CER was calculated line by line. Our method was compared to similar methods presented in [14, 13, 12].

IV-B. Training details

The networks were developed with Apache's MXNet deep learning framework. The networks for each component (passage identification, line segmentation, handwriting recognition, and language model denoising) were trained separately, and the Adam optimiser was used for all the networks. Data augmentation including random translation, shearing, and occlusion was performed. However, many typical data augmentations are not applicable to this application (e.g., flipping and random cropping); in the word/line object recognition component, lines or words were instead randomly blanked out. Details of the implementation can be seen here (https://github.com/awslabs/handwritten-text-recognition-for-apache-mxnet).

V. Results

Figure 3 shows actual results of paragraph segmentation and word-to-line segmentation. We can observe that the paragraph segmentation algorithm mostly predicts the bounding boxes of the handwriting component successfully; however, the third column presents a failure case where the last line is not encompassed by the predicted bounding box. Given an image containing only handwritten text, the word detection algorithm can detect tight bounding boxes for each word. However, as mentioned in Section III-A2, several short words (typically words of 3 characters or fewer) are not detected. Despite the missing words, we can observe in Figure 3-c that all the lines were successfully detected.
Figure 4 presents selected examples showing differences between the language model components of the described system. First, we can observe that the greedy algorithm ([AM]) performs reasonably well and the beam search ([BS]) algorithm does not dramatically improve the results. In a), we can see that the word "noused" was converted into "roused", which may be based on the preceding word "head" and the visual similarity of 'n' and 'r'. In b), the handwriting looks like "beclared" but the denoiser replaced 'b' with 'd' based on the learnt language model. In c), the 't' in "desterted" was deleted, also based on language modelling. In d), none of the algorithms succeeded in correcting the sentences, and the denoiser worsened the CER. The CER presented in Table II suggests that the line heuristics dramatically improved handwriting recognition. Qualitative evaluation of the results suggests that the line heuristics algorithm eliminated incorrectly identified lines that caused large disparities when aligning the predicted and correct text. The denoiser achieved a 1.4 CER decrease compared to the greedy argmax algorithm and the beam search algorithm. When compared to previous works on recognising cropped images (i.e., feeding a cropped image containing only the handwritten portion rather than the full page with printed and handwritten text, as indicated by Seg. in Table II), our method outperforms Bluche. However, the methods described in Bluche and Wigington had lower CER than our method. Table IV presents the memory and timing requirements of our method compared to existing methods. When comparing the mean time taken to process an image, our method requires substantially less time than the compared methods. Our method also utilises substantially less memory (unfortunately, the memory requirements for some of the compared methods could not be attained). Since our memory usage is substantially smaller, it is possible to run multiple images at the same time, effectively reducing the time required by a third.

VI. Conclusion

In this paper, we presented a full page offline handwritten text recognition framework. The framework consists of a pipeline where handwritten text is localised (text localisation), followed by converting images of words into strings (text recognition). Our method achieved a CER of 8.50. The main advantage of the introduced framework is its reduced computational cost compared to existing methods: for a modest tradeoff in CER, the throughput can be increased substantially while using a similar amount of memory. In conclusion, the framework we presented is a computationally cheap alternative for performing full page offline handwritten text recognition. The results in this paper demonstrate the potential of this framework, and future work can investigate different components of the pipeline for improved results.

Acknowledgement

Thank you Simon Corston-Oliver, Vishaal Kapoor, Sergey Sokolov, Soji Adeshina, Martin Klissarov, and Thom Lane for your helpful feedback on this project.

References

- M. Yousef, K. F. Hussain, and U. S. Mohammed, "Accurate, data-efficient, unconstrained text recognition with convolutional neural networks," 2018.
- G. Renton, Y. Soullard, C. Chatelain, S. Adam, C. Kermorvant, and T. Paquet, "Fully convolutional network with dilated convolutions for handwritten text line segmentation," International Journal on Document Analysis and Recognition (IJDAR), pp. 1–10, 2018.
- T. Grüning, R. Labahn, M. Diem, F. Kleber, and S.
Fiel, “Read-bad: A new dataset and evaluation scheme for baseline detection in archival documents,” in 2018 13th IAPR International Workshop on Document Analysis Systems (DAS). IEEE, 2018, pp. 351–356. - B. Moysset, C. Kermorvant, C. Wolf, and J. Louradour, “Paragraph text segmentation into lines with recurrent neural networks,” in Document Analysis and Recognition (ICDAR), 2015 13th International Conference on. IEEE, 2015, pp. 456–460. - W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “Ssd: Single shot multibox detector,” in European conference on computer vision. Springer, 2016, pp. 21–37. - Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. E. Hubbard, and L. D. Jackel, “Handwritten digit recognition with a back-propagation network,” in Advances in neural information processing systems, 1990, pp. 396–404. - A. Graves and J. Schmidhuber, “Offline handwriting recognition with multidimensional recurrent neural networks,” in Advances in neural information processing systems, 2009, pp. 545–552. - J. I. Toledo, S. Dey, A. Fornés, and J. Lladós, “Handwriting recognition by attribute embedding and recurrent neural networks,” in Document Analysis and Recognition (ICDAR), 2017 14th IAPR International Conference on, vol. 1. IEEE, 2017, pp. 1038–1043. - V. Pham, T. Bluche, C. Kermorvant, and J. Louradour, “Dropout improves recurrent neural networks for handwriting recognition,” in Frontiers in Handwriting Recognition (ICFHR), 2014 14th International Conference on. IEEE, 2014, pp. 285–290. - H. Ding, K. Chen, Y. Yuan, M. Cai, L. Sun, S. Liang, and Q. Huo, “A compact cnn-dblstm based character model for offline handwriting recognition with tucker decomposition,” in Document Analysis and Recognition (ICDAR), 2017 14th IAPR International Conference on, vol. 1. IEEE, 2017, pp. 507–512. - J. Puigcerver, “Are multidimensional recurrent layers really necessary for handwritten text recognition?” in Document Analysis and Recognition (ICDAR), 2017 14th IAPR International Conference on, vol. 1. IEEE, 2017, pp. 67–72. - T. Bluche, “Joint line segmentation and transcription for end-to-end handwritten paragraph recognition,” in Advances in Neural Information Processing Systems, 2016, pp. 838–846. - T. Bluche, J. Louradour, and R. Messina, “Scan, attend and read: End-to-end handwritten paragraph recognition with mdlstm attention,” in Document Analysis and Recognition (ICDAR), 2017 14th IAPR International Conference on, vol. 1. IEEE, 2017, pp. 1050–1055. - C. Wigington, C. Tensmeyer, B. Davis, W. Barrett, B. Price, and S. Cohen, “Start, follow, read: End-to-end full-page handwriting recognition,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 367–383. - U.-V. Marti and H. Bunke, “The iam-database: an english sentence database for offline handwriting recognition,” International Journal on Document Analysis and Recognition, vol. 5, no. 1, pp. 39–46, 2002. - K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778. - S. Ghosh and P. O. Kristensson, “Neural networks for text correction and completion in keyboard decoding,” arXiv preprint arXiv:1709.06429, 2017. - A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in Neural Information Processing Systems, 2017, pp. 5998–6008. 
- "Public domain novels," http://www.textfiles.com/etext/fiction/.
- J. Fiscus, "Sclite scoring package version 1.5," US National Institute of Standards and Technology (NIST), URL http://www.itl.nist.gov/iaui/894.01/tools, 1998.
- T. Chen, M. Li, Y. Li, M. Lin, N. Wang, M. Wang, T. Xiao, B. Xu, C. Zhang, and Z. Zhang, "Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems," arXiv preprint arXiv:1512.01274, 2015.
- D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
https://deepai.org/publication/a-computationally-efficient-pipeline-approach-to-full-page-offline-handwritten-text-recognition
I got my first exposure to LIDAR (Light Detection and Ranging) at a tradeshow, where one of the manufacturer demos was a glorious false-color point cloud generated from high-accuracy LIDAR measurements of the ornate front face of an ancient cathedral. LIDAR is an extremely useful tool for many types of surveying, including (as it turns out) mapping post-disaster environments and supplying valuable information about conditions on the ground to those responsible for disaster response. As explained in Susan Parks' article in Imaging Notes, "Disaster Management using LiDAR", LIDAR enables very accurate surveys of large areas and takes less time than traditional survey methods. If you happen to have before and after LIDAR surveys, you can generate maps showing exactly what's changed. For instance, University of California, Davis researchers used pre- and post-earthquake LIDAR surveys to visualize a northern Mexico earthquake zone and understand how a series of small faults resulted in a major earthquake. Or, you can read the report (PDF) issued by New Zealand's National Institute of Water and Atmospheric Research (NIWA) on the effects of both the September 2010 and February 2011 earthquakes on the Avon-Heathcote estuary in Canterbury, New Zealand. In this case, the researchers used aerial photography, LIDAR, and RTK GPS survey data to figure out how the estuary had changed. Or, for a slightly less academic discussion (and prettier pictures), you can read the USGS article "Start with Science: Hurricane Isaac, Weathering the Storm & Understanding Isaac's Impacts", which talks about the various tools, including LIDAR, used to assess how Hurricane Isaac affected the Gulf Coast states. This is pretty nifty, right? Sure, you end up with very large data sets, but those very large data sets give you lots of meaty information about what's moved and how much. So far, though, these stories have involved either airborne LIDAR or LIDAR on a fixed platform. What happens when you attach LIDAR (and a bunch of other sensors) to a person who is trying to map an area? That's the current problem before the MIT researchers discussed in Jack Clark's ZDNet article, "MIT employs Kinect and lasers in real-time mapping gear for firefighters". I'd suggest taking the time to read their research paper linked here (PDF). For this particular application, the LIDAR data is combined with information from a stripped-down Kinect RGB-D camera and inertial and barometric sensor data to create a map of an indoor environment. The technique they're adapting, Simultaneous Localization and Mapping (SLAM), is used by autonomous robots to map their environments and to track their location within the mapped environment. This is tricky to do when the sensor package is on a flat surface and moving in a controlled manner; it's even harder to get the mapping to work if the sensor package is attached to someone climbing over a pile of rubble or a sloping floor, or if that someone moves to different floors. Did I mention that it's also providing the data in real time? In the consumer device arena, a great deal of work has gone into fusing together data from a smartphone's sensors to more accurately track user movement and provide improved pedestrian navigation when GPS signals are lost. This application is far more challenging (let's take something hard and make it harder!), but it promises to be a powerful tool for first responders, allowing them to understand and navigate complex environments.
https://www.fierceelectronics.com/components/making-sense-post-disaster-environments
Our goal is to apply cutting edge unmanned systems to hazardous environments in all domains - Air, Land, and Sea. We enable remote, real-time detection and mapping using a wide variety of unmanned systems suited to the environment. Our consulting services provide engineering, business development, and sensor integration.

Air, Land & Sea
BlueWave Robotics unmanned systems solutions provide remote, real-time sensing for dangerous applications where human lives cannot be risked.

Sensor Integration Solutions
Our consulting services provide assistance and networking solutions for numerous clients across various industries. We believe that autonomous tools can and should be deployed often, because 'some jobs were meant for robots'.
http://www.bluewaverobotics.com/
It is a well-known fact that the different types of sensors used in vehicles today lack the capability to independently enable L3 to L5 automation. The fusion of multiple sensors for different use cases is the next step toward a reliable, low-cost solution for an entirely autonomous driving experience. This blog gives an overview of the advantages and shortcomings of various sensors and how sensor fusion can effectively complement each sensor to enable L3-L5 autonomous driving capabilities.

Automotive sensors and their comparison

Even after considering future technological advancements in the sensor domain, there is a considerable gap in the capability of any individual sensor to meet the technical challenges of Level 3 - Level 5 autonomous vehicles. Camera, lidar, radar and ultrasonic sensors contribute a major part of this environment perception. Firstly, let's look at some features of these sensors and how they stack up against each other:

| Features | Camera | Radar | LiDAR | Ultrasonic |
| --- | --- | --- | --- | --- |
| Object detection range | High | Very High | High | Low |
| Depth resolution | Low to Nil | High | High | Medium |
| Angular positional resolution | High | Medium | High | Low |
| Field of view | High | Very High | Very High | Low |

Top Sensor Fusion Applications

Currently, most ADAS applications are solved by single-sensor solutions; generally, there is no requirement for heterogeneous sensor fusion in L1-L2 applications. But to meet the criteria of autonomous cars, it is almost impossible to achieve the automotive level of requirements with a single-sensor solution: a complete understanding of the surroundings from standalone sensors will not suffice. For an autonomous car to comprehend its surroundings and make the necessary safety-critical decisions, it is imperative to use multiple sensor data, some applications of which are listed below:

| Application | Description | Optimal Sensor Fusion Case |
| --- | --- | --- |
| Adaptive Cruise Control (ACC) | Requires a very long range with a small FoV | Camera + Lidar + Radar |
| Blind Spot Detection (BSD) | Medium range of 40 m with velocity estimation | Camera + Radar |
| Lane Change Assist (LCA) | Medium range of 40 m with velocity estimation | Camera + Radar |
| Automatic Emergency Braking (AEB) | 3D positional accuracy is highly required | Camera + Radar |
| Automated Parking | Surround view with precise 3D object detection | Camera + Radar |

Figure 1: Various sensors & their applications

Sensor Fusion Approaches

Sensor fusion is required to achieve a reliable and consistent perception of the environment, and it is important to stress that the heterogeneous sensor data needs to be utilized for both its multimodality and its redundancy. The market for sensor fusion has consolidated around three different approaches:

- High-level abstracted fusion: This is one of the simplest forms of sensor fusion, wherein individual sensors, each with their own algorithm pipeline, provide their input, which is then merged to produce the final picture (see the sketch after this list).
- Detection points fusion: The detection points from the different sensors are merged to provide a deeper fusion of the multimodal sensor data. Multiple features can be extracted from the same object as seen by different sensors.
- Raw data fusion: This approach operates at the deepest abstraction level, aiming to utilize all the available sensor data. Since the data to be processed is huge, high-end deep learning networks are required to extract anything useful for an L5 autonomous decision.
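A toy sketch of the high-level (object-list) fusion approach follows, assuming each sensor already outputs objects with a 2-D position in a shared vehicle frame and a confidence in [0, 1]; the 2 m gating distance and the confidence-weighted averaging are illustrative assumptions, not an industry-standard algorithm.

```python
import numpy as np

def fuse_object_lists(camera_objs, radar_objs, gate=2.0):
    """Each obj: dict with 'pos' (np.array of x, y) and 'conf' in [0, 1]."""
    fused, unmatched_radar = [], list(radar_objs)
    for cam in camera_objs:
        # Find the nearest radar object within the gating distance.
        dists = [np.linalg.norm(cam["pos"] - r["pos"]) for r in unmatched_radar]
        if dists and min(dists) < gate:
            r = unmatched_radar.pop(int(np.argmin(dists)))
            w = cam["conf"] + r["conf"]
            fused.append({"pos": (cam["conf"] * cam["pos"]
                                  + r["conf"] * r["pos"]) / w,
                          "conf": max(cam["conf"], r["conf"])})
        else:
            fused.append(cam)          # camera-only object
    fused.extend(unmatched_radar)      # radar-only objects
    return fused
```

Detection-point and raw-data fusion push this merge earlier in the pipeline, before per-sensor object lists exist.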
Challenges in Sensor Fusion

Even though the heterogeneous sensor approach can complement the limitations of each sensor and is capable of providing the complete environment sensing required for autonomous vehicles, the technology behind the final application faces many challenges and has yet to make major strides in performance improvement. The major technical issues faced in heterogeneous sensor fusion at all levels can be grouped into three categories. Most applications face some or all of these issues, and solving them requires a proper solution:

- Spatial & temporal alignment
- Data uncertainty handling
- Sensor resolution and parameter differences

Sensor positioning and the corresponding live calibration is a challenge; live calibration is a complex problem even in a homogeneous sensor environment. Given the different units of measurement and fields of view, a heterogeneous sensor setup depends on accurate spatial alignment of the sensors. The frame rates of the sensors also vary across a wide spectrum, and the temporal point at which data are fused is always a tradeoff between response time and implementation complexity (see the timestamp-matching sketch below).

The dependability of each sensor differs per use case, and this dependability, or confidence factor, also changes with the environment: camera data cannot be given the same confidence in bright, sunny light as in night conditions, and the same goes for radar in open, clear environments versus in tunnels. Along with dependability, the availability of sensor data also has to be handled: a fused decision-making process cannot assume that data from every sensor will be available at every frame, or that it will have a high confidence level in each of those frames. The algorithms should be able to handle differences in the reliability of sensor data, noisy sensor data, missing and inconsistent data, accuracy and precision losses, etc.

Apart from this, heterogeneous sensor data differ in many ways, such as depth resolution, angular resolution, positional accuracy, data format, outward alignment, and field of view. The point cloud densities are also very different. This requires the perception algorithm to handle each sensor's data differently: a classifier that can handle high-resolution camera data will not perform the same on high-resolution radar data, due to the different representations of the sensor data itself. Even though this multimodality is the major advantage of fusion, bringing the multimodal data sets together for efficient environment perception is a challenge in itself.

Conclusion

It is established that multi-sensor fusion is mandatory to meet the stringent requirements of the use cases for Level 3 and above autonomous cars. The limitations of current non-fusion approaches in meeting Level 3 and above requirements were discussed in detail with two sample use cases, and the architectural approaches required for each level of autonomy were presented along with their associated challenges. It is now a race against time to build practical solutions using sensor fusion to rapidly enable L3-L5 autonomy and realize the elusive future of autonomous driving.
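As a concrete footnote to the temporal-alignment challenge above, here is a minimal sketch of pairing each camera frame with the nearest radar frame in time; the 50 ms tolerance is an illustrative assumption, and real systems often interpolate or predict rather than simply matching.

```python
import bisect

def nearest_radar_frame(camera_ts, radar_timestamps, tolerance=0.050):
    """radar_timestamps must be sorted; all times in seconds."""
    i = bisect.bisect_left(radar_timestamps, camera_ts)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(radar_timestamps)]
    best = min(candidates, key=lambda j: abs(radar_timestamps[j] - camera_ts))
    if abs(radar_timestamps[best] - camera_ts) > tolerance:
        return None  # no radar frame close enough; fusion must handle the gap
    return best
```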
https://www.pathpartnertech.com/sensor-fusion-for-l3-and-beyond/
Highly accurate localization of moving vehicles in urban environments is crucial for emerging autonomous driving. Current approaches integrate global positioning system (GPS), inertial measurement unit (IMU), wheel tick odometry, high-resolution panoramic imagery and light detection and ranging (LIDAR) data acquired by an instrumented vehicle to generate high-resolution environment maps that are used for localization. Most of these sensors are not yet part of the standard equipment of modern cars, so gathering the data for these maps still requires expensive special vehicles; as a result, the process of gathering data for map generation is expensive and time consuming. Since radar sensors are already available in most modern cars, in this master thesis project we design a method/algorithm that uses the radar sensor to create road maps that can be used for localization. The proposed project has the goal of developing a prototype demonstrating the feasibility of creating a radar map that can be used for localization, based on the output (peak lists) from off-the-shelf radar sensors.

Assignment

The list underneath provides an overview of the research questions to be answered in this project:

- Are radar measurements of the same sensor under similar circumstances repeatable?
- Are radar measurements from different radar sensors repeatable?
- Can the aggregated radar measurements be correlated with TomTom's LIDAR point cloud data?

Answers to the above questions should provide a sufficient basis to draw conclusions about the feasibility of creating a radar map layer from LIDAR data.
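As a toy sketch of the third research question, one simple feasibility check is to rasterise the aggregated radar detections and the LIDAR point cloud into 2-D occupancy grids and compute a normalised correlation between them. The grid size, cell resolution, and metric here are assumptions for illustration; the thesis itself does not specify an implementation.

```python
import numpy as np

def to_grid(points_xy, size=100, res=0.5):
    """Rasterise (N, 2) local-frame points into a size x size occupancy grid."""
    grid = np.zeros((size, size))
    idx = (points_xy / res).astype(int)
    ok = (idx >= 0).all(axis=1) & (idx < size).all(axis=1)
    grid[idx[ok, 0], idx[ok, 1]] = 1.0
    return grid

def grid_correlation(radar_xy, lidar_xy):
    """Normalised cross-correlation between the two occupancy grids."""
    a, b = to_grid(radar_xy), to_grid(lidar_xy)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0
```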
http://microelectronics.tudelft.nl/Education/thesisdetails.php?ti=144
CE's Feng leads testing of new autonomous car

The School of Civil Engineering has taken delivery of a new SAE Level 4 autonomous vehicle that will serve as a research platform. Yiheng Feng, assistant professor of civil engineering, will lead testing of the car, including developing and testing smart transportation applications such as autonomous driving, cooperative perception, smart intersection management and cybersecurity. "This vehicle platform can be used to develop research prototypes and bridge gaps between theoretical models and real-world implementation. It also can be used as an education platform for undergraduate and graduate students to learn emerging technologies in smart transportation areas. It could also be an asset for the College's Autonomous and Connected Systems (ACS) initiative, which has great potentials in enhancing multidisciplinary faculty collaboration and attracting external funding," Feng said. The car, a Ford Fusion hybrid, was enhanced by DATASPEED Inc. The company installed the by-wire control system "so that the vehicle can be controlled by computer programs instead of human drivers," Feng said, as well as a power distribution system and all of the sensors. The process took three months. Feng worked with different units in his department, the College, and the university (risk management, parking, transportation, and Procurement Services) to obtain the vehicle. The vehicle is equipped with various onboard sensors, including one 64-layer LiDAR (light detection and ranging) device, one RGB front-view camera, one long-range radar for perception, one IMU and high-resolution GNSS system with RTK for localization, and a Cohda MK5 OBU for V2X communication. An onboard industrial PC (IPC) controls the vehicle using the Drive-By-Wire system provided by DATASPEED. The IPC's computational power is provided by an Nvidia GPU card and an Intel Xeon E processor, which is sufficient for real-time sensor data processing, path planning, and control of the vehicle. Currently, Feng's graduate students and postdoctoral researcher Hanlin (Julia) Chen are working with the vehicle. "Later, I plan to introduce new topics related to the autonomous vehicle in my courses such as CE361 and CE565 so that other graduate and undergraduate students will have chances to work with the vehicle and get firsthand experience."
https://engineering.purdue.edu/CE/AboutUs/News/Homepage_Features/2022-0722-ce-feng
The benefits of AVs can only be meaningful when deployed at scale. As Cruise moves from R&D into early commercialization, the approach to system architecture has evolved to provide a more capable system at a cost point that enables rapid scaling. We will discuss this progression, some of the enabling technologies and paradigms, and what we anticipate for the future.

Hear from: Shane McGuire, Principal Systems Engineer, Systems Architecture, Cruise

Exterior automotive imaging applications are quickly evolving, and to meet customer requirements image sensor manufacturers are being forced to develop new technologies. Many of these new technologies are necessary for both human and machine vision applications. Exterior cameras are used for rear view, surround view, e-mirror, digital video recording, ADAS and AD applications. In this paper we will discuss the requirements and challenges associated with developing these new technologies. Unfortunately these requirements are often conflicting, forcing image sensor manufacturers to make tradeoffs based on cost, size and time to market. Specifically, we will discuss high dynamic range image capture, LED flicker mitigation, low light sensitivity, high temperature operation and cyber security.

Hear from: Boyd Fowler, CTO, OMNIVISION

Hear from industry analysts, observers and those working directly in the automotive sector as they explore the future of the supply chain and whether there will be consolidation in specific areas (e.g. SoCs, AD software suppliers, lidar suppliers, etc.).

Hear from: Rudy Burger, Managing Partner, Woodside Capital Partners; Juergen Hoellisch, Founder, Hoellisch Consulting GmbH; Abhay Rai, Senior Vice President, indie Semiconductor; Chris Van Dan Elzen, EVP, Radar Product Area, Veoneer; Liang Downey, Digital Advisor, Energy, Mobility and Sustainability Customer Transformation, Microsoft Industry Solutions, Chair, IEEE USA Women in Engineering

LiDAR remains one of the most critical sensors enabling autonomous driving. And while most agree on the criticality of this sensor, confusion remains regarding what performance is needed to address different use cases and enable different levels of autonomy. Warren Smith, who helped develop the perception teams at Uber ATG and Aurora Innovation, will discuss LiDAR requirements from the point of view of a perception engineer: what key data is needed from the sensor, how that data is used by perception to address difficult edge cases, how this boils down to lidar specifications, and how lidar manufacturers can use this information to enable L4-L5 autonomous vehicles.

Hear from: Warren Smith, Director of Perception, Insight LiDAR

This talk compares the characteristics of digital code modulation (DCM) radar to the traditional analog modulated radars used today, such as Frequency Modulated Continuous Wave (FMCW) radars. The speaker will explain how these radar systems operate, including the transmission, reception, and the associated signal processing employed to determine the distance, velocity, and angle of objects in the environment. By comparing these two radar systems, familiarity with digital radar is enhanced and the potential advantages of digital radar are better appreciated. The speaker will introduce two benchmarks of merit: 1) High Contrast Resolution (HCR), which is critical to resolving small objects next to large objects (e.g., a child in front of a truck), and 2) Interference Susceptibility Factor (ISF), which characterizes a radar's resilience to self-interference and cross-interference.
These benchmarks are essential to understanding the value of radar in use cases that are crucial to achieving increased safety for vehicle automation and autonomy.

Hear from: Dr. Arunesh Roy, Senior Director Advanced Applications and Perception, Uhnder

It has become widely accepted that LiDAR sensors will be an indispensable part of the sensor suite that will enable vehicular autonomy in the future. However, sensor costs remain very high and prevent the ubiquitous adoption of LiDAR sensors. Bringing knowledge and expertise in cost engineering and design for manufacturing from the HDD space into the LiDAR space can accelerate the large-scale deployment of LiDAR sensors. In this talk, some of the key manufacturing technologies will be highlighted.

Hear from: Dr Zoran Jandric, Engineering Director, Seagate Technology

Regardless of radar type, simulation of the sensor is absolutely essential to reach the goals desired for ADAS, and especially levels 4-5 for AV. Some aspects of the required simulation are discussed, along with how to implement them correctly. Discussion points for radar simulation include:

- World material property measurement, including angle of incidence
- Advanced ray tracing
- Micro-Doppler, ghost targets and Doppler ambiguity
- Radar placement effects (bumper, grill, etc.)

Finally, a cutting-edge hardware-in-the-loop setup for radar is also presented.

Hear from: Tony Gioutsos, Director Portfolio Development Autonomous Americas, Siemens

Steering, ranging and detection are the core elements that simultaneously operate a LiDAR system. At Baraja, our LiDAR combines our patented Spectrum-Scan steering technology and a unique ranging technique, Random Modulation Continuous Wave (RMCW) paired with homodyne detection, to enable a high-performance Doppler LiDAR without any of the known issues found in other LiDAR designs. This novel combination of core technologies allows for no-compromise, unprecedented LiDAR performance, reliability and integrability that will enable a fully-autonomous future without the costly trade-offs of legacy technologies.

Hear from: Federico Collarte, Founder and CEO, Baraja

Honda and, more recently, Mercedes-Benz have made history by rolling out the first Level 3 vehicles on open roads. These achievements have been made possible notably thanks to one technology: LiDAR. To bring these features to scale, LiDAR technology is undergoing two concurrent transitions that will bring reliability and productization up to automotive industry standards, and deliver uncompromising performance compared to the pre-LiDAR status quo.

Hear from: Clement Nouvel, LiDAR Technical Director, Valeo

4D imaging radar has become a technology of choice for in-cabin safety and ADAS, favored for its high-resolution imaging, versatile field of view configurations and precise target data. But high cost, substantial hardware and extreme complexity have restricted deployment to premium models. In this thought-provoking session, we will discuss a crucial turning point for 4D imaging radar, which made it affordable and accessible to all vehicle models, supporting dozens of applications. The "Democratization of 4D Imaging Radar" is a presentation about making high-end safety available for all vehicle models.

Hear from: Dan Viza, Head of US Business Development, Vayyar

To make the technology available for volume model vehicles, the measurement capability and reliability of LiDAR must be ensured in cost-effective production at large quantities.
A core task in mass production is the assembly of optical, mechanical, and electronic components. The precise alignment of emitting and receiving electronics with projection or imaging objective lenses plays a decisive role here. Tolerances in all components of the sensors prevent the assembly of an optomechanical system by a straightforward mounting process. Alignment requires an automated process with inline feedback on sensor performance to ensure that the required optomechanical parameters are of high quality for each device and within tight tolerances for the entire production. The paper describes recent developments in the alignment procedures TRIOPTICS has created for various types of LiDAR systems used in the automotive industry, to ensure repeatable and reproducible quality under production requirements.

Hear from: Dirk Seebaum, Business Unit Manager, TRIOPTICS

For the longest time, radar applications deployed DSPs featuring fixed-point arithmetic, as floating-point operations were considered to be inferior in terms of performance, power efficiency and area (PPA), which is critical for any embedded system. Yet there has always been a desire to move to floating-point arithmetic, as it allows for the larger dynamic range required by the latest radar systems to achieve the necessary signal-to-noise ratio (SNR). This presentation will cover a detailed floating-point/fixed-point tradeoff analysis featuring radar use cases. It will also discuss the growing interest in AI-enhanced radar algorithms, and how these can be enabled using a vector DSP, either standalone or combined with a tightly coupled AI accelerator. Specific focus will be given to the programming flow, featuring support for TensorFlow, Caffe or ONNX.

Hear from: Markus Willems, Senior Product Manager, Synopsys

A discussion that will consider whether stringent OEM requirements can be met and whether it is possible to achieve functional safety and automotive-grade reliability whilst preserving modern vehicle design.

Hear from: Kevin Vander Putten, Director, Cepton; Amit Mehta, Head of Innovation, North American Lighting; Juergen Hoellisch, Founder, Hoellisch Consulting GmbH; Paula Jones, President, ibeo Automotive USA

Automotive radar has been around for decades, but over the past few years there has been a flurry of activity in new uses of radar in the car – from new applications to exotic antennas. This talk will introduce the audience to some new radar-based applications in vehicle localization, in-cabin health monitoring, and occupancy detection, as well as cover notable new approaches to classic automotive radar. We will discuss how they work, why they are useful and, in some cases, why it took so long for them to appear.

Hear from: Harvey Weinberg, Director of Sensor Technologies, Microtech Ventures

There are many ways to evaluate camera image quality using standardized equipment and metrics. However, after the results are tabulated, how do you assess which camera is most suitable for your specific application? In this presentation DXOMARK will introduce an example of an evaluation benchmark protocol for automotive camera image quality.

Hear from: Pierre-Yves Maitre, Senior Image Quality Engineer, DXOMARK

The largest cost of developing artificial intelligence-based automated driving solutions is collecting and labelling data for training and validation, regardless of autonomy level. Furthermore, data quality and diversity are also critical to enable truly robust and intelligent systems.
The use of synthetic and augmented data, coupled with automatically annotated real-world data, will be a game-changer for developing, testing and updating the next generation of automated driving software solutions. This talk will discuss state-of-the-art data generation and labelling methods, introduce an integrated, cost-efficient, data-driven pipeline, and use different hardware platforms at different stages, from training to in-vehicle integration.

Hear from: Dr. Peter Kovacs, Production SVP, aiMotive

By developing an end-to-end optical simulation pipeline including AI, we are able to determine the impact of optical parameters on learning-based approaches. We will show how to use this method to jointly determine post-processing image rectification and optical characteristics for optimized ADAS and autonomous driving applications. We will demonstrate that we can ease most of the image rectification process by directly obtaining an optimized image from a camera designed according to such optical characteristics.

Hear from: Patrice Roulet, Co-Founder, Immervision

Recently, non-RGB image sensors have gained traction in automotive applications. One driver is the demand to achieve smaller pixel sizes while keeping low-light SNR. We did a pros/cons study of the popular color filter arrays, such as RCCB, RYYCy, RCCG and RGB, including an analysis of the so-called yellow/red traffic signal differentiation issue. The other driver is the demand to use one camera for both machine vision and human vision purposes, especially in driver monitoring systems. RGB-Ir is under study for this application. In this presentation, we will present those color filter options and discuss which are useful for which applications.

Hear from: Dr. Eiichi Funatsu, VP of Technology, OMNIVISION

The commercialisation of autonomous systems, including autonomous cars, will require rigorous methods to certify artificial intelligence and make it safe. However, no solutions or standards exist today to guide OEM and Tier 1 companies through that challenge, which is why CS Group has invested two years of research to develop a process, based on avionics certification, that aims to make embedded artificial intelligence functionally safe.

Hear from:
https://auto-sens.com/events/detroit/agenda/jsf/jet-engine/meta/session-start-date:Wednesday%2011th%20May/
A Trip Through A 3D-Modeled Brain
Brains are, by design, incredibly dense. Whether a particular brain belongs to a human or a mouse, it features layer upon layer of matter that twists and turns and is almost incomprehensible in its complexity...

MIT and DARPA Pack Lidar Sensor Onto Single Chip
Light detection and ranging, or lidar, is a sensing technology based on laser light. It's similar to radar, but can have a higher resolution, since the wavelength of light is about 100,000 times smaller than radio wavelengths. For robots, this is very important: since radar cannot accurately image small features, a robot equipped with only a radar module would have a hard time grasping a complex object. At the moment, the primary applications of lidar are autonomous vehicles and robotics, but they also include terrain and ocean mapping and UAVs...
https://openhealthnews.com/tagged/3d-model?quicktabs_mot_popular_tabs=1
Identifier: http://hdl.handle.net/10397/92769
Title: New integrated navigation scheme for the level 4 autonomous vehicles in dense urban areas
Authors: Hsu, LT; Wen, W
Issue Date: 2020
Source: 2020 IEEE/ION Position, Location and Navigation Symposium (PLANS), 20-23 April 2020, Portland, OR, USA, p. 297-305
Abstract: Accurate and globally referenced positioning is vital to safety-critical autonomous driving vehicles (ADV). Multi-sensor integration is becoming ubiquitous for ADV to guarantee the robustness and accuracy of the navigation system. Unfortunately, existing sensor integration systems are still heavily challenged in urban canyons, such as those of Tokyo and Hong Kong. The main reason behind the performance degradation is varying environmental conditions, such as tall buildings and surrounding dynamic objects. The GNSS receiver is an indispensable sensor for ADV, and it relies heavily on environmental conditions: its performance can be significantly affected by signal reflections and blockages from buildings or dynamic objects. With the enhanced capability of perception, fully or partially sensing the environment in real time becomes possible using onboard sensors such as cameras or LiDAR. Inspired by the fascinating progress in perception, this paper proposes a new integrated navigation scheme, perception aided sensor integrated navigation (PASIN). Instead of directly integrating the sensor measurements from diverse sensors, PASIN leverages onboard, real-time perception to assist a single measurement, such as GNSS positioning, before it is integrated with other sensors, including inertial navigation systems (INS). This paper reviews several PASIN schemes, focusing on GNSS positioning. As an example, experiments in which GNSS is aided by the perception of a camera or LiDAR sensor are conducted in dense urban canyons to validate this novel sensor integration scheme. The proposed PASIN can also be extended to LiDAR- or visual-centered navigation systems in the future.
Keywords: Autonomous driving vehicle; Camera; GNSS; LiDAR; PASIN; Perception; Positioning; Urban canyon
Publisher: Institute of Electrical and Electronics Engineers
ISBN: 978-1-7281-0244-3 (electronic); 978-1-7281-9446-2 (print on demand)
DOI: 10.1109/PLANS46316.2020.9109962
Rights: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The following publication, Hsu, L. T., & Wen, W. (2020, April). New Integrated Navigation Scheme for the Level 4 Autonomous Vehicles in Dense Urban Areas. In 2020 IEEE/ION Position, Location and Navigation Symposium (PLANS) (pp. 297-305), is available at https://doi.org/10.1109/PLANS46316.2020.9109962
https://ira.lib.polyu.edu.hk/handle/10397/92769
# Lidar

Lidar (/ˈlaɪdɑːr/, also LIDAR, or LiDAR; sometimes LADAR) is a method for determining ranges (variable distance) by targeting an object or a surface with a laser and measuring the time for the reflected light to return to the receiver. It can also be used to make digital 3-D representations of areas on the Earth's surface and ocean bottom of the intertidal and near coastal zone by varying the wavelength of light. It has terrestrial, airborne, and mobile applications. Lidar is an acronym of "light detection and ranging" or "laser imaging, detection, and ranging". It is sometimes called 3-D laser scanning, a special combination of 3-D scanning and laser scanning. Lidar is commonly used to make high-resolution maps, with applications in surveying, geodesy, geomatics, archaeology, geography, geology, geomorphology, seismology, forestry, atmospheric physics, laser guidance, airborne laser swath mapping (ALSM), and laser altimetry. It is also used in control and navigation for some autonomous cars and for the helicopter Ingenuity on its record-setting flights over the terrain of Mars.

## History and etymology

Under the direction of Malcolm Stitch, the Hughes Aircraft Company introduced the first lidar-like system in 1961, shortly after the invention of the laser. Intended for satellite tracking, this system combined laser-focused imaging with the ability to calculate distances by measuring the time for a signal to return, using appropriate sensors and data acquisition electronics. It was originally called "Colidar", an acronym for "coherent light detecting and ranging", derived from the term "radar", itself an acronym for "radio detection and ranging". All laser rangefinders, laser altimeters and lidar units are derived from the early colidar systems. The first practical terrestrial application of a colidar system was the "Colidar Mark II", a large rifle-like laser rangefinder produced in 1963, which had a range of 7 miles and an accuracy of 15 feet, to be used for military targeting. The first mention of lidar as a stand-alone word in 1963 suggests it originated as a portmanteau of "light" and "radar": "Eventually the laser may provide an extremely sensitive detector of particular wavelengths from distant objects. Meanwhile, it is being used to study the moon by 'lidar' (light radar) ..." The name "photonic radar" is sometimes used to mean visible-spectrum range finding like lidar. Lidar's first applications were in meteorology, for which the National Center for Atmospheric Research used it to measure clouds and pollution. The general public became aware of the accuracy and usefulness of lidar systems in 1971 during the Apollo 15 mission, when astronauts used a laser altimeter to map the surface of the moon. Although the English language no longer treats "radar" as an acronym (i.e., it is uncapitalized), the word "lidar" was capitalized as "LIDAR" or "LiDAR" in some publications beginning in the 1980s. No consensus exists on capitalization; various publications refer to lidar as "LIDAR", "LiDAR", "LIDaR", or "Lidar". The USGS uses both "LIDAR" and "lidar", sometimes in the same document; the New York Times predominantly uses "lidar" for staff-written articles, although contributing news feeds such as Reuters may use "Lidar".

### General description

Lidar uses ultraviolet, visible, or near infrared light to image objects. It can target a wide range of materials, including non-metallic objects, rocks, rain, chemical compounds, aerosols, clouds and even single molecules.
A narrow laser beam can map physical features with very high resolutions; for example, an aircraft can map terrain at 30-centimetre (12 in) resolution or better. The essential concept of lidar was originated by EH Synge in 1930, who envisaged the use of powerful searchlights to probe the atmosphere. Indeed, lidar has since been used extensively for atmospheric research and meteorology. Lidar instruments fitted to aircraft and satellites carry out surveying and mapping – a recent example being the U.S. Geological Survey Experimental Advanced Airborne Research Lidar. NASA has identified lidar as a key technology for enabling autonomous precision safe landing of future robotic and crewed lunar-landing vehicles.

Wavelengths vary to suit the target: from about 10 micrometers (infrared) to approximately 250 nm (UV). Typically, light is reflected via backscattering, as opposed to the pure reflection one might find with a mirror. Different types of scattering are used for different lidar applications: most commonly Rayleigh scattering, Mie scattering, Raman scattering, and fluorescence. Suitable combinations of wavelengths can allow for remote mapping of atmospheric contents by identifying wavelength-dependent changes in the intensity of the returned signal. The name "photonic radar" is sometimes used to mean visible-spectrum range finding like lidar, although photonic radar more strictly refers to radio-frequency range finding using photonics components.

## Technology

### Mathematical formula

A lidar determines the distance of an object or a surface with the formula

d = ct / 2,

where c is the speed of light, d is the distance between the detector and the object or surface being detected, and t is the time taken for the laser light to travel to the object or surface being detected, then travel back to the detector. (The division by two accounts for the round trip of the pulse.)

### Design

The two kinds of lidar detection schemes are "incoherent" or direct energy detection (which principally measures amplitude changes of the reflected light) and coherent detection (best for measuring Doppler shifts, or changes in the phase of the reflected light). Coherent systems generally use optical heterodyne detection. This is more sensitive than direct detection and allows them to operate at much lower power, but requires more complex transceivers.

Both types employ pulse models: either micropulse or high energy. Micropulse systems utilize intermittent bursts of energy. They developed as a result of ever-increasing computer power, combined with advances in laser technology. They use considerably less energy in the laser, typically on the order of one microjoule, and are often "eye-safe", meaning they can be used without safety precautions. High-power systems are common in atmospheric research, where they are widely used for measuring atmospheric parameters: the height, layering and densities of clouds, cloud particle properties (extinction coefficient, backscatter coefficient, depolarization), temperature, pressure, wind, humidity, and trace gas concentration (ozone, methane, nitrous oxide, etc.).

### Components

Lidar systems consist of several major components.

#### Laser

600–1000 nm lasers are most common for non-scientific applications. The maximum power of the laser is limited, or an automatic shut-off system that turns the laser off at specific altitudes is used, in order to make it eye-safe for people on the ground.
One common alternative, 1550 nm lasers, are eye-safe at relatively high power levels since this wavelength is not strongly absorbed by the eye, but the detector technology is less advanced, so these wavelengths are generally used at longer ranges with lower accuracies. They are also used for military applications because 1550 nm is not visible in night vision goggles, unlike the shorter 1000 nm infrared laser.

Airborne topographic mapping lidars generally use 1064 nm diode-pumped YAG lasers, while bathymetric (underwater depth research) systems generally use 532 nm frequency-doubled diode-pumped YAG lasers because 532 nm penetrates water with much less attenuation than 1064 nm. Laser settings include the laser repetition rate (which controls the data collection speed). Pulse length is generally an attribute of the laser cavity length, the number of passes required through the gain material (YAG, YLF, etc.), and Q-switch (pulsing) speed. Better target resolution is achieved with shorter pulses, provided the lidar receiver detectors and electronics have sufficient bandwidth.

##### Phased arrays

A phased array can illuminate any direction by using a microscopic array of individual antennas. Controlling the timing (phase) of each antenna steers a cohesive signal in a specific direction. Phased arrays have been used in radar since the 1940s. The same technique can be used with light. On the order of a million optical antennas are used to see a radiation pattern of a certain size in a certain direction. The system is controlled by timing the precise flash. A single chip (or a few) replaces a US$75,000 electromechanical system, drastically reducing costs.

Several companies are working on developing commercial solid-state lidar units. The control system can change the shape of the lens to enable zoom in/zoom out functions. Specific sub-zones can be targeted at sub-second intervals. Electromechanical lidar lasts for between 1,000 and 2,000 hours. By contrast, solid-state lidar can run for 100,000 hours.

##### Microelectromechanical machines

Microelectromechanical mirrors (MEMS) are not entirely solid-state. However, their tiny form factor provides many of the same cost benefits. A single laser is directed to a single mirror that can be reoriented to view any part of the target field. The mirror spins at a rapid rate. However, MEMS systems generally operate in a single plane (left to right). To add a second dimension generally requires a second mirror that moves up and down. Alternatively, another laser can hit the same mirror from another angle. MEMS systems can be disrupted by shock/vibration and may require repeated calibration. The goal is to create a small microchip to enhance innovation and further technological advances.

#### Scanner and optics

Image development speed is affected by the speed at which the scene is scanned. Options to scan the azimuth and elevation include dual oscillating plane mirrors, a combination with a polygon mirror, and a dual axis scanner. Optic choices affect the angular resolution and range that can be detected. A hole mirror or a beam splitter are options to collect a return signal.

#### Photodetector and receiver electronics

Two main photodetector technologies are used in lidar: solid state photodetectors, such as silicon avalanche photodiodes, and photomultipliers. The sensitivity of the receiver is another parameter that has to be balanced in a lidar design.
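Since both detection schemes ultimately turn the receiver's timing measurement into a distance, it is worth making the d = ct/2 relation from the "Mathematical formula" section concrete. The following is a minimal Python sketch; the function names and example numbers are illustrative, not any instrument's actual interface:

```python
# Minimal time-of-flight ranging sketch, using the d = ct/2 relation
# from the "Mathematical formula" section above. All numbers are
# illustrative, not taken from a real device.

C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_round_trip(t_seconds: float) -> float:
    """One-way distance from a measured round-trip time.

    The pulse travels out and back, hence the division by two.
    """
    return C * t_seconds / 2.0

def range_resolution(pulse_width_s: float) -> float:
    """Approximate resolution limit set by the pulse length, c*tau/2.

    This is why shorter pulses give better target resolution, provided
    the receiver detectors and electronics have sufficient bandwidth.
    """
    return C * pulse_width_s / 2.0

# A return detected 667 ns after emission is roughly 100 m away;
# a 5 ns pulse limits range resolution to roughly 0.75 m.
print(f"{range_from_round_trip(667e-9):.1f} m")  # ~100.0 m
print(f"{range_resolution(5e-9):.2f} m")         # ~0.75 m
```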
#### Position and navigation systems

Lidar sensors mounted on mobile platforms such as airplanes or satellites require instrumentation to determine the absolute position and orientation of the sensor. Such devices generally include a Global Positioning System receiver and an inertial measurement unit (IMU).

#### Sensor

Lidar uses active sensors that supply their own illumination source. The energy source hits objects and the reflected energy is detected and measured by sensors. Distance to the object is determined by recording the time between transmitted and backscattered pulses and by using the speed of light to calculate the distance traveled. Flash lidar allows for 3-D imaging because of the camera's ability to emit a larger flash and sense the spatial relationships and dimensions of the area of interest with the returned energy. This allows for more accurate imaging because the captured frames do not need to be stitched together, and the system is not sensitive to platform motion. This results in less distortion.

3-D imaging can be achieved using both scanning and non-scanning systems. "3-D gated viewing laser radar" is a non-scanning laser ranging system that applies a pulsed laser and a fast gated camera. Research has begun for virtual beam steering using Digital Light Processing (DLP) technology.

Imaging lidar can also be performed using arrays of high speed detectors and modulation-sensitive detector arrays typically built on single chips using complementary metal–oxide–semiconductor (CMOS) and hybrid CMOS/charge-coupled device (CCD) fabrication techniques. In these devices each pixel performs some local processing such as demodulation or gating at high speed, downconverting the signals to video rate so that the array can be read like a camera. Using this technique many thousands of pixels/channels may be acquired simultaneously. High resolution 3-D lidar cameras use homodyne detection with an electronic CCD or CMOS shutter. A coherent imaging lidar uses synthetic array heterodyne detection to enable a staring single-element receiver to act as though it were an imaging array.

In 2014, Lincoln Laboratory announced a new imaging chip with more than 16,384 pixels, each able to image a single photon, enabling them to capture a wide area in a single image. An earlier generation of the technology, with one fourth as many pixels, was deployed by the U.S. military after the January 2010 Haiti earthquake. A single pass by a business jet at 3,000 meters (10,000 ft) over Port-au-Prince was able to capture instantaneous snapshots of 600-meter squares of the city at a resolution of 30 centimetres (12 in), displaying the precise height of rubble strewn in city streets. The new system is ten times better, and could produce much larger maps more quickly. The chip uses indium gallium arsenide (InGaAs), which operates in the infrared spectrum at a relatively long wavelength that allows for higher power and longer ranges. In many applications, such as self-driving cars, the new system will lower costs by not requiring a mechanical component to aim the chip. InGaAs uses less hazardous wavelengths than conventional silicon detectors, which operate at visual wavelengths.

### Flash lidar

In flash lidar, the entire field of view is illuminated with a wide diverging laser beam in a single pulse. This is in contrast to conventional scanning lidar, which uses a collimated laser beam that illuminates a single point at a time, and the beam is raster scanned to illuminate the field of view point-by-point.
This illumination method requires a different detection scheme as well. In both scanning and flash lidar, a time-of-flight camera is used to collect information about both the 3-D location and intensity of the light incident on it in every frame. However, in scanning lidar, this camera contains only a point sensor, while in flash lidar, the camera contains either a 1-D or a 2-D sensor array, each pixel of which collects 3-D location and intensity information. In both cases, the depth information is collected using the time of flight of the laser pulse (i.e., the time it takes each laser pulse to hit the target and return to the sensor), which requires the pulsing of the laser and acquisition by the camera to be synchronized. The result is a camera that takes pictures of distance, instead of colors.

Flash lidar is especially advantageous, when compared to scanning lidar, when the camera, scene, or both are moving, since the entire scene is illuminated at the same time. With scanning lidar, motion can cause "jitter" from the lapse in time as the laser rasters over the scene. As with all forms of lidar, the onboard source of illumination makes flash lidar an active sensor. The signal that is returned is processed by embedded algorithms to produce a nearly instantaneous 3-D rendering of objects and terrain features within the field of view of the sensor. The laser pulse repetition frequency is sufficient for generating 3-D videos with high resolution and accuracy. The high frame rate of the sensor makes it a useful tool for a variety of applications that benefit from real-time visualization, such as highly precise remote landing operations. By immediately returning a 3-D elevation mesh of target landscapes, a flash sensor can be used to identify optimal landing zones in autonomous spacecraft landing scenarios.

Seeing at a distance requires a powerful burst of light. The power is limited to levels that do not damage human retinas. Wavelengths must not affect human eyes. However, low-cost silicon imagers do not read light in the eye-safe spectrum. Instead, gallium-arsenide imagers are required, which can boost costs to $200,000. Gallium-arsenide is the same compound used to produce high-cost, high-efficiency solar panels usually used in space applications.
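As a toy illustration of a "camera that takes pictures of distance", the sketch below converts a small array of per-pixel round-trip times into a depth image. The 4×4 timing frame and its values are fabricated for the example; in a real flash lidar the focal-plane array would supply them.

```python
import numpy as np

# Fabricated frame of per-pixel round-trip arrival times (seconds):
# the background sits at ~10 m, with a closer object at ~6 m.
C = 299_792_458.0  # m/s

arrival_times = np.full((4, 4), 66.7e-9)
arrival_times[1:3, 1:3] = 40.0e-9

# One d = ct/2 conversion per pixel yields a depth image in metres.
depth_image = C * arrival_times / 2.0
print(np.round(depth_image, 2))
```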
## Classification

### Based on orientation

Lidar can be oriented to nadir, zenith, or laterally. For example, lidar altimeters look down, an atmospheric lidar looks up, and lidar-based collision avoidance systems are side-looking.

### Based on scanning mechanism

Laser projections of lidars can be manipulated using various methods and mechanisms to produce a scanning effect: the standard spindle-type, which spins to give a 360-degree view; solid-state lidar, which has a fixed field of view, but no moving parts, and can use either MEMS or optical phased arrays to steer the beams; and flash lidar, which spreads a flash of light over a large field of view before the signal bounces back to a detector.

### Based on platform

Lidar applications can be divided into airborne and terrestrial types. The two types require scanners with varying specifications based on the data's purpose, the size of the area to be captured, the range of measurement desired, the cost of equipment, and more. Spaceborne platforms are also possible; see satellite laser altimetry.

## Airborne

Airborne lidar (also airborne laser scanning) is when a laser scanner, while attached to an aircraft during flight, creates a 3-D point cloud model of the landscape. This is currently the most detailed and accurate method of creating digital elevation models, replacing photogrammetry. One major advantage in comparison with photogrammetry is the ability to filter out reflections from vegetation from the point cloud model to create a digital terrain model which represents ground surfaces such as rivers, paths, cultural heritage sites, etc., which are concealed by trees. Within the category of airborne lidar, there is sometimes a distinction made between high-altitude and low-altitude applications, but the main difference is a reduction in both accuracy and point density of data acquired at higher altitudes. Airborne lidar can also be used to create bathymetric models in shallow water.

The main constituents of airborne lidar include digital elevation models (DEM) and digital surface models (DSM). The points and ground points are the vectors of discrete points, while DEM and DSM are interpolated raster grids of discrete points. The process also involves capturing digital aerial photographs. Airborne lidar is used, for example, to interpret deep-seated landslides under the cover of vegetation by revealing scarps, tension cracks or tipped trees. Airborne lidar digital elevation models can see through the canopy of forest cover and support detailed measurements of scarps, erosion and tilting of electric poles.

Airborne lidar data can be processed using the Toolbox for Lidar Data Filtering and Forest Studies (TIFFS), which filters the lidar data and interpolates it into digital terrain models. The laser is directed at the region to be mapped, and each point's height above the ground is calculated by subtracting the corresponding digital terrain model elevation from the point's original z-coordinate. Based on this height above the ground, the non-vegetation data are obtained, which may include objects such as buildings, electric power lines, flying birds, insects, etc. The rest of the points are treated as vegetation and used for modeling and mapping. Within each of these plots, lidar metrics are calculated from statistics such as mean, standard deviation, skewness, percentiles, quadratic mean, etc.
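The height-normalization and plot-metric computation just described can be sketched in a few lines of numpy. The arrays, the 2 m vegetation threshold, and the particular metrics below are illustrative assumptions, not the TIFFS implementation:

```python
import numpy as np

# Hypothetical inputs: per-point elevations from an airborne scan, and the
# digital terrain model elevation interpolated at each point's (x, y).
point_z = np.array([312.4, 315.9, 330.2, 318.7, 341.0])  # metres
dtm_z = np.array([311.8, 312.0, 312.1, 311.9, 312.3])    # metres

# Height above ground: point elevation minus terrain elevation.
height_above_ground = point_z - dtm_z

# An assumed 2 m cut-off separates near-ground returns from vegetation.
vegetation = height_above_ground[height_above_ground > 2.0]

# Plot-level lidar metrics of the kind listed above
# (skewness would additionally need scipy.stats.skew).
metrics = {
    "mean": vegetation.mean(),
    "std": vegetation.std(),
    "p95": np.percentile(vegetation, 95),
    "quadratic_mean": np.sqrt(np.mean(vegetation ** 2)),
}
print(metrics)
```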
### Airborne lidar bathymetry

The airborne lidar bathymetric technological system involves the measurement of time of flight of a signal from a source to its return to the sensor. The data acquisition technique involves a sea floor mapping component and a ground truth component that includes video transects and sampling. It works using a green spectrum (532 nm) laser beam. Two beams are projected onto a fast rotating mirror, which creates an array of points. One of the beams penetrates the water and also detects the bottom surface of the water under favorable conditions. The data obtained shows the full extent of the land surface exposed above the sea floor. This technique is extremely useful as it will play an important role in major sea floor mapping programs. The mapping yields onshore topography as well as underwater elevations. Sea floor reflectance imaging is another solution product from this system which can benefit mapping of underwater habitats. This technique has been used for three-dimensional image mapping of California's waters using a hydrographic lidar.

Drones are now being used with laser scanners, as well as other remote sensors, as a more economical method to scan smaller areas. The possibility of drone remote sensing also eliminates any danger that aircraft crews may be subjected to in difficult terrain or remote areas.

### Full-waveform LiDAR

Airborne LiDAR systems were traditionally able to acquire only a few peak returns, while more recent systems acquire and digitize the entire reflected signal. Scientists analysed the waveform signal for extracting peak returns using Gaussian decomposition. Zhuang et al. (2017) used this approach for estimating aboveground biomass. Handling the huge amounts of full-waveform data is difficult. Therefore, Gaussian decomposition of the waveforms is effective, since it reduces the data and is supported by existing workflows that support interpretation of 3-D point clouds. Recent studies investigated voxelisation. The intensities of the waveform samples are inserted into a voxelised space (i.e. a 3-D grayscale image), building up a 3-D representation of the scanned area. Related metrics and information can then be extracted from that voxelised space. Structural information can be extracted using 3-D metrics from local areas, and there is a case study that used the voxelisation approach for detecting dead standing Eucalypt trees in Australia.

## Terrestrial

Terrestrial applications of lidar (also terrestrial laser scanning) happen on the Earth's surface and can be either stationary or mobile. Stationary terrestrial scanning is most common as a survey method, for example in conventional topography, monitoring, cultural heritage documentation and forensics. The 3-D point clouds acquired from these types of scanners can be matched with digital images taken of the scanned area from the scanner's location to create realistic-looking 3-D models in a relatively short time when compared to other technologies. Each point in the point cloud is given the colour of the pixel from the image taken at the same location and direction as the laser beam that created the point.

Mobile lidar (also mobile laser scanning) is when two or more scanners are attached to a moving vehicle to collect data along a path. These scanners are almost always paired with other kinds of equipment, including GNSS receivers and IMUs. One example application is surveying streets, where power lines, exact bridge heights, bordering trees, etc. all need to be taken into account. Instead of collecting each of these measurements individually in the field with a tachymeter, a 3-D model from a point cloud can be created where all of the measurements needed can be made, depending on the quality of the data collected. This eliminates the problem of forgetting to take a measurement, so long as the model is available, reliable and has an appropriate level of accuracy.

Terrestrial lidar mapping involves a process of occupancy grid map generation. The process uses an array of cells divided into grids, which store the height values of lidar data falling into each respective grid cell. A binary map is then created by applying a particular threshold to the cell values for further processing. The next step is to process the radial distance and z-coordinates from each scan to identify which 3-D points correspond to each of the specified grid cells, leading to the process of data formation.
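A minimal sketch of that occupancy-grid step is shown below, assuming sensor-local coordinates in metres; the cell size and height threshold are illustrative choices, not values from a specific system:

```python
import numpy as np

def occupancy_grid(points, cell=0.5, x_max=50.0, y_max=50.0, h_thresh=0.3):
    """points: (N, 3) array of x, y, z returns in a sensor-local frame.

    Bins each return into a 2-D grid of cells, keeps the maximum height
    seen per cell, and thresholds the result into a binary map.
    """
    nx, ny = int(x_max / cell), int(y_max / cell)
    heights = np.full((nx, ny), -np.inf)
    ix = (points[:, 0] / cell).astype(int)
    iy = (points[:, 1] / cell).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    for i, j, z in zip(ix[keep], iy[keep], points[keep, 2]):
        heights[i, j] = max(heights[i, j], z)
    return heights > h_thresh  # True where a cell is considered occupied

pts = np.array([[1.2, 3.4, 0.05], [1.3, 3.5, 1.10], [10.0, 20.0, 0.02]])
print(occupancy_grid(pts).sum(), "occupied cells")
```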
## Applications

There are a wide variety of lidar applications, in addition to those listed below, as is often noted in national lidar dataset programs. These applications are largely determined by the range of effective object detection; resolution, which is how accurately the lidar identifies and classifies objects; and reflectance confusion, meaning how well the lidar can see something in the presence of bright objects, like reflective signs or bright sun. Companies are working to cut the cost of lidar sensors, currently anywhere from about $1,200 to more than $12,000. Lower prices will make lidar more attractive for new markets.

### Agriculture

Agricultural robots have been used for a variety of purposes, ranging from seed and fertilizer dispersion and sensing techniques to crop scouting for the task of weed control. Lidar can help determine where to apply costly fertilizer. It can create a topographical map of the fields and reveal slopes and sun exposure of the farmland. Researchers at the Agricultural Research Service used this topographical data with the farmland yield results from previous years to categorize land into zones of high, medium, or low yield. This indicates where to apply fertilizer to maximize yield.

Lidar is now used to monitor insects in the field. Lidar can detect the movement and behavior of individual flying insects, with identification down to sex and species. In 2017 a patent application was published on this technology in the United States, Europe, and China.

Another application is crop mapping in orchards and vineyards, to detect foliage growth and the need for pruning or other maintenance, detect variations in fruit production, or count plants. Lidar is useful in GNSS-denied situations, such as nut and fruit orchards, where foliage blocks satellite signals to precision agriculture equipment or a driverless tractor. Lidar sensors can detect the edges of rows, so that farming equipment can continue moving until a GNSS signal is reestablished.

#### Plant species classification

Controlling weeds requires identifying plant species. This can be done by using 3-D lidar and machine learning. Lidar produces plant contours as a "point cloud" with range and reflectance values. This data is transformed, and features are extracted from it. If the species is known, the features are added as new data. The species is labelled and its features are initially stored as an example to identify the species in the real environment. This method is efficient because it uses a low-resolution lidar and supervised learning. It includes an easy-to-compute feature set with common statistical features which are independent of the plant size.
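As an illustration of this pipeline, the sketch below reduces a per-plant point cloud to a small, size-independent feature vector and trains a supervised classifier. It assumes scikit-learn is available and uses randomly generated stand-in data; the feature set and model choice are illustrative, not the specific method referenced above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def plant_features(points, reflectance):
    """points: (N, 3) xyz contour samples; reflectance: (N,) intensities.

    Heights are rescaled to [0, 1] so the features describe the shape of
    the height profile independently of the plant's absolute size.
    """
    z = points[:, 2]
    z_rel = (z - z.min()) / max(np.ptp(z), 1e-6)
    return np.array([
        z_rel.mean(), z_rel.std(), np.percentile(z_rel, 75),
        reflectance.mean(), reflectance.std(),
    ])

# Stand-in training data: random point clouds with made-up labels,
# purely to show the shape of the workflow.
X = np.stack([plant_features(np.random.rand(50, 3), np.random.rand(50))
              for _ in range(20)])
y = np.random.choice(["crop", "weed"], size=20)

clf = RandomForestClassifier(n_estimators=50).fit(X, y)
print(clf.predict(X[:3]))
```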
### Archaeology

Lidar has many uses in archaeology, including planning of field campaigns, mapping features under forest canopy, and overview of broad, continuous features indistinguishable from the ground. Lidar can produce high-resolution datasets quickly and cheaply. Lidar-derived products can be easily integrated into a Geographic Information System (GIS) for analysis and interpretation. Lidar can also help to create high-resolution digital elevation models (DEMs) of archaeological sites that can reveal micro-topography that is otherwise hidden by vegetation. The intensity of the returned lidar signal can be used to detect features buried under flat vegetated surfaces such as fields, especially when mapping using the infrared spectrum. The presence of these features affects plant growth and thus the amount of infrared light reflected back. For example, at Fort Beauséjour – Fort Cumberland National Historic Site, Canada, lidar discovered archaeological features related to the siege of the fort in 1755. Features that could not be distinguished on the ground or through aerial photography were identified by overlaying hill shades of the DEM created with artificial illumination from various angles. Another example is work at Caracol by Arlen Chase and his wife Diane Zaino Chase.

In 2012, lidar was used to search for the legendary city of La Ciudad Blanca or "City of the Monkey God" in the La Mosquitia region of the Honduran jungle. During a seven-day mapping period, evidence was found of man-made structures. In June 2013, the rediscovery of the city of Mahendraparvata was announced. In southern New England, lidar was used to reveal stone walls, building foundations, abandoned roads, and other landscape features obscured in aerial photography by the region's dense forest canopy. In Cambodia, lidar data were used by Damian Evans and Roland Fletcher to reveal anthropogenic changes to the Angkor landscape. In 2012, lidar revealed that the Purépecha settlement of Angamuco in Michoacán, Mexico had about as many buildings as today's Manhattan, while in 2016, its use in mapping ancient Maya causeways in northern Guatemala revealed 17 elevated roads linking the ancient city of El Mirador to other sites. In 2018, archaeologists using lidar discovered more than 60,000 man-made structures in the Maya Biosphere Reserve, a "major breakthrough" that showed the Maya civilization was much larger than previously thought.

### Autonomous vehicles

Autonomous vehicles may use lidar for obstacle detection and avoidance to navigate safely through environments. The introduction of lidar was the key enabler behind Stanley, the first autonomous vehicle to successfully complete the DARPA Grand Challenge. Point cloud output from the lidar sensor provides the necessary data for robot software to determine where potential obstacles exist in the environment and where the robot is in relation to those potential obstacles. The Singapore-MIT Alliance for Research and Technology (SMART) is actively developing technologies for autonomous lidar vehicles. The very first generations of automotive adaptive cruise control systems used only lidar sensors.

#### Object detection for transportation systems

In transportation systems, to ensure vehicle and passenger safety and to develop electronic systems that deliver driver assistance, understanding a vehicle and its surrounding environment is essential. Lidar systems play an important role in the safety of transportation systems. Many electronic systems which add to driver assistance and vehicle safety, such as Adaptive Cruise Control (ACC), Emergency Brake Assist, and Anti-lock Braking System (ABS), depend on the detection of a vehicle's environment to act autonomously or semi-autonomously. Lidar mapping and estimation achieve this.

A basic overview: current lidar systems use rotating hexagonal mirrors which split the laser beam. The upper three beams are used for the vehicle and obstacles ahead, and the lower beams are used to detect lane markings and road features. The major advantage of using lidar is that the spatial structure is obtained, and this data can be fused with data from other sensors such as radar to get a better picture of the vehicle environment in terms of static and dynamic properties of the objects present in the environment.
Conversely, a significant issue with lidar is the difficulty in reconstructing point cloud data in poor weather conditions. In heavy rain, for example, the light pulses emitted from the lidar system are partially reflected off of rain droplets, which adds noise to the data, called "echoes". Various approaches to processing lidar data, and to using it along with data from other sensors through sensor fusion to detect the vehicle environment conditions, are described below.

##### Obstacle detection and road environment recognition using lidar

This method, proposed by Kun Zhou et al., not only focuses on object detection and tracking but also recognizes lane markings and road features. As mentioned earlier, the lidar systems use rotating hexagonal mirrors that split the laser beam into six beams. The upper three layers are used to detect forward objects such as vehicles and roadside objects. The sensor is made of weather-resistant material. The data detected by lidar are clustered into several segments and tracked by a Kalman filter. Data clustering here is done based on the characteristics of each segment according to an object model, which distinguishes different objects such as vehicles, signboards, etc. These characteristics include the dimensions of the object, etc. The reflectors on the rear edges of vehicles are used to differentiate vehicles from other objects. Object tracking is done using a two-stage Kalman filter, considering the stability of tracking and the accelerated motion of objects. Lidar reflective intensity data is also used for curb detection by making use of robust regression to deal with occlusions. The road marking is detected using a modified Otsu method by distinguishing rough and shiny surfaces.

Roadside reflectors that indicate the lane border are sometimes hidden for various reasons. Therefore, other information is needed to recognize the road border. The lidar used in this method can measure the reflectivity from the object. Hence, with this data the road border can also be recognized. Also, the use of a sensor with a weather-robust head helps in detecting objects even in bad weather conditions. A canopy height model before and after a flood is a good example: lidar can capture highly detailed canopy height data as well as the road border.

Lidar measurements help identify the spatial structure of an obstacle. This helps distinguish objects based on size and estimate the impact of driving over them. Lidar systems provide better range and a large field of view, which helps in detecting obstacles on curves. This is one major advantage over radar systems, which have a narrower field of view. The fusion of lidar measurements with different sensors makes the system robust and useful in real-time applications, since lidar-dependent systems alone can't estimate the dynamic information about the detected object. It has been shown that lidar can be manipulated, such that self-driving cars are tricked into taking evasive action.
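The clustering-and-tracking stage lends itself to a compact illustration. Below is a minimal constant-velocity Kalman filter for the centroid of one tracked segment; it is a generic textbook filter under assumed noise values, not the two-stage formulation of Kun Zhou et al. described above.

```python
import numpy as np

dt = 0.1                                 # assumed scan interval, seconds
F = np.array([[1, 0, dt, 0],             # state transition for [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0],              # only the centroid position is observed
              [0, 1, 0, 0]])
Q = np.eye(4) * 0.01                     # process noise (assumed)
R = np.eye(2) * 0.1                      # measurement noise (assumed)

x, P = np.zeros(4), np.eye(4)            # initial state and covariance

def kalman_step(x, P, z):
    # Predict forward one scan interval.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured cluster centroid z = [x, y].
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in ([1.0, 0.5], [1.2, 0.6], [1.4, 0.7]):
    x, P = kalman_step(x, P, np.array(z))
print("estimated velocity:", x[2:])
```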
### Biology and conservation

Lidar has also found many applications in forestry. Canopy heights, biomass measurements, and leaf area can all be studied using airborne lidar systems. Similarly, lidar is also used by many industries, including energy and railroad, and by the Department of Transportation as a faster way of surveying. Topographic maps can also be generated readily from lidar, including for recreational use such as in the production of orienteering maps. Lidar has also been applied to estimate and assess the biodiversity of plants, fungi, and animals. In addition, the Save the Redwoods League has undertaken a project to map the tall redwoods on the Northern California coast. Lidar allows research scientists not only to measure the height of previously unmapped trees, but also to determine the biodiversity of the redwood forest. Stephen Sillett, who is working with the League on the North Coast lidar project, claims this technology will be useful in directing future efforts to preserve and protect ancient redwood trees.

### Geology and soil science

High-resolution digital elevation maps generated by airborne and stationary lidar have led to significant advances in geomorphology (the branch of geoscience concerned with the origin and evolution of the Earth's surface topography). Lidar's abilities to detect subtle topographic features such as river terraces, river channel banks and glacial landforms, to measure the land-surface elevation beneath the vegetation canopy, to better resolve spatial derivatives of elevation, and to detect elevation changes between repeat surveys have enabled many novel studies of the physical and chemical processes that shape landscapes. In 2005 the Tour Ronde in the Mont Blanc massif became the first high alpine mountain on which lidar was employed to monitor the increasing occurrence of severe rock-fall over large rock faces, allegedly caused by climate change and degradation of permafrost at high altitude.

Lidar is also used in structural geology and geophysics as a combination between airborne lidar and GNSS for the detection and study of faults and for measuring uplift. The output of the two technologies can produce extremely accurate elevation models for terrain – models that can even measure ground elevation through trees. This combination was used most famously to find the location of the Seattle Fault in Washington, United States. This combination was also used to measure uplift at Mount St. Helens by using data from before and after the 2004 uplift. Airborne lidar systems monitor glaciers and have the ability to detect subtle amounts of growth or decline. A satellite-based system, the NASA ICESat, includes a lidar sub-system for this purpose. The NASA Airborne Topographic Mapper is also used extensively to monitor glaciers and perform coastal change analysis. The combination is also used by soil scientists while creating a soil survey. The detailed terrain modeling allows soil scientists to see slope changes and landform breaks which indicate patterns in soil spatial relationships.

### Atmosphere

Initially based on ruby lasers, lidar for meteorological applications was constructed shortly after the invention of the laser and represents one of the first applications of laser technology. Lidar technology has since expanded vastly in capability, and lidar systems are used to perform a range of measurements that include profiling clouds, measuring winds, studying aerosols, and quantifying various atmospheric components. Atmospheric components can in turn provide useful information including surface pressure (by measuring the absorption of oxygen or nitrogen), greenhouse gas emissions (carbon dioxide and methane), photosynthesis (carbon dioxide), fires (carbon monoxide), and humidity (water vapor). Atmospheric lidars can be ground-based, airborne or satellite-based, depending on the type of measurement.

Atmospheric lidar remote sensing works in two ways – by measuring backscatter from the atmosphere, and by measuring the scattered reflection off the ground (when the lidar is airborne) or another hard surface.
Backscatter from the atmosphere directly gives a measure of clouds and aerosols. Other derived measurements from backscatter, such as winds or cirrus ice crystals, require careful selection of the wavelength and/or polarization detected. Doppler lidar and Rayleigh Doppler lidar are used to measure temperature and/or wind speed along the beam by measuring the frequency of the backscattered light. The Doppler broadening of gases in motion allows the determination of properties via the resulting frequency shift. Scanning lidars, such as the conical-scanning NASA HARLIE LIDAR, have been used to measure atmospheric wind velocity. The ESA wind mission ADM-Aeolus will be equipped with a Doppler lidar system in order to provide global measurements of vertical wind profiles. A Doppler lidar system was used in the 2008 Summer Olympics to measure wind fields during the yacht competition.

Doppler lidar systems are also now beginning to be successfully applied in the renewable energy sector to acquire wind speed, turbulence, wind veer, and wind shear data. Both pulsed and continuous wave systems are being used. Pulsed systems use signal timing to obtain vertical distance resolution, whereas continuous wave systems rely on detector focusing. The term "eolics" has been proposed to describe the collaborative and interdisciplinary study of wind using computational fluid mechanics simulations and Doppler lidar measurements.

The ground reflection of an airborne lidar gives a measure of surface reflectivity (assuming the atmospheric transmittance is well known) at the lidar wavelength; however, the ground reflection is typically used for making absorption measurements of the atmosphere. "Differential absorption lidar" (DIAL) measurements utilize two or more closely spaced (<1 nm) wavelengths to factor out surface reflectivity as well as other transmission losses, since these factors are relatively insensitive to wavelength. When tuned to the appropriate absorption lines of a particular gas, DIAL measurements can be used to determine the concentration (mixing ratio) of that particular gas in the atmosphere. This is referred to as an Integrated Path Differential Absorption (IPDA) approach, since it is a measure of the integrated absorption along the entire lidar path. IPDA lidars can be either pulsed or CW and typically use two or more wavelengths. IPDA lidars have been used for remote sensing of carbon dioxide and methane.

Synthetic array lidar allows imaging lidar without the need for an array detector. It can be used for imaging Doppler velocimetry, ultra-fast frame rate (MHz) imaging, as well as for speckle reduction in coherent lidar. An extensive lidar bibliography for atmospheric and hydrospheric applications is given by Grant.
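To make the IPDA approach described above concrete, the sketch below retrieves a path-averaged number density from the ratio of the returns at the "off" and "on" wavelengths; ln(P_off/P_on) / (2 Δσ L) is the standard integrated-path form, and the numerical inputs are invented for the example.

```python
import math

def ipda_number_density(p_on, p_off, delta_sigma, path_length_m):
    """Path-averaged molecular number density (molecules per m^3).

    p_on, p_off: received powers at the absorbed and reference wavelengths.
    delta_sigma: differential absorption cross-section (m^2 per molecule).
    path_length_m: one-way path length; the factor of 2 accounts for the
    round trip of the pulse. Surface reflectivity and other losses cancel
    in the ratio because the two wavelengths are closely spaced.
    """
    return math.log(p_off / p_on) / (2.0 * delta_sigma * path_length_m)

# Invented example numbers: a 20% deeper on-line return over an 8 km path.
n = ipda_number_density(p_on=0.8, p_off=1.0,
                        delta_sigma=5e-27, path_length_m=8000.0)
print(f"{n:.3e} molecules per cubic metre")
```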
### Law enforcement

Lidar speed guns are used by the police to measure the speed of vehicles for speed limit enforcement purposes. Additionally, it is used in forensics to aid in crime scene investigations. Scans of a scene are taken to record exact details of object placement, blood, and other important information for later review. These scans can also be used to determine bullet trajectory in cases of shootings.

### Military

Few military applications are known to be in place, and those are classified (such as the lidar-based speed measurement of the AGM-129 ACM stealth nuclear cruise missile), but a considerable amount of research is underway in the use of lidar for imaging. Higher resolution systems collect enough detail to identify targets, such as tanks. Examples of military applications of lidar include the Airborne Laser Mine Detection System (ALMDS) for counter-mine warfare by Areté Associates.

A NATO report (RTO-TR-SET-098) evaluated the potential technologies to do stand-off detection for the discrimination of biological warfare agents. The potential technologies evaluated were Long-Wave Infrared (LWIR), Differential Scattering (DISC), and Ultraviolet Laser Induced Fluorescence (UV-LIF). The report concluded: "Based upon the results of the lidar systems tested and discussed above, the Task Group recommends that the best option for the near-term (2008–2010) application of stand-off detection systems is UV-LIF; however, in the long-term, other techniques such as stand-off Raman spectroscopy may prove to be useful for identification of biological warfare agents." Short-range compact spectrometric lidar based on Laser-Induced Fluorescence (LIF) would address the presence of bio-threats in aerosol form over critical indoor, semi-enclosed and outdoor venues such as stadiums, subways, and airports. This near real-time capability would enable rapid detection of a bioaerosol release and allow for timely implementation of measures to protect occupants and minimize the extent of contamination.

The Long-Range Biological Standoff Detection System (LR-BSDS) was developed for the U.S. Army to provide the earliest possible standoff warning of a biological attack. It is an airborne system carried by helicopter to detect synthetic aerosol clouds containing biological and chemical agents at long range. The LR-BSDS, with a detection range of 30 km or more, was fielded in June 1997. Five lidar units produced by the German company Sick AG were used for short range detection on Stanley, the autonomous car that won the 2005 DARPA Grand Challenge. A robotic Boeing AH-6 performed a fully autonomous flight in June 2010, including avoiding obstacles using lidar.

### Mining

Calculation of ore volumes is accomplished by periodic (monthly) scanning in areas of ore removal, then comparing surface data to the previous scan. Lidar sensors may also be used for obstacle detection and avoidance for robotic mining vehicles, such as in the Komatsu Autonomous Haulage System (AHS) used in Rio Tinto's Mine of the Future.

### Physics and astronomy

A worldwide network of observatories uses lidars to measure the distance to reflectors placed on the moon, allowing the position of the moon to be measured with millimeter precision and tests of general relativity to be done. MOLA, the Mars Orbiting Laser Altimeter, used a lidar instrument in a Mars-orbiting satellite (the NASA Mars Global Surveyor) to produce a spectacularly precise global topographic survey of the red planet. Laser altimeters have produced global elevation models of Mars, the Moon (Lunar Orbiter Laser Altimeter (LOLA)) and Mercury (Mercury Laser Altimeter (MLA)), as well as measurements from the NEAR–Shoemaker Laser Rangefinder (NLR). Future missions will also include laser altimeter experiments such as the Ganymede Laser Altimeter (GALA) as part of the Jupiter Icy Moons Explorer (JUICE) mission. In September 2008, the NASA Phoenix Lander used lidar to detect snow in the atmosphere of Mars.

In atmospheric physics, lidar is used as a remote detection instrument to measure densities of certain constituents of the middle and upper atmosphere, such as potassium, sodium, or molecular nitrogen and oxygen. These measurements can be used to calculate temperatures.
Lidar can also be used to measure wind speed and to provide information about the vertical distribution of aerosol particles. At the JET nuclear fusion research facility, in the UK near Abingdon, Oxfordshire, lidar Thomson scattering is used to determine electron density and temperature profiles of the plasma.

### Rock mechanics

Lidar has been widely used in rock mechanics for rock mass characterization and slope change detection. Some important geomechanical properties of the rock mass can be extracted from the 3-D point clouds obtained by means of the lidar. Some of these properties are:

- Discontinuity orientation
- Discontinuity spacing and RQD
- Discontinuity aperture
- Discontinuity persistence
- Discontinuity roughness
- Water infiltration

Some of these properties have been used to assess the geomechanical quality of the rock mass through the RMR index. Moreover, as the orientations of discontinuities can be extracted using the existing methodologies, it is possible to assess the geomechanical quality of a rock slope through the SMR index. In addition to this, the comparison of different 3-D point clouds from a slope acquired at different times allows researchers to study the changes produced on the scene during this time interval as a result of rockfalls or any other landsliding processes.

### THOR

THOR is a laser designed to measure Earth's atmospheric conditions. The laser enters a cloud cover and measures the thickness of the return halo. The sensor has a fiber optic aperture with a width of 7.5 inches that is used to measure the return light.

### Robotics

Lidar technology is being used in robotics for the perception of the environment as well as object classification. The ability of lidar technology to provide three-dimensional elevation maps of the terrain, high-precision distance to the ground, and approach velocity can enable safe landing of robotic and crewed vehicles with a high degree of precision. Lidar is also widely used in robotics for simultaneous localization and mapping and is well integrated into robot simulators. Refer to the Military section above for further examples.

### Spaceflight

Lidar is increasingly being utilized for rangefinding and orbital element calculation of relative velocity in proximity operations and stationkeeping of spacecraft. Lidar has also been used for atmospheric studies from space. Short pulses of laser light beamed from a spacecraft can reflect off tiny particles in the atmosphere and back to a telescope aligned with the spacecraft laser. By precisely timing the lidar "echo", and by measuring how much laser light is received by the telescope, scientists can accurately determine the location, distribution and nature of the particles. The result is a revolutionary new tool for studying constituents in the atmosphere, from cloud droplets to industrial pollutants, which are difficult to detect by other means.

Laser altimetry is used to make digital elevation maps of planets, including the Mars Orbital Laser Altimeter (MOLA) mapping of Mars, the Lunar Orbital Laser Altimeter (LOLA) and Lunar Altimeter (LALT) mapping of the Moon, and the Mercury Laser Altimeter (MLA) mapping of Mercury. It is also used to help navigate the helicopter Ingenuity in its record-setting flights over the terrain of Mars.

### Surveying

Airborne lidar sensors are used by companies in the remote sensing field.
They can be used to create a DTM (Digital Terrain Model) or DEM (Digital Elevation Model); this is quite a common practice for larger areas as a plane can acquire 3–4 km wide swaths in a single flyover. Greater vertical accuracy of below 50 mm can be achieved with a lower flyover, even in forests, where it is able to give the height of the canopy as well as the ground elevation. Typically, a GNSS receiver configured over a georeferenced control point is needed to link the data in with the WGS (World Geodetic System).

LiDAR is also used in hydrographic surveying. Depending upon the clarity of the water, LiDAR can measure depths from 0.9 m to 40 m with a vertical accuracy of 15 cm and a horizontal accuracy of 2.5 m.

### Forestry

Lidar systems have also been applied to improve forestry management. Measurements are used to take inventory in forest plots as well as to calculate individual tree heights, crown width and crown diameter. Other statistical analyses use lidar data to estimate total plot information such as canopy volume, mean, minimum and maximum heights, and vegetation cover estimates. Aerial LiDAR has been used to map the bush fires in Australia in early 2020. The data was manipulated to view bare earth and to identify healthy and burned vegetation.

### Transport

Lidar has been used in the railroad industry to generate asset health reports for asset management and by departments of transportation to assess their road conditions. CivilMaps.com is a leading company in the field. Lidar has been used in adaptive cruise control (ACC) systems for automobiles. Systems such as those by Siemens, Hella, Ouster and Cepton use a lidar device mounted on the front of the vehicle, such as the bumper, to monitor the distance between the vehicle and any vehicle in front of it. In the event the vehicle in front slows down or is too close, the ACC applies the brakes to slow the vehicle. When the road ahead is clear, the ACC allows the vehicle to accelerate to a speed preset by the driver. Refer to the Military section above for further examples. A lidar-based device, the ceilometer, is used at airports worldwide to measure the height of clouds on runway approach paths.

### Wind farm optimization

Lidar can be used to increase the energy output from wind farms by accurately measuring wind speeds and wind turbulence. Experimental lidar systems can be mounted on the nacelle of a wind turbine or integrated into the rotating spinner to measure oncoming horizontal winds and winds in the wake of the wind turbine, and to proactively adjust blades to protect components and increase power. Lidar is also used to characterise the incident wind resource for comparison with wind turbine power production to verify the performance of the wind turbine by measuring the wind turbine's power curve. Wind farm optimization can be considered a topic in applied eolics. Another aspect of lidar in the wind-related industry is the use of computational fluid dynamics over lidar-scanned surfaces in order to assess the wind potential, which can be used for optimal wind farm placement.

### Solar photovoltaic deployment optimization

Lidar can also be used to assist planners and developers in optimizing solar photovoltaic systems at the city level by determining appropriate rooftops and shading losses.
Recent airborne laser scanning efforts have focused on ways to estimate the amount of solar light hitting vertical building facades, or on incorporating more detailed shading losses by considering the influence of vegetation and larger surrounding terrain.

### Video games

Recent simulation racing games such as rFactor Pro, iRacing, Assetto Corsa and Project CARS increasingly feature race tracks reproduced from 3-D point clouds acquired through lidar surveys, resulting in surfaces replicated with centimeter or millimeter precision in the in-game 3-D environment. The 2017 exploration game Scanner Sombre, by Introversion Software, uses lidar as a fundamental game mechanic. In Build the Earth, lidar is used to create accurate renders of terrain in Minecraft to account for any errors (mainly regarding elevation) in the default generation. The process of rendering terrain into Build the Earth is limited by the amount of data available in a region as well as the speed at which the file can be converted into block data.

### Other uses

The video for the 2007 song "House of Cards" by Radiohead was believed to be the first use of real-time 3-D laser scanning to record a music video. The range data in the video is not completely from a lidar, as structured light scanning is also used. In 2020, Apple introduced the fourth generation of iPad Pro with a lidar sensor integrated into the rear camera module, especially developed for augmented reality (AR) experiences. The feature was later included in the iPhone 12 Pro lineup and subsequent Pro models. On Apple devices, lidar empowers portrait mode pictures with night mode, and also speeds up autofocus and improves accuracy in the Measure app. In 2022, Wheel of Fortune started using lidar technology to track when Vanna White moves her hand over the puzzle board to reveal letters. The first episode to have this technology was in the Season 40 premiere. However, the technology has had mixed reviews from fans of the show.

## Alternative technologies

Recent development of Structure From Motion (SFM) technologies allows delivering 3-D images and maps based on data extracted from visual and IR photography. The elevation or 3-D data is extracted using multiple parallel passes over the mapped area, yielding both visual light images and 3-D structure from the same sensor, which is often a specially chosen and calibrated digital camera. Computer stereo vision has shown promise as an alternative to LiDAR for close-range applications.
https://en.wikipedia.org/wiki/Laser_altimeter
A handful of county highway department employees in the Rochester area gathered recently at the Olmsted County Public Works Service Center for a presentation and live demonstration by University of Minnesota Research Fellow Brian Davis about his team's work involving light detection and ranging – or LiDAR.

"LiDAR is like radar, but with light," Davis said. "It gives you information about what's around the sensor."

Davis and his fellow researchers have outfitted a sedan with special LiDAR equipment and other technology that is capable of capturing a 360-degree, 3-D view of a scene in real time.

"We use the car as a test bed," Davis said. "We have a lot of different types of sensors on the car that we use for the different projects that we're working on. Right now we have a LiDAR sensor on top. Sometimes we have a high-accuracy GPS receiver in there. We have a cellular modem. We have a handful of inertial sensors. So it's a lot of different stuff that we use to cater to the application."

For his presentation, Davis showed the attendees some of the data his team had already collected.

"We showed a handful of pre-collected data at a handful of intersections around Rochester and Minneapolis," Davis said. "What it shows is the point cloud collected by the sensor – just the raw point cloud with no post-processing done. In that information you can see people moving through it, cars moving through it, buses and light rail trains."

After the presentation, Davis led the group to the parking lot for a close-up look at the technology and how it collects data and displays that data in real time.

Le Sueur County GIS manager Justin Lutterman was among those who could envision possible applications for LiDAR.

"It'll be interesting to see where this can go," Lutterman said. "I'm sure the private industry will take off with this and emergency management, or the sheriffs and ambulances, would appreciate this kind of technology on their vehicles for a situation they might have to recreate. Roads and traffic designers would be able to monitor their resources, pavements, traffic counts and things like that."

Over the coming months, researchers will gather more data to develop a workshop for county personnel interested in learning more about LiDAR and how it can be applied in their transportation systems.

"The next steps for this project are to collect some data with the car at intersections. Then we can use that information to fine tune our algorithms," Davis said. "What the algorithms are going to do is take that raw data and give us useful information, like the number of cars, or the time a car passes through an intersection. That all feeds into the workshop we're developing. The workshop is going to be for county GIS workers, traffic engineers and county engineers who are interested in learning about these technologies."

Mobile imagery, LiDAR help MnDOT maintain its assets

How do you quickly and cost-effectively get an accurate inventory of transportation assets spread out along more than 1,100 miles of roadway? That was the problem facing the Minnesota Department of Transportation's Metro District, which needed an inventory of its plate beam guardrail and concrete barriers. To accomplish this, engineers in the district launched an innovative research implementation project using a pair of mobile mapping technologies — Light Detection and Ranging (LiDAR) and mobile imaging — that can collect vast amounts of geospatial data on highway infrastructure in a safe and efficient manner.
Mobile imaging uses a camera mounted on a vehicle driving at highway speeds to take high-resolution photos at regular intervals. It's accurate to within 1 foot, which makes it suitable for use in preliminary (30 percent) design plans without additional field surveys. In this project, researchers collected mobile images of roadway barriers and extracted data from them along Metro District roadways, including all ramps, overpasses, interchanges, weigh stations, rest areas and historical sites.

Researchers also collected LiDAR data at three Metro District sites. LiDAR uses a laser range finder and reflected laser light to measure distances. It provides survey-grade data accurate to within 0.1 foot, but it is significantly more expensive to collect than mobile imaging.

"Mobile imagery and mobile LiDAR are relatively new technologies, but this research shows that they are options that we can use. Collecting this information manually would have taken a lot more time and money," said MnDOT Asset Management Engineer Trisha Stefanski.

MnDOT's barrier inventory will provide invaluable information for design, planning and maintenance. The data will be published on MnDOT's Georilla map server, where it will be beneficial to a variety of projects and recurring tasks. For example, if a vehicle hits a barrier, maintenance staff will be able to check the database to see the type of barrier and end treatment to ensure they bring the right equipment to make repairs. Although the project focused on barriers, the imagery contains data on other assets as well. MnDOT has already used the imagery to extract noise wall and sign data.

- Minnesota Department of Transportation Metro Barrier Extraction and LiDAR Project – Final Report #2014-22 (PDF, 1 MB, 13 pages)
- Using Mobile Mapping to Inventory Barriers – Technical Summary #2014-22TS (PDF, 1 MB, 2 pages)

This blog post was adapted from an article in our upcoming issue of Accelerator, MnDOT's research and innovation newsletter.
https://mntransportationresearch.org/tag/lidar/
ForensisGroup Automated Vehicles Expert Witnesses research and develop automated vehicle technologies ranging from Advanced Driver Assistance Systems (ADAS) to highly automated systems. These experts support development of perception, sensor fusion, localization, and obstacle detection and tracking algorithms. They evaluate and compare deep learning algorithms for specific applications and related tasks to improve performance, training, and suitability for embedded applications, as well as develop immersive VR/AR simulation environments. They have opined in lawsuits concerning the transparency of autonomous vehicle testing and onboard safety technology, as well as cases involving trade secrets and proprietary design files, blueprints, and performance testing documentation for self-driving car technology. They also assist law firms with regulatory issues and legislation concerning autonomous vehicle technology, and they research issues of driver error, patents, and LiDAR (Light Detection and Ranging).

These experts have Bachelor's, Master's, PhD, or post-doctorate education in fields such as electrical and/or electronic engineering; mechanical engineering; computer engineering or computer science; aerospace, biomedical, chemical, or other engineering disciplines; and robotics, mathematics, statistics, natural sciences, or related fields. If our selection doesn't include the automated vehicles expert witness you're looking for, please contact us. We will assist you in locating the right expert for your case, and provide you with an initial case consultation at no cost. And because our experts don't pay a fee to be listed, we can recruit only the most trusted and credible experts based on a rigorous screening process. Contact us to learn more and retain a highly qualified automated vehicles expert for your case.

I have over 5 years of experience in the psychology field with an emphasis in human factors. I have a thorough understanding of architecture; behavioral performance; ergonomics; human-computer interaction; social psychology; subjective usability; system usability; and trust and usability. I have been published in peer-reviewed journals within my field and have presented at conferences of the like. My research has included how human factors affect autonomous vehicles, environment design and evaluation, and voting systems. I have prior experience offering my services as a human factors expert witness.

With over 25 years of experience in the field of Electrical Engineering & Navigation, I am an expert in non-GPS navigation, precise Global Positioning System (GPS) positioning, differential GPS, GPS receiver design, navigation warfare, vision-based navigation, Inertial Navigation System (INS)/GPS-based flight truth reference systems for navigation, and integrating GPS into existing weapons systems. I have also worked as an electrical engineering and GPS navigation expert witness in Ohio.
https://www.forensisgroup.com/expert-witness/automated-vehicles/
The panel discussion is jointly organized by Leibniz-IZW and WWF Germany. For decades, deforestation has been cited as the most immediate threat to biodiversity in tropical rainforests. However, in recent years, the magnitude of the global illegal wildlife trade has increased significantly, and several new studies indicate that unsustainable hunting may be a greater threat to tropical biodiversity than deforestation. The large-scale commercialization of hunting in tropical ecosystems worldwide has already led to a widespread “empty forest syndrome” – a habitat that is structurally intact but devoid of large vertebrate species – and this trend is expected to intensify in the near future. Defaunation has myriad ecological and socio-economic consequences. The disappearance of large vertebrate species, for example, can degrade ecosystem services, change evolutionary trajectories, and even impact human health. This panel discussion will provide a platform for scientists, conservation practitioners, and the donor community to take a critical look at how current conservation strategies can be strengthened to deal with this global challenge. The discussion will focus on ways to integrate findings from recent scientific studies into conservation actions that can effectively address the defaunation crisis.
http://www.izw-berlin.de/panel-discussion-defaunation.html
As Covid-19 sweeps across the globe, one thing is becoming clearer than ever – the mismanagement of our environment is having a devastating effect on both global health and the wider economy. From Ebola, MERS and SARS to bird flu and Zika virus, the increasing trend of novel disease outbreaks, topped off by the Covid-19 outbreak, is not a chance happening. Rather, it is an externality of how we currently do business across the world. Our globalised network of travel and trade is dramatically changing the way we use land. We live in a world where as much as half of the world’s tropical forest has been converted to agriculture and human settlements. And this process is accelerating. The increasing human activity of unchecked land-use change and the overexploitation of natural assets is often referred to as the Anthropocene, a term which covers the irrevocable altering of our planet’s landscapes, oceans and atmosphere by humans. What does the Anthropocene mean for human health? Since 1940, land-use change has been the leading driver of zoonotic diseases. As human encroachment on nature continues to drive the decline of species and ecosystems, opportunities for pathogens to spread between wildlife, livestock and humans become increasingly abundant, causing a continuous cycle of viral spill-over and spread. Covid-19 is just the latest global example of when viruses of two different strains interact. As our global economy accelerates the rate of unchecked land-use change, so too does the pace of pandemic emergence. If we fail to change, we will soon reach a new pandemic era; indeed, some might say it has already arrived. Our current approach is to wait for outbreaks to start, and then design drugs or vaccines to control them, but we have seen with Covid-19 that this approach is not fit for purpose. As we wait for a vaccine, the pandemic has already devastated the global economy and hundreds of thousands of people have died. Equally, our chances of finding solutions to ever more prevalent disease outbreaks reside in the preservation of our natural capital. Approximately 25% of all drugs used today are derived from rainforest plants. And yet, we are far from capitalising on the full potential of nature-based solutions, having catalogued less than 15% of species now alive. So, as we further drive deforestation and species extinction, we, in turn, destroy our most precious library of innovation. Could it be that we have already wiped out the best vaccine for Covid-19? The role of investment While the understanding of climate change as a systemic risk to portfolios is becoming more established, the identification of biodiversity-related risks is still limited. In fact, despite the risks, the vast majority of the world’s biggest investors are paying very little attention to the impacts of their investments on biodiversity. ShareAction’s recent analysis finds that not one of the world’s largest asset managers has published a dedicated policy on specific biodiversity risks and impacts, and only 11% reference a need to mitigate the negative impacts on the natural environment in their investment policies. Shockingly, 86% of asset managers still make no reference to ecosystem protection, natural capital or biodiversity in their policies. To make matters worse, less than half of the assessed firms engage with portfolio companies on corporate strategy on biodiversity, and even fewer ask for better disclosure of the impacts of company supply chains on natural ecosystems.
This is in spite of the fact that many of these companies are engaging in activities that harm natural habitats through land-use change, overexploitation of resources, and pollution. Investing in a healthy future With the spectre of increasing disease outbreaks looming and the ever-rising economic cost of Covid-19, we cannot afford this biodiversity oversight in asset management. The financial industry has a critical role to play, as investors have the power to block each step in the chain of disease emergence. Through the ownership and financing of companies worldwide, investors have the power to influence behaviour and drive change. We must address the unchecked deforestation and wildlife exploitation that has become a feature of everyday business. By valuing things like natural capital in investment decisions, we can put pressure on industries that harvest tropical timber and wildlife products, reduce our risk of a pandemic epoch and safeguard the medicine of the future. This means raising the material profile of natural capital in investment decisions. Organizations like Global Canopy are doing good work in this area, but we must do more. If we do not act, our current trajectory could see the cost of future pandemics rocket into the tens of trillions. Indeed, the world’s leading biodiversity experts have warned that the current coronavirus pandemic is likely to be followed by even more deadly and destructive disease outbreaks – unless we halt the destruction of the natural world. As we enter a critical decade for biodiversity, the ongoing Covid-19 crisis must motivate asset managers, pension funds, and insurers to bring biodiversity into the heart of financial markets. The transformative change that we need to avoid future pandemics and other catastrophic consequences of environmental decline will simply not happen without the financial sector stepping up to the challenge.
https://ingena.co.uk/2020/08/04/covid-19-and-the-changing-nature-of-investment-in-the-anthropocene/
The United Nations reported that 1 million plant and animal species were threatened with extinction in 2019, with many just decades away from a tipping point. The same report also found that three-quarters of land-based environments and two-thirds of marine-based environments have been detrimentally altered by human activities. In order to reduce habitat loss and slow extinction rates, we have to understand how human activities threaten and endanger ecosystems. Can Human Activities Destroy Ecosystems? Humans impact the environment in a variety of damaging ways. Extracting natural resources, polluting air and waterways and razing wild landscapes are some of the most damaging examples of industrial destruction. These activities can destroy some or all of an entire ecosystem, wiping out the plants and animals that call these ecosystems home. As the climate crisis continues to raise temperatures worldwide, experts say continued human encroachment on wildlife threatens the biodiversity that is the hallmark of wild ecosystems. There are currently 77 animals listed as “extinct in the wild” by the International Union for Conservation of Nature (IUCN), an organization that publishes a “red list” of species on the brink of extinction. The IUCN also reports that over 41,000 species are threatened, though not on the brink of extinction, accounting for 28 percent of the total of all species assessed by researchers. How Many Ecosystems Have Humans Destroyed? Scientists have found that less than 3 percent of the earth’s total land can be considered to still have ecological integrity, a framework used for measuring restoration and mitigation efforts to manage and preserve ecosystems. Humans are overusing the earth’s biologically productive land — including cropland, fisheries and forests — by at least 56 percent, which destroys the ability of these lands to provide important ecosystem services like storing carbon emissions or protecting wildlife. At least 75 percent of the earth’s ice-free land has also been significantly altered. The oceans are increasingly polluted as well, and the earth has lost over 90 percent of wetlands since 1700, according to reporting from the Guardian. Between 2009 and 2018, the U.N. Environment Programme (UNEP) found the world lost about 11,700 square kilometers of coral — equivalent to 14 percent of the global total — while more than 30 percent of the world’s reefs have been affected by rising temperatures. Coral reefs are extremely important natural resources — home to about 25 percent of the ocean’s fish and a wide range of other species. What Are the Ecosystems Destroyed by Humans? According to a 2020 study published in One Earth, humans significantly altered ecosystems across an area the size of Mexico, as 58 percent of the earth’s land ecosystems experienced moderate or intense pressure from human activity. The authors also found that out of the earth’s 14 biomes and 795 ecoregions, 46 ecoregions spanning 10 biomes have been highly affected by human-caused destruction, leading to severe ecosystem and biodiversity loss. The Living Planet Index found a 68 percent decrease between 1970 and 2016 in population sizes of mammals, birds, amphibians, reptiles and fish, particularly severe in the tropical subregions of the Americas, where wildlife populations have decreased by 94 percent. Over 3 million species that live in the Amazon rainforest are now threatened by human-caused ecosystem collapse. The oceans are also under threat.
UNEP and other experts have projected that by 2050 there will be a 70-90 percent worldwide decrease in live coral reefs because of climate change, and that coral reefs may even become extinct within our lifetimes. Over one-third of marine mammals and one-third of sharks, shark relatives and reef-forming coral are now threatened with extinction. How Do Humans Destroy Ecosystems? One of the greatest threats to the earth’s ecosystems is the food on our plates. Humans destroy nature, accelerate climate change and endanger biodiversity as a result of an increasingly industrialized, livestock-based food system. One U.N. report found that more than one-third of the world’s land surface and nearly 75 percent of freshwater resources are devoted to producing food to feed a rapidly growing population. The report also found that in 2015, 33 percent of marine fish were being harvested at unsustainable levels. In a paper published in Science in 2006, scientists found that marine ecosystems were experiencing unprecedented population loss because of overfishing, estimating that marine biodiversity would eventually collapse by 2050 if humans continue along the same unsustainable trajectory. In addition to the detrimental effects of overfishing, livestock production is to blame for biodiversity loss. The U.N. Food and Agriculture Organization (FAO) found that 26 percent of the planet’s ice-free land is used for livestock grazing, with 33 percent of cropland used for livestock feed production, together threatening to decimate the earth’s biodiversity. Livestock production is one of the biggest drivers of climate change — animal agriculture alone is responsible for at least 16.5% of global greenhouse gas emissions. Animal agriculture is also a well-known source of air and water pollution, threatening local ecosystems by overloading waterways with fertilizer, manure and pesticides. One of the world’s largest beef producers is Brazil — cattle farming there causes as much as 80 percent of Brazil’s deforestation. Brazilian cattle ranching accounts for the release of 392 million tons of carbon into the atmosphere every year, equivalent to 17 percent of the country’s total emissions. This figure does not include land use change, which would add even more. If global meat consumption continues to grow — and it has already more than doubled since 1990 — human activities like cattle ranching will continue to destroy the earth’s most vital ecosystems, including the Amazon rainforest. Hunting and Fishing Hunting and fishing are two human activities that also destroy wild ecosystems. Whenever humans remove animals from their ecosystems — whether for food or sport — it harms those ecosystems in direct and indirect ways. For example, research shows that overhunting can kill forest trees because of the reduction of mammals that eat seeds — posing a risk to the wider ecological dynamics of tropical forests. In addition, overhunting can decrease species diversity, altering interactions between different types of animals and other creatures, disrupting migration and hibernation patterns and damaging natural food chains, leading to uneven population growth across species — all of which creates a negative impact on ecosystems. Similarly, fishing can damage seabed ecosystems in unintended ways.
A common practice in the commercial fishing industry is trawling, which drags up plants and coral or leads to bycatch, in which marine species such as dolphins, whales, sea turtles and sharks are caught unintentionally and discarded. Trawling stirs up the water in a way that increases sediment and blocks sunlight, creating ocean dead zones. Introduced Species Invasive species are a major threat to ecosystems, driving habitat loss and endangering biodiversity. Some species are deliberately introduced by humans to an area, including for pest control and as imported pets. These deliberate introductions can have a detrimental effect on local ecosystems. For example, because pet pythons released in the Everglades have few natural predators there, their introduction has decimated local species. Invasive pythons have caused raccoon and opossum populations to plummet by 99 percent, effectively obliterating the populations of marsh rabbits and foxes in some areas. Other species are introduced unintentionally through transportation or trade. For example, zebra mussels attach themselves to boats, which enables them to spread easily through bodies of water. The effects of these zebra mussels are akin to an “aquatic pandemic,” increasing the chances of algal blooms that kill native species. Land Use Change Land use change is the human-led process of transforming natural landscapes, whether by direct or indirect action. The Intergovernmental Panel on Climate Change (IPCC) has found that land use change for agriculture can lead to significant habitat and biodiversity loss, as well as land and water degradation. Converting wild landscapes to farmland or for other human uses has caused most of the world’s deforestation and desertification. Deforestation is the purposeful clearing of forested land, either through clear-cutting or selective logging, to obtain wood for fuel, as well as for manufacturing, grazing and growing crops like corn and soy for farmed animal feed. It can lead to biodiversity loss by disrupting the habitats of local species, one of the main causes of extinction. In 2021, 9.3 million acres of trees in the tropics were lost due to rising populations, agriculture and energy demands. Deforestation in the Amazon has forced animals out of their natural ranges as existing habitats are clear-cut, removing their sources of shelter, food and water. It also causes soil erosion and adds to climate change pollution by destroying natural carbon reserves like forests. Soil erosion and land degradation can cause congested and polluted waterways, increased flooding, and loss of arable land through a process known as desertification. Desertification is a consequence of land degradation, often caused by overgrazing, leading to the deterioration of habitats as the land transforms into desert-like conditions. The U.N. has found that arable land loss is 30 to 35 times higher than the historical rate, and that over 12 million hectares of land are lost each year to desertification. Pollution Pollution is the introduction of contaminants into a habitat with adverse consequences. Pollutants include everything from car exhaust and the burning of coal to sewage and pesticides, all of which negatively impact land, water and air. Land pollution can be caused by agriculture, mining, landfills, construction, nuclear waste, urbanization and industry. Land pollution can also poison groundwater through a process called leaching.
The Environmental Protection Agency has discovered that over 20 percent of lakes and 30 percent of streams in the U.S. are polluted by sources like animal waste from industrialized animal agriculture, storm water, waste water, fossil fuels and fertilizers. Dairy farming, for example, causes water pollution, with animal waste and fertilizer making their way to the oceans and causing one of the largest dead zones ever recorded in the Gulf of Mexico. While most air pollution comes from energy use and production, researchers have concluded that fine particle air pollution from industrialized agriculture leads to over 17,000 deaths in the U.S. per year, the bulk of which comes from animal agriculture. Resource Exploitation Research has shown that humans are depleting natural resources at almost double the rate at which they can regenerate. By 2050, humans will need 2.5 earths to meet our resource usage demands. The most exploited resources include sand, water, fossil fuels, palm oil, trees and soil. The overuse of these resources — caused by poor farming practices, overpopulation, logging and overconsumption — can create water shortages, oil and mineral depletion, forest cover loss and species extinction. The expansion of agriculture — mostly for beef production — has been a primary driver of forest loss and degradation. On-farm food waste also has negative impacts, as the water, fuel and fertilizer resources used to produce, pack and transport the food ultimately go to waste too. How Can We Reduce the Impact of Human Activity on the Environment? The U.N. has urged the international community to pursue sustainable development to mitigate the impacts of climate change. Unfortunately, the United States is not on track to meet U.N. sustainable development goals, despite President Biden rejoining the Paris Agreement in 2021. The U.N.’s 2022 Sustainable Development Goals Report recommends that countries implement the following changes in order to reduce human-caused damages to the earth and its ecosystems: - Protect and restore the world’s wetlands, which are used as breeding grounds for 40 percent of the world’s plant and animal species - Reduce food waste - Move away from reliance on natural resources - Increase climate finance that invests in actions to reduce greenhouse gas emissions - Fight ocean acidification - Reduce the flow of litter, waste and runoff into waterways - Increase protected areas of the oceans - Improve regulations and increase the monitoring and surveillance of overfishing - Reduce the felling of forests, mainly caused by agricultural expansion - Fight species extinction by moving away from unsustainable agricultural methods, logging and over-harvesting of wild species - Increase protections identified as key for global biodiversity In a special Intergovernmental Panel on Climate Change (IPCC) report on land, climate scientists also recommended that people reduce their meat consumption. What You Can Do The increase in industrialized animal farming across the globe is a primary driver of biodiversity loss and ecosystem degradation. In 2021, research supported by the U.N. Environment Programme predicted that devastating biodiversity loss will continue unless our food system undergoes radical change, like shifting to plant-rich diets. Research has shown again and again that plant-based foods are better for the environment and result in lower greenhouse gas emissions than meat and dairy.
A study published in PNAS in 2022 also reinforced the importance of an environmentally sustainable food system with less reliance on meat and dairy. Reducing your consumption of meat and reducing food waste are two of the most important individual climate actions you can take. You can also vote for politicians who demonstrate a commitment to crafting public policy to protect ecosystems. Zane is an activist-scholar, co-editor of Queer and Trans Voices: Achieving Liberation Through Consistent Anti-Oppression, and the founder of Roots DEI Consulting and Policy.
https://sentientmedia.org/humans-destroying-ecosystems/
THE pangolin, known as the anteater, has recently been included on the list of animals that are threatened with extinction under Appendix I of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). CITES’s Appendix I lists species that are the most endangered among its listed animals and plants. They are threatened with extinction, and CITES prohibits international trade in specimens of these species, except when the purpose of the import is not commercial, for instance, for scientific research. The listing came following the alarming decrease in the global population of the unique species of mammal that feeds on insects, particularly ants, with the hope that a global effort to prevent the illegal wildlife trade would also follow. Termites are beneficial to forest ecosystems as they feed on dead plant materials, wood, leaf litter, soil or even animal dung. The recycling of wood and plant matter makes termites ecologically important, as the process helps improve soil nutrients, boosting plant growth in the forest. Home invaders At home, however, some species of termites or white ants are horrible pests. They can cause great damage to home structures. Uncontrolled, they could literally bring the house down and cost millions worth of real-property investment. Who would choose to buy a home invaded by termites, anyway? In the Philippines, homeowners spend huge amounts of money hiring professionals just to exterminate the sneaky termites. Termites leave a thin layer of the wood they feed on for their protection. By the time they are discovered, they have already caused damage that would cost homeowners a fortune in repairing the structural integrity of their house. Termite controller In Palawan, the only place in the Philippines where pangolins thrive, termite control is not much of a problem. However, the rampant illegal wildlife trade targeting unique species of plants and animals, including pangolins, is becoming a serious problem, officials of the Department of Environment and Natural Resources (DENR) said. Director Theresa Mundita S. Lim of the DENR’s Biodiversity Management Bureau (BMB) said maintaining a healthy population of pangolins helps control the population of termites. While there are other animals that eat or prey on termites, pangolins are the pest’s most notorious predator. Big appetite for ants Besides termites, pangolins have a big appetite for ants. A single adult pangolin can eat up to 200,000 ants in one meal, or more than 70 million in a year. “Without pangolins, there is a big chance the termite population would grow and spread throughout the province, increasing the risk of transfer to other areas, where they can also cause trouble,” Lim said. Josie de Leon, chief of the BMB’s wildlife unit, said termites are pangolins’ favorite meal. “There are other animals that feed on termites, but only pangolins can effectively control their population,” she said. Pangolins climb trees to feed on termites that establish colonies high in the trees. While termites feed mostly on dead wood, they can also cause severe damage to trees with weak resistance or protection against attacking white ants. The unchecked termite population compromises ecological balance, she said. Lim urges real-estate and property developers to invest in protecting and conserving pangolins and their habitats to prevent what could be a major problem of termite home invasions once pangolins become extinct.
“Real-property developers should invest in biodiversity conservation because a healthy ecosystem helps protect their investment from damage caused by termites,” she said. Most hunted The pangolin is one of the world’s most illegally traded wild animals. Based on the estimate of the DENR’s BMB, almost a thousand Palawan pangolins were illegally traded from 2000 to 2013. China is known to be the destination of pangolins smuggled out of the country—whether dead or alive—for their alleged medicinal value and aphrodisiac properties. According to Lim, hunters of pangolins are after their meat, skin and internal organs. In Palawan, wildlife law enforcers reported that even indigenous people are hunting these defenceless anteaters. “They know how to hunt them and they know where to find them. They catch them and sell them to unscrupulous wildlife traders,” Lim said. Global protection The pangolin’s inclusion on the CITES list of endangered species highlights the importance of maintaining a healthy population in the wild, along with its ecological importance in helping regulate insect populations and ensure the survival of seedlings. The upgrading of the pangolin to CITES Appendix I came with stricter penalties for those involved in the illegal trade and killing of the harmless mammal. Environment Secretary Regina Paz L. Lopez recently warned against catching, trading or killing anteaters, as mandated by CITES. Adopted by over 180 countries, CITES is an international agreement that aims to ensure the survival of wild plants and animals. Appendix I lists plants and animals that are threatened with extinction; thus, trading them internationally for commercial purposes is strictly prohibited. Of the eight pangolin species worldwide, only one can be found in the Philippines. Locally known as balintong, the Manis culionensis is endemic to Palawan province. Palawan’s anteater is critically endangered, with its numbers highly threatened by its low fecundity, or number of offspring produced per year, loss of habitat, and the illegal trade of its scales and meat. Prior to its inclusion in Appendix I, the pangolin was listed in Appendix II, which provides a modest level of protection, as it requires exporting countries to ensure that any traded pangolin specimens have been legally obtained and that their export will not be detrimental to the species’ survival. “Further endangering the pangolin is a crime that threatens our biodiversity and the fragility of our ecosystems. The DENR will not hesitate to apply the full extent of the law to anyone caught catching, killing or selling pangolin,” Lopez warned. Republic Act 9147, or the Wildlife Resources Conservation and Protection Act, prescribes various penalties for illegal acts toward threatened species. Stricter penalty Under the law, illegal transport of pangolin may merit imprisonment of up to one year and a fine of up to P100,000. A jail term of up to four years and a fine of P300,000 await those found guilty of trading pangolin. The killing of a pangolin carries a jail term of up to 12 years and a fine of up to P1 million. The ban on the international trade of pangolins was proposed and approved by the Philippines and the United States during the World Wildlife Conference of the CITES Conference of Parties in Johannesburg, South Africa, held from September 24 to October 5.
With the illegal trade compounded by habitat loss and the species’ low rate of reproduction, officials of the DENR believe that it would be impossible for the pangolin population to recover, given current rates of catch. The inclusion of the Philippine pangolin in Appendix I of CITES would help prevent the further decline of its population in the wild and ensure its continued performance as a regulator of insect populations.
https://businessmirror.com.ph/2016/10/22/pangolin-termite-control-specialist/
According to a draft UN report set to be released on May 6, 2019, up to one million of the world’s species are at risk of extinction due to human activity. It highlights how humanity has undermined the natural resources upon which its very survival depends. This 44-page draft report, which summarizes a 1,800-page assessment of scientific literature on the state of Nature conducted by the UN, will be examined on 29 April 2019 by 130 nations meeting in Paris, France. Key Findings of Report - Extinction: It warns of a forthcoming rapid acceleration in the global rate of species extinction. With up to 1 million species at risk of extinction, and one-fourth of known plant and animal species already threatened, loss of species is tens to hundreds of times higher than it was, on average, over the last 10 million years. - Causes: Direct causes of species loss are continuously shrinking habitat and land-use change, hunting for food, illegal trade in wildlife body parts, climate change and pollution. - Impact on Ecosystem: Almost three-fourths of land, half of marine environments and half of inland waterways have been ‘severely’ changed by human activity. - This is mainly due to human activities, like overconsumption, illegal poaching, deforestation and fossil fuel emissions, which further push ecosystems toward a point of no return. - Impact on Humans: Such depletion will harm humans, especially indigenous vulnerable groups and those living in the poorest communities. - Threat equivalent to climate change: The accelerating loss of clean air, drinkable water, forests, pollinating insects, protein-rich fish and storm-blocking mangroves are a few of the diminishing services offered by Nature, and this loss poses a threat no less serious than climate change. - Dependence on Nature: More than 2 billion people rely on wood fuel for energy, 4 billion rely on natural medicines, and 75% of global food crops require animal pollination. - It cautions against climate change solutions that may accidentally harm nature. Example: Biofuel use combined with “carbon capture and storage” (i.e., sequestration of the CO2 released when biofuels are burned) is key to the transition to green energy on a global scale. But the land needed for growing biofuel crops may cut into food production, the expansion of protected areas or reforestation efforts. Way Forward We need to recognise that climate change and the loss of Nature are equally important, not just as environmental issues, but also as development and economic issues. The unsustainable methods used for our food and energy production undermine the regulating services we get from Nature; therefore only “transformative change” can stem the damage. 22 May: World Biological Diversity Day Every year, May 22 is observed as International Day for Biodiversity or World Biodiversity Day. This date commemorates the adoption of the agreed text of the Convention on Biological Diversity at UNEP Headquarters, Nairobi, on 22 May 1992. In 2000, the UN General Assembly, via resolution 55/201, decided to celebrate World Biodiversity Day on May 22 instead of December 29, which was previously designated as International Biodiversity Day. Theme The theme for World Biodiversity Day 2017 is “Biodiversity and Sustainable Tourism”. The theme is in sync with the observance of 2017 as the “International Year of Sustainable Tourism for Development” as proclaimed by the United Nations General Assembly.
The UN has already declared 2011-20 as the United Nations Decade on Biodiversity to support and promote efforts to reduce the loss of biodiversity. About Global Biodiversity The term Biological Diversity was first coined by wildlife scientist and conservationist Raymond F. Dasmann in 1968, and came into widespread use during the 1980s. Biodiversity refers to the “totality of genes, species and ecosystems of a region”. There are three levels of biodiversity viz. species diversity, ecosystem diversity and genetic diversity. The term biodiversity is used to address several problems in the conservation of the environment, including loss of species, destruction of habitats, invasive species, genetic pollution, overexploitation and the effects of climate change on biodiversity. The spatial distribution of organisms, species and ecosystems is called Biogeography. Biodiversity is unequally distributed on Earth and varies across regions on the basis of climatic and geographical factors. On earth, the highest biodiversity is found in the tropics. Terrestrial biodiversity is much greater than that of the oceans. It is estimated that there are 8.7 million species on earth, of which 2.1 million live in the oceans while the rest are terrestrial. Terrestrial biodiversity is greater at the equator than at the poles. Around 90% of the world’s biodiversity is found in tropical rainforests, which occupy less than 10 percent of Earth’s surface. Marine biodiversity is highest along the coasts of the Western Pacific, which is known for its high sea temperatures. Around 70% of the world’s species are found in 12 countries viz. Australia, Brazil, China, Colombia, Costa Rica, Congo, Ecuador, India, Indonesia, Madagascar, Mexico and Peru.
https://currentaffairs.gktoday.in/tags/ecosystem
Part 2 in a 2-part series about model risk management in the areas of artificial intelligence (AI) and decision intelligence (DI) for enterprises. Read Part 1. We are moving artificial intelligence (AI) systems from the lab, where we can control for a limited set of variables, to potentially massive-scale implementations, where variables will propagate and multiply. We must have an AI engineering discipline that can help predict and adjust for those variables. From single- to multilink AI/DI decisions Early enterprise AI implementations are still quite limited and tend to be focused on single-link predictions such as: - “What’s the chance that this customer will churn?” - “What is the predicted lifetime revenue from this customer?” - “Which clause in this regulation has been changed since we last reviewed it?” - “Where are the logos appearing in this video?” In a typical enterprise use case, these single-link AI models provide insight used by human decision-makers to determine the best next step based on this information. The decision space – the inputs and the outcomes that result from them – is well-defined and tightly limited. For instance, an AI system may display information such as “churn risk” or “high lifetime value” – based on a small set of criteria – on a customer service representative’s terminal when they are talking with a customer. Or a natural language processing (NLP) AI system might be trained to identify and distribute updates to a specific set of regulations. A human analyst can then adjust contract terms or modify a limited universe of product or service features to comply with those changed legal requirements, with significant business damage possible if the NLP system is poorly designed and trained. The risks of these single-link systems fall into several categories, as we discussed in Part 1 of this series: - Bias in data selection - Training on data sets that are not representative of future conditions (the Black Swan problem) - Societal or external bias However, these single-link use cases represent only a tiny fraction of potential AI/DI “injection and inflection” points in a typical enterprise or other large organization. Future AI systems will involve multiple models in a cause-and-effect cascade, as the systems are used for both decisions (decision intelligence) and process improvement. For instance, a system used for capital allocation might include one model to predict future customer growth, another to predict the impact of a marketing allocation spent on customer service on likelihood-to-recommend (L2R), and a third model to represent the likely conversion rate from a sales campaign. A decision regarding the best course of action to increase revenues might use a system that relies on all of the models working together (a toy sketch of such a cascade follows below). This is qualitatively more complex – and therefore involves a qualitatively different level of risk – than single-link AI systems. As this example shows, AI-driven decision processes will increasingly determine goals and incentives and make or contribute to multiple decisions that underlie all of an organization’s strategic and tactical choices. Finding a way to manage the associated risks will become critical to the success or failure of enterprise initiatives or even the survival of the entire enterprise. Complexity naturally increases with each new level of AI maturity and as multiple AI systems are woven into a firm’s processes.
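To make the capital-allocation example above concrete, here is a minimal, purely illustrative Python sketch of such a multilink cascade. Every function, name, and number in it is a hypothetical stand-in invented for this post, not part of any real system; the point is only that each model's output becomes another model's input, so an error or bias in any single link propagates into the final decision.

```python
# Hypothetical multilink decision system: three stand-in "models" feed one
# capital-allocation decision. All curves and constants are invented.
from dataclasses import dataclass

@dataclass
class Scenario:
    service_spend: float   # dollars shifted into customer service
    campaign_spend: float  # dollars spent on a sales campaign

def predict_customer_growth(spend: float) -> float:
    """Stand-in for model 1: prospects reached by campaign spend."""
    return 50.0 * spend ** 0.5            # toy diminishing-returns curve

def predict_l2r_lift(service_spend: float) -> float:
    """Stand-in for model 2: lift in likelihood-to-recommend (L2R)."""
    return min(0.2, service_spend / 1e6)  # capped toy response

def predict_conversion_rate(l2r_lift: float) -> float:
    """Stand-in for model 3: conversion rate given the L2R lift."""
    return 0.03 + 0.1 * l2r_lift

def expected_revenue(s: Scenario, revenue_per_customer: float = 120.0) -> float:
    # The cascade: model 2 feeds model 3, and both combine with model 1.
    prospects = predict_customer_growth(s.campaign_spend)
    lift = predict_l2r_lift(s.service_spend)
    conversion = predict_conversion_rate(lift)
    return prospects * conversion * revenue_per_customer

# The "decision": pick the best of a few candidate allocations.
candidates = [Scenario(2e5, 8e5), Scenario(5e5, 5e5), Scenario(8e5, 2e5)]
best = max(candidates, key=expected_revenue)
print(best, round(expected_revenue(best), 2))
```

Swap any one stand-in for a biased or mis-trained model and the ranking of candidate allocations can silently flip, which is exactly the qualitatively different multilink risk described above.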
As AI-driven enterprise systems become more complex and deeply embedded, new and emerging risks also become much harder to identify and address. Unintended and intended decision externalities As described above, decisions automated by AI/DI within an enterprise are separated by layers of context from both source data and external risk points. That makes it more difficult to identify biases in the data and to trace any issues back to their root cause. But a deeper risk can arise in a circumstance that doesn’t even reflect bias from the enterprise’s point of view. One of us (Kamesh) worked with a law firm as part of a strategic planning initiative. The project found that to maximize profits, the company should focus on high-net-worth clients. Logically, this means that fewer resources and less attention would be focused on cases coming from, for example, the U.S. Consumer Financial Protection Bureau. But, while those cases were fundamentally less lucrative for the firm, they arguably provided an equally or more desirable outcome from the point of view of social justice. This kind of optimization is, of course, pervasive within enterprises. Businesses are in most cases beholden to deliver profitability or other results that benefit their immediate stakeholders and often treat external societal impacts as cost-free externalities. This is nothing new. What is new is when this kind of selection is supercharged first by the availability of masses of data and, as we move forward, by AI models that do an ever-better job of excluding all but the most lucrative of clients from any business. Telecom companies might, for the first time, understand that there is a class of customers who are more expensive to service through the call center than the revenues they receive. On the flip side, they will provide “marquee” service to VIP customers with wide social influence, such as politicians and media stars. In a multilink context, this is how net-neutrality decisions made based on optimizing network-management practices might trickle down to help determine different service outcomes for different groups of customers. The inferior services available to less-wealthy clients are not the result of design, but they are the result of myriad decisions driven by unconscious and self-amplifying biases. The same pattern plays out every time an enterprise decides which suppliers to do business with or where to set up shop based on regulatory structures or any of the millions of decisions that underlie business relationships and structures. And most of this is happening out of sight, deep inside enterprise systems and in databases that have grown like coral over decades and through billions of transactions. The impact of any one of the billions of decisions that underlie each transaction is so remote from the final output or outcome as to be invisible. There truly are ghosts in the machines already. In the context of AI, this means that the data we intend to use to train hugely powerful AI systems is opaque. We can’t see what biases are incorporated – whether by design or default – in the data and the architectures of our databases and repositories. But biases are there, and these tools are guaranteed to amplify them. Indeed, the use of AI creates a new smoke-screen layer for companies that manifest intended bias: a knowing disregard of external societal impacts.
Traditional IT risk models do not catch the external societal implications that can flow from ignoring non-economic internal risks and externalities. In many cases, those non-economic risks arise when decisions are unknowingly made based on flawed or biased data, and they are amplified when AI/DI is introduced into enterprise business-support systems. For these reasons, we must develop a new discipline to manage enterprise AI risk. Getting started The sheer scale of enterprise data that is planned to be used for AI is considerable. It is potentially all the data that humans ever have or will create and store electronically. With massive data, computing power, and ever-more sophisticated algorithms, the outcome space – the possible results of a chain of decisions made by AI/DI – is equally large. The risk of unintended outcomes may very well grow nonlinearly with the complexity of the application for which it is used. We need to do everything we can to stay ahead of that risk. We are experiencing the results of delaying the hard work on climate change. We have learned an immense amount, but it is only a fraction of what we need to begin healing our planet. We don’t really understand how to put that knowledge to work as we literally push against the tide. The resulting damages are already massive, and some are likely irreversible in our lifetimes – or even those of our children’s children. We can’t make that same error when it comes to AI. If we wait for the hidden biases and other flaws hiding in our data to emerge out of AI-based decision systems, it will be too late. Whether we like it or not, these powerful tools are placed on a knife’s edge: Get this right and we could make a tremendous save; get it wrong, and we are doomed to live with the unintended consequences. Today is the time to begin. We need the brilliant minds designing these systems and imagining how they might be used to also turn their attention to understanding what is really going on inside the black boxes of our data repositories and computing processes. We need to be able to see where the properties inherent in those systems reside and how they might interact. These risks will emerge from the depths of data, processes, and decisions. They will not be obvious. And they will propagate across the massively interconnected webs of commerce. These risks can be addressed only by associating them correctly with the AI/DI-driven decision points (across the layers of context). This requires that we invest now and address those risks in the core setup of AI/DI systems. We need to look into the future to identify where and how AI-related risks will emerge. And we need to share that knowledge with everyone who has a stake in this project, which is everyone on this planet. This is the only way we can hope to reach a consensus on the level of risk we can accept and the rules we need to put in place to control them. Time for a new risk-management discipline The good news is this: When the new engineering disciplines that built airplanes and skyscrapers emerged, robust quality assurance and risk management practices grew alongside them. Those disciplines learned to detect, mitigate, and eliminate unintended consequences from these powerful new technologies. By taking AI and DI as seriously as engineering disciplines, which means recognizing that AI and DI models are artifacts that must be rigorously managed, we have every hope of obtaining the very best from these powerful new tools while limiting any negative consequences. 
In conclusion, each one of us plays a part in contributing to this new discipline. - Developers should go beyond simply creating good training data to consider the larger context in which that data was developed, in order to understand any selection or other biases it may contain. As they build models, developers should go beyond evaluating accuracy to modeling the decision-making context in which those models will be used and mapping their effects within and beyond the organization that will use them. They should also add human inspection and control points as much as is feasible, especially as systems become more and more complex. - Business decision-makers should support developers as they strive, as above, to answer broader questions regarding the impact of their AI and DI models. - Policy-makers should sponsor programs that create the new discipline of AI/DI risk management and advance technologies that lead to transparency and accountability. Policy-makers should insist on rigorous answers as to the potential unintended consequences of this powerful technology. - Risk managers should expand IT risk initiatives to cover both model and decision risk (decisions usually involve multiple models, as above) and create collaborative structures that allow multiple stakeholders to participate in the identification, review, and mitigation of the risks that emerge from the automation of predictions and decisions that were previously made only by humans.
https://www.digitalistmag.com/cio-knowledge/2019/01/30/from-data-to-model-risk-enterprise-ai-di-risk-management-challenge-part-2-06195979
Roundtable Recap: Realizing Responsible AI in Washington, DC Last month, Credo AI, in partnership with our investors, Sands Capital, kicked off our Realizing Responsible AI roundtable series in Washington, D.C. with policymakers, industry, academia and other key stakeholders in the Responsible AI space to discuss how to move from principles to practice. Our Founder and CEO, Navrina Singh, was joined by Northrop Grumman’s VP & Chief Compliance Officer, Carl Hahn, and EqualAI’s President and CEO, Miriam Vogel, for a discussion focused on driving organizational change, starting with first-movers in industries including HR tech, healthcare, financial services and defense. Here are the key takeaways from our conversation in DC: 1/ Embed enterprise values into company culture to accelerate RAI adoption. Enterprise values help shape people’s behaviors and actions, including the design, development and deployment of AI. It’s more than simply checking a box – it’s the creation of an actionable culture that empowers employees to share in your mission to realize the responsible use of AI. The foundation of this type of culture is alignment on enterprise values across the organization. Navrina Singh recommends that once those values are set, organizations must work to codify these values, build them into their organizational infrastructure, observe the impact on employees and repeat the process with diverse voices providing input at every stage. 2/ Mitigate unintended consequences throughout an AI model’s lifecycle. “AI is a reflection of our society and the companies building and deploying it,” said Miriam Vogel. She suggests that companies take a closer look at the potential beneficiaries and the unintended harms associated with automated decision-making tools and AI models. “Without representation in the development and testing phase, there is serious danger that natural biases will remain unexposed until full-scale deployment.” Even if you aren’t building AI, you may still be using it. In fact, 80% of global HR departments use AI tools like resume analyzers and chatbots (SHRM, 2019). New York City is one of the first localities to pass legislation requiring any algorithmic hiring tools to be audited annually for disparate impact, beginning January 1, 2023 (a toy sketch of one such check appears after these takeaways). Laws like these will help shape future regulations and ultimately enable AI to create more equality and inclusivity. Learn more about Credo AI’s audit offering here. 3/ Develop trust, transparency and action with multi-stakeholder engagement. Carl Hahn warned, “Without public trust, without policy leader trust, without the trust of the people using your technology, you will fail.” Carl emphasized that if you can build trust, you can deliver enormous value to your organization. Trust and transparency in AI is a dynamic process, oftentimes requiring engagement across technical and oversight teams. Share your learnings and best practices, and utilize tools that enable multi-stakeholder collaboration. As mentioned in the Credo AI manifesto, the Responsible AI ecosystem needs a community of practice to deliver on the promise of AI. Credo AI is bridging the oversight deficit by operationalizing the behavior-changing policies to align incentives across technical and business stakeholders. To help align your organization, consider establishing cross-functional working groups or a RAI board. Badge programs like EqualAI can help executives identify and reduce unconscious bias and create an action plan to develop and maintain responsible AI governance.
4/ Acknowledge external pressures and the increasingly important role of ESG in RAI. AI policy is expanding, from the EEOC and DOJ guidance to the NIST Risk Management Framework and the EU AI Act, but regulation is not the only pressure organizations are facing. Employees, consumers, investors and businesses are also demanding more transparency, trust and accountability. AI oversight and accountability is quickly becoming a board-level issue as well, given the potential for directors and companies to be exposed to legal liabilities if their AI systems are not designed, developed and deployed properly. “Stakeholders expect us to do this. I know our Board does…. If you aren’t thinking about RAI as a differentiator and an ESG issue, you will be left behind.” - Panelist As part of our commitment to ensure Responsible AI becomes the global standard, Credo AI is bringing together experts including policymakers, data scientists and business leaders across risk, compliance and audit to be part of the solution. Join our community waitlist here.
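As referenced in takeaway 2, here is a minimal, hypothetical sketch of the kind of disparate-impact check an annual audit of an algorithmic hiring tool might run. It applies the EEOC's four-fifths rule of thumb to made-up screening outcomes; the group labels, data, and flagging logic are illustrative assumptions, not Credo AI's or New York City's actual audit methodology.

```python
# Illustrative disparate-impact screen using the four-fifths (80%) rule:
# a group whose selection rate is below 80% of the highest group's rate
# is flagged for review. Data and group labels are invented.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(decisions):
    """Each group's selection rate divided by the highest group's rate."""
    rates = selection_rates(decisions)
    reference = max(rates.values())
    return {g: rate / reference for g, rate in rates.items()}

# Toy audit sample: (demographic group, passed the resume screen?)
sample = ([("A", True)] * 40 + [("A", False)] * 60
          + [("B", True)] * 25 + [("B", False)] * 75)
for group, ratio in sorted(adverse_impact_ratios(sample).items()):
    flag = "FLAG for review" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```

In this toy sample, group B's 25% selection rate is only 0.63 of group A's 40% rate, so it falls below the 0.8 threshold; a real audit would go well beyond this single ratio.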
https://www.credo.ai/blog/roundtable-recap-realizing-responsible-ai-in-washington-dc
GAO recently developed a framework for the use of artificial intelligence (AI) by federal agencies. At the federal level, AI has applications across a variety of sectors, including transportation, healthcare, education, finance, defense, and cybersecurity. The framework consists of four complementary principles: governance, data, performance and monitoring. The purpose of the framework is to ensure accountability and responsible use of AI in government programs and processes. “AI is evolving at a pace at which we cannot afford to be reactive to its complexities, risks, and societal consequences,” the report said. “It is necessary to lay down a framework for independent verification of AI systems even as the technology continues to advance.” The report states that when implementing AI systems, assessing technical performance is needed to ensure that the AI system solves the problem initially identified and uses the appropriate data sets for the problem. Without this assurance, unintended consequences may occur. According to GAO, one example of an unintended consequence is the use of predictive policing software to identify likely targets for police intervention. The intended benefits of such software are to prevent crime in specific areas and improve the resource allocation of law enforcement. GAO’s report detailed a study in which researchers demonstrated that the tool disproportionately identified low-income or minority communities as targets for police intervention regardless of the true crime rates. “Applying a predictive policing algorithm to a police database, the researchers found that the algorithm behaves as intended,” the report said. “However, if the machine learning algorithm was trained on crime data that are not representative of all crimes that occur, it learns and reproduces patterns of systemic biases.” According to the study, these systemic biases can be perpetuated and amplified as police departments use biased predictions to make tactical policing decisions. It is also worth remembering that National Institute of Standards and Technology tests of facial recognition technology found that it generally performs better on lighter-skinned men than it does on darker-skinned women, and does not perform as well on children and elderly adults as it does on younger adults. These differences could result in more frequent misidentification of individuals within certain demographics. A further example was an AI predictive model used in healthcare management: when researchers compared Black and White patients’ health risk scores, they found that because healthcare expenses were used as a proxy for need, Black patients received different risk scores. “However, healthcare expenses do not represent health care needs across racial groups, because Black patients tend to spend less on healthcare than White patients for the same level of needs, according to a study,” GAO said. “For this reason, the model assigned a lower risk score to Black patients, resulting in that group being under-identified as potentially benefiting from additional help, despite having similar healthcare needs.” Through governance practices, management will be able to manage risk, highlight the importance of integrity and ethical values and ensure compliance with regulations and laws, the report said. At the organizational level, governance will help “incorporate organizational values, consider risks, assign clear roles and responsibilities, and involve multidisciplinary stakeholders,” GAO said.
At the system level of governance, this practice will help entities ensure that AI meets performance requirements and achieves its intended outcomes, the report said. According to GAO, three key practices include technical specifications to ensure the AI system meets its intended purpose, ensuring the system complies with regulations, and promoting transparency with external stakeholders over design, operation and limitations. Through data practices, entities will be able to “ensure quality, reliability, and representativeness of data sources, origins, and processing,” the report said. Data practices for the model development of AI systems consist of documenting sources and identifying origins of data, assessing the reliability of data, assessing data variables and assessing the use of synthetic, imputed or augmented data. Regarding data used for system operations, three key practices were established: assessing interconnectivities and dependencies of data streams, assessing quality and any potential biases, and assessing data security for AI systems. The report emphasized the importance of data security and privacy, and GAO said entities that are using or plan to implement AI systems should conduct data security assessments, including risk assessments, have a data security plan, and conduct privacy assessments. Any deficiencies or risks identified in testing for security and privacy should also be addressed. Addressing performance, GAO has established practices that are intended to produce results consistent with program objectives. At the component level, practices put in place for performance would document model and non-model components, define performance metrics that are consistent, assess the performance of each component and assess the outputs of each component. At the system level, the same practices from the component level would be applied, with the addition of identifying potential biases or other societal concerns resulting from the AI system and developing procedures for human supervision of AI in order to ensure accountability, the report said. The last principle, monitoring, would ensure reliability and relevance over time, GAO said. The practices established for continuous monitoring of performance consist of developing plans for continuous or routine monitoring of the AI system, establishing the range of data and model drift that still represents the desired results, and documenting the results from monitoring activities. In addition, when assessing sustainment and expanded use, GAO said the established practices would assess the utility of the AI system to ensure its relevance, and identify conditions under which the AI system may be scaled or expanded beyond its current use.
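To ground the drift-monitoring practice above, here is a small illustrative Python sketch (not from the GAO report) using the Population Stability Index (PSI), one common way to quantify how far production data or model scores have drifted from a validation-time baseline. The synthetic data, bin count, and alert thresholds are all assumptions for the example.

```python
# Illustrative drift check: Population Stability Index (PSI) between a
# baseline sample (e.g., validation-time model scores) and production.
import numpy as np

def psi(expected, actual, bins=10):
    """PSI between baseline and production samples (higher = more drift)."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)  # note: values outside
    a_counts, _ = np.histogram(actual, bins=edges)    # the edges are dropped
    # Convert counts to proportions; a small floor avoids log-of-zero issues.
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_pct = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)    # scores seen during validation
production = rng.normal(0.3, 1.1, 10_000)  # shifted scores after deployment

value = psi(baseline, production)
# Common rule of thumb: < 0.1 stable, 0.1-0.25 investigate, > 0.25 act.
print(f"PSI = {value:.3f}")
```

A monitoring plan of the kind GAO describes would run a check like this on a schedule, log the results, and tie the alert thresholds to the range of drift the entity has decided still represents its desired results.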
https://www.hstoday.us/federal-pages/gao-develops-framework-for-the-use-of-artificial-intelligence-by-federal-agencies/
Auditing is a management tool that can be used to evaluate and monitor your company's internal performance and its compliance with regulations and standards. An audit can also be used to determine the overall effectiveness of an existing system within your company. How do you incorporate compliance auditing best practices to help maximize the compliance, efficiency, and value of your audit? Here are five critical factors for value-added audits.

1. Goal Aligned with Business Strategy

There are many reasons why companies conduct audits:
- Support commitment to compliance
- Avoid penalties
- Meet management system requirements
- Meet corporate or customer mandates
- Support acquisition or divestiture
- Assess organizational structure and competency
- Identify cost saving and pollution prevention opportunities
- Determine alignment with strategic direction

It is vital to define and understand the goal of your compliance audit program before beginning the audit process. Establishing goals enables recognition of broader issues and can lead to long-term preventive programs. Not establishing a clear, concise goal can lead to a waste of resources. Audit goals and objectives should be nested within the company's business goals, key performance objectives, and values. An example of a goal might be to effectively measure environmental compliance while maintaining a reasonable return on investment. Once the goal is established, it is important to communicate it across all functions of the organization to gain company-wide support. Performance measurements should also be communicated and widely understood.

2. Management Buy-in

The audit program must have upper management support to be successful. Management must exhibit top-down expectations for program excellence, view audits as a tool to drive continuous improvement, and work to embed audits within other improvement processes. Equally important, management must not use audit results to take punitive action against any person or department.

3. Documented Audit Program Systematically Applied

Describe and document the audit process for consistent, efficient, effective, and reliable application. Audit procedures should be tailored to the specific facility/operation being audited. A documented program will include the following:

- Scope. The scope discusses what areas/media/timeframe will be audited. The scope of the audit may be limited initially to what is manageable and can be done very well, thereby producing performance improvement and a wider understanding and acceptance of objectives. It may also be limited by identifying certain procedural or regulatory shifts and changes. As the program develops and matures (e.g., management systems, company policy, operational integration), it can be expanded and, eventually, shift over time toward systems in place, prevention, efficiency, and best practices. It is important at the scoping stage to address your timeline. Audits should be scoped to make sure you get them done, but also to make sure you have audited all compliance areas within an identified timeframe.
- Criteria. Compliance with requirements will clearly be covered in an audit, but what about other opportunities for improvement (e.g., pollution prevention, energy savings, carbon reduction)? All facilities need to be covered at the appropriate level, with emphasis based on potential compliance and business risks. Assess the program's strengths, redundancy, integration within the organization, and alignment with the program goal.
Develop specific and targeted protocols that are tailored to operational characteristics and based on the regulations and requirements applicable to the facility. As protocols are updated, the ability to evaluate continuous improvement trends must be maintained.

- Auditor training (i.e., competency, bias). A significant portion of the audit program should be conducted by knowledgeable auditors (e.g., independent insiders, third parties, or a combination thereof) with clear independence from the operations being audited and from the direct chain of command. For organizational learning, and to leverage compliance standards across facilities, it is good practice to vary at least one audit team member for each audit. Companies often enlist personnel from different facilities and with different expertise to audit other facilities. Periodic third-party audits bring further outside perspective and reduce tendencies toward "home-blindness." Training should be done throughout the entire organization, across all levels:
  + Auditors are trained on both technical matters and program procedures.
  + Management is trained on the overall program design, purpose, business impacts of findings, responsibilities, corrections, and improvements.
  + Line operations are trained on compliance procedures and company policy/systems.
  Consider having auditor training conducted by an outside source to teach people how to decide what to audit and how to follow a trail. It can also work well to train internal auditors by having them audit alongside an experienced third party.
- Audit conduct (i.e., positive approach). A positive approach to and rationale for the audit must be embraced. Management establishes this tone and sets the expectation for cooperation among all employees. Communication before, during, and after the audit is vital in keeping things positive. It is important to stress the following:
  - Auditor interviews are evaluating systems, not personal behaviors.
  - The audit is an effective tool to improve performance.
  - Results will not be used punitively.
- Audit reporting. Information from auditing (e.g., findings, patterns, trends, comparisons) and the status of corrective actions are often reported on compliance dashboards for management review. Audit reports should be issued in a predictable and timely manner. It is desirable to orient the audit program toward organizational learning and continual improvement, rather than a "gotcha" philosophy. "Open book" approaches help learning by letting facility managers know in advance what the audit protocols are and how the audits will be conducted. Documentation is essential, and reporting should always align with program goals and follow legal guidance. There is variability in what gets reported, and how, based on the company's objectives. For example:
  - Findings only vs. opportunities for improvement and best management practices?
  - Spreadsheet vs. long-format report?
  - Scoring vs. prioritization of findings (beware of the unintended consequences of scores!)?
  - Recommendations for corrective actions included, or left for separate discussion?
- Corrective and preventive action. Corrective actions require corporate review, top management-level attention, and management accountability for timely completion. A robust root cause analysis helps ensure not just correction/containment of the existing issue, but also preventive action to assure controls are in place to prevent the event from recurring. For example, if a drum is labeled incorrectly, the corrective action is to relabel that drum.
A robust plan would also look for other drums that might be labeled incorrectly, and would add and communicate an effective preventive action (e.g., training or posting signs showing a correctly labeled drum).

- Follow-up and frequency. Address repeat findings. Identify patterns and seek root cause analysis and sustainable corrections. Communications with management should occur routinely to discuss status, needs, performance, program improvements, and business impacts. Those accountable for performance need to be provided information as close to "real time" as possible. There are several levels of audit frequency, depending on the type of audit:
  - Frequent: Operational (e.g., inspections, housekeeping, maintenance) – done as part of routine day-to-day operational responsibilities
  - Periodic: Compliance, systems, actions/projects – conducted annually/semi-annually
  - As needed: For issue follow-up
  - Infrequent: Comprehensive, independent – conducted every three to four years

4. Robust Corrective Action Program

As mentioned above, corrective actions are a must. If there is no commitment to correction, there is no reason to audit. A robust root cause analysis is essential. This should be a formal, yet flexible, approach. There should be no band-aids. Mistake-proof corrections and include metrics where possible. In the drum example given above, a more robust corrective action program would look at the root cause: Why was the drum mislabeled? Did the person know to label it? If so, why didn't they do it? The correction itself is key to the success of the audit program. Establish the expected timeframe for correction (including addressing preventive action). Establish an escalation process for delayed corrections. Corrective actions should be reviewed regularly by upper management using the existing operations review process. There must also be a process for verification that the correction has been made; the next audit cycle may not be sufficient. Note also that addressing opportunities for improvement, not just non-compliance findings, may increase the return on investment associated with conducting an audit.

5. Sharing of Findings and Best Practices

Audit results should be communicated to increase awareness and minimize repeat findings. Even if conducted under privilege, best practices and corrections can and should still be shared. Celebrate the positives and creative solutions. Stress the value of the audit program, providing metrics and cost-avoidance examples whenever possible. Inventory best practices and share/transfer them as part of audit program results. Use best-in-class facilities as models and "problem sites" for improvement planning and training.

Value-Added Audit

An audit can provide much additional value and a greater return on investment if it is planned and managed effectively.
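Much of the record-keeping implied by the corrective action program above can start very small. The sketch below (Python; the field names, escalation tiers, and day counts are invented for illustration, not drawn from any audit standard) captures the essentials: every finding has an owner, a due date, and a root cause; closure and verification are separate states; and overdue items escalate instead of silently aging.

    # Minimal corrective-action tracker; all field names are illustrative.
    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class Finding:
        id: str
        description: str
        owner: str
        due: date
        root_cause: str = ""     # required before closure, not at creation
        closed: bool = False
        verified: bool = False   # verification is a separate step from closure

    def status(finding, today):
        """Escalate overdue items rather than letting them silently age."""
        if finding.closed:
            return "done" if finding.verified else "closed - verify fix holds"
        days_over = (today - finding.due).days
        if days_over <= 0:
            return "on track"
        if days_over <= 14:
            return "escalate: site manager"
        return "escalate: operations review with upper management"

    f = Finding("F-042", "Drum mislabeled in storage area", "EHS lead",
                due=date.today() - timedelta(days=20))
    print(f.id, "->", status(f, date.today()))
    # F-042 -> escalate: operations review with upper management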
https://kestreltellevate.com/top-5-critical-factors-for-value-added-auditing/
November 2019 – The Slovak Cyber Security Act (Act No. 69/2018 Coll., the "Act") defines the minimum requirements to ensure cyber security in Slovakia. The Act applies to operators of essential services, i.e., to entities in key sectors, including banking, electronic communications, energy and healthcare[1], and to digital service providers. While the Act focuses on providers of services that are essential for the proper functioning of society and the economy, its measures may also apply to smaller companies in certain sectors. For instance, in healthcare the Act applies to (1) healthcare providers listed in Annex 1 of the Act, defined as "any persons or any other entity legally providing healthcare in the territory of a Member State", and (2) administrators and operators of networks and IT systems that form an element of critical infrastructure.[2] Decree No. 164/2018 Coll., laying down identification criteria for operated services (essential services criteria), sets out the specific sector and impact criteria for healthcare. These include setting the minimum number of emergency beds over the last three calendar years at 500, the status of highly specialised traumatology care centres under separate legislation[3], and the provision of laboratory services.[4]

In principle, operators of essential services are primarily obliged (1) to take the prescribed security measures, and (2) to address and immediately report security incidents. However, they are also obliged:
- to report to the National Security Authority (the "NSA") that the company should be registered in the register of essential services operators (and to inform the provider of electronic communication services of this);
- to take and comply with security measures to the extent prescribed;
- to address cyber security incidents (including providing appropriate evidence to be used in prosecution);
- to enter into an agreement on compliance with safety measures and notification duties with providers of those services that directly relate to the operation of networks and information systems; and
- to comply with various notification duties:
  - to report each substantial cyber security incident through the uniform cyber security information system;[5]
  - to notify the providers listed above about any reported cyber security incident; and
  - to inform the law enforcement authorities if a crime related to a cyber attack was committed.

Impact criteria are defined in the Decree as the consequences of a cyber security incident involving the functionality of an IT system or network upon which the provision of the service depends. Potential consequences of a cyber security attack in healthcare can include an economic loss higher than 0.1% of GDP, an economic loss or material damage of more than EUR 250,000 suffered by at least one user, more than 100 injured persons requiring medical treatment, or the loss of one life[6], and also include disruption of public order or public security. Another important obligation is to carry out a cyber security audit within two years from registration in the list of essential services operators. The cyber security audit seeks to evaluate compliance with the adopted security measures and with other obligations under the Act. A Decree laying down the rules and scope of the cyber security audit and the details of the accreditation of bodies verifying compliance is currently under discussion in the interdepartmental comments procedure.
The current wording of the draft Decree provides for a cyber security audit every two years and after each change with a significant effect on the implemented security measures. The audit is to be carried out by an individual auditor certified by an accredited certification body. Certification is to be granted on the basis of an application containing the requirements prescribed by law, and the certificate is to be issued with a validity of no more than three years, with a renewal option. The auditor's authority includes establishing the duration of the audit so as to sufficiently verify whether the adopted security measures are effective. At the end of the audit, the auditor issues a final audit report with an assessment of the audit results and the evidence used to make the assessment. Essential service operators are obliged to present the final audit report and the rectification measures, including specific time limits, to the NSA within 30 days of completion of the cyber security audit. The costs of the audit are to be borne by the essential service operator. In the area of cyber security, the National Security Authority also carries out inspections, issues decisions imposing measures, and imposes sanctions for minor or other administrative offences. The NSA may impose a penalty from EUR 300 up to 1% of overall annual turnover for the preceding financial year, but no more than EUR 300,000. In a future Decree, the NSA will define requirements for the accreditation of compliance verification bodies, for the expertise and qualifications to be held by auditors, for the content and scope of the final audit report, and for the outcome of the cyber security audit.

Footnotes
1. The Act also applies to the following sectors: transport, post, industry, information and telecommunications technologies, water and air.
2. https://www.nbu.gov.sk/wp-content/uploads/kyberneticka-bezpecnost/prevadzkovatelia-ZS.htm
3. Though this term was used in a number of documents discussed as part of the legislative process, Slovak healthcare legislation uses only the term "trauma centre"; see also Decree No. 44/2008 of the Ministry of Healthcare of the Slovak Republic laying down minimum requirements for the staffing, material and equipment of individual types of healthcare units.
4. This is again an unclear term, as Slovak healthcare legislation uses only the term "in vitro diagnostic centre". Captured under this type of centre is, in addition to laboratory examination of biological material, examination using CT/MR imaging technologies and the like, which are currently almost exclusively digitalised with these providers. Laboratory operators frequently use electronic services to report test results.
5. https://www.nbu.gov.sk/kyberneticka-bezpecnost/jednotny-informacny-system-kybernetickej-bezpecnosti/index.html
6. It can be very difficult to identify the requirement of the potential loss of one life in connection with healthcare provision, as it will be very hard to tell when there is a risk of loss of life during a cyber security incident.
https://www.mondaq.com/security/902094/new-obligations-under-the-slovak-cyber-security-act
During the 2020 International Conference on Machine Learning, our team was selected to present a new paper, "Are AI-Based Anti-Money Laundering Systems Compatible with Fundamental Rights?", at the Law and Machine Learning Workshop. This article analyzes current AML systems as well as new AI techniques to determine whether they can satisfy the European fundamental rights principle of proportionality, a principle that has taken on new meaning as a result of the European Court of Justice's Digital Rights Ireland and Tele2 Sverige – Watson cases. You can read the corresponding blog post in the link below, which will also take you to the full document available on SSRN.

01/07/20 - Chair news
Xavier Vamparys Appointed Head of Artificial Intelligence Ethics at CNP Assurances
A visiting researcher at Télécom Paris, Xavier Vamparys has just been appointed Head of Artificial Intelligence Ethics at CNP Assurances. Xavier Vamparys will lead the group's multidisciplinary AI ethics committee, responsible for guiding the use of AI within the group, including for fraud detection and anti-money laundering. At Télécom Paris, Xavier Vamparys's research is focused on how AI affects the insurance industry, and in particular the insurance industry's public interest missions.

02/04/20 - publications
Netherlands Welfare Case Sheds Light on Explainable AI for AML-CFT
The District Court of The Hague, Netherlands found that the government's use of artificial intelligence (AI) to identify welfare fraud violated European human rights because the system lacked sufficient transparency and explainability. As we discuss below, the court applied the EU principle of proportionality to the anti-fraud system and found the system lacking in adequate human rights safeguards. Anti-money laundering/countering the financing of terrorism (AML-CFT) measures must also satisfy the EU principle of proportionality. The Hague court's reasoning in the welfare fraud case suggests that the use of opaque algorithms in AML-CFT systems could compromise their legality under human rights principles as well as under Europe's General Data Protection Regulation (GDPR).

21/03/20 - publications
Algorithms: Biases Control - A Report by the Institut Montaigne
Calculate the shortest route on your phone, automatically create a playlist of your favorite songs, find the most relevant result via a search engine, select CVs that match a job offer: algorithms help you throughout the day. But what would happen if a recruitment algorithm discriminated, systematically leaving aside women or ethnic minorities? How do we make sure such errors are highlighted and corrected? Drawing on more than forty interviews, the Institut Montaigne seeks to provide tangible solutions that limit potential drift and restore confidence in algorithms. The report attempts to give a French perspective on this issue, which today is mainly treated through an American lens. It extends the paper published in 2019 by Télécom Paris and the Abeona Foundation, "Algorithms: bias, discrimination and equity".

07/12/19 - publications
"Integrating Ethics Into Algorithms Raises Titanic Challenges" (Le Monde)
Two researchers from Télécom Paris, David Bounie and Winston Maxwell, describe for "Le Monde" tangible solutions to tackle the risks of discrimination that platform algorithms can generate.
More and more examples in justice, health, education and finance show that artificial intelligence (AI) tools cannot be deployed without control in security systems or in access to essential resources. Without these safeguards, they could generate biases, potentially discriminatory ones, that are difficult to interpret and for which no explanation is provided to end users. The conclusion is becoming increasingly clear: AI must integrate ethics from the design of algorithms onward. The ethical performance of the algorithm (absence of discrimination, respect for individuals, etc.) must be included in the performance criteria, along with the accuracy of the predictions. But integrating ethics into algorithms raises titanic challenges, for five reasons: First, ethical and legal standards are often unclear, and do not lend themselves to mathematical formulation […] Second, ethics are not universal […] Third, ethics are political […] Fourth, ethics are economic […] Fifth, ethics are temporal.

02/09/19 - publications
Is Explainability of Algorithms a Fundamental Right?
"The demand for transparency on the functioning of algorithms must be addressed with discernment," assert researchers David Bounie and Winston Maxwell in a column for "Le Monde".

14/02/19 - publications
Algorithms: Biases, Discrimination and Equity
Algorithms intervene more and more in our daily lives, whether as decision-support algorithms (recommendation or scoring algorithms) or as autonomous algorithms embedded in intelligent machines (autonomous vehicles). Deployed in many sectors and industries for their efficiency, their results are increasingly discussed and disputed. In particular, they are accused of being black boxes and of leading to discriminatory practices linked to gender or ethnic origin. This article aims to describe the biases related to algorithms and to outline ways to address them. We are particularly interested in the results of algorithms with respect to equity objectives, and their consequences in terms of discrimination. Three questions motivate this article: By which mechanisms can algorithm biases occur? Can we avoid them? And, finally, can we correct or limit them? In the first part, we describe how a statistical learning algorithm works. In the second part, we examine the origin of these biases, which can be cognitive, statistical or economic in nature. In the third part, we present some promising statistical or algorithmic approaches that can correct biases. We conclude the article by discussing the main societal issues raised by statistical learning algorithms, such as interpretability, explainability, transparency, and responsibility.
https://xai4aml.org/news/
AI Assurance: What happened when we audited a deepfake detection tool called FakeFinder

IQT Labs recently audited an open-source deep learning tool called FakeFinder that predicts whether or not a video is a deepfake. This post provides a high-level overview of our audit approach and findings. It is the first in a series, and in future posts we will dig into the details of our AI Assurance audit, discussing our cybersecurity "red teaming," ethics assessment, and bias testing of FakeFinder.

——

"There is no such thing as perfect security, only varying levels of insecurity." – Salman Rushdie

If only Rushdie were wrong… When it comes to software, one thing we know for sure is that some of the time, some of our tools are going to fail. And — as if cybersecurity wasn't enough of a challenge already — introducing Artificial Intelligence (AI) and Machine Learning (ML) into software tools creates additional vulnerabilities. Auditing can help organizations identify risks before AI tools are deployed. We'll never be able to prevent all future failures and incidents, but the right tools and techniques can help us anticipate, mitigate, and prepare for certain types of risk.

Between July and September 2021, we conducted an internal audit of FakeFinder, a deepfake detection tool that some of our IQT Labs colleagues developed earlier this year. In this post, we explain our approach and summarize three primary findings: (1) FakeFinder is actually a "face swap" detector; (2) FakeFinder's results may be biased with respect to protected classes; and (3) FakeFinder is a software prototype, not a production-ready tool. Over the next few weeks, we will share additional findings and recommendations in a series of more detailed posts.

Managing risk is tricky business. This was our first AI audit and we definitely do not have all the answers. But we do have a series of tactics that others can borrow and — we hope! — improve. One thing we do know is that to do this well we need input from multiple stakeholders with diverse perspectives to help us see past our own blind spots. So, if you have suggestions or are interested in collaborating on future projects, contact us at [email protected].

AI failures can be intentional or unintentional

Earlier this year, IQT Labs worked with BNH.ai, a Washington, D.C.-based law firm specializing in AI liability and risk assessment, to collect and analyze 169 failures of AI tools that occurred between 1988 and 2021 and were covered in the public news media. (We recognize that AI failures covered by the media represent only a fraction of the failures that actually occurred. Nonetheless, this exercise helped us better understand common failure modes that occur when AI is deployed in real-world contexts.) Adversarial attacks, ways of tricking AI/ML models into making erroneous predictions, are a popular area of academic research. These intentional modes of failure are a growing threat, but unfortunately, they are not the only cause for concern. 95% of the AI failures we analyzed were unintentional. Instead of malicious attacks by nefarious actors, these unintentional failures were the result of oversights and accidents, a lack of testing, poor design, unintended consequences, and good old-fashioned human error — someone using a tool incorrectly or thinking that the results meant something different from what they actually meant.
We saw AI/ML models fail because they weren't properly validated, because their training data was biased in a way no one realized, or because a model discovered a relationship between irrelevant features in the data…which led people to act on erroneous predictions. We also saw situations where (it appeared) important privacy implications weren't fully considered before a tool was deployed, or where a tool didn't provide enough transparency into how a particular decision was made. Again and again, we saw unintentional failures occur when someone overestimated the potential of a tool, ignored (or didn't understand) its limitations, or didn't fully think through the consequences of deploying the tool in its current state.

A very brief overview of FakeFinder

When you upload a video snippet, FakeFinder uses deep learning to predict whether that video is a deepfake or not, that is, whether the video has been modified or manipulated by a deep learning model. If you want to learn more about deepfakes, we recommend checking out this post. If you'd like to try FakeFinder, the code is available on GitHub for any use consistent with the terms of the Apache 2.0 license. We chose FakeFinder as the target of our audit both because deepfake detection is a novel (and timely) use of AI/ML and because the tool was developed by our colleagues and we want to make internal auditing (or "red teaming") an integral part of IQT Labs' tool development efforts.

We refer to FakeFinder as a tool, but it's actually composed of 6 different "deepfake detector" models. These models were developed outside of IQT Labs and are also available open source. One model, Boken, was the top performer at the DeeperForensics Challenge. The others — Selimsef, \WM/, NTechLab, Eighteen years old, and The Medics — were the top 5 performers at the Facebook/Kaggle Deepfake Detection Challenge, which was launched in December 2019. These 5 models were trained on a dataset containing videos of paid actors (who consented to the use of their likeness for this purpose), which was curated and released by Meta (FKA Facebook) as part of the Deepfake Detection Challenge. In addition to these underlying models, the FakeFinder tool includes several other components that were built by the IQT Labs team:
- A front-end application — created using Plotly's Dash framework — that aggregates predictions from the 6 detector models and displays them in a visual interface;
- An API that enables programmatic access to the models' output; and
- A containerized back-end that helps users spin up compute resources on AWS.

Our audit approach, in a nutshell

We used the AI Ethics Framework for the Intelligence Community, issued by the Office of the Director of National Intelligence, to guide our audit. Published in June 2020, this document developed by the United States Intelligence Community poses a series of important questions aimed at promoting "an enhanced understanding of goals between AI practitioners and managers while promoting the ethical use of AI." Given how extensive this document is, we knew that three months was not enough time for us to craft a rigorous response to each and every question. Instead, we decided to examine FakeFinder from four perspectives — Ethics, the User Experience (UX), Bias, and Cybersecurity — and focus on four sections of the AI Ethics Framework related to those perspectives: Purpose: Understanding Goals and Risks; Human Judgment & Accountability; Mitigating Undesired Bias & Ensuring Objectivity; and Testing your AI.
Each of these perspectives encouraged us to examine a different aspect of FakeFinder — we looked at the infrastructure and software implementation (Cybersecurity), the deepfake detection models and their training data (Bias), how the tool presents results through the user interface (UX), and how FakeFinder might be used as part of an analytical workflow (Ethics). This multi-dimensional approach helped us think broadly about potential risks and encouraged us to seek advice from stakeholders with diverse types of expertise: software engineering, data science, legal counsel, UX design, mis/disinformation, AI policy, and media ethics. In each case, we asked ourselves and our collaborators: Is what we're getting out of this tool what we think we're getting? Below, we summarize three key findings.

FakeFinder is actually a "face swap" detector

Many of the video snippets in the Deepfake Detection Challenge dataset were manipulated using a powerful deepfake technique called "face swap." This technique can be used to transpose the face of one person onto the motions or actions of another, as in this fake video of former President Obama, which Jordan Peele created as a public service announcement about fake news. In fact, we believe that all the videos in the training data labeled "fake" were manipulated with face swap, but this was not disclosed to competitors. To evaluate the submitted models, Meta used a test dataset that included other types of manipulated videos, including "cheapfakes" (fake videos that were created using techniques like altering the frame rate, which do not require algorithmic intervention). As a result, we suspect that the competition organizers wanted to test whether (or to what extent) the submitted models would generalize to detect other types of fake videos that weren't present in the training data. Since we don't know the precise criteria by which Meta decided what constituted a "fake" in the test dataset, however, we can't characterize what types of fake videos are likely (or unlikely) to be detected by FakeFinder's models. All we know is that FakeFinder's detector models were trained on a dataset where "fake" essentially meant "subjected to face swap." This, on its own, isn't necessarily an issue. The problem is that nothing in FakeFinder's documentation or User Interface makes clear to users that FakeFinder's models were trained exclusively on examples of face swap. Without this critical piece of information, there is substantial risk that users will misunderstand the output.

FakeFinder's results may be biased with respect to protected classes (e.g., race or gender)

FakeFinder's models were trained on videos of human subjects' faces. This means that protected group information, such as skin color or features associated with biological sex, is directly encoded into the training data. Unless explicitly addressed by the model developers, this could make model performance highly dependent on facial features that relate to protected class categories. If FakeFinder's detector models were biased with respect to protected classes, this could lead to biased predictions, which could lead to discriminatory outcomes.
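One concrete way to test for this kind of bias is to compare error rates, not just overall accuracy, across groups: a detector can look fine in aggregate while failing unevenly. The sketch below (Python; the groups, labels, and counts are synthetic stand-ins, not FakeFinder's actual evaluation code) computes per-group false positive rates and expresses each as a percentage of a reference group, the style of comparison reported below.

    # Per-group false positive rate (FPR) on synthetic data.
    # FPR = false positives / all genuinely real ("not fake") examples.
    def fpr_by_group(records):
        stats = {}                                # group -> (false pos, real count)
        for group, label, pred in records:        # 1 = "fake", 0 = "real"
            if label == 0:                        # only real videos can yield FPs
                fp, n = stats.get(group, (0, 0))
                stats[group] = (fp + (pred == 1), n + 1)
        return {g: fp / n for g, (fp, n) in stats.items()}

    # (group, true_label, model_prediction): toy, exaggerated numbers.
    records = ([("white", 0, 0)] * 95 + [("white", 0, 1)] * 5 +
               [("east_asian", 0, 0)] * 80 + [("east_asian", 0, 1)] * 20)

    rates = fpr_by_group(records)
    reference = rates["white"]
    for group, rate in rates.items():
        print(f"{group}: FPR = {rate:.2f} ({100 * rate / reference:.0f}% of reference)")
    # A disparity like the 644% figure cited below shows up here as a
    # ratio far above 100%.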
In April 2021, several researchers at Meta released a paper called Towards measuring fairness in AI: the Casual Conversations dataset, in which they wrote that an evaluation of the top five winners of the DeepFake Detection Challenge revealed "that the winning models are less performant on some specific groups of people, such as subjects with darker skin tones and thus may not generalize to all people". During our audit we conducted our own bias testing (which we will describe in detail in a future post), and some of the results were concerning. We saw little indication of bias when the models were correct, but when they were wrong, they failed unevenly across the race and gender categories we tested. For example, with one detector model, we found that East Asian faces experienced 644% of the false positive rate that White faces experienced. FakeFinder's multiple-model design might help to mitigate the biases exhibited by any individual model. However, we strongly recommend that the biases we detected be remediated before FakeFinder is used to inform decision-making in a production context.

FakeFinder is a software prototype, not a production-ready tool

FakeFinder is not an enterprise-ready software product. It is a prototype designed to demonstrate the art of the possible in deepfake detection. In and of itself, this is not a problem. In fact, open-source prototyping is a common (and essential!) way to develop and test the limits of emerging capabilities. However, if users were to overestimate the maturity of FakeFinder, this could create significant risks. When we examined FakeFinder from a cybersecurity perspective, we found several vulnerabilities that could be remedied through established software development practices, but that may not be standard practice for data science prototyping efforts. We'll detail our approach and findings in our next blog post. We have also summarized a few recommendations here:
- FakeFinder requires 8 EC2 instances, an EFS file system, an S3 bucket, and an ECR repo to run. This complexity not only represents a large attack surface, but it also makes the tool difficult to set up and maintain. We recommend automating tool setup to prevent misconfiguration issues.
- We discovered a significant exploit through the Werkzeug debug console, which enabled us to gain access to FakeFinder's underlying detector models and their weights. To protect against this attack — and others — we recommend (1) ensuring that code is committed with debug flags set to "false" and (2) mounting file systems and volumes as read-only, when possible.
- We recommend using HTTPS certificates for internal communications to prevent man-in-the-middle (MITM) attacks.
- We also discovered a critical bug in the API component that allows full remote code execution (RCE) by an unauthenticated user. This exploit can be expanded to a full system takeover and exposure of potentially sensitive data. We recommend disclosing this vulnerability to impacted users and refactoring the affected functions to remediate the vulnerability.

Stay tuned for our next blog in this series!
https://www.iqt.org/ai-assurance-what-happened-when-we-audited-a-deepfake-detection-tool-called-fakefinder/
The annual Tackling Regional Adversity through Integrated Care (TRAIC) grants program provides around $600,000 to community organisations that have local ideas to boost the mental health of those affected by drought and disasters. This year, another $400,000 was added through a $2 million drought package to help the parts of the state most impacted by drought. The community organisations involved have some great ideas for projects and activities that build resilience, raise awareness of mental health issues, bring people together, break down stigma, and encourage people to seek help if needed. However, the evidence base for, and impact of, these activities is not always apparent. Additionally, there can be issues with the assumption that individuals have the capacity to follow health-related advice; that capacity is profoundly shaped by opportunity structures, social determinants, and service access, all of which are often deficient among those in rural areas affected by drought and/or severe weather events. It is essential to acknowledge the societal-level contexts and constraints and the many feedback loops and reciprocities characterising dynamic systems and real lives. To address this, the study of environmental change and mental health could benefit from systems modelling, a process used to help describe a complex set of interacting factors, which can be used to predict interactions and formulate interventions to achieve desired results.

System dynamics modelling

Systems modelling is methodologically approached using causal loop diagrams and system dynamics (SD) simulation modelling. It is based on a five-step approach [1] involving problem articulation, qualitative understanding of the system, and simulation model building. Systems modelling also facilitates stakeholder engagement, providing a platform for participant "buy-in", reducing conflict, building trust, building capacity in the participants, and facilitating increased support of goals and outcomes [2,3]. This five-step systems modelling process will enable us to elicit and understand the key drivers of poor mental health outcomes in rural Queenslanders and identify intervention points that may lie outside of the health sector. Systems modelling is a scientific method for understanding complex systems and their behaviour [4], by making systems' structures and interactions explicit throughout the modelling process and by simulating the outcomes of novel interventions based on the identification of hidden leverage points and potential unintended consequences. Developing an SD model facilitates the engagement of diverse relevant sectors, bringing together collective experiences and expertise to form a combined understanding of all aspects of the system – this is a key strength of systems modelling. We believe that this approach provides a highly cost-effective way of evaluating the efficacy and the economic costs and benefits of interventions and policies for reducing poor mental health outcomes. It also provides information on the likelihood of unintended consequences [4].

Aims

This project aims to evaluate existing, and identify new, intervention targets for reducing poor mental health outcomes among rural Queenslanders.
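To make the method concrete: a system dynamics model is, at bottom, a set of stocks updated each time step by flows, and the flows can depend on the stocks themselves, which is what creates feedback loops. The toy sketch below (Python) simulates two interacting stocks; the stocks, rates, and modelled intervention are invented for illustration and are not taken from this project.

    # Toy system dynamics model: two stocks linked by a feedback loop.
    # All rates and the intervention effect are illustrative assumptions.
    def simulate(months=60, outreach_boost=0.0):
        distress = 100.0        # stock: people experiencing distress
        service_load = 50.0     # stock: demand on local support services
        for _ in range(months):
            # Feedback: the more loaded the services, the less effective
            # help-seeking becomes, which keeps distress high.
            strain = service_load / (service_load + 100.0)
            inflow = 8.0                                   # drought/disaster stress
            recovery = distress * 0.10 * (1 - strain) * (1 + outreach_boost)
            distress += inflow - recovery
            service_load += recovery * 0.5 - service_load * 0.05
        return distress

    baseline = simulate()
    with_program = simulate(outreach_boost=0.5)    # hypothetical funded program
    print(f"distress after 5 years: baseline {baseline:.0f}, "
          f"with program {with_program:.0f}")

Even a toy like this shows where unintended consequences can hide: boosting recovery also raises service load, which feeds back to blunt part of the gain.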
https://public-health.uq.edu.au/project/system-dynamics-model-understanding-poor-mental-health-outcomes-among-rural-queenslanders
Eliminating bias from AI critical to improve equity

Artificial intelligence (AI)-driven healthcare has the potential to transform medical decision-making and treatment, but these algorithms must be thoroughly tested and continuously monitored to avoid unintended consequences for patients. In a JAMA Network Open invited commentary, Regenstrief Institute President and Chief Executive Officer and Indiana University School of Medicine Associate Dean for Informatics and Health Services Research Peter Embí, M.D., M.S., emphasized the importance of algorithmovigilance in addressing inherent biases in healthcare algorithms and their deployment. Algorithmovigilance, a term coined by Dr. Embí, can be defined as the scientific methods and activities relating to the evaluation, monitoring, understanding, and prevention of adverse effects of algorithms in healthcare.

"We wouldn't think of treating patients with a new pharmaceutical or device without first ensuring its efficacy and safety," said Dr. Embí. "In the same way, we must recognize that algorithms have the potential for both great benefit and harm and, therefore, require study. Also, compared with drugs or devices, algorithms often have additional complexities and variations, such as how they are deployed, who interacts with them, and the clinical workflows where interactions with algorithms take place."

The commentary was in response to a study from IBM scientists evaluating different approaches to debiasing healthcare algorithms developed to predict postpartum depression. Dr. Embí stated that the study suggests debiasing methods can help address underlying disparities represented in the data used to develop and deploy AI approaches. He also said the study demonstrates that the evaluation and monitoring of these algorithms for effectiveness and equity is necessary and even ethically required.

"Algorithmic performance changes as it is deployed with different data, different settings and different human-computer interactions. These factors could turn a beneficial tool into one that causes unintended harm, so these algorithms must continually be evaluated to eliminate the inherent and systemic inequities that exist in our healthcare system," Dr. Embí continued. "Therefore, it's imperative that we continue to develop tools and capabilities to enable systematic surveillance and vigilance in the development and use of algorithms in healthcare."
https://tectales.com/ai/eliminating-bias-from-ai-critical-to-improve-equity.html
AI now guides numerous life-changing decisions, from assessing loan applications to determining prison sentences. Proponents of the approach argue that it can eliminate human prejudices, but critics warn that algorithms can amplify our biases — without even revealing how they reached the decision. This can result in AI systems leading to Black people being wrongfully arrested, or to child services unfairly targeting poor families. The victims are frequently from groups that are already marginalized. Alejandro Saucedo, Chief Scientist at The Institute for Ethical AI and Engineering Director at ML startup Seldon, warns organizations to think carefully before deploying algorithms. He shared with TNW his tips for mitigating the risks.

Explainability

Machine learning systems need to provide transparency. This can be a challenge when using powerful AI models, whose inputs, operations, and outcomes aren't obvious to humans. Explainability has been touted as a solution for years, but effective approaches remain elusive. "The machine learning explainability tools can themselves be biased," says Saucedo. "If you're not using the relevant tool, or if you're using a specific tool in a way that's incorrect or not fit for purpose, you are getting incorrect explanations. It's the usual software paradigm of garbage in, garbage out." While there's no silver bullet, human oversight and monitoring can reduce the risks. Saucedo recommends identifying the processes and touchpoints that require a human-in-the-loop. This involves interrogating the underlying data, the model that is used, and any biases that emerge during deployment. The aim is to identify the touchpoints that require human oversight at each stage of the machine learning lifecycle. Ideally, this will ensure that the chosen system is fit for purpose and relevant to the use case. Domain experts can also use machine learning explainers to assess the predictions of the model, but it's imperative that they first evaluate the appropriateness of the system. "When I say domain experts, I don't always mean technical data scientists," says Saucedo. "They can be industry experts, policy experts, or other individuals with expertise in the challenge that's being tackled."

Accountability

The level of human intervention should be proportionate to the risks. An algorithm that recommends songs, for instance, won't require as much oversight as one that dictates bail conditions. In many cases, an advanced system will only increase the risks. Deep learning models, for example, can add a layer of complexity that causes more problems than it solves. "If you cannot understand the ambiguities of a tool you're introducing, but you do understand that the risks have high stakes, that's telling you that it's a risk that should not be taken," says Saucedo. The operators of AI systems must also justify the organizational process around the models they introduce. This requires an assessment of the entire chain of events that leads to a decision, from procuring data to the final output. "There is a need to ensure accountability at each step," says Saucedo. "It's important to make sure that there are best practices on not just the explainability stage, but also on what happens when something goes wrong." This includes providing a means to analyze the pathway to the outcome, data on which domain experts were involved, and information on the sign-off process.
"You need a framework of accountability through robust infrastructure and a robust process that involves domain experts relevant to the risk involved at every stage of the lifecycle."

Security

When AI systems go wrong, the company that deployed them can also suffer the consequences. This can be particularly damaging when using sensitive data, which bad actors can steal or manipulate. "If artifacts are exploited they can be injected with malicious code," says Saucedo. "That means that when they are running in production, they can extract secrets or share environment variables." The software supply chain adds further dangers. Organizations that use common data science tools such as TensorFlow and PyTorch introduce extra dependencies, which can heighten the risks. An upgrade could cause a machine learning system to break, and attackers can inject malware at the supply chain level. The consequences can exacerbate existing biases and cause catastrophic failures. Saucedo again recommends applying best practices and human intervention to mitigate the risks. An AI system may promise better results than humans, but without their oversight, the results can be disastrous.
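Saucedo's proportionality point can be made mechanical: the share of decisions routed to a human should rise with the stakes and with the model's uncertainty. The sketch below (Python; the tiers and thresholds are invented for illustration, not an established standard) encodes that rule, so that a song recommender almost never blocks on a person while a bail-style decision always does.

    # Risk-proportionate human-in-the-loop routing; thresholds are illustrative.
    REVIEW_THRESHOLDS = {
        "low": 0.05,     # e.g. media recommendations: review only if very unsure
        "medium": 0.60,  # e.g. loan pre-screening
        "high": 1.01,    # e.g. bail or welfare decisions: always review
    }

    def route(prediction, confidence, risk_tier):
        """Send the decision to a human unless confidence clears the tier's bar."""
        if confidence < REVIEW_THRESHOLDS[risk_tier]:
            return ("human_review", prediction)
        return ("auto", prediction)

    print(route("approve", 0.93, "medium"))  # ('auto', 'approve')
    print(route("deny", 0.93, "high"))       # ('human_review', 'deny') - always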
https://thenextweb.com/news/how-to-manage-ai-bias-alejandro-saucedo-interview
In the first review of its kind, the PCAOB Center for Economic Analysis is requesting public comment on a PCAOB standard that requires an engagement quality reviewer to evaluate the significant judgments made by the engagement team. The analysis of Auditing Standard (AS) No. 7, Engagement Quality Review, represents the start of a post-implementation review program whose goal is to evaluate whether adopted rules and standards are accomplishing their intended purposes. The program also seeks to identify unintended consequences and gauge the overall effects of PCAOB rules and standards. AS 7 was adopted in 2009, replacing an auditing standard that had been in place since the 1970s. The PCAOB expected the standard to provide a meaningful check on the work performed by engagement teams and increase the likelihood that registered firms would identify significant engagement deficiencies before issuing an audit report. The PCAOB Center for Economic Analysis is seeking comment by July 5 on the effects of the standard, including: - Whether it has accomplished its purpose and increased the likelihood that engagement deficiencies will be detected before the audit report is issued. - Whether reviews under the standard provide a meaningful check on the engagement team’s work. - Whether users of financial statements believe that the implementation of AS 7 has affected the credibility of financial reporting. - The experiences of auditors, preparers, and audit committees with implementation of AS 7. - The initial and recurring costs and benefits associated with the implementation of AS 7. - Whether AS 7 could be refined or improved to better accomplish its purpose. —Ken Tysiac ([email protected]) is a JofA editorial director.
https://www.journalofaccountancy.com/news/2016/apr/pcaob-engagement-quality-review-standard-201614194.html
Endless screeds have been written about the fact that the internet algorithms we constantly interact with suffer from gender bias, and all you have to do is perform a simple search to see it for yourself. However, according to the researchers behind a new study that seeks to reach a conclusion on this matter, "until now, the debate has not included any scientific analysis." This new article, by an interdisciplinary team, proposes a new way of approaching the issue and suggests some solutions for preventing these deviations in the data and the discrimination they entail.

Algorithms are used more and more to decide whether to grant loans or accept applications. As the range of uses of artificial intelligence (AI) increases, along with its capabilities and importance, it becomes increasingly vital to evaluate any biases associated with these operations. "Although this is not a new concept, there are many cases where this issue has not been investigated, thus ignoring the potential consequences," said the researchers, whose study, published open access in the journal Algorithms, focused mainly on gender biases in the different fields of AI. Such biases can have a huge impact on society: "Prejudice affects anything that is discriminated against, excluded, or associated with a stereotype. For example, a gender or a race may be excluded in a decision process or, simply, some behaviors can be assumed because of one's gender or the color of one's skin," explained the research's principal investigator, Juliana Castañeda Jiménez, an industrial doctoral student at the Universitat Oberta de Catalunya (UOC) under the supervision of Ángel A. Juan, of the Polytechnic University of Valencia, and Javier Panadero, of the Polytechnic University of Catalonia.

According to Castañeda, "it is possible for algorithmic processes to discriminate based on gender, even if programmed to be 'blind' to this variable." The research group – which also includes the researchers Milagros Sáinz and Sergi Yanes, both from the Gender and ICT research group (GenTIC) of the Internet Interdisciplinary Institute (IN3), Laura Calvet, from the Salesian University School of Sarrià, Assumpta Jover, from the Universitat de València, and Ángel A. Juan – illustrates this with a series of examples: the case of a well-known recruitment tool that preferred male candidates to female ones, or that of certain credit services that offered women less favorable terms than men. "If old and biased data is used, you are likely to see negative bias regarding Black, gay, and even female demographics, depending on when and where the data comes from," Castañeda explained.

The sciences are for boys and the arts are for girls

To understand how these patterns affect the different algorithms we deal with, the researchers analyzed previous work that identified gender biases in data processing in four kinds of AI: those used in natural language processing and generation, decision management, speech recognition, and facial recognition. In general, they found that all the algorithms identified and classified white men better. They also found that the algorithms reproduced false beliefs about the physical attributes that should define someone according to their biological sex, ethnic or cultural background, or sexual orientation, and that they created stereotypical associations linking men to the sciences and women to the arts.
Many of the procedures used in image and voice recognition are also based on these stereotypes: cameras find it easier to recognize white faces, and audio analysis has problems with higher-pitched voices, which mainly affects women. The cases most likely to suffer from these problems are those whose algorithms are built from the analysis of real-life data associated with a specific social context. "Some of the main causes are the underrepresentation of women in the design and development of AI products and services and the use of gender-biased datasets," noted the researcher, who argued that the problem stems from the cultural environment in which these systems are developed. "An algorithm, when trained with biased data, can detect hidden patterns in society and, when it operates, reproduce them. So, if men and women are unequally represented in society, the design and development of AI products and services will show gender bias."

How can we put an end to all this?

The numerous sources of gender bias, as well as the peculiarities of any given type of algorithm and data set, mean that eliminating this bias is a very tough, though not impossible, challenge. "Designers and all others involved in their design must be aware of the possibility of the existence of biases associated with the logic of an algorithm. In addition, they must understand the measures available for minimizing potential biases as far as possible, and implement them so that those biases do not occur, because if they are aware of the types of discrimination that occur in society, they will be able to identify when the solutions they develop reproduce them," Castañeda suggested.

This work is innovative because it was carried out by specialists in several areas, including a sociologist, an anthropologist, and experts in gender and statistics. "Team members provided a perspective that went beyond the self-contained mathematics associated with algorithms, thus helping us to see them as complex socio-technical systems," said the study's principal investigator. "When comparing this work with others, I think it is one of the few that presents the problem of bias in algorithms from a neutral point of view, highlighting both social and technical aspects to identify why an algorithm might make a biased decision," she concluded.

Juliana Castaneda et al, Addressing gender bias issues in algorithmic data processing: A socio-statistical perspective, Algorithms (2022). DOI: 10.3390/a15090303

Provided by Universitat Oberta de Catalunya (UOC)
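Castañeda's observation that an algorithm can discriminate even if programmed to be "blind" to gender follows from correlation: dropping the gender column changes nothing if another feature carries the same signal. The sketch below (Python; the career-break feature and the size of the gap are fabricated and deliberately exaggerated for illustration) shows a scoring rule that never sees gender yet produces sharply different approval rates.

    # "Blind" models can still discriminate through correlated proxies.
    # All data is synthetic and exaggerated for clarity.
    import random

    random.seed(2)

    def person(gender):
        # Fabricated proxy: career-break years correlate with gender in this
        # synthetic labor market, but not with actual repayment ability.
        career_break = max(random.gauss(2.0 if gender == "F" else 0.3, 0.5), 0.0)
        ability = random.gauss(5.0, 1.0)       # identical distribution for all
        return gender, career_break, ability

    people = [person("F") for _ in range(5000)] + [person("M") for _ in range(5000)]

    # A gender-blind rule that (wrongly) penalises career breaks.
    for g in ("F", "M"):
        group = [(ability - 1.5 * brk) > 3.0
                 for gender, brk, ability in people if gender == g]
        print(f"{g} approval rate: {sum(group) / len(group):.0%}")
    # Gender never enters the rule, yet approval rates diverge sharply.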
https://rayafrique.com/how-to-end-gender-bias-in-internet-algorithms/
First report to scrutinise use of algorithms by government agencies

The New Zealand government has produced its first report into the use of algorithms by government agencies, saying there are few safeguards against biased algorithms and ample scope for government agencies to lift their game. The report follows one issued in May by the New Zealand Human Rights Commission, which warned that public sector use of algorithms for predictive purposes could lead to unfair treatment of individuals or groups and called for steps to be taken to ensure such practices conformed to human rights and ethical standards. The government said the report, by the government chief data steward and the government chief digital officer, provided valuable insights into the use of algorithms by government agencies and suggested how their use could be improved for both fairness and transparency. It examined the use of algorithms in 14 government agencies.

The report said data bias posed a significant challenge for effective algorithm use, but there was little monitoring to detect any bias.

"Even the best algorithms can perpetuate historic inequality if biases in data are not understood and accounted for," it said.

"Only a minority of participating agencies described a formal process for considering the limitations of algorithms as a part of the development process for these tools.

"Few agencies reported any regular review process for existing algorithms to ensure they are achieving their intended aims without unintended or adverse effects.

"This suggests a risk that the limitations of algorithms, and the data they draw upon, may depend on the skills and experience of individuals in particular roles and, therefore, may not be systemically and consistently identified to decision-makers."

Recommendations in the report include maintaining human oversight, involving those who will be affected, promoting transparency and awareness, regularly reviewing algorithms that inform significant decisions, and monitoring for adverse effects. Government chief data steward Liz MacPherson said: "New Zealand has robust systems and principles in place around the safe use of data, but as techniques become more sophisticated we must remember to keep the focus on people and make sure the things we are doing are for their benefit." Government chief digital officer Paul James said algorithms were an important part of government and were "evolving to provide services that work better for all of us, and also make it easier for citizens to engage with government." He said the report included case studies that highlight how algorithms are already enabling innovative solutions to complex problems. "One example is an algorithm being used by Work and Income to identify young people at risk of long-term unemployment, so they can be offered assistance. This provides a great example of the way these techniques can help those who may be in need."
https://www2.computerworld.co.nz/article/648851/nz-government-warns-algorithm-bias/
A commitment to transparency and consistent human oversight are two of the key factors in ensuring that federal agencies use artificial intelligence ethically, according to a new report by International Data Corporation, a market intelligence and consulting firm. The emphasis is on how government AI might affect individuals: “Responsible and ethical use of AI includes protecting individuals from harm based on algorithmic or data bias or unintended correlation of personally identifiable information (PII) even when using anonymous data,” the report says. “And since machine learning is trained by humans, these biases may be inherently and inadvertently built into machine-based AI systems.” The IDC analysts cite examples in health care, cybersecurity and other fields where AI has been deployed for mission-critical operations in government. In any use of the emerging technology — which has drawn more and more interest from federal officials — “there are technical challenges in machine learning that can result in errors, such as false positives or false negatives that trigger automated actions or other events,” the report says. Human oversight of the systems, including subjecting the algorithms to “performance review” style audits, is key to mitigating these risks, the report argues. “Agencies should also provide mechanisms to protect citizen interest, especially when sharing information with ecosystem partners,” it states. “Formal governance policies should be created to address who owns, who analyzes, and who has access to each granularity of data to protect confidentiality and PII.” The report also encourages federal agencies to be transparent around the use and impact of AI systems, both with citizens and with employees. Agencies should provide explanations of where and how algorithms are being deployed, it states. And on the workforce side, “leadership should also explain how workers are impacted by AI and how the nature of work will change.” “AI can improve quality of life for government workers by processing a vast number of documents and identifying those that are optimal and required to solve a specific case,” the report states, echoing a familiar and favorite talking point of government officials. “However, specific roles may change, and over time, some positions may be eliminated while new ones are created.” The report includes a number of other suggestions as well, including that agencies develop contingency plans in case “something goes wrong.” “For nations to thrive, the ethics of AI must catch up with the technology so that governments, industries, and individuals learn to trust this transformative technology,” Adelaide O’Brien, research director at IDC Government Insights, and author of the report, said in a statement.
https://www.fedscoop.com/transparency-oversight-vital-responsible-government-ai-report-says/