Life Sciences: Fundamentals and Practice, Part I
Enzymes: Virtually all cellular reactions or processes are mediated by enzymes. An enzyme is a remarkable molecular device that determines the pattern of chemical transformations. Enzymes have several properties that make them unique. They are highly specialized proteins and have a high degree of specificity for their substrates. With the exception of a small group of catalytic RNA molecules, all enzymes are proteins, and their catalytic activity depends on the integrity of their native protein conformation: if an enzyme is denatured or dissociated into its subunits, catalytic activity is lost. An enzyme increases the rate of a reaction by lowering the activation energy; it changes only the rate at which equilibrium is achieved, not the position of the equilibrium itself.

Enzymes can be divided into two general classes: simple enzymes, which consist only of protein, and conjugated enzymes, which also require a non-protein component called a cofactor. Some cofactors are simple metal ions and other cofactors are complex organic groups. A cofactor can be linked to the protein portion of the enzyme either covalently or non-covalently; cofactors that are tightly associated with the protein, covalently or non-covalently, are called prosthetic groups. Removal of the cofactor from a conjugated enzyme leaves only the protein component. Examples of enzymes with metal cofactors include lysyl oxidase (Cu), xanthine oxidase (Mo) and glutathione peroxidase (Se). Many cofactors are derived from vitamins; deficiency of niacin, for example, causes pellagra, whose symptoms progress through the three Ds: dermatitis, diarrhoea and dementia.

Each enzyme is assigned an Enzyme Commission (EC) number with four parts: the first number defines the major class, the next two define the subclass and sub-subclass, and the last number is a serial number in the sub-subclass. Kinases, for example, transfer phosphate from ATP to a substrate.
Many enzymes are named for their substrates and for the reactions that they catalyze, but such common (trivial) names provide little information about the reactions involved. Because of the confusion that arose from these common names, the Enzyme Commission developed a rule for naming enzymes and has given each enzyme a number with four parts. There are six classes to which different enzymes belong. These classes include:

EC 1 Oxidoreductases catalyze oxidation-reduction reactions. Dehydrogenases use molecules other than oxygen (e.g. NAD+) as electron acceptors; oxygenases directly incorporate oxygen into the substrate; peroxidases use H2O2 as an electron acceptor.

EC 2 Transferases catalyze reactions that involve the transfer of groups from one molecule to another; examples of such groups include amino and phosphate groups. Common trivial names for the transferases often include the prefix trans-: transaminases, for example, transfer an amino group from amino acids to keto acids, and phosphorylases transfer inorganic phosphate to a substrate.

EC 3 Hydrolases catalyze reactions in which the cleavage of bonds is accomplished by adding water.

Chapter 02 Bioenergetics and Metabolism

Thermodynamic principles: The first law of thermodynamics states that energy is neither created nor destroyed. The second law of thermodynamics states that the total entropy of a system and its surroundings must increase if a process is to occur spontaneously. Under constant temperature and pressure, the change in free energy determines whether a reaction can proceed spontaneously. Each chemical reaction has a characteristic standard free energy change, which is constant for a given reaction; the free energy change that corresponds to the standard state is known as the standard free energy change. In a reversible reaction A ⇌ B, while A is being converted to B, B is also being converted to A; the concentrations of reactants and products at equilibrium define the equilibrium constant.
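The four-part EC numbering described above can be illustrated with a small lookup. This is a sketch, not part of the original text; the names for classes 4 to 6 (lyases, isomerases, ligases) are the standard IUBMB class names, which the excerpt does not spell out, and the helper function name is my own:

```python
# Map the first (major-class) digit of an EC number to the class name.
EC_CLASSES = {
    1: "Oxidoreductases",
    2: "Transferases",
    3: "Hydrolases",
    4: "Lyases",        # standard names, assumed; not listed in the text
    5: "Isomerases",
    6: "Ligases",
}

def ec_major_class(ec_number: str) -> str:
    """Return the major class for an EC number such as '2.7.1.1' (hexokinase)."""
    first_digit = int(ec_number.split(".")[0])
    return EC_CLASSES[first_digit]
```

For example, hexokinase carries the EC number 2.7.1.1, so `ec_major_class("2.7.1.1")` reports it as a transferase, consistent with its trivial description as a kinase transferring phosphate from ATP.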
If the reaction A ⇌ B is allowed to go to equilibrium at constant temperature and pressure, the concentrations of reactants and products reach constant values. The equilibrium constant Keq depends on the nature of the reactants and products and on the temperature. The standard free energy change can be calculated from the equilibrium constant of the reaction under standard conditions, i.e. ΔG°' = -RT ln K'eq, where R is the gas constant and T is the absolute temperature. At equilibrium the free energy change of the reaction is zero.

Metabolism consists of hundreds of enzymatic reactions organized into discrete pathways. These pathways proceed in a stepwise manner; those in eukaryotic cells occur in specific cellular locations. A number of central metabolic pathways are common to most cells and organisms. Metabolism serves two fundamentally different purposes: the generation of energy and reducing power from nutrients, and the synthesis of the building blocks of cellular macromolecules. Catabolic pathways are involved in the oxidative breakdown of larger, complex molecules and are usually exergonic in nature; the basic strategy of catabolic metabolism is to form ATP and reducing power for biosyntheses. Anabolic pathways are involved in the synthesis of compounds and are endergonic in nature. Some pathways can be either anabolic or catabolic; they are referred to as amphibolic pathways.

Metabolic pathways are irreversible as a whole, each one has a first committed step, and they are regulated. Regulation occurs in the following different ways: by allosteric regulation of enzymes by a metabolic intermediate or coenzyme; by availability of substrate; and by extracellular signals, such as growth factors and hormones, that act from outside the cell in multicellular organisms.

Feedback inhibition and feedback repression: In feedback inhibition, or end-product inhibition, the first enzyme in the pathway is an allosteric enzyme. Its allosteric site binds the end product of the pathway, which alters its active site so that it cannot mediate the enzymatic reaction. Feedback inhibition is different from feedback repression.
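The relation between the equilibrium constant and the standard free energy change can be sketched numerically. A minimal example, using the standard values of R and a temperature of 25 °C (assumptions for illustration, not values given in the text):

```python
import math

R = 8.314e-3   # gas constant, kJ mol^-1 K^-1
T = 298.0      # absolute temperature, K (25 degrees C)

def delta_g_standard(keq: float) -> float:
    """Standard free energy change in kJ/mol: deltaG0' = -R*T*ln(K'eq)."""
    return -R * T * math.log(keq)

# K'eq > 1 means products are favoured, so deltaG0' is negative:
dg = delta_g_standard(10.0)   # roughly -5.7 kJ/mol
```

A reaction with K'eq = 1 gives ΔG°' = 0 (already at equilibrium under standard conditions), and K'eq < 1 gives a positive ΔG°', consistent with the sign conventions in the text.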
Most of the reactions in living cells fall into one of five general categories. Metabolic pathways involve several enzyme-catalyzed reactions, and each step of a metabolic pathway is catalyzed by a specific enzyme. An inhibitory feedback system in which the end product produced in a metabolic pathway acts as a co-repressor and represses the synthesis of an enzyme that is required at an earlier stage of the pathway is called feedback repression.

Cells acquire free energy from the oxidation of organic compounds that are rich in potential energy; energy is required for the maintenance of highly organized structures. The compounds that are oxidized during the process of respiration are known as respiratory substrates, and carbohydrates are most commonly used as respiratory substrates. Oxidation of glucose is an exergonic process: an exergonic reaction proceeds with a net release of free energy, and when one mole of glucose (180 g) is completely oxidized into CO2 and water, a large amount of free energy is released (Table 2). Respiration is an oxidative process. During oxidation within a cell, free energy is released in multiple steps in a controlled manner and used to synthesize ATP; the substrate is never totally oxidized in a single step. During cellular respiration, ATP acts as the energy currency of the cell.

A complete oxidation of respiratory substrates in the presence of oxygen is termed aerobic respiration. In eukaryotes, glycolysis occurs in the cytosol, the citric acid cycle in the mitochondrial matrix, and oxidative phosphorylation in the inner mitochondrial membrane. In prokaryotes, glycolysis and the citric acid cycle occur in the cytosol, and oxidative phosphorylation takes place at the plasma membrane. Glycolysis takes place in the cytosol of cells in all living organisms.
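The energy bookkeeping of aerobic respiration can be sketched with rough numbers. The figures below are standard textbook values assumed for illustration (the excerpt itself does not quote them): complete oxidation of one mole of glucose releases about 2870 kJ, ATP hydrolysis yields about 30.5 kJ/mol, and the classical maximum yield is 38 ATP per glucose:

```python
# Back-of-envelope efficiency of aerobic respiration.
# All three constants are assumed textbook values, not from this text.
GLUCOSE_OXIDATION_KJ = 2870.0   # kJ released per mole of glucose oxidized
ATP_HYDROLYSIS_KJ = 30.5        # kJ per mole of ATP hydrolyzed
ATP_PER_GLUCOSE = 38            # classical maximum ATP yield per glucose

energy_captured_kj = ATP_PER_GLUCOSE * ATP_HYDROLYSIS_KJ   # 1159 kJ
efficiency = energy_captured_kj / GLUCOSE_OXIDATION_KJ     # about 0.40
```

This gives an efficiency of roughly 40 percent, illustrating the point in the text that free energy is released in controlled steps and only part of it is captured as ATP.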
For each molecule of glucose degraded to carbon dioxide and water by respiration, a large amount of free energy is released; part of this energy is used for the synthesis of ATP. The citric acid cycle takes place within the mitochondrial matrix of eukaryotic cells and in the cytosol of prokaryotic cells.

Glycolysis occurs in the cytosol of all cells. It is a unique pathway in that it occurs under both aerobic and anaerobic conditions and does not involve molecular oxygen.

Step 1: Phosphorylation. Glucose is phosphorylated by ATP to form glucose 6-phosphate. This irreversible reaction is catalyzed by hexokinase. The negative charge of the phosphate prevents the passage of glucose 6-phosphate back across the plasma membrane. Hexokinase is present in all cells of all organisms; glucokinase is present in the liver and in the beta-cells of the pancreas and has a high Km and Vmax as compared to hexokinase. Hexokinase and glucokinase are isozymes.

Step 2: Isomerization. A readily reversible rearrangement of the chemical structure (isomerization) moves the carbonyl oxygen from carbon 1 to carbon 2.

Bioenergetics and Metabolism, Solution (a): A mitochondrion actively involved in aerobic respiration typically has a voltage gradient (membrane potential) across the inner mitochondrial membrane, with the inside (matrix) negative and the outside positive, and a pH gradient of about 1 unit. Because mitochondria are very small, the electric potential and pH gradient cannot be measured directly with electrodes, so they are determined indirectly. The electrochemical proton gradient exerts a proton motive force (pmf). Valinomycin is an ionophore; in the presence of valinomycin, which carries K+ across the membrane, the membrane potential is dissipated. Inhibition of NADH dehydrogenase by rotenone decreases the rate of electron flow through the respiratory chain. Antimycin A strongly inhibits the oxidation of Q in the respiratory chain; because antimycin A blocks all electron flow to oxygen, its inhibition is more complete than that of rotenone, which blocks only the entry of electrons through NADH dehydrogenase.
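The kinetic contrast between hexokinase (low Km) and glucokinase (high Km) noted above can be illustrated with the Michaelis-Menten equation. The Km values below are illustrative assumptions in the spirit of the text, not figures it quotes:

```python
def mm_rate(s: float, vmax: float, km: float) -> float:
    """Michaelis-Menten rate: v = Vmax*[S] / (Km + [S]); s and km in mM."""
    return vmax * s / (km + s)

# Assumed Km values: hexokinase ~0.1 mM glucose, glucokinase ~10 mM.
HK_KM, GK_KM = 0.1, 10.0

# At 1 mM glucose, hexokinase runs near saturation; glucokinase barely works.
hk_fraction = mm_rate(1.0, 1.0, HK_KM)   # about 0.91 of its Vmax
gk_fraction = mm_rate(1.0, 1.0, GK_KM)   # about 0.09 of its Vmax
```

This is why hexokinase handles basal glucose phosphorylation in all cells, while high-Km glucokinase in the liver responds mainly when glucose is abundant.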
Experimental proof of the chemiosmotic hypothesis was provided by Andre Jagendorf and Ernest Uribe. In an elegant experiment, isolated thylakoids were first equilibrated in an acidic medium; when the pH in the thylakoid lumen became 4, the thylakoids were rapidly transferred to a more alkaline medium containing ADP and Pi. A burst of ATP synthesis accompanied the transmembrane movement of protons driven by the artificially imposed electrochemical proton gradient. Similar experiments using inside-out preparations of submitochondrial vesicles gave the same result.

The multiprotein ATP synthase (the F0F1 complex, or complex V) catalyzes ATP synthesis as protons flow back through the inner membrane down the electrochemical proton gradient. The F0 component is embedded in the inner mitochondrial membrane; an aspartic acid residue in its second helix lies at the center of the membrane. Rotational motion is imparted to the rotor by the passage of protons. The free energy released on proton translocation is harnessed to interconvert three states of the catalytic sites for ATP, ADP and Pi: an O (open) state that binds ATP, ADP and Pi very weakly, a loose (L) state, and a tight (T) state.

Most of the ATP generated by oxidative phosphorylation in mitochondria is exported to the cytoplasm.

Ionophores are lipophilic molecules that bind specific cations and facilitate their transport through the membrane. Ionophores uncouple electron transfer from oxidative phosphorylation by dissipating the electrochemical gradient across the mitochondrial membrane; valinomycin, for example, decreases the membrane-potential component of the pmf without a direct effect on the pH gradient and thus inhibits ATP synthesis. The most common uncoupling agents are 2,4-dinitrophenol (DNP) and related compounds. DNP in its anionic state picks up protons in the intermembrane space and diffuses readily across mitochondrial membranes; after entering the matrix in the protonated form, it releases the proton, thereby dissipating the pH gradient. Dicoumarol and FCCP act in the same way.

Calculation of free energy change: The standard free energy change for the movement of protons across the membrane along the electrochemical proton gradient can be calculated from the Nernst equation.
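The proton motive force described above can be sketched numerically. Under one common sign convention, pmf = Δψ - 2.303(RT/F)ΔpH; the mitochondrial values used below (Δψ = -150 mV, ΔpH = 1) are illustrative assumptions, not figures from this text:

```python
import math

R = 8.314       # gas constant, J mol^-1 K^-1
F = 96485.0     # Faraday constant, C mol^-1

def pmf_mV(delta_psi_mV: float, delta_pH: float, temp_K: float = 310.0) -> float:
    """Proton motive force in mV (one common sign convention):
    pmf = delta_psi - 2.303*(R*T/F)*delta_pH."""
    z = 2.303 * R * temp_K / F * 1000.0   # mV per pH unit (~61.5 mV at 37 C)
    return delta_psi_mV - z * delta_pH

# Assumed mitochondrial values: matrix negative and alkaline.
pmf = pmf_mV(-150.0, 1.0)   # about -211.5 mV, i.e. |pmf| ~ 210 mV
```

Both components pull in the same direction here, which is why a pH gradient of only about 1 unit still adds roughly 60 mV to the total driving force on protons.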
DNP is a weak acid that is soluble in the lipid bilayer in both its protonated (neutral) form and its anionic state.

ATP is exported from the matrix by a specific transport protein, the adenine nucleotide translocase; a second membrane transport system is the phosphate translocase, and this transport process is also powered by the transmembrane proton gradient.

NADH cannot cross the inner mitochondrial membrane, yet NADH synthesized during the glycolytic process must finally transfer its electrons to the electron transport chain. The malate-aspartate shuttle is the principal mechanism for the movement of reducing equivalents of cytosolic NADH into the mitochondrial matrix. NADH in the cytosol transfers its electrons to oxaloacetate, so the electrons are carried into the mitochondrial matrix in the form of malate. Malate is transported across the inner membrane with the help of a transporter and enters the mitochondrial matrix, where it is reoxidized, regenerating NADH.

H2O2, a toxic product of various oxidative processes, reacts with double bonds in the fatty acid residues of the erythrocyte cell membrane to form organic hydroperoxides. These, in turn, result in premature cell lysis. Peroxides are eliminated through the action of glutathione peroxidase, yielding glutathione disulfide (GSSG); regeneration of reduced glutathione requires the NADPH supplied by glucose 6-phosphate dehydrogenase (G6PD). So G6PD deficiency results in hemolytic anemia caused by the inability to detoxify oxidizing agents.

The Entner-Doudoroff pathway, first reported by Michael Doudoroff and Nathan Entner, occurs only in prokaryotes, mostly in gram-negative bacteria such as Pseudomonas aeruginosa, Azotobacter and Rhizobium. In this pathway, glucose 6-phosphate is oxidized to 2-keto-3-deoxy-6-phosphogluconate (KDPG), which is cleaved by KDPG aldolase to pyruvate and glyceraldehyde 3-phosphate. The latter is oxidized to pyruvate by the glycolytic pathway, wherein two ATPs are produced by substrate-level phosphorylation (Figure 2).
Photosynthesis consists of two processes. The first is light-dependent (the light reactions): it requires the direct energy of light to make the energy-carrier molecules that are used in the second process. The Calvin cycle, the light-independent process, occurs when the products of the light reactions are used in the formation of carbohydrate.

On the basis of generation of oxygen during photosynthesis, photosynthetic organisms may be oxygenic or anoxygenic. Oxygenic photosynthetic organisms include both eukaryotes (plants and photosynthetic protists) and prokaryotes (cyanobacteria), whereas anoxygenic photosynthetic organisms include only prokaryotes, the green and purple photosynthetic bacteria. In oxygenic photosynthetic organisms, photosynthetic oxygen generation occurs via the light-dependent oxidation of water to molecular oxygen, which can be written as the simplified reaction 2H2O → O2 + 4H+ + 4e-.

Different types of pigments, described as photosynthetic pigments, participate in this process. The major photosynthetic pigment is chlorophyll, a light-absorbing green pigment that contains a polycyclic, planar tetrapyrrole ring structure. Chlorophyll is a lipid-soluble pigment composed of two parts, a tetrapyrrole head and a long hydrophobic tail. It has the following important features: chlorophyll has a cyclopentanone ring (ring V) fused to pyrrole ring III; the propionyl group on ring IV is esterified to a long-chain tetraisoprenoid alcohol, which in chlorophyll a and b is phytol; and pyrrole ring II carries a methyl (-CH3) group. Chlorophyll absorbs violet-blue and red wavelengths of light more strongly than other wavelengths, and chlorophyll a is an essential photosynthetic pigment.

Anoxygenic photosynthetic organisms contain bacteriochlorophyll molecules. Besides the major light-absorbing chlorophyll molecules, cells contain accessory pigments; the two types of accessory pigments are carotenoids and phycobilins. Carotenoids are long-chain, lipid-soluble pigments that can be subdivided into two classes.
Oxygenic photosynthetic organisms contain different types of chlorophyll molecules, such as Chl a, Chl b, Chl c and Chl d; these chlorophyll molecules differ by having different substituent groups on the tetrapyrrole ring. Chlorophyll b is an accessory photosynthetic pigment, absorbing at wavelengths that complement those of chlorophyll a. The tail of chlorophyll is a 20-carbon chain that is highly hydrophobic. Bacteriochlorophyll molecules are related to chlorophyll molecules but absorb light at longer wavelengths. Different groups of anoxygenic photosynthetic organisms contain different types of bacteriochlorophyll: BChl a, BChl b, BChl c, BChl d and BChl e. Carotenoids are generally C40 terpenoid compounds formed by the condensation of eight isoprene units.

Glycogen storage diseases are caused by a genetic deficiency of one or another of the enzymes of glycogen metabolism. Many diseases have been characterized that result from an inherited deficiency of such an enzyme; these defects are listed in the table.

Two main biosynthetic pathways to triacylglycerol are known. In animals, the most important route is the sn-glycerol-3-phosphate (Kennedy) pathway, which operates within all cell types.

Porphyrin biosynthesis involves three distinct processes: synthesis of a substituted pyrrole compound; condensation of four porphobilinogen molecules to yield a partly reduced precursor called a porphyrinogen; and modification of the side chains.

Nucleotides are made by two routes. In de novo ("anew") pathways, the bases are assembled from simple precursors; in salvage pathways, preformed bases are recovered and reattached to ribose. All deoxyribonucleotides are synthesized from the corresponding ribonucleotides: the deoxyribose sugar is generated by the reduction of ribose within a fully formed nucleotide. In pyrimidine synthesis, the framework for the pyrimidine base is assembled first and then attached to ribose; orotate couples to ribose.
Pyrimidine rings are synthesized from carbamoyl phosphate and aspartate; the C-2 and N-3 atoms in the pyrimidine ring come from carbamoyl phosphate. The precursors of carbamoyl phosphate are bicarbonate and ammonia, and its synthesis from bicarbonate and ammonia occurs in a multistep process catalyzed by cytosolic carbamoyl phosphate synthetase II. Carbamoylaspartate then cyclizes to form dihydroorotate, which is then oxidized to form orotate.

Chapter 03 Cell Structure and Functions

The basic structural and functional unit of cellular organisms is the cell; the word cell is derived from the Latin word cellula. Robert Hooke first discovered cells in a piece of cork and also coined the word cell; he published his findings in his famous work Micrographia. Hooke observed only cell walls, because cork cells are dead and without cytoplasmic contents. Anton van Leeuwenhoek was the first person to observe living cells under a microscope.

Cell theory: According to this theory, all living things are made up of cells, and the cell is the basic structural and functional unit of life. The cell theory holds true for all cellular organisms; non-cellular organisms such as viruses do not obey cell theory, because they lack a cell or cell-like structure. Rudolf Virchow proposed an important extension of cell theory: all living cells arise from pre-existing cells (omnis cellula e cellula). The modern cell theory incorporates these components.

On the basis of internal architecture, cells are of two types. Cells that have a unit-membrane-bound nucleus are called eukaryotic; prokaryotic cells lack such unit-membrane-bound organelles. Eukaryotic cells have a much more complex intracellular organization, with internal membranes, as compared to prokaryotic cells; besides the nucleus, they contain organelles such as the Golgi complex. The region of the cell lying between the plasma membrane and the nucleus is the cytoplasm.

Evolution of the cell: The earliest cells probably arose about 3.5 billion years ago.
Anton van Leeuwenhoek was the first person to observe living cells under a microscope; he named them animalcules.

Primitive heterotrophs gradually acquired the capability to derive energy from certain compounds in their environment and to use that energy to synthesize more and more of their own precursor molecules. A very significant evolutionary event was the development of the photosynthetic ability to fix CO2 into more complex organic compounds; the original electron (hydrogen) donor for these photosynthetic organisms was probably H2S. The cyanobacteria are the modern descendants of these early photosynthetic O2 producers.

Details of the evolutionary path from prokaryotes to eukaryotes cannot be deduced from the fossil record alone, but the fossil record shows that the earliest eukaryotic cells evolved about 1.5 billion years ago. Three major changes must have occurred as prokaryotes gave rise to eukaryotes. One important landmark along this evolutionary road was the transition from small cells with relatively simple internal structures to larger cells with far more complex internal organization. In eukaryotes the DNA is associated with proteins; these DNA-protein complexes, called chromosomes, become especially compact at the time of cell division. The cytoplasm is an aqueous compartment bound by the cell membrane.

The plasma membrane acts as a selectively permeable membrane and regulates the molecular traffic across the boundary; it exhibits selective permeability. Different models were proposed to explain the structure and composition of plasma membranes. Jonathan Singer and Garth Nicolson proposed the fluid-mosaic model, which describes both the mosaic arrangement of proteins embedded throughout the lipid bilayer and the fluid movement of lipids and proteins alike. In this model, the fatty acyl chains in the lipid bilayer form a fluid hydrophobic interior, and integral proteins float in this lipid bilayer; both proteins and lipids are free to move laterally in the plane of the bilayer. (Figure 3: the fluid-mosaic model, showing the phospholipid bilayer with integral and peripheral proteins.)
Some aerobic bacteria evolved into the mitochondria of modern eukaryotes.

Chemical constituents of the plasma membrane: All plasma membranes contain lipids, proteins and carbohydrates. The ratio of protein to lipid varies enormously depending on the cell type. Carbohydrates are bound either to proteins, as constituents of glycoproteins, or to lipids, as constituents of glycolipids, and are especially abundant in the plasma membranes of eukaryotic cells.

Phospholipids are made up of four components: fatty acids, a platform to which the fatty acids are attached, a phosphate, and an alcohol attached to the phosphate. The fatty acid components are hydrophobic, whereas the hydrophilic unit is the polar head group. At neutral pH, most phospholipids carry no net charge or a net negative charge; rarer phospholipids have a net positive charge. There are two types of phospholipids. Phospholipids derived from glycerol are called glycerophospholipids (phosphoglycerides); they contain glycerol, are the most numerous phospholipid molecules found in plasma membranes, and are classified according to the type of alcohol linked to the phosphate group. Sphingophospholipids contain an amino alcohol called sphingosine instead of glycerol; sphingomyelin is the most abundant sphingophospholipid. The plasma membrane of animal cells contains four major phospholipids: phosphatidylcholine, phosphatidylethanolamine, phosphatidylserine and sphingomyelin.

Lipid bilayer: The basic structure of the plasma membrane is the lipid bilayer, composed of two leaflets of amphipathic lipid molecules. Three classes of lipid molecules are present in the lipid bilayer: phospholipids, cholesterol and glycolipids. The primary physical forces organizing the lipid bilayer are hydrophobic interactions.

All cells have an electrical potential difference across their plasma membrane. Electrical potential across cell membranes is a function of the electrolyte concentrations in the intracellular and extracellular solutions and of the selective permeabilities of the membrane to those ions. The resulting separation of charge across the membrane constitutes an electric potential.
How do membrane potentials arise? Ion concentration gradients and selective movements of ions create a difference in electric potential, or voltage, across the plasma membrane; this is called the membrane potential. Active transport of ions by ATP-driven ion pumps maintains these gradients. Electrogenic transport affects, and can be affected by, the membrane potential; the electrogenic operation of the Na+/K+ pump directly contributes to the negative-inside membrane potential. In addition to ion pumps, ion channels shape the membrane potential. Movement of ions occurs through ion channels, which may be either leaky (non-gated) channels or gated channels.

Action potentials are the direct consequence of voltage-gated cation channels; the channel passes through its various conformations as a result of the voltage changes that take place during an action potential. At the resting potential of about -70 mV, the voltage-gated Na+ and K+ channels are closed. During the depolarizing phase, voltage-gated Na+ channels open, and the influx of positive charge depolarizes the membrane further. During the repolarizing phase, the Na+ channels inactivate and voltage-gated K+ channels open, returning the membrane toward the resting potential; this process is called repolarization. Following the repolarizing phase there may be an after-hyperpolarizing phase.

The period of time after an action potential begins during which an excitable cell cannot generate another action potential in response to a normal threshold stimulus is called the refractory period; it can be absolute or relative. During the absolute refractory period, no second action potential can be initiated, regardless of the strength of the stimulus; the relative refractory period is the time period during which a second action potential can be initiated, but only by a stronger-than-normal stimulus. The refractory period limits the number of action potentials that can be produced by an excitable membrane in a given period of time.

(Figure 3: time course of an action potential; the top graph depicts the action potential and the bottom graph the state of the gated Na+ and K+ channels, with time in milliseconds on a shared x-axis.)
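The resting and equilibrium potentials discussed above follow from the Nernst equation, E = (RT/zF) ln([ion]out/[ion]in). A sketch with illustrative mammalian ion concentrations (assumed for the example, not taken from this text):

```python
import math

R, F = 8.314, 96485.0   # gas constant (J mol^-1 K^-1), Faraday constant (C mol^-1)

def nernst_mV(c_out: float, c_in: float, z: int = 1, temp_K: float = 310.0) -> float:
    """Nernst equilibrium potential in mV: E = (R*T/(z*F)) * ln(c_out/c_in)."""
    return 1000.0 * R * temp_K / (z * F) * math.log(c_out / c_in)

# Assumed typical mammalian concentrations, in mM:
E_K = nernst_mV(5.0, 140.0)     # K+:  about -89 mV
E_Na = nernst_mV(145.0, 12.0)   # Na+: about +66 mV
```

The resting potential of about -70 mV sits between these two equilibrium values, closer to E_K; opening Na+ channels during the depolarizing phase drives the membrane toward E_Na, and K+ channel opening during repolarization drives it back toward E_K.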
Secretion may occur by a constitutive secretory pathway, carried out by all cells, or by a regulated secretory pathway, carried out by specialized cells; the two pathways diverge in the trans-Golgi network. The constitutive secretory pathway operates in all cells: many soluble proteins are continually secreted from the cell by this pathway, which also supplies the plasma membrane with newly synthesized lipids and proteins. Examples of proteins released by such constitutive, or continuous, secretion include collagen by fibroblasts. Specialized secretory cells also have a regulated secretory pathway, in which proteins destined for secretion, called secretory proteins, are packaged into appropriate secretory vesicles in the trans-Golgi network. The signal that directs secretory proteins into such vesicles is not known. The regulated pathway also mediates the regulated secretion of small molecules. (Figure 3: the constitutive pathway carries vesicles containing soluble proteins from the trans-Golgi network to the plasma membrane and extracellular space; the regulated pathway packages secretory proteins into secretory vesicles in the Golgi complex.)

An example of transcytosis is the movement of maternal antibodies across the intestinal epithelial cells of the newborn rat. The lumen of the gut is acidic, and at this acidic pH the antibodies bind to receptors on the apical surface of the epithelial cells. The receptor-antibody complexes are internalized via clathrin-coated vesicles and are delivered to early endosomes. The complexes remain intact and are retrieved in transport vesicles that bud from the early endosome and subsequently fuse with the basolateral domain of the plasma membrane, at the opposite side of the cell. On exposure to the neutral pH of the extracellular fluid that bathes the basolateral surface of the cells, the antibodies dissociate from their receptors and are released.

Ribosomes consist of rRNA and r-proteins; the r-proteins are termed L or S depending on whether the protein is from the large or the small subunit. The ribosome is an approximately globular structure, and the functional ribosome consists of two subunits of unequal size.
In the regulated secretory pathway, the secreted product can be either a small molecule, such as histamine, or a protein, such as a hormone or digestive enzyme. The regulated secretory pathway is found mainly in cells specialized for secreting products rapidly on demand, such as hormone-secreting cells.

The human genome contains multiple copies of the rRNA genes per haploid set, as do many other species. In all eukaryotes studied so far, there are generally more copies of the 5S genes than of the other rRNA genes.

The sedimentation coefficient has units of seconds; it is the ratio of a velocity to the centrifugal acceleration.

When cells are disrupted by homogenization, the ER breaks up into small closed vesicles called microsomes. Microsomes derived from the RER are studded with ribosomes on the outer surface and are called rough microsomes; microsomes lacking attached ribosomes are called smooth microsomes.

Protein translocation describes the movement of a protein across a membrane; it may occur co-translationally or post-translationally. Proteins synthesized by membrane-bound ribosomes are translocated co-translationally, whereas all proteins synthesized by membrane-free ribosomes are translocated post-translationally. Within the cell, proteins move between compartments in three ways.

Gated transport: Protein translocation between the cytosol and the nucleus occurs through the nuclear pore complexes. This process is called gated transport because the nuclear pore complexes function as selective gates that can actively transport specific macromolecules.

Transmembrane transport: In transmembrane transport, proteins are moved directly across a membrane by membrane-bound protein translocators. The transport of selected proteins from the cytosol into the ER lumen or into mitochondria is an example of transmembrane transport.

Vesicular transport: In vesicular transport, proteins are ferried between compartments by membrane-enclosed transport vesicles; the transfer of proteins from the endoplasmic reticulum to the Golgi apparatus is an example.

The ER is an extensive network of closed and flattened membrane-bound structures; the enclosed compartment is called the ER lumen. ER membranes are physiologically active.
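The definition of the sedimentation coefficient given above (a velocity divided by the centrifugal acceleration, with units of seconds) can be turned into a small calculation. All the numbers in the example are assumed for illustration; svedberg units (1 S = 1e-13 s) are the standard convention for reporting such coefficients:

```python
import math

def sedimentation_S(velocity_m_s: float, rpm: float, radius_m: float) -> float:
    """Sedimentation coefficient s = v / (omega^2 * r), converted to
    svedberg units (1 S = 1e-13 s)."""
    omega = 2.0 * math.pi * rpm / 60.0        # angular velocity, rad/s
    return velocity_m_s / (omega**2 * radius_m) / 1e-13

# Assumed run: a particle moving at 1.8e-5 m/s at a radius of 6.5 cm
# in a rotor spinning at 60,000 rpm:
s_value = sedimentation_S(1.8e-5, 60000.0, 0.065)   # about 70 S
```

Doubling the observed velocity at the same rotor speed and radius doubles the coefficient, which is why the S value characterizes the particle rather than the run conditions.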
The cisternal space or lumen remains continuous with the perinuclear space. Proteins synthesized by ribosomes associated with the membrane of the RER enter into the lumen and membrane of the RER by the process of co-translational translocation. In the lumen of the RER, five principal modifications of proteins occur before they reach their final destinations: The SER acts as the site of lipid biosynthesis, detoxification and calcium regulation. N-linked glycosylation is the attachment of a sugar molecule to the amide nitrogen of an asparagine residue in a protein. In the RER, this process involves the addition of a large preformed oligosaccharide precursor to a protein. This precursor oligosaccharide is linked by a pyrophosphoryl residue to dolichol, a long-chain (75-95 carbon atoms) polyisoprenoid lipid that is firmly embedded in the RER membrane and acts as a carrier for the oligosaccharide. The structure of the N-linked oligosaccharide is the same in plants, animals and single-celled eukaryotes: a branched oligosaccharide containing three glucose (Glc), nine mannose (Man) and two N-acetylglucosamine (GlcNAc) molecules, which is written as Glc3Man9GlcNAc2. Biosynthesis of the oligosaccharide begins on the cytosolic face of the ER membrane with the transfer of N-acetylglucosamine to dolichol phosphate. Two N-acetylglucosamine (GlcNAc) and five mannose residues are added one at a time to a dolichol phosphate on the cytosolic face of the ER membrane. The first sugar, N-acetylglucosamine, is linked to dolichol by a pyrophosphate bridge. This high-energy bond activates the oligosaccharide for its transfer from the dolichol to an asparagine side chain of a nascent polypeptide on the luminal side of the rough ER. Tunicamycin, an antibiotic, blocks the first step in this pathway and thus inhibits the synthesis of the oligosaccharide. After the seven-residue dolichol pyrophosphoryl intermediate is flipped to the luminal face.
The remaining four mannose and all three glucose residues are added one at a time on the luminal side. The sugar molecules participate. ER-resident proteins often are retrieved from the cis-Golgi. As mentioned in the previous section, proteins entering the lumen of the ER are of two types: resident proteins and export proteins. How, then, are resident proteins retained in the ER lumen to carry out their work? The answer lies in a specific C-terminal sequence present in resident ER proteins. Several experiments demonstrated that the KDEL sequence, which acts as a sorting signal, is both necessary and sufficient for retention in the ER. If this ER retention signal is removed from BiP, for example, the protein is secreted from the cell; and if the signal is transferred to a protein that is normally secreted, the protein is now retained in the ER. The finding that most KDEL receptors are localized to the membranes of small transport vesicles shuttling between the ER and the cis-Golgi also supports this concept. The retention of transmembrane proteins in the ER is carried out by short C-terminal sequences that contain two lysine residues (KKXX sequences). How can the affinity of the KDEL receptor change depending on the compartment in which it resides? The answer may be related to the differences in pH. Clearly, the transport of newly synthesized proteins from the RER to the Golgi cisternae is a highly selective and regulated process. The selective entry of proteins into membrane-bound transport vesicles is an important feature of protein targeting, as we will encounter it several times in our study of the subsequent stages in the maturation of secretory and membrane proteins. The Golgi complex, also termed the Golgi body or Golgi apparatus, is a single-membrane-bound organelle and part of the endomembrane system. It consists of five to eight flattened membrane-bound sacs called the cisternae. Each stack of cisternae is termed a Golgi stack or dictyosome.
The cisternae in a Golgi stack vary in number, shape and organization in different cell types. The typical diagrammatic representation shows three major cisternae (cis, medial and trans), as in figure 3. In some unicellular flagellates, however, as many as 60 cisternae may combine to make up the Golgi stack. The number of Golgi complexes in a cell varies according to its function. A mammalian cell typically contains 40 to stacks. In mammalian cells, multiple Golgi stacks are linked together at their edges. Each Golgi stack has two distinct faces: Both cis and trans faces are closely associated with special compartments: Further modifications of the N-linked oligosaccharide in the Golgi apparatus give two broad classes of N-linked oligosaccharides: high-mannose and complex oligosaccharides. The vesicles fuse with the Golgi membranes and release their internally stored molecules into the organelle. Proteins and lipids from the smooth and rough endoplasmic reticulum bud off in tiny bubble-like vesicles that move through the cytoplasm until they reach the Golgi apparatus. Both networks are thought to be important for protein sorting. It modifies proteins and lipids that have been built in the endoplasmic reticulum and prepares them for export outside of the cell or for transport to other locations in the cell. Glycosylation of proteins: N-linked oligosaccharide chains on proteins are altered as the proteins pass through the Golgi cisternae en route from the ER. The Golgi apparatus is especially prominent in cells that are specialized for secretion. When completed. ER - lysosome. Proteins and lipids enter the cis-Golgi network in vesicular tubular clusters arriving from the ER and exit from the trans-Golgi network. Substances from the ER enter into the cis face of a Golgi stack for processing and exit from the trans face. The modifications to molecules that take place in the Golgi apparatus occur in an orderly fashion. In such cells. Secretory vesicles form from the trans-Golgi network.
The chemical make-up of each face is different and the enzymes contained in the cisternae between the faces are distinctive. As we have seen. Once inside. The majority of eukaryotic cells are diploid. Each cell is programmed to respond to specific extracellular signal molecules. This is accomplished by a variety of signal molecules that are secreted or expressed on the surface of one cell and bind to receptors expressed by other cells. The number of chromosomes in a species has no specific significance, nor does it indicate any relationship between two species, which may have the same chromosome number. One chromosome contains multiple origins of replication. It consists of a long array of short. Depending on the eukaryotic organism. Because of its condensed state, it is termed heterochromatin. Every cell maintains a characteristic number of chromosomes.

Centromere: The constricted region of a linear chromosome is known as the centromere. The centromeres serve both as the sites of association of sister chromatids and as the attachment sites for microtubules of the mitotic spindle. Although this constriction is called the centromere.

Origin of replication: The origin of replication (also called the replication origin) is a particular sequence in a chromosome at which replication is initiated.

Telomere: Telomeres are specialized structures.

Chromosome number: All eukaryotic cells have multiple linear chromosomes.

Species (haploid number of chromosomes):
Saccharomyces cerevisiae (budding yeast): 16
Schizosaccharomyces pombe (fission yeast): 3
Caenorhabditis elegans: 6
Arabidopsis thaliana: 5
Drosophila melanogaster: 4
Tetrahymena thermophilus (micronucleus): 5

Extracellular signaling involves: synthesis and release of the signaling molecule by the signaling cell; transport of the signal to the target cell; binding of the signal by a specific receptor, leading to its activation; and initiation of signal-transduction pathways.
Extracellular signaling usually involves the following steps. One important example is the response of cells of the vertebrate immune system to foreign antigens. In paracrine signaling. Membrane-bound signal molecules remain bound to the surface of the cells and mediate contact-dependent signaling. These molecules are divided into two categories: membrane-bound and secretory signal molecules. Notch signalling and classical cadherin signalling are examples of juxtacrine signaling. It is a long-range signaling in which the signal molecule is transported by the blood stream. In animals. Secreted extracellular signal molecules are further divided into three general categories based on the distance over which signals are transmitted: In most cases. In juxtacrine signaling. In autocrine signaling. An example of this is the action of neurotransmitters in carrying signals between nerve cells at a synapse. In endocrine signaling. Certain types of T-lymphocytes respond to antigenic stimulation by synthesizing a growth factor that drives their own proliferation. Unlike other modes of cell signaling. Bcl2 was the first protein shown to cause an inhibition of apoptosis. In the presence of an apoptotic stimulus. The pro-apoptotic Bcl2 proteins consist of two subfamilies. Most cancers are initiated by genetic changes, and the majority of them are caused by changes in somatic cells and therefore are not transmitted to the next generation. In the absence of an apoptotic stimulus. Mammalian Bcl2 family proteins regulate the intrinsic pathway of apoptosis mainly by controlling the release of cytochrome c and other intermembrane mitochondrial proteins into the cytosol. When an apoptotic stimulus triggers the intrinsic pathway. BH3-only proteins are activated and bind to the anti-apoptotic Bcl2 proteins so that they can no longer inhibit the BH proteins.
When no further additions are made, the resulting compound is phosphatidic acid, the simplest phosphoglyceride. The two molecules are mirror images and cannot be superimposed on one another. The rod-shaped E. Such a donor bacterial cell is called an Hfr strain (for high frequency of recombination) because it exhibits a very high efficiency of chromosomal gene transfer in comparison with F- cells. Biomolecules and Catalysis Table 1. For example, trypsin, a proteolytic enzyme, is secreted by the pancreas. In the presence of an apoptotic stimulus, BH3-only proteins are activated and bind to the anti-apoptotic Bcl2 proteins so that they can no longer inhibit the BH proteins. Phosphorylation: Glucose is phosphorylated by ATP to form glucose 6-phosphate. The most widely used reagent is cyanogen bromide (CNBr).
How Much Should You Really Sleep? (Psychologist, liyap.com)

This subject has kept professionals, as well as everyone else, intrigued for a while now because, as unbelievable as it may seem, we are still trying to figure sleep out. Each individual is unique, and not everyone requires the same amount of sleep to feel rested and function properly. Basic observation and experience can confirm this.

We Are All Unique
While your coworker may seem refreshed and energetic at the start of the work day, you may still be holding on to that cup of coffee as if your life depends on it. Perhaps you're full of energy at noon, while everyone else performs their tasks in sluggish silence. Everyone has their own rhythm, despite the similarity of sleep patterns, and that's OK. Nevertheless, there are some ranges that indicate whether we are sleeping enough. What scientists believe about those ranges has changed over time to reflect new findings in the study of sleep. According to the American National Sleep Foundation, this is the recommended amount of sleep you should get, according to your age:

AGE / SLEEP RANGE:
Newborns: 14-17 hours per day.
Infants: 12-15 hours per day.
Toddlers: 11-14 hours per day.
Preschoolers: 10-13 hours per day.
School-aged children: 9-11 hours per day.
Teenagers: 8-10 hours per day.
Young adults (18-25): 7-9 hours per day. This is a new age category, and it is being studied whether the range should be widened.
Adults (26-64): 7-9 hours per day.
Older adults (65 and more): 7-8 hours per day.

We humans spend approximately one-third of our lives sleeping; it's necessary for our health and general well-being. That being said, we need to sleep different amounts of time depending on our age, and basically, as the chart states, the older we get, the less sleep we need. Still, keep in mind that the numbers in the chart are not set in stone, and may vary slightly from person to person.
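For readers who like to see the chart operationally, here is a small sketch that maps an age to the chart's recommended range. The chart above only spells out exact ages for the adult groups, so the cutoffs used for the younger groups below are assumptions, and the function name is illustrative:

```python
def recommended_sleep_hours(age_years):
    """Return the (min, max) recommended hours of sleep per day for a
    given age, following the chart above. Band cutoffs for the younger
    groups are assumed, since the chart names them without exact ages."""
    bands = [
        (65,    (7, 8)),    # older adults (65 and more)
        (26,    (7, 9)),    # adults (26-64)
        (18,    (7, 9)),    # young adults (18-25)
        (14,    (8, 10)),   # teenagers (assumed 14-17)
        (6,     (9, 11)),   # school-aged children (assumed 6-13)
        (3,     (10, 13)),  # preschoolers (assumed 3-5)
        (1,     (11, 14)),  # toddlers (assumed 1-2)
        (1 / 3, (12, 15)),  # infants (assumed roughly 4-11 months)
        (0,     (14, 17)),  # newborns (assumed 0-3 months)
    ]
    # Scan from oldest band down; the first lower bound we clear wins.
    for lower_bound, hours in bands:
        if age_years >= lower_bound:
            return hours

print(recommended_sleep_hours(30))   # (7, 9)
print(recommended_sleep_hours(0.1))  # (14, 17)
```

As the article notes, these are population-level guidelines, not personal prescriptions; the ranges overlap precisely because individual needs vary.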
What Does Science Say?

Sometimes You Need More Sleep
There are some instances, though, in which we need to increase the time we sleep. For example, during pregnancy, while recovering from an illness, and when sleep deprived. Yes, if you don't get enough sleep, you owe your body that rest time and need to increase your sleep quantity and quality. Lack of sufficient sleep over longer periods of time can result in numerous unpleasant, and even dangerous, consequences, such as difficulty performing physical and cognitive tasks, irritability, tiredness, and deteriorated mental health.

Not Too Little, Not Too Much
Other studies indicate that both lack of sleep and oversleeping can lead to an increased mortality rate. Therefore, the importance of sleep is not merely in how energetic you may feel; it is rather a matter that can seriously impact your life. It sounds very dramatic put that way, but it is a reality we have to face. Sleep should become a priority in your life.

Understand Your Sleep
To figure out how much sleep you need, ask yourself (and be mindful when answering) the following questions:

• On a regular day, do you wake up feeling rested?
• How would you describe the mood you usually wake up in?
• How often do you feel that way? (the mood you previously chose)
• Is that attitude positive or negative?
• How often do you wake up with body aches?
• After a restless night, what differences do you notice compared to a regular one? List them and be as specific as you can.

Assessing your sleep pattern and the way it influences your emotional state, along with using the chart above, can be of great help. It would allow you to figure out exactly how much sleep you need to feel well, which may, in fact, increase your chances of living longer.

Full reference:  (Jan 28, 2016). How Much Should You Really Sleep.
Retrieved Jul 23, 2019 from Explorable.com: https://explorable.com/e/how-much-should-you-really-sleep You Are Allowed To Copy The Text The text in this article is licensed under the Creative Commons-License Attribution 4.0 International (CC BY 4.0). This means you're free to copy, share and adapt any parts (or all) of the text in the article, as long as you give appropriate credit and provide a link/reference to this page. That is it. You don't need our permission to copy the article; just include a link/reference back to this page. You can use it freely (with some kind of link), and we're also okay with people reprinting in publications like books, blogs, newsletters, course-material, papers, wikipedia and presentations (with clear attribution).
How To Quickly Change Your Mobility

Most people know that proper mobility is essential to a long and pain-free life, but not many people know why they need it or how to get it safely. We'll be going over what mobility is, why you lose it in certain areas, and how to get those areas functioning better quickly!

What is Mobility?
Mobility is more than just being "flexible," it is about getting proper motion in the proper areas. We all know the gymnast who can do a complete backbend and nearly touch the ground with their shoulders, but I would argue that this isn't proper mobility. Sure, they are flexible, but odds are that they are taking A LOT of motion from just a COUPLE joints and very little from the rest. These few joints that are taking the majority of the motion are doing WAY too much, and will eventually begin to break down and cause pain. Proper mobility means sharing motion evenly across the entire system. So for this gymnast, each joint should evenly provide a little bit of motion to produce the ultimate movement. This is the safest and most effective way to produce movement, and thus, proper mobility.

Why Does My Mobility Suck?
If you've ever tried to stretch some of your muscles before or after a workout, you've probably noticed that your range of motion in certain stretches isn't as good as in others. Usually with this we also see that the more you stretch these areas, the more painful the area gets. You lose mobility for a couple different reasons, but the most common reasons are that you either have a stability issue somewhere requiring compensation or that you have fascial adhesions.

Stability Issues: One of the most common areas we treat for mobility issues is the shoulder. People often report tight upper back and neck muscles, and decreased range of motion in the neck and the shoulder, on a fairly regular basis.
When we look at the shoulder mobility, often times it is poor even though the patient has been stretching the shoulder for weeks. Shouldn’t the stretching help? Nope. This is simply because all it is going to do is treat the symptoms (tightness, stiffness, pain) and not the CAUSE. When we have shoulder instability, the body responds by trying to create stability as best as it can. Unfortunately, the muscles it calls upon are the large and strong muscles (such as the upper traps, pecs and lats). These muscles are so big that they overdo it and begin to cause compression of the shoulder, leading to the decreased range of motion. These muscles stay tight because the body can’t figure out any other way to keep the shoulder stable otherwise. No amount of stretching will ever fix something like this, the only thing you can do to fix mobility is to fix the stability with proper rehabilitation. Fascial Adhesions: When this process of muscle compensation for instability runs for too long, the body responds by increasing the production of fascia. Fascia is a type of tissue that covers our entire body, sitting on and in between muscles, and performs a support function for the body. When the muscles are tight for too long, the body tries to support these areas by putting down more and more fascia. Unfortunately, this build up of fascia can cause issues by itself. It can keep the range of motion decreased (even after you rehabilitate the compensatory muscles) and it has a lot of nerves in it that can cause it to be very painful. How Can I Fix It? If you have any tight muscles or decreased range of motion, it is essential that you begin a proper rehabilitation program. Most of the rehabilitation that we perform is through the Dynamic Neuromuscular Stabilization (DNS) system. By utilizing developmental kinesiology (how humans develop movement from birth), we are able to get you moving in ways that the body knows how to use. 
DNS helps us get your brain involved in the rehabilitation by tapping into fundamental movements that are pre-programmed into our bodies from infancy. The effects of this treatment are often quick and long-lasting. The best option we have for fascial adhesions is a process called "tissue remodeling." This means we need to physically break up and stress the tissue so that the body can replace it with tissue that is more elastic and appropriate for movement. This can be performed by using a FOAM ROLLER or LACROSSE BALL on the area that is tight. Unfortunately, this process can take a long time and you will have to continually stress and remodel the tissue for a minimum of six weeks. It is definitely worth it though, because once the tissue has been remodeled, you won't have to worry about it for a long time!

Questions?
If you have any questions regarding the material in this post, PLEASE reach out to us because we are more than happy to answer them! You can reach out via the comment section, email ([email protected]) or call us at 262-236-9489. Thank you for reading and KEEP MOVING!
Jake B - 1 year ago | Java Question

How do I simulate multiple inputs from a JUnit test case into my program?

I have written a program which looks like this:

import java.util.Scanner;
import java.util.ArrayList;

public class Triangles {

    public static void main(String[] args) {
        Scanner user_input = new Scanner(System.in);
        ArrayList<String> triangleLengths = new ArrayList<String>();
        for (int i = 0; i < 3; i++) {
            System.out.print("Triangle length #" + i + ": ");
            triangleLengths.add(i, user_input.next());
        }
        if (triangleLengths.get(0) == triangleLengths.get(1)
                && triangleLengths.get(1) == triangleLengths.get(2)) {
            System.out.println("This triangle is an equilateral triangle");
        } else if (triangleLengths.get(0) == triangleLengths.get(1)
                || triangleLengths.get(0) == triangleLengths.get(2)
                || triangleLengths.get(1) == triangleLengths.get(2)) {
            System.out.println("This triangle is an isosceles triangle");
        } else if (triangleLengths.get(0) != triangleLengths.get(1)
                && triangleLengths.get(1) != triangleLengths.get(2)) {
            System.out.println("This triangle is a scalene triangle");
        } else {
            System.out.println("The input does not make a triangle!");
        }
    }
}

I have been tasked with writing a JUnit test case to essentially try and 'break' my program through testing with various inputs. I can't for the life of me figure out how to do this as a total Java newbie - could anyone point me in the right direction?

Answer: So I made a suggestion on how to solve it. You want to make it so that you can test with different parameters automatically without needing to enter them manually, so I isolated the triangle logic as seen below.
EDIT: I redid the code somewhat.

The normal run class, src/Main.java:

import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

public class Main {

    public static void main(String[] args) {
        Scanner user_input = new Scanner(System.in);
        List<String> triangleLengths = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            System.out.print("Triangle length #" + i + ": ");
            triangleLengths.add(i, user_input.next());
        }
        // Result output will be here
        Triangle subject = new Triangle(triangleLengths);
        if (subject.getTriangleType() == Triangle.Type.INVALID) {
            System.out.println("Triangle is invalid");
        } else {
            System.out.println("Triangle is: " + subject.getTriangleType());
        }
    }
}

The JUnit class, test/TrianglesTest.java:

import org.junit.Test;

import java.util.Arrays;
import java.util.List;

import static org.junit.Assert.assertEquals;

public class TrianglesTest {

    /**
     * Testing with String inputs (what you'd enter manually).
     */
    @Test
    public void testWithStrings() {
        List<String> triangleLengths = Arrays.asList("4", "4", "7");
        Triangle subject = new Triangle(triangleLengths);
        // Example of checking for the expected type
        assertEquals(Triangle.Type.ISOSCELES, subject.getTriangleType());
    }

    /**
     * Testing with numbers as what I'd expect the triangle to be made of.
     * Here you test with a Triangle object.
     */
    @Test
    public void testWithNumbersAsObject() {
        Triangle subject = new Triangle(4, 5.32, 7);
        assertEquals(Triangle.Type.SCALENE, subject.getTriangleType());
    }

    /**
     * In this piece you check the static method but have no object.
     */
    @Test
    public void testWithNumbersStaticMethod() {
        assertEquals(Triangle.Type.SCALENE, Triangle.getTriangleType(3.4d, 4d, 1.111d));
    }
}

And lastly the actual code you wanted to test, src/Triangle.java:

import java.util.List;

/**
 * I created this so you can either have an object for the triangle or make the check purely static.
 * Maybe you need an object type for later?
 */
public class Triangle {

    final double side0;
    final double side1;
    final double side2;

    public Triangle(List<String> triangleLengths) {
        this(Double.parseDouble(triangleLengths.get(0)),
             Double.parseDouble(triangleLengths.get(1)),
             Double.parseDouble(triangleLengths.get(2)));
    }

    public Triangle(double side0, double side1, double side2) {
        this.side0 = side0;
        this.side1 = side1;
        this.side2 = side2;
    }

    public Triangle.Type getTriangleType() {
        return Triangle.getTriangleType(side0, side1, side2);
    }

    public static Triangle.Type getTriangleType(double side0, double side1, double side2) {
        if (isEquilateral(side0, side1, side2)) {
            return Type.EQUILATERAL;
        } else if (isIsosceles(side0, side1, side2)) {
            return Type.ISOSCELES;
        } else if (isScalene(side0, side1, side2)) {
            return Type.SCALENE;
        } else {
            return Type.INVALID;
        }
    }

    private static boolean isScalene(double side0, double side1, double side2) {
        return side0 != side1 && side1 != side2;
    }

    private static boolean isIsosceles(double side0, double side1, double side2) {
        return side0 == side1 || side0 == side2 || side1 == side2;
    }

    private static boolean isEquilateral(double side0, double side1, double side2) {
        return side0 == side1 && side1 == side2;
    }

    public enum Type {
        EQUILATERAL, ISOSCELES, SCALENE, INVALID
    }
}

Hope this helps. Note that I return the answer from the Triangle class instead of writing it out immediately, and only in the manual run do I write it out from the main method.
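The answer above sidesteps the literal question (faking keyboard input) by refactoring, which is usually the better design. For completeness, the standard-library way to simulate typed input is to swap System.in for an in-memory stream before the code under test runs. A minimal, self-contained sketch; the class and method names here are illustrative, not part of the answer above:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.Scanner;

public class TriangleStdinDemo {

    // Same classification idea as above, but comparing numeric values,
    // which also avoids the String == pitfall in the original program.
    static String classify(double a, double b, double c) {
        if (a == b && b == c) return "equilateral";
        if (a == b || a == c || b == c) return "isosceles";
        return "scalene";
    }

    // Reads three lengths the same way the original main() does,
    // but from whatever System.in currently is.
    static String readAndClassify() {
        Scanner in = new Scanner(System.in);
        double a = Double.parseDouble(in.next());
        double b = Double.parseDouble(in.next());
        double c = Double.parseDouble(in.next());
        return classify(a, b, c);
    }

    public static void main(String[] args) {
        InputStream realIn = System.in;
        try {
            // Pretend the user typed "3 3 3"; a JUnit test would do
            // exactly this inside a @Test method.
            System.setIn(new ByteArrayInputStream("3 3 3".getBytes()));
            System.out.println(readAndClassify()); // prints "equilateral"
        } finally {
            System.setIn(realIn); // always restore the real stdin
        }
    }
}
```

In a real JUnit test, the System.setIn call goes inside the @Test method (or a @Before hook) and assertions replace the println. Note that neither the question's code nor this sketch checks the triangle inequality (each side must be shorter than the sum of the other two), which is exactly the kind of gap a 'breaking' test should probe.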
Why does the clamp QCPC open when you pull the knob?

It is normal for the QCPC to open when you pull up the knob. When you turn the knob to the ON position, the shaft shown in red goes down and pushes down the balls by the spring force. Then the balls hold the clamping pin. If you pull up the knob after clamping, the shaft goes up because it is connected to the knob with a small screw. Then the balls do not receive any pushing force and they become free, thus unlocking the QCPC.
As that which? Einstein used a definition of time for experimental purposes, as that which is measured by a clock. How to parse this sentence Thank you very much. It means that, for experimental purposes, Einstein used the definition that time is “that (thing) which is measured by a clock”.
C# - Multi-dimensional Array

We have learned about single-dimensional arrays in the previous section. C# also supports multi-dimensional arrays. A two-dimensional array is arranged as a series of rows and columns.

Example: Multi-dimensional Array:

int[,] intArray = new int[3,2]{ {1, 2}, {3, 4}, {5, 6} };
// or
int[,] intArray = { {1, 1}, {1, 2}, {1, 3} };

As you can see in the above example, a multi-dimensional array is initialized by giving the size of the rows and columns. [3,2] specifies that the array can include 3 rows and 2 columns. The following figure shows a multi-dimensional array divided into rows and columns:

(Figure: Multi-dimensional Array)

The values of a multi-dimensional array can be accessed using two indexes. The first index is for the row and the second index is for the column. Both indexes start from zero.

Example: Access Multi-dimensional Array

int[,] intArray = new int[3,2]{ {1, 2}, {3, 4}, {5, 6} };
intArray[0,0]; // Output: 1
intArray[0,1]; // 2
intArray[1,0]; // 3
intArray[1,1]; // 4
intArray[2,0]; // 5
intArray[2,1]; // 6

In the above example, intArray[2,1] returns 6. Here, 2 means the third row and 1 means the second column (rows and columns start with a zero index).
What Can We Learn From Ants About Epidemics?
Maria J. Danford

They clean themselves before entering their home. They use special chemicals to disinfect. They limit access to high-traffic areas. And no, they are not human: they are ants. Long before social distancing became a household phrase for us, ants were practicing a version of it to ward off diseases in the nest. And they are good at it.

Ants are effective at preventing epidemics within their colonies, despite their close living quarters and large communities. In fact, epidemics and sick colonies are rarely, if ever, seen in the wild. Thanks in part to this, ants are one of the most successful species on Earth. According to some estimates, they make up nearly a quarter of all terrestrial animal biomass. And because of the social measures they've evolved to use, ant behavior often looks distinctly intelligent, but it is really not.

"You can learn some things from animals, even though it can be quite different for humans," says Nathalie Stroeymeyt, a researcher who studies ants at the University of Bristol. "There's some general principles that are useful, which have been selected for, that you can sort of take inspiration from."

Human communities warding off this generation's biggest pandemic to date: take note. Observed in many species of ants, these disease-curbing social tactics include separating groups by role within the nest, sanitizing themselves and their living quarters, and mixing tree resin with their own poison to kill pathogenic spores.

Socially-Distant Ants
Ants may have a few things to teach humans about giving each other enough space, especially during a pandemic. A 2018 study, published in Science and led by Stroeymeyt, found that when colonies of garden ants were exposed to a pathogen, they changed their behavior in response.
The ants were already divided into two groups: workers that take care of the brood inside the nest, and those that forage outside. After researchers exposed ants in 11 colonies to infectious spores, the ants in each colony began to interact less with ants from the other group and more with one another. The groups effectively became more separate, which prevented the spread of the spores. What's more, after researchers carried out another experiment with 11 more colonies, the ants protected what the study calls high-value individuals: the queen and young worker ants, who usually survived and had less exposure to the spores. And the more numerous ants that had low levels of exposure to the spores showed a heightened immune response to the infection, much like humans do with a vaccine.

Sanitizing and Grooming
We can learn more from ants than just their socially-distant ways. A study published in the Journal of Evolutionary Biology described how ants use their own versions of cleaning and sanitizing one another. Another study, published in 2018 by researchers at the Institute of Science and Technology Austria (IST Austria), built upon this and found that ants adjust sanitary care based on a nest-mate's level of infection. Not only do nest-mates groom themselves before entering the nest, but they also groom each other, a practice known as allogrooming: physically plucking potentially infectious particles from their mate's bodies. When grooming a nest-mate that was exposed to more than one pathogen, the ants adjusted their grooming strategy, increasing the use of their own antimicrobial poison and reducing physical contact.
Again, ants left with low levels of spores on their bodies actually built greater immunity to the fungal spores, in another form of ant inoculation against disease. Ants also use chemicals to prevent entry of a pathogen before the nest has even been established. Many ant species produce a toxic substance within their venom gland called formic acid. They often use it alone to fight off predators or disinfect their nest. Much like humans prefer moving into a clean apartment, ants use this toxic formic acid to sanitize a new living space before they move in. In another study, researchers at IST Austria found that invasive garden ants sprayed their living quarters with formic acid, and that cocoons containing pupae placed in the nest were resistant to this otherwise toxic substance. "When we use harmful cleaning products, we protect ourselves with gloves," said Sylvia Cremer, who worked on the study, in a press release. "The cocoon has a similar function to protective gloves."

Natural Remedies
In addition to grooming themselves and each other, ants have even more strategies to fight disease. Wood ants use the same formic acid that the garden ants use to clean their nests and fight their prey. They also collect tree resin from outside the nest, which has antimicrobial properties, and place it near the brood. But rather than using each substance alone, a study led by Michel Chapuisat, a researcher at the University of Lausanne in Switzerland, suggests that they combine the two within their nests to create an even more powerful anti-fungal agent. His team placed nest materials like twigs, rocks, and resin near worker ants, and kept another set of materials away as a control. They found formic acid present on the resin that was kept near the ants.
Further than that, the resin that the employee ants experienced arrive into get in touch with with experienced bigger anti-fungal properties than the resin they stayed absent from. Other nest products exposed to the acid did not have this antiseptic house.  “There ended up in all probability some genes associated in the tendency to collect resin and these ended up chosen by evolution. No ant has imagined, ‘How can I get rid of ailment?’” says Chapuisat. “But what we can understand are typical principles.” Just like ants, some of our most impressive instruments against the distribute of ailment are in truth substances found in character. With ants as a information, scientists can examine the efficacy of procedures like social distancing, sanitizing and even making use of compounds from character. New investigation from Stroeymeyt will even use ants as a product for so-referred to as super-spreaders by observing the insects and selecting all those that have the most get in touch with with other individuals. This form of product could possibly be made use of in the long run to help discover likely super-spreaders in a local community and vaccinate or immunize them 1st. Some ways from ants, of training course, will not perform for us, like poisoning their younger when they are contaminated and kicking them out of the nest. But the place is not for people to emulate ants, but to do what people do ideal: select and use the ideal pieces of character for our possess use. Next Post Cloud Computing Evolution May Drive Next Stages of Robotics The long term of robotics might be interwoven with the continued progress of cloud computing and wireless infrastructure. Cloud computing might be a key driver for the expansion and spread of robots, according to findings by ABI Investigate. In its Professional and Industrial Robotics report, ABI presents a forecast for […] Subscribe US Now
Journal of Cell Science and Mutations

Short Communication - Journal of Cell Science and Mutations (2022) Volume 6, Issue 1

Growth and standard metabolic rate have a relationship.

Yingxuan Zhang*

Department of Cell and Molecular Science, Fudan University, China

*Corresponding Author:
Yingxuan Zhang
Department of Cell and Molecular Science, Fudan University, China
E-mail: [email protected]

Received: 26-Dec-2021, Manuscript No. AAACSM-22-54968; Editor assigned: 28-Dec-2021, PreQC No. AAACSM-22-54968 (PQ); Reviewed: 10-Jan-2022, QC No. AAACSM-22-54968; Revised: 17-Jan-2022, Manuscript No. AAACSM-22-54968 (R); Published: 24-Jan-2022, DOI: 10.35841/aaacsm-6.1.103

Citation: Zhang Y. Growth and standard metabolic rate have a relationship. J Cell Sci Mut. 2022;6(1):103

Introduction

Cell growth is the process by which cells accumulate mass and increase in size. Generally, animal cells range between ten and twenty micrometres in diameter. Terminally differentiated cells come in a range of sizes, from tiny red blood cells to motor neurons that can grow to be several hundred micrometres long. Water accounts for roughly seventy percent of the weight of a typical dividing cell, while macromolecules such as nucleic acids, proteins, polysaccharides, and lipids account for most of the remaining mass, with ions and small molecules making up the remainder [1]. Proteins usually contribute the most to cell dry mass, accounting for roughly eighteen percent of total cell weight. A range of physical, chemical, and biological variables influence macromolecule production and, as a result, cell size.
Malignancy is particularly relevant here: deregulation of the cellular circuitry that coordinates metabolism and controls macromolecule production is linked to a range of human cancers. There are many different ways in which cells can grow in nature. The size of a cell and the amount of DNA it contains are often connected. Cell size increases when DNA replication is carried out without any sign of cell division, a process referred to as endoreplication. Megakaryoblasts, for example, develop along these lines into granular megakaryocytes, the platelet-producing cells of the bone marrow. Owing to the increased DNA content, these cells stop dividing and undergo repeated rounds of DNA synthesis, resulting in a cell that is between twenty and one hundred micrometres wide. It is unclear whether increased DNA content simply causes a general expansion of cell material or whether cells actively adapt to the larger genome size. This growth strategy can be found in animals, plants, and unicellular life forms around the world [2]. A different approach, involving the accumulation of internal lipids, allows adipocytes to grow to a diameter of eighty-five to one hundred twenty micrometres. In contrast to endoreplication or lipid build-up, some terminally differentiated cells, such as neurons and vascular muscle cells, stop dividing and grow without increasing their DNA content; to help them accomplish their specialized functions, these cells increase their macromolecule content (mainly protein) in a proportionate amount. Nutrients and growth factors provide extracellular cues that are coordinated with intracellular signalling networks controlling cell energy availability and macromolecule accumulation.
Cell growth is probably most tightly regulated in dividing cells, where cell growth and cell division are clearly distinguishable processes. Dividing cells should, for the most part, increase in size with every entry into the cell-division cycle to ensure that a constant typical cell size is maintained. There are also examples in the animal kingdom where cell division without any accompanying growth provides a significant remodelling capability, for instance during the syncytial division phase of the early developing fruit-fly embryo [3]. The biochemical processes that occur within the cell carry out its routine tasks. Reactions are tuned up or down depending on the cell's immediate needs, and this normally works. The various processes involved in forming and breaking down cell components must be monitored and adjusted in a systematic manner at any given moment. To achieve this goal, cells organize reactions into different catalyst-controlled pathways [4]. Enzymes are protein catalysts that carry out the molecular changes supporting cell function, speeding up biochemical reactions. Enzymatic reactions turn substrates into products, most typically by attaching chemical groups to the substrates or removing them. For instance, in the final step of glycolysis, an enzyme known as pyruvate kinase transfers a phosphate group from one substrate (phosphoenolpyruvate) to another (ADP), resulting in the formation of pyruvate and ATP. Enzymatic control of biochemical reactions is an important part of cell maintenance. Enzyme activity allows a cell to respond to changing environmental demands and regulate its metabolic pathways, both of which are essential for cell survival [5].

References
1. Blackburn T. Evidence for a fast-slow continuum of life-history traits among parasitoid Hymenoptera. Funct Ecol. 1991;5(1):65-74.
2. Promislow DE, Harvey PH. Living fast and dying young: a comparative analysis of life-history variation among mammals. J Zool. 1990;220(3):417-37.
3. Steyermark AC. A high standard metabolic rate constrains juvenile growth. J Zool. 2002;105(2):147-51.
/*
 * Copyright (c) 2015 The WebRTC project authors. All Rights Reserved.
 *
 * Use of this source code is governed by a BSD-style license
 * that can be found in the LICENSE file in the root of the source
 * tree. An additional intellectual property rights grant can be found
 * in the file PATENTS. All contributing project authors may
 * be found in the AUTHORS file in the root of the source tree.
 */

#include "webrtc/video_encoder.h"

#include "testing/gtest/include/gtest/gtest.h"
#include "webrtc/modules/video_coding/codecs/interface/video_error_codes.h"

namespace webrtc {

const size_t kMaxPayloadSize = 800;

class VideoEncoderSoftwareFallbackWrapperTest : public ::testing::Test {
 protected:
  VideoEncoderSoftwareFallbackWrapperTest()
      : fallback_wrapper_(kVideoCodecVP8, &fake_encoder_) {}

  class CountingFakeEncoder : public VideoEncoder {
   public:
    int32_t InitEncode(const VideoCodec* codec_settings,
                       int32_t number_of_cores,
                       size_t max_payload_size) override {
      ++init_encode_count_;
      return init_encode_return_code_;
    }
    int32_t Encode(const VideoFrame& frame,
                   const CodecSpecificInfo* codec_specific_info,
                   const std::vector<VideoFrameType>* frame_types) override {
      ++encode_count_;
      return WEBRTC_VIDEO_CODEC_OK;
    }
    int32_t RegisterEncodeCompleteCallback(
        EncodedImageCallback* callback) override {
      encode_complete_callback_ = callback;
      return WEBRTC_VIDEO_CODEC_OK;
    }
    int32_t Release() override {
      ++release_count_;
      return WEBRTC_VIDEO_CODEC_OK;
    }
    int32_t SetChannelParameters(uint32_t packet_loss, int64_t rtt) override {
      ++set_channel_parameters_count_;
      return WEBRTC_VIDEO_CODEC_OK;
    }
    int32_t SetRates(uint32_t bitrate, uint32_t framerate) override {
      ++set_rates_count_;
      return WEBRTC_VIDEO_CODEC_OK;
    }
    void OnDroppedFrame() override { ++on_dropped_frame_count_; }
    bool SupportsNativeHandle() const override {
      ++supports_native_handle_count_;
      return false;
    }

    int init_encode_count_ = 0;
    int32_t init_encode_return_code_ = WEBRTC_VIDEO_CODEC_OK;
    int encode_count_ = 0;
    EncodedImageCallback* encode_complete_callback_ = nullptr;
    int release_count_ = 0;
    int set_channel_parameters_count_ = 0;
    int set_rates_count_ = 0;
    int on_dropped_frame_count_ = 0;
    mutable int supports_native_handle_count_ = 0;
  };

  class FakeEncodedImageCallback : public EncodedImageCallback {
   public:
    int32_t Encoded(const EncodedImage& encoded_image,
                    const CodecSpecificInfo* codec_specific_info,
                    const RTPFragmentationHeader* fragmentation) override {
      return ++callback_count_;
    }
    int callback_count_ = 0;
  };

  void UtilizeFallbackEncoder();

  FakeEncodedImageCallback callback_;
  CountingFakeEncoder fake_encoder_;
  VideoEncoderSoftwareFallbackWrapper fallback_wrapper_;
  VideoCodec codec_ = {};
  VideoFrame frame_;
};

void VideoEncoderSoftwareFallbackWrapperTest::UtilizeFallbackEncoder() {
  static const int kWidth = 320;
  static const int kHeight = 240;
  fallback_wrapper_.RegisterEncodeCompleteCallback(&callback_);
  EXPECT_EQ(&callback_, fake_encoder_.encode_complete_callback_);

  // Register with failing fake encoder. Should succeed with VP8 fallback.
  codec_.codecType = kVideoCodecVP8;
  codec_.maxFramerate = 30;
  codec_.width = kWidth;
  codec_.height = kHeight;
  fake_encoder_.init_encode_return_code_ = WEBRTC_VIDEO_CODEC_ERROR;
  EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK,
            fallback_wrapper_.InitEncode(&codec_, 2, kMaxPayloadSize));
  EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, fallback_wrapper_.SetRates(300, 30));

  frame_.CreateEmptyFrame(kWidth, kHeight, kWidth, (kWidth + 1) / 2,
                          (kWidth + 1) / 2);
  memset(frame_.buffer(webrtc::kYPlane), 16,
         frame_.allocated_size(webrtc::kYPlane));
  memset(frame_.buffer(webrtc::kUPlane), 128,
         frame_.allocated_size(webrtc::kUPlane));
  memset(frame_.buffer(webrtc::kVPlane), 128,
         frame_.allocated_size(webrtc::kVPlane));

  std::vector<VideoFrameType> types(1, kKeyFrame);
  EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK,
            fallback_wrapper_.Encode(frame_, nullptr, &types));
  EXPECT_EQ(0, fake_encoder_.encode_count_);
  EXPECT_GT(callback_.callback_count_, 0);
}

TEST_F(VideoEncoderSoftwareFallbackWrapperTest, InitializesEncoder) {
  VideoCodec codec = {};
  fallback_wrapper_.InitEncode(&codec, 2, kMaxPayloadSize);
  EXPECT_EQ(1, fake_encoder_.init_encode_count_);
}

TEST_F(VideoEncoderSoftwareFallbackWrapperTest, CanUtilizeFallbackEncoder) {
  UtilizeFallbackEncoder();
  EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, fallback_wrapper_.Release());
}

TEST_F(VideoEncoderSoftwareFallbackWrapperTest,
       InternalEncoderNotReleasedDuringFallback) {
  UtilizeFallbackEncoder();
  EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, fallback_wrapper_.Release());
  EXPECT_EQ(0, fake_encoder_.release_count_);
}

TEST_F(VideoEncoderSoftwareFallbackWrapperTest,
       InternalEncoderNotEncodingDuringFallback) {
  UtilizeFallbackEncoder();
  EXPECT_EQ(0, fake_encoder_.encode_count_);
  EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, fallback_wrapper_.Release());
}

TEST_F(VideoEncoderSoftwareFallbackWrapperTest,
       CanRegisterCallbackWhileUsingFallbackEncoder) {
  UtilizeFallbackEncoder();
  // Registering an encode-complete callback should still work when fallback
  // encoder is being used.
  FakeEncodedImageCallback callback2;
  fallback_wrapper_.RegisterEncodeCompleteCallback(&callback2);
  EXPECT_EQ(&callback2, fake_encoder_.encode_complete_callback_);

  // Encoding a frame using the fallback should arrive at the new callback.
  std::vector<VideoFrameType> types(1, kKeyFrame);
  frame_.set_timestamp(frame_.timestamp() + 1000);
  EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK,
            fallback_wrapper_.Encode(frame_, nullptr, &types));
  EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, fallback_wrapper_.Release());
}

TEST_F(VideoEncoderSoftwareFallbackWrapperTest,
       SetChannelParametersForwardedDuringFallback) {
  UtilizeFallbackEncoder();
  EXPECT_EQ(0, fake_encoder_.set_channel_parameters_count_);
  fallback_wrapper_.SetChannelParameters(1, 1);
  EXPECT_EQ(1, fake_encoder_.set_channel_parameters_count_);
  EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, fallback_wrapper_.Release());
}

TEST_F(VideoEncoderSoftwareFallbackWrapperTest,
       SetRatesForwardedDuringFallback) {
  UtilizeFallbackEncoder();
  EXPECT_EQ(1, fake_encoder_.set_rates_count_);
  fallback_wrapper_.SetRates(1, 1);
  EXPECT_EQ(2, fake_encoder_.set_rates_count_);
  EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, fallback_wrapper_.Release());
}

TEST_F(VideoEncoderSoftwareFallbackWrapperTest,
       OnDroppedFrameForwardedWithoutFallback) {
  fallback_wrapper_.OnDroppedFrame();
  EXPECT_EQ(1, fake_encoder_.on_dropped_frame_count_);
}

TEST_F(VideoEncoderSoftwareFallbackWrapperTest,
       OnDroppedFrameNotForwardedDuringFallback) {
  UtilizeFallbackEncoder();
  fallback_wrapper_.OnDroppedFrame();
  EXPECT_EQ(0, fake_encoder_.on_dropped_frame_count_);
  EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, fallback_wrapper_.Release());
}

TEST_F(VideoEncoderSoftwareFallbackWrapperTest,
       SupportsNativeHandleForwardedWithoutFallback) {
  fallback_wrapper_.SupportsNativeHandle();
  EXPECT_EQ(1, fake_encoder_.supports_native_handle_count_);
}

TEST_F(VideoEncoderSoftwareFallbackWrapperTest,
       SupportsNativeHandleNotForwardedDuringFallback) {
  UtilizeFallbackEncoder();
  fallback_wrapper_.SupportsNativeHandle();
  EXPECT_EQ(0, fake_encoder_.supports_native_handle_count_);
  EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, fallback_wrapper_.Release());
}

}  // namespace webrtc
Internet Protocol (IP)

Internet Protocol, or just IP, is a TCP/IP network layer protocol for addressing and routing packets of data between hosts on a TCP/IP network. Internet Protocol (IP) is a connectionless protocol that provides best-effort delivery using packet-switching services.

Position of Internet Protocol in the TCP/IP protocol suite

How it works

IP does not guarantee delivery of data. The responsibility for guaranteeing delivery and sending acknowledgments lies with the higher, transport-level protocol, the Transmission Control Protocol (TCP). The structure of an IP packet is shown in the following diagram. Some of the more important header fields include:

• Source IP address: The IP address of the host transmitting the packet.
• Destination IP address: The IP address of the host to which the packet is being sent, a multicast group address, or the broadcast IP address 255.255.255.255.
• Header checksum: A mathematical computation used for verifying that the packet was received intact.
• Time to Live (TTL): The number of router hops that the packet can make before being discarded.
• Fragment offset: The position of the fragment if the original IP packet has been fragmented (for example, by a router). This information enables the original packet to be reconstructed.

IP packets are routed in the following fashion:

• If IP determines that the destination IP address is a local address, it transmits the packet directly to the destination host.
• If IP determines that the destination IP address is a remote address, it examines the local routing table for a route to the destination host. If a route is found, it is used; if no route is found, IP forwards the packet to the default gateway. In either case, a packet destined for a remote address is usually sent to a router.
• At the router, the TTL is decreased by 1 or more (depending on network congestion), and the packet might be fragmented into smaller packets if necessary. The router then determines whether to forward the packet to one of its local network interfaces or to another router. This process repeats until the packet arrives at the destination host or has its TTL decremented to 0 (zero) and is discarded by a router.

Internet Protocol packet structure

Internet Datagram

The basic unit of data exchange in the IP layer is the Internet Datagram. The format of an IP datagram and a short description of the most important fields are included below:

IP Datagram

• LEN – The number of 32-bit segments in the IP header. Without any OPTIONS, this value is 5.
• TYPE OF SERVICE – Each IP datagram can be given a precedence value ranging from 0-7, indicating the importance of the datagram. This allows out-of-band data to be routed faster than normal data, which matters because Internet Control Message Protocol (ICMP) messages travel as the data part of an IP datagram. Even though an ICMP message is encapsulated in an IP datagram, the ICMP protocol is normally thought of as an integral part of the IP layer, not of the UDP or TCP layer. Furthermore, the TYPE OF SERVICE field allows a classification of the datagram, in order to specify whether the desired service requires short delay time, high reliability, or high throughput. However, for this to have any effect, the gateways must know more than one route to the remote host and, as described in the Introduction, this is not the case.
• IDENT, FLAGS, and FRAGMENT OFFSET – These fields are used to describe fragmentation of a datagram. The actual length of an IP datagram is in principle independent of the length of the physical frames being transferred on the network, referred to as the network's Maximum Transfer Unit (MTU).
If a datagram is longer than the MTU, it is divided into a set of fragments, each having almost the same header as the original datagram but carrying only the amount of data that fits into a physical frame. The IDENT field is used to identify fragments belonging to the same datagram, and the FRAGMENT OFFSET is the relative position of the fragment within the original datagram. Once a datagram is fragmented, it stays that way until it reaches the final destination. If one or more fragments are lost or erroneous, the whole datagram is discarded. However, the underlying network technology is not completely hidden below the IP layer in spite of the fragmentation functionality, because the MTU can vary from 128 bytes or less to several thousands of bytes depending on the physical network (Ethernet has an MTU of 1500 bytes). It is hence a question of efficiency to choose the right datagram size so that fragmentation is minimized. It is recommended that gateways be capable of handling datagrams of at least 576 bytes without having to use fragmentation.
• TIME – This is the remaining Time To Live (TTL) for a datagram when it travels on the Internet. The Routing Information Protocol (RIP) specifies that at most 15 hops are allowed.
• SOURCE IP-ADDRESS and DESTINATION IP-ADDRESS – Both the source and destination addresses are indicated in the datagram header so that the recipient can send an answer back to the transmitting host. However, note that only the host address is specified, not the port number. This is because the IP protocol is an IMP-to-IMP protocol, not an end-to-end protocol. One more layer is needed to actually specify which two processes on the transmitting host and the final destination should receive the datagrams.
• Note that the IP datagram only leaves space for the original source IP address and the original destination IP address. As mentioned in the section Gateways and Routing, the next-hop address is specified by encapsulation.
The Internet Layer passes the IP address of the next hop to the Network Layer. This IP address is bound to a physical address, and a new frame is formed with this address. The rest of the original frame is then encapsulated in the new frame before it is sent over the communication channel.
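The IDENT/FRAGMENT OFFSET mechanics described above can be sketched in a few lines of Python. This is an illustrative model, not a real IP stack: the field names are simplified, and the 20-byte header and 8-byte offset units follow the usual IPv4 conventions.

```python
def fragment(payload, mtu, header_len=20):
    """Split a datagram payload into fragments that each fit in one frame.

    FRAGMENT OFFSET is expressed in 8-byte units, so every fragment except
    the last must carry a multiple of 8 data bytes.
    """
    max_data = ((mtu - header_len) // 8) * 8
    frags = []
    offset = 0
    while offset < len(payload):
        chunk = payload[offset:offset + max_data]
        frags.append({
            "ident": 1,                 # same for all fragments of one datagram
            "offset": offset // 8,      # FRAGMENT OFFSET, in 8-byte units
            "more_fragments": offset + len(chunk) < len(payload),
            "data": chunk,
        })
        offset += len(chunk)
    return frags

# A 4000-byte datagram over an Ethernet-like MTU of 1500 bytes:
frags = fragment(b"x" * 4000, mtu=1500)
print([(f["offset"], len(f["data"]), f["more_fragments"]) for f in frags])
# [(0, 1480, True), (185, 1480, True), (370, 1040, False)]
```

The receiver reassembles by sorting fragments sharing the same IDENT by offset; if any fragment is missing, the whole datagram is discarded, just as described above.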
Introductory Type Theory Notation Summary (2023/12/25)

Background

I have collected an introductory summary, as a presentation, of the notation used in type theory. The types are presented from easy to advanced concepts.

Type Theory Notations

This makes it easy to understand why type parameters (a.k.a. "generic" types) are lowercase in Haskell: it is a design decision that comes from type theory, that is, from computer science. Ordinary languages employ rather arbitrary syntax (probably pragmatic when they were created), like uppercase generics or C-like declarations, which are made-up workarounds¹. Regarding type identifiers, modern languages have adopted the syntax a: A, unlike older languages such as C# and Java, which use the C-like A a. Notice that the syntax a: A comes from type theory, and a is lowercase so as not to confuse a variable with its type, which is also the reason "generics" are lowercase in Haskell, if you ever wondered². Sets in math are denoted in uppercase since they are abstractions containing specific elements (or points in topology, vectors in vector spaces, etc.). Types in CS are like sets, so they are denoted in uppercase, and the variables are lowercase. Therefore, we follow informed designs. Regarding the last slides, homotopy type theory is a new field of study based on a recently discovered connection between homotopy theory and type theory, offering a new "univalent" foundation of math [1]. This collection of type theory symbols and notations, fundamentally related to math and set theory in computer science, can help grasp the meaning of concepts like lambdas and ADTs, serving as a general reference as well as a record of the reasoning behind design decisions.

Bibliography

• "Types and Programming Languages," by Benjamin C. Pierce.
• "Foundations," by Jeremy Avigad.

References

[1] The HoTT Book. (2022, March 18). Homotopy Type Theory.

1.
I read long ago that they tried various arrangements of C++ pointer syntax to make "sense" of them, but it ended up as the same workaround mess that language is; pragmatic designs end up as workarounds rather than real solutions.
2. I mention this because, coming from banal languages like Java (and all the other mainstream languages) where generic types are uppercase, the question arises of why type variables look "weird" in functional languages using lowercase; it makes perfect sense once you understand the theory, which is what any professional engineer does.
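The a: A convention discussed above can be seen in a short, hypothetical Python sketch (Python is used only for illustration; it happens to permit lowercase type-variable names, even though its own style guides usually capitalize them):

```python
from typing import TypeVar

# A lowercase type variable, echoing the type-theory convention the post
# describes; in `x: a`, the term `x` is lowercase and its type annotation
# stands in for "some type A".
a = TypeVar("a")

def identity(x: a) -> a:
    """The polymorphic identity function: one definition for every type."""
    return x

print(identity(42))
print(identity("text"))
```

The same function in Haskell would have the inferred signature identity :: a -> a, with the lowercase a marking a type variable rather than a concrete type.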
23 GoF Design Patterns, Rust Style

This is κeen. The GoF design patterns are famous, but depending on a language's features some of them can be simplified, or become unnecessary altogether, so I tried rewriting them Rust-style. It started with this tweet. To avoid any misunderstanding: I have no complaint about the Qiita article in question; this is commentary on the GoF design patterns themselves. Where the code at the link is already sufficient, I will pass over it quickly here, so please refer to the link as well. For ease of comparison, the sample code follows the linked article. I also wrote this while consulting Mr. Yuki's book as a design-patterns textbook.

Command pattern

This is a place to use an enum. Java has no proper enum, so classes are used there, but if you have enums, they separate data from code and are easier to follow. Also, since all the commands fit in a single type, there is no need to create a trait object. A relatively minor change. In Lisp this would often be solved with closures, but in Rust I think an enum is the better fit.

trait Command<T> {
    fn execute(&self, &mut T);
    fn undo(&self, &mut T);
}

struct Invoker<'a, Cmd, T: 'a> {
    commands: Vec<Cmd>,
    target: &'a mut T,
    current_index: usize,
}

impl<'a, Cmd, T> Invoker<'a, Cmd, T> {
    fn new(t: &'a mut T) -> Self {
        Invoker {
            commands: Vec::new(),
            target: t,
            current_index: 0,
        }
    }

    fn target(&self) -> &T {
        self.target
    }

    fn append_command(&mut self, c: Cmd) {
        self.commands.push(c);
    }
}

impl<'a, Cmd, T> Invoker<'a, Cmd, T>
    where Cmd: Command<T>
{
    fn execute_command(&mut self) {
        if self.commands.len() <= self.current_index {
            // Nothing to do.
            return;
        }
        let c = &self.commands[self.current_index];
        let t = &mut *self.target;
        c.execute(t);
        self.current_index += 1;
    }

    fn execute_all_commands(&mut self) {
        for _ in self.current_index..self.commands.len() {
            self.execute_command();
        }
    }

    fn undo(&mut self) {
        if 0 == self.current_index {
            return;
        }
        self.current_index -= 1;
        let c = &self.commands[self.current_index];
        let t = &mut *self.target;
        c.undo(t);
    }
}

#[derive(Debug, Eq, PartialEq)]
struct Robot {
    x: i32,
    y: i32,
    dx: i32,
    dy: i32,
}

impl Robot {
    fn new() -> Robot {
        Robot {
            x: 0,
            y: 0,
            dx: 0,
            dy: 1,
        }
    }

    fn move_forward(&mut self) {
        self.x += self.dx;
        self.y += self.dy;
    }

    fn set_direction(&mut self, d: (i32, i32)) {
        self.dx = d.0;
        self.dy = d.1;
    }

    fn get_direction(&self) -> (i32, i32) {
        (self.dx, self.dy)
    }
}

enum RoboCommand {
    MoveForward,
    TurnRight,
    TurnLeft,
}

impl Command<Robot> for RoboCommand {
    fn execute(&self, r: &mut Robot) {
        use RoboCommand::*;
        match *self {
            MoveForward => r.move_forward(),
            TurnRight => {
                let (dx, dy) = r.get_direction();
                r.set_direction((dy, -dx))
            }
            TurnLeft => {
                let (dx, dy) = r.get_direction();
                r.set_direction((-dy, dx));
            }
        }
    }

    fn undo(&self, r: &mut Robot) {
        use RoboCommand::*;
        match *self {
            MoveForward => {
                let c1 = TurnRight;
                c1.execute(r);
                c1.execute(r);
                self.execute(r);
                c1.execute(r);
                c1.execute(r);
            }
            TurnRight => {
                let c = TurnLeft;
                c.execute(r);
            }
            TurnLeft => {
                let c = TurnRight;
                c.execute(r);
            }
        }
    }
}

fn main() {
    let mut r = Robot::new();
    let mut invoker = Invoker::new(&mut r);
    assert_eq!(*invoker.target(), Robot { x: 0, y: 0, dx: 0, dy: 1 });
    {
        use RoboCommand::*;
        invoker.append_command(TurnRight);
        invoker.append_command(TurnLeft);
        invoker.append_command(MoveForward);
    }
    invoker.execute_all_commands();
    assert_eq!(*invoker.target(), Robot { x: 0, y: 1, dx: 0, dy: 1 });
    invoker.undo();
    assert_eq!(*invoker.target(), Robot { x: 0, y: 0, dx: 0, dy: 1 });
    invoker.undo();
    assert_eq!(*invoker.target(), Robot { x: 0, y: 0, dx: 1, dy: 0 });
}

State pattern
Same as in the linked article. I think this is a representative case for using trait objects. Alternatively, an enum could also be used.

Strategy pattern

As the linked article explains, Rust has closures, so this pattern is unnecessary.

Template Method pattern

This is just ordinary programming with traits, so I personally wonder whether it deserves a special name in Rust, but if you insist on writing it, it is the same as in the linked article. Depending on the case, higher-order functions also work. Personally, I prefer using associated types over building trait objects:

trait AbstractFactory<'a> {
    type ProdX: ProductX;
    type ProdY: ProductY;
    fn create_product_x(&self) -> Box<ProdX + 'a>;
    fn create_product_y(&self) -> Box<ProdY + 'a>;
}
// ...

Memento pattern

Same as in the linked article.

Observer pattern

Same as in the linked article.

Visitor pattern

The linked article uses such a simple example that it is hard to see, but this is a place to use an enum. The pattern was born from the double handicap of lacking a proper enum type and having only single dispatch with overloaded arguments; with a proper enum type or multiple dispatch, you do not have to write such a convoluted program. Here is a slightly more complex example than the linked one:

trait Visitor<T> {
    fn visit(&mut self, &T);
}

enum Entity {
    File(String),
    Dir(String, Vec<Entity>),
}

struct ConcreteFileVisitor;

impl Visitor<Entity> for ConcreteFileVisitor {
    fn visit(&mut self, e: &Entity) {
        use Entity::*;
        match *e {
            File(ref name) => println!("file: {}", name),
            Dir(ref name, ref files) => {
                println!("dir: {}", name);
                for file in files {
                    self.visit(file)
                }
            }
        }
    }
}

fn main() {
    use Entity::*;
    let e = Dir("/".to_string(),
                vec![File("etc".to_string()), File("usr".to_string())]);
    let mut visitor = ConcreteFileVisitor;
    visitor.visit(&e);
}

It does not feel like it is doing anything special enough to deserve the name "pattern".

Iterator pattern

Same as in the linked article.

Mediator pattern

Mostly the same as in the linked article. If you try to do something complex, multiple kinds of Colleague appear, and I suspect you would end up needing an enum or trait objects.

Interpreter pattern

Builder pattern

I have rarely seen it abstracted this far, but if you do, it is the same as in the linked article. Since Rust has no inheritance, there is little point in the abstraction; a type-specific Builder pattern is enough.

Prototype pattern

Same as in the linked article.

Factory pattern

Closures are enough.

trait Product {
    fn convert(&self, String) -> String;
}

struct Factory;

impl Factory {
    fn convert<P, F>(&self, s: String, create_product: F) -> String
        where P: Product,
              F: FnOnce() -> P
    {
        create_product().convert(s)
    }
}

struct ConcreteProductX;

impl Product for ConcreteProductX {
    fn convert(&self, s: String) -> String {
        s.to_uppercase()
    }
}

fn main() {
    let f = Factory;
    println!("{}",
             f.convert("hogehoge piyopiyo".to_string(), || ConcreteProductX))
}

AbstractFactory pattern
Setting aside whether you would ever go this far, if you do, I think it is the same as in the linked article. As I said under the Template Method pattern, I personally prefer associated types.

Chain of Responsibility (CoR) pattern

Same as in the linked article.

Singleton pattern

This is an anti-pattern to begin with; it is nothing but a hard-to-spot global variable. If you had to do it in Rust, I would use lazy_static.

Adapter pattern

I think there are two ways to look at this one. One is as a workaround for the cramped language rule that an interface can only be implemented when a class is defined. Under that view, as in the linked code, you simply implement the trait and you are done. The other is as a wrapper object. I think the implementation of std::fs::File is a good example of this:

pub struct File {
    inner: fs_imp::File,
}

Bridge pattern

The underlying problem is a language design that forces both feature addition and API abstraction onto inheritance. Rust does not use inheritance, so this does not apply.

Proxy pattern

Same as in the linked article.

Facade pattern

Same as in the linked article. Rust has modules and visibility control, so this is something we do routinely without any special awareness, and I do not think it really deserves a name.

Flyweight pattern

Because of ownership, this one is a little tricky in Rust. For references only, you would put them in a HashMap as in the linked article, or in a Vec if you do not need to distinguish the objects. Incidentally, in Lisp and friends this is known as "interning".

Composite pattern

Just a reimplementation of enums.

Decorator pattern

A common one. Besides the linked code, things like std::io::BufWriter are representative examples.

Closing

I started writing this intending to diss design patterns, but they turned out to be useful in surprisingly many cases. The patterns that are unnecessary in Rust can be counted on ten fingers. Impressive. Incidentally, Peter Norvig, a Lisper and AI researcher (now, I believe, a research director at Google), says in "Design Patterns in Dynamic Languages" that 16 of the patterns become invisible or simpler if you use Lisp's features. He explains which feature makes which pattern unnecessary; I think Rust covers the higher-order-function and module portions, plus a few more thanks to enums.

Written by κeen
Caring for Teeth with Braces and Retainers

Braces, wires, springs, rubber bands, and other appliances can attract food and plaque, which can stain teeth if not brushed away. Food can also react with the bacteria in your mouth and the metal in the braces to produce a bleaching effect, which can cause small, permanent light spots on the teeth. It is recommended that you brush after every meal or snack with fluoride toothpaste, carefully removing any food that may have gotten stuck in your braces. You may also be prescribed or recommended a fluoride mouthwash, which can get into places in the mouth that a toothbrush can't reach. Brush your teeth with a brush specially designed for cleaning between braces.

Foods to Avoid While Wearing Braces

There are certain foods that can break or loosen your braces and should be avoided, such as:

1. Hard or tough-to-bite foods, such as apples or bagels
2. Chewy foods, such as taffy or caramels
3. Corn on the cob
4. Hard pretzels, popcorn, nuts, and carrots

In addition to these foods, do not chew ice or bubble gum.

Caring for Retainers

Every time you brush your teeth, brush your retainer as well. Once a day, or at least once a week, disinfect your retainer by soaking it in a denture cleanser. While playing sports, use a mouth guard designed to fit comfortably over your braces.

Broken Braces

Broken braces, loose bands, or protruding wires can cause problems but rarely require emergency treatment. However, call your dentist or orthodontist to set up an office visit to fix the problem. If you suffer a more severe mouth or facial injury, seek immediate help.

Other Problems

Because braces brush up against the inside surface of your mouth, you may be prone to developing sores. If a sore develops, your orthodontist or dentist may prescribe an ointment or a prescription or nonprescription pain-reliever solution to reduce the pain and irritation and help heal the sore.
Ref: www.webmd.com
Comparisons

Difference Between Analog and Digital Multimeter
We know that multimeters are basically electronic test equipment, used to determine various quantities such as voltage, current, and resistance. Multimeters are generally classified into two types: analog multimeters and digital multimeters. The crucial difference between analog and digital multimeters lies in the way they represent the quantity being measured. An analog …

Difference Between Electrical Energy and Electrical Power
Electrical energy and electrical power are two major terms associated with electrical and electronic systems. The fundamental difference between them is that electrical energy represents the amount of work done that causes electric current to flow through a circuit, whereas electrical power is the rate at which work (basically …

Difference Between Force and Power
The major difference between force and power is that force is an action on a body, or an interaction between two bodies, while power is the amount of energy consumed per unit time during an action on the body. Sometimes people get confused between the terms force and power. This section will provide the necessary factors of differentiation between …
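The energy/power relationship the second snippet describes can be shown with a small worked example (a minimal sketch; the 60 W lamp and the class name EnergyDemo are illustrative choices, not from the source):

```csharp
// Power is the rate of doing work, so energy = power x time.
using System;

class EnergyDemo
{
    static void Main()
    {
        double powerWatts = 60.0;      // a 60 W lamp (illustrative value)
        double timeSeconds = 3600.0;   // running for one hour

        double energyJoules = powerWatts * timeSeconds;

        Console.WriteLine(energyJoules);          // energy in joules: 216000
        Console.WriteLine(energyJoules / 3.6e6);  // same energy in kWh: 0.06
    }
}
```

The same distinction carries over to the force/power snippet: force describes the interaction itself, while power is always energy per unit time.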
Augment Default Controls With Inheritance

Visual Studio .NET ships with a nice set of controls and classes to build Windows applications, but with only a little effort, you can augment these controls so they serve you better.

Technology Toolbox: C#

Inheritance lets you define objects for easy reuse. The advantage of this is that you can alter controls—both your own and those created by others—so they behave exactly as you need them to. This is especially useful when you have code that is almost, but not quite, what is required. Inheritance lets you reuse this code, adding modifications "by exception."

For example, assume you have a business object class. This class might contain fundamental functionality to load, modify, and verify data. You can break, or subclass, this class into several different classes, such as a name business object, an invoice business object, and so forth. These subclassed objects are only concerned with functionality specific to their problem domains, and they don't need to re-create functionality that can access databases, and so on. This means you can keep the code to perform common tasks such as data access in a single location. Once tested, you can use this code across all the classes that inherit from the base business object. Similarly, you could fix any bug in this code in one place, which greatly increases the maintainability of the code base. An additional benefit: You can subclass the "child classes"—the individual business object classes that inherit from the base business object—to provide more specific functionality. A customer business object might be derived from the name business object and require that only a few minor details be changed.

The business object scenario is a relatively common one, but you can use this same technique for visual control classes as well. In fact, Microsoft makes heavy use of this technique in the Visual Studio .NET controls, which themselves derive from Windows controls.
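The business-object layering described above might look roughly like this (a minimal sketch; all class and member names are illustrative, not taken from any real framework):

```csharp
using System;
using System.Collections.Generic;

// Base class: common plumbing (load, save, verify) lives here once.
public abstract class BusinessObjectBase
{
    public virtual void Load(int id) { /* shared data-access code */ }
    public virtual void Save()       { /* shared data-access code */ }

    // Subclasses add their own rules "by exception."
    public virtual IList<string> Verify()
    {
        return new List<string>();
    }
}

// A name business object: only name-specific concerns live here.
public class NameBusinessObject : BusinessObjectBase
{
    public string FirstName = "";
    public string LastName = "";

    public override IList<string> Verify()
    {
        IList<string> problems = base.Verify();
        if (LastName.Length == 0)
            problems.Add("Last name is required.");
        return problems;
    }
}

// A customer object derives from the name object and changes
// only a few details, exactly as the text describes.
public class CustomerBusinessObject : NameBusinessObject
{
    public string CustomerNumber = "";
}
```

A bug fixed in BusinessObjectBase.Save() is fixed for every derived class at once, which is the maintainability benefit the article is pointing at.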
A simple button, for instance, is subclassed from a class called ButtonBase. Several other classes, such as checkboxes, also inherit from this base class. ButtonBase in turn inherits from a class called Control, which inherits from another class called Component, which inherits from a class called MarshalByRefObject, which inherits from the mother of all classes: Object. The same is true for other WinForms controls as well. It isn't difficult to create these inheritance chains, but you do have to put some thought into creating a flexible and well-designed inheritance structure. People don't fear the ability to fix things in a global manner, but they do fear breaking them globally. What if Microsoft puts a bug into the Control class in a future version of the .NET Framework? The answer is simple: It breaks all WinForms controls. Fortunately, these types of bugs are generally so obvious that they're often easier to detect and fix than other bugs.

Create an Inherited Control

Let's begin by inheriting from the Button class. Inheriting from a sophisticated control like this one lets you make your WinForms controls look and behave exactly the way you want with only a minimum of effort. Perhaps you want all your buttons to appear in a flat style with a "Tahoma" font and a height of 21 pixels. You could implement such a control by changing all these properties every time you drop a button on a form, but that would be a waste of time. Instead, subclass the button and use your subclass instead:

public class MyButton : System.Windows.Forms.Button
{
    public MyButton()
    {
        this.Font = new System.Drawing.Font("Tahoma", 8);
        this.Height = 21;
        this.FlatStyle = System.Windows.Forms.FlatStyle.Flat;
    }
}

This is nothing more than a standard button control with a slightly customized appearance. This control looks like any other WinForms button, exposes the same events and properties, and provides the same developer experience as a regular button.
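The inheritance chain described above can be verified with a short reflection sketch (assumes a project reference to System.Windows.Forms; the class name ShowChain is illustrative):

```csharp
using System;

class ShowChain
{
    static void Main()
    {
        // Walk BaseType upward from Button to Object and print each step.
        Type t = typeof(System.Windows.Forms.Button);
        while (t != null)
        {
            Console.WriteLine(t.FullName);
            t = t.BaseType;
        }
        // On the .NET Framework this walks Button, ButtonBase, Control,
        // Component, MarshalByRefObject, and finally Object.
    }
}
```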
The only difference is that you don't need to set these properties manually when you drop this control on a form. The nice thing about this construct is that it provides a place where you can make additional changes later in the process. Assume you have a customer who really likes an application you wrote, but now wants all the buttons to have a blue background, similar to what you see in Microsoft Office 2003. The customer also wants you to change the font so it's just a tad larger. You can now take this same button class and make the desired changes quickly and easily:

public class MyButton : System.Windows.Forms.Button
{
    public MyButton()
    {
        this.Font = new System.Drawing.Font("Tahoma", 10);
        this.Height = 21;
        this.FlatStyle = System.Windows.Forms.FlatStyle.Flat;
        this.BackColor = System.Drawing.Color.LightBlue;
    }
}

All your buttons are now blue and have a 10-point font size. This scenario raises an interesting point: You might be happy with a button's default appearance now, but you might want to change the look and feel later. Subclassing the control now gives you a single entry point to make changes later. Similarly, you might want to take advantage of new features in Windows. For example, Tablet PCs are becoming more popular, but few developers write applications for them specifically. Creating an entry point into your control could enable you to take advantage of inking capabilities without a major rewrite.

You now have a new, customized Button class. Let's add it to the toolbox. You need to compile it into a .NET assembly before you can add it to the toolbox, but it doesn't matter whether you compile it into an EXE or DLL. However, you must include a reference to the WinForms namespace, or it won't compile. Once you compile the assembly, right-click on your toolbox and select Add Tab to create a new category of controls. Note that you could add your control to an existing default category, but that might make things confusing down the road.
Next, right-click in the new tab and choose Customize Toolbox. This brings up a dialog that lets you pick controls installed in the Global Assembly Cache (GAC) or as COM components. The new assembly is neither, so click on the Browse button to navigate to the newly compiled file. Choosing a file makes all the classes in it appear in the toolbox automatically. Click on OK to complete the process. You can now see your button control in the toolbox (see Figure 1). You can drag and drop it onto forms like any other control.

Choose a Toolbox Icon

Note that the button shows up with a default icon that is not particularly useful. However, you can specify a ToolboxBitmap attribute easily. You can use either an existing, similar bitmap, or a new icon that you create specifically for this control. For the sake of simplicity, let's use the same bitmap as other Button controls:

[System.Drawing.ToolboxBitmap(
    typeof(System.Windows.Forms.Button))]
public class MyButton : System.Windows.Forms.Button
{
    // ...
}

You could easily create your own bitmap and specify its name in the attribute. You could also use a naming convention that points the IDE to a bitmap file, but I recommend using the explicit definition through the ToolboxBitmap attribute because it's much less fragile. I encourage you to experiment a bit with this new button class. Drop instances of the button on a form, and change things around. You can change the button's caption or attach event handler code. The button really isn't much different from any other button, and you can change the properties you set in it initially. You can adjust the default settings as your needs require. Try creating a few forms, drop some buttons onto them, and run the sample. Then go back to your Button class, change a few properties, and re-run the sample app without changing anything else. All the buttons on all the forms will change to the new defaults specified by the class.
The only exceptions to this are properties you set specifically on some form. The button control example is useful, but also relatively simple and unsophisticated. You might want to do more than change a control's visual appearance. For example, I hate the fact that the DataGrid doesn't feature a real double-click event that fires whenever the user double-clicks a row in the grid, so I wrote some code that does this (see Listing 1). This class fires a new event called GridDoubleClick every time the user double-clicks a row in the grid. The problem with the native double-click is that the first click is handled by the grid itself, but the second click is trapped by the control used for each column (such as a textbox), resulting in two single clicks on different objects. GridDoubleClick gets around this problem by memorizing the time of the first click in the grid. The new class hooks events fired by the column controls automatically by binding to them whenever they get added. This happens in the OnControlAdded() method. Whenever someone clicks on the control, the event handler traps the click and compares the current time with the last time the grid was clicked on. The class fires the GridDoubleClick event if the interval is no longer than the system-specified double-click interval. No solution fits everyone's needs out of the box, but you can use inheritance and subclassing to augment functionality in the ways your applications require. I recommend that you ignore the standard tab full of default controls in WinForms projects, and instead create your own subclasses and use them exclusively. Prepare one set of subclasses that you use as the starting point for every project, then another for each project.
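Listing 1 itself is not reproduced in this excerpt, but the DataGrid technique the text describes might look roughly like this sketch; the class name DoubleClickGrid and the exact event wiring are assumptions, not the article's actual code:

```csharp
using System;
using System.Windows.Forms;

// A DataGrid subclass that raises a true row double-click event,
// following the approach described in the text.
public class DoubleClickGrid : System.Windows.Forms.DataGrid
{
    public event EventHandler GridDoubleClick;

    private DateTime lastClick = DateTime.MinValue;

    // Column controls (textboxes, etc.) swallow the second click,
    // so hook their Click events as the grid adds them.
    protected override void OnControlAdded(ControlEventArgs e)
    {
        base.OnControlAdded(e);
        e.Control.Click += new EventHandler(ChildControlClick);
    }

    // Remember the time of the first click, which the grid itself handles.
    protected override void OnClick(EventArgs e)
    {
        base.OnClick(e);
        lastClick = DateTime.Now;
    }

    private void ChildControlClick(object sender, EventArgs e)
    {
        // Fire if the two clicks fall within the system-specified
        // double-click interval (SystemInformation.DoubleClickTime, ms).
        TimeSpan elapsed = DateTime.Now - lastClick;
        if (elapsed.TotalMilliseconds <= SystemInformation.DoubleClickTime)
        {
            if (GridDoubleClick != null)
                GridDoubleClick(this, EventArgs.Empty);
        }
        lastClick = DateTime.Now;
    }
}
```

A consumer simply subscribes to GridDoubleClick instead of trying to combine the grid's and the column controls' separate single-click events.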
10600 York Rd Suite 105, Cockeysville, MD 21030

Guide: Various Teeth Whitening Treatments

Are you looking to brighten your smile but overwhelmed by the myriad of teeth whitening treatments available? At Valley Dental Health, we've compiled a comprehensive guide to help you navigate the various options, ensuring you find the perfect solution to achieve the dazzling smile you've always wanted.

Over-The-Counter Whitening Products

When it comes to achieving a brighter smile, over-the-counter (OTC) whitening products offer a convenient and cost-effective solution. These products, ranging from whitening toothpastes to strips, gels, and trays, are readily available at most drugstores and online retailers. They contain varying concentrations of bleaching agents, such as hydrogen peroxide or carbamide peroxide, which work to remove surface stains and, in some cases, deeper discoloration. However, it's important to note that the effectiveness of these products can vary greatly depending on the type of stains, the concentration of the active ingredient, and the duration of use. Despite their accessibility and ease of use, OTC whitening products are not without their drawbacks. Users may experience sensitivity or irritation of the gums and teeth, especially with prolonged use or if the product is not used as directed. Additionally, while these products can be effective for mild staining, they may not produce the desired results for more severe discoloration or intrinsic stains that affect the inner layers of the tooth. It's always recommended to consult with a dental professional before starting any whitening regimen to ensure it's suitable for your dental health and to explore potentially more effective options for your specific needs.

Professional In-Office Whitening Procedures

Professional in-office whitening procedures offer a fast and effective way to brighten your smile under the supervision of dental professionals.
Unlike over-the-counter options, these treatments use higher concentrations of whitening agents, providing more dramatic results in a shorter amount of time. During the procedure, a protective barrier is applied to the gums to prevent irritation, and a potent bleaching gel is carefully applied to the teeth. Some methods may also utilize a special light or laser to enhance the whitening process. This controlled environment ensures not only the safety of the patient but also the uniformity and longevity of the whitening effects. For those seeking a reliable and impactful solution to achieve a brighter smile, exploring professional options is a must. Cockeysville's Premier Teeth Whitening services offer state-of-the-art in-office treatments tailored to meet individual needs and expectations. With the guidance of experienced dental professionals, patients can enjoy a noticeably whiter smile in just one visit, making it an ideal choice for those looking for immediate and lasting results.

Natural Whitening Home Remedies

In the quest for a brighter smile, many individuals turn to natural whitening home remedies, a cost-effective and accessible option. These methods often utilize everyday household items, promising to remove surface stains without the need for harsh chemicals. Baking soda, for instance, is a popular choice due to its mild abrasive properties that can gently polish away stains. Similarly, hydrogen peroxide, a common antiseptic, is used in diluted form as a mouthwash to bleach teeth subtly over time. Another favored remedy is oil pulling with coconut oil, which is believed to pull bacteria from the teeth, thus reducing plaque and discoloration. While these natural solutions can be effective for minor staining, it's important to approach them with realistic expectations and understand that results may vary. Always consult with a dental professional before trying new treatments to ensure they are safe for your specific dental health.
Whitening Toothpastes And Mouthwashes

In the quest for a brighter smile, whitening toothpastes and mouthwashes are the go-to options for many. These products are infused with mild abrasives and chemicals that work to remove surface stains on your teeth, making them appear whiter over time. Whitening toothpastes, in particular, may contain polishing agents or a small amount of peroxide to enhance their stain-removing effectiveness. Mouthwashes, on the other hand, offer a dual action of whitening while also improving oral health by reducing bacteria and freshening breath. It's important to note, however, that these products are best for maintaining professionally whitened teeth or for achieving slight improvements in whiteness. For more significant changes, consulting with dental professionals is recommended. Discover more about maintaining your brightest smile at Valley Dental Health.

Laser Teeth Whitening Techniques

Laser teeth whitening techniques have emerged as a popular and effective method for achieving a brighter smile. This advanced dental procedure uses a concentrated beam of light, typically from a laser, to accelerate the bleaching process of a whitening agent applied to the teeth. The precision and intensity of the laser allow for significant color improvement in a single session, making it an attractive option for those seeking immediate results. Not only does laser teeth whitening offer a quick solution for discolored teeth, but it also ensures a safer and more controlled treatment, minimizing the risk of damage to the tooth enamel and gums. With its combination of speed, efficiency, and safety, laser teeth whitening stands out as a premier choice for individuals looking to enhance the brightness of their smile.
Primary sensory processing of visual and olfactory signals in the bumblebee brain
Mertes M (2013). Bielefeld: Bielefeld University. Open-access Bielefeld e-dissertation (English).

Alternative title: What makes a landmark a landmark? How active vision strategies help honeybees to process salient visual features for spatial learning

Abstract

For decades honeybees have been used as an insect model system for answering scientific questions in a variety of areas. This is due to their enormous behavioural repertoire paired with their learning capabilities. Similar learning capabilities are also evident in bumblebees, which are closely related to honeybees. Like honeybees, they are central-place foragers that commute between a reliable food source and their nest and therefore need to remember particular facets of their environment to reliably find their way back to these places. Through their flight style, which consists of fast head and body rotations (saccades) interspersed with flight segments of almost no rotational head movement (intersaccades), they can acquire distance information about objects in the environment. Depending on the structure of the environment, bumblebees as well as honeybees can use these objects as landmarks to guide their way between the nest and a particular food source. Landmark learning as a visual task depends, of course, on the visual input perceived by the animal's eyes. As this visual input changes rapidly during head saccades, in my first project we recorded bumblebees with high-speed cameras in an indoor flight arena while they were solving a navigation task that required them to orient according to landmarks. First of all we tracked head orientation during the whole flight periods that served to learn the spatial arrangement of the landmarks. In this way we acquired detailed data on the fine structure of the head saccades that shape the visual input they perceive.
Head saccades of bumblebees exhibit a consistent relationship between their duration, peak velocity and amplitude, resembling the human so-called "saccadic main sequence" in its main characteristics. We also found the bumblebees' saccadic sequence to be highly stereotyped, similar to many other animals. This hints at a common principle of reliably reducing the time during which the eye is moved by fast and precise motor control. In my first project I tested bumblebees with salient landmarks in front of a background covered with a random-dot pattern. In a previous study, honeybees were trained with the same landmark arrangement and were additionally tested using landmarks that were camouflaged against the background. As the pattern of the landmark textures did not seem to affect their performance in finding the goal location, it had been assumed that the way they acquire information about the spatial relationship between objects is independent of the objects' texture. Our aim for the second project of my dissertation was therefore to record the activity of motion-sensitive neurons in the bumblebee to analyse to what extent object information is contained in a navigation-related visual stimulus movie. We also wanted to clarify whether object texture is represented in the neural responses. As recording from neurons in free-flying bumblebees is not possible, we used one of the recorded bumblebee trajectories to reconstruct a three-dimensional flight path including data on head orientation. We could therefore reconstruct ego-perspective movies of a bumblebee while it solved a navigational task. These movies were presented to motion-sensitive neurons in the bumblebee lobula. We found for two different classes of neurons that object information was contained in the neuronal response traces. Furthermore, during the intersaccadic parts of flight the object's texture did not change the general response profile of these neurons, which nicely matches the behavioural findings.
However, slight changes in the response profiles acquired for the saccadic parts of flight might allow texture information to be extracted from these neurons at later processing stages. In the final project of my dissertation I switched from exploring the coding of visual information to the coding of olfactory signals. For honeybees and bumblebees, olfaction is approximately as important for their behaviour as their sense of vision. But whereas there is a solid knowledge base on honeybee olfaction, with detailed studies on the single stages of olfactory information processing, this knowledge was missing for the bumblebee. In the first step we conducted staining experiments and confocal microscopy to identify input tracts conveying information from the antennae to the first processing stage of olfactory information, the antennal lobe (AL). Using three-dimensional reconstruction of the AL we could further elucidate typical numbers of the single spheroidal subunits of the AL, which are called glomeruli. Odour molecules that the bumblebee perceives induce activation patterns characteristic of particular odours. By retrogradely staining the output tracts that connect the AL to higher-order processing stages with a calcium indicator, we were able to record the odour-dependent activation patterns of the AL glomeruli and to describe their basic coding principles. As in honeybees, we could show that the odours' carbon chain length as well as their functional groups are dimensions that the antennal lobe glomeruli encode in their spatial response pattern. Applying correlation methods underlined the strong similarity of the glomerular activity patterns between honeybees and bumblebees.

Citation: Mertes M. (2013). Primary sensory processing of visual and olfactory signals in the bumblebee brain. Bielefeld: Bielefeld University.
Single disc vapor lubrication
Patent Number: 8382902
Inventor: Stirniman, et al.
Date Issued: February 26, 2013
Primary Examiner: MacArthur, Sylvia R.
U.S. Class: 118/726; 118/718; 118/719; 156/345.18; 156/345.37
Field Of Search: 118/718; 118/726; 118/715; 118/719; 156/345.18; 156/345.37
International Class: C23C 16/00
Foreign Patent Documents: 318071; WO 9904909
Patent Drawings: 8382902-3; 8382902-4 (2 images)

Abstract: Apparatus and method for vapor deposition of a uniform-thickness thin film of lubricant on at least one surface of a disk-shaped substrate. The invention has particular utility in depositing thin films of polymeric lubricants onto disc-shaped substrates in the manufacture of magnetic and MO recording media.

Claim: What is claimed is:

1. An apparatus comprising: a heated elongated lubricant source for transporting lubricant fluid to be thermally vaporized, wherein the heated elongated lubricant source comprises a chamber and a plurality of primary plugs, wherein each primary plug of the plurality of primary plugs comprises a drilled hole and two openings, and wherein a first opening of the two openings is operable to transport the lubricant fluid from the heated elongated lubricant source to a second opening of the two openings, wherein the lubricant fluid is in vaporized form at the second opening for being dispensed over a substrate of a recording media; and wherein the heated elongated lubricant source comprises a plurality of threaded holes into which the plurality of primary plugs is screwed therein. 2.
The apparatus according to claim 1, further comprising a deposition chamber having an interior space, wherein the deposition chamber is adapted for maintaining the interior space at a pressure ranging from 10^-5 to 10^-9 Torr. 3. The apparatus according to claim 1 further comprising a substrate loader/unloader operable to provide cooling/condensation of the lubricant vapor, wherein the cooling/condensation substantially prevents the lubricant vapor from escaping an interior space of a deposition chamber. 4. The apparatus according to claim 3, wherein the substrate loader/unloader is further operable to supply and withdraw the substrate of the recording media having a pair of opposed surfaces, and wherein the substrate loader/unloader is further operable to mount and grip the substrate of the recording media. 5. The apparatus according to claim 4, wherein the heated elongated lubricant source has a length greater than an outer diameter of the substrate that is disc-shaped. 6. The apparatus according to claim 1, wherein a size of a drill hole of a first primary plug of the plurality of primary plugs is different from a size of a drill hole of a second primary plug of the plurality of primary plugs for substantially even distribution of the lubricant vapor onto the substrate of the recording media. 7. The apparatus according to claim 6, wherein the heated elongated lubricant source further comprises a plurality of secondary plugs for increased collimation of the stream of lubricant vapor, wherein the plurality of secondary plugs is offset from the plurality of primary plugs. 8. The apparatus according to claim 6 further comprising a second heated elongated lubricant source that is positioned at a given distance apart from the elongated lubricant source on a path of a transport/conveyance for continuously moving one or more recording media. 9.
The apparatus according to claim 1 further comprising: a closed heated deposition chamber cylindrically-shaped with circularly-shaped upper and lower ends; a substrate loader/unloader comprising at least one combined substrate load/unload station on one of the upper or lower ends; and a substrate transporter/conveyor operable to move the substrate of the recording media in a circular path, wherein the apparatus comprises a second heated elongated lubricant source, and wherein the substrate transporter/conveyor is operable to move the substrate of the recording media past the heated elongated lubricant source and past the second heated elongated lubricant source. 10. The apparatus according to claim 9, wherein the heated elongated lubricant source is positioned at a first surface of the substrate of the recording media and the second heated elongated lubricant source is positioned at a second surface of the substrate of the recording media. 11. The apparatus according to claim 1 further comprising: an elongated, rectangular box-shaped chamber having a pair of longitudinally extending front and rear walls; a substrate loader/unloader comprising a substrate load lock chamber connected to the elongated, rectangular box-shaped chamber at a first end of the front wall and a substrate exit lock chamber connected to the elongated, rectangular box-shaped chamber at a second end of the front wall; and a substrate transporter/conveyor operable to move the substrate of the recording media in a linear path, wherein the heated lubricant source further comprises a plurality of transversely extending, elongated lubricant sources that extend transversely across the front wall in a space between the load lock chamber and the exit lock chamber, and wherein the substrate transporter/conveyor is further operable to move the substrate of the recording media past each of the transversely extending, elongated lubricant sources. 12.
The apparatus according to claim 1, wherein the plurality of primary plugs forms a linear array, a diagonal array, or a rectangular array pattern. 13. The apparatus according to claim 1, wherein a first set of primary plugs of the plurality of primary plugs positioned at the outer edges of the heated lubricant source has a smaller diameter drilled hole than a second set of primary plugs of the plurality of primary plugs positioned adjacent to the middle of the heated lubricant source for substantially even distribution of the lubricant vapor onto the substrate of the recording media. 14. The apparatus according to claim 1, wherein the plurality of primary plugs dispenses the lubricant vapor homogeneously over the substrate of the recording media to form a substantially uniform thickness of the lubricant over the substrate. 15. The apparatus according to claim 1, wherein a rate of the lubricant vapor deposition is controlled by a rate of a speed that the substrate of the recording media is passed along the heated elongated lubricant source. 16. The apparatus according to claim 1, wherein a rate of the lubricant vapor deposition is controlled by varying a pressure of the lubricant vapor deposition. 17. The apparatus according to claim 1, wherein a rate of the lubricant vapor deposition is controlled by a size of a drill hole associated with a primary plug of the plurality of primary plugs.

Description:

FIELD OF THE INVENTION

The present invention relates to an apparatus and method for uniformly applying a thin film of a lubricant to the substrate surfaces in a solventless manner.
The invention has particular utility in the manufacture of magnetic or magneto-optical ("MO") data/information storage and retrieval media comprising a layer stack or laminate of a plurality of layers formed on a suitable substrate, e.g., a disc-shaped substrate, wherein a thin lubricant topcoat is applied to the upper surface of the layer stack or laminate for improving tribological performance of the media when utilized with read/write transducers operating at very low flying heights.

BACKGROUND OF THE INVENTION

Magnetic and MO media are widely employed in various applications, particularly in the computer industry for data/information storage and retrieval purposes. A magnetic medium in, e.g., disc form, such as utilized in computer-related applications, comprises a non-magnetic disc-shaped substrate, e.g., of glass, ceramic, glass-ceramic composite, polymer, metal, or metal alloy, typically an aluminum (Al)-based alloy such as aluminum-magnesium (Al--Mg), having at least one major surface on which a layer stack or laminate comprising a plurality of thin film layers constituting the medium are sequentially deposited. Such layers may include, in sequence from the substrate deposition surface, a plating layer, e.g., of amorphous nickel-phosphorus (Ni--P), a polycrystalline underlayer, typically of chromium (Cr) or a Cr-based alloy such as chromium-vanadium (Cr--V), a magnetic layer, e.g., of a cobalt (Co)-based alloy, and a protective overcoat layer, typically of a carbon (C)-based material having good tribological properties. A similar situation exists with MO media, wherein a layer stack or laminate is formed on a substrate deposition surface, which layer stack or laminate comprises a reflective layer, typically of a metal or metal alloy, one or more rare-earth thermo-magnetic (RE-TM) alloy layers, one or more transparent dielectric layers, and a protective overcoat layer, for functioning as reflective, transparent, writing, writing assist, and read-out layers, etc.
Thin film magnetic and MO media in disc form, such as described supra, are typically lubricated with a thin film of a polymeric lubricant, e.g., a perfluoropolyether, to reduce wear of the disc when utilized with data/information recording and read-out heads/transducers operating at low flying heights, as in a hard disk system functioning in a contact start-stop ("CSS") mode. Conventionally, a thin film of lubricant is applied to the disc surface(s) during manufacture by dipping into a bath containing a small amount of lubricant, e.g., less than about 1% by weight of a fluorine-containing polymer, dissolved in a suitable solvent, typically a perfluorocarbon, fluorohydrocarbon, or hydrofluoroether. However, a drawback inherent in such dipping process is the consumption of large quantities of solvent, resulting in increased manufacturing cost and concern with environmental hazards associated with the presence of toxic or otherwise potentially harmful solvent vapors in the workplace.

Another drawback associated with the conventional dipping method for applying a thin film of a polymeric lubricant to a substrate results from the lubricant materials being mixtures of long chain polymers, with a distribution of molecular weights. Since the molecular weight of the polymeric lubricant affects the mechanical (i.e., tribological) performance of the head-disc interface, it is common practice to subject the polymeric lubricant mixtures (as supplied by the manufacturer) to a fractionation process prior to adding the lubricant to the solvent in order to obtain a fraction having a desired molecular weight distribution providing optimal tribological performance. However, such pre-fractionation undesirably adds an additional step and increases the overall process cost.

Vapor deposition of thin film lubricants is an attractive alternative to dip lubrication in view of the above drawbacks.
Specifically, vapor deposition of lubricant films is advantageous in that it is a solventless process and the process for generating the lubricant vapor can simultaneously serve for fractionating the lubricant mixture into a desired molecular weight distribution, thereby eliminating the need for a pre-fractionation step. Moreover, vapor deposition techniques can provide up to about 100% bonded lubricant molecules when utilized with appropriate polymeric lubricants and magnetic and/or MO disc substrates having deposition surfaces comprised of a freshly-deposited carbon-based protective overcoat layer.

However, existing vapor deposition apparatus (e.g., Intevac VLS 100, Intevac Corp., Santa Clara, Calif.) for applying a thin layer of polymeric lubricant to a thin film data/information storage and retrieval medium, e.g., in disc form, utilize a static process/system, wherein a disc-shaped substrate is moved to a position facing the front (i.e., orifice) of a source of lubricant vapor (e.g., by means of a disc lifter) and statically maintained at that position while the lubricant film is deposited on the entire disc surface, with the lubricant film thickness being determined (i.e., controlled) by the length of the interval during which the disc surface is statically maintained facing the orifice(s) of the lubricant vapor source.

In order to control the spatial distribution, hence thickness uniformity, of the lubricant thin films obtained with such static vapor deposition process/apparatus at deposition rates of from about 1 to about 10 Å/sec for providing lubricant film thicknesses up to about 50 Å, a diffuser plate for the lubricant vapor is provided intermediate the lubricant vapor source and the substrate surface, thereby adding to the system complexity and necessitating periodic maintenance of the diffuser plate for ensuring clear vapor passage through each of the openings in the diffuser plate.
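In the static process just described, the film thickness is set by the length of the static interval at a given deposition rate. A minimal sketch of that relationship, using the rate and thickness ranges quoted above (the helper function name and figures are illustrative, not from the specification):

```python
def static_dwell_time(target_thickness_A, rate_A_per_s):
    """Static dwell interval (seconds) needed to reach a target film
    thickness (Å) at a given deposition rate (Å/s), assuming the
    rate is constant while the disc faces the vapor source."""
    if rate_A_per_s <= 0:
        raise ValueError("deposition rate must be positive")
    return target_thickness_A / rate_A_per_s

# e.g., a 20 Å film at 5 Å/s requires a 4 s static interval;
# the quoted maximum of 50 Å at the slowest rate of 1 Å/s takes 50 s
print(static_dwell_time(20.0, 5.0))   # 4.0
print(static_dwell_time(50.0, 1.0))   # 50.0
```

This also makes the static process's drawback concrete: thickness is controlled only through dwell time, so any motion during placement or removal perturbs the effective dwell across the disc.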
In addition, such static vapor lubrication systems incur a drawback when utilized as part of an in-line or similar type multi-chamber or modular system for manufacturing magnetic or MO media, in that a line-of-sight path is required for the mechanism utilized for positioning the disk surface opposite the lubricant vapor source. As a result, a path can be established for the lubricant vapor to escape from the lubricant deposition chamber into adjacent process chambers utilized for different processing functions and result in their being contaminated with lubricant vapor.

Notwithstanding the improvement in spatial uniformity of lubricant film thickness afforded by the use of a diffuser plate or similar element between the lubricant vapor source and the disk substrate surface, current vapor deposition processes for applying thin films of lubricant or other additive to substrate surfaces result in some degree of film thickness non-uniformity. It is believed that such spatial non-uniformity has dual origins, as follows:

(1) although the above-described system is nominally static, the substrate (e.g., a disc) is necessarily in motion during its placement facing the lubricant vapor source and during its removal therefrom, which motion creates a non-uniformity, i.e., a thickness gradient, across the disc surface in the direction of the motion. The extent and magnitude of the gradient is a function of the deposition rate and the speed of the mechanism utilized for placement of the disc in facing relation to the lubricant vapor source and removal therefrom; and

(2) because of the large substrate size (i.e., disc diameter) and physical constraints on apparatus dimensions, multiple lubricant vapor sources and/or vapor diffuser plates generally are necessary for obtaining thickness uniformity over the entire substrate surface.
However, even in the best cases wherein multiple lubricant vapor sources and/or vapor diffuser plates are utilized, regions of greater and lesser lubricant or additive thickness are routinely obtained.

In view of the above, there exists a clear need for improved means and methodology for depositing thin films of a lubricant, e.g., a polymeric lubricant, by vapor techniques and at deposition rates consistent with the throughput requirements of automated manufacturing processing, e.g., of magnetic and/or MO data/information storage and retrieval media, which means and methodology overcome the above-described drawbacks and disadvantages of the conventional static lubricant vapor deposition technology. More specifically, there exists a need for improved means and methodology for vapor depositing thin films of lubricant (e.g., a polymeric lubricant) which provide improved lubricant film thickness uniformity over the entire deposition area of disc-shaped substrates utilized in the manufacture of such magnetic and/or MO media.

The present invention addresses and solves problems and difficulties in achieving uniform thickness lubricant thin film deposition over large area substrates by means of vapor deposition techniques, e.g., thin film polymeric lubricant deposition on disc-shaped substrates utilized in the manufacture of magnetic and/or MO media, while maintaining full compatibility with all aspects of conventional automated manufacturing technology therefor. Further, the means and methodology afforded by the present invention enjoy diverse utility in the manufacture of various other devices and articles requiring deposition of uniform thickness thin film lubricant layers thereon.

DISCLOSURE OF THE INVENTION

An advantage of the present invention is an improved apparatus for vapor depositing a uniform thickness thin film of a lubricant on at least one surface of a disk-shaped substrate.
Another advantage of the present invention is an improved apparatus for vapor depositing a uniform thickness thin film of a lubricant on at least one surface of a disc-shaped substrate, e.g., as part of a process/system for manufacturing magnetic and/or MO data/information storage and retrieval media.

Yet another advantage of the present invention is an improved method for vapor depositing a uniform thickness thin film of a lubricant on at least one surface of a disk-shaped substrate.

Still another advantage of the present invention is an improved method for vapor depositing a uniform thickness thin film of a lubricant topcoat on at least one surface of a disc-shaped substrate utilized in the manufacture of magnetic and/or MO recording media.

Additional advantages and other aspects and features of the present invention will be set forth in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following, or may be learned from the practice of the present invention. The advantages of the present invention may be realized and obtained as particularly pointed out in the appended claims.
According to an aspect of the present invention, the foregoing and other advantages are obtained in part by an apparatus for vapor depositing a uniform thickness thin film of a lubricant on at least one surface of a substrate, comprising: (a) a chamber having an interior space; (b) a substrate loader/unloader for supplying said interior space with at least one disk-shaped substrate and for withdrawing at least one disk-shaped substrate from said interior space, said disk-shaped substrate comprising a magnetic or magneto-optical data/information storage and retrieval medium; (c) at least one lubricant vapor source for supplying said interior space with a stream of lubricant vapor, said vapor source comprising a closed heated chamber fluidly communicating with at least a plurality of primary plugs for supplying a stream of lubricant vapor; and (d) a substrate transporter/conveyor for continuously moving at least one disk-shaped substrate past said stream of lubricant vapor from said at least one lubricant vapor source for depositing on at least one surface thereof a uniform thickness thin film of lubricant.
Another aspect of the present invention is a method of vapor depositing a uniform thickness thin film of lubricant on at least one surface of a substrate, comprising the steps of: (a) providing an apparatus comprising: (i) a chamber having an interior space maintained below atmospheric pressure; (ii) a substrate loader/unloader for supplying said interior space with at least one disk-shaped substrate and for withdrawing at least one disk-shaped substrate from said interior space, said disk-shaped substrate comprising a magnetic or magneto-optical data/information storage and retrieval medium; (iii) at least one lubricant vapor source for supplying said interior space with a stream of lubricant vapor, said vapor source comprising a closed heated chamber fluidly communicating with at least a plurality of primary plugs for supplying a stream of lubricant vapor; and (iv) a substrate transporter/conveyor for continuously moving at least one substrate past said stream of vapor from said at least one lubricant vapor source; (b) supplying said interior space with a substrate having at least one surface; (c) continuously moving said substrate past said stream of lubricant vapor and depositing a uniform thickness thin film of said lubricant on said at least one surface; and (d) withdrawing the lubricant-coated disk-shaped substrate from said interior space.

Additional advantages and aspects of the present invention will become readily apparent to those skilled in the art from the following detailed description, wherein embodiments of the present invention are shown and described, simply by illustration of the best mode contemplated for practicing the present invention. As will be described, the present invention is capable of other and different embodiments, and its several details are susceptible of modification in various obvious respects, all without departing from the spirit of the present invention.
Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as limitative.

BRIEF DESCRIPTION OF THE DRAWINGS

The following detailed description of the embodiments of the present invention can best be understood when read in conjunction with the following drawings, in which the various features are not necessarily drawn to scale but rather are drawn so as to best illustrate the pertinent features, and in which like reference numerals are employed throughout to designate similar features, wherein:

FIG. 1 is a schematic view of an embodiment of a lubricant vapor deposition apparatus according to the present invention;

FIG. 2 is a schematic view of another embodiment of a lubricant vapor deposition apparatus according to the present invention; and

FIG. 3 is a schematic view of yet another embodiment of a lubricant vapor deposition apparatus according to the present invention.

DESCRIPTION OF THE INVENTION

The present invention is based upon recognition that the above-described limitations/drawbacks related to poor thickness uniformity of the deposited lubricant thin films associated with conventional lubricant vapor deposition processing, e.g., as utilized in the manufacture of disc-shaped magnetic and MO recording media, arising from: (1) the use of static vapor deposition means and methodology; and (2) the large substrate sizes and consequent requirement for use of multiple lubricant vapor sources and/or vapor diffuser plates, can be avoided, or at least minimized, by use of "pass-by" lubricant vapor deposition apparatus and methodology, wherein the substrates are continuously moved past the lubricant vapor source(s) for lubricant thin film deposition on the surface(s) thereof. As a consequence, non-uniformity of the lubricant thin film thickness arising from the static positioning of the substrates relative to the lubricant vapor source is eliminated, or at least minimized.
In addition, according to the present invention, thickness uniformity of the lubricant thin films is enhanced by providing the lubricant vapor source(s) in elongated form of length greater than the maximum dimension of the substrate deposition surface, e.g., disc diameter, with a plurality of removable threaded plugs for providing an even distribution of lubricant vapor. The lubricant vapor source(s) comprises at least a plurality of threaded holes into which the plugs are inserted. Each of the plugs comprises a drilled hole which extends substantially the length of the plug's interior. Moreover, the drilled hole of each plug can have substantially the same or a different diameter from the other plugs. In certain embodiments, a vapor flow profile can be established by varying the sizes of the drilled holes in the plugs to ensure an even distribution of lubricant vapor. Larger diameter drilled holes will have a faster rate of vapor deposition than smaller drilled holes. As an example, smaller holes can be positioned at the outer edges of the lubricant vapor source, with larger holes positioned towards the middle of the lubricant vapor source. Such positioning helps prevent any potential buildup of vapor deposition near the edges of the disk-shaped substrate, and thereby ensures an even distribution of lubricant vapor on each side of the disk-shaped substrate. The plugs can be formed into a pattern such as a linear array, a diagonal array, or a rectangular array; however, any pattern is suitable as long as the lubricant thickness uniformity is maintained. The threaded design of the plugs facilitates the replacement of the plugs into the threaded holes of the lubricant vapor source.
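The edge-to-middle grading of hole sizes described above can be sketched as a simple symmetric diameter profile along a linear plug array. The specific diameters and the linear grading rule below are illustrative assumptions only; the specification states merely that smaller holes sit at the edges and larger holes toward the middle:

```python
def graded_hole_diameters(n_plugs, edge_mm, middle_mm):
    """Symmetric drilled-hole diameter profile for a linear plug array:
    smallest holes at the two ends of the elongated source, largest at
    the middle. A linear grading is assumed purely for illustration."""
    if n_plugs < 2:
        raise ValueError("need at least two plugs for a graded profile")
    diameters = []
    for i in range(n_plugs):
        # distance from the nearer end, normalized to 1.0 at the middle
        frac = min(i, n_plugs - 1 - i) / ((n_plugs - 1) / 2)
        diameters.append(edge_mm + (middle_mm - edge_mm) * frac)
    return diameters

# 7-plug array graded from 0.5 mm at the edges to 1.0 mm at the middle
print(graded_hole_diameters(7, 0.5, 1.0))
```

The symmetry of the profile mirrors the goal stated in the text: compensating the tendency of vapor to build up near the substrate edges so that the distribution across the disc stays even.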
The apparatus and methodology of the present invention provide uniform thickness lubricant thin films by vapor deposition at rates consistent with the requirements of automated manufacturing processing, while retaining the advantages of vapor deposition of lubricants, including, inter alia, solventless processing, elimination of pre-fractionation of polymeric lubricant materials, and obtainment of very high percentages of bonded lubricant when utilized with recording media with carbon-based protective overcoats. Moreover, the inventive apparatus is or can be fabricated in modular form and is thus fully compatible with existing modular type in-line or sequential processing type apparatus utilized for commercial scale manufacturing operations, e.g., for magnetic and/or MO recording media.

According to the invention, a modular lubricant thin film or additive vapor deposition system utilizes a "pass-by" deposition method, as opposed to the conventional "static" method. The material to be deposited (e.g., lubricant or additive) is contained in a closed, elongated heated chamber having a length greater than the substrate maximum dimension, and allowed to expand through a plurality of plugs into a deposition chamber maintained at a reduced pressure, e.g., from about 10^-5 to about 10^-9 Torr, by a vacuum pump means. Substrates, e.g., discs, carried by a transport or conveyor mechanism are passed in front of and in close proximity to the plugs. The substrates are "passed by" the plugs in a continuous motion, i.e., without stopping to provide a static interval over the lubricant vapor source as in conventional processing, thereby eliminating both of the above-mentioned sources of lubricant thickness non-uniformity inherent in the static deposition system.
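In the pass-by scheme, the thickness accumulated in one transit scales with the time the substrate spends in the vapor stream, i.e., inversely with the pass-by speed, which is why speed joins vapor pressure and hole diameter as a thickness control. A rough sketch of that relationship follows; the effective plume width and the deposition rate used here are hypothetical illustration values, and a uniform rate across the plume is assumed:

```python
def pass_by_thickness(rate_A_per_s, plume_width_mm, speed_mm_per_s, passes=1):
    """Estimated film thickness (Å) after a number of pass-by transits:
    thickness = rate * (time spent in the vapor plume per pass) * passes.
    Assumes a uniform deposition rate over an effective plume width."""
    if speed_mm_per_s <= 0:
        raise ValueError("pass-by speed must be positive")
    time_in_plume = plume_width_mm / speed_mm_per_s
    return rate_A_per_s * time_in_plume * passes

# e.g., 5 Å/s over a 40 mm effective plume at 20 mm/s gives 10 Å per pass,
# so two passes yield a 20 Å film
print(pass_by_thickness(5.0, 40.0, 20.0, passes=2))  # 20.0
```

Halving the pass-by speed (or doubling the number of passes) doubles the film thickness under these assumptions, matching the text's point that a desired thickness can be obtained over one or more passes.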
According to the invention, the deposition rate of the lubricant or additive can be readily controlled by appropriate variation of any combination of "pass-by" speed, lubricant vapor pressure, and diameter of the drilled hole in the plug, such that a desired lubricant or additive film thickness is obtained during one (1) or more passes by one (1) or more lubricant vapor sources.

According to embodiments of the invention, a secondary set of plugs, which may be cooled, may be provided for increased collimation of the vapor stream emanating from the lubricant vapor sources. When vapor deposition on both sides of a dual-surfaced substrate is required, e.g., as with disc-shaped substrates, the apparatus may be provided with first and second, similarly configured, opposingly positioned lubricant vapor sources, with the secondary set of plugs being offset from those of the first set. In addition, cooled surfaces may be provided within the deposition chamber for condensing excess vaporized lubricant or additive for preventing contamination thereof, and the inlet and outlet openings (e.g., load lock chambers) to the deposition chamber may be equipped with cold traps and configured so as to eliminate any line-of-sight path for escape of lubricant or additive vapor from the deposition chamber into adjoining process chambers.

Referring now to FIG. 1, shown therein, in schematic form, is a cylindrically configured embodiment of a "pass-by" vapor deposition apparatus 10 according to the present invention, wherein substrates are transported in a circular path past at least one elongated, radially extending vapor deposition source positioned transversely with respect to the substrate path.
More specifically, apparatus 10 comprises a cylindrically-shaped deposition chamber 1 comprising a curved sidewall portion 2 and upper and lower circularly-shaped end walls 3U, 3L defining an interior space 4, and a vacuum pump 5 or equivalent means for maintaining the interior space 4 at a reduced pressure below atmospheric pressure, e.g., from about 10^-5 to about 10^-9 Torr. A combined substrate load/unload station 6 or equivalent means (either being of conventional design) is provided on one of the upper or lower end walls 3U or 3L for insertion of fresh substrates 7 into the interior space 4 of deposition chamber 1 for vapor deposition onto at least one surface thereof and for removal of vapor-deposited substrates from interior space 4. By way of illustration, substrates 7 may be in the form of annular discs, with inner and outer diameters corresponding to those of conventional disc-type magnetic and/or MO media. The substrate load/unload station 6 or equivalent means may, if desired or necessary, be equipped with a cold trap 8 or equivalently performing means for eliminating any line-of-sight path for escape of lubricant vapor from the lubricant deposition chamber 1 (or module) into adjacent processing modules of an in-line manufacturing system, which cold trap 8 is concentric with the substrate load/unload station 6 when the latter is adapted for use with disc-shaped substrates 7.
Chamber 1 is further provided with a substrate transporter/conveyor means 9, illustratively a radially extending arm 11 controllably rotatable about an axis coaxial with the central axis of the upper and lower end walls 3U and 3L and equipped at the remote end thereof with a substrate support means 12, illustratively a disc gripper or equivalent means, for sequentially transporting/conveying a fresh substrate 7 introduced into the interior space 4 of chamber 1 via substrate load/unload station 6 past at least one, preferably a plurality of, elongated, spaced-apart, radially extending lubricant/additive vapor sources 13 for "pass-by" vapor deposition onto at least a first surface of the moving substrate 7. Coated substrates 7 are withdrawn from chamber 1 via substrate load/unload station 6 after "pass-by" deposition thereon from at least one vapor source 13.

Each lubricant/additive vapor source 13 is comprised of a closed, heated, elongated chamber 14 for accommodating therein a quantity of liquid lubricant or additive to be thermally vaporized, chamber 14 having a length greater than the maximum dimension of the substrate deposition surface 7', i.e., the disc diameter in the illustrated example, the wall 15 of the chamber 14 facing the substrate deposition surface 7' being provided with a plurality of plugs 16 for creating a vapor stream directed toward the first surface 7' of substrate 7 for condensation thereon as a thin film. Collimation of the vapor stream may be improved, if necessary, by providing a plurality of secondary plugs (not shown in the drawing for illustrative simplicity), which secondary plugs may be cooled in order to function as a pump for condensing low vapor pressure lubricant, thereby facilitating formation of a well-defined molecular beam of lubricant.
In the event the second, opposite surface of the substrate 7 is to receive a vapor deposited lubricant or additive layer, chamber 1 is provided in like manner with at least one similarly constituted vapor source 13 with a plurality of plugs 16 facing the second surface. In such instance, the plugs 16 of the vapor sources 13 on opposite sides of the substrate 7 may be offset, if necessary, and a cooled surface provided opposite the plugs for condensation of excess lubricant or additive vapor, in order to prevent contamination of deposition chamber 1.

In operation of the cylindrically-configured vapor deposition apparatus 10 of FIG. 1, the substrates 7 may be rotated one or more times past one or more vapor sources 13 for deposition of a single or multiple lubricant or additive layers thereon. Provision of multiple vapor sources 13 within chamber 1 increases product throughput and facilitates use of apparatus 10 in modular form as one component of a multi-station, continuous manufacturing line. Deposition thickness, e.g., lubricant layer thickness, may be easily regulated by control of any combination of lubricant vapor pressure, diameter of the plug's drilled hole, and pass-by speed.

Referring now to FIG. 2, shown therein, in schematic form, is another embodiment of a "pass-by" vapor deposition apparatus 20 of the present invention in a rectangular box-shaped configuration, wherein substrates are transported in a linear path past at least one elongated vapor deposition source positioned transversely with respect to the substrate path.
More specifically, apparatus 20 comprises a rectangular box-shaped deposition chamber 21 comprising a front wall 22 and a rear wall 23 connected at their respective ends by side walls 24, the chamber 21 defining an interior space 25 and provided with a vacuum pump or equivalent means (not shown in the drawing for illustrative simplicity) for maintaining the interior space 25 at a reduced pressure below atmospheric pressure, e.g., from about 10^-5 to about 10^-9 Torr. Substrate load lock and exit lock stations 26, 27 or equivalent means are provided at opposite ends of one of the chamber walls, illustratively the front wall 22, for insertion of fresh substrates 7 into the interior space 25 of the deposition chamber 21 at one end thereof, and for removal of coated substrates 7 at the other end. As in the previous embodiment, substrates 7 may, for example, be in the form of annular discs with inner and outer diameters corresponding to those of conventional disc-type magnetic and/or MO media. Also as before, each of the substrate load lock and exit lock stations 26, 27 may be equipped with a cold trap 8 or equivalently performing means for eliminating any line-of-sight path for escape of lubricant or additive vapor into adjacent process chambers of a modular in-line system.

Deposition chamber 21 is further provided with a substrate transporter/conveyor means 28 comprising a linear transport system equipped with substrate holding/gripping means 29 for sequentially moving substrates 7 past one or more (illustratively two) elongated, transversely extending lubricant/additive vapor sources 13, such as described above with respect to the embodiment of FIG. 1, mounted on at least one of the front or rear chamber walls, illustratively the front wall 22.

In operation of the linearly-configured device of FIG.
2, fresh substrates 7 introduced into the deposition chamber 21 via load lock station 26 move past the at least one vapor source 13 in the direction of arrows 30 one or more times for deposition of a single or multiple layers of lubricant or additive before being removed from chamber 21 via exit lock station 27. As before, provision of multiple vapor sources 13 within the deposition chamber 21 increases product throughput and facilitates use of the apparatus in modular form as one component of a multi-module manufacturing line. Deposition thickness is again easily regulated by appropriate control of any desired combination of vapor pressure, the diameter of the plug's drilled hole, and pass-by speed.

Referring now to FIG. 3, shown therein, in schematic form, is a plug 16 such as described above with respect to the embodiments of FIGS. 1 and 2. Plug 16 comprises head 44 and stem/body 41. Drilled hole 43 extends substantially the length of the interior of plug 16, with openings 45' and 45''. Opening 45'' faces the interior of the lubricant vapor source 13, which contains the liquid lubricant. Opening 45', at the opposite end of plug 16, faces the interior space 25 of deposition chamber 21. Thus, a stream of lubricant vapor passes through opening 45' and is deposited onto at least one surface of substrate 7. The plug stem/body 41 comprises threads 42 which allow for the insertion of plug 16 into a threaded hole (not depicted) of the lubricant vapor source 13. A plurality of plugs 16 provides for an even distribution of lubricant vapor.

The lubricant vapor source 13 comprises at least a plurality of threaded holes into which plugs 16 are screwed. Each of the plugs comprises a drilled hole 43 which extends substantially the length of the interior of plug 16. Moreover, the drilled hole 43 of each plug 16 can have substantially the same or a different diameter from the remaining plugs.
In certain embodiments, a vapor flow profile can be established with varying sizes of the drilled hole in each plug to ensure an even distribution of lubricant vapor. Larger diameter drilled holes will have a faster rate of vapor deposition than smaller drilled holes. As an example, smaller holes can be positioned at the outer edges of the lubricant vapor source 13, with larger holes positioned towards the middle sections of the lubricant vapor source 13. Such positioning helps prevent any potential buildup of vapor deposition near the edges of the disk-shaped substrate, and thereby ensures an even distribution of lubricant vapor on each side of the disk-shaped substrate. The plugs 16 can be formed into a pattern such as a linear array, a diagonal array, or a rectangular array; however, any pattern is suitable as long as the lubricant thickness uniformity is maintained. The threaded design of the plug 16 facilitates the replacement of the plugs into the lubricant vapor source 13.

The present invention thus provides a number of advantages over conventional static vapor deposition apparatus and methodology, and is of particular utility in automated manufacturing processing of thin film magnetic and MO recording media requiring deposition of uniform thickness lubricant topcoat layers for obtaining improved tribological properties. Specifically, the present invention provides for lubricant deposition in a solventless manner not requiring pre-fractionation processing, with excellent film thickness uniformity and high bonded lube ratios. Further, the inventive apparatus and methodology can be readily utilized as part of conventional manufacturing apparatus/technology in view of their full compatibility with all other aspects of automated manufacture of magnetic and MO media.
Finally, the inventive apparatus and methodology are broadly applicable to a variety of vapor deposition processes utilized in the manufacture of a number of different products, e.g., mechanical parts, gears, linkages, etc., requiring lubrication.

In the previous description, numerous specific details are set forth, such as specific materials, structures, processes, etc., in order to provide a better understanding of the present invention. However, the present invention can be practiced without resorting to the details specifically set forth. In other instances, well-known processing materials, structures, and techniques have not been described in detail in order not to unnecessarily obscure the present invention.

Only the preferred embodiments of the present invention and but a few examples of its versatility are shown and described in the present disclosure. It is to be understood that the present invention is capable of use in various other embodiments and is susceptible of changes and/or modifications within the scope of the inventive concept as expressed herein.
* * * * *
__label__pos
0.945878
/* * Copyright (C) 1998 by Southwest Research Institute (SwRI) * * All rights reserved under U.S. Copyright Law and International Conventions. * * The development of this Software was supported by contracts NAG5-3148, * NAG5-6855, NAS8-36840, NAG5-2323, and NAG5-7043 issued on behalf of * the United States Government by its National Aeronautics and Space * Administration. Southwest Research Institute grants to the Government, * and others acting on its behalf, a paid-up nonexclusive, irrevocable, * worldwide license to reproduce, prepare derivative works, and perform * publicly and display publicly, by or on behalf of the Government. * Other than those rights granted to the United States Government, no part * of this Software may be reproduced in any form or by any means, electronic * or mechanical, including photocopying, without permission in writing from * Southwest Research Institute. All inquiries should be addressed to: * * Director of Contracts * Southwest Research Institute * P. O. Drawer 28510 * San Antonio, Texas 78228-0510 * * * Use of this Software is governed by the terms of the end user license * agreement, if any, which accompanies or is included with the Software * (the "License Agreement"). An end user will be unable to install any * Software that is accompanied by or includes a License Agreement, unless * the end user first agrees to the terms of the License Agreement. Except * as set forth in the applicable License Agreement, any further copying, * reproduction or distribution of this Software is expressly prohibited. * Installation assistance, product support and maintenance, if any, of the * Software is available from SwRI and/or the Third Party Providers, as the * case may be. * * Disclaimer of Warranty * * SOFTWARE IS WARRANTED, IF AT ALL, IN ACCORDANCE WITH THESE TERMS OF THE * LICENSE AGREEMENT. 
UNLESS OTHERWISE EXPLICITLY STATED, THIS SOFTWARE IS * PROVIDED "AS IS", IS EXPERIMENTAL, AND IS FOR NON-COMMERCIAL USE ONLY, * AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, * INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR * PURPOSE, OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT * SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. * * Limitation of Liability * * SwRI SHALL NOT BE LIABLE FOR ANY DAMAGES SUFFERED AS A RESULT OF USING, * MODIFYING, CONTRIBUTING, COPYING, DISTRIBUTING, OR DOWNLOADING THIS * SOFTWARE. IN NO EVENT SHALL SwRI BE LIABLE FOR ANY INDIRECT, PUNITIVE, * SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGE (INCLUDING LOSS OF BUSINESS, * REVENUE, PROFITS, USE, DATA OR OTHER ECONOMIC ADVANTAGE) HOWEVER IT ARISES, * WHETHER FOR BREACH OF IN TORT, EVEN IF SwRI HAS BEEN PREVIOUSLY ADVISED OF * THE POSSIBILITY OF SUCH DAMAGE. YOU HAVE SOLE RESPONSIBILITY FOR ADEQUATE * PROTECTION AND BACKUP OF DATA AND/OR EQUIPMENT USED IN CONNECTION WITH THE * SOFTWARE AND WILL NOT MAKE A CLAIM AGAINST SwRI FOR LOST DATA, RE-RUN TIME, * INACCURATE OUTPUT, WORK DELAYS OR LOST PROFITS RESULTING FROM THE USE OF * THIS SOFTWARE. YOU AGREE TO HOLD SwRI HARMLESS FROM, AND YOU COVENANT NOT * TO SUE SwRI FOR, ANY CLAIMS BASED ON USING THE SOFTWARE. * * Local Laws: Export Control * * You acknowledge and agree this Software is subject to the U.S. Export * Administration Laws and Regulations. Diversion of such Software contrary * to U.S. law is prohibited. You agree that none of the Software, nor any * direct product therefrom, is being or will be acquired for, shipped, * transferred, or reexported, directly or indirectly, to proscribed or * embargoed countries or their nationals, nor be used for nuclear activities, * chemical biological weapons, or missile projects unless authorized by U.S. * Government. Proscribed countries are set forth in the U.S. Export * Administration Regulations. 
Countries subject to U.S. embargo are: Cuba, * Iran, Iraq, Libya, North Korea, Syria, and the Sudan. This list is subject * to change without further notice from SwRI, and you must comply with the * list as it exists in fact. You certify that you are not on the U.S. * Department of Commerce's Denied Persons List or affiliated lists or on the * U.S. Department of Treasury's Specially Designated Nationals List. You agree * to comply strictly with all U.S. export laws and assume sole responsibilities * for obtaining licenses to export or reexport as may be required. * * General * * These Terms represent the entire understanding relating to the use of the * Software and prevail over any prior or contemporaneous, conflicting or * additional, communications. SwRI can revise these Terms at any time * without notice by updating this posting. * * Trademarks * * The SwRI logo is a trademark of SwRI in the United States and other countries. * */ #ident "@(#) sen_combo.c 1.33 05/08/19 SwRI" #include <stdio.h> #include <stdlib.h> #include <string.h> #include <stddef.h> #include "ret_codes.h" #include "gen_defs.h" #include "libbase_idfs.h" #include "libVIDF.h" /******************************************************************************* * * * IR_SENSOR_COMBO SUBROUTINE * * * * DESCRIPTION * * This routine is called to determine the number of different combinations * * possible for the tables used by the sensors that are being utilized. The * * TBL_OFF values for all tables and the CRIT_STATUS values are used and * * compared to see how many unique combinations of these tables are necessary * * instead of allocating space for every table for the sensors that are being * * utilized. Once the number of combinations has been determined, space is * * allocated for the structure(s) that hold the information for the combos. 
* * * * INPUT VARIABLES * * SDDAS_SHORT btime_yr start time requested (year component) * * SDDAS_SHORT btime_day start time requested (day component) * * SDDAS_LONG btime_sec start time requested (seconds component) * * SDDAS_LONG btime_nsec start time requested (nanoseconds) * * SDDAS_SHORT etime_yr stop time requested (year component) * * SDDAS_SHORT etime_day stop time requested (day component) * * SDDAS_LONG etime_sec stop time requested (seconds component) * * SDDAS_LONG etime_nsec stop time requested (nanoseconds) * * * * USAGE * * x = ir_sensor_combo (btime_yr, btime_day, btime_sec, btime_nsec, * * etime_yr, etime_day, etime_sec, etime_nsec) * * * * NECESSARY SUBPROGRAMS * * sizeof () the size of the specified object in bytes * * malloc() allocates memory * * free() frees allocated memory * * memset () memory initialization routine * * ReadVIDF() reads information from the VIDF file * * ir_count_combo() determines the number of different combinations * * ir_get_sensor_tables() reads table offset values from the VIDF file * * ir_init_sensor_ptr() initializes the sensor_ptr structures which * * hold the table information for the combinations * * * * EXTERNAL VARIABLES * * struct general_info structure that holds information concerning * * ginfo the experiment that is being processed * * * * INTERNAL VARIABLES * * struct experiment_info a pointer to the structure that holds specific * * *ex experiment information * * struct inst_tbl_info a pointer to the structure which holds * * *tbl_info_ptr non-array table definition information * * for each table defined for data source * * SDDAS_LONG **tbl_ptrs an array of pointers to memory that holds the * * offset values for each table defined * * SDDAS_LONG offset, next_tbl index values into allocated memory * * SDDAS_LONG sensor_block offset to get to next sensor block * * SDDAS_SHORT i looping variable * * SDDAS_SHORT rval holds the value returned by the called routine * * SDDAS_SHORT num_combo the number of unique 
table combinations needed * * for all sensors for the virtual instrument being* * processed * * SDDAS_CHAR *chk_tbl an array of flags which indicates if the table * * offset values need to be checked in the combo * * comparison (one flag per table) * * SDDAS_CHAR *chk_crit an array of flags which indicates if the * * crit_status values need to be checked in the * * combo comparison * * SDDAS_CHAR first_time flag indicating the first time this module is * * called for the combination being processed * * SDDAS_CHAR crit_tbl the table flagged as the critical status table * * SDDAS_CHAR **crit_stat_ptrs array of ptrs to memory that holds the * * crit_status values * * size_t bytes the number of bytes to allocate * * size_t num_bytes_slong the number of bytes needed for a SDDAS_LONG * * size_t num_bytes_schar the number of bytes needed for a SDDAS_CHAR * * void *tmp_ptr pointer which holds address passed back by * * the call to the MALLOC routine * * void *base_tables memory that is allocated to hold all the offset * * values for the various tables for all sensors * * void *base_ptr pointer to memory that holds pointers to the * * memory for the different offset values * * * * SUBSYSTEM * * Display Level * * * ******************************************************************************/ SDDAS_SHORT ir_sensor_combo (SDDAS_SHORT btime_yr, SDDAS_SHORT btime_day, SDDAS_LONG btime_sec, SDDAS_LONG btime_nsec, SDDAS_SHORT etime_yr, SDDAS_SHORT etime_day, SDDAS_LONG etime_sec, SDDAS_LONG etime_nsec) { extern struct general_info ginfo; struct experiment_info *ex; struct inst_tbl_info *tbl_info_ptr; SDDAS_LONG **tbl_ptrs, offset, next_tbl, sensor_block; SDDAS_SHORT i, rval, num_combo; SDDAS_CHAR *chk_tbl, *chk_crit, first_time, crit_tbl, **crit_stat_ptrs; /* Leave variables as is, no typedefs. 
*/ size_t bytes, num_bytes_slong, num_bytes_schar; void *tmp_ptr, *base_tables, *base_ptr; /****************************************************************************/ /* Determine the number of bytes needed to compare all table offset values */ /* for each sensor (offsets are longs) and the crit_status values, which */ /* are characters. Return error code on malloc error. */ /****************************************************************************/ ex = ginfo.expt; num_bytes_slong = sizeof (SDDAS_LONG); num_bytes_schar = sizeof (SDDAS_CHAR); bytes = (ex->num_tbls * ex->num_sensor * num_bytes_slong) + (ex->num_tbls * ex->num_sensor * num_bytes_schar); if (bytes == 0) base_tables = NO_MEMORY; else { if ((tmp_ptr = malloc (bytes)) == NO_MEMORY) return (SCOM_TBL_MALLOC); base_tables = tmp_ptr; memset (base_tables, '0', bytes); } /****************************************************************************/ /* Allocate space to hold pointers to the memory for the different offset */ /* values being compared and to hold flags that indicate if the offsets */ /* need to be checked in the combination comparisons. */ /****************************************************************************/ bytes = (ex->num_tbls * sizeof (SDDAS_LONG *)) + (ex->num_tbls * sizeof (SDDAS_CHAR *)) + (ex->num_tbls * num_bytes_schar) + (ex->num_tbls * num_bytes_schar); if (bytes == 0) base_ptr = NO_MEMORY; else { if ((tmp_ptr = malloc (bytes)) == NO_MEMORY) return (SCOM_PTR_MALLOC); base_ptr = tmp_ptr; } /* Cast base_ptr to char * since void * and offset is in bytes. 
*/ tbl_ptrs = (SDDAS_LONG **) base_ptr; offset = ex->num_tbls * sizeof (SDDAS_LONG *); crit_stat_ptrs = (SDDAS_CHAR **) ((SDDAS_CHAR *) base_ptr + offset); offset += (ex->num_tbls * sizeof (SDDAS_CHAR *)); chk_tbl = (SDDAS_CHAR *) ((SDDAS_CHAR *) base_ptr + offset); offset += ex->num_tbls * num_bytes_schar; chk_crit = (SDDAS_CHAR *) ((SDDAS_CHAR *) base_ptr + offset); offset = ex->num_tbls * ex->num_sensor * num_bytes_slong; ex->crit_action = 0; sensor_block = 0; for (i = 0; i < ex->num_tbls; ++i, sensor_block += ex->num_sensor) { next_tbl = num_bytes_slong * sensor_block; *(tbl_ptrs + i) = (SDDAS_LONG *) ((SDDAS_CHAR *) base_tables + next_tbl); *(chk_tbl + i) = 1; *(crit_stat_ptrs + i) = (SDDAS_CHAR *) ((SDDAS_CHAR *) base_tables + offset + sensor_block); tbl_info_ptr = ex->tbl_info_ptr + i; *(chk_crit + i) = (tbl_info_ptr->crit_act_sz == 0) ? 0 : 1; ex->crit_action += (tbl_info_ptr->crit_act_sz == 0) ? 0 : 1; } /**************************************************************************/ /* Retrieve information from the VIDF file. */ /**************************************************************************/ rval = ir_get_sensor_tables (chk_tbl, chk_crit, tbl_ptrs, crit_stat_ptrs, &crit_tbl, btime_yr, btime_day, btime_sec, btime_nsec, etime_yr, etime_day, etime_sec, etime_nsec); if (rval != ALL_OKAY) return (rval); /************************************************************************/ /* Malloc space for the sensor index array that indicates which combo */ /* the sensor utilizes and initialize the values. -1 means that the */ /* sensor is not being plotted so no combination was assigned. 
*/ /************************************************************************/ if (ex->bmem.base_tbl_index != NO_MEMORY) { free (ex->bmem.base_tbl_index); ex->bmem.base_tbl_index = NO_MEMORY; } bytes = ex->num_sensor * sizeof (SDDAS_SHORT); if ((tmp_ptr = malloc (bytes)) == NO_MEMORY) return (SCOM_INDEX_MALLOC); ex->bmem.base_tbl_index = tmp_ptr; ex->index_sen_tbl = (SDDAS_SHORT *) ex->bmem.base_tbl_index; for (i = 0; i < ex->num_sensor; ++i) *(ex->index_sen_tbl + i) = -1; /*************************************************************************/ /* Determine the number of unique combinations necessary to process all */ /* requested sensors for the virtual instrument being processed. */ /*************************************************************************/ num_combo = ir_count_combo (chk_tbl, chk_crit, tbl_ptrs, crit_stat_ptrs, crit_tbl); if (num_combo < 0) return (num_combo); /*************************************************************************/ /* Malloc the space to hold all the combinations requested. Initialize */ /* the structure(s) which hold the unique combination offsets. */ /*************************************************************************/ if (ex->bmem.base_sen_ptr != NO_MEMORY) { free (ex->bmem.base_sen_ptr); ex->bmem.base_sen_ptr = NO_MEMORY; first_time = 0; } else first_time = 1; bytes = sizeof (struct sensor_tables) * num_combo; if ((tmp_ptr = malloc (bytes)) == NO_MEMORY) return (SCOM_SEN_PTR_MALLOC); ex->bmem.base_sen_ptr = tmp_ptr; ex->sen_tbl_ptr = (struct sensor_tables *) ex->bmem.base_sen_ptr; ex->num_combo = num_combo; /*********************************************************************/ /* Reset chk_crit values since ir_get_sensor_tables() may modify */ /* the values in preparation for the call to ir_count_combo (). */ /*********************************************************************/ for (i = 0; i < ex->num_tbls; ++i) { tbl_info_ptr = ex->tbl_info_ptr + i; *(chk_crit + i) = (tbl_info_ptr->crit_act_sz == 0) ? 
0 : 1; } rval = ir_init_sensor_ptr (num_combo, crit_stat_ptrs, first_time, crit_tbl, chk_crit, btime_yr, btime_day, btime_sec, btime_nsec, etime_yr, etime_day, etime_sec, etime_nsec); if (rval != ALL_OKAY) return (rval); /***************************************************************************/ /* Free memory that does not need to be saved (used for holding purposes).*/ /***************************************************************************/ if (base_tables != NO_MEMORY) free (base_tables); if (base_ptr != NO_MEMORY) free (base_ptr); return (ALL_OKAY); }
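The routine above repeatedly carves several logical arrays out of a single malloc block, locating each array by a byte offset from the base pointer (hence the casts through SDDAS_CHAR *, so that pointer arithmetic is in bytes). A minimal sketch of that idiom, with invented names (note that the block is cleared with 0, not the character '0'):

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// Hypothetical miniature of the allocation idiom in ir_sensor_combo():
// one allocation holds a long array followed by a char array, and each
// sub-array is found by a byte offset from the base pointer.
struct Carved {
    void *base;     // the single allocation
    long *offsets;  // num_tbls * num_sensor longs
    char *crit;     // num_tbls * num_sensor chars
};

static bool carve(Carved *c, size_t num_tbls, size_t num_sensor)
{
    size_t longs = num_tbls * num_sensor * sizeof(long);
    size_t chars = num_tbls * num_sensor * sizeof(char);
    c->base = std::malloc(longs + chars);
    if (c->base == nullptr)
        return false;
    // Zero the whole block: 0, not the character '0' (0x30).
    std::memset(c->base, 0, longs + chars);
    // Cast to char * first so the offset arithmetic is in bytes,
    // mirroring the (SDDAS_CHAR *) casts above.
    c->offsets = reinterpret_cast<long *>(static_cast<char *>(c->base));
    c->crit = static_cast<char *>(c->base) + longs;
    return true;
}
```

Putting the widest-aligned array (the longs) first keeps every sub-array correctly aligned without padding.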
Codeforces 754A Lesha and array splitting (simple greedy) A. Lesha and array splitting time limit per test: 2 seconds memory limit per test: 256 megabytes input: standard input output: standard output One spring day on his way to university Lesha found an array A. Lesha likes to split arrays into several parts. This time Lesha decided to split the array A into several, possibly one, new arrays so that the sum of elements in each of the new arrays is not zero. One more condition is that if we place the new arrays one after another they will form the old array A. Lesha is tired now so he asked you to split the array. Help Lesha! Input The first line contains a single integer n (1 ≤ n ≤ 100) — the number of elements in the array A. The next line contains n integers a1, a2, ..., an (−10³ ≤ ai ≤ 10³) — the elements of the array A. Output If it is not possible to split the array A and satisfy all the constraints, print a single line containing "NO" (without quotes). Otherwise in the first line print "YES" (without quotes). In the next line print a single integer k — the number of new arrays. In each of the next k lines print two integers li and ri which denote the subarray A[li... ri] of the initial array A being the i-th new array. Integers li, ri should satisfy the following conditions: • l1 = 1 • rk = n • ri + 1 = li+1 for each 1 ≤ i < k (each segment starts right after the previous one ends). If there are multiple answers, print any of them. 
Examples Input 3 1 2 -3 Output YES 2 1 2 3 3 Input 8 9 -12 3 4 -4 -10 7 3 Output YES 2 1 2 3 8 Input 1 0 Output NO Input 4 1 2 3 -5 Output YES 4 1 1 2 2 3 3 4 4 Problem link: http://codeforces.com/contest/754/problem/A

Analysis: compute the prefix sums and look at pre[n]. If pre[n] != 0, simply output the whole array as one segment. If pre[n] == 0, look for an index i with pre[i] != 0; if one is found, output a[1..i] and a[i+1..n]. Since pre[0] = 0, pre[i] != 0, and pre[n] = 0, both segments have non-zero sums, as required. If no such i exists, then pre[1..n-1] are all 0, which means every element is 0, so no answer is possible. A simple greedy.

The accepted code:

#include <iostream>
using namespace std;
int n, a[101], i, s, b;
int main()
{
    cin >> n;
    for (i = 1; i <= n; i++)
    {
        cin >> a[i];
        s += a[i];
        if (a[i])
            b = i;
    }
    if (b == 0)
        cout << "NO\n";
    else if (s)
        cout << "YES\n1\n" << 1 << " " << n << "\n";
    else
        cout << "YES\n2\n" << 1 << " " << b - 1 << "\n" << b << " " << n << "\n";
    return 0;
}

posted @ 2017-05-08 23:19 by Angel_Kitty
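The analysis above argues via prefix sums, while the accepted code uses the equivalent last-non-zero-element trick. A sketch of the prefix-sum variant as a standalone function (the function name is ours); it returns the 1-based (l, r) segments, or an empty list when no split exists:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Split a into consecutive segments, each with a non-zero sum.
// Returns the list of 1-based (l, r) pairs, or {} if impossible.
static std::vector<std::pair<int, int>> splitNonZero(const std::vector<int> &a)
{
    int n = static_cast<int>(a.size());
    std::vector<int> pre(n + 1, 0);
    for (int i = 1; i <= n; ++i)
        pre[i] = pre[i - 1] + a[i - 1];
    if (pre[n] != 0)
        return {{1, n}};  // the whole array already works
    for (int i = 1; i < n; ++i)
        if (pre[i] != 0)
            // sum(1..i) = pre[i] != 0 and sum(i+1..n) = -pre[i] != 0
            return {{1, i}, {i + 1, n}};
    return {};  // every prefix sum is 0, so every element is 0
}
```

The key observation is the last return: if pre[i] = 0 for all i, then a[i] = pre[i] − pre[i−1] = 0 for every i, so no valid split can exist.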
Customize aggregation of network request data

Firebase Performance Monitoring automatically aggregates data for similar network requests to help you understand trends in network request performance. Sometimes, though, you need to customize how Firebase aggregates the data for specific network requests to better support your app's use cases. There are two ways to customize aggregation of network request data: aggregating data under custom URL patterns, and customizing how success rates are calculated.

Aggregate data under custom URL patterns

For each request, Firebase checks whether the network request's URL matches a URL pattern. If the request URL matches a URL pattern, Firebase automatically aggregates the request's data under that pattern.

You can create custom URL patterns to monitor specific URL patterns that Firebase does not capture with its derived automatic URL pattern matching. For example, you can use a custom URL pattern to troubleshoot a specific URL, or to monitor a specific set of URLs over time.

Firebase displays all URL patterns (including custom URL patterns) and their aggregated data in the Network requests subtab of the traces table, at the bottom of the Performance dashboard in the Firebase console.

How does custom URL pattern matching work?

Firebase tries to match a request URL against any configured custom URL patterns before falling back to automatic URL pattern matching. For any request that matches a custom URL pattern, Firebase aggregates the request's data under that custom URL pattern.

If a request's URL matches more than one custom URL pattern, Firebase maps the request only to the most specific custom URL pattern, using this specificity order: plain text > * > **, evaluated left to right along the path. For example, a request to example.com/books/dog matches two custom URL patterns:

• example.com/books/*
• example.com/*/dog

However, example.com/books/* is the most specific matching URL pattern, because its leftmost differing segment, books, takes precedence over the leftmost segment * in example.com/*/dog.

When you create a new custom URL pattern, note the following:

• Matches and aggregated data from earlier requests are not affected by creating a new custom URL pattern. Firebase does not retroactively re-aggregate request data.
• Only future requests are affected by creating a new custom URL pattern. It can take up to 12 hours for Performance Monitoring to collect and aggregate data under a new custom URL pattern.

Create a custom URL pattern

You can create a custom URL pattern from the Network requests subtab of the traces table, at the bottom of the Performance dashboard in the Firebase console.

A project member must be an Owner or Editor to create a new custom URL pattern; however, all project members can view custom URL patterns and their aggregated data.

You can create up to 400 custom URL patterns per app, and up to 100 custom URL patterns per domain for that app.

To create a custom URL pattern, start with a hostname, followed by path segments. The hostname must contain a valid domain and may optionally include a subdomain. Use the following path-segment syntax to create patterns that can match URLs:

• plain text: matches an exact string
• *: matches the first subdomain segment, or any string within a single path segment
• **: matches an arbitrary path suffix

The following table describes some potential custom URL pattern matches.

To match... | Create a custom URL pattern like... | Example URLs that match this pattern
An exact URL | example.com/foo/baz | example.com/foo/baz
Any single path segment (*) | example.com/*/baz | example.com/foo/baz, example.com/bar/baz
 | example.com/*/*/baz | example.com/foo/bar/baz, example.com/bah/qux/baz
 | example.com/foo/* | example.com/foo/baz, example.com/foo/bar (Note: this pattern does not match example.com/foo.)
Any path suffix (**) | example.com/foo/** | example.com/foo, example.com/foo/baz, example.com/foo/baz/more/segments
 | subdomain.example.com/foo.bar/** | subdomain.example.com/foo.bar, subdomain.example.com/foo.bar/baz, subdomain.example.com/foo.bar/baz/more/segments
The first subdomain segment (*) | *.example.com/foo | bar.example.com/foo, baz.example.com/foo

View custom URL patterns and their data

Firebase displays all URL patterns (including custom URL patterns) and their aggregated data in the Network requests subtab of the traces table, at the bottom of the Performance dashboard in the Firebase console.

To view only custom URL patterns, select Custom patterns from the dropdown menu in the Network requests subtab. Note that if a custom URL pattern has no aggregated data, it appears only in this list.

When the data-retention period for the data aggregated under a URL pattern ends, Firebase deletes that data from the pattern. If all the data aggregated under a custom URL pattern expires, Firebase does not delete the custom URL pattern from the Firebase console. Instead, Firebase continues to list the "empty" custom URL pattern in the Custom patterns list of the Network requests subtab.

Remove a custom URL pattern

You can remove custom URL patterns from your project. Note that you cannot remove automatic URL patterns.

1. In the Performance dashboard, scroll down to the traces table, then select the Network requests subtab.
2. Select Custom patterns from the dropdown menu in the Network requests subtab.
3. Hover over the row of the custom URL pattern that you want to remove.
4. Click the overflow menu at the far right of the row, select Remove custom pattern, then confirm the removal in the dialog.

When you remove a custom URL pattern, note the following:

• Any future requests map to the next most specific matching custom URL pattern. If Firebase finds no matching custom URL patterns, it falls back to automatic URL pattern matching.
• Removing a custom URL pattern does not affect matches or aggregated data from earlier requests. You can still access the removed custom URL pattern and its aggregated data in the Network requests subtab (with All network requests selected) until the applicable data-retention period ends. When all the data aggregated under the removed custom URL pattern expires, Firebase deletes the pattern.
• The Network requests subtab (with Custom patterns selected) does not list any removed custom URL patterns.

Next steps

• Set up alerts for network requests that degrade the performance of your app. For example, you can configure an email alert for your team if the response time for a specific URL pattern exceeds a threshold that you set.

Customize how success rates are calculated

One of the metrics that Firebase monitors for each network request is the request's success rate: the percentage of successful responses out of total responses. This metric helps you gauge network and server failures.

Specifically, Firebase automatically counts network request responses with response codes in the range 100-399 as successful responses.

In addition to the response codes that Firebase automatically counts as successes, you can customize the success rate calculation by counting certain error codes as "successful responses".

For example, if your app has a search-endpoint API, you might count 404 responses as successes, because 404 responses are expected from a search endpoint. Suppose this search endpoint has 100 samples per hour, of which 60 are 200 responses and 40 are 404 responses. Before you configure the success rate, it is 60%. After you configure the success rate calculation to count 404 responses as successes, it is 100%.

Configure the success rate calculation

To configure the success rate calculation for a network URL pattern, you must have the firebaseperformance.config.update permission. The following roles include this required permission by default: Firebase Performance Admin, Firebase Quality Admin, Firebase Admin, and project Owner or Editor.

1. Go to the Performance Monitoring Dashboard tab in the Firebase console, then select the app for which you want to configure the success rate calculation.
2. Scroll down to the traces table at the bottom of the screen, then select the Network requests tab.
3. Find the URL pattern for which you want to configure the success rate calculation.
4. At the far right of the row, open the overflow menu, then select Configure success rate.
5. Follow the on-screen instructions to select the response codes that should count as successful.
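The matching and precedence rules described above can be sketched as follows. This is our own illustration, not Firebase's actual implementation, and the function names are invented: * matches exactly one path segment, ** matches any remaining (possibly empty) suffix, and of two matching patterns, the one whose leftmost differing segment is more literal wins:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Split "example.com/books/dog" into ["example.com", "books", "dog"].
static std::vector<std::string> split(const std::string &s)
{
    std::vector<std::string> out;
    std::stringstream ss(s);
    std::string seg;
    while (std::getline(ss, seg, '/'))
        out.push_back(seg);
    return out;
}

// Does the pattern match the URL? '*' matches one segment,
// '**' matches any remaining suffix (including an empty one).
static bool matches(const std::string &pattern, const std::string &url)
{
    std::vector<std::string> p = split(pattern), u = split(url);
    size_t i = 0;
    for (; i < p.size(); ++i) {
        if (p[i] == "**")
            return true;               // any suffix is acceptable
        if (i >= u.size())
            return false;              // URL ran out of segments
        if (p[i] != "*" && p[i] != u[i])
            return false;              // literal segment mismatch
    }
    return i == u.size();              // both exhausted together
}

// Specificity of one segment: plain text > '*' > '**'.
static int rank(const std::string &seg)
{
    if (seg == "**") return 0;
    if (seg == "*")  return 1;
    return 2;
}

// Is pattern a more specific than pattern b? Compare segments
// left to right; the first differing rank decides.
static bool moreSpecific(const std::string &a, const std::string &b)
{
    std::vector<std::string> pa = split(a), pb = split(b);
    for (size_t i = 0; i < pa.size() && i < pb.size(); ++i)
        if (rank(pa[i]) != rank(pb[i]))
            return rank(pa[i]) > rank(pb[i]);
    return pa.size() > pb.size();
}
```

With these rules, both example.com/books/* and example.com/*/dog match example.com/books/dog, and moreSpecific picks example.com/books/*, just as the worked example above describes.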
Cogn Affect Behav Neurosci. 2007 Dec;7(4):396-412. Executive control of gaze by the frontal lobes. Author information 1 Center for Integrative and Cognitive Neuroscience, Vanderbilt University, Nashville, Tennessee 37240, USA. [email protected] Abstract Executive control requires controlling the initiation of movements, judging the consequences of actions, and adjusting performance accordingly. We have investigated the role of different areas in the frontal lobe in executive control expressed by macaque monkeys performing a saccade stop signal task. Certain neurons in the frontal eye field respond to visual stimuli, and others control the production of saccadic eye movements. Neurons in the supplementary eye field do not control directly the initiation of saccades but, instead, signal the production of errors, the anticipation and delivery of reinforcement, and the presence of response conflict. Neurons in the anterior cingulate cortex signal the production of errors and the anticipation and delivery of reinforcement, but not the presence of response conflict. Intracranial local field potentials in the anterior cingulate cortex of monkeys indicate that these medial frontal signals can contribute to event-related potentials related to performance monitoring. Electrical stimulation of the supplementary eye field improves performance in the task by elevating saccade latency. An interactive race model shows how interacting units produce behavior that can be described as the outcome of a race between independent processes and how conflict between gaze-holding and gaze-shifting neurons can be used to adjust performance. PMID: 18189013 [Indexed for MEDLINE]
Science: Electrosensitivity, illness and cancer Cancer from WiFi, cellphones and towers, and power-lines Radiofrequency radiation causes cancer • "RF fields can change radical concentrations and cancer cell growth rates ...  long-term exposures to relatively weak static, low-frequency, and RF magnetic fields can change radical concentrations. As a consequence, a long-term exposure to fields below the guideline levels may affect biological systems and modify cell growth rates ... there is epidemiological evidence for an association of small increases in cancer rates with long-term exposures to magnetic fields ... weak magnetic fields change the rate of recombination for radical pairs that are generated by the metabolic activity in cells, which, in turn, change the concentration of radicals such as O₂ and molecules such as H₂O₂ ... long-term exposure to elevated magnetic fields can lead to elevated radical concentrations and an association with aging, cancers, and Alzheimer's. This hypothesis is supported by some theoretical and experimental results ...  changes in magnetic field change the growth rate of cancer cells more than normal cells of the same type ... At low frequencies, the magnetic fields can both increase and decrease the growth rates of cells ... there are many experiments where no changes are seen. This, we believe, is due to the many feedback and repair processes in the body ...  these effects are frequency, amplitude, and time dependent." (Frank Barnes and Ben Greenebaum: "Some Effects of Weak Magnetic Fields on Biological Systems" IEEE Power Electronics Magazine, March 2016) • Cellphones exceed heating limits if used or held next to the body, in the way that most people use them. This was shown for the Samsung Galaxy, LG5 and Apple iPhone 7. Women can get cancers on the breast that match where the antenna of the cellphone is located. 
Yet regulators like Health Canada are still refusing to admit that there is a problem with tests which were devised decades ago. These tests still apply only to adult males, and not to children or people with genetic conditions or long-term ill health. Moreover, the tests are based on the long-invalidated hypothesis that only heating can cause adverse health effects, when it has been established for decades that low-level exposure at certain frequencies can cause adverse effects. (CBC News: "The secret inside your cellphone (CBC Marketplace)" 22 minutes, 2017) WiFi and cancer • Increasing numbers of reports now link WiFi and cancer. WiFi signals were invented in Australia in 1999 and became common around the world from about 2003. From about 2004 WiFi was linked with adverse neurological effects, such as electro-sensitivity symptoms, in adults, children and animals. • In recent years there have been reports of people with cancer suffering relapses and metastasis if they live or work in an area with WiFi radiation. Specific reports of cancers, especially skin cancers, suggest that they are more common on the side of the body exposed to radiation from the nearest WiFi router, especially for older people and people working at the same desk for long periods. Cellphones and cancer • Since the 1990s it has been known that cellphones, invented in the 1980s, increase the risk of cancer. • The most obvious cancers are brain tumors, often on the same side of the head as the cellphone was held. The clear link between 'heavy' use of cellphones, at 30 minutes or more per day, and brain tumors was established by 2008 and led to the IARC classifying radio frequency as a 2B human carcinogen. • Since harm from wireless radiation depends on frequency patterns and not just strength, 3G UMTS is likely more carcinogenic than the older 2G GSM, even though it operates at a lower power. 
(Lloyd Morgan et al, 2016) Cellphone and TV/Radio Towers and Cancers • Since the 1990s many studies have shown biological effects in a dose-response pattern around cellphone and TV/Radio towers. From 1995 it has been shown that cellphone radiation can cause DNA breaks. • Since about 2003 studies have shown an increased risk of cancer around cellphone towers, with an increased risk of some 4 to 5 times within about 400 meters and especially for women. Cancer clusters are surprisingly common around towers, as are neurological illnesses. • Some studies suggest that skin cancer relates more to VHF radio transmissions than solar radiation, since people relocating from areas without high rates of skin cancer, and without certain frequencies, amplitudes and modulations of radio transmission, come to match local inhabitants in the rate of skin cancer when they move to areas with carcinogenic VHF transmissions. • Studies show increased cancer near FM radio transmitters with horizontally polarized waves, compared with FM towers emitting vertically polarized waves. ('Cancer versus FM radio polarization types', 2016) WiFi, Cellphones and Tower radiation should be a 2A probable human carcinogen • WiFi and some cellphone signals have both extremely low frequency (ELF) elements, such as 10 Hz, and radio frequency (RF), such as 2.45 GHz or 5 GHz. ELF was classified as a 2B possible human carcinogen in 2001 by IARC (The International Agency for Research on Cancer) and RF in 2011. • In 2015 it was confirmed that both ELF and RF are tumor promoters in animals. According to members of IARC, this means that WiFi and similar radiation should now be classed as a 2A probable human carcinogen. Power-lines and cancer • In 1979 cancer was established for ELF power-lines. At first this was shown for childhood leukemia, but since then many other cancers have been associated with ELF exposure. • In 1979 power-lines were also shown to increase the risk of depression and suicide. 
• In 2005 it was confirmed in a large study of over half the Swiss population that residence near power-lines is associated with Alzheimer's disease in a dose-response manner. Electromagnetic stress, inflammation, tumor promotion and cancer • In 2015 it was confirmed that both ELF and RF electromagnetic exposure, as from power-lines, WiFi and cellphones, promote tumors. (As stated above, this apparently requires that they should now both be classified as class 2A human carcinogens.) The electromagnetic exposure which causes this tumor promotion is one form of the stress on the human body which both contributes to the development of cancer and reduces the effectiveness of treatments against cancer. Recent studies are beginning to show how such stress affects the sympathetic nervous system and thus causes cancer progression, perhaps through oxidative stress. • It is well known that electromagnetic radiation causes stress and stress causes inflammation. Inflammation is a hallmark of most diseases, from obesity and diabetes to heart disease and cancer. Since 1908 stress has been noted as the top cause of cancer (Dr Eli G Jones: “Cancer, Its Causes, Symptoms and Treatment”, 1908). A 2016 study on mice found that “chronic stress induces signalling from the sympathetic nervous system (SNS) and drives cancer progression … Here we show that chronic stress restructures lymphatic networks within and around tumors to provide pathways for tumor cell escape … These findings reveal unanticipated communication between stress-induced neural signalling and inflammation, which regulates tumor lymphatic architecture and lymphogenous tumor cell dissemination.” (Le CP et al, Nat Commun. 2016) • Cancer cells typically spread to other areas of the body either via the blood vessels, or through the lymphatic system; stress hormones affect both of these pathways or channels. 
The mechanism found is related to the way adrenaline activates the sympathetic nervous system (SNS) to increase the rate of lymph formation. Adrenaline also causes physical changes in the lymph vessels, allowing cancer cells to migrate into other body parts at a faster rate. The role of the SNS for metastasis was discovered in 2010: “Metastasis to distant tissues is the chief driver of breast cancer-related mortality … Stress-induced neuroendocrine activation had a negligible effect on growth of the primary tumor but induced a 30-fold increase in metastasis to distant tissues including the lymph nodes and lung … These findings identify activation of the sympathetic nervous system as a novel neural regulator of breast cancer metastasis.” (Sloan EK et al, Cancer Res. 2010) • The body’s neuroendocrine response (the release of hormones into the blood when stimulated by the nervous system) can directly alter cell processes which help protect against cancer, such as DNA repair and the regulation of cell growth. The stress hormone norepinephrine can increase the growth of cancer. Norepinephrine can stimulate tumor cells to produce two compounds (MMP-2, MMP-9) which break down the tissue around the tumor cells, thus allowing the cells to more easily move into the bloodstream, where they can travel to other organs and tissues and form additional tumors. Norepinephrine may also stimulate tumor cells to release a chemical (vascular endothelial growth factor, or VEGF) which aids the growth of the blood vessels that feed cancer cells. This too can increase the growth and spread of the cancer. (Yang EV et al, Cancer Res. 2006) • Epinephrine, another stress hormone, also causes changes in certain cancer cells, specifically prostate and breast cancer, making them resistant to apoptosis (cell death). (Sastry KS et al, J Biol Chem. 
2007) Dr Mercola: "How Chronic Stress Promotes Spread of Cancer, and What You Can Do About It" (March 24 2016) Dr Christopher J Portier & Wendy L Leonard: "Do Cell Phones Cause Cancer? Probably, but It's Complicated" (Scientific American, June 13 2016) Microwave News: "Cell Phone Radiation Boosts Cancer Rates in Animals; $25 Million NTP Study Finds Brain Tumors; U.S. Government Expected To Advise Public of Health Risk" (May 25 2016) Scientific American: "Major Cell Phone Radiation Study Reignites Cancer Questions: Exposure to radio-frequency radiation linked to tumor formation in rats" (May 27 2016) Draft Results. Hand-held radar combines with toxic chemicals to cause cancer Electromagnetic exposure is an established co-carcinogen; that is, it increases the risk of cancer where a person already has a risk from chemical exposure. World Health Organization recognizes EM radiation as cancer risk The World Health Organization recognizes electromagnetic non-ionizing radiation, like ionizing radiation, as a risk factor for cancer. • "Reducing the cancer burden: Between 30–50% of cancers can currently be prevented. This can be accomplished by avoiding risk factors and implementing existing evidence-based prevention strategies." • "Modify and avoid risk factors: Modifying or avoiding key risk factors can significantly reduce the burden of cancer. These risk factors include: ionizing and non-ionizing radiation." (World Health Organization: "Cancer" Fact sheet, number 297, February 2017) "Non-ionizing radiation" includes radiation from cellphones and WiFi. The WHO Fact Sheet number 297 was previously revised in February 2011. In May 2011 the WHO's IARC classified Radio-Frequency Radiation, like ELF ten years earlier in 2001, as a class 2B carcinogen. 
Reducing electromagnetic stress and cancer risks from WiFi, cellphones and towers

Key ways to reduce EM stress and thus the risks of cancers and neurological illnesses. (See: Liz Barrington: “How to Combat the Effects of Electromagnetic Stress on Your Body” Natural Body Healing)

• 1) Reduce your exposure to EMFs. Keep away from wireless devices and appliances. The use of cabled equipment is much healthier.
• 2) Review and change your ‘sleep’ environment. The body’s immune resistance is considerably lowered during sleep because it has to rest and repair itself. All metal bed frames and bedsprings should be avoided. Remove electrical appliances and mobile devices from the head area especially, and preferably out of the bedroom. Pull your bed away from the wall if the bed is near an electrical outlet. If you have to use one, remove your electric blanket from the bed after it has warmed, or better still use a hot-water bottle. Baby monitoring devices must never be used anywhere near the head of the baby.
• 3) Regularly detoxify your body. Specific ‘heavy-metal’ detoxing is recommended.
• 4) Strengthen your body through a healthier diet and care for your bowels. Eat plenty of fibre and probiotics. Take essential fatty acid supplements. Consider using botanical flower remedies.
• 5) Remove the metal in your mouth, such as amalgam fillings.
• 6) Improve your immune system. Change your diet, your lifestyle, your attitudes and beliefs to healthier ones, so your body becomes stronger. Exercise regularly.
• 7) Get rid of mould and fungus. Watch out for mould on the walls in your home, a give-away that there is radiation within your environment, because research shows that 600 times more neurotoxins develop in high EMF areas.
Ethical issues in allowing general populations to be exposed to WiFi, cellphone radiation and TV/radio transmissions

• To expose large parts of a population to WiFi, cellphone radiation and TV/radio transmissions, now considered a 2B carcinogen, is historically unprecedented.
• To expose large parts of a population to WiFi, cellphone radiation and TV/radio transmissions also appears unethical, since most people are not aware of the medical science and they are unable to avoid this carcinogen even if they wish to do so.
• Schools and employers often allow pupils and workers to be constantly irradiated with this carcinogen from the use of cellphones and WiFi, arguing that they can deploy such radiation so long as pro-wireless activist regulators describe the known evidence of harm as not ‘consistent’ or ‘convincing’.

International cancer classification of electromagnetic energy

Most frequencies of electromagnetic energy, both ionizing and non-ionizing, can now be regarded as possible or certain human carcinogens. These effects are mainly low-level or non-thermal and thus outside the range of the ICNIRP safety limits. The method of causing cancer is different for ionizing and non-ionizing radiation. (Havas M, 2016)

(a) Electromagnetic energy: ionizing frequencies (X-rays and gamma (γ) rays)
(b) Electromagnetic energy: visible blue light frequencies at night, as in shiftwork
• Canadian Union of Public Employees: “Shiftwork” Health & Safety Factsheet.
(c) Electromagnetic energy: Extremely Low Frequency (ELF) and Microwave and Radio Frequency (MW, RF)
• Morgan LL et al: “Mobile phone radiation causes brain tumors and should be classified as a probable human carcinogen (2A) (review)” (2015) PMID: 25738972.

Accounts of cancer caused by wireless radiation
t4kmode - 7 months ago

Android Question: adb shell regular expression doesn't work as tested locally

First, sorry if my question is obscure or in an inconvenient format. This is my first post here :D. My issue is that I have a script, let's say test.sh, which reads an input and validates whether it is a positive integer (regex taken from this post: BASH: Test whether string is valid as an integer?):

#!/bin/sh
echo -n " enter number <"
read num
if ! [[ $num =~ ^-?[0-9]+$ ]]  # if num contains any symbols/letters
then                           # anywhere in the string
    echo "not a positive int"
    exit
else
    echo "positive int read"
fi

I am running this script on my Android device (Xiaomi Mi3 W) using adb shell, and the error syntax error: '=~' unexpected operator keeps displaying. First, is my regex even correct? Second, any hints on how I can overcome this syntax error?

Answer

I had to use a ksh expression as shown below to get this to work.

case $num in
    +([0-9])*(.)*([0-9]) )
        # Variable is a positive integer
        echo "positive integer"
        ;;
    *)
        # Not a positive integer
        echo "NOPE"
        exit
        ;;
esac
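The accepted ksh-style answer works because Android's default shell (mksh) supports extended globs, but `[[ ... =~ ... ]]` is a bash/ksh93 feature that plain sh lacks. A more portable sketch uses only POSIX case globs, so it runs the same under adb shell and a desktop /bin/sh (the function name is_positive_int is my own, not from the thread):

```shell
#!/bin/sh
# Portable positive-integer check: no [[ ]], no =~, only POSIX case globs.
is_positive_int() {
    case "$1" in
        ''|*[!0-9]*) return 1 ;;   # empty, or contains a non-digit character
        *)           return 0 ;;   # every character is a digit
    esac
}

for candidate in 42 007 -3 3.14 abc; do
    if is_positive_int "$candidate"; then
        echo "$candidate: positive int read"
    else
        echo "$candidate: not a positive int"
    fi
done
```

Because case pattern matching is part of POSIX sh itself, this avoids the =~ syntax error entirely. Note that, unlike the asker's `^-?[0-9]+$` regex, it also rejects negative numbers, which matches the stated goal of the script.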
Can you get in shape in 90 days?

Have you ever wanted to get in shape in just 90 days? It may seem like an impossible and daunting task, but with the right mixture of diet and exercise, it is possible! In this article, we’ll explore the various strategies that can be used to reach your fitness goals in only 90 days. We’ll also look at some of the potential pitfalls that you might encounter along the way. So if you’re ready to take on a challenge and get fit in just 90 days, read on!

Step 1: First Month

For the initial month, make sure you get to the gym or work out at home regularly. Pick any activity that you find enjoyable – running, cross-trainers, bikes or even just stair climbing. Put in as much time and effort as possible – even if it’s only for 5 minutes or a full 45-minute session. Feel your heart pumping, muscles working, and the sweat pouring down. Stay hydrated, and after your workout, revel in the amazing feeling of accomplishment. You’ll soon notice the positive changes in your body and how your clothes fit better. Keep up the dedication!

Step 2: Second Month

Now it’s time to incorporate weightlifting into your routine. Set aside three days dedicated solely to lifting weights. The goal here is to learn and practice proper form for compound exercises. If you can handle it, use a barbell (weighing 45 pounds) for exercises like bench presses and squats. If you need alternatives, dumbbells or an EZ Bar will do the trick. Make sure you have a buddy to spot you during bench presses and squats for safety. Aim for 3 sets of each exercise with 6 reps per set. Increase the weight if it becomes too easy for 6 reps. Design your weekly plan to target different muscle groups, such as chest and shoulders, legs, and back and biceps. Remember to take a day for rest because even superheroes need a break!

Step 3: Third Month

In the next four weeks, repeat the weightlifting routine from month 2, but with a couple of tweaks.
Add two more exercises to each lifting day to challenge yourself further. Additionally, swap out steady-state cardio for High-Intensity Interval Training (HIIT). HIIT involves alternating between intense bursts of exercise and short periods of rest. For example, sprint for 30 seconds, then rest for 2 minutes, and repeat this pattern for a total of 20-30 minutes. If you’re just starting out, adjust the sprint and rest times accordingly. You can try HIIT on the bike or even running sprints if you have the space. HIIT is a game-changer when it comes to burning fat and preserving muscle. Not only will you burn calories during your workout, but you’ll also continue to burn them throughout the day. Get ready to feel amazing!

Step 4: Don’t Stop

Congratulations on completing the 90-day journey! By now, you’ve become a fitness authority on your own behalf. You’ve mastered various compound lifts, dumbbell exercises, and HIIT routines. But it doesn’t stop here. Maintain your progress and continue to challenge yourself. Don’t avoid weak areas or exercises you don’t like. Instead, step up your efforts and turn them into your strong areas. Remember, even Arnold Schwarzenegger had to work on his skinny quads by wearing short shorts. The key is to keep pushing yourself and never give up.

Step 5: Eat Healthy

Now that you’ve established a solid fitness routine, it’s time to complement your efforts with a healthy eating plan. Month 4 will focus on nourishing your body with nutritious foods that fuel your workouts and promote overall well-being.

Plan Your Meals

Take some time to create a weekly meal plan. Include a balance of lean proteins, whole grains, fruits, vegetables, and healthy fats. This will help you stay on track and steer clear of impulsive, unhealthy food choices.

Mindful Eating

Pay attention to your body’s hunger and fullness cues.
Eat slowly, savour each bite, and stop when you feel satisfied, not overly stuffed. This will help prevent overeating and promote better digestion.

Portion Control

To control your food intake, try using smaller plates and bowls. Your brain will feel satisfied with less food. Aim to have vegetables take up half of your plate, while lean protein should take up one-quarter, and the remaining quarter should be filled with either whole grains or starchy vegetables.

Hydration

Don’t forget to stay hydrated! Drink plenty of water throughout the day to support your workouts and overall health. Opt for water over sugary beverages or excessive amounts of caffeine.

Snack Smart

Choose nutritious snacks like fruits and vegetables with hummus, Greek yogurt, nuts, or seeds. These options provide energy and important nutrients without derailing your progress.

Limit Processed Foods

Minimize your intake of processed and packaged foods that are often high in added sugars, unhealthy fats, and sodium. Focus on whole, unprocessed foods as much as possible.

Treat Yourself

It’s important to enjoy your favourite treats in moderation. Allow yourself the occasional indulgence, but remember to balance it with healthy choices for the majority of your meals and snacks.

Eating healthy is more than just a diet: it’s a lifestyle. By developing healthy habits and creating a nutritious routine, you can fuel your body for the long term and experience the many benefits that come with nourishing yourself. With dedication, commitment, and knowledge, you can create an eating plan that works for you and helps you achieve your fitness goals.

Conclusion

As you continue your fitness journey beyond the 90-day mark, it’s essential to maintain your dedication and never settle for mediocrity. Keep pushing yourself, challenging your weak areas, and striving for improvement. Embrace the mindset of continuous growth and self-improvement.
You are now equipped with the knowledge and muscle memory to experiment with different workouts and find what works best for you. Remember, fitness is a lifelong journey, and it’s up to you to keep the flame of determination alive. Embrace the changes you have made, celebrate your achievements, and stay committed to a healthy and active lifestyle. By taking care of your physical and mental well-being, you are investing in a happier, healthier, and more fulfilling future.
From csv file insert into MySQL

i have a code that lets the user upload a csv & then insert the values from the csv into 2 tables in the MySQL DB. it works fine if the students in the csv are unique & don't already exist in the DB. BUT i want it to work in the following way:

1. if there are student codes with the same number (eg 2000), then get only the unique ones from the csv before storing into the student_info table
2. if the student code (eg 2000) specified in the csv file already exists in the DB, then be able to match it by comparing the DB student codes with the unique ones from the csv file, & if matched then skip insertion into the student_info table & only insert in the sitting_data table. But if it doesn't match then the code will also be inserted in the sitting_data table

Is it possible? Can someone please help?

ARC_UM Asked

mstrelan Commented:

First do an sql select query of student codes and store them in an array. Then foreach record in the csv call php's in_array function to see if the code exists in the array. if yes, insert in sitting_data; if no, insert in student_info and sitting_data.

shobinsun Commented:

HI, Use this idea: Hope this will help you.
Regards

$handle = fopen($csvfile, "r");
while (($data = fgetcsv($handle, 1000)) !== false) {
    $code = $data[0];
    $query = 'SELECT student_code FROM student_info';
    $result = mysql_query($query);
    while ($row = mysql_fetch_array($result)) {
        if ($row['student_code'] == $code) {
            $query = "INSERT INTO sitting_data_table () VALUES('')";
            mysql_query($query);
        } else {
            $query = "INSERT INTO student_info () VALUES('')";
            mysql_query($query);
            $query1 = "INSERT INTO sitting_data_table () VALUES('')";
            mysql_query($query1);
        }
    }
    mysql_free_result($result);
}
fclose($handle);

mstrelan Commented:

shobinsun's solution is a good start but i wouldn't recommend putting $query = 'SELECT student_code FROM student_info'; inside the while loop. Do it first like my in_array suggestion OR change it to 'SELECT student_code FROM student_info WHERE student_code = $studentCode'. The in_array method means for every record you have to scan through the array; the other method means you need to perform lots of sql queries. decide which one is more efficient.

ARC_UM (Author) Commented:

I didn't have a chance to see your suggestion since i was away. will try & let you know how it goes. Thanks

ARC_UM (Author) Commented:

how to read using the in_array method? I had also read from the file the way shobinsun has shown: $code = $data[0]; i have read & assigned all my column fields in this way

ARC_UM (Author) Commented:

hi shobinsun, how can I enter unique student codes in the table if there are student codes with the same number (eg 2000), because the student table can have only unique student id's? See the attached csv template. template.xls

ARC_UM (Author) Commented:

don't think it is checking if ($row['student_code']==$code) correctly against the ones in the existing table.
any suggestions?

shobinsun Commented:

Hi, Use this Idea:

$handle = fopen($csvfile, "r");
while (($data = fgetcsv($handle, 1000)) !== false) {
    $code = $data[0];
    $query = "SELECT * FROM student_info where student_code='$code'";
    $result = mysql_query($query);
    $count = mysql_num_rows($result);
    if ($count != 0) {
        $query = "INSERT INTO sitting_data_table () VALUES('')";
        mysql_query($query);
    } else {
        $query = "INSERT INTO student_info () VALUES('')";
        mysql_query($query);
        $query1 = "INSERT INTO sitting_data_table () VALUES('')";
        mysql_query($query1);
    }
}
fclose($handle);

ARC_UM (Author) Commented:

why is it not being able to compare $row['student_code']==$code ? is there a syntax error?

ARC_UM (Author) Commented:

doesn't enter the record in the student table that doesn't exist. only enters all records in the sitting table

shobinsun Commented:

Hi, Check the data with this:

echo "CODE:", $code;
$query = "SELECT name FROM pdf1 where code='$code'";
$result = mysql_query($query);
$row = mysql_fetch_array($result);
echo "<br>:", $row['name'];

ARC_UM (Author) Commented:

my select query gets only the 1st record, that is the 2001 student id, from the DB. is there anything i am doing wrong? have a look at my code

echo "CODE:", $col2;
$query = "SELECT `STUDENT_CODE` FROM `$_SESSION[student_info]`";
$results = mysql_query($query);
$row = mysql_fetch_array($results);
echo "<br>:Table", $row['STUDENT_CODE'];
while ($row) {
    if ($row['STUDENT_CODE'] == $col2) {
        $sql = "INSERT INTO sitting_table................

shobinsun Commented:

Hi, No problem with the following code.
use it:

$handle = fopen($csvfile, "r");
while (($data = fgetcsv($handle, 1000)) !== false) {
    $code = $data[0];
    $query = "SELECT * FROM student_info";
    $result = mysql_query($query) or die(mysql_error());
    while ($row = mysql_fetch_array($result)) {
        if ($row['student_code'] == $code) {
            $query = "INSERT INTO sitting_data_table () VALUES('')";
            mysql_query($query);
        } else {
            $query = "INSERT INTO student_info () VALUES('')";
            mysql_query($query);
            $query1 = "INSERT INTO sitting_data_table () VALUES('')";
            mysql_query($query1);
        }
    }
}
fclose($handle);
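For the first part of the question (keeping only one row per student code before anything touches the database), the CSV can be deduplicated up front. A minimal shell sketch, assuming a comma-separated file whose first column is the student code; the filename students.csv and the sample rows are invented for illustration:

```shell
#!/bin/sh
# Build a small sample CSV; column 1 is the student code.
cat > students.csv <<'EOF'
2000,Alice,Math
2001,Bob,History
2000,Alice,Physics
2002,Carol,Math
EOF

# Keep only the first row seen for each student code,
# preserving the original row order.
awk -F',' '!seen[$1]++' students.csv > students_unique.csv

cat students_unique.csv
```

The deduplicated file can then be fed to the PHP loop above, so student_info only ever sees one row per code; the duplicate-handling inside the loop is still needed for codes that already exist in the database.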
Lysosome facts for kids

Kids Encyclopedia Facts

TEM views of various vesicular compartments. Lysosomes are denoted by "Ly". They are dyed dark due to their acidity; in the center of the top image, a Golgi apparatus can be seen, distal from the cell membrane relative to the lysosomes.

A lysosome is a cell organelle. Lysosomes are shaped like spheres. They have hydrolytic enzymes which can break down almost all kinds of biomolecules, including proteins, nucleic acids, carbohydrates, lipids, and cellular debris. They contain more than 50 different enzymes. By convention, lysosome is the term used for animal cells. In plant cells, vacuoles perform similar functions. With a wider definition, lysosomes are found in the cytoplasm of plant cells and protists as well as animal cells.

Lysosomes work like the digestive system to break down, or digest, proteins, acids, carbohydrates, dead organelles, and other unwanted materials. They break up larger molecules into smaller molecules. Those smaller molecules can then be used again as building blocks for other large molecules.

Lysosome Facts for Kids. Kiddle Encyclopedia.
What We’re Still Learning About Hawaii

The fiery forces beneath the island chain still mystify geologists

Maui's Haleakala volcano and the rest of the Hawaiian Islands formed out of molten lava as the Pacific plate drifted over the hotspot at three to four inches a year. Map Source: TASA Graphic Arts, Inc. © 2009

Haleakala originated as a vent on the seafloor about two million years ago. Eruptions of lava built up the volcano until it reached the sea surface less than a million years later; continued eruptions pushed it more than 10,000 feet above sea level and gave it almost 600 square miles of land. Haleakala eventually connected with another volcano to form the island of Maui. In fact, all the Hawaiian Islands are of volcanic origin.

Most volcanoes—Mount St. Helens, say, or Mount Fuji—grow along the boundary between tectonic plates, where collisions melt the earth’s upper layers and fuel eruptions. By contrast, Hawaii’s volcanoes emanate from a “hotspot” under the Pacific plate. The hotspot, which geologists estimate began producing the Hawaiian Islands 30 million years ago, is a plume of molten rock that rises through the mantle, the mostly solid layer between the crust and core. The islands were formed as the Pacific plate crept northwest at three to four inches a year, carrying volcano after volcano away from the stationary hotspot like a conveyor belt.

Though scientists have zeroed in on the hotspot as the source of Hawaii’s volcanoes, there’s still a lot they don’t know about it, including just how deep it is. Many scientists estimate that the hotspot originates some 1,800 miles into the earth, near the boundary between the mantle and the planet’s iron-rich core. In one recent test, researchers led by the University of Hawaii measured how fast seismic waves from earthquakes travel below ground—the waves move more slowly through hot rock than cold—and traced one plume under the Big Island of Hawaii that extends at least 900 miles deep.
However, MIT scientists found a source only 400 miles beneath the surface, a 1,200-mile-wide reservoir of hot rock west of the Big Island. Figuring out how to see into the earth’s interior is “just a very difficult experimental problem to answer,” says John Tarduno, a geophysicist at the University of Rochester in New York. “We would like to get better images to see the hotspot source itself.”

The islands don’t last forever. As the Pacific plate moves Hawaii’s volcanoes farther from the hotspot, they erupt less frequently, then no longer tap into the upwelling of molten rock and die. The island erodes and the crust beneath it cools, shrinks and sinks, and the island is again submerged. Millions of years from now, the Hawaiian Islands will disappear when the edge of the Pacific plate that supports them slides under the North American plate and returns to the mantle.

For now, Haleakala is hanging on. The volcano last spewed lava sometime between 1480 and 1780, but it has erupted more than 12 times in the past 1,000 years. Another eruption is not out of the question, says Richard Fiske, a geologist emeritus at the Smithsonian National Museum of Natural History. Says John Sinton, a geologist at the University of Hawaii at Manoa: “It’s a volcano that has refused to die.”
Dangers of Vaping – Are Vaporizers Dangerous?

There are dangers of vaporizing your nicotine, but there are also dangers of not vaporizing your nicotine. The difference between the two is very important, whether you are a smoker or a non-smoker. You have to know how smoking can harm you in order to avoid it.

When you smoke, you are taking many dangerous toxins into your body. The toxins come from the tar and the other chemicals that are present in tobacco. In essence, when you vaporize your nicotine, what you are doing is recycling some of that poison. But when you do not vaporize your nicotine, you are taking in more of these toxins. This means that the dangers of cigarettes include both of these.

Now, let us look at the dangers of not vaporizing tobacco. When you smoke, you will be taking in not merely tar and poisons, but also carcinogens. There are many carcinogens present in marijuana tobacco, over 60 of them. They are well above the levels that are considered dangerous. If you are a heavy smoker, then you should see how your life can be affected negatively by smoking marijuana tobacco.

So, the facts of the matter are these: cigarettes contain poisons that can kill you in several ways. In some cases, they can cause cancer. In other cases, they can cause death. But, you might say that e cigarettes are not as harmful as combustible cigarettes; that is not entirely true either. In some of the more severe case reports, teens have died after using e cigarettes.

Let us look at another aspect of the dangers of vaporizing tobacco. There are liquids and gels that can mimic the taste of cigarettes. This is obviously a trap, because many people will purchase these so-called electronic cigarettes and start to use them. They are not supposed to be able to do this, but many liquids and gels are not safe to use as a substitute for cigarettes.
The most serious dangers are contained in the chemicals that are used to create the fake tobacco.

So, what are the real dangers of smoking? We know that vaporizing tobacco is a very dangerous habit. The ingredients that are used to make e-cigs are also extremely dangerous. In many cases, they can even kill you. So, how should we go about removing this habit?

By now, you should realize that the dangers of e-cigarette smoking are enormous. You should try to quit as quickly as possible. However, there is a solution to the problem. By using products such as nicotine patches, gum, and even the new nicotine inhalers, you can greatly reduce your chances of having any health effects from puffing on a cigarette.

So, in conclusion, you should understand the dangers of smoking e cigarettes. If you have kids or pets, you should strongly consider taking steps to lessen the health risks associated with smoking. If you simply must smoke, you should use nicotine patches or gum, or a vaporizer that does not add any chemicals to your body. Avoid any product that advertises itself as “smoke free”, because that means it contains nicotine, which is highly addictive. Give your family a good reason to live healthy and prevent this addiction instead.

The facts about cigarette vapors show that you are putting yourself at an increased risk for various different types of cancer and lung injury. The worst part is that this is taking place while you are puffing away on a cigarette! Have you ever considered the chemicals and toxins that are being inhaled when you smoke a cigarette? Did you know that the level of toxins is so high that the smoke can cause severe harm to some organs of the body? One of the most common complications caused by long-term cigarette smoking is emphysema, a lung injury that causes permanent damage to the tissues of the lungs. Nicotine has also been found to cause cardiac arrest in many people.
You might think that these health risks are simply too big to be worth it, but you would be wrong. You do not have to put up with the dangers of smoking if you use e cigarettes. By finding the best vaporizer available, you can help reduce the health risks you are currently experiencing. Vaporizers are among the safest methods of enjoying your preferred herbal blends.

Another of the dangers of vaporizing tobacco products is that you are indirectly contributing to the environmental toxins and cancer-causing agents that are found in cigarette smoke. Many of the chemicals and toxins that are found in vaporized cannabis are also present in vaporized marijuana. By substituting your daily oral habit with an electronic device, you do your part to help lower your chances of developing respiratory problems and cancer. By using vaporizers rather than regular tobacco or pipes, you can significantly reduce the toxins within your body.
Mental Illness: Symptoms, Types, Depression and How To Fight It

What is Mental Illness?

Mental illness refers to the broad range of mental health disorders that affect your moods, thoughts and behavior. They include anxiety, depression, schizophrenia, phobias and addictions, among others. Though it is normal to have mental health concerns, if they interfere with your normal functioning, then there is need to raise an alarm. Persistent concerns as a result of mental sickness can negatively impact your productivity and relationships at work or at home.

What are the Warning Signs and Symptoms of Mental Illness?

A combination of the following symptoms could indicate that you are suffering from a mental illness:

• Sudden need for isolation and loss of interest in previously enjoyed occupations.
• Strange decline in performance in school or at work. This can be accompanied by a sudden loss of interest in a sporting activity.
• Impaired concentration and inability to comprehend, or difficulties in making decisions.
• An increased sensitivity to the common senses, i.e. touch, smell, sound and sight.
• Lack of interest in participating in communal activities, or apathy.
• Psychotic feelings and perceptions of ghostly life forms.
• Nervousness and anxiety coupled with unfounded fears of the unknown.
• Abrupt changes in appetite and fluctuating sleeping patterns.
• Sudden changes in moods and inability to take personal care such as bathing.

What are the Mental Illness Facts?

• Nearly all mental related illnesses start below the age of 14.
• Depression is the leading cause of mental related illnesses worldwide.
• Close to a million people commit suicide annually due to mental illnesses, with the majority from middle-class and low-income families.
• Major events like war and natural disasters can trigger the occurrence of mental illnesses or depression.
• Mental diseases are among the risk factors for both transmittable and non-transmittable diseases.
• Due to the fear of discrimination and stigma, most people with mental illnesses fail to seek professional assistance or share their feelings with friends or family members.

What are the Types of Mental Illnesses?

There are several types of mental illnesses, which include the following:

• Anxiety Disorders: these are classified by their triggers and include phobias such as fear of open spaces (agoraphobia), social anxiety disorders, generalized anxiety disorder (GAD) and panic disorders. Check for Ridgecrest Anxiety Free Review, as this supplement may help to deal with anxiety.
• Behavioral Disorders: these refer to the inability to display appropriate behaviors for an occasion. They include Attention Deficit Hyperactivity Disorder (ADHD).
• Mood Disorders: these are affective disorders characterized by constant feelings of gloominess alternating with feelings of joy. Depressive disorders are further grouped into major depression, dysthymia and bipolar disorder.
• Psychotic Disorders: these are characterized by distorted thoughts and imaginations. They include hallucinations and delusions.
• Eating Disorders: these are associated with harmful attitudes towards food or drinks. They include anorexia nervosa, bulimia nervosa and binge eating.
• Impulse Control and Addiction Disorders: impulse control disorders refer to the inability to resist doing something that is considered dangerous or harmful. These may include an urge to steal (kleptomania). Addictions refer to irresistible habits such as smoking.
• Other disorders include personality disorders, development and cognitive disorders, adjustment disorders, etc.

Understanding depression and mental illness

Suicidal Thoughts

The prevalence of suicidal thoughts varies among people with gender, age and ethnicity. Research indicates that over 90% of people who commit suicide suffer from depression or other mental related problems.
In most cases, depressed people who abuse substances such as recreational drugs and alcohol are at a higher risk of committing suicide. Major depression and mental illnesses can trigger suicidal thoughts if untreated.

How to Cope Day-to-Day?

Identifying the cause of your depression or mental illness is the first step on your recovery path. This can be assisted by your physician. You should start taking medication such as a GABA supplement and attend those scheduled counseling sessions. Sharing your emotions with people around you will serve in getting the best advice and motivation they can offer. Maybe their encouraging words and presence around you will take away the empty and lonely feelings that come with depression and mental illnesses. Staying focused on the benefits of leading a normal life will be fulfilling and enjoyable.

Fighting Depression and Mental Illness

If you know someone suffering from depression or mental illness, it is important to lend a helping hand. It would be sad to witness another case of suicide, or an attempt, with the help at your disposal. The stigma associated with mental disorders makes it difficult for depressed people to share their feelings, only to die in silence. You can help them reach out to a professional and get support in coping with their situation. There are many organizations out there that conduct awareness campaigns in the fight against depression and mental disorders where you can channel your donations. Assisting such organizations can go a long way in saving a friend or a loved one.
How To Know If You Need Root Canal Treatment: 6 Symptoms To Look For

A root canal is a procedure in which the dentist makes a small hole in an infected tooth and removes the tooth pulp, which consists of blood vessels and nerve endings. Effectively, the procedure ‘deadens’ the tooth and thereby removes the discomfort you have been feeling. The channels within the tooth are then filled with gutta-percha (a type of resin) so the tooth is not left hollow and weak. After the hole is filled, it is sealed with another form of resin (similar to what is used for a cavity) so that nothing can enter the tooth after the root canal. Based on your dental history and the tooth in question, the dentist may or may not recommend a dental crown.

How do I know if I need a root canal?

One of the difficulties with tooth pain is that it can be hard to pin down exactly what the problem is. At first we may have only a vaguely sore feeling that we cannot identify with any tooth in particular. Over time, however, it will become more apparent that your teeth need treatment. Below are some common indications that you may need a root canal; if you notice them, make a dental appointment as soon as possible to find out for sure.

#1 Persistent pain: If it hurts to put pressure on your tooth whenever you eat, and the pain persists over a period of several weeks, it is not normal and should be examined.

#2 Hot or cold sensitivity: If drinking hot coffee or tea causes toothache, this indicates that you have a problem. Similarly, if it hurts when you eat or drink something cold, or you avoid cold items on one side of your mouth because they cause toothache, you need to make an appointment with your dentist.

#3 Tooth discoloration: A pale or discolored tooth may not necessarily need a root canal.
Sometimes discoloration is due to poor oral hygiene, or to regularly consuming foods and drinks known to darken the teeth (such as coffee, alcohol, certain spices, etc.), but if the discoloration is limited to a single tooth, it may mean that nerve or blood vessel damage has occurred, and you should contact your dentist.

#4 Broken or chipped tooth: This may result from playing sports, constantly eating very hard or crunchy foods, or any number of causes; a chipped or broken tooth needs to be repaired. Bacteria can easily get inside the tooth and cause an infection. Because of the number of blood vessels in each of our teeth, a tooth infection can spread to the bloodstream, producing a whole new set of problems! If you have a chipped or broken tooth, call your dentist immediately.

#5 Swollen gums: Gums that are swollen, sore, and painful are a sign of inflammation, or of something stuck in the gums. If the swelling hasn’t subsided in a day or two, your dentist should examine it to make sure you don’t need a root canal.

#6 Deeper decay: Once decay sets in at the root of a tooth, it will not be healed by a renewed approach to brushing or flossing. Even something as simple as an overlooked cavity can expand and worsen to the point where a filling is no longer an option and more drastic steps are needed. If you postpone appointments for too long, a root canal can become the only real option open to you.

Keeping our smile perfect is a priority for us all. Keeping your teeth clean can make your life significantly easier, and frequent dental check-ups will keep not only your mouth but also the rest of your body healthy. Just as we sometimes catch a cold, we may occasionally need dental work. Doing your best to keep your dental appointments up to date will go a long way toward reducing the amount of repair work you need!
As a curative measure, though, it is never too late for root canal treatment. Get in touch with us and we will connect you with the best root canal dentist in Oak Forest, IL for your root canal treatment.
Eden Pet Foods Made In Britain

Herbivores, Omnivores and Carnivores Explained

Herbivores (Rabbits, Cows, Sheep)

Herbivores eat plants, and their digestive systems have adapted to absorb nutrients from plant material.

Grinding Teeth: Herbivores have square, flat molars designed to crush and grind plant material with a sideways motion.

Long Digestive Tracts: Plant material is difficult to digest, particularly plant cellulose. A herbivore’s intestines are up to 10 times longer than its body length, giving the digestive system time to break down and absorb food properly.

Digestive Enzymes: Amylase in the herbivore’s saliva, combined with the chewing action, helps break down the coarse fibre and carbohydrates that make up plant material.

Omnivores (Humans, Pigs, Bears)

Omnivores have evolved to eat both plants and meat, and their digestive systems have adapted accordingly.

Tearing and Grinding Teeth: Omnivores have sharp canine teeth at the front of the mouth for meat, and flat molars that allow a grinding action for plant material.

Medium Digestive Tracts: Capable of digesting meat-based proteins and fats, but still long enough to handle vegetable matter. (Note that some vegetables which are difficult to digest, such as sweet corn, can pass through undigested if not chewed properly.)

Digestive Enzymes: As with the herbivore, amylase in the omnivore’s saliva, together with the chewing action, helps break down the coarse fibre and carbohydrates that make up plant material.

Carnivores (Cats, Lions, Dogs, Wolves)

Carnivore is Latin for “meat eater”, and the classification refers to diets that consist mainly of meat.

Sharp Tearing Teeth: A carnivore’s teeth are designed for tearing and slicing, not grinding. Carnivores have elongated front teeth, which are used to kill prey, and triangular molars that act like a serrated blade, operating in a vertical scissor action to give a cutting motion.
Short Digestive Tract: Highly acidic stomach acid (pH 1-2, compared to a human’s pH of 4-5) quickly digests meat-based protein and fat.

Digestive Enzymes: Amylase is not present in the saliva of carnivores, so the burden of digesting carbohydrates falls on the pancreas. Long-term overloading of the pancreas is associated with insulin resistance and, ultimately, with the failure of the pancreas to produce insulin, as found in type II diabetes.

Conclusions

Key genetic features of both dogs and cats classify them as carnivores, so they would historically have eaten a diet consisting almost exclusively of meat:

• Pointed teeth designed for grasping, ripping and shredding
• Jaws designed to swallow food whole (not grind it)
• Highly acidic stomach
• Short small intestines
• Digestive enzymes adapted to break down meat
• Absence of enzymes designed to break down vegetable matter
The Lifesaving Support: How Custom Foot Orthotics Help People with Diabetes

Ace Health Centre

Diabetes is a complex medical condition that affects millions of people worldwide. While managing blood sugar levels is a primary concern, diabetes also brings a host of potential complications, particularly for the feet. Foot problems are common among individuals with diabetes, but there’s a powerful ally that can make a significant difference in their overall foot health: custom foot orthotics.

Understanding the Diabetes-Foot Connection

Diabetes can damage nerves (neuropathy) and reduce blood flow to the feet. This combination can result in reduced sensation, making it challenging to detect injuries or blisters. Additionally, poor blood circulation can lead to slow wound healing, which increases the risk of infection. It’s a perfect storm for foot problems, ranging from minor annoyances like corns and calluses to severe conditions like ulcers and even amputations.

The Role of Custom Foot Orthotics

Custom foot orthotics, also known as orthopedic insoles or diabetic insoles, are specially designed shoe inserts tailored to an individual’s unique foot shape and gait. Here’s how they can be a game-changer for people with diabetes:

1. **Pressure Redistribution:** Custom foot orthotics are crafted to distribute pressure evenly across the foot, reducing the risk of high-pressure points that can lead to calluses, ulcers, and wounds. This is critical for individuals with diabetes who may have compromised sensation and are less likely to notice these issues.

2. **Arch Support:** Many people with diabetes develop flat feet or other structural problems. Custom orthotics provide essential arch support, reducing strain on the feet and ankles and improving overall comfort.

3. **Shock Absorption:** These insoles are designed to absorb shock and reduce the impact on the feet during walking or other activities. This is crucial for preventing injuries and reducing discomfort, especially for those with neuropathy.

4. **Accommodating Deformities:** For individuals with foot deformities or irregularities caused by diabetes, such as Charcot foot, custom orthotics can be designed to accommodate and support these unique conditions, aiding in stability and balance.

5. **Preventing Complications:** By reducing pressure, improving support, and enhancing comfort, custom orthotics play a significant role in preventing common diabetes-related foot complications, including diabetic neuropathy and ulcer formation.

6. **Improved Mobility:** Comfortable and properly aligned feet encourage individuals with diabetes to remain active and engage in regular physical activity, which is crucial for managing blood sugar levels and overall health.

The Process of Getting Custom Foot Orthotics

Obtaining custom foot orthotics typically involves a series of steps:

1. **Assessment:** A healthcare provider, often a podiatrist, assesses the patient’s foot and gait to identify specific needs and any existing issues.

2. **Scanning:** The provider takes a digital scan of the patient’s feet to create a precise model for the orthotics.

3. **Custom Design:** The orthotics are then custom-designed, using CAD-CAM software, to address the patient’s unique requirements, such as pressure points, arch support, and foot deformities.

4. **Fitting:** Once the orthotics are ready, they are fitted into the patient’s shoes to ensure a proper fit and comfort.

Conclusion

Custom foot orthotics are invaluable tools in the fight against diabetes-related foot problems.
They provide crucial support, reduce pressure, and enhance comfort, all of which are essential for preventing complications and maintaining mobility. For individuals with diabetes, investing in custom foot orthotics is an investment in their long-term foot health and overall well-being. These simple inserts can make a world of difference, helping individuals with diabetes step confidently on their journey to better health.   Contact us today if you have diabetes to get your custom foot orthotics.
Hair Transplant San Jose: A Comprehensive Guide

Are you considering a hair transplant in San Jose? You’ve come to the right place! This article will provide you with everything you need to know about the procedure, recovery process, and considerations to make before undergoing this life-changing treatment. We will also explore the vibrant city of San Jose itself and why it has become a hotspot for hair transplantation.

I) Introduction to Hair Transplantation

A) What is a Hair Transplant?

A hair transplant is a surgical procedure that involves removing hair follicles from a donor area, usually the back or sides of the scalp, and transplanting them to areas where hair has thinned or is no longer growing. This procedure is primarily used to treat male and female pattern baldness but can also be used to restore hair in other areas of the body, such as eyebrows or facial hair.

B) Types of Hair Transplant Methods

There are two main methods for hair transplantation: Follicular Unit Transplantation (FUT) and Follicular Unit Extraction (FUE).

1) Follicular Unit Transplantation (FUT)

In this method, a strip of skin containing hair follicles is removed from the donor area. The follicles are then separated and prepared for transplantation. The strip method leaves a linear scar on the donor area, which can be concealed with hair growth.

2) Follicular Unit Extraction (FUE)

The FUE method involves individual extraction of hair follicles from the donor area using a small punch. This technique leaves tiny, dot-like scars that are less conspicuous than the linear scar from FUT. FUE allows for more precise graft placement and a quicker healing time.

II) Why San Jose Is a Great Choice for Hair Transplantation

A) High-Quality Medical Facilities and Surgeons

San Jose is home to some of the top hair transplant surgeons and clinics in the country. These clinics utilize the latest technology and techniques, ensuring you receive a successful and natural-looking result. Well-known clinics in the city are accredited and adhere to high-quality standards, providing a safe and sterile environment for your procedure.

B) The Silicon Valley Influence

As the heart of Silicon Valley, San Jose thrives on innovation. This creates a competitive environment that encourages the development and implementation of cutting-edge hair transplant techniques and technologies to achieve optimal results.

C) Affordable and Accessible Travel

San Jose is well-connected to major cities across the United States and worldwide, making it an easy destination for those seeking hair transplantation. Furthermore, travelling to San Jose can be more affordable than other major cities like New York or Los Angeles.

D) Beautiful and Diverse City

In addition to being a leading destination for hair transplantation, San Jose is a beautiful and thriving city. With a rich history, diverse culture, and stunning surroundings, it is no wonder that many people choose to undergo their hair transplant journey in San Jose.

III) Preparing for Your Hair Transplant Consultation

A) Research the Procedure and Surgeons

Before booking a consultation, make sure you research the procedure itself and find a reputable clinic or surgeon in San Jose. Look for testimonials, reviews, and before-and-after photos to gauge your prospective surgeon’s results.

B) Compile a List of Questions

Prepare a list of essential questions to ask during your consultation. Inquire about the surgeon’s experience, the procedure’s success rate, recovery time, and potential risks.

C) Be Open and Honest

During the consultation, be completely honest about your medical history, lifestyle, and expectations. This will ensure the most suitable treatment plan is developed for you.

IV) Recovery and Aftercare

A) Follow Post-Operative Instructions

Your surgeon will provide you with a set of post-operative instructions to follow to ensure a smooth and speedy recovery. Adhering to these guidelines is crucial for the best possible outcome.

B) Manage Discomfort

Mild to moderate pain, swelling, and redness are common following hair transplantation. Over-the-counter pain medication and cold compresses can help alleviate any discomfort.

C) Resume Activities Gradually

It is essential to avoid strenuous activities, heavy lifting, and direct sunlight exposure for several weeks post-procedure. Discuss with your surgeon when you can safely return to your daily routines.

V) Conclusion

Hair transplant surgery in San Jose is an excellent option for individuals seeking to restore their hair and confidence. With world-class medical facilities, innovative technology, and top surgeons, San Jose offers an ideal environment for this life-changing procedure. Book a consultation today to take the first step towards a fuller, healthier head of hair!

FAQs

1) How long does hair transplant surgery take? The duration of the procedure depends on the number of grafts required and the chosen transplantation method. It can range from 4 to 8 hours.

2) When will I see results from my hair transplant? Initial results may be visible within three months, but full results typically take between 9 to 12 months.

3) Is a hair transplant permanent? Yes, the hair transplanted from the donor area is genetically resistant to balding. Therefore, results are generally permanent.

4) Can women undergo hair transplant surgery? Yes, women can undergo hair transplantation. The procedure can help address female pattern baldness and other types of hair loss.

World Class VIP Clinic

We deliver natural-looking results, with cost-effective pricing and a satisfaction guarantee. Take a look at our Before/After Results to see how delighted our patients are from all around the world.
22+ Years of Experience, with 12,000+ successful operations worldwide

21,000,000 Grafts Transplanted, with FUE, FUT, LHT, BHT and additional methods

Worldwide Acknowledgement: Dr. Tsilosani is a fellow of the ISHRS and the author of 32 scientific works

450+ Doctors Trained: we have trained over 450 hair transplantation professionals
What Is HALT?

Discover the power of HALT! Unveiling the significance of this acronym for managing hunger, anger, loneliness, and tiredness.

February 14, 2024

Understanding the HALT Acronym

The HALT acronym is an essential tool for recognizing and addressing our basic human needs. By understanding and applying the principles of HALT, we can take better care of ourselves and promote overall well-being. Let's explore the meaning and significance of the HALT acronym.

Introduction to HALT

HALT stands for Hunger, Anger, Loneliness, and Tiredness. These four elements represent common triggers that can have a significant impact on our physical and mental health. When these needs are not met, they can negatively affect our well-being and overall quality of life. By recognizing HALT triggers, we can proactively address these needs and take appropriate actions to ensure we are taking care of ourselves. It's important to note that HALT is not a diagnosis or a substitute for professional help. Instead, it serves as a guiding framework to help us identify areas where we may need additional support or self-care.

The Significance of the HALT Acronym

Understanding the significance of the HALT acronym is crucial for promoting self-awareness and practicing self-care. Let's take a closer look at each element of HALT and its impact on our well-being:

• Hunger: Hunger can lead to irritability, difficulty concentrating, and low energy levels. It's important to address hunger by nourishing our bodies with regular, balanced meals and snacks.

• Anger: Unresolved anger can negatively affect our mental health, relationships, and overall quality of life. By recognizing and managing our anger, we can promote emotional well-being and healthier interactions with others.

• Loneliness: Loneliness can have a profound impact on our mental and physical health. It can lead to feelings of sadness, isolation, and reduced self-esteem. Building social connections, seeking support, and engaging in activities we enjoy can help combat loneliness.

• Tiredness: Lack of sleep and chronic tiredness can impair cognitive function, mood, and overall productivity. Establishing healthy sleep habits and practicing relaxation techniques can improve the quality of our sleep and combat tiredness.

By understanding the significance of the HALT acronym, we can identify when these needs are not met and take appropriate steps to address them. It's important to prioritize self-care and seek professional help when needed to ensure our overall well-being. Remember, HALT serves as a starting point for self-reflection and taking care of our basic needs. By recognizing HALT triggers and implementing self-care strategies, we can cultivate a healthier and more balanced lifestyle.

Hunger

Hunger is a fundamental physiological sensation that arises when our body needs nourishment. In the context of the HALT acronym, understanding the effects of hunger on the body and mind is crucial for maintaining overall well-being.

Effects of Hunger on the Body and Mind

When we experience hunger, our body undergoes various physiological changes. These effects can impact both our physical and mental health:

1. Energy Depletion: Hunger is a sign that our body is running low on fuel. Without proper nourishment, our energy levels decrease, leading to fatigue and a lack of stamina.

2. Impaired Cognitive Function: Insufficient food intake can impair our cognitive abilities, including concentration, memory, and decision-making. It becomes challenging to focus on tasks and perform at our best.

3. Mood Changes: Hunger can trigger irritability, mood swings, and a general feeling of discomfort. This is because hunger affects the production of certain neurotransmitters in the brain, such as serotonin, which plays a key role in mood regulation.

4. Weakened Immune System: Chronic hunger weakens the immune system, making individuals more susceptible to infections and illnesses. It becomes harder for the body to fight off pathogens and maintain optimal health.

Tips for Addressing Hunger

Addressing hunger is essential for maintaining overall well-being. Here are some strategies to help manage and alleviate hunger:

• Eat Regular Meals: Establish a routine of eating balanced meals at regular intervals throughout the day. This helps to provide a steady source of energy and prevent extreme hunger.

• Include Protein and Fiber: Incorporate protein-rich foods, such as lean meats, legumes, and dairy products, into your meals. Additionally, focus on consuming fiber-rich foods like fruits, vegetables, and whole grains, as they promote satiety.

• Snack Mindfully: Choose nutritious snacks, such as nuts, yogurt, or fresh fruits, to curb hunger between meals. Avoid relying on unhealthy, sugary snacks that provide temporary relief but lead to energy crashes.

• Stay Hydrated: Sometimes, thirst can be mistaken for hunger. Stay hydrated by drinking plenty of water throughout the day.

• Listen to Your Body: Pay attention to your body's hunger cues. Eat when you feel hungry and stop when you feel satisfied, rather than overeating or restricting food intake.

• Seek Nutritional Guidance: If you struggle with managing hunger or have specific dietary needs, consider consulting with a registered dietitian who can provide personalized nutritional guidance.

By understanding the effects of hunger on our body and mind and implementing strategies to address hunger, we can maintain a balanced and nourished state, promoting overall well-being.

Anger

The Impact of Anger on Mental Health

Anger is an intense emotion that can have a profound impact on mental health. When anger is not managed properly, it can lead to various negative consequences.
Chronic anger can contribute to the development of mental health disorders, such as anxiety and depression. Additionally, uncontrolled anger can strain relationships, hinder problem-solving abilities, and negatively affect overall well-being.

Prolonged anger can lead to increased stress levels, which can have detrimental effects on both the mind and body. It can elevate blood pressure, weaken the immune system, and increase the risk of cardiovascular problems. Furthermore, unresolved anger can create a cycle of negative thoughts and emotions, perpetuating a state of distress.

Techniques for Managing Anger

Learning effective techniques for managing anger is crucial for maintaining good mental health and overall well-being. Here are some strategies that can help individuals navigate and cope with their anger:

1. Identify triggers: Recognize the situations, circumstances, or people that tend to trigger feelings of anger. Being aware of these triggers can help individuals anticipate and prepare for potential anger-inducing situations.

2. Practice deep breathing: Deep breathing exercises can help calm the body and mind during moments of anger. Taking slow, deep breaths in through the nose and out through the mouth can help regulate emotions and promote a sense of relaxation.

3. Engage in physical activity: Physical exercise is a powerful way to release built-up tension and reduce anger. Engaging in activities such as jogging, yoga, or boxing can help channel negative energy into a more positive outlet.

4. Practice mindfulness: Mindfulness techniques, such as meditation or guided imagery, can help individuals become more aware of their anger triggers and develop the ability to respond with greater control. Mindfulness encourages individuals to observe their thoughts and emotions without judgment, allowing for a more balanced perspective.

5. Seek support: It can be helpful to reach out to trusted friends, family members, or mental health professionals for support.
Talking about anger and its underlying causes can provide individuals with insights and guidance on managing their emotions effectively. By employing these techniques, individuals can gain a better understanding of their anger, develop healthier coping mechanisms, and improve their overall mental well-being. Managing anger in a constructive manner is essential for maintaining healthy relationships, reducing stress, and promoting emotional balance.

Loneliness

Loneliness is a prevalent emotional state that can have a significant impact on overall well-being. It is important to understand the effects of loneliness and implement strategies to combat it.

The Effects of Loneliness on Well-being

Loneliness can affect both mental and physical health. When individuals experience loneliness, they may feel a sense of disconnection from others, leading to emotional distress. The effects of loneliness on well-being can include:

1. Increased stress: Feelings of loneliness can contribute to heightened stress levels, as individuals may lack the support and social connections needed to cope with life's challenges.

2. Depression and anxiety: Prolonged loneliness can increase the risk of developing depression and anxiety disorders. The lack of social interaction and meaningful connections can lead to feelings of sadness, hopelessness, and worry.

3. Impaired cognitive function: Research suggests that loneliness can negatively impact cognitive function, including memory and attention. This impairment may be due to the lack of intellectual stimulation and social engagement.

4. Weakened immune system: Chronic loneliness has been associated with a weakened immune system, making individuals more susceptible to illnesses and infections.

Strategies for Combating Loneliness

Addressing loneliness requires proactive steps to foster social connections and improve overall well-being. Here are some strategies to combat loneliness:

1. Cultivate relationships: Seek out opportunities to meet new people and build meaningful connections. Join social clubs, attend community events, or engage in activities that align with your interests.

2. Stay connected: Maintain regular contact with family and friends. Use phone calls, video chats, or social media to stay in touch, especially if distance or circumstances prevent in-person interactions.

3. Volunteer: Engaging in volunteer work allows you to contribute to your community while also providing opportunities to interact with others who share similar interests.

4. Join support groups: Consider joining support groups or organizations that focus on topics of interest or provide a space for individuals experiencing similar challenges. This can provide a sense of belonging and support.

5. Practice self-care: Engage in activities that promote self-care and well-being. This can include exercise, meditation, pursuing hobbies, or seeking professional help if needed.

6. Seek professional assistance: If feelings of loneliness persist or significantly impact daily life, it may be beneficial to seek the guidance of a mental health professional. They can provide support and recommend appropriate interventions tailored to your specific needs.

By recognizing the effects of loneliness on well-being and implementing strategies to combat it, individuals can take steps towards improving their mental and emotional health. Building and maintaining social connections, practicing self-care, and seeking professional assistance when necessary are essential in navigating and overcoming feelings of loneliness.

Tiredness

Feeling tired is a common experience that can significantly impact our daily lives. In the context of the HALT acronym, understanding how tiredness affects cognitive function is essential. Additionally, implementing effective strategies to manage tiredness is crucial for overall well-being.
How Tiredness Affects Cognitive Function

Tiredness can have a profound impact on our cognitive abilities, influencing our attention, memory, decision-making, and overall mental performance. When we are tired, our brain function becomes compromised, leading to:

• Decreased concentration and focus
• Impaired decision-making abilities
• Slower reaction times
• Reduced creativity and problem-solving skills
• Difficulty retaining and recalling information

To better understand the effects of tiredness on cognitive function, consider the following:

• Attention and Focus: Reduced ability to concentrate on tasks and stay engaged
• Memory: Difficulty in retaining and recalling information
• Decision-making: Impaired judgment and problem-solving abilities
• Reaction Time: Slower response to stimuli
• Creativity: Decreased ability to think creatively and generate new ideas

Tips for Managing Tiredness

Managing tiredness is crucial for maintaining optimal cognitive function and overall well-being. Here are some effective strategies to combat tiredness:

1. Prioritize Sleep: Ensure you get enough sleep each night, aiming for 7-9 hours of quality sleep. Establish a consistent sleep routine and create a sleep-friendly environment.

2. Practice Good Sleep Hygiene: Adopt healthy sleep habits, such as avoiding electronic devices before bed, keeping your bedroom cool and dark, and avoiding caffeine and stimulating activities close to bedtime.

3. Take Power Naps: If you feel tired during the day, take short power naps (around 20 minutes) to recharge and improve alertness. Be mindful not to nap too close to your regular bedtime.

4. Stay Active: Regular physical activity can boost energy levels and promote better sleep. Engage in exercise or physical activities that you enjoy to improve your overall energy levels.

5. Maintain a Balanced Diet: Eat a nutritious and well-balanced diet to provide your body with the necessary energy.
Include foods rich in vitamins, minerals, and antioxidants to support overall health.

6. Stay Hydrated: Dehydration can contribute to tiredness, so ensure you drink enough water throughout the day to stay properly hydrated.

7. Manage Stress: Chronic stress can contribute to fatigue and tiredness. Practice stress management techniques such as deep breathing exercises, meditation, and engaging in activities that help you relax and unwind.

8. Avoid Overloading Yourself: Don't take on more tasks or commitments than you can handle. Prioritize your responsibilities and delegate tasks when possible to avoid excessive fatigue.

By implementing these tips and strategies, you can effectively manage tiredness and improve your cognitive function. Remember, addressing tiredness is an important aspect of the HALT acronym and contributes to overall well-being.

Applying HALT in Daily Life

Once you understand the significance of the HALT acronym, you can apply it in your daily life to enhance your well-being. This section will explore two important aspects of HALT implementation: recognizing HALT triggers and implementing self-care strategies.

Recognizing HALT Triggers

Recognizing the triggers that lead to feelings of hunger, anger, loneliness, and tiredness is a crucial step in effectively applying the HALT acronym in your daily life. By identifying these triggers, you can take proactive measures to address them before they escalate. Common triggers include:

• Hunger: Skipped meals, low blood sugar, restrictive diets
• Anger: Frustration, perceived injustice, criticism
• Loneliness: Lack of social connection, isolation
• Tiredness: Lack of sleep, excessive physical or mental exertion

Take note of situations or circumstances that consistently lead to these triggers. This awareness will help you better understand the root causes behind your emotions and reactions.
Implementing Self-Care Strategies

Once you've identified your HALT triggers, it's essential to implement self-care strategies to address them effectively. Here are some practical tips for each HALT component:

• Hunger: To address hunger, ensure you have regular, balanced meals and snacks throughout the day. Avoid restrictive diets and listen to your body's hunger cues.
• Anger: When anger arises, practice techniques such as deep breathing, mindfulness, and engaging in calming activities like meditation or physical exercise.
• Loneliness: Combat loneliness by actively seeking social connection. Reach out to friends, join social groups or clubs, and engage in activities that align with your interests.
• Tiredness: Prioritize adequate sleep and establish a consistent sleep routine. Take short breaks throughout the day to rest and recharge. Incorporate relaxation techniques, such as taking a bath or practicing relaxation exercises.

Remember, self-care is a personal journey, and what works for one person may not work for another. Experiment with different strategies and find what resonates with you. It's essential to be patient and kind to yourself as you navigate the process of implementing self-care practices.

By recognizing HALT triggers and implementing self-care strategies, you can effectively manage the challenges associated with hunger, anger, loneliness, and tiredness. Embracing the HALT acronym in your daily life empowers you to take control of your well-being and cultivate a healthier and more balanced lifestyle.

Seeking Professional Help

While the HALT acronym can be a valuable tool in managing one's well-being, there are instances where professional assistance may be necessary. Recognizing when it's time to seek help is crucial for addressing underlying issues and obtaining the support needed for optimal mental health.
When to Consider Professional Assistance

It is important to consider seeking professional help if the effects of hunger, anger, loneliness, or tiredness become overwhelming and begin to significantly impact daily life. While self-care strategies can be helpful, there are situations where the guidance and expertise of mental health professionals are needed. Here are some signs that indicate it may be time to seek professional assistance:

1. Persistent and severe symptoms: If the physical and emotional effects of hunger, anger, loneliness, or tiredness persist for an extended period and interfere with daily functioning, it may be a sign of an underlying mental health condition.
2. Inability to cope: If self-care strategies are not providing relief or if the challenges related to HALT triggers become increasingly difficult to manage, seeking professional help can provide additional coping techniques and support.
3. Impact on relationships: When the effects of HALT impact relationships with family, friends, or colleagues, it may be beneficial to seek professional assistance to address underlying issues and improve communication and interpersonal skills.
4. Suicidal thoughts or self-harm: If feelings of despair, hopelessness, or thoughts of self-harm arise, it is crucial to reach out to a mental health professional or a helpline immediately. These are serious signs that require immediate attention and support.

Resources for Support and Treatment

When seeking professional help, there are various resources available that can provide support and treatment. Here are some options to consider:

Resource and Description
Mental health professionals: Psychologists, psychiatrists, therapists, and counselors can provide individualized assessment, therapy, and treatment for mental health conditions. They can help address the underlying causes of HALT triggers and develop personalized coping strategies.
Support groups: Joining support groups, whether in-person or online, can provide a sense of community and understanding. These groups often consist of individuals who have experienced similar challenges and can offer valuable insights and support.
Helplines: Helplines and crisis hotlines are available 24/7 for immediate support. Trained professionals can provide guidance, crisis intervention, and referrals to appropriate resources.
Community organizations: Local community organizations may offer mental health services, workshops, and support programs at little or no cost. These organizations can connect individuals with resources in their area.
Online resources: There are numerous websites, blogs, and forums dedicated to mental health. These resources can provide information, coping strategies, and personal stories that may resonate with individuals experiencing HALT challenges.

Remember, seeking professional help is a sign of strength and a proactive step towards improving mental well-being. Mental health professionals have the expertise to guide individuals through the challenges associated with HALT triggers and provide the necessary support and treatment to foster long-term mental wellness.

Sources
What Is HALT? The Dangers of Being Hungry, Angry, Lonely
What Are the HALT Risk States?
HALT: Pay Attention to These Four Stressors
Postoperative seroma formation in breast reconstruction with latissimus dorsi flaps: A retrospective study of 174 consecutive cases
Koichi Tomita, Kenji Yano, Takeshi Masuoka, Ken Matsuda, Akiyoshi Takada, Ko Hosokawa
Research output: Contribution to journal › Article › peer-review
55 Citations (Scopus)

Abstract
The latissimus dorsi flap has been widely used in breast reconstruction surgery. Despite its potential advantages, such as low donor morbidity and vascular reliability, the complication of donor-site seroma formation frequently occurs. A total of 174 consecutive patients who underwent breast reconstruction with the latissimus dorsi flap from 2001 to 2006 were retrospectively reviewed. Age, body mass index (BMI), smoking history, timing of reconstruction, type of breast surgery and nodal dissection, and several other intraoperative data were analyzed. The overall incidence of postoperative seroma was 21%. Increased age (>50 years) and obesity (BMI >23 kg/m²) were significant risk factors for seroma formation (P = 0.02 and 0.004, respectively). Patients who underwent skin-sparing mastectomy or modified radical mastectomy had a higher incidence of seroma formation (28% and 33%, respectively) compared with those who had breast-conserving surgery (11%). A significant correlation was found between the type of breast surgery and the incidence of seroma (P = 0.04). The type of nodal dissection did not affect the incidence of postoperative seroma (P = 0.66). We concluded that increased age, obesity, and invasive breast surgery are risk factors for donor-site seroma formation after breast reconstruction with the latissimus dorsi flap. Close attention should be paid to preventing the development of postoperative seroma when operating on such high-risk patients.
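The surgery-type comparison reported above (28% vs. 33% vs. 11%, P = 0.04) is the kind of result a Pearson chi-square test on a contingency table produces. A minimal sketch of that test follows; the counts are hypothetical, chosen only to roughly match the reported percentages, since the abstract gives rates rather than raw data:

```python
# Pearson chi-square statistic for seroma incidence by surgery type.
# Counts are HYPOTHETICAL (picked to mirror ~11%, 28%, 33% incidence);
# they are not the study's actual data.
seroma =    [10, 14, 12]   # breast-conserving, skin-sparing, modified radical
no_seroma = [80, 36, 24]

def chi_square(table):
    """Pearson chi-square statistic for an r x c table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

chi2 = chi_square([seroma, no_seroma])
# Compare chi2 against the chi-square distribution with
# (rows - 1) * (cols - 1) = 2 degrees of freedom to get a P value.
```

With 2 degrees of freedom, a statistic above 5.99 corresponds to P < 0.05, which is how a "significant correlation" like the one reported would be declared.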
Original language: English
Pages (from-to): 149-151
Number of pages: 3
Journal: Annals of Plastic Surgery
Volume: 59
Issue number: 2
Publication status: Published - 2007 Aug

Keywords
• Breast reconstruction
• Donor-site seroma
• Latissimus dorsi flap
• Risk factor

ASJC Scopus subject areas
• Surgery
Wavelet art highlights the finer points of whale songs

OCR Advisor and software mathematician Mark Fisher received some recent coverage in Wired and Co.Design for his fantastic renderings of whale, dolphin, bird, and insect sounds using a diagnostic function called "wavelets." Wavelets are typically used to evaluate large data sets in a way that can look at the entire set while highlighting fine inner details. In this case Mark's large data sets are sounds.

In most visual sound analysis either the spectral or the amplitude components of a sound are evaluated over time – as found in the OCR Sound Library Audiograms (rendered through Cornell's "Raven" software). The amplitude (how loud a sound is) is fairly easy to render, as it just requires creating a visual correlation of "louder sound to more of something." The spectral (frequency) components are a bit more complicated because you have to measure the amplitudes through a discrete set of "frequency bins" or filters and have the outputs show up in a way that is easy to understand. For example, higher frequencies can appear "higher up" on the "y" axis, or "more to the right side" of an "x-y" graph. While this common processing-and-display convention is useful and easy to read once you get the hang of it, it discards the finer details by processing the sound in big chunks.

Wavelets are a more complicated process. Instead of running a sound through a set of stationary filter bins and measuring the outputs of the bins, wavelet analysis (mathematically) throws little bursts of sound or "wavelets" at a sound and evaluates the interference. If you know the precise shape and timing of the wavelets, the interference patterns will tell you a lot about what you're bouncing them off.
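The "throwing little bursts of sound at a sound" picture corresponds to the continuous wavelet transform: the signal is correlated with scaled copies of a short oscillating burst, and the strength of the match at each scale and time becomes the brightness of the rendered image. Here is a rough NumPy sketch of the idea (my own toy implementation and toy signal, not Mark Fisher's code):

```python
import numpy as np

def morlet(t, scale, w0=6.0):
    """Complex Morlet wavelet: a short oscillating burst at one scale."""
    x = t / scale
    return np.exp(1j * w0 * x) * np.exp(-0.5 * x ** 2) / np.sqrt(scale)

def cwt(signal, scales):
    """Correlate the signal with wavelets at each scale.

    Returns a (scales x time) array of complex coefficients; bright cells
    in the magnitude mark where a wavelet of that scale "interferes"
    strongly with the sound.
    """
    n = len(signal)
    t = np.arange(n) - n // 2
    coeffs = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        w = np.conj(morlet(t, s))[::-1]          # correlation kernel
        coeffs[i] = np.convolve(signal, w, mode="same")
    return coeffs

# Toy stand-in for a whale call: a chirp whose pitch rises over time.
fs = 1000
time = np.arange(0, 1, 1 / fs)
call = np.sin(2 * np.pi * (50 + 100 * time) * time)

scales = np.geomspace(2, 40, 24)
power = np.abs(cwt(call, scales)) ** 2           # "brightness" of the image
```

Plotting `power` with scale on the y axis and time on the x axis gives the familiar bright-on-dark wavelet picture; Mark's mandalas then wrap that rectangle into a circle.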
Aguasonic Sound Visualization of False Killer Whale Vocalization: a false killer whale wavelet mandala.

There is still frequency on the "y axis," but the "x axis" displays interference in the time domain; and "how much" (or "how little") is represented by how bright the patterns are. In this manner large and complicated sounds can yield meta-patterns that also reveal the tiny details. This may seem a bit difficult to wrap your head around, but these patterns can speak volumes about the specifics of a sound – stuff that we might be able to hear but would have a hard time describing.

Mark is taking his wavelet analyses and bending them into circular "mandalas." These mandalas can be useful, but they are also beautiful while telling us a lot about the details of the sounds that produced them. The complexities of the patterns also reflect the complexity of sound production and reception. Wavelet analysis might hold the key to how you can identify the voice of someone you haven't heard in 20 years, or how a pod of dolphins instantaneously sorts out its complex bioacoustic world.

You can also dispense with the heady stuff and just look at the gorgeous patterns on Mark's AguaSonic website. Very cool Mark!
Which shingles shot is worse?

Which shingles shot is worse?
• Shingrix (recombinant zoster vaccine) requires two doses administered two to six months apart.
• The two-dose vaccine is preferred because it is more effective.
• Side effects of Shingrix include:
• People may have a worse reaction to the first or second dose of Shingrix, or may have side effects from both doses.

Which shingles shot is best?
The CDC recommends that healthy adults ages 50 and older get the shingles vaccine, Shingrix, which provides greater protection than Zostavax. The vaccine is given in two doses, 2 to 6 months apart. Zostavax is still in use for some people ages 60 and older.

Why do you need two shots for the shingles vaccine?
Two doses of Shingrix provide strong protection against shingles and postherpetic neuralgia (PHN), the most common complication of shingles.

Where should the shingles shot be given?
Shingrix should be injected intramuscularly in the deltoid region of the upper arm. Subcutaneous injection is a vaccine administration error and should be avoided.

Do I need to get Shingrix every 5 years?
The effects of the Shingrix vaccine last for at least four years in most people and may last even longer in some. According to the Centers for Disease Control and Prevention (CDC), you do not need a booster dose after getting the two doses of Shingrix.

How many years does Shingrix last?
The research, published in the Journal of Infectious Diseases, shows that Shingrix offers protection for up to four years, but Professor Cunningham believes it will last much longer. "The second dose of the vaccine is important to ensure long-term protection," Professor Cunningham said.

Should I get another shingles vaccine after 5 years?
"People who had the old vaccine will benefit from getting the new one," Hrncir said. "Also, shingles can recur.
So even if you've already had shingles, get the new vaccine." The Centers for Disease Control and Prevention, or CDC, offers advice about those who should not get the new vaccine.

Is the 2nd shingles shot worse than the first?
Dear J.G.: Compared with the previous one-time vaccine Zostavax, the new two-dose Shingrix vaccine is much more effective. However, it does have a higher risk of side effects. You have had the most common side effect, though only 10 percent of people will have symptoms as bad as yours.

Is there a downside to the shingles vaccine?
Side effects are fairly common. The shingles vaccine may cause: redness and swelling around the injection site, soreness in the injected arm, and headache.

What happens if you get the shingles vaccine and you never had chickenpox?
If you've never had chicken pox, no. If you did not get chicken pox as a child, don't get either vaccination; it is likely that you are immune to the disease. It's very rare to give an adult the vaccine for chicken pox. Adults do not do well with childhood vaccinations because they can end up with complications.
Fixing the correct syntax for AliEn spooler
[u/mrichter/AliRoot.git] / FASTSIM / AliFastMuonTrackingEff.cxx

/**************************************************************************
 * Copyright(c) 1998-1999, ALICE Experiment at CERN, All rights reserved. *
 *                                                                        *
 * Author: The ALICE Off-line Project.                                    *
 * Contributors are mentioned in the code where appropriate.              *
 *                                                                        *
 * Permission to use, copy, modify and distribute this software and its   *
 * documentation strictly for non-commercial purposes is hereby granted   *
 * without fee, provided that the above copyright notice appears in all   *
 * copies and that both the copyright notice and this permission notice   *
 * appear in the supporting documentation. The authors make no claims     *
 * about the suitability of this software for any purpose. It is          *
 * provided "as is" without express or implied warranty.                  *
 **************************************************************************/

/* $Id$ */

//
// Class for fast simulation of the ALICE Muon Spectrometer
// Tracking Efficiency.
// The efficiency depends on transverse momentum pt, polar angle theta
// and azimuthal angle phi.
//
// Author: Alessandro de Falco
// [email protected]
//

#include "AliFastMuonTrackingEff.h"
#include "AliMUONFastTracking.h"

ClassImp(AliFastMuonTrackingEff)

AliFastMuonTrackingEff::AliFastMuonTrackingEff() :
    AliFastResponse("Efficiency", "Muon Tracking Efficiency"),
    fBackground(1.),
    fCharge(1.),
    fFastTracking(0)
{
// Constructor
}

AliFastMuonTrackingEff::AliFastMuonTrackingEff(const AliFastMuonTrackingEff& eff) :
    AliFastResponse(eff),
    fBackground(1.),
    fCharge(1.),
    fFastTracking(0)
{
// Copy constructor
    eff.Copy(*this);
}

void AliFastMuonTrackingEff::Init()
{
// Initialization
    fFastTracking = AliMUONFastTracking::Instance();
    fFastTracking->Init(fBackground);
}

Float_t AliFastMuonTrackingEff::Evaluate(Float_t /*charge*/, Float_t pt, Float_t theta, Float_t phi)
{
// Evaluate the efficiency for a muon with 3-vector (pt, theta, phi)
    Float_t p = pt / TMath::Sin(theta*TMath::Pi()/180.);
    Float_t eff = fFastTracking->Efficiency(p, theta, phi, Int_t(fCharge));
    return eff;
}

AliFastMuonTrackingEff& AliFastMuonTrackingEff::operator=(const AliFastMuonTrackingEff& rhs)
{
// Assignment operator
    rhs.Copy(*this);
    return *this;
}
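The one piece of physics in Evaluate() above is the conversion from transverse momentum pt to total momentum p before the efficiency lookup: p = pt / sin(theta), with theta in degrees. The same arithmetic in a few lines (Python used here purely for illustration; the original is ROOT/C++):

```python
import math

def muon_total_momentum(pt, theta_deg):
    """p = pt / sin(theta), theta in degrees, mirroring the
    conversion inside AliFastMuonTrackingEff::Evaluate."""
    return pt / math.sin(math.radians(theta_deg))
```

At theta = 90 degrees the track is fully transverse, so p equals pt; at shallower polar angles the total momentum grows for the same pt.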
Critter: Mosquito
David Rosen, Wildside Photography

Common name: Mosquito
Scientific name: several different species
Phylum: Arthropoda
Class: Insecta
Order: Diptera
Family: Culicidae
Habitat: larvae live in still water
Size: larvae 5 to 13 mm, adults 1 to 1.5 cm

Description: Adult Mosquitoes are gray or black and have two scaly wings. Females have a long, straw-like mouth for sucking blood. Males look different, with feathery antennae and mouthparts that cannot pierce skin. Mosquito larvae are brown, black or gray and have a breathing tube on their tail.

Fun facts: When a female Mosquito is ready to lay her eggs, she searches for stagnant (still) water with plenty of rotting detritus and bacteria for her larvae to eat. Her antennae can smell the gas that the bacteria make when they decompose detritus. More gas means more food for her young!

Life cycle: The Mosquito goes through four stages during its life cycle: egg, larva, pupa, and adult. The eggs normally hatch into larvae within 48 hours. Larvae must live in water from 7 to 14 days depending on the water temperature. During this time, the larva molts 3 times and grows to almost 1 cm. After the larva molts the fourth time it becomes a pupa. The pupa is lighter than the water and floats on the surface. The pupa does not eat. In 1 to 4 days, the adult Mosquito comes out of the pupa. It rests on the surface of the water until its body dries and hardens enough to fly away.

Ecology: A female Mosquito rarely lays her eggs in the clean water of a vernal pool. In a healthy vernal pool her larvae would have to compete with many vernal pool critters for food. So the female Mosquito lays her eggs in stagnant waters, such as roadside ditches, wetlands, and even buckets of water in your backyard. Here the larvae find plenty of Algae, detritus, Bacteria and Protozoa to eat. In vernal pools, Mosquito larvae are eaten by aquatic insect larvae such as the larvae of Dragonflies and Damselflies.
Adult female Mosquitoes feed on the blood of birds, lizards, people and other mammals. They need the protein found in blood to develop healthy eggs. Male Mosquitoes do not lay eggs, so they do not need blood. They feed on the nectar of flowers. In vernal pool grasslands, bats, spiders, Dragonflies, Damselflies, Killdeer, and other birds eat Mosquitoes. When a vernal pool is polluted, more Mosquito larvae occur in it. This is a sign that the food web has been disrupted. When urban runoff kills aquatic critters, it leaves more detritus, Bacteria, and Protozoa for Mosquito larvae to eat. Investigate: West Nile Virus is a serious disease spread by some species of Mosquitoes. Fear about the disease can lead communities to spray pesticides over vernal pool grasslands, even though the Mosquitoes that carry West Nile Virus do not lay eggs in vernal pools. Even worse, pesticides can kill many other vernal pool species as well. Could spraying vernal pool grasslands actually lead to more Mosquitoes in vernal pools? How? Water has a tight surface, like a very thin balloon. It is called surface tension. Mosquito larvae hang from it. Few aquatic critters can breathe without it. If oil or soap gets into runoff, the surface tension of water is destroyed. This kills most aquatic species. You can see this effect with Mosquito larvae. Find some larvae in a container of water (and detritus) that has been sitting outside for a few weeks. Break the water’s surface tension by stirring the water. Watch what the larvae do. Add 4 to 8 drops of dish soap (or cooking oil) to the surface. Watch what happens. Pour out the water so the Mosquitoes do not hatch in your house.
Suicidal Thoughts: A Cause of Extreme Depression

Imagine being degraded or pampered by an entity for a cause or an act that you have no control over. This can create a state of confusion, and that confusion can make a person think of committing suicide. There are various reasons for a person to think about suicide, such as peer pressure or mental illness. What makes the thought of taking one's own life turn into action is itself a complex phenomenon. However, with the advent of medical science, brain studies have taken an extra leap to find out what complexities the brain holds about memory and making the body function.

Suicidal Thoughts

So, what is suicide? It's simple: the thought of taking away one's own life, or planning to do so. It is absolutely human that everyone thinks of it at some point, but for some it can turn into a real threat. Two of the common causes of suicidal thoughts are depression and extreme stress. These may affect one person only temporarily, while for another they can be a driving force to the grave.

Interesting Facts About Suicide

The thought that pushes one to kill himself/herself is complex in itself. Having a suicidal thought is not unusual, but what makes a person act on it is something that neurologists are still researching. Here are some interesting facts about suicide.

• Mental illnesses such as anxiety, depression and anorexia can lead to suicide.
• Substance abuse, such as drugs and alcohol, can lead to depression and suicidal thoughts.
• If a loved one is having suicidal thoughts, make it a point to help by talking with or being with them.
• A family history of mental illness or suicide can be one major cause.
• India is considered to be the most depressed country in the world by the World Health Organisation (WHO), with 6.5% of people suffering from mental diseases that can lead to suicide. Also, the average age of a person committing suicide is below 44 years.
How Does Depression Lead To Suicide?

A state of extreme sadness and loss of interest in daily activities are some of the signs of depression. A depressed person tends to connect any situation to his/her distress. So, how does depression progress? Several conditions such as cardiovascular diseases, arthritis, cancer and diabetes can worsen depression.

The extreme form of depression can lead to suicidal thoughts or make a person think of ways to end his/her life. If suicidal thinking supplies the ways to end a life, depression is what pushes a person towards them. Thus the two conditions feed into each other.

Depression in India

According to a recent report by the World Health Organisation (WHO), India is considered the most depressed country in the world, with an estimated 10.9 lakh people committing suicide every single year. In 2014, fewer than 1 lakh people were estimated to have committed suicide, and the number had drastically increased by 2018. Moreover, the report also states that the average person who committed suicide was below the age of 44.

Suicide Symptoms

The central thought that runs in a depressed person's mind is that by taking away their own life they would end the pain for others. Suicidal persons think of not being a burden to anyone, and this makes them plan ways to end their lives in an untimely and uncertain manner. So, how do you know you may have suicidal thoughts? Here are signs that you need to watch out for.

• Thoughts that focus on death
• May begin talking openly about unbearable pain to others
• Extremely happy and extremely sad (mood swings)
• Avoids family members and friends (isolation)
• Loses interest in social events and stays isolated
• Substance abuse and alcoholism
• Serious physical and mental illness

Note: One important sign of a suicidal person is that when they are extremely angry, they may immediately turn calm and polite the moment they think of suicide. This phenomenon can be termed a mood swing.
Substance abuse can trigger mood swings.

Causes of Suicide

Since the dawn of modern science, neurologists have tried to understand what factors trigger a person to commit suicide. However, the human brain is complex and there is much we are yet to know about it. The answer to our very existence lies inside this complex entity, which constantly adapts itself to our surroundings. Here are some factors that trigger suicidal thoughts.

• Family history of suicide (genetic reasons)
• Depression, extreme stress, post-traumatic stress disorder (PTSD) and bipolar disorder
• Childhood trauma (peer pressure)
• Drug abuse and alcohol abuse
• Loss of a loved one or a job, or a broken relationship
• Financial difficulty
• Extreme trauma caused by another individual or a given situation

Also Read: 10 Mental Diseases Today's Youth Are Prone To

Kinds of Suicidal Thoughts

Suicide is the act of thinking about or planning to end one's life, but what is more important is what runs deep inside a person's mind to push them towards such an act. This varies with age and gender, because of social conditioning in which men and women are taught to behave according to their gender. The points listed below will give you an insight into the kinds of suicidal thoughts a person encounters.

1. Suicidal Thoughts in Men
• Feelings of irritation, restlessness, extreme sadness and hopelessness.
• May develop behavioural problems such as social disinterest, extreme tiredness, excessive drug and alcohol usage etc.
• Unable to concentrate, difficulty in completing a given task, lack of sleep etc.

2. Suicidal Thoughts in Women
• Irritability, feelings of hopelessness and extreme sadness.
• Suicidal thoughts of ending her life without causing any trouble to anyone.
• Slow thinking and responding to a given situation.
• Either sleeping for a longer duration or finding it hard to sleep.
• Changes in weight, increased cramps and low energy.
3. Suicidal Thoughts in Children
• Extreme anger, mood swings and crying.
• Feeling incompetent, feeling that they can't do anything right, and extreme sadness.
• Refusing to go to school, getting into trouble often and associating everything with death or suicide.
• Finding it difficult to concentrate, not faring well in school and a drastic decline in grades.
• Digestive problems, loss of appetite and weight changes.

How To Help People With Suicidal Thoughts?

People with suicidal thoughts may seem happy on the outside, but it's the internal thoughts that surround them with a feeling to end their life. Thus, the first step to prevent them from undergoing such trauma is to be sensitive to them. Here is how you can be sensitive to a suicidal person.

Positive Conversation: Talk to the person and often engage them in good conversation. You could make the person feel good about himself/herself. Thus, build conversations that make the person feel encouraged about themselves.

Keep Surroundings Safe: If there are objects that the person may think of using to harm themselves, remove such objects (for example, knives, sharp utensils, medicines, chemicals etc.). Suicidal persons plan ways to kill themselves, and you wouldn't want that to happen.

Being a Friend: Understanding the root cause of such thoughts is the first step you could take to help a person. Be a friend to them by understanding what it means for them to consider such fatal measures, and support them by consulting a psychiatrist.

Exercising: Engaging them in any sort of physical activity (running, jogging, walking, swimming etc.) will help divert them from harmful thoughts. Regular exercise will help them beat stress, which is one cause of suicide.

Consult a Doctor: Suicide is a fight on the inside, and it may result in death. Consulting a psychiatrist or a counsellor is the best way to help the person fight this mental illness.
World Suicide Prevention Day

In order to create awareness regarding suicide and its other facets, the International Association for Suicide Prevention (IASP), in collaboration with the World Health Organisation (WHO), has recognised September 10 of every year as World Suicide Prevention Day. The IASP recognised 8 lakh deaths worldwide every year, roughly one every 40 seconds, due to suicide. The organisation spreads awareness in the form of press meetings, training courses for mentally vulnerable people, cycling events etc.

FAQs

You wouldn't want a close friend or family member to risk their life over a thought caused by an unexplainable situation. For this reason, talking to the person and keeping them engaged in a positive conversation is the first thing you need to do. Here are some frequently asked questions that will shed some light on this topic.

1. Which are the most suicidal places in the world?
Here is a list of places where the most suicides take place.
• More than 1,500 suicides have taken place at the Golden Gate Bridge (United States).
• More than 2,000 suicides have happened since 1968 at the Nanjing Yangtze River Bridge (China).
• More than 105 suicides happen every year in the Aokigahara forest at Mount Fuji (Japan).

2. Who should I call if I see a person suffering from suicidal thoughts?
In case you come across a friend or a family member who is suffering from any form of depression, suicidal thoughts, anxiety etc., then call 022 2754 6669 / +91-9820466726 (AASRA).

3. When does depression begin?
Depression can begin in childhood, and some of its early symptoms are:
• Irritability and a feeling of hopelessness
• Feeling rejected every time
• Loss or gain in appetite
• Lack of sleep or excessive sleep
• Difficulty in concentrating
• Low energy
• Feeling worthless
• Frequent suicidal thoughts

Suicide is a global problem, and all you could possibly do is talk to the person and help them get out of it.
Follow the above-listed suicide-prevention steps, which will give you a sense of how and what to do in times of emergency. Moreover, understanding the cause of this condition is something you will have to invest time in. Make sure that you listen to the person sensitively rather than being judgemental about his/her problem.
Fri 13 Mar 2015

What Are The Key Causes Of Adult Acquired FlatFoot?

Overview
Posterior tibial tendon dysfunction is one of several terms used to describe a painful, progressive flatfoot deformity in adults. Other terms include posterior tibial tendon insufficiency and adult acquired flatfoot. The term adult acquired flatfoot is more appropriate because it allows a broader recognition of causative factors, not only limited to the posterior tibial tendon, an event where the posterior tibial tendon loses strength and function. The adult acquired flatfoot is a progressive, symptomatic (painful) deformity resulting from gradual stretch (attenuation) of the tibialis posterior tendon as well as the ligaments that support the arch of the foot.

Flat Foot

Causes
Obesity - Over time, if your body is carrying those extra pounds, you can potentially injure your feet. The extra weight puts pressure on the ligaments that support your feet. Also, being overweight can lead to type two diabetes, which can also contribute to AAFD.
Diabetes - Diabetes can also play a role in Adult Acquired Flatfoot Deformity. Diabetes can cause damage to the ligaments, which support your feet and other bones in your body. In addition to damaged ligaments, uncontrolled diabetes can lead to ulcers on your feet. When the arches fall in the feet, the front of the foot is wider, and the outer aspects of the foot can start to rub in your shoe wear. Patients with uncontrolled diabetes may not notice or have symptoms of pain due to nerve damage, so they don't see that they have a problem, and other complications such as ulcers and wounds occur in the feet.
Hypertension - High blood pressure causes arteries to narrow over time, which can decrease blood flow to the ligaments. Blood flow to the ligaments is what keeps the foot arches healthy and supportive.
Arthritis - Arthritis can form in an old injury over time, and this can lead to flat feet as well. Arthritis is painful, which adds to the increased pain of AAFD.
Injury - Injuries are a common cause of AAFD as well, for example stress from impact sports. Ligament damage from an injury can cause the bones of the foot to fall out of alignment. Over time, the ligaments tear and the feet flatten completely.

Symptoms
As different types of flatfoot have different causes, the associated symptoms can vary from person to person. Some generalized symptoms are listed here. Pain along the course of the posterior tibial tendon, which lies on the inside of the foot and ankle; this can be associated with swelling on the inside of the ankle. Pain that is worse with activity; high intensity or impact activities, such as running and jumping, can be very difficult. Some patients have difficulty walking or even standing for long periods of time and may experience pain on the inside of the ankle and in the arch of the foot. A feeling of "dragging the foot." When the foot collapses, the heel bone may shift position and put pressure on the outer ankle bone (fibula), causing pain in the bones and tendons on the outside of the ankle joint. Patients with an old injury or arthritis in the middle of the foot can have painful, bony bumps on the top and inside of the foot, which make shoe wear very difficult. Sometimes the bony spurs are so large that they pinch the nerves, which can result in numbness and tingling on the top of the foot and into the toes. Diabetic patients may not experience pain if they have nerve damage; they may only notice swelling or a large bump on the bottom of the foot. The large bump can cause skin problems, and an ulcer (a sore that does not heal) may develop if proper diabetic footwear is not used.

Diagnosis
There are four stages of adult acquired flatfoot deformity (AAFD), and the severity of the deformity determines the stage. For example, Stage I means there is a flatfoot position but without deformity; pain and swelling from tendinitis are common in this stage.
In Stage II there is a change in foot alignment, meaning a deformity is starting to develop; the physician can still move the bones back into place manually (passively). Stage III AAFD indicates a fixed deformity: the ankle is stiff or rigid and doesn't move beyond a neutral (midline) position. Stage IV is characterized by deformity in both the foot and the ankle. The deformity may be flexible or fixed, and the joints often show signs of degenerative joint disease (arthritis).

Non-surgical Treatment
PTTD is a progressive condition, and early treatment is needed to prevent relentless progression to more advanced disease and further problems for the affected foot. In general, treatment includes the following. Rest: reducing or even stopping activities that worsen the pain is the initial step. Switching to low-impact exercise such as cycling, elliptical trainers, or swimming is helpful, as these activities do not put a large impact load on the foot. Ice: apply cold packs to the most painful area of the posterior tibial tendon frequently to keep the swelling down. Placing ice over the tendon immediately after completing an exercise helps to decrease the inflammation around it. Nonsteroidal anti-inflammatory drugs (NSAIDs): drugs such as Arcoxia, Voltaren, and Celebrex help to reduce pain and inflammation. Taking such medications prior to exercise helps to limit inflammation around the tendon; however, long-term use can be harmful, with side effects including peptic ulcer disease and renal impairment or failure. Casting: a short leg cast or walking boot may be used for 6 to 8 weeks in the acutely painful foot. This allows the tendon to rest and the swelling to go down. However, a cast causes the other muscles of the leg to atrophy (decrease in strength) and is therefore used only if no other conservative treatment works. Most people can be helped with orthotics and braces.
An orthotic is a shoe insert. It is the most common non-surgical treatment for flatfoot and is very safe to use. A custom orthotic is required in patients who have moderate to severe changes in the shape of the foot. Physiotherapy helps to strengthen the injured tendon and can help patients with mild to moderate disease of the posterior tibial tendon.

Flat Foot Surgical Treatment
Surgery is usually performed when non-surgical measures have failed. The goal of surgery is to eliminate pain, stop progression of the deformity, and improve the patient's mobility. More than one technique may be used, and surgery tends to include one or more of the following. The tendon may be reconstructed or replaced using another tendon in the foot or ankle; the name of the technique depends on the tendon used: flexor digitorum longus (FDL) transfer, flexor hallucis longus (FHL) transfer, or tibialis anterior transfer (Cobb procedure). Calcaneal osteotomy: the heel bone may be shifted to bring your heel back under your leg, with the position fixed by a screw. Lengthening of the Achilles tendon if it is particularly tight. Repair of one of the ligaments under your foot. If you smoke, your surgeon may refuse to operate unless you can refrain from smoking before and during the healing phase of your procedure; research has shown that smoking significantly delays bone healing.
Focus Points for First WebEx

1. What precautions does the nurse take when giving any type of chemo medications?
2. What is superior vena cava syndrome, and what are the symptoms associated with this syndrome?
3. What will the nurse do for a patient following a prostatectomy, and what should be done for dark red urine output?
4. What post-op care should be provided after a mastectomy?
5. What labs should the nurse be concerned with when caring for the cancer patient (e.g., neutropenia, thrombocytopenia)?
6. Know all signs/symptoms of neutropenia and thrombocytopenia.
7. What kind of education would you give someone receiving external radiation?
8. Know the stages of cancer development; malignant transformation occurs through…
9. Know how to interpret the TNM staging system.
10. What drug does the physician order for the chemo patient with low hemoglobin levels?
11. What are the normal ranges for platelet counts, and what nursing intervention should the nurse perform for a low platelet count?
12. What intervention/education should the nurse suggest to the newly diagnosed cancer patient concerning memory problems?
13. What is mucositis, and what interventions does the nurse use to treat it?
14. What is the difference between basal cell and squamous cell carcinomas?
15. What type of behavior do cancer cells exhibit?
16. What interventions are used for nausea/vomiting associated with cancer treatment?
17. What education would the nurse give to lessen the impact on the development of cancer?
18. What is tumor lysis syndrome?
19. What interventions would the nurse incorporate for the patient diagnosed with a brain tumor?
20. What education will the nurse give the patient who takes herbal medications while receiving treatment for cancer?
21. Ginger helps the cancer patient with what?
22. What herbal supplement should the patient avoid when taking estrogen?
23. Why should the surgical patient stop taking ginger, bilberry, feverfew, and garlic?
24. What is the association between cancer and T'ai Chi?
25. What types of interventions can the nurse provide for cancer comfort?
26. What is palliative care, and what purpose does treatment serve for this type of patient?
27. What is hospice, and what is the role of the nurse working with the patient and family?
28. What is the difference between agonal breathing, apneustic breathing, and Cheyne-Stokes respiration?
29. What are the signs/symptoms of impending death, and which sign indicates that death is near?
30. What is the most important treatment the nurse provides for the dying patient?
31. What are advance directives, and what education does the nurse provide to the patient/family?
32. What actions should the nurse take at the death of the patient, for the patient and their family?
33. What is the difference between hospice and palliative care?
34. How does one identify pain in the cancer patient?
35. What tasks are unlicensed staff allowed to do for the dying patient?
36. What interventions does the nurse use for the "death rattle"?
37. What are the Catholic customs associated with death and dying?
38. What is terminal dehydration?
39. What is the purpose of proportional palliative sedation?
REVIEW ARTICLE
https://doi.org/10.5005/jp-journals-10009-1596
Donald School Journal of Ultrasound in Obstetrics and Gynecology, Volume 13, Issue 3, Year 2019

Doppler Basics for a Gynecologist
Sonal Panchal(1), Chaitanya Nagori(2)
(1,2) Dr Nagori's Institute for Infertility and IVF, Ahmedabad, Gujarat, India
Corresponding Author: Sonal Panchal, Dr Nagori's Institute for Infertility and IVF, Ahmedabad, Gujarat, India, Phone: +91 9824050911, e-mail: [email protected]
How to cite this article: Panchal S, Nagori C. Doppler Basics for a Gynecologist. Donald School J Ultrasound Obstet Gynecol 2019;13(3):129–138.
Source of support: Nil
Conflict of interest: None

ABSTRACT
Ultrasound is the first-line modality for the assessment of patients with gynecological conditions and infertility. Doppler plays a very important role in the evaluation of these patients, both for the differential diagnosis of pathologies in patients with gynecological complaints and for understanding the changes occurring during the menstrual cycle so that fertility treatment can be modified accordingly. However, this requires optimum image quality, which can be achieved only with an adequate understanding of the various knobs and settings of the B mode and Doppler on the scanner. This article discusses these settings from a purely practical perspective.
Keywords: Doppler, Image quality, Scanner settings.

WHAT IS DOPPLER?
Doppler is an effect produced on the frequency of a sound wave when it hits a moving object. It can most simply be explained by the difference in the sound perceived by an individual standing on a road listening to the siren of a moving ambulance: the perceived pitch of the sound rises as the sound source moves towards the individual and falls as it moves away.
When the receiver and the sound source move towards each other, the frequency of the sound heard is higher than that emitted by the source, and if the two move away from each other, the frequency heard is lower than that produced by the source. The difference between the emitted and the received frequency is known as the Doppler shift. This effect was first described by Christian Doppler in 1842; however, it was only in 1959 that Satomura demonstrated the use of this technology for the demonstration of blood flow. Translating the Doppler effect into the body for blood flow assessment: the sender and receiver are both static, and the target, the red blood cells (RBCs), moves. The first frequency shift occurs when the sound beam hits the moving RBC, and a second shift occurs when the beam returns. The shift depends on the angle at which the sound beam hits the moving object. The equation used for the calculation of the velocity from the frequency change on Doppler is:

fd = (2 × ft × V × cos θ) / c

where fd = Doppler shift, ft = frequency of the transmitted beam, c = the speed of sound in tissue, V = the velocity of blood flow, and θ = the angle of incidence between the ultrasound beam and the direction of flow. Considering this equation, the frequency of the received beam depends on the frequency of the incident beam, the velocity of the moving object, and the angle of incidence. More importantly, it does not depend on the absolute value of the angle of incidence but on the cosine of this angle. Therefore, for correct calculation of the frequency, or of any one of the unknown variables above, the cos value of the angle (cos θ) should be within acceptable limits (Table 1). The velocity is calculated taking into account an angle correction factor of 1/cos θ (Table 2).
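As a minimal sketch of the relationship above (not from the article: the helper name is illustrative, and an average soft-tissue sound speed of 1540 m/s is assumed), the velocity calculation and the effect of the 1/cos θ correction factor can be written as:

```python
import math

C_TISSUE = 1540.0  # assumed average speed of sound in soft tissue, m/s

def doppler_velocity(f_shift_hz, f_transmit_hz, angle_deg):
    """Estimate flow velocity (m/s) from a measured Doppler shift.

    Rearranges fd = (2 * ft * V * cos(theta)) / c to solve for V.
    At 90 degrees cos(theta) is 0, so no velocity can be computed.
    """
    cos_theta = math.cos(math.radians(angle_deg))
    if abs(cos_theta) < 1e-6:
        raise ValueError("Doppler angle of 90 degrees: flow cannot be measured")
    return (f_shift_hz * C_TISSUE) / (2.0 * f_transmit_hz * cos_theta)

# The same measured shift interpreted at 60 degrees doubles the velocity
# estimate relative to 0 degrees (1/cos 60 = 2.00, as in Table 2).
v0 = doppler_velocity(1000.0, 5e6, 0)    # about 0.154 m/s
v60 = doppler_velocity(1000.0, 5e6, 60)  # about 0.308 m/s
print(round(v60 / v0, 2))  # correction factor of 2.0 at 60 degrees
```

The 1/cos θ factor grows rapidly beyond 60°, which is why the Doppler angle is kept small wherever possible.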
Table 1: Angle of incidence of the sound beam, their cos values, and the percentage deviation these lead to in the velocity value [1]

Angle (°)   Cos value   % deviation
0           1           0
30          0.866       13
45          0.707       29
60          0.5         50
90          0           100

Table 2: Angle of incidence of the sound beam, correction factors used for calculation of velocity, and the correction error [1]

Angle (°)   Correction factor 1/cos θ   Correction error (%)
30          1.15                        +3
45          1.41                        +6
60          2.00                        +9
70          2.92                        +14
75          3.86                        +21
80          5.76                        +30

The Doppler effect can be displayed as color Doppler, power Doppler, and spectral Doppler.

COLOR DOPPLER
"Doppler" is the term most commonly used to refer to color Doppler. It displays the blood flow in two colors, conventionally red and blue, and the color indicates the direction of the flow: flow towards the probe is shown in red and flow away from the probe in blue (Fig. 1). However, these can be interchanged using the invert switch (Fig. 2). When the flow is perpendicular to the sound beam, neither towards nor away from the probe, no color is displayed despite the presence of flow. The cause has already been explained: when the flow is perpendicular to the sound beam, the Doppler angle is 90°, the cos θ value is 0, and the flow cannot be displayed. The arterial flow is pulsatile and the venous flow is nonpulsatile. The brightness of the color depends on the velocity of the flow: higher flow velocities display bright colors and lower flow velocities display dull colors (Fig. 3). However, color Doppler does not give exact velocity values; it is therefore a directional, semiquantitative Doppler.

Fig. 1: Color Doppler image showing flow toward the probe indicated in red and flow away from the probe in blue
Fig. 2: Color Doppler image showing the relationship of the colors to those in the reference band, as shown by the arrows. In this image, the colors have been inverted
Fig. 3: Color Doppler image showing varying brightness of both red and blue colors.
Bright colors indicate higher velocity flow, as shown by the arrow, and dull colors show lower velocity flows
Fig. 4: Power Doppler image showing brighter color for the higher velocity flow, as shown by arrows

POWER DOPPLER
Though a Doppler technique, power Doppler is not angle dependent. The movement of any object produces energy, and this is what is used to depict the blood flow signals in power Doppler: wherever there is movement of blood or of body tissues, color signals are generated. Because it is not angle dependent, its advantage is that it displays color signals even in vessels that are perpendicular to the sound beam. The disadvantage, however, is that it is a single-color display and does not show the flow direction. It intrinsically potentiates the signals and is therefore a useful technology for the documentation of low-velocity blood flows. The main applications of power Doppler are therefore to pick up flow in low-velocity blood vessels and in vessels perpendicular to the sound beam (Fig. 4). As with color Doppler, the color display of the power Doppler signals varies with the velocity of the moving object: high-velocity movements show a bright color and low-velocity movements a dull color (Fig. 5). HD flow (high-definition flow) is a new addition to the basic power Doppler technology. It is a directional power Doppler: apart from high flow sensitivity, HD flow also color-codes the flow towards or away from the probe, as in color Doppler (Fig. 6). As with color and power Doppler, the brightness of the color correlates with the velocity of the moving object.

SPECTRAL DOPPLER
Spectral Doppler is a spectral display of the flow/movement of a moving object. The trace above the baseline in the spectrum represents flow towards the probe, and the trace below the baseline represents flow away from the probe (Fig. 7).
As in color Doppler, the invert switch can reverse the flow display. On spectral Doppler, the arterial flow appears spiky and the venous flow appears flat. There is a scale at the side of the spectrum, and it is with this scale that the exact velocities of the flows can be calculated (Fig. 8).

Fig. 5: Power Doppler image showing brighter color for high velocity flow and dull color for low velocity flow
Fig. 6: Endometrial flow on HD flow imaging
Fig. 7: Spectral Doppler image showing flow toward and away from the probe as the spectrum above and below the baseline, respectively
Fig. 8: Spectral Doppler image showing the velocity scale, as marked by the arrow

The spectrum can be displayed for a pulsed wave Doppler and a continuous wave Doppler. In the pulsed wave Doppler, the transducer is dedicated to emitting the sound wave during one time interval and then to receiving it during the following time interval of the same length, alternately. As the sound waves are emitted in pulses, it is called a pulsed wave Doppler. The limitation of the pulsed wave Doppler is that the maximum frequency that can be recorded correctly is half the pulse repetition frequency (PRF). This limit is called the Nyquist limit or Nyquist frequency. The PRF should therefore be set to at least double the frequency to be measured, and to record different velocities, the pulse repetition frequencies have to be selected accordingly. The pulsed Doppler therefore has an upper limit on the velocities it can record. This can be overcome by the continuous wave Doppler, which uses dedicated elements for emitting and receiving sound waves and therefore has no upper limit on recorded velocities. It is used chiefly for adult echocardiography. Since the continuous wave Doppler is not used for Doppler studies in gynecology and obstetrics, we shall not include it in the further discussion here.
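The Nyquist relationship just described can be illustrated numerically. This is a hedged sketch (the helper name is hypothetical, and a tissue sound speed of 1540 m/s is assumed), combining the PRF/2 limit with the Doppler equation:

```python
import math

def max_velocity_nyquist(prf_hz, f_transmit_hz, angle_deg=0.0, c=1540.0):
    """Highest velocity (m/s) a pulsed wave Doppler can record without aliasing.

    The largest Doppler shift that sampling at a given PRF can represent
    is PRF / 2 (the Nyquist limit); substituting that shift into the
    Doppler equation gives the corresponding maximum velocity.
    """
    f_max = prf_hz / 2.0  # Nyquist limit
    return (f_max * c) / (2.0 * f_transmit_hz * math.cos(math.radians(angle_deg)))

# With a 4 kHz PRF and a 5 MHz transmit beam at 0 degrees,
# velocities above roughly 0.308 m/s will alias.
print(round(max_velocity_nyquist(4000, 5e6), 3), "m/s")
```

Doubling the PRF doubles the recordable velocity, which is the numerical reason for setting the PRF to at least twice the frequency to be measured.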
To obtain correct information about flow velocities with Doppler, certain settings and adjustments on the scanner are required. Though most of these are stored in dedicated presets, it is important to understand how certain switches/knobs can be manipulated to achieve the best flow information. These are the Doppler box size, color gains, PRF, wall motion filter, and balance for color and power Doppler, and the sample volume, gains, PRF, wall motion filter, and angle correction for the pulsed wave Doppler.

COLOR/POWER DOPPLER SETTINGS
Box Size
When the color Doppler is switched on, a box appears on the screen, over the B mode image. This box defines the area of the B mode image in which blood flow information will be sought. It is important to consider that when the Doppler is switched on, the machine has to process the B mode information as well as the flow information; therefore, the frame rate decreases significantly. What is this frame rate? The ultrasound scan gives us continuous, live, real-time information about the area scanned. We call this real time because it matches the live movements of the human body. This is done by the compilation of multiple B mode images; only if the B mode images are processed fast enough to match the real-time changes is the scan seen as continuous, as in a video. The number of B mode images produced in unit time is called the frame rate. This clearly means that the higher the frame rate, the better the scan quality. The frame rate can be increased if the machine has less to process. Though switching on the color Doppler decreases the frame rate, the frame rate with color Doppler can be optimized by making the color box just large enough to cover the area of interest. Before switching on the color Doppler, the B mode image should also be optimized for its angle and depth to concentrate only on the area of interest.
The color box can be moved all across the B mode image, and its size can be altered as required (Fig. 9).

Fig. 9: HD flow image showing placement of the color box to show the entire circle of Willis
Fig. 10: HD flow image showing color filling up the entire box and spilling out of the vessels due to high gains
Fig. 11: The image on the right shows full filling of the vessels with no spill outside the vessels, suggesting an optimum gain setting for this HD flow, whereas that on the left shows no color filling in some of the vessels, suggesting low gains
Fig. 12: Diagrammatic demonstration of empty space between the vessel wall and the central color column due to low gain settings for color/power Doppler flow

Gains
When the Doppler is switched on, it should show the blood vessels filled with color and no color spilling out of the vessels. This is done by gain adjustment. When the gains are too high, color is seen spilling out of the vessels (Fig. 10). In contrast, when the gains are low, the color does not completely fill the vessel (Fig. 11), because low velocity signals are then not picked up by the Doppler. It is important to mention here that in a vessel the central stream has the highest velocity flow, whereas close to the walls the velocity is lower owing to friction with the walls. The gain is correct when the entire lumen of the vessel is filled with color and there is no spill outside. How to set it: increase the gains to the maximum; there will be a lot of color spill. Decrease the gains until the color is contained within the vessel; on decreasing further, black (anechoic) areas appear between the color column and the vessel wall (Fig. 12), which indicate over-reduced gains. Increase the gains until the vessel again fills completely with color: this is the correct setting.
Once set and placed in the presets, the gain settings for color and power Doppler are not to be changed.

PRF
It has already been discussed that the PRF decides the maximum receiving frequency of the sound wave (and indirectly the velocity) that is recordable at a particular setting [2] (the Nyquist frequency). It is therefore important to select an optimum PRF for the velocity of the blood flow studied. If a high PRF is used for a low velocity flow, color will not be picked up even where flow is present (Fig. 13). Conversely, if a low PRF is used for high velocity flows, there will be aliasing (mixing of red and blue colors), which looks like turbulence (Fig. 14). The PRF setting is optimum when the color homogeneously fills the entire vessel with a single color, red or blue (Fig. 15).

Fig. 13: Color not filling the entire vessel because a high PRF is used for a low velocity flow
Fig. 14: Color Doppler image with a low PRF setting for high velocity flows, showing aliasing
Fig. 15: Optimum PRF settings show unicolor filling of vessels with no spill
Fig. 16: Color Doppler image showing the line of balance (arrow)

Wall Motion Filter
It is known that Doppler produces color signals wherever there is movement, and the brightness of the color depends on the velocity of the moving object. This means that color signals are produced by the red blood corpuscles in the blood, but also by the wall movement of the artery and by the pulsations transmitted to the surrounding tissues. The color signals of the blood flow are the brightest, those of wall motion are dull, and those from pulsations transmitted to the surrounding tissues are the dullest, for the reasons explained earlier. These dull color signals produced by low velocity movements corrupt the flow information and can be eliminated only if a low velocity filter is used. This filter is called the wall motion filter (WMF).
The WMF can be adjusted to various levels depending on the level of sound signals that needs to be eliminated to produce clear flow velocity signals. Larger vessels with high velocity flows have more arterial wall movement and require a higher WMF, whereas small vessels with low velocity signals have less arterial wall movement and require low wall filters. Using a higher wall filter for a vessel with low velocity blood flow will eliminate the slow flow information. This leads to a typical color flow signal with a color column seen in the center of the vessel and a black line on both sides between the vessel wall and the color column, similar to that produced by low gains (Fig. 12).

Balance
As the name suggests, this is a balancing tool between the two modalities: the B mode and the color Doppler. As discussed earlier, when the Doppler is switched on, the scanner's processing load is doubled, and the scanner therefore has to be told which of the two modalities should be given predominance and highlighted. This is decided by the balance. When color/power Doppler is switched on, a gray bar and a color bar appear on the left side of the screen, and on the gray bar is a green line (Fig. 16). This line indicates the balance adjustment. When the brightness of the gray scale on the image matches the brightness below the green line on the gray bar, the color predominates and the color filling is normal; but when the brightness of the gray scale image matches the brightness above the green line, the B mode predominates, and in these areas any color showing the flows will be patched with white (Fig. 17). Increasing the color gains is certainly not the answer to this problem.

Fig. 17: Color Doppler imaging showing color patched with white due to a low balance setting or high gains on B mode
Fig. 18: Optimum balance setting, or low B mode gains, shows normal color filling of vessels
Fig. 19: Power Doppler image with the spectral Doppler line and sample volume shown by the red circle
Fig. 20: Spectral Doppler image showing the flow spectrum with a small sample volume

Very importantly, when this happens, the correct thing to do is to set the balance higher, which allows the color pick-up even with a bright gray scale. However, the balance setting on many scanners is in the sub-menu of the color Doppler. This makes adjusting it clumsy, because when the operator is assessing flow in a relatively small vessel on the scan, opening the sub-menu and changing the balance is difficult. A practical solution is therefore to reduce the B mode gains, which matches the brightness of the image to a gray shade below the green line on the gray bar and allows good color pick-up (Fig. 18).

SETTINGS FOR PULSED WAVE DOPPLER
Sample Volume
The sample volume is the selected length of the vessel over which the flow is assessed. When the pulsed wave Doppler is switched on, a dotted line appears on the screen. This line is parallel to the sound beam and can be swept across the entire image. Two parallel short horizontal lines (an "=" sign) appear on this line (Fig. 19). This "=" sign can be moved up and down anywhere on the dotted line and is to be placed on the vessel in which the flow is to be measured. The distance between the two lines decides what length of the vessel will be evaluated for the flow assessment. If the vessel is not absolutely parallel to the sound beam (overlapping the dotted line), the distance between the two lines (the sample volume) should be equal to the diameter of the vessel. A sample volume smaller than the diameter will lead to errors in the velocity assessment, because it will not evaluate the flow across the entire stream (Fig. 20).
When that happens, correct velocity readings are not possible because, as is known, flow velocities in the central stream and at the sides are not the same. If the sample volume is larger than the diameter of the vessel, the wall movement of the vessel or flow information from neighboring vessels may corrupt the flow information (Fig. 21).

Gains
The gain settings on the pulsed wave Doppler should be such that a clear, well-defined, bold spectrum of blood flow is produced (Fig. 22). If the gains are too high, the flow information will be corrupted by a lot of noise (Fig. 23). If the gains are too low, the entire spectrum will appear scarce and scattered (Fig. 24).

Fig. 21: Spectral Doppler image with a large sample volume showing hazy margins of the spectrum with extra shadows (noise)
Fig. 22: Bold spectrum of uterine artery flow with optimum gain settings
Fig. 23: High gain setting of pulsed Doppler showing noise on the spectrum
Fig. 24: Low gain setting gives an ill-defined, blurred spectrum
Fig. 25: A high PRF setting for low velocity flow will decrease the systolic peak and the difference between systolic and diastolic flows
Fig. 26: A low PRF setting for high velocity flow on pulsed wave Doppler shows overshooting of the systolic flow and overlapping of systolic and diastolic flows: aliasing

PRF
As discussed earlier, the Nyquist frequency decides the maximum flow velocities that can be recorded by a sound wave of a certain frequency. The PRF is therefore adjusted according to the flow velocity to be assessed. If a high PRF is used for low velocity flow, it will not be possible to differentiate between the systolic and diastolic flows, as the systolic flow recordings will be subdued (Fig. 25). If a low PRF is selected for high velocity blood flows, there will be an overlapping of systolic and diastolic signals, known as aliasing (Fig. 26).
The correct PRF setting is therefore the one at which the spectrum fills two-thirds of the spectral area above the baseline (Fig. 27). When only minimal adjustment is required to achieve this, moving the baseline up or down can also serve the purpose.

Fig. 27: The correct PRF setting on pulsed Doppler, showing the spectrum filling two-thirds of the spectral area (above the baseline)
Fig. 28: Optimum wall filter setting showing the flow spectrum touching the baseline
Fig. 29: High wall filter on spectral Doppler showing a black line between the baseline and the spectrum

Wall Motion Filter
As in color and power Doppler, the function of the wall motion filter in pulsed Doppler is to eliminate signals from low velocity movements, chiefly so that the spectrum is not corrupted by wall motion or by flows in veins adjoining the artery. Again as in color and power Doppler, the settings are low for low velocity vessels and high for high velocity vessels. However, the wall filter setting on a pulsed Doppler spectrum is known to be correct only if the spectrum touches the baseline (Fig. 28). When there is a black line or a gap between the baseline and the spectrum (Fig. 29), the trace is not to be accepted, as this clearly indicates too high a wall filter for the case: because the filter eliminates low velocity information, it interferes with the diastolic flow information and may lead to a false diagnosis of absent end-diastolic flow and, naturally, to wrong interpretations. The spectral Doppler being a quantitative Doppler, the wall filter settings on this modality are given in numbers: 30, 60, 90 Hz, etc. As a rule, wall filters for gynecological and infertility assessment are set at the lowest value (30 Hz), and for fetal echocardiography they are set high (perhaps 90–120 Hz) depending on the fetal gestational age.
As discussed earlier, considering the equation that calculates blood flow velocity from the frequency of the incident sound beam, the frequency of the received sound beam, and the cosine of the angle of incidence: if the angle of incidence is 90°, then cos θ is 0 and the calculated velocity will be 0; and as the angle increases beyond 60°, the percentage error in the calculation becomes highly significant. The Doppler angle is therefore always set between 0° and 60°, preferably <30°. When the pulsed wave Doppler is switched on, the dotted line and the "=" sign appear. The Doppler angle can be considered or set at 0 when the vessel is parallel to the dotted line. This is oftentimes possible because the dotted line can be swept across the entire B mode image, and probe manipulation may also help in aligning the two. However, if it is still not possible, after achieving the smallest angle between the vessel and the dotted line, angle correction is used. This makes a short line deviate from the dotted line, and one tries to align this short line with the vessel (Fig. 30). The angle between the dotted line and the short line is then the Doppler angle. It is displayed on the screen or the touch pad of the scanner (Fig. 31). This angle should preferably be set at <30°; a maximum of 60° may be allowed. SETTING THE SPEED OF THE TRACE An ideal spectral trace has 4–5 cardiac cycles (Fig. 32) recorded on any one spectrum image. This can be achieved by scaling the time axis, or in simpler words, setting the speed of the trace. For most scans this is possible when the speed is set to 4 or 5. A higher speed gives a trace with too few cardiac cycles (Fig. 33) and a lower speed gives too many cardiac cycles on the trace (Fig. 34). Fig. 30: On angle correction on pulsed wave Doppler, a short line deviates from the dotted line, and one tries to align this short line with the vessel Fig.
31: A short line deviates from the dotted line and is aligned with the vessel; the angle between the dotted line and the short line is the corrected angle Fig. 32: Flow spectrum showing five waveforms (optimum speed) Fig. 33: Flow spectrum showing three waveforms due to higher speed Fig. 34: Lower spectral speed showing multiple waveforms on the spectral trace ARTIFACTS In spite of all these settings used to optimize the Doppler images, certain artifacts still cannot be completely eliminated. These are aliasing, mirror image artifact, and artifacts due to electrical interference. ALIASING When the Doppler frequency exceeds the Nyquist frequency, it results in aliasing. This is an overlapping of systolic and diastolic velocities across the baseline, on both sides of the spectrum. The effect is similar to what we have often observed in movies: the car wheels suddenly appear to start rotating in the opposite direction when the car speeds up. If the frequency of the oscillations is 5 Hz but the pulse repetition frequency is 2 per second, the signal samples this movement only twice a second; it not only misses the intermediate information but also interprets the flow as being in both directions. Adjusting the PRF to its optimum sorts out this problem. MIRROR IMAGE ARTIFACT A mirror image artifact is when a similar spectrum is seen on both sides of the baseline. This is especially possible when the sample volume is large and traces the flow in two vessels, or in two loops of the same vessel positioned side by side (Fig. 35). The second possibility is that a large sample volume is placed on the curve of a loop, so that in the proximal half of the loop the transducer observes the blood flow moving away from the probe, while in the distal half of the sample volume the flow is perceived as moving towards the probe. Decreasing the sample volume and placing it on one vessel only sorts out this problem. Fig.
35: Mirror image artifact seen on spectral Doppler ELECTRICAL INTERFERENCE These artifacts may appear as random signals on color, power, or spectral Doppler (Fig. 36), especially when the scanner shares the same electrical line as some high-voltage gadgets; the only way to get rid of them is to plan the electrical supply to the scanner wisely. SAFETY OF DOPPLER There is considerable apprehension about using Doppler among people who are aware of its possible ill effects, and a false sense of safety in those who are not. The two major effects of a sound wave as it passes through the human body are: THERMAL EFFECTS As the sound waves pass through the body tissues, energy is absorbed and ultrasound energy is transformed into heat. The energy absorption is minimal in fluid and maximal in bone. It also depends on the frequency of the ultrasound waves: absorption is higher with higher frequency waves and lower with low frequency waves. A temperature rise of up to 1°C is considered absolutely safe, whereas a rise of >2.5°C can lead to significant tissue damage. This thermal effect is measured as the thermal index, which is displayed on the screen. In general, a temperature rise of 2°C corresponds to a thermal index of 2. We know that a temperature rise of 1°C is safe; therefore, the thermal index should be limited to a maximum of 1. It is important to understand, though, that even at higher thermal indices damage occurs only after exposure for a certain period of time. Unfortunately, this time is difficult to define confidently. Moreover, since the energy-absorbing capacity of different tissues differs, the thermal indices for soft tissues (TIs) and for bone (TIb) are different.3 Fig.
36: Artifact due to electrical interference on spectral Doppler MECHANICAL EFFECT When a sound wave passes through body tissues, it causes the body's molecules to oscillate, resulting in a cavitating (low pressure) phase and a compressing (high pressure) phase. In the negative pressure, or cavitation, phase, large microbubbles are formed. Once the oscillations reach a certain level, a fluid medium incorporating gas microbubbles is set in motion; this is called microstreaming. It generates very strong pressures and leads to the bursting of cell membranes. The effect is pronounced when high frequency, high intensity ultrasound is aimed at a small focus.1 The second possible mechanism is as follows: existing microbubbles, or cells undergoing cavitation, inflate under the influence of negative pressure and implode abruptly. This takes microseconds but causes a sudden rise in temperature and pressure and results in tissue destruction. This is transient cavitation, and it occurs only when energy levels exceed certain thresholds. The threshold can be quantitatively documented as the mechanical index. The mechanical index (MI) is defined as the maximum estimated in situ rarefaction pressure, or maximum negative pressure (in MPa), divided by the square root of the frequency (in MHz). An MI of up to 0.3 can be considered safe, while an MI of more than 0.7 can lead to cavitation.4 CONCLUSION Doppler is a very useful modality for the assessment of circulation in the human body. Only correct settings on the scanner can give optimum results; it is therefore very important to understand the basic principles and settings of the ultrasound scanner before starting to use Doppler for the interpretation of vascular flows and information on oxygenation in the human fetus. Ultrasound and Doppler are generally safe modalities. Their safety is related to the frequency used and the length of exposure.
Therefore, the Doppler should not be used for a long time on a single focus, and the ALARA5 (as low as reasonably achievable) principle is now applied to all ultrasound scans. REFERENCES 1. Frey H. Physical and technical fundamentals of ultrasound and Doppler ultrasound. In: Sohn C, Voigt H-J, et al. (eds). Doppler Ultrasound in Gynecology and Obstetrics. Germany: Georg Thieme Verlag; 2004. 2. Burns PN. Principles of Doppler and colour flow. Radiol Med 1993;85(5 Suppl 1):3–16. 3. Duck FA. In: Wladimiroff JW, Eik-Nes SH (eds). Ultrasound in Obstetrics and Gynaecology. Elsevier; 2009. pp. 21–30. 4. The British Medical Ultrasound Society. Safety of ultrasound, www.bmus.org/public-info/pi-safety01.asp. 5. Auxier JA, Dickson HW. Guest editorial: Concern over recent use of the ALARA philosophy. Health Phys 1983;44(6):595–600. ________________________ © The Author(s). 2019 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by-nc/4.0/), which permits unrestricted use, distribution, and non-commercial reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
  Urine pH Urine pH is used to classify urine as either a dilute acid or base solution. Seven is the point of neutrality on the pH scale. The lower the pH, the greater the acidity of a solution; the higher the pH, the greater the alkalinity. The glomerular filtrate of blood is usually acidified by the kidneys from a pH of approximately 7.4 to a pH of about 6 in the urine. Depending on the person's acid-base status, the pH of urine may range from 4.5 to 8. The kidneys maintain normal acid-base balance primarily through the reabsorption of sodium and the tubular secretion of hydrogen and ammonium ions. Urine becomes increasingly acidic as the amount of sodium and excess acid retained by the body increases. Alkaline urine, usually containing bicarbonate-carbonic acid buffer, is normally excreted when there is an excess of base or alkali in the body. Secretion of an acid or alkaline urine by the kidneys is one of the most important mechanisms the body uses to maintain a constant body pH. A highly acidic urine pH occurs in: A highly alkaline urine occurs in: In people who are not vegetarians, the pH of urine tends to be acidic. A diet rich in citrus fruits, legumes, and vegetables raises the pH and produces urine that is more alkaline. Most of the bacteria responsible for urinary tract infections make the urine more alkaline because the bacteria split urea into ammonia and other alkaline waste products. The urine pH varies in different types of acidosis and alkalosis. Control of pH is important in the management of several diseases, including bacteriuria, renal calculi, and drug therapy. The formation of renal stones is related to the urine pH. Patients being treated for renal calculi are frequently given diets or medications to change the pH of the urine so that kidney stones will not form. Calcium phosphate, calcium carbonate, and magnesium phosphate stones develop in alkaline urine; when this occurs, the urine is kept acidic. 
Uric acid, cystine, and calcium oxalate stones precipitate in acidic urine; in this situation, the urine should be kept alkaline or less acidic than normal. Drugs such as streptomycin, neomycin, and kanamycin are effective in treating urinary tract infections if the urine is alkaline. During treatment with sulfa drugs, alkaline urine helps prevent formation of sulfonamide crystals. Here are important points to remember about urinary pH: Instant Feedback: Most bacterial urinary tract infections cause the urine to become more alkaline. TRUE or FALSE
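The pH ranges described above can be summarized in a small sketch. This is illustrative only; the function name and return strings are mine, and the 4.5–8 range and the 7.4 → 6 acidification example come from the text:

```python
def classify_ph(ph):
    """Classify a urine pH reading against the ranges given above.

    Urine pH normally spans roughly 4.5 to 8; 7 is the point of
    neutrality, below it is acidic, above it alkaline.
    """
    if not 4.5 <= ph <= 8.0:
        return "outside the usual urinary range"
    if ph < 7.0:
        return "acidic"
    if ph > 7.0:
        return "alkaline"
    return "neutral"

# The kidneys typically acidify the pH ~7.4 glomerular filtrate
# to about pH 6 in the urine:
print(classify_ph(6.0))   # acidic
```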
Event Calendar problem cumquat Community Member, 200 Posts 17 June 2009 at 2:40am Hi there, i upgraded a site today from 2.2.1 to 2.3.1 no problem, then i added the event calendar and that seemed to go ok. i can add a calendar no problem, the problem i'm getting is when i add a calendar event page it creates the page but every time i try and save it the cms says "Error saving content". it may be there are some logs somewhere telling me more but apart from that i have no idea why it's happening. i added the dataobject manager as i read it helps but still no luck. Granted not much to go on here but maybe someone has had something similar, it's the latest version of the event calendar, downloaded from the extensions page this morning. Regards Mick UncleCheese Forum Moderator, 4102 Posts 17 June 2009 at 3:24am In your _config.php, add: Debug::send_errors_to('[email protected]') Find out what the error is, and post it here. You can also use Firebug to read the response from the server. cumquat Community Member, 200 Posts 17 June 2009 at 5:03am Thanks for your speedy response and tip. below is the contents of the email.
Error: Couldn't run query: insert into `CalendarEvent_versions` SET `Recursion` = 0, `CustomRecursionType` = 0, `DailyInterval` = 1, `WeeklyInterval` = 1, `MonthlyInterval` = 1, `MonthlyRecursionType1` = 0, `MonthlyRecursionType2` = 0, `MonthlyIndex` = 1, `ClassName` = 'CalendarEvent', `Created` = null, `LastEdited` = null, `Start` = null, `End` = null, `Content` = null, `Title` = null, `StartTime` = null, `EndTime` = null, `Location` = null, `EventType` = null, `MonthlyDayOfWeek` = '0', `CalendarID` = '0', `RecordID` = 29, `Version` = 2, `AuthorID` = 1 Unknown column 'ClassName' in 'field list' At line 400 in /home/sites/sunrise-lettings.co.uk/public_html/sapphire/core/model/MySQLDatabase.php user_error(Couldn't run query: insert into `CalendarEvent_versions` SET `Recursion` = 0, `CustomRecursionType` = 0, `DailyInterval` = 1, `WeeklyInterval` = 1, `MonthlyInterval` = 1, `MonthlyRecursionType1` = 0, `MonthlyRecursionType2` = 0, `MonthlyIndex` = 1, `ClassName` = 'CalendarEvent', `Created` = null, `LastEdited` = null, `Start` = null, `End` = null, `Content` = null, `Title` = null, `StartTime` = null, `EndTime` = null, `Location` = null, `EventType` = null, `MonthlyDayOfWeek` = '0', `CalendarID` = '0', `RecordID` = 29, `Version` = 2, `AuthorID` = 1 Unknown column 'ClassName' in 'field list',256) line 400 of MySQLDatabase.php MySQLDatabase->databaseError(Couldn't run query: insert into `CalendarEvent_versions` SET `Recursion` = 0, `CustomRecursionType` = 0, `DailyInterval` = 1, `WeeklyInterval` = 1, `MonthlyInterval` = 1, `MonthlyRecursionType1` = 0, `MonthlyRecursionType2` = 0, `MonthlyIndex` = 1, `ClassName` = 'CalendarEvent', `Created` = null, `LastEdited` = null, `Start` = null, `End` = null, `Content` = null, `Title` = null, `StartTime` = null, `EndTime` = null, `Location` = null, `EventType` = null, `MonthlyDayOfWeek` = '0', `CalendarID` = '0', `RecordID` = 29, `Version` = 2, `AuthorID` = 1 | Unknown column 'ClassName' in 'field list',256) line 102 of 
MySQLDatabase.php MySQLDatabase->query(insert into `CalendarEvent_versions` SET `Recursion` = 0, `CustomRecursionType` = 0, `DailyInterval` = 1, `WeeklyInterval` = 1, `MonthlyInterval` = 1, `MonthlyRecursionType1` = 0, `MonthlyRecursionType2` = 0, `MonthlyIndex` = 1, `ClassName` = 'CalendarEvent', `Created` = null, `LastEdited` = null, `Start` = null, `End` = null, `Content` = null, `Title` = null, `StartTime` = null, `EndTime` = null, `Location` = null, `EventType` = null, `MonthlyDayOfWeek` = '0', `CalendarID` = '0', `RecordID` = 29, `Version` = 2, `AuthorID` = 1) line 418 of Database.php Database->manipulate(Array) line 117 of DB.php DB::manipulate(Array) line 833 of DataObject.php DataObject->write() line 642 of LeftAndMain.php LeftAndMain->save(Array,Form,HTTPRequest) line 228 of Form.php Form->httpSubmission(HTTPRequest) line 107 of RequestHandler.php RequestHandler->handleRequest(HTTPRequest) line 121 of RequestHandler.php RequestHandler->handleRequest(HTTPRequest) line 122 of Controller.php Controller->handleRequest(HTTPRequest) line 277 of Director.php Director::handleRequest(HTTPRequest,Session) line 121 of Director.php Director::direct(admin/EditForm) line 115 of main.php Avatar UncleCheese Forum Moderator, 4102 Posts 17 June 2009 at 8:05am This is a nasty problem. I've experienced it before -- never with EventCalendar, but with other stuff. I have no idea what makes Silverstripe see that phantom column in the Versions table. I wish I could remember how I fixed it. Avatar cumquat Community Member, 200 Posts 17 June 2009 at 9:01pm Ok well thanks for trying if you do rememebr or if anyone else out there knows please let me know as this is a bit above my Silverstripe skill level. Avatar cumquat Community Member, 200 Posts 22 June 2009 at 9:24pm Hi Guys don't suppose anyone has any ideas on this, i've now upgraded to 2.3.2 but thats hasn't helped. This is a live website and i'm kinda hoping that i'm not going to have to kill the whole thing and start again. 
Regards Mick UncleCheese Forum Moderator, 4102 Posts 23 June 2009 at 1:15am Can you try removing all the tables from the DB, removing the event_calendar folder, running a /dev/build, and starting over, maybe? cumquat Community Member, 200 Posts 23 June 2009 at 1:53am You sir, are a god... Cheers for that, i don't know why i didn't think of that, i guess i just assumed that if you took away a module and ran dev/build again it would clean/remove the tables. Many thanks again Mick
Talk 2 Health – Make your life a healthier one How to Utilize Cetyltrimethylammonium Bromide As A Germicide: The Ultimate Guide The mode of action of CTAB is similar to that of other quaternary ammonium compounds, such as benzalkonium chloride and alkyltrimethylammonium chloride. CTAB works by disrupting the cell membranes of bacteria and other microorganisms, causing them to leak and die. CTAB has been shown to be effective against a broad range of bacteria, including Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. It is also effective against viruses, fungi, and protozoa. Cetyltrimethylammonium bromide is generally considered to be safe and effective when used as directed. However, it can cause eye and skin irritation and should be used with caution in households with children or pets. Benefits of using cetyltrimethylammonium bromide as a germicide: • CTMAB has a wide range of applications and is effective against a variety of microorganisms, including bacteria, fungi, and viruses. • In addition, CTMAB is non-toxic to humans and animals, making it safe to use in both residential and commercial settings. • Furthermore, CTMAB is relatively inexpensive and easy to obtain. When used properly, CTMAB can help to keep surfaces clean and free of harmful microbes. As such, it is an important tool in the fight against illness and disease. How to use cetyltrimethylammonium bromide properly as a germicide: Cetyltrimethylammonium bromide (CTAB) is a quaternary ammonium compound that is commonly used as a germicide. It is effective against a wide range of bacteria, viruses, and fungi, making it a valuable tool for preventing the spread of infection. • CTAB can be used in a variety of ways, including as a disinfectant for surfaces and medical equipment, as a preservative for biological specimens, and as an antiseptic for wounds.
• When using CTAB as a germicide, it is important to follow the manufacturer's instructions carefully to ensure safety and efficacy. Where to find it: CTAB is used in a wide variety of applications, including cosmetics, detergents, emulsifiers, and cleaning agents. CTAB is also used as a disinfectant and sanitizer in hospitals and laboratories. It can be purchased from chemical supply companies or online retailers. Endnote: CTMAB is relatively safe to use, and it does not cause the development of resistance in bacteria. In addition, CTMAB is inexpensive and easy to obtain. For these reasons, CTMAB is often considered the best option for use as a germicide. However, there are some drawbacks to using CTMAB. For example, it can be toxic to human cells, and it may also cause skin irritation. CTMAB is not effective against all types of microorganisms. Nevertheless, CTMAB remains a popular choice for use as a germicide due to its efficacy and safety.
// @(#)root/gpad:$Id$ // Author: Rene Brun 12/12/94 /************************************************************************* * Copyright (C) 1995-2000, Rene Brun and Fons Rademakers. * * All rights reserved. * * * * For the licensing terms see $ROOTSYS/LICENSE. * * For the list of contributors see $ROOTSYS/README/CREDITS. * *************************************************************************/ #ifndef ROOT_TCanvas #define ROOT_TCanvas ////////////////////////////////////////////////////////////////////////// // // // TCanvas // // // // Graphics canvas. // // // ////////////////////////////////////////////////////////////////////////// #ifndef ROOT_TPad #include "TPad.h" #endif #ifndef ROOT_TAttCanvas #include "TAttCanvas.h" #endif #ifndef ROOT_TVirtualX #include "TVirtualX.h" #endif #ifndef ROOT_TString #include "TString.h" #endif #ifndef ROOT_TCanvasImp #include "TCanvasImp.h" #endif class TContextMenu; class TControlBar; class TBrowser; class TCanvas : public TPad { friend class TCanvasImp; friend class TThread; friend class TInterpreter; protected: TAttCanvas fCatt; //Canvas attributes TString fDISPLAY; //Name of destination screen Size_t fXsizeUser; //User specified size of canvas along X in CM Size_t fYsizeUser; //User specified size of canvas along Y in CM Size_t fXsizeReal; //Current size of canvas along X in CM Size_t fYsizeReal; //Current size of canvas along Y in CM Color_t fHighLightColor; //Highlight color of active pad Int_t fDoubleBuffer; //Double buffer flag (0=off, 1=on) Int_t fWindowTopX; //Top X position of window (in pixels) Int_t fWindowTopY; //Top Y position of window (in pixels) UInt_t fWindowWidth; //Width of window (including borders, etc.) UInt_t fWindowHeight; //Height of window (including menubar, borders, etc.) 
UInt_t fCw; //Width of the canvas along X (pixels) UInt_t fCh; //Height of the canvas along Y (pixels) Int_t fEvent; //!Type of current or last handled event Int_t fEventX; //!Last X mouse position in canvas Int_t fEventY; //!Last Y mouse position in canvas Int_t fCanvasID; //!Canvas identifier TObject *fSelected; //!Currently selected object TObject *fClickSelected; //!Currently click-selected object Int_t fSelectedX; //!X of selected object Int_t fSelectedY; //!Y of selected object TString fSelectedOpt; //!Drawing option of selected object TPad *fSelectedPad; //!Pad containing currently selected object TPad *fClickSelectedPad;//!Pad containing currently click-selected object TPad *fPadSave; //!Pointer to saved pad in HandleInput TCanvasImp *fCanvasImp; //!Window system specific canvas implementation TContextMenu *fContextMenu; //!Context menu pointer Bool_t fBatch; //!True when in batchmode Bool_t fUpdating; //!True when Updating the canvas Bool_t fRetained; //Retain structure flag Bool_t fUseGL; //!True when rendering is with GL // TVirtualPadPainter *fPainter; //!Canvas (pad) painter. static Bool_t fgIsFolder; //Indicates if canvas can be browsed as a folder private: TCanvas(const TCanvas &canvas); // cannot copy canvas, use TObject::Clone() TCanvas &operator=(const TCanvas &rhs); // idem void Build(); void CopyPixmaps(); void DrawEventStatus(Int_t event, Int_t x, Int_t y, TObject *selected); void RunAutoExec(); //Initialize PadPainter. 
void CreatePainter(); protected: virtual void ExecuteEvent(Int_t event, Int_t px, Int_t py); //-- used by friend TThread class void Init(); public: // TCanvas status bits enum { kShowEventStatus = BIT(15), kAutoExec = BIT(16), kMenuBar = BIT(17), kShowToolBar = BIT(18), kShowEditor = BIT(19), kMoveOpaque = BIT(20), kResizeOpaque = BIT(21), kIsGrayscale = BIT(22), kShowToolTips = BIT(23) }; TCanvas(Bool_t build=kTRUE); TCanvas(const char *name, const char *title="", Int_t form=1); TCanvas(const char *name, const char *title, Int_t ww, Int_t wh); TCanvas(const char *name, const char *title, Int_t wtopx, Int_t wtopy, Int_t ww, Int_t wh); TCanvas(const char *name, Int_t ww, Int_t wh, Int_t winid); virtual ~TCanvas(); //-- used by friend TThread class void Constructor(); void Constructor(const char *name, const char *title, Int_t form); void Constructor(const char *name, const char *title, Int_t ww, Int_t wh); void Constructor(const char *name, const char *title, Int_t wtopx, Int_t wtopy, Int_t ww, Int_t wh); void Destructor(); TVirtualPad *cd(Int_t subpadnumber=0); virtual void Browse(TBrowser *b); void Clear(Option_t *option=""); void Close(Option_t *option=""); virtual void Delete(Option_t * = "") { MayNotUse("Delete()"); } void DisconnectWidget(); // used by TCanvasImp virtual void Draw(Option_t *option=""); virtual TObject *DrawClone(Option_t *option="") const; // *MENU* virtual TObject *DrawClonePad(); // *MENU* virtual void EditorBar(); void EmbedInto(Int_t winid, Int_t ww, Int_t wh); void EnterLeave(TPad *prevSelPad, TObject *prevSelObj); void FeedbackMode(Bool_t set); void Flush(); void UseCurrentStyle(); // *MENU* void ForceUpdate() { fCanvasImp->ForceUpdate(); } const char *GetDISPLAY() const {return fDISPLAY.Data();} TContextMenu *GetContextMenu() const {return fContextMenu;}; Int_t GetDoubleBuffer() const {return fDoubleBuffer;} Int_t GetEvent() const { return fEvent; } Int_t GetEventX() const { return fEventX; } Int_t GetEventY() const { return fEventY; } 
Color_t GetHighLightColor() const { return fHighLightColor; } TVirtualPad *GetPadSave() const { return fPadSave; } void ClearPadSave() { fPadSave = 0; } TObject *GetSelected() const {return fSelected;} TObject *GetClickSelected() const {return fClickSelected;} Int_t GetSelectedX() const {return fSelectedX;} Int_t GetSelectedY() const {return fSelectedY;} Option_t *GetSelectedOpt() const {return fSelectedOpt.Data();} TVirtualPad *GetSelectedPad() const { return fSelectedPad; } TVirtualPad *GetClickSelectedPad() const { return fClickSelectedPad; } Bool_t GetShowEventStatus() const { return TestBit(kShowEventStatus); } Bool_t GetShowToolBar() const { return TestBit(kShowToolBar); } Bool_t GetShowEditor() const { return TestBit(kShowEditor); } Bool_t GetShowToolTips() const { return TestBit(kShowToolTips); } Bool_t GetAutoExec() const { return TestBit(kAutoExec); } Size_t GetXsizeUser() const {return fXsizeUser;} Size_t GetYsizeUser() const {return fYsizeUser;} Size_t GetXsizeReal() const {return fXsizeReal;} Size_t GetYsizeReal() const {return fYsizeReal;} Int_t GetCanvasID() const {return fCanvasID;} TCanvasImp *GetCanvasImp() const {return fCanvasImp;} Int_t GetWindowTopX(); Int_t GetWindowTopY(); UInt_t GetWindowWidth() const { return fWindowWidth; } UInt_t GetWindowHeight() const { return fWindowHeight; } UInt_t GetWw() const { return fCw; } UInt_t GetWh() const { return fCh; } virtual void GetCanvasPar(Int_t &wtopx, Int_t &wtopy, UInt_t &ww, UInt_t &wh) {wtopx=GetWindowTopX(); wtopy=fWindowTopY; ww=fWindowWidth; wh=fWindowHeight;} virtual void HandleInput(EEventType button, Int_t x, Int_t y); Bool_t HasMenuBar() const { return TestBit(kMenuBar); } void Iconify() { fCanvasImp->Iconify(); } Bool_t IsBatch() const { return fBatch; } Bool_t IsFolder() const; Bool_t IsGrayscale(); Bool_t IsRetained() const { return fRetained; } virtual void ls(Option_t *option="") const; void MoveOpaque(Int_t set=1); Bool_t OpaqueMoving() const { return TestBit(kMoveOpaque); } Bool_t 
OpaqueResizing() const { return TestBit(kResizeOpaque); } virtual void Paint(Option_t *option=""); virtual TPad *Pick(Int_t px, Int_t py, TObjLink *&pickobj) { return TPad::Pick(px, py, pickobj); } virtual TPad *Pick(Int_t px, Int_t py, TObject *prevSelObj); virtual void Picked(TPad *selpad, TObject *selected, Int_t event); // *SIGNAL* virtual void ProcessedEvent(Int_t event, Int_t x, Int_t y, TObject *selected); // *SIGNAL* virtual void Selected(TVirtualPad *pad, TObject *obj, Int_t event); // *SIGNAL* virtual void Cleared(TVirtualPad *pad); // *SIGNAL* virtual void Closed(); // *SIGNAL* void RaiseWindow() { fCanvasImp->RaiseWindow(); } virtual void Resize(Option_t *option=""); void ResizeOpaque(Int_t set=1); void SaveSource(const char *filename="", Option_t *option=""); void SavePrimitive(std::ostream &out, Option_t *option = ""); virtual void SetCursor(ECursor cursor); virtual void SetDoubleBuffer(Int_t mode=1); virtual void SetFixedAspectRatio(Bool_t fixed = kTRUE); // *TOGGLE* void SetGrayscale(Bool_t set = kTRUE); // *TOGGLE* *GETTER=IsGrayscale void SetWindowPosition(Int_t x, Int_t y) { fCanvasImp->SetWindowPosition(x, y); } void SetWindowSize(UInt_t ww, UInt_t wh) { fCanvasImp->SetWindowSize(ww, wh); } void SetCanvasSize(UInt_t ww, UInt_t wh); // *MENU* void SetHighLightColor(Color_t col) { fHighLightColor = col; } void SetSelected(TObject *obj); void SetClickSelected(TObject *obj) { fClickSelected = obj; } void SetSelectedPad(TPad *pad) { fSelectedPad = pad; } void SetClickSelectedPad(TPad *pad) { fClickSelectedPad = pad; } void Show() { fCanvasImp->Show(); } virtual void Size(Float_t xsizeuser=0, Float_t ysizeuser=0); void SetBatch(Bool_t batch=kTRUE); static void SetFolder(Bool_t isfolder=kTRUE); void SetPadSave(TPad *pad) {fPadSave = pad;} void SetRetained(Bool_t retained=kTRUE) { fRetained=retained;} void SetTitle(const char *title=""); virtual void ToggleEventStatus(); virtual void ToggleAutoExec(); virtual void ToggleToolBar(); virtual void 
ToggleEditor(); virtual void ToggleToolTips(); virtual void Update(); Bool_t UseGL() const { return fUseGL; } void SetSupportGL(Bool_t support) {fUseGL = support;} TVirtualPadPainter *GetCanvasPainter(); void DeleteCanvasPainter(); static TCanvas *MakeDefCanvas(); static Bool_t SupportAlpha(); ClassDef(TCanvas,8) //Graphics canvas }; #endif
In the previous articles of this series we looked, in turn, at sequential search based on unordered lists, binary search based on ordered arrays, balanced search trees, and red-black trees. The figure below shows their time complexities in the average and worst cases:

As can be seen, in terms of time complexity the red-black tree achieves lg N for insertion, search, and deletion in the average case.

Is there a data structure with even higher search efficiency? The answer is the hash table, which the rest of this article introduces.

What is a hash table

A hash table is a structure that stores data as key-value (key-indexed) pairs: given the key being searched for, we can look up its corresponding value.

The idea behind hashing is simple. If all keys are integers, we can implement the table with a plain unordered array: use the key as the index, and the element at that index is the corresponding value; this gives fast access to any key's value. That covers the case of simple keys; we then extend the idea to handle more complex key types.

A hash lookup has two steps:

1. Use a hash function to convert the key being searched for into an array index. Ideally, different keys would be converted into different index values, but in some cases multiple keys are hashed to the same index. The second step of a hash lookup is therefore to handle collisions.
2. Handle hash collisions. There are many ways to do this; later in this article separate chaining and linear probing are introduced.

The hash table is a classic example of a time-space trade-off. If there were no memory limit, we could use the key directly as the array index, and every lookup would take O(1) time. If there were no time limit, we could use an unordered array with sequential search, which needs very little memory. A hash table uses a moderate amount of time and space to strike a balance between these two extremes; by adjusting the hash function we can trade one for the other.

Hash functions

The first step of a hash lookup is to map the key to an index using a hash function. If we have an array of size M, we need a hash function that converts any key into an index within that array's range (0 to M-1). The hash function should be easy to compute and should distribute all keys uniformly. As a simple example, the last three digits of a phone number make a better key than the first three, because the first three digits repeat very often. Likewise, the birth-date digits of an ID number are better than its leading digits.

In practice, our keys are not all numbers; they may be strings, or combinations of several values, so we need to implement our own hash functions.

1. Positive integers

The most common way to hash a positive integer is modular hashing: for an array of prime size M, compute the remainder of k divided by M for any positive integer k. M is generally chosen to be a prime.

2. Strings

When a string is used as a key, we can also treat it as a large integer and apply modular hashing. We can take the value of each character of the string and hash them, for example:

    public int GetHashCode(string str)
    {
        char[] s = str.ToCharArray();
        int hash = 0;
        for (int i = 0; i < s.Length; i++)
        {
            hash = s[i] + (31 * hash);
        }
        return hash;
    }

This is Horner's method of computing a string's hash value; the formula is:

    h = s[0]·31^(L-1) + … + s[L-3]·31^2 + s[L-2]·31^1 + s[L-1]·31^0

For example, to get the hash value of "call": the character 'c' has Unicode value 99, 'a' is 97, and 'l' is 108, so the hash of "call" is

    3045982 = 99·31^3 + 97·31^2 + 108·31^1 + 108·31^0 = 108 + 31·(108 + 31·(97 + 31·99))

If taking the hash of every character would be too time-consuming, we can sample every Nth character to save time; for example, every 8-9 characters:

    public int GetHashCode(string str)
    {
        char[] s = str.ToCharArray();
        int hash = 0;
        int skip = Math.Max(1, s.Length / 8);
        for (int i = 0; i < s.Length; i += skip)
        {
            hash = s[i] + (31 * hash);
        }
        return hash;
    }

However, in some cases different strings produce the same hash value; this is the hash collision mentioned earlier. Consider the following four strings:

If we take the hash over every 8th character, we get the same hash value for all of them. So next we explain how to resolve hash collisions.

Avoiding hash collisions

Separate chaining (with linked lists)

With a hash function we can convert keys into array indices (0 to M-1), but when two or more keys have the same index value, we need a way to handle the collision.
A straightforward approach is to have each of the M array elements point to a linked list whose nodes store the key-value pairs that hash to that index. This is separate chaining. In the classic illustration, "John Smith" and "Sandra Dee" both hash to index 152; that index points to a linked list in which the two entries are stored one after the other.

The basic idea is to choose M large enough that all the lists stay as short as possible, so searches remain efficient. Searching a chained hash table takes two steps: first use the hash value to find the corresponding list, then search sequentially along that list for the key.

Below we reuse the unordered-linked-list symbol table, SequentSearchSymbolTable, introduced in an earlier post on symbol tables; you could equally use .NET's built-in LinkedList. We define the total number of lists and keep an internal array of SequentSearchSymbolTable instances, one stored at each index the keys map to:

public class SeperateChainingHashSet<TKey, TValue> : SymbolTables<TKey, TValue>
    where TKey : IComparable<TKey>, IEquatable<TKey>
{
    private int M; // hash table size
    private SequentSearchSymbolTable<TKey, TValue>[] st;

    public SeperateChainingHashSet() : this(997) { }

    public SeperateChainingHashSet(int m)
    {
        this.M = m;
        st = new SequentSearchSymbolTable<TKey, TValue>[m];
        for (int i = 0; i < m; i++)
        {
            st[i] = new SequentSearchSymbolTable<TKey, TValue>();
        }
    }

    private int hash(TKey key)
    {
        return (key.GetHashCode() & 0x7fffffff) % M;
    }

    public override TValue Get(TKey key)
    {
        return st[hash(key)].Get(key);
    }

    public override void Put(TKey key, TValue value)
    {
        st[hash(key)].Put(key, value);
    }
}

In this implementation:
• Get fetches the value for a given key: hash locates the key's index, i.e. the symbol table in the array that holds that element, and that table's Get then returns the value associated with the key.
• Put stores a key-value pair: hash computes the key's index, locating the symbol table in the array that stores that element, and the table's Put then stores the pair.
• hash computes the key's hash value: the bitwise AND with 0x7fffffff clears the sign bit, and modular hashing then maps the key into the range 0 to M-1, which is the index range of our array of tables.

When implementing a hash table based on separate chaining, the goal is to choose the array size M so that we neither waste memory on many empty lists nor waste search time on lists that are too long. A virtue of chaining is that this choice of M is not critical: if more keys are inserted than expected, searches just take a little longer than they would with a larger array; if fewer keys are inserted than expected, some space is wasted but searches are fast. We can also substitute a more efficient structure for the linked lists. So when memory is not tight, we can choose M large enough to make search time essentially constant; when memory is tight, choosing M as large as we can afford still speeds up search by a factor of roughly M.

Linear probing

Linear probing is one way that open addressing resolves hash collisions. The basic idea is to store N key-value pairs in an array of size M > N, using the empty slots of the array to resolve collisions, as the standard figure shows: compared with the chaining figure, "Ted Baker" alone hashes to 153, but 153 is occupied by "Sandra Dee". "Sandra Dee" and "John Smith" both hash to 152; when "Sandra Dee" was inserted, 152 was found to be occupied, so the probe moved down, found 153 free, and stored her there. "Ted Baker" then hashed to 153, found it occupied, probed down, found 154 free, and so the value was stored at 154.
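The chaining scheme just described can also be sketched compactly in Python (a minimal illustration with names of my choosing, not the SequentSearchSymbolTable-based code from the post):

```python
class ChainedHashTable:
    """Separate chaining: an array of M buckets, each a list of (key, value) pairs."""

    def __init__(self, m=97):
        self.m = m
        self.buckets = [[] for _ in range(m)]

    def _hash(self, key):
        # clear the sign bit, then modular hashing into 0..M-1
        return (hash(key) & 0x7FFFFFFF) % self.m

    def put(self, key, value):
        bucket = self.buckets[self._hash(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # existing key: overwrite the value
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # new key: append to this chain

    def get(self, key):
        for k, v in self.buckets[self._hash(key)]:
            if k == key:
                return v
        return None                      # search miss
```

The two-step structure mirrors the C# version: `_hash` picks the chain, and a sequential scan of that chain finds the key.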
The simplest open-addressing scheme is linear probing: when a collision occurs — that is, a key hashes to a slot already occupied by another key — simply check the next position in the table, i.e. increment the index by one. Such a linear probe has three possible outcomes:
1. Hit: the key at that position equals the search key.
2. Miss: the slot is empty.
3. Keep searching: the key at that position differs from the search key.

Implementing linear probing is also simple: we just need two arrays of the same size, one recording the keys and one the values.

public class LinearProbingHashSet<TKey, TValue> : SymbolTables<TKey, TValue>
    where TKey : IComparable<TKey>, IEquatable<TKey>
{
    private int N;      // total number of key-value pairs in the table
    private int M = 16; // size of the linear probing table
    private TKey[] keys;
    private TValue[] values;

    public LinearProbingHashSet()
    {
        keys = new TKey[M];
        values = new TValue[M];
    }

    private int hash(TKey key)
    {
        return (key.GetHashCode() & 0x7fffffff) % M;
    }

    public override TValue Get(TKey key)
    {
        for (int i = hash(key); keys[i] != null; i = (i + 1) % M)
        {
            if (key.Equals(keys[i]))
            {
                return values[i];
            }
        }
        return default(TValue);
    }

    public override void Put(TKey key, TValue value)
    {
        int i;
        for (i = hash(key); keys[i] != null; i = (i + 1) % M)
        {
            if (keys[i].Equals(key)) // existing key: overwrite with the new value
            {
                values[i] = value;
                return;
            }
        }
        // insert into the first empty slot found by the probe
        keys[i] = key;
        values[i] = value;
        N++;
    }
}

Although linear probing is simple, it has a problem: it causes clustering of keys with nearby hash values. Collisions that occur on insertion occur again on search.

Performance analysis

As we have seen, storing and searching in a hash table take two steps. The first, mapping the key to an array index via the hash function, can be regarded as constant time. The second is resolving collisions when they occur; for the two methods introduced above:

For separate chaining, search cost depends on the length of the lists. In general we should keep the lengths between M/8 and M/2; if a list grows longer than M/2 we can enlarge the table, and if lengths fall between 0 and M/8 we can shrink it. The same applies to linear probing, but dynamically resizing the array requires rehashing all the values and inserting them into the new table.

Whether we use chaining or probing, dynamically resizing the lists or array to improve search efficiency has a cost of its own that must be weighed: doubling the table triggers insertions that require a great deal of probing, and this amortized cost often needs to be taken into account.

Hash-collision attacks

If the hash function is chosen poorly, large numbers of keys can map to the same index. Whether collisions are resolved by chaining or by open addressing, subsequent lookups then require many probes or comparisons, and in many cases the hash table's search performance degenerates and is no longer constant time. The figure in the original post clearly shows such a degenerated hash table: a hash-table attack deliberately constructs keys so that, after hashing, they all map to the same index (or a few indices), degrading the hash table into a single linked list. Operations such as insert and search then degrade from O(1) to linked-list search, consuming large amounts of CPU and leaving the system unresponsive — achieving a denial of service (DoS). The "non-random" hash algorithms of several programming languages have given rise to exactly this kind of hash-collision DoS vulnerability, and the problem has appeared in ASP.NET as well.

The internal implementation of String hashing in .NET limits this problem with hash randomization: a threshold is placed on the number of collisions, and once it is exceeded the hash function is randomized — one way of preventing the table from degenerating. Below is the BCL implementation of the string type's GetHashCode; note that once collisions pass the threshold, conditional compilation switches on the randomized hash.

[ReliabilityContract(Consistency.WillNotCorruptState, Cer.MayFail),
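The probe loop just described (hit / miss / keep searching) can be sketched in Python as follows — a minimal fixed-size illustration with names of my choosing; it assumes the caller keeps the number of pairs N below M, since no resizing is implemented:

```python
class LinearProbingHashTable:
    """Open addressing: parallel key/value arrays of size M > N; probe (i+1) % M."""

    def __init__(self, m=16):
        self.m = m
        self.keys = [None] * m
        self.values = [None] * m

    def _hash(self, key):
        return (hash(key) & 0x7FFFFFFF) % self.m

    def put(self, key, value):
        i = self._hash(key)
        while self.keys[i] is not None:
            if self.keys[i] == key:      # hit: overwrite the existing value
                self.values[i] = value
                return
            i = (i + 1) % self.m         # occupied by another key: keep probing
        self.keys[i] = key               # miss: insert into the empty slot
        self.values[i] = value

    def get(self, key):
        i = self._hash(key)
        while self.keys[i] is not None:
            if self.keys[i] == key:
                return self.values[i]
            i = (i + 1) % self.m
        return None                      # reached an empty slot: search miss
```

With small integer keys, `hash(k) == k` in CPython, so keys 0 and 16 collide in a table of size 16 and the second one lands in the next free slot — exactly the "Sandra Dee"/"Ted Baker" situation from the figure.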
SecuritySafeCritical, __DynamicallyInvokable]
public override unsafe int GetHashCode()
{
    if (HashHelpers.s_UseRandomizedStringHashing)
    {
        return InternalMarvin32HashString(this, this.Length, 0L);
    }
    fixed (char* str = ((char*) this))
    {
        char* chPtr = str;
        int num = 0x15051505;
        int num2 = num;
        int* numPtr = (int*) chPtr;
        int length = this.Length;
        while (length > 2)
        {
            num = (((num << 5) + num) + (num >> 0x1b)) ^ numPtr[0];
            num2 = (((num2 << 5) + num2) + (num2 >> 0x1b)) ^ numPtr[1];
            numPtr += 2;
            length -= 4;
        }
        if (length > 0)
        {
            num = (((num << 5) + num) + (num >> 0x1b)) ^ numPtr[0];
        }
        return (num + (num2 * 0x5d588b65));
    }
}

The hash implementation in .NET

We can browse the online reference source for the implementation of .NET's Dictionary type. Whenever a value is added to a Dictionary under a key, the key's hash code is obtained first and then mapped to one of the buckets:

public Dictionary(int capacity, IEqualityComparer<TKey> comparer)
{
    if (capacity < 0) ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument.capacity);
    if (capacity > 0) Initialize(capacity);
    this.comparer = comparer ?? EqualityComparer<TKey>.Default;
}

If a capacity is passed when the Dictionary is constructed, the buckets are initialized right away by calling the Initialize method:

private void Initialize(int capacity)
{
    int size = HashHelpers.GetPrime(capacity);
    buckets = new int[size];
    for (int i = 0; i < buckets.Length; i++) buckets[i] = -1;
    entries = new Entry[size];
    freeList = -1;
}

Dictionary's Add method in turn calls the private Insert method:

private void Insert(TKey key, TValue value, bool add)
{
    if( key == null )
    {
        ThrowHelper.ThrowArgumentNullException(ExceptionArgument.key);
    }
    if (buckets == null) Initialize(0);
    int hashCode = comparer.GetHashCode(key) & 0x7FFFFFFF;
    int targetBucket = hashCode % buckets.Length;
#if FEATURE_RANDOMIZED_STRING_HASHING
    int collisionCount = 0;
#endif
    for (int i = buckets[targetBucket]; i >= 0; i = entries[i].next)
    {
        if (entries[i].hashCode == hashCode && comparer.Equals(entries[i].key, key))
        {
            if (add)
            {
                ThrowHelper.ThrowArgumentException(ExceptionResource.Argument_AddingDuplicate);
            }
            entries[i].value = value;
            version++;
            return;
        }
#if FEATURE_RANDOMIZED_STRING_HASHING
        collisionCount++;
#endif
    }
    int index;
    if (freeCount > 0)
    {
        index = freeList;
        freeList = entries[index].next;
        freeCount--;
    }
    else
    {
        if (count == entries.Length)
        {
            Resize();
            targetBucket = hashCode % buckets.Length;
        }
        index = count;
        count++;
    }
    entries[index].hashCode = hashCode;
    entries[index].next = buckets[targetBucket];
    entries[index].key = key;
    entries[index].value = value;
    buckets[targetBucket] = index;
    version++;
#if FEATURE_RANDOMIZED_STRING_HASHING
    if(collisionCount > HashHelpers.HashCollisionThreshold && HashHelpers.IsWellKnownEqualityComparer(comparer))
    {
        comparer = (IEqualityComparer<TKey>) HashHelpers.GetRandomizedEqualityComparer(comparer);
        Resize(entries.Length, true);
    }
#endif
}

First, the key's hash code is obtained and mapped to a target bucket by taking the remainder modulo the number of buckets; the linked list stored in that bucket is then traversed. If an entry with an equal key is found and add is true (i.e. a newly added key is not allowed to replace the value of an existing one), an exception is thrown; otherwise the existing value is replaced and the method returns. If no match is found, the new entry is placed in the target bucket; when free space runs out, the table is enlarged (Resize) and the entry is rehashed to its target bucket. Note that Resize is a comparatively expensive operation.

Summary

The preceding articles covered, in turn, sequential search based on unordered lists, binary search based on ordered arrays, balanced search trees, and red-black trees. This final article on searching introduced the hash table, along with hash functions and two methods of resolving hash collisions: separate chaining and linear probing. The original post closes with a figure comparing the worst- and average-case costs of each operation across all of these search algorithms. In practice, choosing the right data structure depends on the size of the data and the constraints on search efficiency, time, and space; I hope this article and the earlier ones in the series are helpful.
465186 Multiscale Coarse-Graining of Ionic Liquid Electrolytes to Deliver Accurate Dynamics and Transport Properties at the Mesoscale
Wednesday, November 16, 2016: 3:53 PM
Union Square 21 (Hilton San Francisco Union Square)
Sergiy Markutsya (1), John W. Lawson (2) and Justin B. Haskins (2); (1) Mechanical Engineering, University of Kentucky, Paducah, KY; (2) Thermal Protection Materials Branch, NASA Ames Research Center, Moffett Field, CA

The air transportation system is a major part of the United States and global economies. Electric aircraft, characterized by high energy efficiency, low emissions, and reduced noise, have been proposed to make the nation's air transportation system more efficient, safe, and sustainable. The application of new electrode materials together with alternative electrolytes based on ionic liquids (ILs) has the potential to enable safe, high-energy batteries. Ionic liquids are very attractive candidates for battery electrolytes because they have low volatility, moderate reactivity, low flammability, and a wider liquid range than most organic solvents. Computer modeling and simulation of ionic liquid systems with the molecular dynamics (MD) approach yields accurate predictions of a system's structure, dynamics, and thermodynamic properties. However, application of MD to IL systems at the mesoscale is limited, and in most cases prohibitive, due to the extremely high computational cost. As an alternative, coarse-grained molecular dynamics (CGMD) approaches may be used. In CGMD, the number of degrees of freedom in the system is significantly reduced by combining multiple atoms into a single coarse-grain (CG) particle. These approaches have been used successfully for accurate prediction of structure and thermodynamic properties. However, CGMD methods are not widely applied to IL systems because, without additional treatment, they do not predict dynamic properties accurately.
In this work, a new Probability Distribution Function Coarse-Grain (PDF-CG) method is applied to a system of ionic liquids to recover its dynamic properties. It is shown that the PDF-CG method accurately captures the dynamics of the IL system in addition to accurately predicting its structure and thermodynamic properties. The PDF-CG method may therefore be applied successfully to systems where an accurate representation of dynamics is essential, advancing computational capability up to the mesoscale.

Extended Abstract: File Not Uploaded
Dividing
Time Limit: 1000MS    Memory Limit: 10000K
Total Submissions: 75948    Accepted: 19921

Description
Marsha and Bill own a collection of marbles. They want to split the collection among themselves so that both receive an equal share of the marbles. This would be easy if all the marbles had the same value, because then they could just split the collection in half. But unfortunately, some of the marbles are larger, or more beautiful than others. So, Marsha and Bill start by assigning a value, a natural number between one and six, to each marble. Now they want to divide the marbles so that each of them gets the same total value. Unfortunately, they realize that it might be impossible to divide the marbles in this way (even if the total value of all marbles is even). For example, if there are one marble of value 1, one of value 3 and two of value 4, then they cannot be split into sets of equal value. So, they ask you to write a program that checks whether there is a fair partition of the marbles.

Input
Each line in the input file describes one collection of marbles to be divided. The lines contain six non-negative integers n1, . . . , n6, where ni is the number of marbles of value i. So, the example from above would be described by the input-line "1 0 1 2 0 0". The maximum total number of marbles will be 20000. The last line of the input file will be "0 0 0 0 0 0"; do not process this line.

Output
For each collection, output "Collection #k:", where k is the number of the test case, and then either "Can be divided." or "Can't be divided.". Output a blank line after each test case.

Sample Input
1 0 1 2 0 0
1 0 0 0 1 1
0 0 0 0 0 0

Sample Output
Collection #1:
Can't be divided.
Collection #2:
Can be divided.
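One standard way to decide each collection (a sketch of mine, not taken from the problem page, and omitting the input/output loop): check the parity of the total value, then run a bounded-knapsack subset-sum reachability test, here with binary splitting over a Python integer used as a bitset:

```python
def can_divide(counts):
    """counts[i] = number of marbles of value i+1. Return True iff the collection
    splits into two parts of equal total value."""
    total = sum((i + 1) * n for i, n in enumerate(counts))
    if total % 2:
        return False
    half = total // 2
    reachable = 1  # bitset: bit s is set <=> sum s is achievable
    for value, n in ((i + 1, c) for i, c in enumerate(counts)):
        k = 1
        while n > 0:                      # binary splitting of the n copies
            take = min(k, n)
            reachable |= reachable << (value * take)
            n -= take
            k *= 2
        reachable &= (1 << (half + 1)) - 1  # sums beyond half are never needed
    return bool((reachable >> half) & 1)

print(can_divide([1, 0, 1, 2, 0, 0]))  # Collection #1 -> False
print(can_divide([1, 0, 0, 0, 1, 1]))  # Collection #2 -> True
```

Binary splitting turns n copies of a value into pieces of sizes 1, 2, 4, ..., so every count from 0 to n is representable while the number of shift-or steps stays logarithmic in n; truncating the bitset to half+1 bits keeps the integers small for the 20000-marble limit.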
Answered: Proper fill for grid panel empty space beneath last row
by cmeans (Ext JS Premium Member)

ExtJS v4.1.1a

I have a grid panel. It displays a fixed number of items (though their height is somewhat variable). The grid panel has a fixed height (tall enough that all items display regardless of their height, with no scroll bar). I'm using a row-numbering column (first).

The problem is that there is a variable amount of white space between the bottom of the listed items and the paging control. Is there a reasonably simple way to get a cross-browser-friendly fill for the empty space that just looks like an empty row (first cell with the shading for a row-numbering column, etc.)? A bodyCls doesn't seem to work; the row-numbering fill is different depending on the browser (a gradient for recent browsers, but solid for older ones).

I'm looking for suggestions to try. Thanks.

-Chris

Answer: You can change the CSS of the body background. Have you thought of letting it auto-height?
Common Mistakes to Avoid When Generating an htpasswd File

Understanding the htpasswd File
The htpasswd file supports the basic authentication method used by Apache web servers. It stores usernames and their corresponding hashed passwords, controlling access to protected directories or pages. To generate an htpasswd file safely, you need a clear understanding of the process and of the common mistakes that can compromise the security of your system.

Choosing a Secure Password
One of the biggest mistakes when generating an htpasswd file is using weak or easily guessable passwords. To enhance security, choose a strong password that combines uppercase and lowercase letters, numbers, and special characters, and make it at least 8 characters long.

Not Encrypting Passwords
Another common mistake is storing the passwords in the htpasswd file without hashing (often loosely called "encrypting") them. Hashing prevents anyone who reads the file from recovering user credentials directly. Apache supports the MD5 (apr1) and bcrypt algorithms for storing passwords securely. Choose an appropriate algorithm and ensure that the passwords are properly hashed before adding them to the htpasswd file.

Keeping the htpasswd File Secure
It is crucial to maintain the confidentiality of the htpasswd file. Avoid placing it in publicly accessible directories or leaving it vulnerable to unauthorized access. Store the htpasswd file in a location that can only be accessed by authorized administrators, and set proper permissions to prevent unauthorized modification or viewing.

Not Regularly Updating Passwords
When maintaining an htpasswd file, it is essential to update passwords periodically to further enhance security.
Regularly changing passwords reduces the risk of unauthorized access and minimizes the impact of potential password leaks or breaches. Set a schedule to prompt users to update their passwords, and enforce password complexity requirements.

Ignoring User Authentication Levels
Apache supports different authentication levels, such as basic and digest. Basic authentication sends the password in plaintext over the network, while digest authentication sends a hashed version of the password. Using basic authentication without considering the security implications can lead to vulnerabilities. Evaluate your system requirements and choose the appropriate authentication level.

Forgetting to Back Up the htpasswd File
Accidents can happen, and it's important to be prepared. Forgetting to back up the htpasswd file can result in permanent loss of user credentials and restricted access to protected directories or pages. Regularly create backups of your htpasswd file and store them in a secure location. This precaution will help you recover from data loss or system failures.

Conclusion
Generating an htpasswd file is a vital step in securing your Apache web server, but avoiding common mistakes is equally important for the integrity and confidentiality of user credentials. By understanding the htpasswd file, choosing secure passwords, hashing them, keeping the file secure, updating passwords regularly, considering user authentication levels, and backing up the file, you can prevent potential security breaches and protect your system effectively.
What is the difference between list and tuple?
Posted by Jessica Taylor

The difference between a list and a tuple is that a list is mutable while a tuple is not. Because a tuple is immutable, it is hashable (as long as its elements are), so it can be used, for example, as a key in a dictionary.
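A short Python session makes the difference concrete (a minimal illustration of the points above):

```python
# Lists are mutable: they can be changed in place.
nums = [1, 2, 3]
nums[0] = 99
nums.append(4)
print(nums)            # [99, 2, 3, 4]

# Tuples are immutable: item assignment raises TypeError.
point = (1, 2)
try:
    point[0] = 99
except TypeError as e:
    print("tuples are immutable:", e)

# Because tuples (of hashable elements) are hashable, they can be dict keys.
grid = {(0, 0): "origin", (1, 2): "point"}
print(grid[(1, 2)])    # point

# Lists are unhashable, so they cannot be dict keys.
try:
    bad = {[0, 0]: "origin"}
except TypeError as e:
    print("lists are unhashable:", e)
```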
How can one get all possible combinations of elements of different arrays using File::Glob(bsd_glob)?
by supriyoch_2008 (Monk) on Apr 24, 2013 at 04:28 UTC (#1030274=perlquestion)

supriyoch_2008 has asked for the wisdom of the Perl Monks concerning the following question:

Hi Perl Monks,

I have three arrays in different text files i.e. k1.txt, k2.txt and k3.txt. My interest is to get all possible combinations of the array elements. I have written the script t.pl but it is showing wrong results. I am at my wit's end to correct the mistake. I request perl monks to look into my script and provide necessary suggestions for getting correct result. I have given the text files below:

Text file k1.txt:
A1T1
A2T3

Text file k2.txt:
C1G1
C1G2
C2G1

Text file k3.txt:
A1C1

Here goes the script t.pl:

#!/usr/bin/perl
use warnings;
use File::Glob(bsd_glob);
do {
  @array=@array1;
  print"\n\n Press 1 to Enter New File or 2 to Combine: ";
  $entry=<STDIN>;
  chomp $entry;
  ############################
  # Use of if Conditional:
  ############################
  if ($entry==1) {
    print"\n\n Enter New File Name (.txt): ";
    $filename = <STDIN>;
    chomp $filename;
    ################################
    # open the file, or exit:    ###
    ################################
    unless ( open(FILE, $filename) ) {
      print "Cannot open file \"$filename\"\n\n"; exit;}
    @DNA= <FILE>;
    close FILE;
    $DNA=join('',@DNA);
    push @array, $DNA;
    @array1=@array;} # Curly brace for entry1 ends:
  elsif ($entry==2) {
    @array1=@array; # Curly brace for entry2 starts
    $number=@array1;
    print"\n\n No.
of Elements in Joined Array: $number\n";
    print"\n Joined Array:\n";
    print @array1;
    # Use of foreach LOOP to view each element of joined array:
    $num=0;
    foreach my $element (@array1) {
      $num++;
      print"\n Array No.$num of the Joined Array:\n";
      print $element;
      print"\n";
      # Code to surround each element of joined array
      # followed by comma i.e. [ ],
      @element=split('',$element);
      $str1=sprintf '[%s],'x @element,@element;
      print"\n str1: $str1\n";
      push @ARRAY1,$str1;
    } # Curly brace for foreach ends:
    print"\n ARRAY:\n";
    print @ARRAY1;
    print"\n";
    # To produce all possible combinations of different elements:
    $combi=join('',map {'{'.join (',',@$_).'}'} @ARRAY1);
    @list=bsd_glob($combi);
    print"\n Results:\n";
    print"\n @list\n";
  } # Curly brace for Entry 2 ends:
} until ($entry==2); # Curly brace for do-until:
exit;

I have got wrong results in cmd as follows:

Microsoft Windows [Version 6.1.7600]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.

C:\Users\x>cd desktop

C:\Users\x\Desktop>t.pl

 Press 1 to Enter New File or 2 to Combine: 1
 Enter New File Name (.txt): k1.txt
 Press 1 to Enter New File or 2 to Combine: 1
 Enter New File Name (.txt): k2.txt
 Press 1 to Enter New File or 2 to Combine: 1
 Enter New File Name (.txt): k3.txt
 Press 1 to Enter New File or 2 to Combine: 2

 No.
of Elements in Joined Array: 3

 Joined Array:
A1T1
A2T3
C1G1
C1G2
C2G1
A1C1

 Array No.1 of the Joined Array:
A1T1
A2T3

 str1: [A],[1],[T],[1],[ ],[A],[2],[T],[3],[ ],[ ],[ ],[ ],[ ],[ ],[ ],[ ],[ ],

 Array No.2 of the Joined Array:
C1G1
C1G2
C2G1

 str1: [C],[1],[G],[1],[ ],[C],[1],[G],[2],[ ],[C],[2],[G],[1],[ ],[ ],[ ],[ ],[ ],[ ],[ ],

 Array No.3 of the Joined Array:
A1C1

 str1: [A],[1],[C],[1],[ ],[ ],[ ],[ ],[ ],[ ],[ ],[ ],

 ARRAY:
[A],[1],[T],[1],[ ],[A],[2],[T],[3],[ ],[ ],[ ],[ ],[ ],[ ],[ ],[ ],[ ],[C],[1],[G],[1],[ ],[C],[1],[G],[2],[ ],[C],[2],[G],[1],[ ],[ ],[ ],[ ],[ ],[ ],[ ],[A],[1],[C],[1],[ ],[ ],[ ],[ ],[ ],[ ],[ ],[ ],

 Results:
 {}

C:\Users\x\Desktop>

The correct results at the end should look like:

~A1T1C1G1A1C1~
~A1T1G1G2A1C1~
~A1T1C2G1A1C1~
~A2T3C1G1A1C1~
~A2T3CAG2A1C1~
~A2T3C2G1A1C1~

Re: How can one get all possible combinations of elements of different arrays using File::Glob(bsd_glob)?
by kcott (Chancellor) on Apr 24, 2013 at 08:37 UTC

G'day supriyoch_2008,

The builtin glob function handles this fine.

#!/usr/bin/env perl

use strict;
use warnings;

use Inline::Files;

my @file_data = map { [ do { local $/; <$_> } =~ /(\w+)/gm ] } *K1_TXT, *K2_TXT, *K3_TXT;
my $glob_string = join '' => map { '{' . join(',' => @$_) . '}' } @file_data;

print "$_\n" for glob $glob_string;

__K1_TXT__
A1T1
A2T3
__K2_TXT__
C1G1
C1G2
C2G1
__K3_TXT__
A1C1

Output:

A1T1C1G1A1C1
A1T1C1G2A1C1
A1T1C2G1A1C1
A2T3C1G1A1C1
A2T3C1G2A1C1
A2T3C2G1A1C1

However, if you want to use File::Glob, this code produces the same output.

#!/usr/bin/env perl

use strict;
use warnings;

use Inline::Files;
use File::Glob qw{bsd_glob};

my @file_data = map { [ do { local $/; <$_> } =~ /(\w+)/gm ] } *K1_TXT, *K2_TXT, *K3_TXT;
my $glob_string = join '' => map { '{' . join(',' => @$_) .
'}' } @file_data;

print "$_\n" for bsd_glob $glob_string;

__K1_TXT__
A1T1
A2T3
__K2_TXT__
C1G1
C1G2
C2G1
__K3_TXT__
A1C1

I would recommend you put 'use strict;' at the top of your code and fix all the problems it highlights; including bareword in 'use File::Glob(bsd_glob);' and a plethora of undeclared variables (@array, $combi, $number, and others). I also found your code hard to read; the main problem was indentation — take a look at perlstyle.

-- Ken

Re: How can one get all possible combinations of elements of different arrays using File::Glob(bsd_glob)?
by hdb (Monsignor) on Apr 24, 2013 at 05:36 UTC

Just a few hints as code snippets:

# declare string for glob
my $string = "";
# slurp in file into a string
$/='';
$DNA = <FILE>;
# remove \n at end
chomp($DNA);
# replace line endings with commata
$DNA =~ s/\n/,/;
# surround with braces
$DNA = "{$DNA}";
# $DNA should now be good for glob
$string .= $DNA;
# repeat for each file, don't forget to open and close

    # slurp in file into a string
    $/='';
    $DNA = <FILE>;
    # remove \n at end
    chomp($DNA);

You have set $/ to paragraph mode so it will not "slurp in file", just the first paragraph of the file. Also the value of $/ will affect what chomp removes (hint: paragraph mode removes more than just a single \n character).

    # replace line endings with commata
    $DNA =~ s/\n/,/;

That replaces a single newline. If you want to replace ALL newlines then:

# replace newlines with commas
$DNA =~ s/\n/,/g;

Or perhaps:

# replace newlines with commas
$DNA =~ tr/\n/,/;

All correct. I should have stated that I was only giving some hints rather than a functioning script.

UPDATE: The code below works hopefully and has no side effects hopefully?

use strict;
use warnings;

my $string = "";
my $DNA;
{
    local $/=undef;
    $DNA = <DATA>;
}
chomp($DNA);
$DNA =~ s/\n/,/g;
$DNA = "{$DNA}";
$string .= $DNA;
print $string;

__DATA__
C1G1
C1G2
C2G1
Research Article
Indications for renal fine needle aspiration biopsy in the era of modern imaging modalities
Department of Pathology, Virginia Commonwealth University Health System, Richmond, Virginia, USA
CytoJournal 2013;10:15. doi: 10.4103/1742-6413.115093

Licence: This is an open-access article distributed under the terms of the Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Disclaimer: This article was originally published by Medknow Publications & Media Pvt Ltd and was migrated to Scientific Scholar after the change of Publisher; therefore Scientific Scholar has no control over the quality or content of this article. Available free in open access from: http://www.cytojournal.com/text.asp?2013/10/1/15/115093

Abstract

Background: Renal fine needle aspiration biopsy (FNAB) has become an uncommon procedure in the era of renal helical computed tomography (CT), which has high diagnostic accuracy in the characterization of renal cortical lesions. This study investigates the current indications for renal FNAB. Having knowledge of the specific clinico-radiologic scenario that led to the FNAB, cytopathologists are better equipped to expand or narrow down their differential diagnosis.

Materials and Methods: All renal FNABs performed during a 6 year interval were retrieved. Indication for the procedure was determined from the clinical notes and radiology reports.

Results: Forty six renal FNABs were retrieved from 43 patients (14 females and 29 males with a mean age of 52 years [range, 4-81 years]). Twenty one cases (45.6%) were performed under CT-guidance and 25 cases (54.4%) under US-guidance.
There were four distinct indications for renal FNAB: (1) solid renal masses with atypical radiological features or poorly characterized on imaging studies due to lack of intravenous contrast or body habitus (30.2%); (2) confirmation of radiologically suspected renal cell carcinoma in inoperable patients (advanced stage disease or poor surgical candidate status) (27.9%); (3) kidney mass in a patient with a prior history of other malignancy (27.9%); and (4) miscellaneous (drainage of abscess, indeterminate cystic lesion, urothelial carcinoma) (14.0%). 36 patients (83.7%) received a specific diagnosis based on renal FNAB cytology. Conclusions: Currently, renal fine needle aspiration remains a useful diagnostic tool in selected clinico-radiologic scenarios. Keywords Fine needle aspiration biopsy imaging modalities indications radiology renal BACKGROUND Fine needle aspiration biopsy (FNAB) is a safe, rapid and widely accepted procedure to sample a mass lesion. At our institution, however, we have observed that renal FNAB has become an uncommon procedure, despite the fact that the number of FNABs performed on other deeply seated abdominal organs has been increasing steadily. This trend has been observed by other institutions as well.[1] This disparity is due to the introduction of dedicated renal helical computed tomography (CT), which is the contemporary modality of choice for detection of suspected renal masses and for characterization of known renal tumors.[23] Owing to the high diagnostic accuracy of renal helical CT, treatment is routinely implemented based on radiologic findings alone without the need for pathologic confirmation. 
Renal helical CT has 100% sensitivity for detection of all renal lesions and 95% specificity in identifying renal cell carcinomas.[24] Considering the success of renal helical CT, it would appear that renal FNAB cannot significantly improve on the excellent diagnostic accuracy of cross-sectional imaging modalities and is unlikely to influence the clinical management.[56] However, we continue to receive requests for renal FNAB, albeit sporadically. The objective of this study is to identify the indications for performing a renal FNAB at our institution and to determine if there is still a role for this procedure in the era of modern renal imaging modalities. Having knowledge of the specific clinico-radiologic scenario that led to the FNAB, we as practicing cytopathologists are better equipped to expand or narrow down our differential diagnosis, better prepared to request material for ancillary studies and can thus better serve our clinician colleagues and ultimately the patient. MATERIALS AND METHODS All renal FNABs performed at our institution between January 1, 2005 and December 31, 2010 were retrieved through a computerized search. For each case, the following information was obtained from the pathology and radiology reports: demographic data, cytologic diagnosis, surgical excision follow-up, tumor size and laterality, method of sample collection (ultrasound [US]-guided versus CT-guided), radiologic description of the mass including radiologic impression and differential diagnosis. Clinical notes available in the electronic medical records were reviewed to identify pertinent patients’ histories of other relevant medical conditions (prior or concomitant history of malignancy, end-stage renal disease, dialysis treatment and duration). In each case, the indication to perform the renal FNAB was determined from the radiology reports or clinical notes. 
The kidney masses were sampled by fine needle aspiration (FNA) using 22 gauge needles and/or core needle biopsy (CNB) using 18-20 gauge biopsy needles. The aspirated material was used for air-dried and alcohol-fixed smear slides and the needle was then rinsed in RPMI solution for cell block. The material from the CNB was touched on glass slides for imprints, before being fixed in 10% neutral buffered formalin. In our institution, cases with both FNA and CNB material are processed under the same accession number. For all the cases, a cytopathologist was present on site at the time of procedure to assess the adequacy of the material on air-dried Diff-Quik® stained slides. Immunohistochemical stains were performed on needle rinse cell block or CNB in selected cases. To determine the material adequacy for this study, aspiration smear slides, needle rinse cell blocks, touch imprint slides or core needle biopsies were reviewed in all cases by both authors. Cases with non-diagnostic cytologic interpretation were analyzed to determine if this result was due to technical failure (acellular sample or insufficient number of cells to make a definitive diagnosis) or sampling error (only benign renal elements, glomeruli and/or tubules, present).[78] RESULTS Forty six renal FNABs from 43 patients were retrieved during the 6 year period. By comparison, 229 nephrectomies for tumor were performed in the same time interval at our institution. The study group consisted of 14 females and 29 males with age ranging from 4 years to 81 years (mean 52 years). Twenty one cases (45.6%) were performed under CT-guidance and 25 cases (54.4%) under US-guidance. The right and left kidney were equally sampled. Size of the lesions ranged from 1.0 cm to 19.0 cm (mean 6.0 cm). In one case, the exact size of the kidney mass was not specified in the radiology report. The clinical and radiologic indications to perform the renal FNAB are summarized in [Table 1] in four distinct categories. 
Three clinico-radiologic scenarios, each in roughly equal proportion, comprised 86% of all indications, while the fourth category represented only a minority of cases. Table 1: Clinical and radiologic indications for performing renal FNAB in 43 patients Solid kidney masses with atypical or poorly characterized radiologic features Solid renal masses with atypical or poorly characterized radiologic features represented the largest group (15 cases in 13 patients, 30.2%). These masses either had an atypical radiologic appearance raising a broad differential diagnosis or could not be adequately characterized on imaging studies due to the lack of intravenous contrast (in patients with chronic renal disease) or ring artifact secondary to the patient's large body habitus. Radiologic features considered atypical were: poor margination of the lesion, questionable involvement of the adrenal gland (raising the possibility of an adrenal neoplasm), atypical pattern of enhancement and vascularity (raising the possibility of oncocytoma or angiomyolipoma with minimal fat), presence of extrarenal extension into the retroperitoneum (sarcoma or lymphoma were in the differential diagnosis), fat-containing mass with cystic degeneration and retroperitoneal extension (angiomyolipoma versus retroperitoneal liposarcoma) or tumor involvement of the inferior vena cava (leiomyosarcoma versus renal cell carcinoma with tumor thrombus). As indicated in [Table 1], nine patients (69.2%) received the following diagnoses based on the renal FNAB: renal cell carcinoma [Figure 1], leiomyosarcoma, metanephric adenoma [Figure 2], malignant B-cell lymphoma of follicular center cell origin and neuroblastoma. The FNAB material was non-diagnostic in four cases (30.8%) and no further pathologic follow-up was available. Two patients had repeat FNAB for non-diagnostic samples, one obtained under CT-guidance and the other under US-guidance.
The repeat procedure (performed using a different approach) again yielded non-diagnostic material and no pathologic follow-up was available. Figure 1: Medullary renal cell carcinoma. A cluster of cohesive malignant cells displaying a high nuclear-to-cytoplasmic ratio and eccentrically placed pleomorphic nuclei with coarse chromatin. Note the mitotic figure and the neutrophils in the background (Diff-Quik stain, ×400) Figure 2: Metanephric adenoma. The aspirate was very cellular, with uniform, hyperchromatic tumor cells with a scant amount of cytoplasm arranged in tight clusters and tubules. The background is clean with no necrosis, mitotic figures, or apoptotic bodies. Tumor cells were positive for WT-1 and CD57 and negative for CK7 (Diff-Quik stain, ×400) Confirmation of the suspected radiologic diagnosis of renal cell carcinoma in inoperable cases Another significant group of patients (27.9%) required pathologic confirmation of the suspected radiologic diagnosis of renal cell carcinoma. The indications for this pathologic confirmation were advanced-stage disease or poor surgical candidate status. Eight patients had advanced disease at the time of renal mass diagnosis, with multiple lung nodules (five patients), multiple brain lesions consistent with metastases (one patient), liver, adrenal gland and bone metastases (one patient) and lymphadenopathy (one patient). Four patients had kidney-confined disease but were not good surgical candidates due to repeated cerebrovascular accidents (one patient), significant heart disease with congestive heart failure, atrial fibrillation and a permanent demand ventricular pacer (one patient) and long-standing hemodialysis while awaiting a kidney transplant, with multicystic kidney disease of dialysis and multiple kidney masses (two patients). The diagnosis of renal cell carcinoma was confirmed by FNAB in 11 patients (91.6%).
One case was non-diagnostic; this patient was an 81-year-old female with a 5.6 cm left kidney mass with renal vein extension and multiple lung nodules. Kidney mass in a patient with a prior history of other malignancy A prior history of other malignancy in a patient diagnosed with a kidney mass represented another common indication for renal FNAB. Twelve patients (27.9%) had a history of other malignancy: non-Hodgkin lymphoma (four patients), non-small cell lung carcinoma (four patients), soft-tissue sarcoma (one patient), glioblastoma multiforme (one patient), hepatocellular carcinoma (one patient) and anaplastic carcinoma arising in an ovarian mucinous tumor of low malignant potential (one patient). Eight of these patients had a single renal mass, for which the radiologic impression was: primary renal neoplasm in four patients (three renal cell carcinomas and one urothelial carcinoma), favor metastasis in one patient, and in three patients a differential diagnosis including metastatic disease versus renal cell carcinoma was given without favoring one particular diagnosis. The other four patients had multiple, bilateral kidney masses, and metastatic disease was favored in all of them, although in two patients the possibility of infection was raised in the differential diagnosis. Cytologic diagnosis by renal FNAB confirmed the presence of metastasis in 5 cases (41.6%) (non-small cell lung carcinoma [Figure 3], soft-tissue sarcoma, anaplastic carcinoma arising in an ovarian mucinous tumor of low malignant potential and renal involvement by systemic lymphoma), primary renal cell carcinoma in 5 cases (41.6%), and was non-diagnostic in 2 cases (16.8%).
For one patient with a prior history of non-small cell lung carcinoma and a non-diagnostic (benign renal elements only) FNAB, there was no pathologic follow-up for the kidney lesion; however, an abdominal CT scan repeated 2 months after the biopsy showed enlargement of the mass with filling of the renal pelvis and ureter to the level of the ureterovesical junction and severe hydronephrosis, consistent with urothelial carcinoma. The patient with a concomitant diagnosis of non-Hodgkin lymphoma and a non-diagnostic (acellular debris only) FNAB had a 1.8 cm enhancing solid renal mass with no uptake on PET scan; the mass had been stable for 2 years. Two months after the renal FNAB, the patient underwent a partial nephrectomy that revealed an organized hemorrhagic cyst. Figure 3: Metastatic lung squamous cell carcinoma. Malignant cells with distinct cell borders, a moderate amount of dense cytoplasm and significant nuclear atypia. Tumor cells were positive for CK5 and P63. The patient had a known history of lung squamous cell carcinoma (Papanicolaou stain, ×600) Miscellaneous A few other clinico-radiologic scenarios without a commonality, which cannot easily be grouped with the other categories, were less frequently encountered. These scenarios were: renal abscess drainage under US-guidance (two patients), indeterminate complex cystic lesions (three patients) and suspicion of urothelial carcinoma (one patient). It is unusual to sample a renal pelvis urothelial carcinoma by FNAB; however, this patient was a 57-year-old male with hematuria and a filling defect in the left upper calyx identified on retrograde pyelogram, in whom the traditional sampling methods (ureteroscopy with biopsy, washing and brushing cytology) had been non-diagnostic. Cytologic work-up of cases Twenty-one cases (45.6%) had combined FNA and CNB performed during the same procedure, 19 cases (41.3%) were obtained by FNA only and in six cases (13.1%) only CNB was performed.
Adequate diagnostic material was obtained in 37 FNAB cases (80.4%). [Table 2] summarizes the distribution of adequate material for interpretation based on the type of procedure. As the table shows, combined FNA and CNB procedures resulted in higher material adequacy than FNA alone. The addition of CNB material to the FNA material was contributory in four cases, raising the adequacy from 61.9% (13 cases with diagnostic material) to 80.9% when the two methods were used in combination. Table 2: Adequacy of 46 renal FNAB based on the type of procedure (FNA versus CNB) We observed, however, a high rate of non-diagnostic specimens (19.6%, consisting of nine cases from seven patients). Further investigation into these non-diagnostic cases revealed that three of them (33.3%) were due to sampling error [Figure 4], all obtained via US-guidance. The remaining six non-diagnostic cases were due to technical failure (acellular debris, cyst fluid with macrophages, rare atypical cells of unknown significance, or rare spindle cells), with 80% of these cases obtained via CT-guidance. Figure 4: Normal renal glomerulus. Cellular group with distinctly lobulated contours. Although the cells are bland, the cellularity and three-dimensional architecture may create confusion with a neoplasm (Diff-Quik stain, ×200) The number of cases exceeds the total number of patients in the study because three patients had repeat FNAB: one patient had US-guided FNAB of two different abscesses in the same kidney, another had FNAB of both kidneys and the third had repeat FNAB of the same lesion. In 13 cases (28.2%), additional work-up included immunohistochemical studies, in the majority of situations (76.9%) using the CNB material rather than the needle rinse cell block. DISCUSSION This study has identified three clinico-radiologic indications for performing renal FNAB that stand out as the most commonly encountered in our current practice.
Two indications are known and well-documented in the literature: (1) to confirm the radiologically suspected diagnosis of renal cell carcinoma in patients with advanced-stage disease or poor surgical candidates and (2) to evaluate a kidney mass in patients with a prior history of other malignancy.[8,14] However, a third indication identified in our study, FNAB of solid kidney masses with atypical or poorly characterized radiologic features, is only tangentially mentioned in some studies.[11,12,13,15] Although it is not well-recognized in the literature, it deserves to be acknowledged, as it seems to be as common as the more traditional indications. For example, in their study of 31 solid renal masses, Garcia-Solano et al. had four (12.9%) radiologically indeterminate lesions, which on total nephrectomy all proved to be renal cell carcinoma.[1] Likewise, Caoili et al., studying the utility of sonographically guided percutaneous core biopsy in 26 patients, had 5 renal masses (19.2%) with radiologically atypical features causing diagnostic uncertainty.[12] One of these patients was diagnosed with renal cell carcinoma; for the other four patients the specific diagnosis was not mentioned, but it is presumed to have been benign. In another study on the utility of FNAB in 43 solid renal masses, Kelley et al. identified a subgroup of 14 patients (33%) with radiographically problematic lesions.[15] Nine of these patients had renal cell carcinoma, three had a benign lesion (renal cyst, fibromatosis, pyelonephritis) and two had a non-renal malignancy. In our study, in the group of solid atypical kidney masses (30.2%), four of the nine patients (44.4%) with diagnostic cytology had a common type of renal cell carcinoma. The other four adult patients (44.4%) had uncommon diagnoses, ranging from benign (metanephric adenoma) to unusual malignant tumors (medullary-type renal cell carcinoma, leiomyosarcoma of the renal vein, malignant B-cell lymphoma).
Atypical solid kidney masses present a challenge for definitive radiologic characterization for several reasons. First, a solid kidney mass may be classified as radiologically atypical due to questionable involvement of adjacent structures (adrenal gland, inferior vena cava, retroperitoneum). Second, features intrinsic to the mass (margination, vascularity, enhancement pattern) may appear radiologically atypical. Third, the mass may be only incompletely or partially evaluated radiologically. This last situation was encountered in our study in patients with chronic renal failure who had solid kidney masses and in whom intravenous contrast was contraindicated. In these patients, magnetic resonance imaging could not be used as an alternate imaging modality because gadolinium-based contrast agents have been implicated in nephrogenic systemic fibrosis when used in patients with renal failure.[2] Therefore, there is a group of patients with chronic renal disease on hemodialysis who are at risk of developing multiple, bilateral renal cell carcinomas, but who cannot be completely evaluated by cross-sectional renal imaging modalities. Likewise, we encountered a patient with an incompletely characterized solid kidney mass due to artifacts created by a large body habitus. Overall, these data indicate that in this specific clinico-radiologic scenario the radiologic differential diagnosis is broad and the final cytologic diagnosis can range from benign entities to unusual malignant tumors. As practicing cytopathologists, we need to be aware of this distinct indication for renal FNAB and the associated broad differential diagnosis so that we are prepared to obtain material for any needed ancillary studies. In our study, using dedicated helical CT with renal protocol, radiologic imaging identified 12 patients (27.9%) with renal masses suspected to be renal cell carcinoma.
Of these, 91.6% were confirmed on cytology to be renal cell carcinoma, with one patient having a non-diagnostic sample. As practicing pathologists interpreting an FNAB from a radiologically suspected renal cell carcinoma, we are more likely to confirm this impression cytologically than to diagnose a completely different entity. It is interesting to observe that this more traditional indication for renal FNA (i.e., to confirm the diagnosis in radiologically suspected advanced cases of renal cell carcinoma) has decreased in frequency. In the aforementioned study of Kelley et al. from more than a decade ago, 67% of the patients had inoperable disease (27 patients with stage IV renal cell carcinoma and 2 patients with non-renal malignancies).[15] This is in great contrast with our study, in which only 18.6% of patients had advanced-stage renal cell carcinoma. A possible explanation is that currently, with the routine use of contrast-enhanced helical abdominal CT scans in general medical practice, approximately 70% of renal cell carcinomas are discovered incidentally[2,3,16] and are presumably likely to be low stage. It is useful to understand how dedicated helical CT with renal protocol is so effective in the diagnosis of renal cell carcinoma, so that we may better understand the situations that still require FNAB of a kidney mass. Dedicated helical CT with renal protocol registers multiple features of a kidney mass (presence of fat and calcifications, baseline density in Hounsfield units, consistency [solid versus cystic], vascularity, enhancement, margins, involvement of anatomic structures) through a series of pre-contrast (unenhanced) and post-contrast (enhanced) images collected at specific time intervals after intravenous contrast administration.
Using strict CT criteria to interpret and integrate all these tumor features, high accuracy (100% sensitivity and 95% specificity) in the characterization of renal cortical tumors is achieved.[2,4,16,17] It is important to point out that routine abdominal CT scans are obtained only during the very early phase of renal enhancement and the later critical images are not recorded.[2,3,16] These later images are particularly useful for the detection of small, centrally located (medullary) masses.[2,3,17] In the past, radiologically indeterminate partially cystic lesions were a common indication for renal FNAB; however, with the advances in cross-sectional imaging modalities, the vast majority of these lesions are currently not aspirated.[11] This trend is also reflected in our study, where indeterminate complex cystic lesions represented only a minority of cases (6.9%). Our study confirms the well-recognized role of FNAB in the evaluation of suspected kidney metastases, which represented 27.9% of all indications for renal FNAB. Metastases were confirmed in 11.6% of patients.
In general, metastases to the kidney are common, with an incidence ranging from 8% to 13% in autopsy studies;[14,18,19] however, they are rarely clinically evident, as the majority of patients retain normal renal function.[19] In a large study of 261 renal FNAs, the overall incidence of metastasis to the kidney was 11%, with the most common primary tumor being lung carcinoma followed by malignant lymphoma.[14] Other reported primary tumors metastatic to the kidney are less common: hepatocellular carcinoma; carcinomas of the breast, pancreas, gastrointestinal tract and cervix;[14] malignant melanoma and renal cell carcinoma of the opposite kidney.[18,21] Renal FNAB is still required in patients with a prior history of other malignancies if there is a discrepancy between the clinical presentation and CT findings.[18,19] In general, metastatic lesions to the kidney are multiple, bilateral, hypovascular, small, nodular, contained within the kidney contour and homogeneous.[18,19,20,22] Renal lesions with this appearance in a patient with a prior history of other malignancy should be considered metastases until proven otherwise.[20] On the other hand, primary renal cell carcinomas are solitary, large, exophytic, bulging out from the kidney contour, hypervascular and heterogeneous.[18,19] However, in reality, there is an ample range of CT appearances for lesions metastatic to the kidney,[21] leading to several important radiologic interpretation problems, specifically distinguishing them from lymphoma, bilateral renal cell carcinoma, multiple renal infarcts or multiple areas of renal or perirenal inflammation or infection.[18,21] Although the clinical presentation and evolution can help in distinguishing these possibilities, there are situations in which an FNAB is still needed to establish the proper diagnosis, such as: (1) a large solitary renal mass and no evidence of metastatic disease elsewhere or (2) multiple renal lesions that on CT do not have the typical appearance of metastases as described above.[20]
Another indication for renal FNAB mentioned in the literature is to confirm a malignant diagnosis before performing percutaneous ablation of a kidney mass.[10,16,23] Radiofrequency ablation and cryoablation have recently been introduced into clinical practice as treatment modalities for renal masses, with the advantage of preserving renal function. However, long-term comparison data with the standard-of-care treatment (partial or total nephrectomy) are still being collected,[16] and we do not have any experience with ablative techniques at our institution. It is evident from our study, and also from the prior literature, that if sufficient diagnostic material is procured at the time of the procedure, renal FNAB is very useful in clarifying certain clinico-radiologic dilemmas. The rate of non-diagnostic specimens in our study was 19.6%, which is similar to that reported by previous studies, ranging from 6% to 20%, even when a cytotechnologist is present on site for adequacy assessment.[13,23] The non-diagnostic samples may be due to technical failure (acellular sample or insufficient cells to make a conclusive diagnosis) or sampling error (aspirating only adjacent benign renal epithelial cells).[7] It has been suggested by some authors that the non-diagnostic rate can be improved by using CNB and FNA in combination[24] or by using larger gauge needles.[7,12,25] This is our experience as well: when FNA and CNB are used concomitantly in the same case, the adequacy rate increases from 61.9% to 80.9%. Given the adequacy of CNB-only cases (100%), obtaining CNB material is preferable if molecular diagnostic tests to subtype renal tumors will be performed.
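The adequacy percentages reported above can be rechecked directly from the case counts given in the text. The following is a minimal sketch (not part of the original study's analysis); all counts are taken from the Results section, and the slight difference on the combined rate (81.0% by rounding versus the reported 80.9%) reflects truncation in the published figure.

```python
def pct(part, whole):
    """Percentage rounded to one decimal place."""
    return round(100.0 * part / whole, 1)

# Case counts as stated in the Results section
total_cases = 46            # renal FNABs in the study
adequate_cases = 37         # cases with adequate diagnostic material
combined_cases = 21         # procedures with both FNA and CNB
diagnostic_on_fna_alone = 13   # combined cases diagnostic on FNA material alone
diagnostic_with_cnb = 17       # combined cases diagnostic once CNB was added

overall_adequacy = pct(adequate_cases, total_cases)                 # 80.4%
non_diagnostic_rate = pct(total_cases - adequate_cases, total_cases)  # 19.6%
fna_alone_rate = pct(diagnostic_on_fna_alone, combined_cases)       # 61.9%
combined_rate = pct(diagnostic_with_cnb, combined_cases)            # 81.0% (reported as 80.9%)
```

The four contributory CNB cases (17 − 13) account for the entire improvement in the combined group.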
CONCLUSION In conclusion, renal FNAB at our institution is currently most commonly used to: (1) evaluate solid kidney masses with atypical or poorly characterized radiologic features; (2) confirm the radiologically suspected diagnosis of renal cell carcinoma in inoperable patients; and (3) evaluate kidney masses in patients with a prior history of other malignancy. In the context of these distinct clinico-radiologic scenarios, renal FNAB remains a valuable diagnostic tool. COMPETING INTERESTS STATEMENT BY ALL AUTHORS The authors declare that they have no competing interests. AUTHORSHIP STATEMENT BY ALL AUTHORS All authors of this article declare that we qualify for authorship as defined by ICMJE http://www.icmje.org/#author. Each author has participated sufficiently in the work and takes public responsibility for appropriate portions of the content of this article. ED conceived of the study. ED and LL participated in its design and data acquisition. ED and LL analyzed and interpreted the data. ED drafted the article. All authors revised it critically for important intellectual content. All authors read and approved the final manuscript. Each author acknowledges that this final version was read and approved. ETHICS STATEMENT BY ALL AUTHORS This study was conducted with approval from the Institutional Review Board (IRB) of the institution associated with this study. The authors take responsibility for maintaining relevant documentation in this respect. EDITORIAL/PEER-REVIEW STATEMENT To ensure the integrity and highest quality of CytoJournal publications, the review process of this manuscript was conducted in a double-blind mode (authors are blinded to reviewers and vice versa) through an automatic online system. ACKNOWLEDGMENTS The authors would like to thank Mrs. Patricia R. Strong, Director of the Writing Center and Assistant Professor in University College at Virginia Commonwealth University, Richmond, Virginia, for critical review of the manuscript and useful suggestions.
REFERENCES
1. Solid renal masses in adults: Image-guided fine-needle aspiration cytology and imaging techniques - "Two heads better than one?". Diagn Cytopathol. 2008;36:8-12.
2. Contemporary radiologic imaging of renal cortical tumors. Urol Clin North Am. 2008;35:593-604.
3. Different phases of renal enhancement: Role in detecting and characterizing renal masses during helical CT. AJR Am J Roentgenol. 1999;173:747-55.
4. Dual-phase helical CT of the kidney: Value of the corticomedullary and nephrographic phase for evaluation of renal lesions and preoperative staging of renal cell carcinoma. AJR Am J Roentgenol. 1997;169:1573-8.
5. Renal tumors. In: Campbell-Walsh Urology (9th ed). Philadelphia: Saunders Elsevier; p. 1567-637.
6. Prospective analysis of computerized tomography and needle biopsy with permanent sectioning to determine the nature of solid renal masses in adults. J Urol. 2003;169:71-4.
7. Aetiology of non-diagnostic renal fine-needle aspiration cytologies in a contemporary series. BJU Int. 2009;103:28-32.
8. Fine-needle aspiration of renal masses in adults: Analysis of results and diagnostic problems in 108 cases. Diagn Cytopathol. 1999;20:339-49.
9. Renal masses in the adult patient: The role of percutaneous biopsy. Radiology. 2006;240:6-22.
10. Biopsy of renal masses: When and why. Cancer Imaging. 2009;9:44-55.
11. Fine-needle aspiration of the adult kidney. Cancer. 1997;81:71-88.
12. Evaluation of sonographically guided percutaneous core biopsy of renal masses. AJR Am J Roentgenol. 2002;179:373-8.
13. Imaging guided biopsy of renal masses: Indications, accuracy and impact on clinical management. J Urol. 1999;161:1470-4.
14. Utilization of fine-needle aspiration in the diagnosis of metastatic tumors to the kidney. Diagn Cytopathol. 1999;21:35-8.
15. Utility of fine-needle aspiration biopsy in solid renal masses. Diagn Cytopathol. 1996;14:14-9.
16. Imaging-guided percutaneous ablation of renal cell carcinoma: A primer of how we do it. AJR Am J Roentgenol. 2009;192:1558-70.
17. Technical considerations in renal CT. Radiol Clin North Am. 2003;41:863-75.
18. CT analysis of metastatic neoplasms of the kidney. Comparison with primary renal cell carcinoma. Acta Radiol. 1992;33:39-44.
19. The radiologic evaluation of renal metastases. Crit Rev Diagn Imaging. 1990;30:219-46.
20. Metastatic neoplasm to the kidney studied by computed tomography and sonography. J Comput Assist Tomogr. 1985;9:43-9.
21. Computed tomography of renal metastases. Semin Ultrasound CT MR. 1997;18:115-21.
22. Imaging guided biopsies of renal masses. Curr Opin Urol. 2000;10:105-9.
23. CT-guided biopsy for the diagnosis of renal tumors before treatment with percutaneous ablation. AJR Am J Roentgenol. 2007;188:1500-5.
24. Renal mass biopsy - A renaissance? J Urol. 2008;179:20-7.
25. Sonographically guided renal mass biopsy: Indications and efficacy. J Ultrasound Med. 2001;20:749-53.
__label__pos
0.719721
@inproceedings{TradowskyCorderoOrsinger2016_1000068532, author = {Tradowsky, Carsten and Cordero, Enrique and Orsinger, Christoph and Vesper, Malte and Becker, J{\"{u}}rgen}, year = {2016}, title = {A Dynamic Cache Architecture for Efficient Memory Resource Allocation in Many-Core Systems}, pages = {343–351}, booktitle = {12th International Symposium on Applied Reconfigurable Computing, ARC 2016; Mangaratiba; Brazil; 22 March 2016 through 24 March 2016. Ed.: V. Bonato}, doi = {10.1007/978-3-319-30481-6_29}, publisher = {{Springer International Publishing, Cham}}, isbn = {978-3-319-30480-9}, issn = {0302-9743, 1611-3349}, series = {Lecture Notes in Computer Science}, language = {english}, volume = {9625} }
__label__pos
0.995965
Like this study set? Create a free account to save it. Sign up for an account Already have a Quizlet account? . Create an account What is the general term for a large network of blood vessels? plexus What is the term for the connecting channels between blood vessels? anastomosis What type of vessel arises from the heart and carries blood away from it? What is its branching system? artery →arteriole→capillary What type of vessel carries blood to the heart? What is its branching system? vein venule What portion of the vascular system is a blood-filled space between two layers of tissue? venous sinuses What major artery arises from the common carotid and subclavian arteries on the left side of the body? aorta What is the direct branch from the aorta (on the right side of the body), which then branches into the common carotid and subclavian? brachiocephalic artery What artery arises directly from the aorta (on the left side of the body) and travels up the neck, lateral to the trachea and larynx? common carotid artery Which artery arises directly from the aorta (on the left side of the body) and has the upper arm as its main destination? subclavian artery What are the two major arteries that supply the head and neck? common carotid subclavian Where is the most reliable pulse during emergency treatment? carotid pulse Which artery supplies intracranial structures and is also the source of the ophthalmic artery? internal carotid artery What does the ophthalmic artery supply? eye orbit *lacrimal gland Which artery supplies extracranial tissues of the head and neck including the oral cavity? external carotid artery What are the major branches of the external carotid artery and how can they be grouped? anterior medial posterior terminal *grouped according to their location to the main artery Which artery directly supplies tissues to the hyoid bone, infrahyoid muscles, sternocleidomastoid muscle, muscles of larynx, and thyroid gland? 
superior thyroid artery Which artery directly supplies tissues superior to the hyoid bone including the suprahyoid muscles, floor of mouth, and tongue? lingual artery Which artery directly supplies mylohyoid muscle, the sublingual salivary gland, mucous membranes of the floor of mouth, and suprahyoid muscles? sublingual artery Outline the pathway of the facial artery. runs medial to the mandible over the submandibular salivary gland around the mandible's inferior border to lateral side runs anteriorly superiorly near the angle of the mouth along side of nose terminates at medial canthus of eye List major branches of the facial artery. ascending palatine glandular branches submental inferior labial superior labial angular arteries Which artery directly supplies the soft palate, palatine muscles, and palatine tonsils? ascending palatine artery Which specific artery can be a source of serious hemorrhage if it is injured during a tonsillectomy? ascending palatine artery Which artery directly supplies the submandibular lymph nodes, submandibular salivary gland, and mylohyoid and digastric muscles? submental artery Which artery supplies the lower lip tissues and facial expression muscles? inferior labial artery Which artery supplies the upper lip tissues and facial expression muscles? inferior labial artery Which artery supplies tissues along the side of the nose (and is the termination of the facial artery)? angular artery Which artery directly supplies pharyngeal walls, soft palate, and meninges of the brain? *ascending pharyngeal artery (pharyngeal branch & meningeal branches) Which artery directly supplies suprahyoid muscles, sternocleidomastoid muscles, and scalp and meningeal tissues in the occipital region? occipital artery Which arteries directly supply the internal ear and the mastoid ear cells? 
*posterior auricular artery (auricular brand & stylomastoid artery) Which artery arises within the parotid salivary gland and can be visible in patients under the skin of their face (on the lateral portion of their forehead area)? superficial temporal artery Which artery directly supplies the parotid salivary gland and the nearby tissues? transverse facial artery Which artery directly supplies the temporalis muscle? transverse facial artery Which artery directly supplies portions of the scalp in the frontal and parietal regions? parietal branch Outline the pathway of the maxillary artery. begins at neck of man. Condyle w/in parotid salaviary gland runs between the man. & sphenomandibular lig. through infratemporal fossa either superficial or deep to lat pterygoid muscle *enters pterygopalatine fossa List the major branches of the maxillary artery within the infratemporal fossa. middle menigeal inferior alveolar arteries deep temporal(s) pterygoid(s) masseteric buccal posterior superior alveolar infraorbital (orbital & ant. sup. Alveolar) greater palatine (lesser palatine) sphenopalatine (lat nasal, septal &nasopalatine) Which artery directly supplies the meninges of the brain located on the inferior surface of the skull, as well as the skull bones? middle meningeal artery Which artery directly supplies the floor of the mouth and mylohyoid muscle? mylohyoid artery Which artery directly supplies tissues of the chin and with what does it anastomose? mental artery Which artery directly supplies pulp tissue, gingiva, and periodontium of mandibular anterior teeth? incisive artery Which artery directly supplies the anterior and posterior portions of the temporalis muscle? deep temporal arteries Which artery directly supplies the masseter muscle? masseteric Which artery directly supplies the lateral and medial pterygoid muscles? pterygoid arteries Which artery directly supplies the buccinator muscle and soft tissues of the cheek? 
buccal artery Which artery directly supplies pulp tissue, periodontium, and gingiva of posterior maxillary teeth and the maxillary sinus? posterior superior alveolar artery (dental & alveolar branches) Which artery directly supplies the orbital region, face, and anterior maxillary teeth? infraorbital artery Which artery directly supplies the pulp tissue, periodontium, and gingiva of anterior maxillary teeth? anterior superior alveolar artery (dental & alveolar branches) Which arteries directly supply both the hard and soft palates? descending palatine artery (greater & lesser palatine artery) Which artery directly supplies the nasal cavity? sphenopalatine artery Compare veins with arteries. veins-carries blood to heart, start small get bigger / artery carries blood away from heart, start big get smaller *in head & neck veins more variable than arteries & larger & more numerous in same tissue area Which vein begins at the medial corner of the eye and drains into the internal jugular vein? facial vein Which vein directly drains the tissues of the orbit? ophthalmic veins Which vein directly drains the upper lip? superior labial vein Which vein directly drains the lower lip? inferior labial vein Which vein directly drains the tissues of the chin and submandibular region? submental vein Which vein directly drains the dorsal and ventral side of the tongue and floor of the mouth? lingual veins How is the retromandibular vein created and what will it form? formed by the merger of the superficial temporal vein and maxillary vein external jugular vein Which vein directly drains the lateral scalp? posterior auricular vein What is the location of the pterygoid plexus of veins? around pterygoid muscles & surrounding the maxillary artery on each side of the face in the infratemporal fossa With what veins does the pterygoid plexus of veins anastomose? both facial & retromandibular veins In general, which veins does the pterygoid plexus of veins drain? 
the veins from the deep portions of the face

What is the function of the pterygoid plexus of veins? It protects the maxillary artery from being compressed during mastication.

Where does the pterygoid plexus of veins drain? into the maxillary vein

Which veins drain blood from the deep portions of the face? pterygoid plexus

Which vein drains blood from the meninges of the brain? middle meningeal vein

Which vein drains the pulp tissues of the maxillary teeth and the periodontium of the maxillary teeth including the gingiva? posterior superior alveolar vein (dental & alveolar branches)

Which vein drains the pulp tissues of the mandibular teeth and periodontium including the gingiva? inferior alveolar vein (dental & alveolar branches)

Where are the venous sinuses located? in the meninges

Where is the cavernous venous sinus located? on each side of the body of the sphenoid bone

With what does the cavernous venous sinus communicate? With the cavernous sinus on the opposite side, and also with the pterygoid plexus of veins and the superior ophthalmic vein, which anastomoses with the facial vein.

Which major vein drains most of the head and neck tissues? internal jugular vein

What structures are contained in the carotid sheath? the internal jugular vein, the common carotid artery & its branches, and the vagus nerve

Which vein is the only vein in the head and neck to have valves, located near its entry into the subclavian vein? external jugular vein

Which vein begins inferior to the chin and drains into the external jugular vein? anterior jugular vein

Which vein is formed when the internal jugular vein merges with the subclavian vein? brachiocephalic vein

What do the brachiocephalic veins unite to form? superior vena cava

Which complications can come about as a result of blood vessel lesions? stroke (CVA), heart attack (MI), tissue destruction (gangrene), infection (e.g. in the cavernous venous sinus)

What is a clot that forms on the inner vessel wall?
thrombus (thrombi)

What term is used when a clot dislodges from the inner vessel wall and travels as foreign material in the blood? embolus (emboli)

What is the term used to describe when a large amount of blood escapes from the vessels into the tissue without clotting? hemorrhage

What is the term used to describe when a blood vessel is injured, a small amount of blood escapes into the surrounding tissues, and a clot forms? hematoma

What are the clinical signs of a hematoma? tissue tenderness, swelling & discoloration

During what dental injections is the risk of hematoma higher? How is this prevented? An incorrectly administered posterior superior alveolar block near the pterygoid plexus of veins; prevented by knowing the location of larger blood vessels.
#1 - Linux Software Compatibility

Ok, I know there are lots of branches of Linux, and each distro has software compatible with it, but some software compatible with one distro may not be compatible with another distro. First of all, is software written for and compatible with Ubuntu compatible with all distros based off Ubuntu, such as Linux Mint and elementary OS? Since Ubuntu is based off Debian, does that also mean all Debian software can be run on Ubuntu and on all Linux distros that branch off Debian? I've heard of Linux distros having package managers... If a distro has a certain package manager, such as pacman, does that mean any distro with that package manager can run software obtainable with that package manager? How many types of Linux compatibility are there, and how do you determine what distros are compatible with what packages? I'm really confused about all this, but would it be safe to say, on the Linux distro timeline, that the three main distros (Debian, Slackware and Red Hat) are unable to run each other's software, and that all distros branching off them (including distros branching off distros branched off them) are able to run their parents' software? For example, Damn Small Linux branches off Knoppix, which branches off Debian. Does that mean any Linux software for Debian will run on Damn Small Linux? Really, my question is how you determine compatibility. P.S. Does old software eventually become incompatible with newer systems? For example, will software written for the first version of Debian still be compatible with the latest version of Debian?

#2

The package manager gets its software from repositories. Every distro has several official repositories containing software, usually grouped by how well screened/compatible they are with the OS; usually only the most compatible ones are enabled by default.
There is also some separation between free and non-free software licenses (this is more philosophical). Linux works a little differently as far as aging goes, compared to Apple and Windows: programs on Linux are typically living projects, as opposed to software that is sold and updated a few times. So when you update, your OS and software update at the same time, and everything is on the same page.

#3 (elija)

The simple answer to compatibility is: maybe. The quickest way to find out is to look either in the distro's web site or in its repository list. However, to answer a few of your specific examples: Ubuntu is not compatible with Debian, as it has changed too much; you may occasionally find a package that will work, but it is unlikely. Mint is compatible with Ubuntu and uses its repositories. Mint Debian Edition is compatible with Debian (testing, by default), but not with Mint or Ubuntu. Siduction, the distro I use, is compatible with Debian but is based off the unstable branch. From this you can see that the issue of compatibility is an interesting one, but what it really boils down to is which versions of which libraries are in use by the distro and which compilers have been used to build things. Always check compatibility and never take it for granted. If you stick to the repositories that are for the distro, and take care when adding third-party repositories, then you should have no issues in that regard.
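The point in the last reply — that compatibility "boils down to what versions of which libraries are in use by the distro" — can be made concrete with a toy model. The sketch below is illustrative only: the package names, version numbers, and the `is_compatible` helper are all invented for the example, and real package managers compare far richer dependency metadata than this.

```python
# Toy model of binary compatibility between a package and a distro:
# a package records the minimum versions of the libraries it was built
# against, and a distro ships a fixed set of library versions.
# (All names and versions below are hypothetical.)

def is_compatible(package_deps, distro_libs):
    """Return True if every required library is shipped by the distro
    at a sufficient version."""
    for lib, min_version in package_deps.items():
        shipped = distro_libs.get(lib)
        if shipped is None or shipped < min_version:
            return False
    return True

# An app built against newer libraries...
app = {"glibc": (2, 31), "libssl": (1, 1)}

# ...runs on a distro that ships those versions or newer,
parent_distro = {"glibc": (2, 35), "libssl": (3, 0)}
# ...but not on an older release whose libraries lag behind.
old_distro = {"glibc": (2, 24), "libssl": (1, 1)}

print(is_compatible(app, parent_distro))  # True
print(is_compatible(app, old_distro))     # False (glibc too old)
```

This is the mechanism behind the answers above: a derivative that keeps tracking its parent's repositories (Mint against Ubuntu) stays compatible, while two distros that have diverged in library versions (Ubuntu against Debian) often are not, even though one descends from the other.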
Nondisplaced bicondylar fracture of unspecified tibia, subsequent encounter for closed fracture with routine healing

ICD-10 code: S82.146D
Disease category: S82.146: Nondisplaced bicondylar fracture of unspecified tibia

Nondisplaced Bicondylar Fracture of Unspecified Tibia: Understanding the Subsequent Encounter for Closed Fracture with Routine Healing

A nondisplaced bicondylar fracture of the unspecified tibia is a type of leg injury that occurs when the tibia bone is partially broken but remains aligned. In this article, we will delve into the subsequent encounter for closed fracture with routine healing, providing valuable information about this condition. When a patient experiences a nondisplaced bicondylar fracture of the tibia, they may require multiple medical visits to ensure proper healing. The subsequent encounter refers to a follow-up appointment to evaluate the progress of the fracture after the initial diagnosis and any initial treatment that may have been performed.

1. Diagnosis: During the initial encounter, a thorough examination and diagnostic tests such as X-rays are conducted to identify the fracture and determine its severity.
2. Treatment: Following the diagnosis, the patient may have undergone appropriate treatment, which can include immobilization with a cast or brace to allow the fracture to heal naturally.
3. Healing process: The subsequent encounter focuses on assessing the healing progress. Routine healing implies that the fracture is healing as expected without any complications.

At the subsequent encounter, the healthcare provider will examine the patient's leg and may order additional X-rays to evaluate the healing process. They will look for signs of proper alignment and callus formation, which indicates the bone is mending correctly.
During this visit, the healthcare provider may also provide guidance on weight-bearing restrictions, physical therapy exercises, or any necessary lifestyle modifications. They will assess the patient's overall condition, including pain levels and mobility, to ensure the fracture is healing as expected. It is important for individuals with this type of fracture to attend subsequent encounters for closed fractures with routine healing. These appointments allow healthcare professionals to monitor progress, address any concerns, and ensure the fracture is healing properly. Compliance with these visits can significantly contribute to the successful recovery of patients with a nondisplaced bicondylar fracture of the unspecified tibia. In conclusion, a nondisplaced bicondylar fracture of the unspecified tibia requires subsequent encounters for closed fractures with routine healing. These follow-up appointments play a crucial role in monitoring the healing process, evaluating alignment and callus formation, and providing necessary guidance for a successful recovery.

Treatment of Nondisplaced bicondylar fracture of unspecified tibia, subsequent encounter for closed fracture with routine healing:

A nondisplaced bicondylar fracture of the tibia refers to a type of fracture that affects both condyles of the tibia, without any significant displacement. This condition typically requires medical attention to ensure proper healing and restore ...
Alcohol-Induced Panic Attacks: What Are They and Can You Stop Them?

Alcohol can lead to anxiety and panic attacks. Sometimes the anxiety and panic attacks are so severe that the only way a person feels they can deal with them is through drinking to self-medicate. There is a clear correlation between alcohol addiction and anxiety, and one has the potential to lead to the other. The relationship between the two may be complex, but it can be explained. People drink for many reasons, and stress and anxiety are common ones. It is true that alcohol can help with anxiety, at least temporarily, but it can also make it worse in the long run and cause serious panic attacks. While it is normal to feel anxious after heavy drinking, when alcohol-induced panic attacks become a common occurrence, it is a sign of a serious problem.

Is there a connection between alcohol and anxiety?

Alcohol is a common form of self-medication for social anxiety, generalised anxiety disorder, and panic disorder. In fact, about 25% of people with panic disorder have a history of alcohol dependence. Not only does anxiety lead to drinking and drinking lead to anxiety, but the two trigger each other in a spiralling cycle. For example, anxiety makes a person start drinking, which worsens their anxiety, which leads them to drink more, which worsens their anxiety further. Alcohol causes anxiety because it upsets hormones, brain function, and sleep. When the body and mind haven't had the opportunity to rest, a person may feel on edge and irritable. If a person is also taking antidepressants, which is not uncommon for people with anxiety, the combination of the two worsens the condition and can trigger a severe panic attack. Long-term alcohol abuse can not only induce panic attacks but can also lead to PTSD. This becomes even more true if a person has an anxiety or panic disorder.
Alcohol not only contributes to anxiety but rewires the part of the brain responsible for coping with fear. Because of this, a person will hold on to fear-inducing associations longer and will have a harder time recovering from trauma. There is also evidence that chronic alcohol abuse can lead to lasting anxiety, even after a person becomes sober.

What causes panic attacks after drinking alcohol?

A panic attack, after alcohol or otherwise, is an episode of extreme anxiety where emotions are amplified and terrifying. A person may experience shortness of breath or hyperventilate and feel detached from reality. Their mind is overloaded with worrying thoughts and fears, even of things that do not present any clear and immediate danger. There are several explanations for why alcohol is responsible. On the biological side, it is well known that alcohol causes a number of physiological symptoms such as dehydration, low blood sugar, and elevated heart rate. These may make a person feel uneasy, dizzy, and irritable, and may lead to a panic attack. It's not just alcohol that causes this: too much of some drugs, such as caffeine, or even sugar can prompt a similar response. Because alcohol affects GABA, an inhibitory neurotransmitter in the brain, it does make a person feel calmer at first; it acts like a depressant and sedative. However, when the alcohol wears off, GABA levels decrease, triggering an anxious, exaggerated and overstimulated state. Serotonin levels go up and down in a similar fashion: they go up when a person drinks, and crash when they stop. If a person drinks regularly, the natural GABA and serotonin levels can become destabilised, making withdrawal symptoms and anxiety attacks worse. Although there is no evident source for the anxiety, these symptoms are interpreted by the brain as stress and worry due to biofeedback. Thus, small things may easily upset the person, and certain words or actions may be misunderstood.
Something that would normally be ignored will now trigger paranoia and panic. If blackouts are involved, the extra stress of the unknown, especially if poor judgement was involved, can increase anxiety levels further.

What is "hangxiety"?

Have you ever felt "on edge" after a night of drinking? Maybe it's just a simple feeling of "something's not right", or you're just extra sensitive to everything going on around you. Or perhaps you're actually paranoid or flat-out scared and can't explain why. This phenomenon is known as an anxiety hangover, or more casually, "hangxiety". Although even a heavy night of drinking can trigger anxiety, major withdrawal symptoms and bad hangovers make alcohol panic attacks even more likely. Hangovers can also add to stress if a person can't function or has to miss work or school. In a severe hangover, a person can experience:

• Elevated heart rate
• Sweating
• Nausea
• Trembling
• Paranoia
• Psychosis

These symptoms sound similar to a panic attack, don't they? Your brain will think so as well. Although these are typical symptoms of a hangover, via biofeedback they can trick your brain into having a real one.

Are alcohol-induced panic attacks a sign of addiction?

Because the alcohol and anxiety cycle feeds on itself and over time causes a person to drink more, it may eventually lead to addiction. If a person uses alcohol as a form of self-medication, it can quickly escalate into a serious problem. If a person regularly drinks to the point where alcohol panic attacks are the norm, it is a sign of addiction. Alcohol-induced panic attacks are scary, and you might feel like cutting down on your drinking. If the alcohol panic attack is hangover-related, that is a warning sign as well. Another thing to watch out for is increases in the severity and frequency of anxiety levels and alcohol panic attacks. These are evidence that you are either drinking increasing amounts or that your brain has already been affected.
If you can't cut down on drinking despite recurring panic attacks or anxiety hangovers, then it would be a good idea to look into getting help.

How to Deal With Alcohol Panic Attacks

If you experience an alcohol-induced panic attack, it is important to take the right steps to calm yourself as soon as possible. However, while it is important to deal with the panic attack, it is also important to acknowledge the situation and the fear you feel, no matter what it is. By acknowledging it, you help your mind understand what is going on and recognize that the situation will pass.

1. Talk to a friend: If you have a sympathetic friend, stay or chat with them. It can be a good distraction, and the company will provide added comfort. Otherwise, do something relaxing that will take your mind off the situation. Breathing exercises and simple meditation can help provide relief.

2. Mindfulness: Engage in some calming breathing techniques to focus your mind.

3. Get some rest: When you're having a hangover, sleep can do wonders. Getting proper rest can ease panic-inducing symptoms and prevent a panic attack. Water and easily digestible carbohydrates will help refuel your body and brain and counteract low blood sugar. Contrary to popular advice, stimulants such as caffeine or sugar, or even smoking, can make both the hangover and the anxiety worse, so avoid them.

4. Eat a healthy diet: If severe anxiety or panic attacks are a problem for you, make sure you get proper nutrition and exercise. Stable blood sugar is important for a stable mind. Psychotherapy and mindfulness meditation can help you deal with anxiety.

5. Get outside into nature: Nature or 'green therapy' has a proven effect on anxiety levels and calming panic attacks.

6. Examine your drinking habits: If your panic attacks are alcohol-related, you should also re-examine your drinking patterns and consider cutting down on your drinking.

7. Seek professional advice: If you are worried about your drinking and don't feel you can stop, you should seek professional advice or speak to a telephone helpline such as Alcoholics Anonymous or The Samaritans. If you have been trying to quit drinking for a while but you can't stop despite the negative consequences on your life, you might want to consider joining a 4-6 week treatment programme at an alcohol rehab clinic like Castle Craig in the UK. At this type of clinic you will undergo detox (if needed) and engage with a therapist who will listen to you and help you develop the skills you need to stay sober. An intensive treatment programme will also include educational presentations delivered by therapists, access to a fitness programme, and complementary therapies such as equine therapy. A continuing care plan is essential to mitigate the risk of relapse. A skilled therapist will assess your anxiety levels and panic attacks and be able to create a treatment plan that addresses these issues.
Description of fast matrix multiplication algorithm: ⟨6×6×7:185⟩

Algorithm type: 3*X^4*Y^4*Z^4 + 6*X^4*Y^2*Z^2 + 32*X^3*Y^2*Z^3 + 48*X^3*Y*Z^3 + 24*X^2*Y^2*Z^2 + 36*X^2*Y*Z + 36*X*Y*Z

Algorithm definition

The algorithm ⟨6×6×7:185⟩ can be constructed using the following decomposition:

⟨6×6×7:185⟩ = ⟨6×6×4:105⟩ + ⟨6×6×3:80⟩.

This decomposition is defined by the following equality: for a 6×6 matrix A, a 6×7 matrix B, and a 7×6 matrix C,

Trace(Mul(A, B, C)) = Trace(Mul(A, B1, C1)) + Trace(Mul(A, B2, C2)),

where B1 is the 6×4 block formed by columns 1-4 of B, B2 is the 6×3 block formed by columns 5-7 of B, C1 is the 4×6 block formed by rows 1-4 of C, and C2 is the 3×6 block formed by rows 5-7 of C.

N.B.: for any matrices A, B and C such that the expression Tr(Mul(A,B,C)) is defined, one can construct several trilinear homogeneous polynomials P(A,B,C) such that P(A,B,C) = Tr(Mul(A,B,C)) (the variables of P(A,B,C) are the coefficients of A, B and C). Each trilinear expression P encodes a matrix multiplication algorithm: the coefficient of C_i_j in P(A,B,C) is the (i,j)-th entry of the matrix product Mul(A,B) = Transpose(C).

Algorithm description

These encodings are given in compressed text format using the Maple computer algebra system.
In each case, the last line can be understood as a description of the encoding with respect to the classical matrix multiplication algorithm. As these outputs are structured, one can easily construct a parser to one's favorite format using the Maple documentation, without this software.
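The decomposition above is just a column/row block split of the second and third factors, so the defining trace equality can be checked numerically. The sketch below (plain Python, with randomly generated test matrices chosen for illustration) verifies the identity behind ⟨6×6×7:185⟩ = ⟨6×6×4:105⟩ + ⟨6×6×3:80⟩:

```python
import random

# Check that Tr(A.B.C) = Tr(A.B1.C1) + Tr(A.B2.C2) for the block split
# behind <6x6x7:185> = <6x6x4:105> + <6x6x3:80>:
#   B1 = columns 1..4 of B,  B2 = columns 5..7 of B,
#   C1 = rows 1..4 of C,     C2 = rows 5..7 of C.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def trace(X):
    """Sum of diagonal entries of a square matrix."""
    return sum(X[i][i] for i in range(len(X)))

def rand_matrix(rows, cols):
    return [[random.randint(-5, 5) for _ in range(cols)] for _ in range(rows)]

random.seed(0)
A = rand_matrix(6, 6)   # 6x6
B = rand_matrix(6, 7)   # 6x7
C = rand_matrix(7, 6)   # 7x6

B1 = [row[:4] for row in B]   # 6x4 block of B
B2 = [row[4:] for row in B]   # 6x3 block of B
C1 = C[:4]                    # 4x6 block of C
C2 = C[4:]                    # 3x6 block of C

full = trace(matmul(matmul(A, B), C))
split = trace(matmul(matmul(A, B1), C1)) + trace(matmul(matmul(A, B2), C2))
print(full == split)  # True for any A, B, C of these shapes
```

The identity holds exactly (not just for this seed) because A·[B1|B2]·[C1;C2] = A·B1·C1 + A·B2·C2 and the trace is linear; this is what lets the two smaller bilinear algorithms be combined into one algorithm for the 6×6×7 format.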
Optimize CSS delivery WordPress plugin

How do I optimize CSS in WordPress?
Installation:
1. Upload the complete speed-up-optimize-css-delivery folder to the /wp-content/plugins/ directory.
2. Activate the plugin through the 'Plugins' menu in WordPress.

How do I defer CSS in WordPress?
Click on show advanced settings (top right). In your Autoptimize plugin, tick the box Optimize CSS Code and the box Inline and Defer CSS.

How do I optimize a WordPress plugin?
Installation:
1. Upload the zip file and unzip it in the /wp-content/plugins/ directory.
2. Activate the plugin through the 'Plugins' menu in WordPress.
3. Go to Settings > Autoptimize and enable the options you want. Generally this means "Optimize HTML/CSS/JavaScript".

Where do I put custom CSS in WordPress?
No matter what WordPress theme you use, you can tweak CSS with the built-in theme customizer. Navigate to the Appearance -> Customize section of your dashboard, scroll down to the bottom of the page and click Additional CSS. This will open an in-built tool that will allow you to add any CSS code.

How do I reduce the size of my DOM? How to improve your YSlow score:
1. Make fewer HTTP requests.
2. Use a content delivery network (CDN).
3. Avoid empty src or href.
4. Add expires headers.
5. Compress components with Gzip.
6. Put CSS at top.
7. Put JavaScript at bottom (non-render-blocking JavaScript; load JavaScript asynchronously).
8. Avoid CSS expressions.

How do you create critical CSS?
Using Autoptimize plus a free critical CSS generator:
Step 1: Go to https://pegasaas.com/critical-path-css-generator/ and enter your URL. Copy the generated "Critical Path CSS".
Step 2: In the Autoptimize settings (turn on advanced settings), under 'CSS Options' check 'Inline and defer CSS' and paste the copied CSS.

How do you defer unused CSS? How to remove unused CSS manually:
1. Open Chrome DevTools.
2. Open the command menu with cmd + shift + p.
3. Type in "Coverage" and click on the "Show Coverage" option.
4. Select a CSS file from the Coverage tab, which will open the file up in the Sources tab.

How do I add inline CSS in WordPress?
For the second solution of adding CSS, you have two options: insert the CSS block by hard-coding the CSS style within the header file, or compile the CSS and use the WordPress enqueue function to insert the inline style.

How do I defer CSS files?
Replace yourcssfile.css with the path of the CSS file you want to defer load. Remove the snippet for the second CSS file when you are defer loading just one CSS file. When you want to defer load more than two CSS files, you can copy the snippet for yourcssfile2.css and keep pasting copies of this snippet within the script tags.

How can I speed up my WordPress site without plugins?
Speed up your WordPress website without plugins:
Step 2: Open up your .htaccess file located in the root directory of your website. ...
Step 3: Limit the number of post revisions. By default, WordPress stores every change you make in your pages and posts. ...
Step 4: Locate your PHP.INI file. ...
Step 5: Retest your site on Google PageSpeed Insights to view your new score.

How do I use the WP Super Cache plugin?
Once you install and activate the plugin, go to the Settings → WP Super Cache tab to start configuring the plugin.
Step 1: Configure the plugin's general settings. ...
Step 2: Go over the plugin's advanced cache configuration. ...
Step 3: Turn on content delivery network (CDN) support (optional).

How do I use the Autoptimize plugin?
1. First, you have to install and activate this plugin. ...
2. After Autoptimize is ready to use, you need to configure it. ...
3. Stay in the JavaScript options and enable Optimize JavaScript Code. ...
4. By enabling Optimize JavaScript Code, you actually enable the minification of JavaScript assets to make your website faster.

What is a CSS class in WordPress?
CSS, or Cascading Style Sheets, is a style sheet language used to define the visual appearance and formatting of HTML documents. WordPress themes use CSS and HTML to output the data generated by WordPress. ... There are many websites publishing CSS tutorials for beginners that can help a new WordPress user get started.
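The "defer CSS" snippets described above all implement the same idea: turn a render-blocking stylesheet link into a preloaded one that is switched on after it loads. Here is a minimal sketch of that rewrite in Python — a toy regex transform with a made-up theme path; real plugins such as Autoptimize parse the HTML properly rather than using a regex:

```python
import re

# Template for a deferred stylesheet: preload the file, then flip
# rel to "stylesheet" once it arrives; <noscript> keeps a blocking
# fallback for browsers without JavaScript.
DEFER_TEMPLATE = (
    '<link rel="preload" href="{href}" as="style" '
    'onload="this.onload=null;this.rel=\'stylesheet\'">'
    '<noscript><link rel="stylesheet" href="{href}"></noscript>'
)

def defer_css(html):
    """Rewrite simple render-blocking <link rel="stylesheet"> tags
    into the preload/onload pattern (toy regex version)."""
    pattern = re.compile(r'<link\s+rel="stylesheet"\s+href="([^"]+)"\s*/?>')
    return pattern.sub(lambda m: DEFER_TEMPLATE.format(href=m.group(1)), html)

# Hypothetical theme stylesheet, for illustration only:
head = '<link rel="stylesheet" href="/wp-content/themes/demo/style.css">'
print(defer_css(head))
```

Running this prints the preload link followed by the `<noscript>` fallback; applied to a page's `<head>`, it is the same transformation the "Inline and Defer CSS" option performs on non-critical stylesheets.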
Issue No. 08 - August 2005 (vol. 16), pp. 686-701

Abstract—Efficient and reliable communication is essential for achieving high performance in a networked computing environment. Finite network resources bring about unavoidable competition among in-flight network packets, resulting in network congestion and, possibly, deadlock. Many techniques have been proposed to improve network performance by efficiently handling network congestion and potential deadlock. However, none of them provide an efficient way of accelerating the movement of network packets in congestion toward their destinations. In this paper, we propose a new mechanism for detecting and resolving network congestion and potential deadlocks. The proposed mechanism is based on efficiently tracking paths of congestion and increasing the scheduling priority of packets along those paths. This acts to throttle other packets trying to enter those congested regions, in effect locking out packets from congested regions until congestion has had the opportunity to disperse. Simulation results show that the proposed technique effectively disperses network congestion and is also applicable in helping to resolve potential deadlock.

INDEX TERMS: Interconnection networks, congestion, deadlock, router scheduling, router architecture.

CITATION: Yong Ho Song, Timothy Mark Pinkston, "Distributed Resolution of Network Congestion and Potential Deadlock Using Reservation-Based Scheduling", IEEE Transactions on Parallel & Distributed Systems, vol. 16, no. 8, pp. 686-701, August 2005, doi:10.1109/TPDS.2005.93
Twisting Bowel: Symptoms, Causes, Diagnosis & Treatments

Twisting bowel is caused when the natural shape of the intestines has changed or a section of the intestines overlaps. It is also known as volvulus or colonic volvulus. "Bowel" is a layman's term for the intestinal portion of the alimentary canal. This area extends from the pyloric sphincter of the stomach down to the anus. In humans, it consists of the small and large intestine. The small intestine is further divided into the duodenum, jejunum and ileum, and the large intestine is divided into the cecum, colon and rectum. These areas each work to break down food and absorb nutrients, transferring the nutrients to the blood stream where they can be shared with cells throughout the body. If there is a change in the natural shape of the intestines, this is known as a twisted bowel. A twist in the small intestine is referred to as a volvulus; twists in the large intestine are known as a colonic volvulus. These abnormal twists or loops can cause an obstruction or other medical conditions which could be fatal. If signs of twisting bowel present themselves, it is important to seek medical attention as quickly as possible.

Symptoms of Twisting Bowel

The most common symptoms include dizziness, nausea, vomiting, unexplained swollen stomach, constipation, bloating, difficulty making a bowel movement, and bloody stool. Skin near the twist may be distended and tender. Some patients also report shortness of breath, intense fatigue or backaches. Symptoms will vary based on the severity of the twist, the extent of the damage and the portion of the intestines affected. Symptoms may also come and go without causing medical damage. However, symptoms that go unchecked can cut off the nutrient, blood or oxygen supply to the rest of the digestive tract; this is known as strangulation of the bowels. If unchecked, this can cause death of the surrounding cells, known as bowel necrosis.
Causes of Twisting Bowel

A twisted bowel is caused when the intestines fold over themselves. In some cases they will untwist on their own, but many cases require medical intervention. Infants born with a twisted bowel or an intestinal malrotation are more likely to develop the condition later in life. Twists in the bowels may also occur after surgery on the abdomen.

• Primary causes include poor gut motility and poor diet. These conditions are observed in children and adults. Colonic volvulus is also more common in pregnant women.
• Secondary causes include underlying health concerns such as adhesions in the colon or redundant intestinal tissue. These causes are significantly more common in adults over the age of 40.

Diagnosis and Treatment of Twisting Bowel

Medical examinations: If you suspect that you are suffering from a twisted bowel, your doctor will need to perform examinations to check for the condition. These may include a stool analysis, a barium enema, a computed tomography (CT) scan, or a magnetic resonance imaging (MRI) scan. If your symptoms match other conditions that cause digestive distress, your doctor may opt to perform a laparoscopy, a minimally invasive surgical procedure used to examine the abdominal organs for damage.

Surgery: Once it is determined that you have a twisted bowel, you will likely need to undergo surgery to correct the problem. These surgical procedures are typically minimally invasive and serve to return the intestines to a natural position. In some cases, the affected section of intestine may be widened to prevent such complications from recurring. If the twist in the bowel is serious, your doctor may opt to remove the affected section to minimize the damage. After your surgery, you will need to take medications to minimize your risk of infection. Your doctor may also prescribe medication to help break down your food, to avoid further irritation of the surgical site.
It is better to avoid a twisted bowel in the first place: drink eight glasses of water daily, eat healthily, exercise regularly, and consider an occasional colon cleanse.
Comprehensive Anatomy and Physiology Essay

Lymphatic System Question Sheet

1. Define the term avascular necrosis. Avascular necrosis is the death of bone tissue due to an interruption of blood supply.
2. Define the term bandemia. Bandemia refers to an excess of band cells (immature white blood cells) released by the bone marrow into the blood.
3. What is meant by cardiac silhouette? The cardiac silhouette is the most prominent central feature of the chest X-ray; it produces a familiar gourd shape, with the apex of the left ventricle located just behind the left chest nipple.
4. Which condition(s) is the drug ceftriaxone used to treat? Ceftriaxone is used to treat infections caused by susceptible organisms, such as skin infections or respiratory tract infections.
5. What does the abbreviation CMV mean? Cytomegalovirus.
6. Which condition(s) is the drug Colace used to treat? Colace (docusate) is an over-the-counter stool softener used to provide short-term relief of irregular bowel function.
7. Which condition(s) is the drug Dilaudid used to treat? Dilaudid (hydromorphone) is a narcotic pain reliever used to treat moderate to severe pain.
8. What is an echocardiogram? An echocardiogram is a test that uses ultrasound to provide pictures of the heart's valves and chambers.
9. Define the term erythema. An inflammatory reaction that occurs deep in the skin, characterized by tender, red, raised lumps or nodules that range in size from 1 to 5 centimeters and are most commonly located over the shins but occasionally on the arms or other areas.
10. Define the term exudate. Exudate is fluid, such as pus or clear fluid, that leaks out of blood vessels into nearby tissues.
11. Define focal infiltrate. A focal infiltrate is dense,
media volume locked
Discussion in 'Samsung Galaxy S3' started by feqqr, Dec 7, 2012.

1. feqqr (New Member, joined Dec 7, 2012): I have a new Galaxy S3 and I have a recurring sound problem. Sometimes the volume for music and media locks at Silent and I can't turn it up. I've tried the button and going into settings. All other sounds work and I can adjust their volume with no problems. Any suggestions would be appreciated. Thanks.

2. Miller6386 (Developer, joined Oct 22, 2011, current phone: Note 4): What does the slider do if you hit volume up, then select the cog wheel and try to slide the media volume there?

3. feqqr: In both cases, the slider does not move.

4. redvelvet (New Member, joined Mar 3, 2013): I know this was from almost two months ago, but I just had the same issue today. I restarted the phone twice and it came back on. Not the ideal solution, but I have media sound again.
31.8: Current Growth and Decay in RL Circuits (JoVE Core Physics)

The current growth and decay in RL circuits can be understood by considering a series RL circuit consisting of a resistor, an inductor, a constant source of emf, and two switches. When the first switch is closed, the circuit is equivalent to a single-loop circuit consisting of a resistor and an inductor connected to a source of emf. In this case, the source of emf produces a current in the circuit. If there were no self-inductance in the circuit, the current would rise immediately to a steady value of ε/R. However, from Faraday's law, the increasing current produces an emf across the inductor with opposite polarity. In accordance with Lenz's law, the induced emf counteracts the increase in the current. As a result, the current starts at zero and increases asymptotically to its final value. Thus, as the current approaches the maximum current ε/R, the stored energy in the inductor increases from zero and asymptotically approaches a maximum value. The growth of current with time is given by

I(t) = (ε/R)(1 − e^(−t/τ))

When the first switch is opened and the second switch is closed, the circuit again becomes a single-loop circuit, but with only a resistor and an inductor. Now the initial current in the circuit is ε/R. The current starts from ε/R and decreases exponentially with time as the energy stored in the inductor is depleted. The decay of current with time is given by

I(t) = (ε/R) e^(−t/τ)

The quantity inductance over resistance,

τ = L/R

measures how quickly the current builds toward its final value; this quantity is called the time constant of the circuit.
When the current is plotted against time, it grows from zero and approaches ε/R asymptotically. At a time equal to one time constant, the growing current has risen to about 63% of its final value; during decay, after one time constant the current has fallen to about 37% of its original value.
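The 63% and 37% figures follow directly from the growth and decay expressions, since 1 − e^(−1) ≈ 0.632 and e^(−1) ≈ 0.368. A quick numerical check (the component values here are illustrative, not from the transcript):

```python
import math

def rl_growth(emf, resistance, inductance, t):
    """Current in a series RL circuit t seconds after the emf is switched in."""
    tau = inductance / resistance          # time constant, tau = L/R
    return (emf / resistance) * (1.0 - math.exp(-t / tau))

def rl_decay(emf, resistance, inductance, t):
    """Current t seconds after the source is removed (initial current emf/R)."""
    tau = inductance / resistance
    return (emf / resistance) * math.exp(-t / tau)

# Illustrative values: a 12 V source, 6 ohm resistor, 3 H inductor.
emf, R, L = 12.0, 6.0, 3.0
tau = L / R            # 0.5 s
i_final = emf / R      # 2.0 A

# After one time constant the growing current reaches ~63.2% of its final
# value, while the decaying current falls to ~36.8% of its initial value.
print(rl_growth(emf, R, L, tau) / i_final)   # ≈ 0.632
print(rl_decay(emf, R, L, tau) / i_final)    # ≈ 0.368
```

Note that the ratios at t = τ are independent of the particular ε, R, and L chosen, which is exactly why the time constant is a useful characterization of the circuit.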
Do you have to have tender breasts in early pregnancy?

Many pregnant women experience breast changes, but plenty of others don't, and that's normal, too. It's not necessary for your breasts to grow bigger, leak, or feel sore to indicate that you're having a healthy pregnancy or that you're ready to breastfeed. Some women's breasts just don't change much during pregnancy.

Is it possible to be pregnant and not have sore breasts?

Yes. Because every pregnancy is different, every pregnant woman's symptoms are different. Some women have breast tenderness as soon as a few days after conception, whereas others don't experience it until weeks later.

Can you be pregnant and have no symptoms at all?

It's possible to be pregnant and have no pregnancy symptoms, but it's uncommon. Half of all women have no symptoms by 5 weeks of pregnancy, but only 10 percent are 8 weeks pregnant with no symptoms.

Is there always breast tenderness in early pregnancy?

Breast pain can happen in one or both breasts. You may feel it all over, in a specific spot, or moving outward into your armpits. The soreness can be constant, or it can come and go. During the earliest weeks of pregnancy, breast pain tends to be dull and achy.

What are some unusual signs of early pregnancy?

Some less familiar early signs of pregnancy include:

• Nosebleeds. Nosebleeds are quite common in pregnancy due to the hormonal changes that happen in the body.
• Mood swings.
• Headaches.
• Dizziness.
• Acne.
• Stronger sense of smell.
• Strange taste in the mouth.
• Discharge.

How does your tummy feel in early pregnancy?

The pregnancy hormone progesterone can cause your tummy to feel full, rounded, and bloated. If you're feeling swollen in this area, there's a possibility you could be pregnant.

How do you tell if you're pregnant without a test?

The most common early signs and symptoms of pregnancy might include:

1. Missed period.
If you're in your childbearing years and a week or more has passed without the start of an expected menstrual cycle, you might be pregnant.
2. Tender, swollen breasts.
3. Nausea with or without vomiting.
4. Increased urination.
5. Fatigue.

How can you tell you're pregnant by hand pulse?

To do so, place your index and middle fingers on the wrist of your other hand, just below your thumb. You should be able to feel a pulse. (You shouldn't use your thumb to take the measurement, because the thumb has a pulse of its own.) Count the heartbeats for 60 seconds.

When do pregnancy signs start?

Signs and symptoms, with their typical timeline (from the missed period):

• fatigue: week 4 or 5
• nausea: week 4 to 6
• tingling or aching breasts: week 4 to 6
• frequent urination: week 4 to 6

What was your earliest pregnancy symptom?

You may feel your body making changes quickly (within the first month of pregnancy), or you may not notice any symptoms at all. Symptoms of early pregnancy can include a missed period, an increased need to urinate, swollen and tender breasts, fatigue, and morning sickness.
ArduMeter: an Arduino Based Multimeter (Sort Of)

The ArduMeter is an Arduino-based multimeter. Many people have built similar projects to perform different operations, and a Google search turns up the variety of things such a device can do. I wanted to make one that is portable, quite easy to use, gives decent readings within limits, and of course can help me troubleshoot and observe some high-speed signals that regular multimeters are too slow to capture.

The different functions the ArduMeter can perform:

1. Reading voltages
2. Measuring resistance
3. Measuring continuity
4. Graphing a voltage-vs-time plot, similar to an oscilloscope
5. And maybe some more in the future

As you might guess, this is not a high-tech device, and all its functions are limited in one way or another. In later steps I will go into the details of each of the ArduMeter's functions, the section of the code each one covers, and some of its theory of operation and limitations.

The fourth function, the voltage-vs-time graph, is definitely my favorite, because with it I can actually see a lot of waveforms and signals (not so much their exact values, but the trend) without having an oscilloscope. For example, if you have a sensor you want to test, you can simply hook its output up to the ArduMeter and, using the voltage-vs-time graph, see what reading it gives without needing a computer nearby.

Step 1: Materials Used

These are the materials I used:

1. Two OLED displays (128×64, 0.96 inch) that use the I2C interface
2. Arduino Nano
3. A voltage booster to provide a stable 5 V from the battery's 3.7 V
4. A 3.7 V lithium-polymer battery
5. Two 10 kΩ trimmer pots
6. A potentiometer (the value doesn't really matter for this one) and a knob
7. Two momentary push-button switches
8. A 25–100 kΩ resistor for the pulldown at pin A2
9. A red LED (optional: used to show if the Li-Po battery has been connected in reverse to the booster circuit, since the booster burned out three times for me when I did so)

To solder it and make it permanent, you will need a few more materials:

1. A perfboard
2. Female PCB pin headers
3. Lots of jumper wires
4. A soldering iron
5. A hot glue gun
6. Heat-shrink tubing

Step 2: Schematic

These are the connections I used (I don't have a drawn schematic yet):

Momentary push button to D2 and GND.
Momentary push button to D3 and GND.
Potentiometer: left to VCC, middle to A1, right to GND.
10 kΩ trimmer pot: left to GND, middle to A0, right to battery + for voltage measurement in the ReadVoltage function, or to the resistor under test for the resistance function.
OLED display 1: VCC to 5 V, GND to GND, SCL to A5, SDA to A4.
OLED display 2: VCC to 5 V, GND to GND, SCL to A5, SDA to A4.
The battery connects to the booster circuit, and the booster circuit connects to the Arduino.
A resistor connects from A2 to GND to pull down any floating values.
A wire connects from A2 to any incoming signal to read the voltage levels in the oscilloscope function.
D11 is set up as a PWM output to check the oscilloscope function.
Buzzer: positive to D12, negative to GND.

Step 3: How to Use the ArduMeter

When you turn on the ArduMeter, you will see a menu of options. Use the potentiometer to scroll through them. Once the selection box lands on the function you want to use, click the green button (connected to D2). How to use the individual functions is covered in the later steps, where I talk about each in more detail. Once you are done with a function, press (or in some cases hold) the red button (connected to D3). It should take you back to the main menu. Of course, I am not a good coder, so some things might not work smoothly, and sometimes the ArduMeter may malfunction. Tinker with the code I post in this Instructable, and you should hopefully be able to work around those bugs.
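The scroll-and-select scheme above amounts to mapping the potentiometer's 10-bit reading onto a menu slot. A minimal sketch of that mapping, in plain Python for clarity; the menu labels and the function name `menu_index` are my assumptions, not taken from the actual firmware:

```python
# Hypothetical menu labels; the real firmware's labels may differ.
MENU_ITEMS = ["Read Voltage", "Resistance", "Continuity", "Oscilloscope"]

def menu_index(analog_reading, n_items=len(MENU_ITEMS)):
    """Map a raw 0-1023 analogRead() value from the scroll pot to a menu slot."""
    clamped = max(0, min(1023, analog_reading))
    # Integer-divide the 10-bit range into n_items equal bands.
    return min(clamped * n_items // 1024, n_items - 1)

print(menu_index(0))      # 0 -> first menu entry
print(menu_index(1023))   # 3 -> last menu entry
```

Dividing the range into equal bands (rather than, say, thresholding) means each menu entry gets the same amount of knob travel, which makes scrolling feel uniform.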
Step 4: Voltage Dividers

Most of the functions in this ArduMeter are based on voltage dividers, so this step is dedicated to giving a general idea of them in case you don't know about them.

A voltage divider is a circuit that, as the name says, divides a voltage, and it can be built with just two resistors. Resistors in series drop voltage in proportion to their resistance. So if you have a 10 ohm resistor and a 20 ohm resistor in series, the 20 ohm resistor will drop twice as much voltage as the 10 ohm resistor. If we connect a 30 volt battery across the series pair, the 10 ohm resistor drops 10 volts and the 20 ohm resistor drops 20 volts. Since the voltage across the 10 ohm resistor is 10 volts, you could attach a load in parallel with that resistor and it would also see close to 10 volts (depending on the load). So we can use those two resistors to divide the voltage from a 30 volt source to run a device at either 20 volts or 10 volts. Figure 1 shows the schematic of a two-resistor voltage divider, as well as the formula for the voltage across each resistor.

Figure 1.1 shows the diagram of a potentiometer (a variable resistor). The wiper (the knob) moves along the resistive track, changing the resistance between pins 1 and 2 and between pins 2 and 3, while the resistance from pin 1 to pin 3 stays the same throughout. If the resistance between pins 1 and 2 is R1, the resistance between pins 2 and 3 is R2, and the resistance across pins 1 and 3 is Rt, then Rt = R1 + R2.

Step 5: Idea Behind It: the ReadVoltage Function

An Arduino isn't capable of measuring voltage in the usual sense; instead, it compares voltages. If you hook up a 3.3 volt battery to the Arduino, it doesn't directly read that there are 3.3 volts at one of its pins. Instead, it compares the voltage at the pin to a reference voltage, which by default is the supply voltage of the Arduino, nominally 5 volts.
After comparing, it determines that the voltage at the pin is 66% of the supply voltage. The Arduino then outputs the reading as an integer between 0 and 1023, from which, with a little arithmetic, we can find the voltage at the pin:

Voltage at the pin = (reading from Arduino × reference voltage) / 1023

In this case the reference voltage is the supply voltage of the Arduino, or 5 volts. However, this has a few limitations:

• First, the test voltage has to be less than the reference voltage of the Arduino; otherwise there would be an uncontrolled flow of current from the test voltage source into the Arduino's chip, which would fry the Arduino.
• Second, the voltage reading is done by comparing the voltage to a reference, which by default is the supply voltage of the Arduino. The supply voltage should be 5 volts, but most of the time it isn't: most USB cables fail to maintain that voltage level under load, or the supplied voltage doesn't stay constant at 5 V.

There were a few options to work around the problems mentioned above:

• I could use a voltage divider circuit to reduce the test voltage by a known factor before it reaches the Arduino pin. For example, the divider could pass only 2/5 of the test voltage to the pin, and multiplying the reading by 5/2 then recovers the original test voltage. There would still be a limit on the test voltage, but extending the range to 0–12.5 V should be sufficient for most cases.
• The Arduino can use different reference voltages to classify the test voltage at its pin. It can use an INTERNAL reference, where the Arduino provides a stable voltage level for comparison, or an EXTERNAL reference, where an external source provides a stable, known voltage level. The external reference requires more circuitry.
• For the INTERNAL references, the Arduino Nano can provide a 1.1 volt reference, and by default it uses the supply voltage as the reference.

To read voltage, I used a voltage divider to increase the range of the test voltage, and I used another function to find the actual supply voltage of the Arduino using its internal 1.1 volt reference. I found the ReadVcc() function online; it lets the Arduino find its own supply voltage through the use of the internal 1.1 volt reference. The voltage divider I used was a 10 kΩ trimmer potentiometer, whose wiper runs to the analog input pin A0.

Step 6: Idea Behind It: Measuring Resistance

The resistance function measures the resistance of a resistor connected between VCC and the end of the trimmer pot whose wiper goes to A0. We then have three resistances in series from VCC to ground (the trimmer pot is connected to ground and A0). So once again we have a voltage divider, where the voltage Vr between A0 and ground is

Vr = Vcc × R1 / (R1 + R2 + R3)

where R1 is the resistance between pins 1 and 2 of the trimmer pot, R2 is the resistance between pins 2 and 3, and R3 is the resistance of the resistor attached from the end of the trimmer pot to VCC. Rearranging the equation, we can solve for R3. That is how the resistance function determines the resistance of the attached resistor.

Step 7: Idea Behind It: Measuring Continuity

The continuity function does the same thing as measuring resistance, but whenever the resistance is below a threshold, the ArduMeter turns on an annoying buzzer to let you know there is continuity between the two points. Otherwise it tells you to connect the probes if you haven't done so, or it shows that there is no continuity.

Step 8: Idea Behind It: Oscilloscope (Voltage vs. Time Graph)

This one is probably the easiest to understand, but a bit harder to execute.
All I had to do for this was take voltage readings at pin A2 and graph them on a 1–5 V scale. The challenges were in taking the readings and graphing them as fast (or as slow) as needed. The Arduino has a limit on how fast it can perform analogRead(), so there is a limit on the oscilloscope: the X axis of the graph can only go down to 14,000 µs, or 14 ms. The ArduMeter can take a reading roughly every 120 µs, and everything in between is lost. It is still not bad, though, because by graphing the readings and connecting them with lines, we still get a pretty good-looking trace.

Also, without a pulldown resistor at pin A2 it picks up far too much noise, to the point that merely touching the wire with your hand would draw a sine-wave-like trace on the display. To avoid that, I connected the pulldown resistor.

In the picture above, I first tried the oscilloscope with a PWM signal generated by a 555 timer, and even though the signal should have been around a kilohertz or so (my component values weren't exact), the ArduMeter managed to draw a pretty nice-looking graph. Pin D11 is also set up as a PWM output just to test the oscilloscope function; by scrolling with the potentiometer you can change the duty cycle of the PWM. I tried graphing that PWM signal with the ArduMeter as well and was pretty pleased with the result.

Step 9: Assemble It on a Breadboard

Assembled on a breadboard it looks like this: quite messy. But it did help me troubleshoot a lot of problems, and I would recommend you do the same before making it permanent. Tinker with it on the breadboard, and maybe you can improve it before you solder it permanently.

For more details: ArduMeter: an Arduino Based Multimeter (Sort Of)
Probability theory (from Wikipedia, the free encyclopedia)

A typical probability problem: "When a fair die is rolled, what is the probability that it shows a 3?"

Probability theory is the branch of mathematics that studies randomness and uncertainty. More precisely, probability theory is used to model situations in which an experiment, run under the same conditions, can produce different outcomes. Typical random experiments include rolling dice, tossing coins, drawing playing cards, and playing roulette.

Mathematicians and actuaries regard a probability as a number in the closed interval [0, 1], assigned to an "event" whose occurrence or failure is random. The probability P(A) is assigned to the event A according to the probability axioms.

The probability that an event A occurs, given that an event B is known to have occurred, is called the conditional probability of A given B; its value is {P(A \cap B) \over P(B)} (when P(B) is not zero). If the conditional probability of A given B equals the probability of A, then A and B are called independent events. This relation between A and B is symmetric, as can be seen from the equivalent statement: "P(A \cap B) = P(A)P(B) when A and B are independent events."

Two important concepts in probability theory are random variables and their probability distributions.

Everyday examples

People often have a fuzzy feel for probability, and in fact many results seem strange:

• 1. Lottery: In a 6-out-of-49 lottery there are 13,983,816 possible combinations (see combinatorics). If you buy a different combination every week, with 52 weeks in a year, then over many repeated trials (each trial lasting until a win) the average time to a win approaches \frac{13983816}{52}=268919 years. In fact, even buying the same combination every week gives the same probability of winning the jackpot. But the 268,919-year arithmetic is correct only under the assumption that the actual winning combination never repeats from draw to draw, which shows how probability and other mathematical reasoning can lead to different conclusions.
• 2. Lottery again: Is the combination 5, 17, 19, 24, 33, 49 more likely to win than 1, 2, 3, 4, 5, 6? Classical probability theory says they are equally likely. In practice, the mechanical lottery balls have small manufacturing differences, so the probabilities of the combinations are not exactly identical, but this only becomes visible after accumulating the results of many draws.
• 3. Birthday paradox: On a football pitch there are 23 people (2 × 11 players and 1 referee). Remarkably, the probability that at least two of these 23 people share a birthday is greater than 50%. It would not violate probability if none of the 23 shared a birthday; that outcome simply has probability less than 50%.
• 4. Roulette: Players may believe that after red has come up many times in a row, black becomes more and more likely. This reasoning is wrong: the probability of black is the same on every spin, because the ball has no "memory" and is not aware of what happened before; the probability of red is always \frac{18}{37}. The successive outcomes do, however, form a time series (in which an autoregressive model might be sought).
• 5. Winning the car in a TV game show: In front of the contestant are three closed doors; behind one is a car, behind the others are goats. The contestant first picks a door; before it is opened, the host opens one of the remaining two doors that hides a goat and asks the contestant whether they want to switch to the other unopened door to improve their chance of winning the car. The correct analysis: if, no matter which door was first chosen, the host always opens a goat door among the remaining two and offers the switch, then switching doubles the probability of winning (the "standard" Monty Hall problem). If, however, the host only tempts the contestant to switch when the chosen door hides the car, then switching always loses (information asymmetry).

History

Probability theory, the foundation of mathematical statistics, was founded by the French mathematicians Pascal and Fermat, and its origins can be traced back to the 17th century. Dice games were popular among the French court nobility at the time. One game's rule was that a player throws a die four times in a row: if no 6 appears, the player wins; if a 6 appears, the house (comparable to today's casino) wins. Under this rule the house, in the long run, plays the winner's role and the players mostly lose; since the house had to live off the game, people accepted this state of affairs.

Later, to make the game more exciting, the rules were changed slightly: the player now throws two dice 24 times in a row; if a double 6 never appears, the player wins, otherwise the house wins. It was commonly believed at the time that the probability of two 6s is 1/6 of the probability of one 6, so that with 24 throws, six times as many as before, the odds of winning and losing would be the same as under the old rule. But this turned out not to be so: in the long run, the house was now on the losing side. So people consulted the mathematician Pascal, asking him to explain this phenomenon.

Others who made important contributions to the development of probability theory include the Dutch physicist and mathematician Huygens, the Swiss physicist and mathematician Bernoulli, the French mathematician de Moivre, the French mathematician and astronomer Laplace, the German mathematician Gauss, the French physicist and mathematician Poisson, the Italian mathematician and physician Cardano, and the Soviet mathematician Kolmogorov.

Events

Unit events, event space, random events

In a single random experiment, a possible outcome that cannot be decomposed further is called an elementary event, or unit event, denoted E. The set of all unit events that can occur in a random experiment is called the event space, denoted S. For example, in one throw of a die, if the unit events are the numbers of points obtained, six unit events are possible, and the event space can be written as S = \{ 1,2,3,4,5,6 \}
The event space above consists of finitely (hence countably) many unit events. There are also event spaces made up of countably infinite or even uncountably many unit events. For instance, in the experiment of tossing a coin repeatedly until heads first appears, the event space consists of countably infinitely many unit events: S = { H, TH, TTH, TTTH, TTTTH, … }; note that in this example "TTTH" is a single unit event. Or throw two chopsticks onto a table at random and let \alpha be the angle they form once at rest; the event space of this experiment can be written S= \{ \alpha | 0^\circ \le \alpha < 180^\circ \}

A random event is a subset of the event space S; it is made up of unit elements of S and is denoted by capital letters A,B,C\cdots. For example, when throwing two dice, let the random event A = "the sum of the points is greater than 10"; then A consists of the following three unit events: A = \{ ( 5,6 ),( 6,5 ),( 6,6 ) \}

If, in a random experiment, the event comprising all possible unit events of the event space occurs, this event is called the certain event, written S \subset S; correspondingly, the event containing no unit event at all is called the impossible event, written \varnothing \subset S

Calculating with events

Since events are defined essentially as sets, the methods of set calculus can be applied directly to events; that is, in calculations, events may be treated as sets:

Complement: the event "not A" occurs when an event not belonging to A occurs
Union A ∪ B: A occurs, or B occurs, or A and B occur together
Intersection A ∩ B: the events A and B occur simultaneously
Difference A \ B: an event in A but not in B occurs
Disjoint A ∩ B = ∅: A and B never occur together
Subset B ⊆ A: if B occurs, then A certainly occurs as well

In roulette, let A be the event "the ball lands in a red pocket" and B the event "the ball lands in a black pocket". Since A and B share no unit event, we can write A\cap B=\varnothing

Note that A and B are not complements of each other, because the event space S also contains the unit event "zero", which is neither red nor black but green. The complements of A and B are therefore

\bar{A}=S\setminus A=B\cup \left \{ 0 \right \}
\bar{B}=S\setminus B=A\cup \left \{ 0 \right \}

Definitions of probability

Classical probability (Laplace probability)

The classical definition of probability goes back to the French mathematician Laplace. If a random experiment comprises only finitely many unit events, each equally likely, the experiment is called a Laplace experiment. In a Laplace experiment, the probability P(A) of an event A in the event space S is

P(A) = (number of unit events in A) / (number of unit events in S)

For example, in a random experiment in which a coin and a die are thrown together, let the event A be "obtaining heads and more than 4 points". Then S = { (heads, 1), (tails, 1), (heads, 2), (tails, 2), (heads, 3), (tails, 3), (heads, 4), (tails, 4), (heads, 5), (tails, 5), (heads, 6), (tails, 6) } and A = { (heads, 5), (heads, 6) }, so by Laplace's definition, P(A)=\frac{2}{12}=\frac{1}{6}

Note that the Laplace definition raises some questions: does there exist, in reality, an experiment whose unit events have exactly equal probabilities?
For we do not know whether any coin or die is perfect, i.e. whether a die is manufactured uniformly with its center of gravity exactly in the middle, or whether a roulette wheel favors a particular number. Nevertheless, classical probability is widely used in practice to determine probability values, on the following grounds: if there is no sufficient evidence that one event is more probable than another, the two events may be regarded as equally probable.

Looking closely at this definition, one notices that Laplace explained probability in terms of probability: the definition uses the phrase "equally possible" (originally "également possible"), which really means "equally probable". The definition also does not say what probability actually is, nor how to pin it down numerically. And in real life there is a whole class of problems that the classical definition cannot handle at all; for example, a life insurer cannot determine in this way the probability that a 50-year-old person will die within the next year.

Statistical probability

After classical probability theory, the English logician John Venn and the Austrian mathematician Richard von Mises proposed statistical probability, built on the theory of frequencies. In their view, the only way to obtain the probability of an event is to carry out 100, 1000, or even 10000 mutually independent repetitions of the experiment, recording after each run the absolute frequency and the relative frequency h_n (A). As the number n of trials increases, the following fact emerges: the relative frequency stabilizes, fluctuating around a particular value; that is, there is a limiting value P(A) towards which the relative frequency tends. This limit is called the statistical probability:

P(A)=\lim_{n \to \infty}h_n (A)

For example, to find the probability of throwing a 6 with a die, one can perform 3000 independent throws, record the number of 6s after each throw, and compute the relative frequency, which tends towards a statistical probability value:

Throws | Absolute frequency of 6 | Relative frequency of 6
1 | 1 | 1.00000
2 | 1 | 0.50000
3 | 1 | 0.33333
4 | 1 | 0.25000
5 | 2 | 0.40000
10 | 2 | 0.20000
20 | 5 | 0.25000
100 | 12 | 0.12000
200 | 39 | 0.19500
300 | 46 | 0.15333
400 | 72 | 0.18000
500 | 76 | 0.15200
600 | 102 | 0.17000
700 | 120 | 0.17143
1000 | 170 | 0.17000
2000 | 343 | 0.17150
3000 | 560 | 0.16867

The empirical law of relative frequencies mentioned above is the real-world reflection of the law of large numbers, which is the foundation of elementary probability theory. Statistical probability remains important in practice today, particularly in elementary probability theory and in mathematical statistics.

Modern probability theory

In contrast to elementary probability theory stands "modern probability theory". Thanks to the research and development of measure theory, probability theory acquired an axiomatic system. Concepts that elementary probability theory could not explain can now be expressed in this axiomatic language; one may say that, with measure theory as its foundation, modern probability theory was finally completed.

The probability axioms

If a function P:S\to \R, \ A\mapsto P(A) assigns to every event A in the event space S a real number P(A), and satisfies the following three axioms, then P is called a probability function and P(A) the probability of the event A:

Axiom 1: 0\le P(A) \le 1 \ (A \in S) — the probability P(A) of an event A is a real number between 0 and 1, inclusive.
Axiom 2: P(S)=1 — the event space as a whole has probability 1.
Axiom 3: P(A\cup B)=P(A)+P(B) if A\cap B=\varnothing — the addition rule for mutually exclusive events. Note that Axiom 3 extends to countably many pairwise disjoint events.

Calculating probabilities

The nine theorems below for calculating probabilities have nothing to do with the event calculus described earlier: all of them follow from the three axioms alone, and they apply to every theory of probability, including classical and statistical probability.

Theorem 1 (complement rule)

The probability of the complement of A is always P(\bar{A})=1-P(A), \in S

Proof: A and \overline {A} are complementary events, so by Axioms 3 and 2, P(A)+P(\bar{A})=P(S)=1 \Rightarrow P(\bar{A})=1-P(A)

The complement rule solves, for instance, the following problem: in two consecutive spins of a roulette wheel, what is the probability that red comes up at least once?
The probability that red does not appear on the first spin is 19/37, and by the multiplication rule, the probability that it also does not appear on the second spin is (19/37)² = 0.2637. The complementary probability is then the probability of at least one red in the two consecutive spins: P = 1 - \left( \frac{19}{37} \right)^2 = 0.7363

Theorem 2

The probability of the impossible event is zero: P(\varnothing)=0

Proof: \varnothing and S are complementary events, so by Axiom 2, P(S)=1, and by Theorem 1, P(\varnothing)=1-1=0

Note: the converse of this theorem does not hold; an event of probability zero is not necessarily impossible. Example: by the definitions of Euclidean geometry and the formula for geometric probability, the probability that a dart hits one given point, or one given line, of the target is zero (a point or a line has zero area), yet this is not an impossible event. Likewise, an event of probability 1 is not necessarily certain.

Theorem 3

If the events A_1,A_2,\cdots A_n \in S are pairwise disjoint, then the probability of their union equals the sum of the individual probabilities:

P(A_1\cup \cdots \cup A_n)=\sum_{j=1}^n P(A_j)

The decisive condition for the validity of this theorem is that the events A_1 \cdots A_n cannot occur simultaneously. For example, in one throw of a die, the probability of obtaining a 5 or a 6 is P=P(A_5)+P(A_6)=\frac{1}{6}+\frac{1}{6}=\frac{2}{6} = \frac{1}{3}

Theorem 4

If A and B are events, then P(A\setminus B)=P(A)-P(A\cap B)

Proof: The event A is composed of the two events A\setminus B and A\cap B, so by Axiom 3, P(A)=P(A\setminus B)+P(A\cap B)

Theorem 5 (addition rule for arbitrary events)

For any two events A and B in the event space S:

P(A \cup B) = P(A) + P(B) - P(A \cap B)

Proof: A \cup B is composed of the three events A\cup B=(A \setminus B)\cup (A\cap B)\cup (B \setminus A). By Theorem 4,

P(A\setminus B) = P(A)-P(A\cap B)
P(B\setminus A) = P(B)-P(A\cap B)

Then by Theorem 3,

P(A\cup B) = P(A\setminus B)+P(A\cap B)+P(B\setminus A) = P(A)-P(A\cap B)+P(A\cap B)+P(B)-P(A\cap B) = P(A)+P(B)-P(A\cap B)

For example, drawing one card at random from a skat deck of 32 cards, what is the probability that it is a diamond or an ace (\mathcal{A})?
Events A and B here stand in an "or" relation and may occur together: the drawn card can be both a diamond and an ace. The probability of A∩B (A and B both occurring) is 1/32 (as the diagram also shows, only one card is both a diamond and an ace, so its probability is 1/32). This gives:

P(A \cup B) = \frac{8}{32} + \frac{4}{32} - \frac{1}{32} = \frac{11}{32}

Note that Axiom 3 is the special case of Theorem 5 in which A and B cannot occur together, so that P(A \cap B) = 0.

Theorem 6 (multiplication rule)

The probability that events A and B both occur is:

P(A \cap B) = P(A) \cdot P(B \vert A) = P(B) \cdot P(A \vert B)

Here P(A|B) denotes the probability of A given that B has occurred, known as the conditional probability. Returning to the Skat game above: if one card is drawn at random from the 32, what is the probability that it is both a diamond and an ace? Let P(A) be the probability of drawing a diamond and P(B) the probability of drawing an ace. The two events clearly overlap: among the diamonds there is an ace, and among the aces there is a diamond. The probability of B given A is P(B|A) = 1/8, so:

P(A \cap B) = P(A) \cdot P(B \vert A) = \frac{8}{32} \cdot \frac{1}{8} = \frac{1}{32}

or, equivalently:

P(A \cap B) = P(B) \cdot P(A \vert B) = \frac{4}{32} \cdot \frac{1}{4} = \frac{1}{32}

As the diagram shows, only one card satisfies both conditions: the ace of diamonds.

Another example: two cards are drawn in succession from the 32 Skat cards (the first card is not put back). What is the probability of drawing two aces? With A and B the two successive events, the probability of B changes because A has occurred, so this is a conditional probability, and the formula gives:

P(A \cap B) = P(A) \cdot P(B \vert A) = \frac{4}{32} \cdot \frac{3}{31} = \frac{3}{248}

Theorem 7 (multiplication rule for independent events)

The probability that two unrelated (independent) events A and B both occur is:

P(A \cap B) = P(A) \cdot P(B)

This is in fact a special case of Theorem 6 (the multiplication rule): if events A and B are unrelated, then P(A|B) = P(A) and P(B|A) = P(B). Now consider two consecutive spins of a roulette wheel. Let P(A) be the probability of red on the first spin and P(B) the probability of red on the second. A and B are unrelated, so by the formula the probability of two reds in a row is:

P(A \cap B) = \frac{18}{37} \cdot \frac{18}{37} = 0.2367

Ignoring this theorem is the root of many players' losses. It is widely believed that after a run of reds, black becomes more and more likely; in fact both colours are equally likely on every spin, and earlier reds have no connection whatsoever with later blacks, because the ball has no "memory": it does not "know" what happened before. Likewise, the probability of red appearing 10 times in a row is P = (18/37)^{10} = 0.0007.

Law of total probability

Let the n events H_1, H_2, \ldots, H_n be pairwise mutually exclusive and together make up the whole sample space S, i.e.

H_i \cap H_j = \varnothing \quad (i \neq j) \qquad H_1 \cup H_2 \cup \ldots \cup H_n = S

Then the probability of an event A can be written as:

P(A) = \sum_{j=1}^n P(A \vert H_j) \cdot P(H_j)

Proof:

A = (A \cap H_1) \cup (A \cap H_2) \cup \ldots \cup (A \cap H_n)

By Axiom 3:

P(A) = P(A \cap H_1) + P(A \cap H_2) + \ldots + P(A \cap H_n)

By the multiplication rule, P(A \cap H_j) = P(A \vert H_j) \cdot P(H_j), and therefore:

P(A) = P(A \vert H_1) \cdot P(H_1) + \ldots + P(A \vert H_n) \cdot P(H_n) = \sum_{j=1}^n P(A \vert H_j) \cdot P(H_j)

For example, a random experiment consists of a die and a cabinet with three drawers. Drawer 1 holds 14 white and 6 black balls, drawer 2 holds 2 white and 8 black balls, and drawer 3 holds 3 white and 7 black balls. The rule is: first roll the die; if it shows less than 4, drawer 1 is selected; if it shows 4 or 5, drawer 2 is selected; otherwise drawer 3. Then one ball is drawn at random from the selected drawer. The probability that this ball is white is:

P(white) = P(white|drawer 1)·P(drawer 1) + P(white|drawer 2)·P(drawer 2) + P(white|drawer 3)·P(drawer 3)
= (14/20)·(3/6) + (2/10)·(2/6) + (3/10)·(1/6)
= 28/60 = 0.4667

As the example shows, the law of total probability is particularly well suited to analysing random experiments with a multi-level structure.

Bayes' theorem

Bayes' theorem, developed by the English mathematician Thomas Bayes (1702-1761), describes the relation between two conditional probabilities such as P(A|B) and P(B|A). By the multiplication rule of Theorem 6, P(A \cap B) = P(A) \cdot P(B \vert A) = P(B) \cdot P(A \vert B), from which Bayes' theorem follows at once:

P(A \vert B) = \frac{P(B \vert A) \cdot P(A)}{P(B)}

Example: a villa has been burgled twice in the past 20 years. Its owner keeps a dog that barks on an average of 3 nights per week, and the probability that the dog barks during a break-in is estimated at 0.9. Question: what is the probability of a break-in given that the dog is barking? Let A be the event that the dog barks at night and B the event of a break-in. Then P(A) = 3/7, P(B) = 2/(20·365.25) = 2/7305 and P(A|B) = 0.9, and the formula readily gives the result:

P(B \vert A) = 0.9 \cdot \frac{2}{7305} \cdot \frac{7}{3} = 0.0005749486653\ldots

Another example: there are two containers, A and B. Container A holds 7 red and 3 white balls, and container B holds 1 red and 9 white balls. A ball has been drawn at random from one of the two containers and turns out to be red. What is the probability that this red ball came from container A? Let B be the event that a red ball is drawn and A the event that the ball comes from container A. Then P(B) = 8/20, P(A) = 1/2 and P(B|A) = 7/10, and by the formula:

P(A \vert B) = \frac{7}{10} \cdot \frac{1}{2} \cdot \frac{20}{8} = \frac{7}{8}

Probability distributions

Applications of probability theory

Although probability theory arose in the 17th century, its axiomatic foundations were only laid in the 1920s and 1930s, after which it developed rapidly. Over the past half-century, probability theory has proved its applicability and usefulness in a growing number of fields: physics, chemistry, biology, medicine, psychology, sociology, political science, education, economics and almost every branch of engineering. It is worth noting in particular that probability theory is the foundation of modern mathematical statistics, whose results are used to analyse survey data and to forecast economic trends.
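The two worked examples above (the drawer experiment and the two-container Bayes question) can be checked numerically. The sketch below uses Python's standard `fractions` module; the variable names are ours, not part of the original text.

```python
from fractions import Fraction as F

# Total probability: P(white) = sum_j P(white | drawer_j) * P(drawer_j).
# Drawer contents and die-based selection probabilities from the example above.
drawers = [
    (F(14, 20), F(3, 6)),  # drawer 1: 14 white of 20 balls; die shows 1-3
    (F(2, 10),  F(2, 6)),  # drawer 2: 2 white of 10 balls; die shows 4 or 5
    (F(3, 10),  F(1, 6)),  # drawer 3: 3 white of 10 balls; die shows 6
]
p_white = sum(p_cond * p_drawer for p_cond, p_drawer in drawers)
print(p_white)          # 7/15, i.e. 28/60 ≈ 0.4667

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B) for the two-container example.
p_a = F(1, 2)           # container A chosen
p_b = F(8, 20)          # a red ball drawn overall (8 red of 20 balls)
p_b_given_a = F(7, 10)  # red given container A
print(p_b_given_a * p_a / p_b)  # 7/8
```

Exact rational arithmetic avoids the rounding in the decimal 0.4667 quoted above.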
Exploring the Second Largest Animal in the World The second largest animal in the world is the fin whale (Balaenoptera physalus). Adult fin whales can grow up to 27 meters (88 feet) in length and can weigh up to 74,000 kg (163,000 pounds). The only animal larger than the fin whale is the blue whale (Balaenoptera musculus). Overview of the second largest animal in the world • The fin whale is a species of baleen whale that can be found in all of the world’s oceans, except for the Arctic Ocean. • Adult fin whales can grow up to 27 meters (88 feet) in length, making them the second largest animal in the world. • Fin whales have a sleek and streamlined body shape, with a narrow head and pointed snout. • They are known for their unique coloration, which includes a dark gray or brown back and a lighter-colored underside. • Fin whales are capable of swimming at speeds of up to 37 kilometers (23 miles) per hour, making them one of the fastest species of whale. • They are filter feeders and primarily consume small schooling fish, krill, and plankton. • The conservation status of fin whales is currently listed as “endangered” by the International Union for Conservation of Nature (IUCN), due to threats such as commercial whaling, entanglement in fishing gear, and habitat destruction. Brief history and taxonomy of the fin whale The fin whale, or Balaenoptera physalus, is a species of baleen whale that belongs to the family Balaenopteridae. It is thought to have first appeared in the oceans around 5 million years ago, during the Pliocene epoch. During the 20th century, fin whales were heavily targeted by commercial whaling operations, due to their large size and abundance. It is estimated that over 700,000 fin whales were killed by whalers between 1900 and 1979, causing a significant decline in their population numbers.
In terms of taxonomy, the fin whale is classified as follows: • Kingdom: Animalia • Phylum: Chordata • Class: Mammalia • Order: Cetacea • Family: Balaenopteridae • Genus: Balaenoptera • Species: Balaenoptera physalus There is some debate among scientists over the number of subspecies of fin whale, with some proposing up to 7 different subspecies. However, the taxonomy of the fin whale is still not fully resolved, and further research is needed to clarify the species’ evolutionary history. Comparison with the largest animal in the world The fin whale, the second largest animal in the world, is often compared with the largest animal in the world, the blue whale. Here are some comparisons between the two species: • Size: The blue whale is the largest animal in the world, growing up to 30 meters (98 feet) in length and weighing up to 173 tonnes (191 tons), while the fin whale can grow up to 27 meters (88 feet) in length and weigh up to 74,000 kg (163,000 pounds). • Body shape: Blue whales have a long, streamlined body shape with a broad, U-shaped head, while fin whales have a more slender, streamlined body shape with a narrow, pointed head. • Vocalizations: Both species produce a range of vocalizations for communication, navigation, and hunting, but blue whales are known for producing the loudest sounds of any animal on Earth, with their songs being able to be heard over thousands of kilometers. • Diet: Both species are filter feeders that consume primarily small schooling fish, krill, and plankton. However, blue whales generally consume larger amounts of food per day than fin whales due to their larger size. • Conservation status: Both blue whales and fin whales are listed as “endangered” by the International Union for Conservation of Nature (IUCN) due to the impact of commercial whaling, entanglement in fishing gear, and habitat destruction. 
Average size and weight of the animal The fin whale is the second largest animal in the world, with an average size and weight as follows: • Size: Adult fin whales can grow up to 27 meters (88 feet) in length, although most are typically between 20-25 meters (65-82 feet) long. • Weight: Adult fin whales can weigh up to 74,000 kg (163,000 pounds), with females generally being slightly larger than males. It’s important to note that these are averages, and there can be significant variation in size and weight depending on factors such as age, sex, and geographic location. Additionally, individual specimens may exceed or fall below these average ranges. Species of the Second Largest Animal in the World • The fin whale is a species of baleen whale that belongs to the family Balaenopteridae. • Its scientific name is Balaenoptera physalus. • It is the second largest animal in the world, with adult individuals growing up to 27 meters (88 feet) in length and weighing up to 74,000 kg (163,000 pounds). • Fin whales can be found in all of the world’s oceans, except for the Arctic Ocean. • There is only one recognized species of fin whale, although there is some debate among scientists over the number of subspecies of fin whale, with some proposing up to 7 different subspecies. • The fin whale is a filter feeder, consuming primarily small schooling fish, krill, and plankton. • Like many whale species, fin whales were heavily targeted by commercial whaling operations during the 20th century, leading to a significant decline in their populations. Although commercial whaling is now banned, fin whales continue to face threats such as entanglement in fishing gear, habitat loss, and ship strikes. • The fin whale is currently classified as an endangered species by the International Union for Conservation of Nature (IUCN). 
Life Cycle and Reproduction of fin whale The life cycle and reproduction of the fin whale, the second-largest animal in the world, are as follows: • Sexual maturity: Fin whales reach sexual maturity at around 6-10 years of age, with males typically maturing later than females. • Mating: During the breeding season, which typically occurs during the winter months in temperate regions, male fin whales compete for females by producing vocalizations and engaging in physical displays such as headstands and tail slapping. • Gestation: The gestation period for fin whales is around 11-12 months, with females giving birth to a single calf every 2-3 years. • Calving: Calves are born in warm, shallow waters during the winter months, and weigh around 2,500-3,000 kg (5,500-6,600 pounds) at birth. They are typically weaned after 6-7 months, at which point they may weigh up to 12,000 kg (26,000 pounds). • Lifespan: Fin whales have a lifespan of around 80-90 years, although this can vary depending on a range of factors such as food availability, predation risk, and environmental conditions. Behavior and Social Structure of Fin Whale The behavior and social structure of the fin whale, the second-largest animal in the world, are as follows: • Solitary or social: Fin whales are typically solitary animals, although they may form loose aggregations in areas of high food abundance. These aggregations are not thought to represent true social groups, as fin whales do not exhibit the complex vocalizations or coordinated behavior seen in some other whale species. • Vocalizations: Fin whales are known for their low-frequency vocalizations, which can be heard over long distances and are thought to be used for communication and echolocation. • Feeding: Fin whales are filter feeders, using baleen plates in their mouths to filter small schooling fish, krill, and plankton from the water. They are known to feed at the surface, as well as at depths of up to 200 meters (660 feet). 
• Migration: Fin whales are highly migratory, with populations in the Northern Hemisphere typically moving towards polar regions in the summer months to take advantage of seasonal food resources. In the Southern Hemisphere, fin whales are thought to follow a more coastal migration pattern. • Diving behavior: Fin whales are capable of deep dives, and can remain submerged for up to 20 minutes at a time. They are known to perform long, slow dives, followed by shorter periods at the surface to breathe. • Threats: Fin whales are currently facing a range of threats, including entanglement in fishing gear, ship strikes, and habitat loss. Climate change is also expected to impact their prey availability and distribution, which may have significant implications for their survival. Conclusion In conclusion, the fin whale is the second largest animal in the world and is an important species in terms of its ecological and cultural significance. Although much is still unknown about their biology and behavior, ongoing research efforts are helping to increase our understanding of this majestic creature.
Low cost descriptors for surrogate modelling of energy generation and storage
Dan Davies, February 28, 2021
Departmental seminar given at STFC's SciML group at the Rutherford Appleton Labs, Didcot, UK.
Transcript
1. Low cost descriptors for surrogate modelling and screening of energy materials. Dr Daniel Davies, @danwdavies. SciML Seminar, September 2020. Department of Chemistry
2. Context: Energy materials discovery / design. [Slide figure: the "Walsh Materials Design SMACT Periodic Table" showing the first ~100 elements with elemental name, symbol, atomic number, atomic mass and common oxidation states; lanthanides, actinides "and other hard-to-pronounce elements" grouped separately.] Energy Materials: PV absorbers, TCOs / TCMs, PEC materials, Thermoelectrics, Battery cathodes, Solid electrolytes …
3. We can compute a lot… but not everything. D. W. Davies et al., Computational screening of all stoichiometric inorganic materials, Chem, 2016. First 100 elements in their known charge states, stoichiometry limit of 8. How many compositions could there be for… • Ay Bz • Ax By Cz • Aw Bx Cy Dz … ensuring charge neutrality and a few other rules about electron distribution?
4. The DFT bottleneck: 10^10 quaternary compounds ⏳ > 200,000 years ??? Compounds that are (i) stable and (ii) have useful properties
5. Overview. LOW COST: surrogate models (• Heuristic screening • ML); HIGH COST: • Automated first-principles calculations. Q 1: What is worth calculating from first principles? Q 2: What is worth making?
6. Overview. PART 1: What is worth calculating from first principles - Estimating properties of solar energy materials - Estimating conductivity in energy storage materials. PART 2: What is worth making - Calculating stability from first principles
7. We can compute many properties for solar materials accurately but at a cost. A.
Ganose et al., Beyond methylammonium lead iodide: prospects for the emergent field of ns2 containing solar absorbers, Chem. Commun., 2016 8. We can roughly estimate bandgap in milliseconds A. H. Nethercot, prediction of fermi energies and photoelectric threshold based on electronegativity concepts, Phys. Rev. Lett 1974 W. A. Harrison, Electronic structure and the properties of solids, 1980 B. D. Pelatt et al., Atomic solid state energy scale, JACS, 2011 • Solid state energy (SSE) scale derived from IP and EA of various binary semiconductors “The solid state energy (SSE) scale is obtained by assessing an average EA (for a cation) or an average IP (for an anion) for each atom by using data from compounds having that specific atom as a constituent. For example, the SSE for Al (-2.1 eV) is the average EA for AlN, AlAs, and AlSb.” 9. We can roughly estimate bandgap in milliseconds A. H. Nethercot, prediction of fermi energies and photoelectric threshold based on electronegativity concepts, Phys. Rev. Lett 1974 W. A. Harrison, Electronic structure and the properties of solids, 1980 B. D. Pelatt et al., Atomic solid state energy scale, JACS, 2011 • Solid state energy (SSE) scale derived from IP and EA of various binary semiconductors • Used to screen a space of 160k chalcohalide compositions for water splitting materials D. W. Davies et al., Computer-aided design of metal chalcohalide semiconductors: from chemical composition to crystal structure, Chem. Sci., 2018 10. We can roughly estimate bandgap in milliseconds sometimes IPs of oxides are not good “training data” (e.g. BaO: -5.0 eV, SiO2 : -9.9 eV, Al2 O3 : -12.4 eV…) Input data from Castelli et al., New Light-Harvesting Materials Using Accurate and Efficient Bandgap Calculations, Adv. Energy. Mat., 2015 How to improve on this? Better representation of materials? More sophisticated model? 
[Slide figure: a feature matrix of 85 compositional features per composition, such as µ(χ), Max(χ), Min(χ) and µ(r_ion), feeding a gradient boosting regression algorithm; error decreases with the number of trees.]
11. A simple machine learning approach offers a solution. RMSE = 0.95 eV. D. W. Davies et al., Data-Driven Discovery of Photoactive Quaternary Oxides Using First-Principles Machine Learning, Chem. Mater., 2019. IPs of oxides are not good “training data” (e.g. BaO: -5.0 eV, SiO2: -9.9 eV, Al2O3: -12.4 eV…). RMSE = 0.95 eV is approaching the limit of accuracy without structural information
12. Modelling charge transport beyond the effective mass approximation. L. D. Whalley, effmass: An effective mass package, Journal of Open Source Software, 2018. Effective mass, mobility, conductivity (n-type / p-type)
13. Electron and hole effective mass across metal oxides: 5,548 metal oxides. [Slide figure: electron and hole band structures, e.g. SnO2 along the Γ–X–M–Γ–Z–R–A–Z path, energies −4 to 8 eV, with O (p), Sn (s), Sn (p), Sn (d) projections.]
14. Moving from the band picture to thinking in terms of polarons. P. A. Cox, Electronic Structure and Chemistry of Solids, 1987. Quasiparticles describing a charge carrier plus surrounding polarization of the lattice. But polarons are currently impossible to model from first principles fully: • Large supercells required even for simple systems • DFT is a mean field theory • DFT relies on the Born-Oppenheimer approximation. For latest efforts see: W. H. Sio et al., Polarons from first principles, without supercells, PRL, 2019; W. H. Sio et al., Ab initio theory of polarons: Formalism and applications, PRB, 2019
15. We can estimate polaron binding energy from effective mass and dielectric tensor. S. Pekar, Local quantum states of electrons in an ideal ion crystal, J. Exp. Theor. Phys., 1946; H. Fröhlich, Electrons in lattice fields, Adv.
Phys., 1954. We can estimate polaron binding energy from effective mass and dielectric tensor
17. We can estimate polaron binding energy from effective mass and dielectric tensor. 214 metal oxides (Type I, Type II, Type III). D. W. Davies et al., Descriptors for electron and hole charge carriers in metal oxides, J. Phys. Chem. Lett., 2019
18. We can estimate polaron binding energy from effective mass and dielectric tensor. 214 metal oxides:
  Formula   e     h
  PtO2      0.5   1.5
  CuRhO2    0.5   2.2
  LiAg3O2   18    11
  NaNbO2    5.9   2.9
  Ca4Bi2O   12    14
  YZnAsO    17    30
  NaAg3O2   14    17
  LaZnAsO   7.9   19
  YZnPO     13    19
  LiNbO2    36    6.6
19. The problem is stability… LOW COST: • Heuristic screening • ML models; HIGH COST: • Automated first-principles calculations. Q 1: What is worth calculating from first principles? Q 2: What is worth making? What is worth trying to make?
20. Layered quinary materials as p-type TCs. A-B-O-Cu-Ch (A2+, B3+, O, Ch, Cu). Prototype: [Cu2S2][Sr3Sc2O5] • Eg = 3.1 eV • µhole = 150 cm^2 V^-1 s^-1 • σundoped = 2.8 S cm^-1 @ 10^17 cm^-3
21. Five elements → tunable electronic properties. [Cu2Ch2]2- [A3B2O5]2+. Cu 3d - Ch 2p mixing in the VBM → favourable band dispersion and delocalized holes. Large band gap due to the perovskite-like layer.
22. Widening the search for interesting compositions. A-B-O-Cu-Ch with A = Sr, Ca, Ba, Mg; B = Sc, Al, Ga, In, Y, La; plus O, S, Cu: 24 materials
23. Widening the search for interesting compositions. A, B = Sr, Ca, Ba, Mg, Na, K, Rb, Cs, Zn, Al, Ga, In, Sc, Y, La, Ti, Zr, Hf, Ge, Sn, Pb; Ch = S, Se; plus O, Cu: 24 materials → 1200 materials ?? 🤔
24. Heuristic design rules narrow down the search space a lot: 1. A and B chosen to be electropositive and closed shell 2. qA ≤ qB for perovskite-like framework 3. Goldschmidt tolerance factor for perovskite-like framework (0.7 – 1.0) 4.
Charge neutrality. (Diagram annotations: t > 1: A too big; t < 0.7: A and B similar in size. Candidate funnel: 1200 → 704 → 496 → 154.)
25. What kind of stability? Dynamic stability (kinetics, phonons) vs thermodynamic stability (internal/free energy). [Slide diagram with species labels A, D, E and B + C.]
26. First-principles calculations (using e.g. the VASP code) give access to the DFT total energy == enthalpy. Key parameter of interest: energy above the convex hull of the composition phase diagram. Workflow: 154 charge-neutral candidates; Materials Project: find competing phases (784 competing phases); PBEsol relaxations; multiple magnetic orderings possible? Y: generate different spin-ordered supercells; N: thermodynamic stability.
27. (Repeat of the previous workflow slide.)
28. Studying this many materials is possible with automated first-principles calculations. Before: job script, input files, output files • Vim / text editor • Bash / Python scripting • SSH & SCP. Now: job script, input files, output files, processing.
29. Mapping out thermodynamic stability. 87 possibly stable/metastable structures (Ehull < 90 meV/atom). (Diagram annotations: increasing size; t > 1: A too big; t < 0.7: A and B similar in size.)
30. Thermodynamic stability agrees with experiment so far. Ehull (meV/atom):
        Cu-S   Cu-Se
  Sc    0      0
  In    0      0
  Y     46     0
  La    132    76
31. Thermodynamic stability agrees with experiment so far. Ehull (meV/atom):
        Cu-S   Cu-Se   Ag-S   Ag-Se
  Sc    0      0       2      0
  In    0      0       8      0
  Y     46     0       46     0
  La    132    76      120    107
Would a stricter (than < 90 meV/atom) Ehull threshold be more useful for this class of materials?
32.
Stability vs synthesizability • It is still not clear how first principles calculations can be used to predict the “synthesizability” of a compound accurately • A closer look at what is stable and what is unstable according to DFT is probably needed.
33. Summary • We can use a range of descriptors to quickly and roughly predict properties of hypothetical energy materials • Predicting the stability of hypothetical compounds remains a challenge. We can do it for some well-known crystal structures but lack the tools to do it for much else. • Even with first-principles methods, thermodynamically stable =/= dynamically stable =/= synthesizable. The chemistry and structure type has a huge impact and this still needs unravelling. • Data-driven techniques have an important role to play in the prediction of the stability of new compounds.
34. Tools and acknowledgements. Electronic structure calculations: • VASP (www.vasp.at). Everything else is open-source Python: • SMACT (smact.readthedocs.io) [WMD] • Sumo (sumo.readthedocs.io) [SMTG] • Pymatgen (pymatgen.org) • Atomate (atomate.org) • Jupyter (jupyter.org) • Scikit-learn (scikit-learn.org). Acknowledgements: • David Scanlon & SMTG (esp. B. A. D. Williamson) • Aron Walsh and WMD group (ICL) • Geoff Hyett, Gregory Limburn (Southampton) • MCC. @danwdavies Thanks!
35. We have a wide range of useful band gaps: 87 fundamental band gaps (HSE06) ranging from 0 → 3.2 eV
36. Property prediction is just one way ML is applied in chemistry and materials science: targeting discovery of new compounds, enhancing theoretical chemistry, assisting characterization, mining existing literature. K. T. Butler et al., Machine learning for molecular and materials science, Nature, 2018
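Two of the heuristic screening rules from the talk (charge neutrality for the [A3B2O5]2+ layer and the Goldschmidt tolerance factor window) can be sketched with standard-library Python. Everything below is illustrative: the oxidation states and Shannon-style ionic radii are a tiny hand-picked subset, not the talk's actual SMACT data, and the real screen also applies electronegativity and qA ≤ qB rules.

```python
from itertools import product
from math import sqrt

# Illustrative data only: a few A-site and B-site cations, each with one
# common oxidation state and an approximate ionic radius in angstroms.
A_SITE = {"Sr": (2, 1.44), "Ca": (2, 1.34), "Ba": (2, 1.61), "Na": (1, 1.39)}
B_SITE = {"Sc": (3, 0.745), "Al": (3, 0.535), "Y": (3, 0.90)}
R_O = 1.40  # O2- radius

def charge_neutral(q_a, q_b, n_a=3, n_b=2, n_o=5):
    """Charge-neutrality rule: the [A3B2O5] block must carry +2 to balance
    the [Cu2Ch2]2- layer, i.e. n_a*q_a + n_b*q_b - 2*n_o == +2."""
    return n_a * q_a + n_b * q_b - 2 * n_o == 2

def goldschmidt_t(r_a, r_b, r_x=R_O):
    """Goldschmidt tolerance factor t = (r_A + r_X) / (sqrt(2) * (r_B + r_X));
    the talk uses the window 0.7 < t < 1.0 for a perovskite-like framework."""
    return (r_a + r_x) / (sqrt(2) * (r_b + r_x))

passing = [
    (a, b)
    for (a, (qa, ra)), (b, (qb, rb)) in product(A_SITE.items(), B_SITE.items())
    if charge_neutral(qa, qb) and 0.7 < goldschmidt_t(ra, rb) < 1.0
]
print(passing)  # e.g. ("Sr", "Sc") survives; Na fails neutrality, Al gives t > 1
```

Each rule cheaply prunes the combinatorial space before any first-principles calculation is attempted, which is the point of the funnel shown on the slides.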
General, Research, Technology The most common myths about the Sun: what is worth believing? The Sun is one of the stars of the Milky Way galaxy and the only star in our solar system. If it did not exist, there would be no plants, no animals, and no you and me on Earth. And all because this heavenly body saturates our planet with vital energy, and the heat it radiates plays a huge role in almost all chemical processes. This celestial object, without exaggeration the most important one for us, has been studied for thousands of years, and during this time various misconceptions about it have spread among people. Many people believe that the Sun is composed of fiery lava. There is also a widespread belief that it always stays in the same place and does not move at all. And some people do not even realize that the Sun, to which we owe our lives, will someday destroy our planet. In this material, I propose to dispel the most common myths about the star called the Sun. The sun is the most important celestial object for us, but we know so little about it Content • 1 What is the Sun? • 2 What is the Sun made of? • 3 Is there water in the sun? • 4 How does the sun move? • 5 Earth's trajectory around the Sun • 6 Will the sun destroy our planet? What is the Sun? First, let's take a look at some general information about the Sun. It is a star, that is, a spherical celestial body that emits light and is held together in outer space by its own gravity and internal pressure. At its core, it is a huge gaseous ball of hydrogen and helium in which thermonuclear reactions constantly occur: under the influence of high temperatures, the nuclei of light elements merge and form heavier elements. At the same time, a huge amount of energy is released, part of which reaches our planet and participates in chemical processes vital for all living organisms. The distance from the Sun to the Earth is 149.6 million kilometers.
To appreciate the difference in scale, it's easier to imagine that the Sun is a huge orange, and the Earth is a tiny poppy seed. Dimensions of the Sun (left) and Earth (right) What is the Sun made of? Some people mistakenly believe that the Sun consists of fiery lava. This, of course, is not true, because scientifically speaking, lava is a volcanic mass of molten rock. The closest star to us actually consists of highly heated gases and is divided into several layers: • the solar core, the central part of the star with a radius of about 175 thousand kilometers. It is essentially a thermonuclear reactor, where the aforementioned collisions of nuclei take place with the release of a huge amount of energy. It is believed that this "fuel" will last for billions of years of the star's existence; • the radiative transfer zone, the middle layer of the Sun, consisting of hydrogen-helium plasma. This zone got its name from the way energy is transferred from the core to the surface: radiation. In the core of the Sun, particles of light called photons are formed. To reach the outer layers of the star, they need to pass through the layer of hydrogen-helium plasma. Along the way, the photons bump into plasma particles, which absorb them and re-emit them in a random direction. So, while photons escaping outward reach the Earth in 8 minutes, it may take them millions of years to pass through the middle layer of the Sun; sooner or later, though, they overcome all obstacles; • the convective zone, which accounts for two-thirds of the volume of the Sun. Energy transfer also takes place in this layer, but this time by convection, the transfer of energy by flows of matter. This phenomenon constantly occurs around us, for example, when a hot water radiator heats up the air in a room. As for the chemical composition of the Sun, it is almost the same as that of all other stars.
It is about 75% hydrogen, 25% helium and about 1% other elements like carbon, oxygen and nitrogen. Is there water in the sun? Many people are sure that the hot Sun cannot contain water. And this sounds quite logical, because liquid cannot exist in such a hot place. But remember your school chemistry curriculum: the formula of water is very simple, just hydrogen and oxygen. And we have already found out above that these elements are present on the hot star, and in considerable amounts. Scientists assure that the water molecule is one of the most durable in the Universe, and it does not break down under the influence of high temperatures. But the DNA molecule, from which life can arise, cannot exist in such extreme conditions, although all the components for its creation are there. Scientists of the Middle Ages believed that sunspots were lakes of water. In part, they were right. It is important to note that water molecules can form only in areas of the Sun with a minimum temperature. While the Sun as a whole heats up to 5.5 thousand degrees Celsius, the sunspots on its surface have a temperature of about 4.5 thousand degrees. Researchers believe that it is in these places that water can form. But you need to understand that it exists in molecular form, not liquid form. According to experts from the NASA space agency, if the temperature of the Sun ever drops, the water on it could take on a liquid form. Also, the Sun can make very strange sounds. You can read about this phenomenon in this material. How does the sun move? Since school years, we know that the Earth and other planets in the solar system revolve around the Sun. Therefore, it is logical to assume that the Sun itself stays in the same place and does not move at all. But this is far from the case: it also moves, and at a very high speed, yet we do not notice this at all, because we move together with the Sun and travel a very long way along with it. Sounds complicated?
Let's take a closer look at this phenomenon. The approximate location of the Sun in the Milky Way galaxy As we know, the solar system is located in one of the corners of the spiral Milky Way galaxy, which contains a huge number of other cosmic bodies that revolve around the center of the Milky Way. These include the Sun, which moves at a speed of about 217 kilometers per second. This figure may seem truly crazy, but we do not notice this speed at all, because the scale of our galaxy is so huge that we can hardly grasp its greatness. The Sun makes one revolution around the center of the galaxy in 250 million years, that is, in one galactic year. Did you know that the Milky Way galaxy is much larger than we think? Earth's trajectory around the Sun Another common myth says that in the summer season the Sun is closer to the Earth than in winter. This fact is incorrect from the point of view of the inhabitants of Russia, and in Africa it can be considered partly true. From time to time, our native Earth really does approach the celestial body a little closer. The fact is that the trajectory of our planet's motion around the Sun is not an even circle, but an elongated ellipse. So, during the year, our planet gets closer to the hot star. In Russia, this happens around January 3-4, and it is at this time that the Sun can be seen in the sky from the closest possible distance. And in Africa this moment falls in the summer; that is, for the inhabitants of this region, the Earth really is located closer to the Sun in summer. Of course, the approach of the Sun affects the temperature on Earth. However, the change appears to be negligible, and the average temperature rises by only 2-3 degrees Celsius. The location of the planets in the solar system The approach of the Earth to the Sun is insignificant. But bodies such as the dwarf planet Pluto have a more "flattened" trajectory of motion. The dwarf planet makes a circle around the Sun slowly, so one year there lasts about 250 Earth years.
During the Plutonian summer, the distance between Pluto and the Sun is 4.5 billion kilometers, and in winter it increases to 7.5 billion. If the trajectory of the Earth around the Sun were the same, then the average temperature in winter would be about minus 50 degrees Celsius, and that only at the comparatively warm equator. At the poles, the thermometers would show minus 150 degrees. In general, we simply would not have survived. How good it is that the Earth moves in a circle, even if not a perfect one. Will the sun destroy our planet? As scary as it is to realize, yes: someday the Sun that gave us life will destroy us. According to scientists, this will happen when there is no thermonuclear "fuel", that is, hydrogen, left in the interior of the star. I mentioned above that it should last for billions of years, so our generation and many future generations have nothing to worry about yet. It is believed that after exhausting its fuel, the Sun will swell to a huge size and begin to emit even more energy. This will lead to the fact that even before the depletion of hydrogen reserves, all life will gradually be erased from the face of the Earth and it will become a dry desert. Someday the sun will destroy our planet According to the calculations of the researchers, up to this point there are at least 5 billion years left. This is much more than has passed since the days of the dinosaurs. Most likely, by this time people will have already passed through several future stages of evolution and even migrated to other, safer planets. But we may be able to colonize Mars already this century, because the well-known Elon Musk has already developed a plan and is hard at work developing spaceships for long-distance flights.
But, if you think about it, even the colonization of Mars would not save us, because Mars, too, revolves around the Sun. So we can only hope that by that time humanity will have learned to conquer other star systems.
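The distance variation described above can be sanity-checked with a few lines of arithmetic. The orbital figures below (a semi-major axis of about 149.6 million km and an eccentricity of about 0.0167) are standard textbook values assumed for this sketch, not numbers taken from the article:

```python
# Back-of-envelope check of how much Earth's distance to the Sun varies.
# Assumed orbital values (not from the article): semi-major axis a and
# eccentricity e of Earth's orbit.
a = 149.6e6   # semi-major axis, in km
e = 0.0167    # orbital eccentricity

perihelion = a * (1 - e)   # closest approach, around January 3-4
aphelion = a * (1 + e)     # farthest point, around early July

print(f"perihelion: {perihelion / 1e6:.1f} million km")   # about 147.1
print(f"aphelion:   {aphelion / 1e6:.1f} million km")     # about 152.1
print(f"variation:  {(aphelion - perihelion) / 1e6:.1f} million km "
      f"({100 * (aphelion - perihelion) / a:.1f}% of the semi-major axis)")
```

The swing is only a little over 3 percent of the average distance, which is why its effect on temperature is small compared with the tilt of the Earth's axis, the actual driver of the seasons.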
Condensate and Feedwater Systems Operation (Interactive). Martech. Updated Jan 21, 2021. Upon completion of this lesson, you will be able to describe the basic procedures for the start-up and operation of the condensate and feedwater systems. • Explain the five basic steps typically used to place the condensate system in service • Discuss considerations taken into account when initially filling the deaerator storage tank • Describe two methods used to fill the feedwater system • List four common tasks completed when putting high and low-pressure heat exchangers in service • Describe basic checks power plant operators routinely make on the condensate and feedwater systems
Closest-pair problem From Rosetta Code (Redirected from Closest pair problem) Task Closest-pair problem You are encouraged to solve this task according to the task description, using any language you may know. This page uses content from Wikipedia. The original article was at Closest pair of points problem. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance) Task Provide a function to find the closest two points among a set of given points in two dimensions, i.e. to solve the Closest pair of points problem in the planar case. The straightforward solution is an O(n²) algorithm (which we can call the brute-force algorithm); the pseudo-code (using indexes) could be simply: bruteForceClosestPair of P(1), P(2), ... P(N) if N < 2 then return ∞ else minDistance ← |P(1) - P(2)| minPoints ← { P(1), P(2) } foreach i ∈ [1, N-1] foreach j ∈ [i+1, N] if |P(i) - P(j)| < minDistance then minDistance ← |P(i) - P(j)| minPoints ← { P(i), P(j) } endif endfor endfor return minDistance, minPoints endif A better algorithm is based on the recursive divide&conquer approach, as explained also at Wikipedia's Closest pair of points problem, which is O(n log n); a pseudo-code could be: closestPair of (xP, yP) where xP is P(1) .. P(N) sorted by x coordinate, and yP is P(1) ..
P(N) sorted by y coordinate (ascending order) if N ≤ 3 then return closest points of xP using brute-force algorithm else xL ← points of xP from 1 to ⌈N/2⌉ xR ← points of xP from ⌈N/2⌉+1 to N xm ← xP(⌈N/2⌉)x yL ← { p ∈ yP : px ≤ xm } yR ← { p ∈ yP : px > xm } (dL, pairL) ← closestPair of (xL, yL) (dR, pairR) ← closestPair of (xR, yR) (dmin, pairMin) ← (dR, pairR) if dL < dR then (dmin, pairMin) ← (dL, pairL) endif yS ← { p ∈ yP : |xm - px| < dmin } nS ← number of points in yS (closest, closestPair) ← (dmin, pairMin) for i from 1 to nS - 1 k ← i + 1 while k ≤ nS and yS(k)y - yS(i)y < dmin if |yS(k) - yS(i)| < closest then (closest, closestPair) ← (|yS(k) - yS(i)|, {yS(k), yS(i)}) endif k ← k + 1 endwhile endfor return closest, closestPair endif References and further readings 360 Assembly * Closest Pair Problem 10/03/2017 CLOSEST CSECT USING CLOSEST,R13 base register B 72(R15) skip savearea DC 17F'0' savearea STM R14,R12,12(R13) save previous context ST R13,4(R15) link backward ST R15,8(R13) link forward LR R13,R15 set addressability LA R6,1 i=1 LA R7,2 j=2 BAL R14,DDCALC dd=(px(i)-px(j))^2+(py(i)-py(j))^2 BAL R14,DDSTORE ddmin=dd; ii=i; jj=j LA R6,1 i=1 DO WHILE=(C,R6,LE,N) do i=1 to n LA R7,1 j=1 DO WHILE=(C,R7,LE,N) do j=1 to n BAL R14,DDCALC dd=(px(i)-px(j))^2+(py(i)-py(j))^2 IF CP,DD,GT,=P'0' THEN if dd>0 then IF CP,DD,LT,DDMIN THEN if dd<ddmin then BAL R14,DDSTORE ddmin=dd; ii=i; jj=j ENDIF , endif ENDIF , endif LA R7,1(R7) j++ ENDDO , enddo j LA R6,1(R6) i++ ENDDO , enddo i ZAP WPD,DDMIN ddmin DP WPD,=PL8'2' ddmin/2 ZAP SQRT2,WPD(8) sqrt2=ddmin/2 ZAP SQRT1,DDMIN sqrt1=ddmin DO WHILE=(CP,SQRT1,NE,SQRT2) do while sqrt1<>sqrt2 ZAP SQRT1,SQRT2 sqrt1=sqrt2 ZAP WPD,DDMIN ddmin DP WPD,SQRT1 /sqrt1 ZAP WP1,WPD(8) ddmin/sqrt1 AP WP1,SQRT1 +sqrt1 ZAP WPD,WP1 ~ DP WPD,=PL8'2' /2 ZAP SQRT2,WPD(8) sqrt2=(sqrt1+(ddmin/sqrt1))/2 ENDDO , enddo while MVC PG,=CL80'the minimum distance ' ZAP WP1,SQRT2 sqrt2 BAL R14,EDITPK edit MVC PG+21(L'WC),WC output XPRNT PG,L'PG print buffer 
XPRNT =CL22'is between the points:',22 MVC PG,PGP init buffer L R1,II ii SLA R1,4 *16 LA R4,PXY-16(R1) @px(ii) MVC WP1,0(R4) px(ii) BAL R14,EDITPK edit MVC PG+3(L'WC),WC output MVC WP1,8(R4) py(ii) BAL R14,EDITPK edit MVC PG+21(L'WC),WC output XPRNT PG,L'PG print buffer MVC PG,PGP init buffer L R1,JJ jj SLA R1,4 *16 LA R4,PXY-16(R1) @px(jj) MVC WP1,0(R4) px(jj) BAL R14,EDITPK edit MVC PG+3(L'WC),WC output MVC WP1,8(R4) py(jj) BAL R14,EDITPK edit MVC PG+21(L'WC),WC output XPRNT PG,L'PG print buffer L R13,4(0,R13) restore previous savearea pointer LM R14,R12,12(R13) restore previous context XR R15,R15 rc=0 BR R14 exit DDCALC EQU * ---- dd=(px(i)-px(j))^2+(py(i)-py(j))^2 LR R1,R6 i SLA R1,4 *16 LA R4,PXY-16(R1) @px(i) LR R1,R7 j SLA R1,4 *16 LA R5,PXY-16(R1) @px(j) ZAP WP1,0(8,R4) px(i) ZAP WP2,0(8,R5) px(j) SP WP1,WP2 px(i)-px(j) ZAP WPS,WP1 = MP WP1,WPS (px(i)-px(j))*(px(i)-px(j)) ZAP WP2,8(8,R4) py(i) ZAP WP3,8(8,R5) py(j) SP WP2,WP3 py(i)-py(j) ZAP WPS,WP2 = MP WP2,WPS (py(i)-py(j))*(py(i)-py(j)) AP WP1,WP2 (px(i)-px(j))^2+(py(i)-py(j))^2 ZAP DD,WP1 dd=(px(i)-px(j))^2+(py(i)-py(j))^2 BR R14 ---- return DDSTORE EQU * ---- ddmin=dd; ii=i; jj=j ZAP DDMIN,DD ddmin=dd ST R6,II ii=i ST R7,JJ jj=j BR R14 ---- return EDITPK EQU * ---- MVC WM,MASK set mask EDMK WM,WP1 edit and mark BCTR R1,0 -1 MVC 0(1,R1),WM+17 set sign MVC WC,WM len17<-len18 BR R14 ---- return N DC A((PGP-PXY)/16) PXY DC PL8'0.654682',PL8'0.925557',PL8'0.409382',PL8'0.619391' DC PL8'0.891663',PL8'0.888594',PL8'0.716629',PL8'0.996200' DC PL8'0.477721',PL8'0.946355',PL8'0.925092',PL8'0.818220' DC PL8'0.624291',PL8'0.142924',PL8'0.211332',PL8'0.221507' DC PL8'0.293786',PL8'0.691701',PL8'0.839186',PL8'0.728260' PGP DC CL80' [+xxxxxxxxx.xxxxxx,+xxxxxxxxx.xxxxxx]' MASK DC C' ',7X'20',X'21',X'20',C'.',6X'20',C'-' CL18 15num II DS F JJ DS F DD DS PL8 DDMIN DS PL8 SQRT1 DS PL8 SQRT2 DS PL8 WP1 DS PL8 WP2 DS PL8 WP3 DS PL8 WPS DS PL8 WPD DS PL16 WM DS CL18 WC DS CL17 PG DS CL80 YREGS END CLOSEST Output: the 
minimum distance 0.077910 is between the points: [ 0.891663, 0.888594] [ 0.925092, 0.818220] Ada Dimension independent, but has to be defined at procedure call time (could be a parameter). Output is simple, can be formatted using Float_IO. closest.adb: (uses brute force algorithm) with Ada.Numerics.Generic_Elementary_Functions; with Ada.Text_IO;   procedure Closest is package Math is new Ada.Numerics.Generic_Elementary_Functions (Float);   Dimension : constant := 2; type Vector is array (1 .. Dimension) of Float; type Matrix is array (Positive range <>) of Vector;   -- calculate the distance of two points function Distance (Left, Right : Vector) return Float is Result : Float := 0.0; Offset : Natural := 0; begin loop Result := Result + (Left(Left'First + Offset) - Right(Right'First + Offset))**2; Offset := Offset + 1; exit when Offset >= Left'Length; end loop; return Math.Sqrt (Result); end Distance;   -- determine the two closest points inside a cloud of vectors function Get_Closest_Points (Cloud : Matrix) return Matrix is Result : Matrix (1..2); Min_Distance : Float; begin if Cloud'Length(1) < 2 then raise Constraint_Error; end if; Result := (Cloud (Cloud'First), Cloud (Cloud'First + 1)); Min_Distance := Distance (Cloud (Cloud'First), Cloud (Cloud'First + 1)); for I in Cloud'First (1) .. Cloud'Last(1) - 1 loop for J in I + 1 .. Cloud'Last(1) loop if Distance (Cloud (I), Cloud (J)) < Min_Distance then Min_Distance := Distance (Cloud (I), Cloud (J)); Result := (Cloud (I), Cloud (J)); end if; end loop; end loop; return Result; end Get_Closest_Points;   Test_Cloud : constant Matrix (1 .. 10) := ( (5.0, 9.0), (9.0, 3.0), (2.0, 0.0), (8.0, 4.0), (7.0, 4.0), (9.0, 10.0), (1.0, 9.0), (8.0, 2.0), (0.0, 10.0), (9.0, 6.0)); Closest_Points : Matrix := Get_Closest_Points (Test_Cloud);   Second_Test : constant Matrix (1 .. 
10) := ( (0.654682, 0.925557), (0.409382, 0.619391), (0.891663, 0.888594), (0.716629, 0.9962), (0.477721, 0.946355), (0.925092, 0.81822), (0.624291, 0.142924), (0.211332, 0.221507), (0.293786, 0.691701), (0.839186, 0.72826)); Second_Points : Matrix := Get_Closest_Points (Second_Test); begin Ada.Text_IO.Put_Line ("Closest Points:"); Ada.Text_IO.Put_Line ("P1: " & Float'Image (Closest_Points (1) (1)) & " " & Float'Image (Closest_Points (1) (2))); Ada.Text_IO.Put_Line ("P2: " & Float'Image (Closest_Points (2) (1)) & " " & Float'Image (Closest_Points (2) (2))); Ada.Text_IO.Put_Line ("Distance: " & Float'Image (Distance (Closest_Points (1), Closest_Points (2)))); Ada.Text_IO.Put_Line ("Closest Points 2:"); Ada.Text_IO.Put_Line ("P1: " & Float'Image (Second_Points (1) (1)) & " " & Float'Image (Second_Points (1) (2))); Ada.Text_IO.Put_Line ("P2: " & Float'Image (Second_Points (2) (1)) & " " & Float'Image (Second_Points (2) (2))); Ada.Text_IO.Put_Line ("Distance: " & Float'Image (Distance (Second_Points (1), Second_Points (2)))); end Closest; Output: Closest Points: P1: 8.00000E+00 4.00000E+00 P2: 7.00000E+00 4.00000E+00 Distance: 1.00000E+00 Closest Points 2: P1: 8.91663E-01 8.88594E-01 P2: 9.25092E-01 8.18220E-01 Distance: 7.79101E-02 AWK   # syntax: GAWK -f CLOSEST-PAIR_PROBLEM.AWK BEGIN { x[++n] = 0.654682 ; y[n] = 0.925557 x[++n] = 0.409382 ; y[n] = 0.619391 x[++n] = 0.891663 ; y[n] = 0.888594 x[++n] = 0.716629 ; y[n] = 0.996200 x[++n] = 0.477721 ; y[n] = 0.946355 x[++n] = 0.925092 ; y[n] = 0.818220 x[++n] = 0.624291 ; y[n] = 0.142924 x[++n] = 0.211332 ; y[n] = 0.221507 x[++n] = 0.293786 ; y[n] = 0.691701 x[++n] = 0.839186 ; y[n] = 0.728260 min = 1E20 for (i=1; i<=n-1; i++) { for (j=i+1; j<=n; j++) { dsq = (x[i]-x[j])^2 + (y[i]-y[j])^2 if (dsq < min) { min = dsq mini = i minj = j } } } printf("distance between (%.6f,%.6f) and (%.6f,%.6f) is %g\n",x[mini],y[mini],x[minj],y[minj],sqrt(min)) exit(0) }   Output: distance between (0.891663,0.888594) and (0.925092,0.818220) 
is 0.0779102 BBC BASIC To find the closest pair it is sufficient to compare the squared-distances, it is not necessary to perform the square root for each pair! DIM x(9), y(9)   FOR I% = 0 TO 9 READ x(I%), y(I%) NEXT   min = 1E30 FOR I% = 0 TO 8 FOR J% = I%+1 TO 9 dsq = (x(I%) - x(J%))^2 + (y(I%) - y(J%))^2 IF dsq < min min = dsq : mini% = I% : minj% = J% NEXT NEXT I% PRINT "Closest pair is ";mini% " and ";minj% " at distance "; SQR(min) END   DATA 0.654682, 0.925557 DATA 0.409382, 0.619391 DATA 0.891663, 0.888594 DATA 0.716629, 0.996200 DATA 0.477721, 0.946355 DATA 0.925092, 0.818220 DATA 0.624291, 0.142924 DATA 0.211332, 0.221507 DATA 0.293786, 0.691701 DATA 0.839186, 0.728260   Output: Closest pair is 2 and 5 at distance 0.0779101913 C See Closest-pair problem/C C++ /* Author: Kevin Bacon Date: 04/03/2014 Task: Closest-pair problem */   #include <iostream> #include <vector> #include <utility> #include <cmath> #include <random> #include <chrono> #include <algorithm> #include <iterator>   typedef std::pair<double, double> point_t; typedef std::pair<point_t, point_t> points_t;   double distance_between(const point_t& a, const point_t& b) { return std::sqrt(std::pow(b.first - a.first, 2) + std::pow(b.second - a.second, 2)); }   std::pair<double, points_t> find_closest_brute(const std::vector<point_t>& points) { if (points.size() < 2) { return { -1, { { 0, 0 }, { 0, 0 } } }; } auto minDistance = std::abs(distance_between(points.at(0), points.at(1))); points_t minPoints = { points.at(0), points.at(1) }; for (auto i = std::begin(points); i != (std::end(points) - 1); ++i) { for (auto j = i + 1; j < std::end(points); ++j) { auto newDistance = std::abs(distance_between(*i, *j)); if (newDistance < minDistance) { minDistance = newDistance; minPoints.first = *i; minPoints.second = *j; } } } return { minDistance, minPoints }; }   std::pair<double, points_t> find_closest_optimized(const std::vector<point_t>& xP, const std::vector<point_t>& yP) { if (xP.size() <= 3) { return 
find_closest_brute(xP); } auto N = xP.size(); auto xL = std::vector<point_t>(); auto xR = std::vector<point_t>(); std::copy(std::begin(xP), std::begin(xP) + (N / 2), std::back_inserter(xL)); std::copy(std::begin(xP) + (N / 2), std::end(xP), std::back_inserter(xR)); auto xM = xP.at(N / 2).first; auto yL = std::vector<point_t>(); auto yR = std::vector<point_t>(); std::copy_if(std::begin(yP), std::end(yP), std::back_inserter(yL), [&xM](const point_t& p) { return p.first <= xM; }); std::copy_if(std::begin(yP), std::end(yP), std::back_inserter(yR), [&xM](const point_t& p) { return p.first > xM; }); auto p1 = find_closest_optimized(xL, yL); auto p2 = find_closest_optimized(xR, yR); auto minPair = (p1.first <= p2.first) ? p1 : p2; auto yS = std::vector<point_t>(); std::copy_if(std::begin(yP), std::end(yP), std::back_inserter(yS), [&minPair, &xM](const point_t& p) { return std::abs(xM - p.first) < minPair.first; }); auto result = minPair; for (auto i = std::begin(yS); i != (std::end(yS) - 1); ++i) { for (auto k = i + 1; k != std::end(yS) && ((k->second - i->second) < minPair.first); ++k) { auto newDistance = std::abs(distance_between(*k, *i)); if (newDistance < result.first) { result = { newDistance, { *k, *i } }; } } } return result; }   void print_point(const point_t& point) { std::cout << "(" << point.first << ", " << point.second << ")"; }   int main(int argc, char * argv[]) { std::default_random_engine re(std::chrono::system_clock::to_time_t( std::chrono::system_clock::now())); std::uniform_real_distribution<double> urd(-500.0, 500.0); std::vector<point_t> points(100); std::generate(std::begin(points), std::end(points), [&urd, &re]() { return point_t { 1000 + urd(re), 1000 + urd(re) }; }); auto answer = find_closest_brute(points); std::sort(std::begin(points), std::end(points), [](const point_t& a, const point_t& b) { return a.first < b.first; }); auto xP = points; std::sort(std::begin(points), std::end(points), [](const point_t& a, const point_t& b) { return a.second 
< b.second; }); auto yP = points; std::cout << "Min distance (brute): " << answer.first << " "; print_point(answer.second.first); std::cout << ", "; print_point(answer.second.second); answer = find_closest_optimized(xP, yP); std::cout << "\nMin distance (optimized): " << answer.first << " "; print_point(answer.second.first); std::cout << ", "; print_point(answer.second.second); return 0; } Output: Min distance (brute): 6.95886 (932.735, 1002.7), (939.216, 1000.17) Min distance (optimized): 6.95886 (932.735, 1002.7), (939.216, 1000.17) Clojure   (defn distance [[x1 y1] [x2 y2]] (let [dx (- x2 x1), dy (- y2 y1)] (Math/sqrt (+ (* dx dx) (* dy dy)))))   (defn brute-force [points] (let [n (count points)] (when (< 1 n) (apply min-key first (for [i (range 0 (dec n)), :let [p1 (nth points i)], j (range (inc i) n), :let [p2 (nth points j)]] [(distance p1 p2) p1 p2])))))   (defn combine [yS [dmin pmin1 pmin2]] (apply min-key first (conj (for [[p1 p2] (partition 2 1 yS)  :let [[_ py1] p1 [_ py2] p2]  :while (< (- py1 py2) dmin)] [(distance p1 p2) p1 p2]) [dmin pmin1 pmin2])))   (defn closest-pair ([points] (closest-pair (sort-by first points) (sort-by second points))) ([xP yP] (if (< (count xP) 4) (brute-force xP) (let [[xL xR] (partition-all (Math/ceil (/ (count xP) 2)) xP) [xm _] (last xL) {yL true yR false} (group-by (fn [[px _]] (<= px xm)) yP) dL&pairL (closest-pair xL yL) dR&pairR (closest-pair xR yR) [dmin pmin1 pmin2] (min-key first dL&pairL dR&pairR) {yS true} (group-by (fn [[px _]] (< (Math/abs (- xm px)) dmin)) yP)] (combine yS [dmin pmin1 pmin2])))))   Common Lisp Points are conses whose cars are x coördinates and whose cdrs are y coördinates. This version includes the optimizations given in the McGill description of the algorithm. (defun point-distance (p1 p2) (destructuring-bind (x1 . y1) p1 (destructuring-bind (x2 . 
y2) p2 (let ((dx (- x2 x1)) (dy (- y2 y1))) (sqrt (+ (* dx dx) (* dy dy)))))))   (defun closest-pair-bf (points) (let ((pair (list (first points) (second points))) (dist (point-distance (first points) (second points)))) (dolist (p1 points (values pair dist)) (dolist (p2 points) (unless (eq p1 p2) (let ((pdist (point-distance p1 p2))) (when (< pdist dist) (setf (first pair) p1 (second pair) p2 dist pdist))))))))   (defun closest-pair (points) (labels ((cp (xp &aux (length (length xp))) (if (<= length 3) (multiple-value-bind (pair distance) (closest-pair-bf xp) (values pair distance (sort xp '< :key 'cdr))) (let* ((xr (nthcdr (1- (floor length 2)) xp)) (xm (/ (+ (caar xr) (caadr xr)) 2))) (psetf xr (rest xr) (rest xr) '()) (multiple-value-bind (lpair ldist yl) (cp xp) (multiple-value-bind (rpair rdist yr) (cp xr) (multiple-value-bind (dist pair) (if (< ldist rdist) (values ldist lpair) (values rdist rpair)) (let* ((all-ys (merge 'vector yl yr '< :key 'cdr)) (ys (remove-if #'(lambda (p) (> (abs (- (car p) xm)) dist)) all-ys)) (ns (length ys))) (dotimes (i ns) (do ((k (1+ i) (1+ k))) ((or (= k ns) (> (- (cdr (aref ys k)) (cdr (aref ys i))) dist))) (let ((pd (point-distance (aref ys i) (aref ys k)))) (when (< pd dist) (setf dist pd (first pair) (aref ys i) (second pair) (aref ys k)))))) (values pair dist all-ys))))))))) (multiple-value-bind (pair distance) (cp (sort (copy-list points) '< :key 'car)) (values pair distance)))) C# We provide a small helper class for distance comparisons: class Segment { public Segment(PointF p1, PointF p2) { P1 = p1; P2 = p2; }   public readonly PointF P1; public readonly PointF P2;   public float Length() { return (float)Math.Sqrt(LengthSquared()); }   public float LengthSquared() { return (P1.X - P2.X) * (P1.X - P2.X) + (P1.Y - P2.Y) * (P1.Y - P2.Y); } } Brute force: Segment Closest_BruteForce(List<PointF> points) { int n = points.Count; var result = Enumerable.Range( 0, n-1) .SelectMany( i => Enumerable.Range( i+1, n-(i+1) ) .Select( j 
=> new Segment( points[i], points[j] ))) .OrderBy( seg => seg.LengthSquared()) .First();   return result; } And divide-and-conquer.   public static Segment MyClosestDivide(List<PointF> points) { return MyClosestRec(points.OrderBy(p => p.X).ToList()); }   private static Segment MyClosestRec(List<PointF> pointsByX) { int count = pointsByX.Count; if (count <= 4) return Closest_BruteForce(pointsByX);   // left and right lists sorted by X, as order retained from full list var leftByX = pointsByX.Take(count/2).ToList(); var leftResult = MyClosestRec(leftByX);   var rightByX = pointsByX.Skip(count/2).ToList(); var rightResult = MyClosestRec(rightByX);   var result = rightResult.Length() < leftResult.Length() ? rightResult : leftResult;   // There may be a shorter distance that crosses the divider // Thus, extract all the points within result.Length either side var midX = leftByX.Last().X; var bandWidth = result.Length(); var inBandByX = pointsByX.Where(p => Math.Abs(midX - p.X) <= bandWidth);   // Sort by Y, so we can efficiently check for closer pairs var inBandByY = inBandByX.OrderBy(p => p.Y).ToArray();   int iLast = inBandByY.Length - 1; for (int i = 0; i < iLast; i++ ) { var pLower = inBandByY[i];   for (int j = i + 1; j <= iLast; j++) { var pUpper = inBandByY[j];   // Comparing each point to successivly increasing Y values // Thus, can terminate as soon as deltaY is greater than best result if ((pUpper.Y - pLower.Y) >= result.Length()) break;   if (Segment.Length(pLower, pUpper) < result.Length()) result = new Segment(pLower, pUpper); } }   return result; }   However, the difference in speed is still remarkable. 
var randomizer = new Random(10); var points = Enumerable.Range( 0, 10000).Select( i => new PointF( (float)randomizer.NextDouble(), (float)randomizer.NextDouble())).ToList(); Stopwatch sw = Stopwatch.StartNew(); var r1 = Closest_BruteForce(points); sw.Stop(); Debugger.Log(1, "", string.Format("Time used (Brute force) (float): {0} ms", sw.Elapsed.TotalMilliseconds)); Stopwatch sw2 = Stopwatch.StartNew(); var result2 = Closest_Recursive(points); sw2.Stop(); Debugger.Log(1, "", string.Format("Time used (Divide & Conquer): {0} ms",sw2.Elapsed.TotalMilliseconds)); Assert.Equal(r1.Length(), result2.Length()); Output: Time used (Brute force) (float): 145731.8935 ms Time used (Divide & Conquer): 1139.2111 ms Non Linq Brute Force:   Segment Closest_BruteForce(List<PointF> points) { Trace.Assert(points.Count >= 2);   int count = points.Count;   // Seed the result - doesn't matter what points are used // This just avoids having to do null checks in the main loop below var result = new Segment(points[0], points[1]); var bestLength = result.Length();   for (int i = 0; i < count; i++) for (int j = i + 1; j < count; j++) if (Segment.Length(points[i], points[j]) < bestLength) { result = new Segment(points[i], points[j]); bestLength = result.Length(); }   return result; } Targeted Search: Much simpler than divide and conquer, and actually runs faster for the random points. Key optimization is that if the distance along the X axis is greater than the best total length you already have, you can terminate the inner loop early. However, as only sorts in the X direction, it degenerates into an N^2 algorithm if all the points have the same X.   
Segment Closest(List<PointF> points) { Trace.Assert(points.Count >= 2);   int count = points.Count; points.Sort((lhs, rhs) => lhs.X.CompareTo(rhs.X));   var result = new Segment(points[0], points[1]); var bestLength = result.Length();   for (int i = 0; i < count; i++) { var from = points[i];   for (int j = i + 1; j < count; j++) { var to = points[j];   var dx = to.X - from.X; if (dx >= bestLength) { break; }   if (Segment.Length(from, to) < bestLength) { result = new Segment(from, to); bestLength = result.Length(); } } }   return result; }   Crystal D Compact Versions import std.stdio, std.typecons, std.math, std.algorithm, std.random, std.traits, std.range, std.complex;   auto bruteForceClosestPair(T)(in T[] points) pure nothrow @nogc { // return pairwise(points.length.iota, points.length.iota) // .reduce!(min!((i, j) => abs(points[i] - points[j]))); auto minD = Unqual!(typeof(T.re)).infinity; T minI, minJ; foreach (immutable i, const p1; points.dropBackOne) foreach (const p2; points[i + 1 .. $]) { immutable dist = abs(p1 - p2); if (dist < minD) { minD = dist; minI = p1; minJ = p2; } } return tuple(minD, minI, minJ); }   auto closestPair(T)(T[] points) pure nothrow { static Tuple!(typeof(T.re), T, T) inner(in T[] xP, /*in*/ T[] yP) pure nothrow { if (xP.length <= 3) return xP.bruteForceClosestPair; const Pl = xP[0 .. $ / 2]; const Pr = xP[$ / 2 .. $]; immutable xDiv = Pl.back.re; auto Yr = yP.partition!(p => p.re <= xDiv); immutable dl_pairl = inner(Pl, yP[0 .. yP.length - Yr.length]); immutable dr_pairr = inner(Pr, Yr); immutable dm_pairm = dl_pairl[0]<dr_pairr[0] ? dl_pairl : dr_pairr; immutable dm = dm_pairm[0]; const nextY = yP.filter!(p => abs(p.re - xDiv) < dm).array;   if (nextY.length > 1) { auto minD = typeof(T.re).infinity; size_t minI, minJ; foreach (immutable i; 0 .. nextY.length - 1) foreach (immutable j; i + 1 .. 
min(i + 8, nextY.length)) { immutable double dist = abs(nextY[i] - nextY[j]); if (dist < minD) { minD = dist; minI = i; minJ = j; } } return dm <= minD ? dm_pairm : typeof(return)(minD, nextY[minI], nextY[minJ]); } else return dm_pairm; }   points.sort!q{ a.re < b.re }; const xP = points.dup; points.sort!q{ a.im < b.im }; return inner(xP, points); }   void main() { alias C = complex; auto pts = [C(5,9), C(9,3), C(2), C(8,4), C(7,4), C(9,10), C(1,9), C(8,2), C(0,10), C(9,6)]; pts.writeln; writeln("bruteForceClosestPair: ", pts.bruteForceClosestPair); writeln(" closestPair: ", pts.closestPair);   rndGen.seed = 1; Complex!double[10_000] points; foreach (ref p; points) p = C(uniform(0.0, 1000.0) + uniform(0.0, 1000.0)); writeln("bruteForceClosestPair: ", points.bruteForceClosestPair); writeln(" closestPair: ", points.closestPair); } Output: [5+9i, 9+3i, 2+0i, 8+4i, 7+4i, 9+10i, 1+9i, 8+2i, 0+10i, 9+6i] bruteForceClosestPair: Tuple!(double, Complex!double, Complex!double)(1, 8+4i, 7+4i) closestPair: Tuple!(double, Complex!double, Complex!double)(1, 7+4i, 8+4i) bruteForceClosestPair: Tuple!(double, Complex!double, Complex!double)(1.76951e-05, 1040.2+0i, 1040.2+0i) closestPair: Tuple!(double, Complex!double, Complex!double)(1.76951e-05, 1040.2+0i, 1040.2+0i) About 1.87 seconds run-time for data generation and brute force version, and about 0.03 seconds for data generation and divide & conquer (10_000 points in both cases) with ldc2 compiler. Faster Brute-force Version import std.stdio, std.random, std.math, std.typecons, std.complex, std.traits;   Nullable!(Tuple!(size_t, size_t)) bfClosestPair2(T)(in Complex!T[] points) pure nothrow @nogc { auto minD = Unqual!(typeof(points[0].re)).infinity; if (points.length < 2) return typeof(return)();   size_t minI, minJ; foreach (immutable i; 0 .. points.length - 1) foreach (immutable j; i + 1 .. 
points.length) { auto dist = (points[i].re - points[j].re) ^^ 2; if (dist < minD) { dist += (points[i].im - points[j].im) ^^ 2; if (dist < minD) { minD = dist; minI = i; minJ = j; } } }   return typeof(return)(tuple(minI, minJ)); }   void main() { alias C = Complex!double; auto rng = 31415.Xorshift; C[10_000] pts; foreach (ref p; pts) p = C(uniform(0.0, 1000.0, rng), uniform(0.0, 1000.0, rng));   immutable ij = pts.bfClosestPair2; if (ij.isNull) return; writefln("Closest pair: Distance: %f p1, p2: %f, %f", abs(pts[ij[0]] - pts[ij[1]]), pts[ij[0]], pts[ij[1]]); } Output: Closest pair: Distance: 0.019212 p1, p2: 9.74223+119.419i, 9.72306+119.418i About 0.12 seconds run-time for brute-force version 2 (10_000 points) with with LDC2 compiler. Elixir defmodule Closest_pair do # brute-force algorithm: def bruteForce([p0,p1|_] = points), do: bf_loop(points, {distance(p0, p1), {p0, p1}})   defp bf_loop([_], acc), do: acc defp bf_loop([h|t], acc), do: bf_loop(t, bf_loop(h, t, acc))   defp bf_loop(_, [], acc), do: acc defp bf_loop(p0, [p1|t], {minD, minP}) do dist = distance(p0, p1) if dist < minD, do: bf_loop(p0, t, {dist, {p0, p1}}), else: bf_loop(p0, t, {minD, minP}) end   defp distance({p0x,p0y}, {p1x,p1y}) do  :math.sqrt( (p1x - p0x) * (p1x - p0x) + (p1y - p0y) * (p1y - p0y) ) end   # recursive divide&conquer approach: def recursive(points) do recursive(Enum.sort(points), Enum.sort_by(points, fn {_x,y} -> y end)) end   def recursive(xP, _yP) when length(xP) <= 3, do: bruteForce(xP) def recursive(xP, yP) do {xL, xR} = Enum.split(xP, div(length(xP), 2)) {xm, _} = hd(xR) {yL, yR} = Enum.partition(yP, fn {x,_} -> x < xm end) {dL, pairL} = recursive(xL, yL) {dR, pairR} = recursive(xR, yR) {dmin, pairMin} = if dL<dR, do: {dL, pairL}, else: {dR, pairR} yS = Enum.filter(yP, fn {x,_} -> abs(xm - x) < dmin end) merge(yS, {dmin, pairMin}) end   defp merge([_], acc), do: acc defp merge([h|t], acc), do: merge(t, merge_loop(h, t, acc))   defp merge_loop(_, [], acc), do: acc defp 
merge_loop(p0, [p1|_], {dmin,_}=acc) when dmin <= elem(p1,1) - elem(p0,1), do: acc defp merge_loop(p0, [p1|t], {dmin, pair}) do dist = distance(p0, p1) if dist < dmin, do: merge_loop(p0, t, {dist, {p0, p1}}), else: merge_loop(p0, t, {dmin, pair}) end end   data = [{0.654682, 0.925557}, {0.409382, 0.619391}, {0.891663, 0.888594}, {0.716629, 0.996200}, {0.477721, 0.946355}, {0.925092, 0.818220}, {0.624291, 0.142924}, {0.211332, 0.221507}, {0.293786, 0.691701}, {0.839186, 0.728260}]   IO.inspect Closest_pair.bruteForce(data) IO.inspect Closest_pair.recursive(data)   data2 = for _ <- 1..5000, do: {:rand.uniform, :rand.uniform} IO.puts "\nBrute-force:" IO.inspect :timer.tc(fn -> Closest_pair.bruteForce(data2) end) IO.puts "Recursive divide&conquer:" IO.inspect :timer.tc(fn -> Closest_pair.recursive(data2) end) Output: {0.07791019135517516, {{0.891663, 0.888594}, {0.925092, 0.81822}}} {0.07791019135517516, {{0.891663, 0.888594}, {0.925092, 0.81822}}} Brute-force: {9579000, {2.068674444452469e-4, {{0.9397601102440695, 0.020420581980209674}, {0.9399398976079764, 0.020522908141823986}}}} Recursive divide&conquer: {109000, {2.068674444452469e-4, {{0.9397601102440695, 0.020420581980209674}, {0.9399398976079764, 0.020522908141823986}}}} F# Brute force:   let closest_pairs (xys: Point []) = let n = xys.Length seq { for i in 0..n-2 do for j in i+1..n-1 do yield xys.[i], xys.[j] } |> Seq.minBy (fun (p0, p1) -> (p1 - p0).LengthSquared)   For example:   closest_pairs [|Point(0.0, 0.0); Point(1.0, 0.0); Point (2.0, 2.0)|]   gives:   (0,0, 1,0)   Divide And Conquer:     open System; open System.Drawing; open System.Diagnostics;   let Length (seg : (PointF * PointF) option) = match seg with | None -> System.Single.MaxValue | Some(line) -> let f = fst line let t = snd line   let dx = f.X - t.X let dy = f.Y - t.Y sqrt (dx*dx + dy*dy)     let Shortest a b = if Length(a) < Length(b) then a else b     let rec ClosestBoundY from maxY (ptsByY : PointF list) = match ptsByY with | [] -> None | 
hd :: tl -> if hd.Y > maxY then None else let toHd = Some(from, hd) let bestToRest = ClosestBoundY from maxY tl Shortest toHd bestToRest     let rec ClosestWithinRange ptsByY maxDy = match ptsByY with | [] -> None | hd :: tl -> let fromHd = ClosestBoundY hd (hd.Y + maxDy) tl let fromRest = ClosestWithinRange tl maxDy Shortest fromHd fromRest     // Cuts pts half way through it's length // Order is not maintained in result lists however let Halve pts = let rec ShiftToFirst first second n = match (n, second) with | 0, _ -> (first, second) // finished the split, so return current state | _, [] -> (first, []) // not enough items, so first takes the whole original list | n, hd::tl -> ShiftToFirst (hd :: first) tl (n-1) // shift 1st item from second to first, then recurse with n-1   let n = (List.length pts) / 2 ShiftToFirst [] pts n     let rec ClosestPair (pts : PointF list) = if List.length pts < 2 then None else let ptsByX = pts |> List.sortBy(fun(p) -> p.X)   let (left, right) = Halve ptsByX let leftResult = ClosestPair left let rightResult = ClosestPair right   let bestInHalf = Shortest leftResult rightResult let bestLength = Length bestInHalf   let divideX = List.head(right).X let inBand = pts |> List.filter(fun(p) -> Math.Abs(p.X - divideX) < bestLength)   let byY = inBand |> List.sortBy(fun(p) -> p.Y) let bestCross = ClosestWithinRange byY bestLength Shortest bestInHalf bestCross     let GeneratePoints n = let rand = new Random() [1..n] |> List.map(fun(i) -> new PointF(float32(rand.NextDouble()), float32(rand.NextDouble())))   let timer = Stopwatch.StartNew() let pts = GeneratePoints (50 * 1000) let closest = ClosestPair pts let takenMs = timer.ElapsedMilliseconds   printfn "Closest Pair '%A'. Distance %f" closest (Length closest) printfn "Took %d [ms]" takenMs   Fantom (Based on the Ruby example.)   
class Point { Float x Float y   // create a random point new make (Float x := Float.random * 10, Float y := Float.random * 10) { this.x = x this.y = y }   Float distance (Point p) { ((x-p.x)*(x-p.x) + (y-p.y)*(y-p.y)).sqrt }   override Str toStr () { "($x, $y)" } }   class Main { // use brute force approach static Point[] findClosestPair1 (Point[] points) { if (points.size < 2) return points // list too small Point[] closestPair := [points[0], points[1]] Float closestDistance := points[0].distance(points[1])   (1..<points.size).each |Int i| { ((i+1)..<points.size).each |Int j| { Float trydistance := points[i].distance(points[j]) if (trydistance < closestDistance) { closestPair = [points[i], points[j]] closestDistance = trydistance } } }   return closestPair }   // use recursive divide-and-conquer approach static Point[] findClosestPair2 (Point[] points) { if (points.size <= 3) return findClosestPair1(points) points.sort |Point a, Point b -> Int| { a.x <=> b.x } bestLeft := findClosestPair2 (points[0..(points.size/2)]) bestRight := findClosestPair2 (points[(points.size/2)..-1])   Float minDistance Point[] closePoints := [,] if (bestLeft[0].distance(bestLeft[1]) < bestRight[0].distance(bestRight[1])) { minDistance = bestLeft[0].distance(bestLeft[1]) closePoints = bestLeft } else { minDistance = bestRight[0].distance(bestRight[1]) closePoints = bestRight } yPoints := points.findAll |Point p -> Bool| { (points.last.x - p.x).abs < minDistance }.sort |Point a, Point b -> Int| { a.y <=> b.y }   closestPair := [,] closestDist := Float.posInf   for (Int i := 0; i < yPoints.size - 1; ++i) { for (Int j := (i+1); j < yPoints.size; ++j) { if ((yPoints[j].y - yPoints[i].y) >= minDistance) { break } else { dist := yPoints[i].distance (yPoints[j]) if (dist < closestDist) { closestDist = dist closestPair = [yPoints[i], yPoints[j]] } } } } if (closestDist < minDistance) return closestPair else return closePoints }   public static Void main (Str[] args) { Int numPoints := 10 // 
default value, in case a number not given on command line if ((args.size > 0) && (args[0].toInt(10, false) != null)) { numPoints = args[0].toInt(10, false) }   Point[] points := [,] numPoints.times { points.add (Point()) }   Int t1 := Duration.now.toMillis echo (findClosestPair1(points.dup)) Int t2 := Duration.now.toMillis echo ("Time taken: ${(t2-t1)}ms") echo (findClosestPair2(points.dup)) Int t3 := Duration.now.toMillis echo ("Time taken: ${(t3-t2)}ms") } }   Output: $ fan closestPoints 1000 [(1.4542885676006445, 8.238581003965352), (1.4528464044751888, 8.234724407229772)] Time taken: 88ms [(1.4528464044751888, 8.234724407229772), (1.4542885676006445, 8.238581003965352)] Time taken: 80ms $ fan closestPoints 10000 [(3.454790171891945, 5.307252398266497), (3.4540208686702245, 5.308350223433488)] Time taken: 6248ms [(3.454790171891945, 5.307252398266497), (3.4540208686702245, 5.308350223433488)] Time taken: 228ms Fortran See Closest pair problem/Fortran Go Brute force package main   import ( "fmt" "math" "math/rand" "time" )   type xy struct { x, y float64 }   const n = 1000 const scale = 100.   func d(p1, p2 xy) float64 { return math.Hypot(p2.x-p1.x, p2.y-p1.y) }   func main() { rand.Seed(time.Now().Unix()) points := make([]xy, n) for i := range points { points[i] = xy{rand.Float64() * scale, rand.Float64() * scale} } p1, p2 := closestPair(points) fmt.Println(p1, p2) fmt.Println("distance:", d(p1, p2)) }   func closestPair(points []xy) (p1, p2 xy) { if len(points) < 2 { panic("at least two points expected") } min := 2 * scale for i, q1 := range points[:len(points)-1] { for _, q2 := range points[i+1:] { if dq := d(q1, q2); dq < min { p1, p2 = q1, q2 min = dq } } } return } O(n) // implementation following algorithm described in // http://www.cs.umd.edu/~samir/grant/cp.pdf package main   import ( "fmt" "math" "math/rand" "time" )   // number of points to search for closest pair const n = 1e6   // size of bounding box for points. 
// x and y will be random with uniform distribution in the range [0,scale). const scale = 100.   // point struct type xy struct { x, y float64 // coordinates key int64 // an annotation used in the algorithm }   func d(p1, p2 xy) float64 { return math.Hypot(p2.x-p1.x, p2.y-p1.y) }   func main() { rand.Seed(time.Now().Unix()) points := make([]xy, n) for i := range points { points[i] = xy{rand.Float64() * scale, rand.Float64() * scale, 0} } p1, p2 := closestPair(points) fmt.Println(p1, p2) fmt.Println("distance:", d(p1, p2)) }   func closestPair(s []xy) (p1, p2 xy) { if len(s) < 2 { panic("2 points required") } var dxi float64 // step 0 for s1, i := s, 1; ; i++ { // step 1: compute min distance to a random point // (for the case of random data, it's enough to just try // to pick a different point) rp := i % len(s1) xi := s1[rp] dxi = 2 * scale for p, xn := range s1 { if p != rp { if dq := d(xi, xn); dq < dxi { dxi = dq } } }   // step 2: filter invB := 3 / dxi // b is size of a mesh cell mx := int64(scale*invB) + 1 // mx is number of cells along a side // construct map as a histogram: // key is index into mesh. value is count of points in cell hm := map[int64]int{} for ip, p := range s1 { key := int64(p.x*invB)*mx + int64(p.y*invB) s1[ip].key = key hm[key]++ } // construct s2 = s1 less the points without neighbors s2 := make([]xy, 0, len(s1)) nx := []int64{-mx - 1, -mx, -mx + 1, -1, 0, 1, mx - 1, mx, mx + 1} for i, p := range s1 { nn := 0 for _, ofs := range nx { nn += hm[p.key+ofs] if nn > 1 { s2 = append(s2, s1[i]) break } } }   // step 3: done? 
if len(s2) == 0 { break } s1 = s2 } // step 4: compute answer from approximation invB := 1 / dxi mx := int64(scale*invB) + 1 hm := map[int64][]int{} for i, p := range s { key := int64(p.x*invB)*mx + int64(p.y*invB) s[i].key = key hm[key] = append(hm[key], i) } nx := []int64{-mx - 1, -mx, -mx + 1, -1, 0, 1, mx - 1, mx, mx + 1} var min = scale * 2 for ip, p := range s { for _, ofs := range nx { for _, iq := range hm[p.key+ofs] { if ip != iq { if d1 := d(p, s[iq]); d1 < min { min = d1 p1, p2 = p, s[iq] } } } } } return p1, p2 } Groovy Point class: class Point { final Number x, y Point(Number x = 0, Number y = 0) { this.x = x; this.y = y } Number distance(Point that) { ((this.x - that.x)**2 + (this.y - that.y)**2)**0.5 } String toString() { "{x:${x}, y:${y}}" } } Brute force solution. Incorporates X-only and Y-only pre-checks in two places to cut down on the square root calculations: def bruteClosest(Collection pointCol) { assert pointCol List l = pointCol int n = l.size() assert n > 1 if (n == 2) return [distance:l[0].distance(l[1]), points:[l[0],l[1]]] def answer = [distance: Double.POSITIVE_INFINITY] (0..<(n-1)).each { i -> ((i+1)..<n).findAll { j -> (l[i].x - l[j].x).abs() < answer.distance && (l[i].y - l[j].y).abs() < answer.distance }.each { j -> if ((l[i].x - l[j].x).abs() < answer.distance && (l[i].y - l[j].y).abs() < answer.distance) { def dist = l[i].distance(l[j]) if (dist < answer.distance) { answer = [distance:dist, points:[l[i],l[j]]] } } } } answer } Elegant (divide-and-conquer reduction) solution. 
Incorporates X-only and Y-only pre-checks in two places (four if you count the inclusion of the brute force solution) to cut down on the square root calculations: def elegantClosest(Collection pointCol) { assert pointCol List xList = (pointCol as List).sort { it.x } List yList = xList.clone().sort { it.y } reductionClosest(xList, xList) }   def reductionClosest(List xPoints, List yPoints) { // assert xPoints && yPoints // assert (xPoints as Set) == (yPoints as Set) int n = xPoints.size() if (n < 10) return bruteClosest(xPoints)   int nMid = Math.ceil(n/2) List xLeft = xPoints[0..<nMid] List xRight = xPoints[nMid..<n] Number xMid = xLeft[-1].x List yLeft = yPoints.findAll { it.x <= xMid } List yRight = yPoints.findAll { it.x > xMid } if (xRight[0].x == xMid) { yLeft = xLeft.collect{ it }.sort { it.y } yRight = xRight.collect{ it }.sort { it.y } }   Map aLeft = reductionClosest(xLeft, yLeft) Map aRight = reductionClosest(xRight, yRight) Map aMin = aRight.distance < aLeft.distance ? aRight : aLeft List yMid = yPoints.findAll { (xMid - it.x).abs() < aMin.distance } int nyMid = yMid.size() if (nyMid < 2) return aMin   Map answer = aMin (0..<(nyMid-1)).each { i -> ((i+1)..<nyMid).findAll { j -> (yMid[j].x - yMid[i].x).abs() < aMin.distance && (yMid[j].y - yMid[i].y).abs() < aMin.distance && yMid[j].distance(yMid[i]) < aMin.distance }.each { k -> if ((yMid[k].x - yMid[i].x).abs() < answer.distance && (yMid[k].y - yMid[i].y).abs() < answer.distance) { def ikDist = yMid[i].distance(yMid[k]) if ( ikDist < answer.distance) { answer = [distance:ikDist, points:[yMid[i],yMid[k]]] } } } } answer } Benchmark/Test: def random = new Random()   (1..4).each { def point10 = (0..<(10**it)).collect { new Point(random.nextInt(1000001) - 500000,random.nextInt(1000001) - 500000) }   def startE = System.currentTimeMillis() def closestE = elegantClosest(point10) def elapsedE = System.currentTimeMillis() - startE println """ ${10**it} POINTS ----------------------------------------- Elegant 
reduction: elapsed: ${elapsedE/1000} s closest: ${closestE} """     def startB = System.currentTimeMillis() def closestB = bruteClosest(point10) def elapsedB = System.currentTimeMillis() - startB println """Brute force: elapsed: ${elapsedB/1000} s closest: ${closestB}   Speedup ratio (B/E): ${elapsedB/elapsedE} ========================================= """ } Results: 10 POINTS ----------------------------------------- Elegant reduction: elapsed: 0.019 s closest: [distance:85758.5249173515, points:[{x:310073, y:-27339}, {x:382387, y:18761}]] Brute force: elapsed: 0.001 s closest: [distance:85758.5249173515, points:[{x:310073, y:-27339}, {x:382387, y:18761}]] Speedup ratio (B/E): 0.0526315789 ========================================= 100 POINTS ----------------------------------------- Elegant reduction: elapsed: 0.019 s closest: [distance:3166.229934796271, points:[{x:-343735, y:-244394}, {x:-341099, y:-246148}]] Brute force: elapsed: 0.027 s closest: [distance:3166.229934796271, points:[{x:-343735, y:-244394}, {x:-341099, y:-246148}]] Speedup ratio (B/E): 1.4210526316 ========================================= 1000 POINTS ----------------------------------------- Elegant reduction: elapsed: 0.241 s closest: [distance:374.22586762542215, points:[{x:411817, y:-83016}, {x:412038, y:-82714}]] Brute force: elapsed: 0.618 s closest: [distance:374.22586762542215, points:[{x:411817, y:-83016}, {x:412038, y:-82714}]] Speedup ratio (B/E): 2.5643153527 ========================================= 10000 POINTS ----------------------------------------- Elegant reduction: elapsed: 1.957 s closest: [distance:79.00632886041473, points:[{x:187928, y:-452338}, {x:187929, y:-452259}]] Brute force: elapsed: 51.567 s closest: [distance:79.00632886041473, points:[{x:187928, y:-452338}, {x:187929, y:-452259}]] Speedup ratio (B/E): 26.3500255493 ========================================= Haskell BF solution: import Data.List (minimumBy, tails, unfoldr, foldl1') --'   import System.Random 
(newStdGen, randomRs)   import Control.Arrow ((&&&))   import Data.Ord (comparing)   vecLeng [[a, b], [p, q]] = sqrt $ (a - p) ^ 2 + (b - q) ^ 2   findClosestPair = foldl1'' ((minimumBy (comparing vecLeng) .) . (. return) . (:)) . concatMap (\(x:xs) -> map ((x :) . return) xs) . init . tails   testCP = do g <- newStdGen let pts :: [[Double]] pts = take 1000 . unfoldr (Just . splitAt 2) $ randomRs (-1, 1) g print . (id &&& vecLeng) . findClosestPair $ pts   main = testCP   foldl1'' = foldl1' Output: *Main> testCP ([[0.8347201880148426,0.40774840545089647],[0.8348731214261784,0.4087113189531284]],9.749825850154334e-4) (4.02 secs, 488869056 bytes) Icon and Unicon This is a brute force solution. It combines reading the points with computing the closest pair seen so far. record point(x,y)   procedure main() minDist := 0 minPair := &null every (points := [],p1 := readPoint()) do { if *points == 1 then minDist := dSquared(p1,points[1]) every minDist >=:= dSquared(p1,p2 := !points) do minPair := [p1,p2] push(points, p1) }   if \minPair then { write("(",minPair[1].x,",",minPair[1].y,") -> ", "(",minPair[2].x,",",minPair[2].y,")") } else write("One or fewer points!") end   procedure readPoint() # Skips lines that don't have two numbers on them suspend !&input ? point(numeric(tab(upto(', '))), numeric((move(1),tab(0)))) end   procedure dSquared(p1,p2) # Compute the square of the distance return (p2.x-p1.x)^2 + (p2.y-p1.y)^2 # (sufficient for closeness) end J Solution of the simpler (brute-force) problem: vecl =: +/"1&.:*: NB. length of each vector dist =: <@:vecl@:({: -"1 }:)\ NB. calculate all distances among vectors minpair=: ({~ > {.@($ #: I.@,)@:= <./@;)dist NB. find one pair of the closest points closestpairbf =: (; vecl@:-/)@minpair NB. 
the pair and their distance
Examples of use:
   ]pts=:10 2 ?@$ 0
0.654682 0.925557
0.409382 0.619391
0.891663 0.888594
0.716629   0.9962
0.477721 0.946355
0.925092  0.81822
0.624291 0.142924
0.211332 0.221507
0.293786 0.691701
0.839186  0.72826

   closestpairbf pts
+-----------------+---------+
|0.891663 0.888594|0.0779104|
|0.925092  0.81822|         |
+-----------------+---------+
The program also works for higher dimensional vectors:
   ]pts=:10 4 ?@$ 0
0.559164 0.482993     0.876  0.429769
0.217911 0.729463   0.97227  0.132175
0.479206 0.169165  0.495302  0.362738
0.316673 0.797519  0.745821 0.0598321
0.662585 0.726389  0.658895  0.653457
0.965094 0.664519  0.084712   0.20671
0.840877 0.591713  0.630206   0.99119
0.221416 0.114238 0.0991282  0.174741
0.946262 0.505672  0.776017  0.307362
0.262482 0.540054  0.707342  0.465234

   closestpairbf pts
+-------------------------------------+--------+
|0.217911 0.729463  0.97227  0.132175|0.708555|
|0.316673 0.797519 0.745821 0.0598321|        |
+-------------------------------------+--------+
Java
Both the brute-force and the divide-and-conquer methods are implemented.
Code: import java.util.*;   public class ClosestPair { public static class Point { public final double x; public final double y;   public Point(double x, double y) { this.x = x; this.y = y; }   public String toString() { return "(" + x + ", " + y + ")"; } }   public static class Pair { public Point point1 = null; public Point point2 = null; public double distance = 0.0;   public Pair() { }   public Pair(Point point1, Point point2) { this.point1 = point1; this.point2 = point2; calcDistance(); }   public void update(Point point1, Point point2, double distance) { this.point1 = point1; this.point2 = point2; this.distance = distance; }   public void calcDistance() { this.distance = distance(point1, point2); }   public String toString() { return point1 + "-" + point2 + " : " + distance; } }   public static double distance(Point p1, Point p2) { double xdist = p2.x - p1.x; double ydist = p2.y - p1.y; return Math.hypot(xdist, ydist); }   public static Pair bruteForce(List<? extends Point> points) { int numPoints = points.size(); if (numPoints < 2) return null; Pair pair = new Pair(points.get(0), points.get(1)); if (numPoints > 2) { for (int i = 0; i < numPoints - 1; i++) { Point point1 = points.get(i); for (int j = i + 1; j < numPoints; j++) { Point point2 = points.get(j); double distance = distance(point1, point2); if (distance < pair.distance) pair.update(point1, point2, distance); } } } return pair; }   public static void sortByX(List<? extends Point> points) { Collections.sort(points, new Comparator<Point>() { public int compare(Point point1, Point point2) { if (point1.x < point2.x) return -1; if (point1.x > point2.x) return 1; return 0; } } ); }   public static void sortByY(List<? extends Point> points) { Collections.sort(points, new Comparator<Point>() { public int compare(Point point1, Point point2) { if (point1.y < point2.y) return -1; if (point1.y > point2.y) return 1; return 0; } } ); }   public static Pair divideAndConquer(List<? 
extends Point> points) { List<Point> pointsSortedByX = new ArrayList<Point>(points); sortByX(pointsSortedByX); List<Point> pointsSortedByY = new ArrayList<Point>(points); sortByY(pointsSortedByY); return divideAndConquer(pointsSortedByX, pointsSortedByY); }   private static Pair divideAndConquer(List<? extends Point> pointsSortedByX, List<? extends Point> pointsSortedByY) { int numPoints = pointsSortedByX.size(); if (numPoints <= 3) return bruteForce(pointsSortedByX);   int dividingIndex = numPoints >>> 1; List<? extends Point> leftOfCenter = pointsSortedByX.subList(0, dividingIndex); List<? extends Point> rightOfCenter = pointsSortedByX.subList(dividingIndex, numPoints);   List<Point> tempList = new ArrayList<Point>(leftOfCenter); sortByY(tempList); Pair closestPair = divideAndConquer(leftOfCenter, tempList);   tempList.clear(); tempList.addAll(rightOfCenter); sortByY(tempList); Pair closestPairRight = divideAndConquer(rightOfCenter, tempList);   if (closestPairRight.distance < closestPair.distance) closestPair = closestPairRight;   tempList.clear(); double shortestDistance =closestPair.distance; double centerX = rightOfCenter.get(0).x; for (Point point : pointsSortedByY) if (Math.abs(centerX - point.x) < shortestDistance) tempList.add(point);   for (int i = 0; i < tempList.size() - 1; i++) { Point point1 = tempList.get(i); for (int j = i + 1; j < tempList.size(); j++) { Point point2 = tempList.get(j); if ((point2.y - point1.y) >= shortestDistance) break; double distance = distance(point1, point2); if (distance < closestPair.distance) { closestPair.update(point1, point2, distance); shortestDistance = distance; } } } return closestPair; }   public static void main(String[] args) { int numPoints = (args.length == 0) ? 
1000 : Integer.parseInt(args[0]);
        List<Point> points = new ArrayList<Point>();
        Random r = new Random();
        for (int i = 0; i < numPoints; i++)
            points.add(new Point(r.nextDouble(), r.nextDouble()));
        System.out.println("Generated " + numPoints + " random points");
        long startTime = System.currentTimeMillis();
        Pair bruteForceClosestPair = bruteForce(points);
        long elapsedTime = System.currentTimeMillis() - startTime;
        System.out.println("Brute force (" + elapsedTime + " ms): " + bruteForceClosestPair);
        startTime = System.currentTimeMillis();
        Pair dqClosestPair = divideAndConquer(points);
        elapsedTime = System.currentTimeMillis() - startTime;
        System.out.println("Divide and conquer (" + elapsedTime + " ms): " + dqClosestPair);
        if (bruteForceClosestPair.distance != dqClosestPair.distance)
            System.out.println("MISMATCH");
    }
}
Output:
java ClosestPair 10000
Generated 10000 random points
Brute force (1594 ms): (0.9246533850872104, 0.098709007587097)-(0.924591196030625, 0.09862206991823985) : 1.0689077146927108E-4
Divide and conquer (250 ms): (0.924591196030625, 0.09862206991823985)-(0.9246533850872104, 0.098709007587097) : 1.0689077146927108E-4
JavaScript
Using the brute-force algorithm, the bruteforceClosestPair method below expects an array of objects with x- and y-members set to numbers, and returns an object containing the members distance and points.
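The contract just described (a list of points in; the minimum distance and the closest pair out) is easy to cross-check in another language. Here is a hedged Python sketch of the same brute-force approach; the function name and the sample points are mine, not taken from any of the listings on this page:

```python
import math
from itertools import combinations

def brute_force_closest(points):
    """Return (distance, (p1, p2)) for the closest of all n(n-1)/2 pairs.

    points is a list of (x, y) tuples; returns None for fewer than 2 points.
    """
    if len(points) < 2:
        return None
    # Take the minimum over every unordered pair, keyed by Euclidean distance.
    return min((math.dist(p, q), (p, q)) for p, q in combinations(points, 2))

# The nearest two of these three points are (0, 0) and (1, 0), distance 1.0
d, pair = brute_force_closest([(0, 0), (5, 5), (1, 0)])
```

Like the JavaScript version that follows, this examines every pair, so it runs in O(n²) time.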
function distance(p1, p2) { var dx = Math.abs(p1.x - p2.x); var dy = Math.abs(p1.y - p2.y); return Math.sqrt(dx*dx + dy*dy); }   function bruteforceClosestPair(arr) { if (arr.length < 2) { return Infinity; } else { var minDist = distance(arr[0], arr[1]); var minPoints = arr.slice(0, 2);   for (var i=0; i<arr.length-1; i++) { for (var j=i+1; j<arr.length; j++) { if (distance(arr[i], arr[j]) < minDist) { minDist = distance(arr[i], arr[j]); minPoints = [ arr[i], arr[j] ]; } } } return { distance: minDist, points: minPoints }; } } divide-and-conquer method:     var Point = function(x, y) { this.x = x; this.y = y; }; Point.prototype.getX = function() { return this.x; }; Point.prototype.getY = function() { return this.y; };   var mergeSort = function mergeSort(points, comp) { if(points.length < 2) return points;     var n = points.length, i = 0, j = 0, leftN = Math.floor(n / 2), rightN = leftN;     var leftPart = mergeSort( points.slice(0, leftN), comp), rightPart = mergeSort( points.slice(rightN), comp );   var sortedPart = [];   while((i < leftPart.length) && (j < rightPart.length)) { if(comp(leftPart[i], rightPart[j]) < 0) { sortedPart.push(leftPart[i]); i += 1; } else { sortedPart.push(rightPart[j]); j += 1; } } while(i < leftPart.length) { sortedPart.push(leftPart[i]); i += 1; } while(j < rightPart.length) { sortedPart.push(rightPart[j]); j += 1; } return sortedPart; };   var closestPair = function _closestPair(Px, Py) { if(Px.length < 2) return { distance: Infinity, pair: [ new Point(0, 0), new Point(0, 0) ] }; if(Px.length < 3) { //find euclid distance var d = Math.sqrt( Math.pow(Math.abs(Px[1].x - Px[0].x), 2) + Math.pow(Math.abs(Px[1].y - Px[0].y), 2) ); return { distance: d, pair: [ Px[0], Px[1] ] }; }   var n = Px.length, leftN = Math.floor(n / 2), rightN = leftN;   var Xl = Px.slice(0, leftN), Xr = Px.slice(rightN), Xm = Xl[leftN - 1], Yl = [], Yr = []; //separate Py for(var i = 0; i < Py.length; i += 1) { if(Py[i].x <= Xm.x) Yl.push(Py[i]); else 
Yr.push(Py[i]); }   var dLeft = _closestPair(Xl, Yl), dRight = _closestPair(Xr, Yr);   var minDelta = dLeft.distance, closestPair = dLeft.pair; if(dLeft.distance > dRight.distance) { minDelta = dRight.distance; closestPair = dRight.pair; }     //filter points around Xm within delta (minDelta) var closeY = []; for(i = 0; i < Py.length; i += 1) { if(Math.abs(Py[i].x - Xm.x) < minDelta) closeY.push(Py[i]); } //find min within delta. 8 steps max for(i = 0; i < closeY.length; i += 1) { for(var j = i + 1; j < Math.min( (i + 8), closeY.length ); j += 1) { var d = Math.sqrt( Math.pow(Math.abs(closeY[j].x - closeY[i].x), 2) + Math.pow(Math.abs(closeY[j].y - closeY[i].y), 2) ); if(d < minDelta) { minDelta = d; closestPair = [ closeY[i], closeY[j] ] } } }   return { distance: minDelta, pair: closestPair }; };     var points = [ new Point(0.748501, 4.09624), new Point(3.00302, 5.26164), new Point(3.61878, 9.52232), new Point(7.46911, 4.71611), new Point(5.7819, 2.69367), new Point(2.34709, 8.74782), new Point(2.87169, 5.97774), new Point(6.33101, 0.463131), new Point(7.46489, 4.6268), new Point(1.45428, 0.087596) ];   var sortX = function (a, b) { return (a.x < b.x) ? -1 : ((a.x > b.x) ? 1 : 0); } var sortY = function (a, b) { return (a.y < b.y) ? -1 : ((a.y > b.y) ? 
1 : 0); }   var Px = mergeSort(points, sortX); var Py = mergeSort(points, sortY);   console.log(JSON.stringify(closestPair(Px, Py))) // {"distance":0.0894096443343775,"pair":[{"x":7.46489,"y":4.6268},{"x":7.46911,"y":4.71611}]}   var points2 = [new Point(37100, 13118), new Point(37134, 1963), new Point(37181, 2008), new Point(37276, 21611), new Point(37307, 9320)];   Px = mergeSort(points2, sortX); Py = mergeSort(points2, sortY);   console.log(JSON.stringify(closestPair(Px, Py))); // {"distance":65.06919393998976,"pair":[{"x":37134,"y":1963},{"x":37181,"y":2008}]}     jq Works with: jq version 1.4 The solution presented here is essentially a direct translation into jq of the pseudo-code presented in the task description, but "closest_pair" is added so that any list of [x,y] points can be presented, and extra lines are added to ensure that xL and yL have the same lengths. Infrastructure: # This definition of "until" is included in recent versions (> 1.4) of jq # Emit the first input that satisfied the condition def until(cond; next): def _until: if cond then . else (next|_until) end; _until;   # Euclidean 2d distance def dist(x;y): [x[0] - y[0], x[1] - y[1]] | map(.*.) | add | sqrt;   # P is an array of points, [x,y]. # Emit the solution in the form [dist, [P1, P2]] def bruteForceClosestPair(P): (P|length) as $length | if $length < 2 then null else reduce range(0; $length-1) as $i ( null; reduce range($i+1; $length) as $j (.; dist(P[$i]; P[$j]) as $d | if . == null or $d < .[0] then [$d, [ P[$i], P[$j] ] ] else . end ) ) end;   def closest_pair:   def abs: if . < 0 then -. else . end; def ceil: floor as $floor | if . == $floor then $floor else $floor + 1 end;   # xP is an array [P(1), .. P(N)] sorted by x coordinate, and # yP is an array [P(1), .. P(N)] sorted by y coordinate (ascending order). # if N <= 3 then return closest points of xP using the brute-force algorithm. 
def closestPair(xP; yP): if xP|length <= 3 then bruteForceClosestPair(xP) else ((xP|length)/2|ceil) as $N | xP[0:$N] as $xL | xP[$N:] as $xR | xP[$N-1][0] as $xm # middle | (yP | map(select(.[0] <= $xm ))) as $yL0 # might be too long | (yP | map(select(.[0] > $xm ))) as $yR0 # might be too short | (if $yL0|length == $N then $yL0 else $yL0[0:$N] end) as $yL | (if $yL0|length == $N then $yR0 else $yL0[$N:] + $yR0 end) as $yR | closestPair($xL; $yL) as $pairL # [dL, pairL] | closestPair($xR; $yR) as $pairR # [dR, pairR] | (if $pairL[0] < $pairR[0] then $pairL else $pairR end) as $pair # [ dmin, pairMin] | (yP | map(select( (($xm - .[0])|abs) < $pair[0]))) as $yS | ($yS | length) as $nS | $pair[0] as $dmin | reduce range(0; $nS - 1) as $i ( [0, $pair]; # state: [k, [d, [P1,P2]]] .[0] = $i + 1 | until( .[0] as $k | $k >= $nS or ($yS[$k][1] - $yS[$i][1]) >= $dmin; .[0] as $k | dist($yS[$k]; $yS[$i]) as $d | if $d < .[1][0] then [$k+1, [ $d, [$yS[$k], $yS[$i]]]] else .[0] += 1 end) ) | .[1] end; closestPair( sort_by(.[0]); sort_by(.[1])) ; Example from the Mathematica section: def data: [[0.748501, 4.09624], [3.00302, 5.26164], [3.61878, 9.52232], [7.46911, 4.71611], [5.7819, 2.69367], [2.34709, 8.74782], [2.87169, 5.97774], [6.33101, 0.463131], [7.46489, 4.6268], [1.45428, 0.087596] ];   data | closest_pair Output: $jq -M -c -n -f closest_pair.jq [0.0894096443343775,[[7.46489,4.6268],[7.46911,4.71611]]] Kotlin // version 1.1.2   typealias Point = Pair<Double, Double>   fun distance(p1: Point, p2: Point) = Math.hypot(p1.first- p2.first, p1.second - p2.second)   fun bruteForceClosestPair(p: List<Point>): Pair<Double, Pair<Point, Point>> { val n = p.size if (n < 2) throw IllegalArgumentException("Must be at least two points") var minPoints = p[0] to p[1] var minDistance = distance(p[0], p[1]) for (i in 0 until n - 1) for (j in i + 1 until n) { val dist = distance(p[i], p[j]) if (dist < minDistance) { minDistance = dist minPoints = p[i] to p[j] } } return minDistance to 
Pair(minPoints.first, minPoints.second) }   fun optimizedClosestPair(xP: List<Point>, yP: List<Point>): Pair<Double, Pair<Point, Point>> { val n = xP.size if (n <= 3) return bruteForceClosestPair(xP) val xL = xP.take(n / 2) val xR = xP.drop(n / 2) val xm = xP[n / 2 - 1].first val yL = yP.filter { it.first <= xm } val yR = yP.filter { it.first > xm } val (dL, pairL) = optimizedClosestPair(xL, yL) val (dR, pairR) = optimizedClosestPair(xR, yR) var dmin = dR var pairMin = pairR if (dL < dR) { dmin = dL pairMin = pairL } val yS = yP.filter { Math.abs(xm - it.first) < dmin } val nS = yS.size var closest = dmin var closestPair = pairMin for (i in 0 until nS - 1) { var k = i + 1 while (k < nS && (yS[k].second - yS[i].second < dmin)) { val dist = distance(yS[k], yS[i]) if (dist < closest) { closest = dist closestPair = Pair(yS[k], yS[i]) } k++ } } return closest to closestPair }     fun main(args: Array<String>) { val points = listOf( listOf( 5.0 to 9.0, 9.0 to 3.0, 2.0 to 0.0, 8.0 to 4.0, 7.0 to 4.0, 9.0 to 10.0, 1.0 to 9.0, 8.0 to 2.0, 0.0 to 10.0, 9.0 to 6.0 ), listOf( 0.654682 to 0.925557, 0.409382 to 0.619391, 0.891663 to 0.888594, 0.716629 to 0.996200, 0.477721 to 0.946355, 0.925092 to 0.818220, 0.624291 to 0.142924, 0.211332 to 0.221507, 0.293786 to 0.691701, 0.839186 to 0.728260 ) ) for (p in points) { val (dist, pair) = bruteForceClosestPair(p) println("Closest pair (brute force) is ${pair.first} and ${pair.second}, distance $dist") val xP = p.sortedBy { it.first } val yP = p.sortedBy { it.second } val (dist2, pair2) = optimizedClosestPair(xP, yP) println("Closest pair (optimized) is ${pair2.first} and ${pair2.second}, distance $dist2\n") } } Output: Closest pair (brute force) is (8.0, 4.0) and (7.0, 4.0), distance 1.0 Closest pair (optimized) is (7.0, 4.0) and (8.0, 4.0), distance 1.0 Closest pair (brute force) is (0.891663, 0.888594) and (0.925092, 0.81822), distance 0.07791019135517516 Closest pair (optimized) is (0.891663, 0.888594) and (0.925092, 0.81822), 
distance 0.07791019135517516 Liberty BASIC NB array terms can not be READ directly.   N =10   dim x( N), y( N)   firstPt =0 secondPt =0   for i =1 to N read f: x( i) =f read f: y( i) =f next i   minDistance =1E6   for i =1 to N -1 for j =i +1 to N dxSq =( x( i) -x( j))^2 dySq =( y( i) -y( j))^2 D =abs( ( dxSq +dySq)^0.5) if D <minDistance then minDistance =D firstPt =i secondPt =j end if next j next i   print "Distance ="; minDistance; " between ( "; x( firstPt); ", "; y( firstPt); ") and ( "; x( secondPt); ", "; y( secondPt); ")"   end   data 0.654682, 0.925557 data 0.409382, 0.619391 data 0.891663, 0.888594 data 0.716629, 0.996200 data 0.477721, 0.946355 data 0.925092, 0.818220 data 0.624291, 0.142924 data 0.211332, 0.221507 data 0.293786, 0.691701 data 0.839186, 0.72826     Distance =0.77910191e-1 between ( 0.891663, 0.888594) and ( 0.925092, 0.81822) Mathematica / Wolfram Language nearestPair[data_] := Block[{pos, dist = N[Outer[EuclideanDistance, data, data, 1]]}, pos = Position[dist, Min[DeleteCases[Flatten[dist], 0.]]]; data[[pos[[1]]]]] Output: nearestPair[{{0.748501, 4.09624}, {3.00302, 5.26164}, {3.61878, 9.52232}, {7.46911, 4.71611}, {5.7819, 2.69367}, {2.34709, 8.74782}, {2.87169, 5.97774}, {6.33101, 0.463131}, {7.46489, 4.6268}, {1.45428, 0.087596}}] {{7.46911, 4.71611}, {7.46489, 4.6268}} MATLAB This solution is an almost direct translation of the above pseudo-code into MATLAB. 
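Several sections, this MATLAB one included, describe themselves as direct translations of the task's pseudo-code, so a compact executable rendering of that pseudo-code may be a useful reference before the MATLAB listing. The Python sketch below follows the same recursion, including the pseudo-code's simple `x <= xm` split (which assumes distinct x-coordinates at the divide); all names here are mine:

```python
import math

def closest_pair(pts):
    """O(n log n) divide-and-conquer closest pair; pts is a list of (x, y) tuples."""
    def solve(xp, yp):                       # xp sorted by x, yp sorted by y
        n = len(xp)
        if n <= 3:                           # brute force on tiny subproblems
            return min((math.dist(p, q), (p, q))
                       for i, p in enumerate(xp) for q in xp[i + 1:])
        mid = n // 2
        xm = xp[mid - 1][0]                  # x-coordinate of the dividing line
        yl = [p for p in yp if p[0] <= xm]   # left half, still sorted by y
        yr = [p for p in yp if p[0] > xm]    # right half, still sorted by y
        best = min(solve(xp[:mid], yl), solve(xp[mid:], yr))
        # Candidates within best-distance of the dividing line, sorted by y.
        strip = [p for p in yp if abs(p[0] - xm) < best[0]]
        for i, p in enumerate(strip):
            for q in strip[i + 1:]:
                if q[1] - p[1] >= best[0]:   # too far apart vertically
                    break
                best = min(best, (math.dist(p, q), (p, q)))
        return best
    return solve(sorted(pts), sorted(pts, key=lambda p: p[1]))

# Ten points from the Mathematica example above
data = [(0.748501, 4.09624), (3.00302, 5.26164), (3.61878, 9.52232),
        (7.46911, 4.71611), (5.7819, 2.69367), (2.34709, 8.74782),
        (2.87169, 5.97774), (6.33101, 0.463131), (7.46489, 4.6268),
        (1.45428, 0.087596)]
d, pair = closest_pair(data)
```

On this data it reports the same result as the Mathematica and jq sections: the pair (7.46489, 4.6268) and (7.46911, 4.71611), at distance ≈ 0.08941.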
function [closest,closestpair] = closestPair(xP,yP)   N = numel(xP);   if(N <= 3)   %Brute force closestpair if(N < 2) closest = +Inf; closestpair = {}; else closest = norm(xP{1}-xP{2}); closestpair = {xP{1},xP{2}};   for i = ( 1:N-1 ) for j = ( (i+1):N ) if ( norm(xP{i} - xP{j}) < closest ) closest = norm(xP{i}-xP{j}); closestpair = {xP{i},xP{j}}; end %if end %for end %for end %if (N < 2) else   halfN = ceil(N/2);   xL = { xP{1:halfN} }; xR = { xP{halfN+1:N} }; xm = xP{halfN}(1);   %cellfun( @(p)le(p(1),xm),yP ) is the same as { p ∈ yP : px ≤ xm } yLIndicies = cellfun( @(p)le(p(1),xm),yP );   yL = { yP{yLIndicies} }; yR = { yP{~yLIndicies} };   [dL,pairL] = closestPair(xL,yL); [dR,pairR] = closestPair(xR,yR);   if dL < dR dmin = dL; pairMin = pairL; else dmin = dR; pairMin = pairR; end   %cellfun( @(p)lt(norm(xm-p(1)),dmin),yP ) is the same as %{ p ∈ yP : |xm - px| < dmin } yS = {yP{ cellfun( @(p)lt(norm(xm-p(1)),dmin),yP ) }}; nS = numel(yS);   closest = dmin; closestpair = pairMin;   for i = (1:nS-1) k = i+1;   while( (k<=nS) && (yS{k}(2)-yS{i}(2) < dmin) )   if norm(yS{k}-yS{i}) < closest closest = norm(yS{k}-yS{i}); closestpair = {yS{k},yS{i}}; end   k = k+1; end %while end %for end %if (N <= 3) end %closestPair Output: [distance,pair]=closestPair({[0 -.3],[1 1],[1.5 2],[2 2],[3 3]},{[0 -.3],[1 1],[1.5 2],[2 2],[3 3]})   distance =   0.500000000000000     pair =   [1x2 double] [1x2 double] %The pair is [1.5 2] and [2 2] which is correct Objective-C See Closest-pair problem/Objective-C OCaml     type point = { x : float; y : float }     let cmpPointX (a : point) (b : point) = compare a.x b.x let cmpPointY (a : point) (b : point) = compare a.y b.y     let distSqrd (seg : (point * point) option) = match seg with | None -> max_float | Some(line) -> let a = fst line in let b = snd line in   let dx = a.x -. b.x in let dy = a.y -. b.y in   dx*.dx +. 
dy*.dy     let dist seg = sqrt (distSqrd seg)     let shortest l1 l2 = if distSqrd l1 < distSqrd l2 then l1 else l2     let halve l = let n = List.length l in BatList.split_at (n/2) l     let rec closestBoundY from maxY (ptsByY : point list) = match ptsByY with | [] -> None | hd :: tl -> if hd.y > maxY then None else let toHd = Some(from, hd) in let bestToRest = closestBoundY from maxY tl in shortest toHd bestToRest     let rec closestInRange ptsByY maxDy = match ptsByY with | [] -> None | hd :: tl -> let fromHd = closestBoundY hd (hd.y +. maxDy) tl in let fromRest = closestInRange tl maxDy in shortest fromHd fromRest     let rec closestPairByX (ptsByX : point list) = if List.length ptsByX < 2 then None else let (left, right) = halve ptsByX in let leftResult = closestPairByX left in let rightResult = closestPairByX right in   let bestInHalf = shortest leftResult rightResult in let bestLength = dist bestInHalf in   let divideX = (List.hd right).x in let inBand = List.filter(fun(p) -> abs_float(p.x -. divideX) < bestLength) ptsByX in   let byY = List.sort cmpPointY inBand in let bestCross = closestInRange byY bestLength in shortest bestInHalf bestCross     let closestPair pts = let ptsByX = List.sort cmpPointX pts in closestPairByX ptsByX     let parsePoint str = let sep = Str.regexp_string "," in let tokens = Str.split sep str in let xStr = List.nth tokens 0 in let yStr = List.nth tokens 1 in   let xVal = (float_of_string xStr) in let yVal = (float_of_string yStr) in   { x = xVal; y = yVal }     let loadPoints filename = let ic = open_in filename in let result = ref [] in try while true do let s = input_line ic in if s <> "" then let p = parsePoint s in result := p :: !result; done; !result with End_of_file -> close_in ic; !result ;;   let loaded = (loadPoints "Points.txt") in let start = Sys.time() in let c = closestPair loaded in let taken = Sys.time() -. 
start in Printf.printf "Took %f [s]\n" taken;   match c with | None -> Printf.printf "No closest pair\n" | Some(seg) -> let a = fst seg in let b = snd seg in   Printf.printf "(%f, %f) (%f, %f) Dist %f\n" a.x a.y b.x b.y (dist c)     Oz Translation of pseudocode: declare fun {Distance X1#Y1 X2#Y2} {Sqrt {Pow X2-X1 2.0} + {Pow Y2-Y1 2.0}} end   %% brute force fun {BFClosestPair Points=P1|P2|_} Ps = {List.toTuple unit Points} %% for efficient random access N = {Width Ps} MinDist = {NewCell {Distance P1 P2}} MinPoints = {NewCell P1#P2} in for I in 1..N-1 do for J in I+1..N do IJDist = {Distance Ps.I Ps.J} in if IJDist < @MinDist then MinDist := IJDist MinPoints := Ps.I#Ps.J end end end @MinPoints end   %% divide and conquer fun {ClosestPair Points} case {ClosestPair2 {Sort Points {LessThanBy X}} {Sort Points {LessThanBy Y}}} of Distance#Pair then Pair end end   %% XP: points sorted by X, YP: sorted by Y %% returns a pair Distance#Pair fun {ClosestPair2 XP YP} N = {Length XP} = {Length YP} in if N =< 3 then P = {BFClosestPair XP} in {Distance P.1 P.2}#P else XL XR {List.takeDrop XP (N div 2) ?XL ?XR} XM = {Nth XP (N div 2)}.X YL YR {List.partition YP fun {$ P} P.X =< XM end ?YL ?YR} DL#PairL = {ClosestPair2 XL YL} DR#PairR = {ClosestPair2 XR YR} DMin#PairMin = if DL < DR then DL#PairL else DR#PairR end YSList = {Filter YP fun {$ P} {Abs XM-P.X} < DMin end} YS = {List.toTuple unit YSList} %% for efficient random access NS = {Width YS} Closest = {NewCell DMin} ClosestPair = {NewCell PairMin} in for I in 1..NS-1 do for K in I+1..NS while:YS.K.Y - YS.I.Y < DMin do DistKI = {Distance YS.K YS.I} in if DistKI < @Closest then Closest := DistKI ClosestPair := YS.K#YS.I end end end @Closest#@ClosestPair end end   %% To access components when points are represented as pairs X = 1 Y = 2   %% returns a less-than predicate that accesses feature F fun {LessThanBy F} fun {$ A B} A.F < B.F end end   fun {Random Min Max} Min + {Int.toFloat {OS.rand}} * (Max-Min) / {Int.toFloat 
{OS.randLimits _}} end   fun {RandomPoint} {Random 0.0 100.0}#{Random 0.0 100.0} end   Points = {MakeList 5} in {ForAll Points RandomPoint} {Show Points} {Show {ClosestPair Points}} PARI/GP Naive quadratic solution. closestPair(v)={ my(r=norml2(v[1]-v[2]),at=[1,2]); for(a=1,#v-1, for(b=a+1,#v, if(norml2(v[a]-v[b])<r, at=[a,b]; r=norml2(v[a]-v[b]) ) ) ); [v[at[1]],v[at[2]]] }; Pascal Brute force; only the squared distance is compared (as in the AWK version), so no sqrt is needed in the inner loop. About as fast as the D version. program closestPoints; {$IFDEF FPC} {$MODE Delphi} {$ENDIF} const PointCnt = 10000;//31623; type TdblPoint = Record ptX, ptY : double; end; tPtLst = array of TdblPoint;   tMinDIstIdx = record md1, md2 : NativeInt; end;   function ClosPointBruteForce(var ptl :tPtLst):tMinDIstIdx; Var i,j,k : NativeInt; mindst2,dst2: double; //square of distance, no need to sqrt p0,p1 : ^TdblPoint; //using pointer, since calc of ptl[?] takes much time Begin i := Low(ptl); j := High(ptl); result.md1 := i;result.md2 := j; mindst2 := sqr(ptl[i].ptX-ptl[j].ptX)+sqr(ptl[i].ptY-ptl[j].ptY); repeat p0 := @ptl[i]; p1 := p0; inc(p1); For k := i+1 to j do Begin dst2:= sqr(p0^.ptX-p1^.ptX)+sqr(p0^.ptY-p1^.ptY); IF mindst2 > dst2 then Begin mindst2 := dst2; result.md1 := i; result.md2 := k; end; inc(p1); end; inc(i); until i = j; end;   var PointLst :tPtLst; cloPt : tMinDIstIdx; i : NativeInt; Begin randomize; setlength(PointLst,PointCnt); For i := 0 to PointCnt-1 do with PointLst[i] do Begin ptX := random; ptY := random; end; cloPt:= ClosPointBruteForce(PointLst) ; i := cloPt.md1; Writeln('P[',i:4,']= x: ',PointLst[i].ptX:0:8, ' y: ',PointLst[i].ptY:0:8); i := cloPt.md2; Writeln('P[',i:4,']= x: ',PointLst[i].ptX:0:8, ' y: ',PointLst[i].ptY:0:8); end.
Output: PointCnt = 10000 //without randomize always same results //32-Bit P[ 324]= x: 0.26211815 y: 0.45851455 P[3391]= x: 0.26217852 y: 0.45849116 real 0m0.114s //fpc 3.1.1 32 Bit -O4 -MDelphi..cpu i4330 3.5 Ghz //64-Bit doubles the speed comp switch -O2 ..-O4 same timings P[ 324]= x: 0.26211815 y: 0.45851455 P[3391]= x: 0.26217852 y: 0.45849116 real 0m0.059s //fpc 3.1.1 64 Bit -O4 -MDelphi..cpu i4330 3.5 Ghz //with randomize P[ 47]= x: 0.12408823 y: 0.04501338 P[9429]= x: 0.12399629 y: 0.04496700 //32-Bit PointCnt = { 10000*sqrt(10) } 31623;-> real 0m1.112s 10x times runtime Perl #! /usr/bin/perl use strict; use POSIX qw(ceil);   sub dist { my ( $a, $b) = @_; return sqrt( ($a->[0] - $b->[0])**2 + ($a->[1] - $b->[1])**2 ); }   sub closest_pair_simple { my $ra = shift; my @arr = @$ra; my $inf = 1e600; return $inf if scalar(@arr) < 2; my ( $a, $b, $d ) = ($arr[0], $arr[1], dist($arr[0], $arr[1])); while( @arr ) { my $p = pop @arr; foreach my $l (@arr) { my $t = dist($p, $l); ($a, $b, $d) = ($p, $l, $t) if $t < $d; } } return ($a, $b, $d); }   sub closest_pair { my $r = shift; my @ax = sort { $a->[0] <=> $b->[0] } @$r; my @ay = sort { $a->[1] <=> $b->[1] } @$r; return closest_pair_real(\@ax, \@ay); }   sub closest_pair_real { my ($rx, $ry) = @_; my @xP = @$rx; my @yP = @$ry; my $N = @xP; return closest_pair_simple($rx) if scalar(@xP) <= 3;   my $inf = 1e600; my $midx = ceil($N/2)-1;   my @PL = @xP[0 .. $midx]; my @PR = @xP[$midx+1 .. $N-1];   my $xm = ${$xP[$midx]}[0];   my @yR = (); my @yL = (); foreach my $p (@yP) { if ( ${$p}[0] <= $xm ) { push @yR, $p; } else { push @yL, $p; } }   my ($al, $bl, $dL) = closest_pair_real(\@PL, \@yR); my ($ar, $br, $dR) = closest_pair_real(\@PR, \@yL);   my ($m1, $m2, $dmin) = ($al, $bl, $dL); ($m1, $m2, $dmin) = ($ar, $br, $dR) if $dR < $dL;   my @yS = (); foreach my $p (@yP) { push @yS, $p if abs($xm - ${$p}[0]) < $dmin; }   if ( @yS ) { my ( $w1, $w2, $closest ) = ($m1, $m2, $dmin); foreach my $i (0 .. 
($#yS - 1)) {   my $k = $i + 1; while ( ($k <= $#yS) && ( (${$yS[$k]}[1] - ${$yS[$i]}[1]) < $dmin) ) { my $d = dist($yS[$k], $yS[$i]); ($w1, $w2, $closest) = ($yS[$k], $yS[$i], $d) if $d < $closest; $k++; }   } return ($w1, $w2, $closest);   } else { return ($m1, $m2, $dmin); } }       my @points = (); my $N = 5000;   foreach my $i (1..$N) { push @points, [rand(20)-10.0, rand(20)-10.0]; }     my ($a, $b, $d) = closest_pair_simple(\@points); print "$d\n";   my ($a1, $b1, $d1) = closest_pair(\@points); #print "$d1\n"; With 5000 points, the brute-force algorithm took 40.63user 0.12system 0:41.06elapsed, while the divide-and-conquer algorithm took 0.37user 0.00system 0:00.38elapsed. Perl 6 Translation of: Perl 5 We avoid taking square roots in the slow method because the squares are just as comparable. (This doesn't always work in the fast method because of distance assumptions in the algorithm.) sub MAIN ($N = 5000) { my @points = (^$N).map: { [rand * 20 - 10, rand * 20 - 10] }   my ($af, $bf, $df) = closest_pair(@points); say "fast $df at [$af], [$bf]";   my ($as, $bs, $ds) = closest_pair_simple(@points); say "slow $ds at [$as], [$bs]"; }   sub dist-squared($a,$b) { ($a[0] - $b[0]) ** 2 + ($a[1] - $b[1]) ** 2; }   sub closest_pair_simple(@arr is copy) { return Inf if @arr < 2; my ($a, $b, $d) = flat @arr[0,1], dist-squared(|@arr[0,1]); while @arr { my $p = pop @arr; for @arr -> $l { my $t = dist-squared($p, $l); ($a, $b, $d) = $p, $l, $t if $t < $d; } } return $a, $b, sqrt $d; }   sub closest_pair(@r) { my @ax = @r.sort: { .[0] } my @ay = @r.sort: { .[1] } return closest_pair_real(@ax, @ay); }   sub closest_pair_real(@rx, @ry) { return closest_pair_simple(@rx) if @rx <= 3;   my @xP = @rx; my @yP = @ry; my $N = @xP;   my $midx = ceiling($N/2)-1;   my @PL = @xP[0 .. $midx]; my @PR = @xP[$midx+1 ..^ $N];   my $xm = @xP[$midx][0];   my @yR; my @yL; push ($_[0] <= $xm ?? @yR !!
@yL), $_ for @yP;   my ($al, $bl, $dL) = closest_pair_real(@PL, @yR); my ($ar, $br, $dR) = closest_pair_real(@PR, @yL);   my ($m1, $m2, $dmin) = $dR < $dL ?? ($ar, $br, $dR) !! ($al, $bl, $dL);   my @yS = @yP.grep: { abs($xm - .[0]) < $dmin }   if @yS { my ($w1, $w2, $closest) = $m1, $m2, $dmin; for 0 ..^ @yS.end -> $i { for $i+1 ..^ @yS -> $k { last unless @yS[$k][1] - @yS[$i][1] < $dmin; my $d = sqrt dist-squared(@yS[$k], @yS[$i]); ($w1, $w2, $closest) = @yS[$k], @yS[$i], $d if $d < $closest; }   } return $w1, $w2, $closest;   } else { return $m1, $m2, $dmin; } } Phix Brute force and divide and conquer (translated from pseudocode) approaches compared function bruteForceClosestPair(sequence s) atom {x1,y1} = s[1], {x2,y2} = s[2], dx = x1-x2, dy = y1-y2, mind = dx*dx+dy*dy sequence minp = s[1..2] for i=1 to length(s)-1 do {x1,y1} = s[i] for j=i+1 to length(s) do {x2,y2} = s[j] dx = x1-x2 dx = dx*dx if dx<mind then dy = y1-y2 dx += dy*dy if dx<mind then mind = dx minp = {s[i],s[j]} end if end if end for end for return {sqrt(mind),minp} end function   sequence testset = sq_rnd(repeat({1,1},10000)) atom t0 = time() sequence points atom d {d,points} = bruteForceClosestPair(testset) -- (Sorting the final point pair makes brute/dc more likely to tally. Note however -- when >1 equidistant pairs exist, brute and dc may well return different pairs; -- it is only a problem if they decide to return different minimum distances.) atom {{x1,y1},{x2,y2}} = sort(points) printf(1,"Closest pair: {%f,%f} {%f,%f}, distance=%f (%3.2fs)\n",{x1,y1,x2,y2,d,time()-t0})   t0 = time() constant X = 1, Y = 2 sequence xP = sort(testset)   function byY(sequence p1, p2) return compare(p1[Y],p2[Y]) end function sequence yP = custom_sort(routine_id("byY"),testset)   function distsq(sequence p1,p2) atom {x1,y1} = p1, {x2,y2} = p2 x1 -= x2 y1 -= y2 return x1*x1 + y1*y1 end function   function closestPair(sequence xP, yP) -- where xP is P(1) ..
P(N) sorted by y coordinate (ascending order) integer N, midN, k, nS sequence xL, xR, yL, yR, pairL, pairR, pairMin, yS, cPair atom xm, dL, dR, dmin, closest   N = length(xP) if length(yP)!=N then ?9/0 end if -- (sanity check) if N<=3 then return bruteForceClosestPair(xP) end if midN = floor(N/2) xL = xP[1..midN] xR = xP[midN+1..N] xm = xP[midN][X] yL = {} yR = {} for i=1 to N do if yP[i][X]<=xm then yL = append(yL,yP[i]) else yR = append(yR,yP[i]) end if end for {dL, pairL} = closestPair(xL, yL) {dR, pairR} = closestPair(xR, yR) {dmin, pairMin} = {dR, pairR} if dL<dR then {dmin, pairMin} = {dL, pairL} end if yS = {} for i=1 to length(yP) do if abs(xm-yP[i][X])<dmin then yS = append(yS,yP[i]) end if end for nS = length(yS) {closest, cPair} = {dmin*dmin, pairMin} for i=1 to nS-1 do k = i + 1 while k<=nS and (yS[k][Y]-yS[i][Y])<dmin do d = distsq(yS[k],yS[i]) if d<closest then {closest, cPair} = {d, {yS[k], yS[i]}} end if k += 1 end while end for return {sqrt(closest), cPair} end function   {d,points} = closestPair(xP,yP) {{x1,y1},{x2,y2}} = sort(points) -- (see note above) printf(1,"Closest pair: {%f,%f} {%f,%f}, distance=%f (%3.2fs)\n",{x1,y1,x2,y2,d,time()-t0}) Output: Closest pair: {0.0328051,0.0966250} {0.0328850,0.0966250}, distance=0.000120143 (2.37s) Closest pair: {0.0328051,0.0966250} {0.0328850,0.0966250}, distance=0.000120143 (0.14s) PicoLisp (de closestPairBF (Lst) (let Min T (use (Pt1 Pt2) (for P Lst (for Q Lst (or (== P Q) (>= (setq N (let (A (- (car P) (car Q)) B (- (cdr P) (cdr Q))) (+ (* A A) (* B B)) ) ) Min ) (setq Min N Pt1 P Pt2 Q) ) ) ) (list Pt1 Pt2 (sqrt Min)) ) ) ) Test: : (scl 6) -> 6 : (closestPairBF (quote (0.654682 . 0.925557) (0.409382 . 0.619391) (0.891663 . 0.888594) (0.716629 . 0.996200) (0.477721 . 0.946355) (0.925092 . 0.818220) (0.624291 . 0.142924) (0.211332 . 0.221507) (0.293786 . 0.691701) (0.839186 . 0.728260) ) ) -> ((891663 . 888594) (925092 .
818220) 77910) PL/I   /* Closest Pair Problem */ closest: procedure options (main); declare n fixed binary;   get list (n); begin; declare 1 P(n), 2 x float, 2 y float; declare (i, ii, j, jj) fixed binary; declare (distance, min_distance initial (0) ) float;   get list (P); min_distance = sqrt( (P.x(1) - P.x(2))**2 + (P.y(1) - P.y(2))**2 ); ii = 1; jj = 2; do i = 1 to n; do j = 1 to n; distance = sqrt( (P.x(i) - P.x(j))**2 + (P.y(i) - P.y(j))**2 ); if distance > 0 then if distance < min_distance then do; min_distance = distance; ii = i; jj = j; end; end; end; put skip edit ('The minimum distance ', min_distance, ' is between the points [', P.x(ii), ',', P.y(ii), '] and [', P.x(jj), ',', P.y(jj), ']' ) (a, f(6,2)); end; end closest;   PureBasic Brute force version Procedure.d bruteForceClosestPair(Array P.coordinate(1)) Protected N=ArraySize(P()), i, j Protected mindistance.f=Infinity(), t.d Shared a, b If N<2 a=0: b=0 Else For i=0 To N-1 For j=i+1 To N t=Pow(Pow(P(i)\x-P(j)\x,2)+Pow(P(i)\y-P(j)\y,2),0.5) If mindistance>t mindistance=t a=i: b=j EndIf Next Next EndIf ProcedureReturn mindistance EndProcedure   Implementation can be as Structure coordinate x.d y.d EndStructure   Dim DataSet.coordinate(9) Define i, x.d, y.d, a, b   ;- Load data from datasection Restore DataPoints For i=0 To 9 Read.d x: Read.d y DataSet(i)\x=x DataSet(i)\y=y Next i   If OpenConsole() PrintN("Mindistance= "+StrD(bruteForceClosestPair(DataSet()),6)) PrintN("Point 1= "+StrD(DataSet(a)\x,6)+": "+StrD(DataSet(a)\y,6)) PrintN("Point 2= "+StrD(DataSet(b)\x,6)+": "+StrD(DataSet(b)\y,6)) Print(#CRLF$+"Press ENTER to quit"): Input() EndIf   DataSection DataPoints: Data.d 0.654682, 0.925557, 0.409382, 0.619391, 0.891663, 0.888594 Data.d 0.716629, 0.996200, 0.477721, 0.946355, 0.925092, 0.818220 Data.d 0.624291, 0.142924, 0.211332, 0.221507, 0.293786, 0.691701, 0.839186, 0.72826 EndDataSection Output: Mindistance= 0.077910 Point 1= 0.891663: 0.888594 Point 2= 0.925092: 0.818220 Press ENTER to quit 
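Several of the solutions above (the Pascal and Phix versions, and the Perl 6 note) avoid taking square roots inside the inner loop: since sqrt is monotonic, the pair that minimizes the squared distance also minimizes the true distance, and the root need only be taken once at the end. A minimal Python sketch of that optimization (the function name is ours, not taken from any of the versions above):

```python
from itertools import combinations

def closest_pair_squared(points):
    """Brute-force closest pair, comparing squared distances.

    sqrt is monotonic, so comparing squared distances selects the
    same winning pair; sqrt runs exactly once, on the final result.
    """
    best_sq, best = float("inf"), None
    for (x1, y1), (x2, y2) in combinations(points, 2):
        d_sq = (x1 - x2) ** 2 + (y1 - y2) ** 2
        if d_sq < best_sq:
            best_sq, best = d_sq, ((x1, y1), (x2, y2))
    return best_sq ** 0.5, best

d, pair = closest_pair_squared([(0, 0), (3, 4), (1, 1)])
# d -> 1.4142135623730951, pair -> ((0, 0), (1, 1))
```

This changes only constant factors, not the O(n²) pair count, but sqrt is typically far more expensive than the multiply-adds it replaces.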
Python """ Compute nearest pair of points using two algorithms   First algorithm is 'brute force' comparison of every possible pair. Second, 'divide and conquer', is based on: www.cs.iupui.edu/~xkzou/teaching/CS580/Divide-and-conquer-closestPair.ppt """   from random import randint, randrange from operator import itemgetter, attrgetter   infinity = float('inf')   # Note the use of complex numbers to represent 2D points making distance == abs(P1-P2)   def bruteForceClosestPair(point): numPoints = len(point) if numPoints < 2: return infinity, (None, None) return min( ((abs(point[i] - point[j]), (point[i], point[j])) for i in range(numPoints-1) for j in range(i+1,numPoints)), key=itemgetter(0))   def closestPair(point): xP = sorted(point, key= attrgetter('real')) yP = sorted(point, key= attrgetter('imag')) return _closestPair(xP, yP)   def _closestPair(xP, yP): numPoints = len(xP) if numPoints <= 3: return bruteForceClosestPair(xP) Pl = xP[:numPoints/2] Pr = xP[numPoints/2:] Yl, Yr = [], [] xDivider = Pl[-1].real for p in yP: if p.real <= xDivider: Yl.append(p) else: Yr.append(p) dl, pairl = _closestPair(Pl, Yl) dr, pairr = _closestPair(Pr, Yr) dm, pairm = (dl, pairl) if dl < dr else (dr, pairr) # Points within dm of xDivider sorted by Y coord closeY = [p for p in yP if abs(p.real - xDivider) < dm] numCloseY = len(closeY) if numCloseY > 1: # There is a proof that you only need compare a max of 7 next points closestY = min( ((abs(closeY[i] - closeY[j]), (closeY[i], closeY[j])) for i in range(numCloseY-1) for j in range(i+1,min(i+8, numCloseY))), key=itemgetter(0)) return (dm, pairm) if dm <= closestY[0] else closestY else: return dm, pairm   def times(): ''' Time the different functions ''' import timeit   functions = [bruteForceClosestPair, closestPair] for f in functions: print 'Time for', f.__name__, timeit.Timer( '%s(pointList)' % f.__name__, 'from closestpair import %s, pointList' % f.__name__).timeit(number=1)       pointList = [randint(0,1000)+1j*randint(0,1000) 
for i in range(2000)]   if __name__ == '__main__': pointList = [(5+9j), (9+3j), (2+0j), (8+4j), (7+4j), (9+10j), (1+9j), (8+2j), 10j, (9+6j)] print pointList print ' bruteForceClosestPair:', bruteForceClosestPair(pointList) print ' closestPair:', closestPair(pointList) for i in range(10): pointList = [randrange(11)+1j*randrange(11) for i in range(10)] print '\n', pointList print ' bruteForceClosestPair:', bruteForceClosestPair(pointList) print ' closestPair:', closestPair(pointList) print '\n' times() times() times() Output: followed by timing comparisons (Note how the two algorithms agree on the minimum distance, but may return a different pair of points if more than one pair of points share that minimum separation): [(5+9j), (9+3j), (2+0j), (8+4j), (7+4j), (9+10j), (1+9j), (8+2j), 10j, (9+6j)] bruteForceClosestPair: (1.0, ((8+4j), (7+4j))) closestPair: (1.0, ((8+4j), (7+4j))) [(10+6j), (7+0j), (9+4j), (4+8j), (7+5j), (6+4j), (1+9j), (6+4j), (1+3j), (5+0j)] bruteForceClosestPair: (0.0, ((6+4j), (6+4j))) closestPair: (0.0, ((6+4j), (6+4j))) [(4+10j), (8+5j), (10+3j), (9+7j), (2+5j), (6+7j), (6+2j), (9+6j), (3+8j), (5+1j)] bruteForceClosestPair: (1.0, ((9+7j), (9+6j))) closestPair: (1.0, ((9+7j), (9+6j))) [(10+0j), (3+10j), (10+7j), (1+8j), (5+10j), (8+8j), (4+7j), (6+2j), (6+10j), (9+3j)] bruteForceClosestPair: (1.0, ((5+10j), (6+10j))) closestPair: (1.0, ((5+10j), (6+10j))) [(3+7j), (5+3j), 0j, (2+9j), (2+5j), (9+6j), (5+9j), (4+3j), (3+8j), (8+7j)] bruteForceClosestPair: (1.0, ((3+7j), (3+8j))) closestPair: (1.0, ((4+3j), (5+3j))) [(4+3j), (10+9j), (2+7j), (7+8j), 0j, (3+10j), (10+2j), (7+10j), (7+3j), (1+4j)] bruteForceClosestPair: (2.0, ((7+8j), (7+10j))) closestPair: (2.0, ((7+8j), (7+10j))) [(9+2j), (9+8j), (6+4j), (7+0j), (10+2j), (10+0j), (2+7j), (10+7j), (9+2j), (1+5j)] bruteForceClosestPair: (0.0, ((9+2j), (9+2j))) closestPair: (0.0, ((9+2j), (9+2j))) [(3+3j), (8+2j), (4+0j), (1+1j), (9+10j), (5+0j), (2+3j), 5j, (5+0j), (7+0j)] bruteForceClosestPair: 
(0.0, ((5+0j), (5+0j))) closestPair: (0.0, ((5+0j), (5+0j))) [(1+5j), (8+3j), (8+10j), (6+8j), (10+9j), (2+0j), (2+7j), (8+7j), (8+4j), (1+2j)] bruteForceClosestPair: (1.0, ((8+3j), (8+4j))) closestPair: (1.0, ((8+3j), (8+4j))) [(8+4j), (8+6j), (8+0j), 0j, (10+7j), (10+6j), 6j, (1+3j), (1+8j), (6+9j)] bruteForceClosestPair: (1.0, ((10+7j), (10+6j))) closestPair: (1.0, ((10+7j), (10+6j))) [(6+8j), (10+1j), 3j, (7+9j), (4+10j), (4+7j), (5+7j), (6+10j), (4+7j), (2+4j)] bruteForceClosestPair: (0.0, ((4+7j), (4+7j))) closestPair: (0.0, ((4+7j), (4+7j))) Time for bruteForceClosestPair 4.57953371169 Time for closestPair 0.122539596513 Time for bruteForceClosestPair 5.13221177552 Time for closestPair 0.124602707886 Time for bruteForceClosestPair 4.83609397284 Time for closestPair 0.119326618327 >>> R Works with: R version 2.8.1+ Brute force solution as per wikipedia pseudo-code closest_pair_brute <-function(x,y,plotxy=F) { xy = cbind(x,y) cp = bruteforce(xy) cat("\n\nShortest path found = \n From:\t\t(",cp[1],',',cp[2],")\n To:\t\t(",cp[3],',',cp[4],")\n Distance:\t",cp[5],"\n\n",sep="") if(plotxy) { plot(x,y,pch=19,col='black',main="Closest Pair", asp=1) points(cp[1],cp[2],pch=19,col='red') points(cp[3],cp[4],pch=19,col='red') } distance <- function(p1,p2) { x1 = (p1[1]) y1 = (p1[2]) x2 = (p2[1]) y2 = (p2[2]) sqrt((x2-x1)^2 + (y2-y1)^2) } bf_iter <- function(m,p,idx=NA,d=NA,n=1) { dd = distance(p,m[n,]) if((is.na(d) || dd<=d) && p!=m[n,]){d = dd; idx=n;} if(n == length(m[,1])) { c(m[idx,],d) } else bf_iter(m,p,idx,d,n+1) } bruteforce <- function(pmatrix,n=1,pd=c(NA,NA,NA,NA,NA)) { p = pmatrix[n,] ppd = c(p,bf_iter(pmatrix,p)) if(ppd[5]<pd[5] || is.na(pd[5])) pd = ppd if(n==length(pmatrix[,1])) pd else bruteforce(pmatrix,n+1,pd) } } Quicker brute force solution for R that makes use of the apply function native to R for dealing with matrices. It expects x and y to take the form of separate vectors. 
closestPair<-function(x,y) { distancev <- function(pointsv) { x1 <- pointsv[1] y1 <- pointsv[2] x2 <- pointsv[3] y2 <- pointsv[4] sqrt((x1 - x2)^2 + (y1 - y2)^2) } pairstocompare <- t(combn(length(x),2)) pointsv <- cbind(x[pairstocompare[,1]],y[pairstocompare[,1]],x[pairstocompare[,2]],y[pairstocompare[,2]]) pairstocompare <- cbind(pairstocompare,apply(pointsv,1,distancev)) minrow <- pairstocompare[pairstocompare[,3] == min(pairstocompare[,3])] if (!is.null(nrow(minrow))) {print("More than one point at this distance!"); minrow <- minrow[1,]} cat("The closest pair is:\n\tPoint 1: ",x[minrow[1]],", ",y[minrow[1]], "\n\tPoint 2: ",x[minrow[2]],", ",y[minrow[2]], "\n\tDistance: ",minrow[3],"\n",sep="") c(distance=minrow[3],x1.x=x[minrow[1]],y1.y=y[minrow[1]],x2.x=x[minrow[2]],y2.y=y[minrow[2]]) } This is the quickest version; it makes use of R's 'dist' function. It takes a two-column object of x,y values as input, or creates such an object from separate x and y vectors. closest.pairs <- function(x, y=NULL, ...){ # takes two-column object(x,y-values), or creates such an object from x and y values if(!is.null(y)) x <- cbind(x, y)   distances <- dist(x) min.dist <- min(distances) point.pair <- combn(1:nrow(x), 2)[, which.min(distances)]   cat("The closest pair is:\n\t", sprintf("Point 1: %.3f, %.3f \n\tPoint 2: %.3f, %.3f \n\tDistance: %.3f.\n", x[point.pair[1],1], x[point.pair[1],2], x[point.pair[2],1], x[point.pair[2],2], min.dist), sep="" ) c( x1=x[point.pair[1],1],y1=x[point.pair[1],2], x2=x[point.pair[2],1],y2=x[point.pair[2],2], distance=min.dist) } Example x = (sample(-1000.00:1000.00,100)) y = (sample(-1000.00:1000.00,length(x))) cp = closest.pairs(x,y) #cp = closestPair(x,y) plot(x,y,pch=19,col='black',main="Closest Pair", asp=1) points(cp["x1.x"],cp["y1.y"],pch=19,col='red') points(cp["x2.x"],cp["y2.y"],pch=19,col='red') #closest_pair_brute(x,y,T)   Performance system.time(closest_pair_brute(x,y), gcFirst = TRUE) Shortest path found = From: (32,-987) To:
(25,-993) Distance: 9.219544   user system elapsed 0.35 0.02 0.37   system.time(closest.pairs(x,y), gcFirst = TRUE) The closest pair is: Point 1: 32.000, -987.000 Point 2: 25.000, -993.000 Distance: 9.220.   user system elapsed 0.08 0.00 0.10   system.time(closestPair(x,y), gcFirst = TRUE) The closest pair is: Point 1: 32, -987 Point 2: 25, -993 Distance: 9.219544   user system elapsed 0.17 0.00 0.19     Using dist function for brute force, but divide and conquer (as per pseudocode) for speed: closest.pairs.bruteforce <- function(x, y=NULL) { if (!is.null(y)) { x <- cbind(x,y) } d <- dist(x) cp <- x[combn(1:nrow(x), 2)[, which.min(d)],] list(p1=cp[1,], p2=cp[2,], d=min(d)) }   closest.pairs.dandc <- function(x, y=NULL) { if (!is.null(y)) { x <- cbind(x,y) } if (sd(x[,"x"]) < sd(x[,"y"])) { x <- cbind(x=x[,"y"],y=x[,"x"]) swap <- TRUE } else { swap <- FALSE } xp <- x[order(x[,"x"]),] .cpdandc.rec <- function(xp,yp) { n <- dim(xp)[1] if (n <= 4) { closest.pairs.bruteforce(xp) } else { xl <- xp[1:floor(n/2),] xr <- xp[(floor(n/2)+1):n,] cpl <- .cpdandc.rec(xl) cpr <- .cpdandc.rec(xr) if (cpl$d<cpr$d) cp <- cpl else cp <- cpr cp } } cp <- .cpdandc.rec(xp)   yp <- x[order(x[,"y"]),] xm <- xp[floor(dim(xp)[1]/2),"x"] ys <- yp[which(abs(xm - yp[,"x"]) <= cp$d),] nys <- dim(ys)[1] if (!is.null(nys) && nys > 1) { for (i in 1:(nys-1)) { k <- i + 1 while (k <= nys && ys[k,"y"] - ys[i,"y"] < cp$d) { d <- sqrt((ys[k,"x"]-ys[i,"x"])^2 + (ys[k,"y"]-ys[i,"y"])^2) if (d < cp$d) cp <- list(p1=ys[i,],p2=ys[k,],d=d) k <- k + 1 } } } if (swap) { list(p1=cbind(x=cp$p1["y"],y=cp$p1["x"]),p2=cbind(x=cp$p2["y"],y=cp$p2["x"]),d=cp$d) } else { cp } }   # Test functions cat("How many points?\n") n <- scan(what=integer(),n=1) x <- rnorm(n) y <- rnorm(n) tstart <- proc.time()[3] cat("Closest pairs divide and conquer:\n") print(cp <- closest.pairs.dandc(x,y)) cat(sprintf("That took %.2f seconds.\n",proc.time()[3] - tstart)) plot(x,y)
points(c(cp$p1["x"],cp$p2["x"]),c(cp$p1["y"],cp$p2["y"]),col="red") tstart <- proc.time()[3] cat("\nClosest pairs brute force:\n") print(closest.pairs.bruteforce(x,y)) cat(sprintf("That took %.2f seconds.\n",proc.time()[3] - tstart))   Output: How many points? 1: 500 Read 1 item Closest pairs divide and conquer: $p1 x y 1.68807938 0.05876328 $p2 x y 1.68904694 0.05878173 $d [1] 0.0009677302 That took 0.43 seconds. Closest pairs brute force: $p1 x y 1.68807938 0.05876328 $p2 x y 1.68904694 0.05878173 $d [1] 0.0009677302 That took 6.38 seconds. Racket The brute force solution using complex numbers to represent pairs.   #lang racket (define (dist z0 z1) (magnitude (- z1 z0))) (define (dist* zs) (apply dist zs))   (define (closest-pair zs) (if (< (length zs) 2) -inf.0 (first (sort (for/list ([z0 zs]) (list z0 (argmin (λ(z) (if (= z z0) +inf.0 (dist z z0))) zs))) < #:key dist*))))   (define result (closest-pair '(0+1i 1+2i 3+4i))) (displayln (~a "Closest points: " result)) (displayln (~a "Distance: " (dist* result)))   Output:   Closest points: (0+1i 1+2i) Distance: 1.4142135623730951   REXX /*REXX program solves the closest pair of points problem (in two dimensions). */ parse arg N low high seed . /*obtain optional arguments from the CL*/ if N=='' | N=="," then N= 100 /*Not specified? Then use the default.*/ if low=='' | low=="," then low= 0 /* " " " " " " */ if high=='' | high=="," then high=20000 /* " " " " " " */ if datatype(seed,'W') then call random ,,seed /*seed for RANDOM (BIF) repeatability.*/ w=length(high); w=w + (w//2==0) /*╔══════════════════════╗*/ do j=1 for N /*generate N random points.*/ /*║ generate N points. ║*/ @x.j=random(low,high) /* " a random X. */ /*╚══════════════════════╝*/ @y.j=random(low,high) /* " " " Y. 
*/ end /*j*/ /*X and Y make the point*/ A=1; B=2 /* [↓] MINDD is actually the unsquared*/ minDD=(@x.A-@x.B)**2 + (@y.A-@y.B)**2 /*distance between the first two points*/ /* [↓] use of XJ & YJ speed things up.*/ do j=1 for N-1; xj=@x.j; yj=@y.j /*find minimum distance between a ··· */ do k=j+1 to N /* ··· point and all the other points.*/ dd=(xj - @x.k)**2 + (yj - @y.k)**2 /*compute squared distance from points.*/ if dd<minDD then if dd\=0 then parse value dd j k with minDD A B end /*k*/ /* [↑] needn't take SQRT of DD (yet).*/ end /*j*/ /* [↑] when done, A & B are the ones*/   _= 'For ' N " points, the minimum distance between the two points: " say _ center("x", w, '═')" " center('y', w, "═") ' is: ' sqrt(abs(minDD))/1 say left('', length(_)-1) "["right(@x.A, w)',' right(@y.A, w)"]" say left('', length(_)-1) "["right(@x.B, w)',' right(@y.B, w)"]" exit /*stick a fork in it, we're all done. */ /*──────────────────────────────────────────────────────────────────────────────────────*/ sqrt: procedure; parse arg x; if x=0 then return 0; d=digits(); m.=9; numeric form; h=d+6 numeric digits; parse value format(x,2,1,,0) 'E0' with g 'E' _ .; g=g *.5'e'_ % 2 do j=0 while h>9; m.j=h; h=h%2+1; end /*j*/ do k=j+5 to 0 by -1; numeric digits m.k; g=(g+x/g)*.5; end /*k*/ return g output   when using the default input of:   100 For 100 points, the minimum distance between the two points: ══x══ ══y══ is: 219.228192 [ 7277, 1625] [ 7483, 1700] output   when using the input of:   200 For 200 points, the minimum distance between the two points: ══x══ ══y══ is: 39.408121 [17604, 19166] [17627, 19198] output   when using the input of:   1000 For 1000 points, the minimum distance between the two points: ══x══ ══y══ is: 5.09901951 [ 6264, 19103] [ 6263, 19108] Ring   decimals(10) x = list(10) y = list(10) x[1] = 0.654682 y[1] = 0.925557 x[2] = 0.409382 y[2] = 0.619391 x[3] = 0.891663 y[3] = 0.888594 x[4] = 0.716629 y[4] = 0.996200 x[5] =
0.477721 y[5] = 0.946355 x[6] = 0.925092 y[6] = 0.818220 x[7] = 0.624291 y[7] = 0.142924 x[8] = 0.211332 y[8] = 0.221507 x[9] = 0.293786 y[9] = 0.691701 x[10] = 0.839186 y[10] = 0.728260   min = 10000 for i = 1 to 9 for j = i+1 to 10 dsq = pow((x[i] - x[j]),2) + pow((y[i] - y[j]),2) if dsq < min min = dsq mini = i minj = j ok next next see "closest pair is : " + mini + " and " + minj + " at distance " + sqrt(min)   Output: closest pair is : 3 and 6 at distance 0.0779101914 Ruby Point = Struct.new(:x, :y)   def distance(p1, p2) Math.hypot(p1.x - p2.x, p1.y - p2.y) end   def closest_bruteforce(points) mindist, minpts = Float::MAX, [] points.combination(2) do |pi,pj| dist = distance(pi, pj) if dist < mindist mindist = dist minpts = [pi, pj] end end [mindist, minpts] end   def closest_recursive(points) return closest_bruteforce(points) if points.length <= 3 xP = points.sort_by(&:x) mid = points.length / 2 xm = xP[mid].x dL, pairL = closest_recursive(xP[0,mid]) dR, pairR = closest_recursive(xP[mid..-1]) dmin, dpair = dL<dR ? [dL, pairL] : [dR, pairR] yP = xP.find_all {|p| (xm - p.x).abs < dmin}.sort_by(&:y) closest, closestPair = dmin, dpair 0.upto(yP.length - 2) do |i| (i+1).upto(yP.length - 1) do |k| break if (yP[k].y - yP[i].y) >= dmin dist = distance(yP[i], yP[k]) if dist < closest closest = dist closestPair = [yP[i], yP[k]] end end end [closest, closestPair] end   points = Array.new(100) {Point.new(rand, rand)} p ans1 = closest_bruteforce(points) p ans2 = closest_recursive(points) fail "bogus!" 
if ans1[0] != ans2[0]   require 'benchmark'   points = Array.new(10000) {Point.new(rand, rand)} Benchmark.bm(12) do |x| x.report("bruteforce") {ans1 = closest_bruteforce(points)} x.report("recursive") {ans2 = closest_recursive(points)} end Sample output [0.005299616045889868, [#<struct Point x=0.24805908871087445, y=0.8413503128160198>, #<struct Point x=0.24355227214243136, y=0.8385620275629906>]] [0.005299616045889868, [#<struct Point x=0.24355227214243136, y=0.8385620275629906>, #<struct Point x=0.24805908871087445, y=0.8413503128160198>]] user system total real bruteforce 43.446000 0.000000 43.446000 ( 43.530062) recursive 0.187000 0.000000 0.187000 ( 0.190000) Run BASIC Courtesy http://dkokenge.com/rbp n =10 ' 10 data points input dim x(n) dim y(n)   pt1 = 0 ' 1st point pt2 = 0 ' 2nd point   for i =1 to n ' read in data read x(i) read y(i) next i   minDist = 1000000   for i =1 to n -1 for j =i +1 to n distXsq =(x(i) -x(j))^2 disYsq =(y(i) -y(j))^2 d =abs((distXsq +disYsq)^0.5) if d <minDist then minDist =d pt1 =i pt2 =j end if next j next i   print "Distance ="; minDist; " between ("; x(pt1); ", "; y(pt1); ") and ("; x(pt2); ", "; y(pt2); ")"   end   data 0.654682, 0.925557 data 0.409382, 0.619391 data 0.891663, 0.888594 data 0.716629, 0.996200 data 0.477721, 0.946355 data 0.925092, 0.818220 data 0.624291, 0.142924 data 0.211332, 0.221507 data 0.293786, 0.691701 data 0.839186, 0.72826 Scala import scala.collection.mutable.ListBuffer import scala.util.Random   object ClosestPair { case class Point(x: Double, y: Double){ def distance(p: Point) = math.hypot(x-p.x, y-p.y)   override def toString = "(" + x + ", " + y + ")" }   case class Pair(point1: Point, point2: Point) { val distance: Double = point1 distance point2   override def toString = { point1 + "-" + point2 + " : " + distance } }   def sortByX(points: List[Point]) = { points.sortBy(point => point.x) }   def sortByY(points: List[Point]) = { points.sortBy(point => point.y) }   def divideAndConquer(points:
List[Point]): Pair = { val pointsSortedByX = sortByX(points) val pointsSortedByY = sortByY(points)   divideAndConquer(pointsSortedByX, pointsSortedByY) }   def bruteForce(points: List[Point]): Pair = { val numPoints = points.size if (numPoints < 2) return null var pair = Pair(points(0), points(1)) if (numPoints > 2) { for (i <- 0 until numPoints - 1) { val point1 = points(i) for (j <- i + 1 until numPoints) { val point2 = points(j) val distance = point1 distance point2 if (distance < pair.distance) pair = Pair(point1, point2) } } } return pair }     private def divideAndConquer(pointsSortedByX: List[Point], pointsSortedByY: List[Point]): Pair = { val numPoints = pointsSortedByX.size if(numPoints <= 3) { return bruteForce(pointsSortedByX) }   val dividingIndex = numPoints >>> 1 val leftOfCenter = pointsSortedByX.slice(0, dividingIndex) val rightOfCenter = pointsSortedByX.slice(dividingIndex, numPoints)   var tempList = leftOfCenter.map(x => x) //println(tempList) tempList = sortByY(tempList) var closestPair = divideAndConquer(leftOfCenter, tempList)   tempList = rightOfCenter.map(x => x) tempList = sortByY(tempList)   val closestPairRight = divideAndConquer(rightOfCenter, tempList)   if (closestPairRight.distance < closestPair.distance) closestPair = closestPairRight   tempList = List[Point]() val shortestDistance = closestPair.distance val centerX = rightOfCenter(0).x   for (point <- pointsSortedByY) { if (Math.abs(centerX - point.x) < shortestDistance) tempList = tempList :+ point }   closestPair = shortestDistanceF(tempList, shortestDistance, closestPair) closestPair }   private def shortestDistanceF(tempList: List[Point], shortestDistance: Double, closestPair: Pair ): Pair = { import scala.util.control.Breaks.{breakable, break} var shortest = shortestDistance var bestResult = closestPair for (i <- 0 until tempList.size) { val point1 = tempList(i) breakable { for (j <- i + 1 until tempList.size) { val point2 = tempList(j) if ((point2.y - point1.y) >= shortest) break val distance = point1 distance point2 if (distance < shortest) { bestResult = Pair(point1, point2) shortest = distance } } } }   bestResult }   def main(args: Array[String]) { val numPoints = if(args.length == 0) 1000 else args(0).toInt   val points = ListBuffer[Point]() val r = new Random() for (i <- 0 until numPoints) { points.+=:(new Point(r.nextDouble(), r.nextDouble())) } println("Generated " + numPoints + " random points")   var startTime = System.currentTimeMillis() val bruteForceClosestPair = bruteForce(points.toList) var elapsedTime = System.currentTimeMillis() - startTime println("Brute force (" + elapsedTime + " ms): " + bruteForceClosestPair)   startTime = System.currentTimeMillis() val dqClosestPair = divideAndConquer(points.toList) elapsedTime = System.currentTimeMillis() - startTime println("Divide and conquer (" + elapsedTime + " ms): " + dqClosestPair) if (bruteForceClosestPair.distance != dqClosestPair.distance) println("MISMATCH") } }   Output: scala ClosestPair 1000 Generated 1000 random points Brute force (981 ms): (0.41984960343173994, 0.4499078600557793)-(0.4198255166110827, 0.45044969701435) : 5.423720721077961E-4 Divide and conquer (52 ms): (0.4198255166110827, 0.45044969701435)-(0.41984960343173994, 0.4499078600557793) : 5.423720721077961E-4 Seed7 This is the brute force algorithm: const type: point is new struct var float: x is 0.0; var float: y is 0.0; end struct;   const func float: distance (in point: p1, in point: p2) is return sqrt((p1.x-p2.x)**2+(p1.y-p2.y)**2);   const func array point: closest_pair (in array point: points) is func result var array point: result is 0 times point.value; local var float: dist is 0.0; var float: minDistance is Infinity; var integer: i is 0; var integer: j is 0; var integer: savei is 0; var integer: savej is 0; begin for i range 1 to pred(length(points)) do for j range succ(i) to length(points) do dist := distance(points[i], points[j]); if dist < minDistance then minDistance := dist; savei := i; savej := j; end if; end for; end
for; if minDistance <> Infinity then result := [] (points[savei], points[savej]); end if; end func; Sidef Translation of: Perl 6 func dist_squared(a, b) { sqr(a[0] - b[0]) + sqr(a[1] - b[1]) }   func closest_pair_simple(arr) { arr.len < 2 && return Inf var (a, b, d) = (arr[0, 1], dist_squared(arr[0,1])) arr.clone! while (arr) { var p = arr.pop for l in arr { var t = dist_squared(p, l) if (t < d) { (a, b, d) = (p, l, t) } } } return(a, b, d.sqrt) }   func closest_pair_real(rx, ry) { rx.len <= 3 && return closest_pair_simple(rx)   var N = rx.len var midx = (ceil(N/2)-1) var (PL, PR) = rx.part(midx)   var xm = rx[midx][0]   var yR = [] var yL = []   for item in ry { (item[0] <= xm ? yR : yL) << item }   var (al, bl, dL) = closest_pair_real(PL, yR) var (ar, br, dR) = closest_pair_real(PR, yL)   al == Inf && return (ar, br, dR) ar == Inf && return (al, bl, dL)   var (m1, m2, dmin) = (dR < dL ? [ar, br, dR]...  : [al, bl, dL]...)   var yS = ry.grep { |a| abs(xm - a[0]) < dmin }   var (w1, w2, closest) = (m1, m2, dmin) for i in (0 ..^ yS.end) { for k in (i+1 .. yS.end) { yS[k][1] - yS[i][1] < dmin || break var d = dist_squared(yS[k], yS[i]).sqrt if (d < closest) { (w1, w2, closest) = (yS[k], yS[i], d) } } }   return (w1, w2, closest) }   func closest_pair(r) { var ax = r.sort_by { |a| a[0] } var ay = r.sort_by { |a| a[1] } return closest_pair_real(ax, ay); }   var N = 5000 var points = N.of { [1.rand*20 - 10, 1.rand*20 - 10] } var (af, bf, df) = closest_pair(points) say "#{df} at (#{af.join(' ')}), (#{bf.join(' ')})" Smalltalk See Closest-pair problem/Smalltalk Tcl Each point is represented as a list of two floating-point numbers, the first being the x coordinate, and the second being the y. 
package require Tcl 8.5   # retrieve the x-coordinate proc x p {lindex $p 0} # retrieve the y-coordinate proc y p {lindex $p 1}   proc distance {p1 p2} { expr {hypot(([x $p1]-[x $p2]), ([y $p1]-[y $p2]))} }   proc closest_bruteforce {points} { set n [llength $points] set mindist Inf set minpts {} for {set i 0} {$i < $n - 1} {incr i} { for {set j [expr {$i + 1}]} {$j < $n} {incr j} { set p1 [lindex $points $i] set p2 [lindex $points $j] set dist [distance $p1 $p2] if {$dist < $mindist} { set mindist $dist set minpts [list $p1 $p2] } } } return [list $mindist $minpts] }   proc closest_recursive {points} { set n [llength $points] if {$n <= 3} { return [closest_bruteforce $points] } set xP [lsort -real -increasing -index 0 $points] set mid [expr {int(ceil($n/2.0))}] set PL [lrange $xP 0 [expr {$mid-1}]] set PR [lrange $xP $mid end] set procname [lindex [info level 0] 0] lassign [$procname $PL] dL pairL lassign [$procname $PR] dR pairR if {$dL < $dR} { set dmin $dL set dpair $pairL } else { set dmin $dR set dpair $pairR }   set xM [x [lindex $PL end]] foreach p $xP { if {abs($xM - [x $p]) < $dmin} { lappend S $p } } set yP [lsort -real -increasing -index 1 $S] set closest Inf set nP [llength $yP] for {set i 0} {$i <= $nP-2} {incr i} { set yPi [lindex $yP $i] for {set k [expr {$i+1}]; set yPk [lindex $yP $k]} { $k < $nP-1 && ([y $yPk]-[y $yPi]) < $dmin } {incr k; set yPk [lindex $yP $k]} { set dist [distance $yPk $yPi] if {$dist < $closest} { set closest $dist set closestPair [list $yPi $yPk] } } } expr {$closest < $dmin ? 
[list $closest $closestPair] : [list $dmin $dpair]}
}

# testing
set N 10000
for {set i 1} {$i <= $N} {incr i} {
    lappend points [list [expr {rand()*100}] [expr {rand()*100}]]
}

# instrument the number of calls to [distance] to examine the
# efficiency of the recursive solution
trace add execution distance enter comparisons
proc comparisons args {incr ::comparisons}

puts [format "%-10s  %9s  %9s  %s" method compares time closest]
foreach method {bruteforce recursive} {
    set ::comparisons 0
    set time [time {set ::dist($method) [closest_$method $points]} 1]
    puts [format "%-10s  %9d  %9d  %s" $method $::comparisons [lindex $time 0] [lindex $::dist($method) 0]]
}

Output:

method      compares       time  closest
bruteforce   49995000  512967207  0.0015652738546658382
recursive       14613     488094  0.0015652738546658382

Note that the lindex and llength commands are both O(1).

Ursala

The brute force algorithm is easy. Reading from left to right, clop is defined as a function that forms the Cartesian product of its argument, and then extracts the member whose left side is a minimum with respect to the floating point comparison relation, after deleting equal pairs and attaching to the left of each remaining pair the sum of the squares of the differences between corresponding coordinates.

#import flo

clop = @iiK0 fleq$-&l+ *EZF ^\~& plus+ sqr~~+ minus~~bbI

The divide and conquer algorithm following the specification given above is a little more hairy but not much longer. The eudist library function is used to compute the distance between points.
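The comparison counts in the Tcl output above can be checked directly: the brute-force method compares every unordered pair of the N = 10000 points exactly once. A quick sketch:

```python
# Number of point pairs examined by the brute-force method for N points:
# every unordered pair is compared once, i.e. N*(N-1)/2.
def pair_comparisons(n: int) -> int:
    return n * (n - 1) // 2

print(pair_comparisons(10000))  # 49995000, matching the "compares" column above
```

The recursive method's 14613 comparisons, by contrast, reflect its O(n log n) behaviour.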
#import std #import flo   clop =   ^(fleq-<&l,fleq-<&r); @blrNCCS ~&lrbhthPX2X+ ~&a^& fleq$-&l+ leql/8?al\^(eudist,~&)*altK33htDSL -+ ^C/~&rr ^(eudist,~&)*tK33htDSL+ @rlrlPXPlX ~| fleq^\~&lr abs+ [email protected], ^/~&ar @farlK30K31XPGbrlrjX3J ^/~&arlhh @W lesser [email protected]+- test program: test_data =   < (1.547290e+00,3.313053e+00), (5.250805e-01,-7.300260e+00), (7.062114e-02,1.220251e-02), (-4.473024e+00,-5.393712e+00), (-2.563714e+00,-3.595341e+00), (-2.132372e+00,2.358850e+00), (2.366238e+00,-9.678425e+00), (-1.745694e+00,3.276434e+00), (8.066843e+00,-9.101268e+00), (-8.256901e+00,-8.717900e+00), (7.397744e+00,-5.366434e+00), (2.060291e-01,2.840891e+00), (-6.935319e+00,-5.192438e+00), (9.690418e+00,-9.175753e+00), (3.448993e+00,2.119052e+00), (-7.769218e+00,4.647406e-01)>   #cast %eeWWA   example = clop test_data Output: The output shows the minimum distance and the two points separated by that distance. (If the brute force algorithm were used, it would have displayed the square of the distance.) 9.957310e-01: ( (-2.132372e+00,2.358850e+00), (-1.745694e+00,3.276434e+00)) Visual FoxPro   CLOSE DATABASES ALL CREATE CURSOR pairs(id I, xcoord B(6), ycoord B(6)) INSERT INTO pairs VALUES (1, 0.654682, 0.925557) INSERT INTO pairs VALUES (2, 0.409382, 0.619391) INSERT INTO pairs VALUES (3, 0.891663, 0.888594) INSERT INTO pairs VALUES (4, 0.716629, 0.996200) INSERT INTO pairs VALUES (5, 0.477721, 0.946355) INSERT INTO pairs VALUES (6, 0.925092, 0.818220) INSERT INTO pairs VALUES (7, 0.624291, 0.142924) INSERT INTO pairs VALUES (8, 0.211332, 0.221507) INSERT INTO pairs VALUES (9, 0.293786, 0.691701) INSERT INTO pairs VALUES (10, 0.839186, 0.728260)   SELECT p1.id As id1, p2.id As id2, ; (p1.xcoord-p2.xcoord)^2 + (p1.ycoord-p2.ycoord)^2 As dist2 ; FROM pairs p1 JOIN pairs p2 ON p1.id < p2.id ORDER BY 3 INTO CURSOR tmp   GO TOP ? "Closest pair is " + TRANSFORM(id1) + " and " + TRANSFORM(id2) + "." ? 
"Distance is " + TRANSFORM(SQRT(dist2))   Output: Visual FoxPro uses 1 based indexing, Closest pair is 3 and 6. Distance is 0.077910. XPL0 The brute force method is simpler than the recursive solution and is perfectly adequate, even for a thousand points. include c:\cxpl\codes; \intrinsic 'code' declarations   proc ClosestPair(P, N); \Show closest pair of points in array P real P; int N; real Dist2, MinDist2; int I, J, SI, SJ; [MinDist2:= 1e300; for I:= 0 to N-2 do [for J:= I+1 to N-1 do [Dist2:= sq(P(I,0)-P(J,0)) + sq(P(I,1)-P(J,1)); if Dist2 < MinDist2 then \squared distances are sufficient for compares [MinDist2:= Dist2; SI:= I; SJ:= J; ]; ]; ]; IntOut(0, SI); Text(0, " -- "); IntOut(0, SJ); CrLf(0); RlOut(0, P(SI,0)); Text(0, ","); RlOut(0, P(SI,1)); Text(0, " -- "); RlOut(0, P(SJ,0)); Text(0, ","); RlOut(0, P(SJ,1)); CrLf(0); ];   real Data; [Format(1, 6); Data:= [[0.654682, 0.925557], \0 test data from BASIC examples [0.409382, 0.619391], \1 [0.891663, 0.888594], \2 [0.716629, 0.996200], \3 [0.477721, 0.946355], \4 [0.925092, 0.818220], \5 [0.624291, 0.142924], \6 [0.211332, 0.221507], \7 [0.293786, 0.691701], \8 [0.839186, 0.728260]]; \9 ClosestPair(Data, 10); ] Output: 2 -- 5 0.891663,0.888594 -- 0.925092,0.818220 zkl An ugly solution in both time and space. 
class Point{
   fcn init(_x,_y){ var[const] x=_x, y=_y; }
   fcn distance(p){ (p.x-x).hypot(p.y-y) }
   fcn toString  { String("Point(",x,",",y,")") }
}

// find closest two points using brute ugly force:
// find all combinations of two points, measure distance, pick smallest
fcn closestPoints(points){
   pairs:=Utils.Helpers.pickNFrom(2,points);
   triples:=pairs.apply(fcn([(p1,p2)]){ T(p1,p2,p1.distance(p2)) });
   triples.reduce(fcn([(_,_,d1)]p1,[(_,_,d2)]p2){ if(d1 < d2) p1 else p2 });
}

points:=T( 5.0, 9.0,  9.0, 3.0,  2.0, 0.0,  8.0, 4.0,  7.0, 4.0,
           9.0, 10.0, 1.0, 9.0,  8.0, 2.0,  0.0, 10.0, 9.0, 6.0 )
        .pump(List,Void.Read,Point);

closestPoints(points).println(); //-->L(Point(8,4),Point(7,4),1)

points:=T( 0.654682, 0.925557,  0.409382, 0.619391,  0.891663, 0.888594,
           0.716629, 0.9962,    0.477721, 0.946355,  0.925092, 0.81822,
           0.624291, 0.142924,  0.211332, 0.221507,  0.293786, 0.691701,
           0.839186, 0.72826 )
        .pump(List,Void.Read,Point);
closestPoints(points).println();

Output:

L(Point(8,4),Point(7,4),1)
L(Point(0.925092,0.81822),Point(0.891663,0.888594),0.0779102)

ZX Spectrum Basic

Translation of: BBC_BASIC

10 DIM x(10): DIM y(10)
20 FOR i=1 TO 10
30 READ x(i),y(i)
40 NEXT i
50 LET min=1e30
60 FOR i=1 TO 9
70 FOR j=i+1 TO 10
80 LET p1=x(i)-x(j): LET p2=y(i)-y(j): LET dsq=p1*p1+p2*p2
90 IF dsq<min THEN LET min=dsq: LET mini=i: LET minj=j
100 NEXT j
110 NEXT i
120 PRINT "Closest pair is ";mini;" and ";minj;" at distance ";SQR min
130 STOP
140 DATA 0.654682,0.925557
150 DATA 0.409382,0.619391
160 DATA 0.891663,0.888594
170 DATA 0.716629,0.996200
180 DATA 0.477721,0.946355
190 DATA 0.925092,0.818220
200 DATA 0.624291,0.142924
210 DATA 0.211332,0.221507
220 DATA 0.293786,0.691701
230 DATA 0.839186,0.728260
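For comparison with the listings above, here is a minimal Python sketch (not part of the original task page) of the same brute-force search, run on the ten test points shared by the BASIC-family and FoxPro examples:

```python
import math
from itertools import combinations

def closest_pair_bruteforce(points):
    """Return (p1, p2, distance) for the closest pair, checking all pairs."""
    best = None
    for p, q in combinations(points, 2):
        d = math.dist(p, q)  # Euclidean distance (Python 3.8+)
        if best is None or d < best[2]:
            best = (p, q, d)
    return best

# The ten test points used by several of the examples above:
pts = [(0.654682, 0.925557), (0.409382, 0.619391), (0.891663, 0.888594),
       (0.716629, 0.996200), (0.477721, 0.946355), (0.925092, 0.818220),
       (0.624291, 0.142924), (0.211332, 0.221507), (0.293786, 0.691701),
       (0.839186, 0.728260)]

p1, p2, d = closest_pair_bruteforce(pts)
print(p1, p2, round(d, 6))  # distance ≈ 0.077910, agreeing with the outputs above
```

As in the other brute-force versions, this is O(n²); the divide-and-conquer variants shown above reduce it to O(n log n).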
Keywords: daily activity records, intervention, physical activity

Authors
1. Speck, Barbara J.
2. Looney, Stephen W.

Abstract

Background: Effective interventions to increase physical activity levels are critical in a nation where inactivity is a national public health problem.

Objective: This pilot study examined whether a minimal intervention (daily records of physical activity) increased activity levels in a community sample of working women.

Methods: In a longitudinal, pretest-posttest design, 49 working women were randomly assigned at the work site level to the control (n = 25) or intervention group (n = 24). At pretest and posttest, subjects completed self-report questionnaires that measured psychological, social-environmental, physical activity, and demographic variables. Subjects in the intervention group kept daily records of their physical activities during the 12-week study, while those in the control group kept no records. In order to compare activity in the two groups, all subjects wore pedometers daily that recorded number of steps.

Results: There was a significant difference between groups in the pedometer values (mean number of daily steps) at the end of the study period (mean difference +/- SE: 2147 +/- 636, p = .022) (2000 steps = approximately 1 mile). Multiple regression analysis showed that only the intervention (p = .003) was a significant predictor of the pedometer values. Hierarchical data analysis was used to account for the intra-class correlation of 0.48 within work site.

Conclusion: Results from this sample of 49 women indicated that mean activity was greater in the intervention group compared to the control group. Recording daily activity is a cost-effective and acceptable intervention that may increase activity levels in women. However, more research is recommended to study the dual role of activity records as a data collection method as well as a potential intervention to increase physical activity.
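The reported effect size can be put in distance terms using the conversion stated in the abstract (2000 steps is approximately 1 mile):

```python
# Convert the reported between-group difference in daily steps to miles,
# using the abstract's rule of thumb that 2000 steps ≈ 1 mile.
mean_difference_steps = 2147
miles_per_day = mean_difference_steps / 2000
print(round(miles_per_day, 2))  # ≈ 1.07 extra miles per day in the intervention group
```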
In logic and related fields such as mathematics and philosophy, "if and only if" (shortened as "iff") is a biconditional logical connective between statements, where either both statements are true or both are false. The connective is biconditional (a statement of material equivalence), and can be likened to the standard material conditional ("only if", equal to "if ... then") combined with its reverse ("if"); hence the name. The result is that the truth of either one of the connected statements requires the truth of the other (i.e. either both statements are true, or both are false), though it is controversial whether the connective thus defined is properly rendered by the English "if and only if"—with its pre-existing meaning. For example, ''P if and only if Q'' means that ''P'' is true whenever ''Q'' is true, and the only case in which ''P'' is true is if ''Q'' is also true, whereas in the case of ''P if Q'', there could be other scenarios where ''P'' is true and ''Q'' is false.

In writing, phrases commonly used as alternatives to P "if and only if" Q include: ''Q is necessary and sufficient for P'', ''P is equivalent (or materially equivalent) to Q'' (compare with material implication), ''P precisely if Q'', ''P precisely (or exactly) when Q'', ''P exactly in case Q'', and ''P just in case Q''. Some authors regard "iff" as unsuitable in formal writing; others consider it a "borderline case" and tolerate its use. In logical formulae, logical symbols, such as \leftrightarrow and \Leftrightarrow, are used instead of these phrases; see below.

Definition

The truth table of ''P'' \Leftrightarrow ''Q'' is as follows:

P | Q | P ⇔ Q
T | T |   T
T | F |   F
F | T |   F
F | F |   T

It is equivalent to that produced by the XNOR gate, and opposite to that produced by the XOR gate.

Usage

Notation

The corresponding logical symbols are "↔", "\Leftrightarrow", and "≡", and sometimes "iff". These are usually treated as equivalent. However, some texts of mathematical logic (particularly those on first-order logic, rather than propositional logic) make a distinction between these, in which the first, ↔, is used as a symbol in logic formulas, while ⇔ is used in reasoning about those logic formulas (e.g., in metalogic). In Łukasiewicz's Polish notation, it is the prefix symbol 'E'. Another term for this logical connective is exclusive nor (logical equality). In TeX, "if and only if" is shown as a long double arrow: \iff via command \iff.

Proofs

In most logical systems, one proves a statement of the form "P iff Q" by proving either "if P, then Q" and "if Q, then P", or "if P, then Q" and "if not-P, then not-Q". Proving this pair of statements sometimes leads to a more natural proof, since there are not obvious conditions in which one would infer a biconditional directly. An alternative is to prove the disjunction "(P and Q) or (not-P and not-Q)", which itself can be inferred directly from either of its disjuncts—that is, because "iff" is truth-functional, "P iff Q" follows if P and Q have been shown to be both true, or both false.

Origin of iff and pronunciation

Usage of the abbreviation "iff" first appeared in print in John L. Kelley's 1955 book ''General Topology''. Its invention is often credited to Paul Halmos, who wrote "I invented 'iff,' for 'if and only if'—but I could never believe I was really its first inventor." It is somewhat unclear how "iff" was meant to be pronounced. In current practice, the single 'word' "iff" is almost always read as the four words "if and only if". However, in the preface of ''General Topology'', Kelley suggests that it should be read differently: "In some cases where mathematical content requires 'if and only if' and euphony demands something less I use Halmos' 'iff'". The authors of one discrete mathematics textbook suggest: "Should you need to pronounce iff, really hang on to the 'ff' so that people hear the difference from 'if'", implying that "iff" could be pronounced as .
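Because "iff" is truth-functional, the proof strategies just described can be checked mechanically over the four truth assignments. A small Python sketch:

```python
from itertools import product

# Exhaustively verify that "P iff Q" agrees with the disjunctive form
# "(P and Q) or (not-P and not-Q)" and with the conjunction of the two
# implications "if P then Q" and "if Q then P".
for p, q in product([True, False], repeat=2):
    iff = (p == q)                                    # biconditional as Boolean equality
    disjunctive = (p and q) or (not p and not q)      # the disjunction form above
    two_implications = ((not p) or q) and ((not q) or p)  # (P→Q) and (Q→P)
    assert iff == disjunctive == two_implications
print("all four truth assignments agree")
```

This also reproduces the truth table given in the Definition section: true exactly when P and Q have the same truth value.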
Usage in definitions

Technically, definitions are always "if and only if" statements; some texts — such as Kelley's ''General Topology'' — follow the strict demands of logic, and use "if and only if" or ''iff'' in definitions of new terms. However, this logically correct usage of "if and only if" is relatively uncommon, as the majority of textbooks, research papers and articles (including English Wikipedia articles) follow the special convention to interpret "if" as "if and only if", whenever a mathematical definition is involved (as in "a topological space is compact if every open cover has a finite subcover").

Distinction from "if" and "only if"

* ''"Madison will eat the fruit if it is an apple."'' (equivalent to ''"Only if Madison will eat the fruit, can it be an apple"'' or ''"Madison will eat the fruit ''←'' the fruit is an apple"'')
*: This states that Madison will eat fruits that are apples. It does not, however, exclude the possibility that Madison might also eat bananas or other types of fruit. All that is known for certain is that she will eat any and all apples that she happens upon. That the fruit is an apple is a ''sufficient'' condition for Madison to eat the fruit.
* ''"Madison will eat the fruit only if it is an apple."'' (equivalent to ''"If Madison will eat the fruit, then it is an apple"'' or ''"Madison will eat the fruit ''→'' the fruit is an apple"'')
*: This states that the only fruit Madison will eat is an apple. It does not, however, exclude the possibility that Madison will refuse an apple if it is made available, in contrast with (1), which requires Madison to eat any available apple. In this case, that a given fruit is an apple is a ''necessary'' condition for Madison to be eating it. It is not a sufficient condition since Madison might not eat all the apples she is given.
* ''"Madison will eat the fruit if and only if it is an apple."'' (equivalent to ''"Madison will eat the fruit ''↔'' the fruit is an apple"'')
*: This statement makes it clear that Madison will eat all and only those fruits that are apples. She will not leave any apple uneaten, and she will not eat any other type of fruit. That a given fruit is an apple is both a ''necessary'' and a ''sufficient'' condition for Madison to eat the fruit.

Sufficiency is the converse of necessity. That is to say, given ''P''→''Q'' (i.e. if ''P'' then ''Q''), ''P'' would be a sufficient condition for ''Q'', and ''Q'' would be a necessary condition for ''P''. Also, given ''P''→''Q'', it is true that ''¬Q''→''¬P'' (where ¬ is the negation operator, i.e. "not"). This means that the relationship between ''P'' and ''Q'', established by ''P''→''Q'', can be expressed in the following, all equivalent, ways:

:''P'' is sufficient for ''Q''
:''Q'' is necessary for ''P''
:''¬Q'' is sufficient for ''¬P''
:''¬P'' is necessary for ''¬Q''

As an example, take the first example above, which states ''P''→''Q'', where ''P'' is "the fruit in question is an apple" and ''Q'' is "Madison will eat the fruit in question". The following are four equivalent ways of expressing this very relationship:

:If the fruit in question is an apple, then Madison will eat it.
:Only if Madison will eat the fruit in question, is it an apple.
:If Madison will not eat the fruit in question, then it is not an apple.
:Only if the fruit in question is not an apple, will Madison not eat it.

Here, the second example can be restated in the form of ''if...then'' as "If Madison will eat the fruit in question, then it is an apple"; taking this in conjunction with the first example, we find that the third example can be stated as "If the fruit in question is an apple, then Madison will eat it; ''and'' if Madison will eat the fruit, then it is an apple".
In terms of Euler diagrams

[Figure: ''A'' is a proper subset of ''B''. A number is in ''A'' only if it is in ''B''; a number is in ''B'' if it is in ''A''.]
[Figure: ''C'' is a subset but not a proper subset of ''B''. A number is in ''B'' if and only if it is in ''C'', and a number is in ''C'' if and only if it is in ''B''.]

Euler diagrams show logical relationships among events, properties, and so forth. "P only if Q", "if P then Q", and "P→Q" all mean that P is a subset, either proper or improper, of Q. "P if Q", "if Q then P", and "Q→P" all mean that Q is a proper or improper subset of P. "P if and only if Q" and "Q if and only if P" both mean that the sets P and Q are identical to each other.

More general usage

Iff is used outside the field of logic as well. Wherever logic is applied, especially in mathematical discussions, it has the same meaning as above: it is an abbreviation for ''if and only if'', indicating that one statement is both necessary and sufficient for the other. This is an example of mathematical jargon (although, as noted above, ''if'' is more often used than ''iff'' in statements of definition). The elements of ''X'' are ''all and only'' the elements of ''Y'' means: "For any ''z'' in the domain of discourse, ''z'' is in ''X'' if and only if ''z'' is in ''Y''."

See also

* Equivalence relation
* Logical biconditional
* Logical equality
* Logical equivalence
* Polysyllogism

External links

* Language Log: "Just in Case"
* Southern California Philosophy for philosophy graduate students: "Just in Case"
[Spice-devel,spice-server,18/20] Allows reds_core_timer_remove to accept NULL for timer Submitted by Frediano Ziglio on Nov. 24, 2016, 5:39 p.m. Details Message ID [email protected] State New Headers show Series "Start cleaning objects on destruction" ( rev: 1 ) in Spice Not browsing as part of any series. Commit Message Frediano Ziglio Nov. 24, 2016, 5:39 p.m. Most of the times the check is done externally so moving inside the function reduce the code. This is similar to the way free(3) works. Signed-off-by: Frediano Ziglio <[email protected]> --- server/char-device.c | 20 ++++++++------------ server/inputs-channel.c | 5 ++--- server/main-channel-client.c | 4 +--- server/reds.c | 4 +++- 4 files changed, 14 insertions(+), 19 deletions(-) Patch hide | download patch | download mbox diff --git a/server/char-device.c b/server/char-device.c index 3b70066..45ce548 100644 --- a/server/char-device.c +++ b/server/char-device.c @@ -176,10 +176,8 @@ static void red_char_device_client_free(RedCharDevice *dev, { GList *l, *next; - if (dev_client->wait_for_tokens_timer) { - reds_core_timer_remove(dev->priv->reds, dev_client->wait_for_tokens_timer); - dev_client->wait_for_tokens_timer = NULL; - } + reds_core_timer_remove(dev->priv->reds, dev_client->wait_for_tokens_timer); + dev_client->wait_for_tokens_timer = NULL; g_queue_free_full(dev_client->send_queue, (GDestroyNotify)red_pipe_item_unref); @@ -990,10 +988,9 @@ static void red_char_device_init_device_instance(RedCharDevice *self) g_return_if_fail(self->priv->reds); - if (self->priv->write_to_dev_timer) { - reds_core_timer_remove(self->priv->reds, self->priv->write_to_dev_timer); - self->priv->write_to_dev_timer = NULL; - } + reds_core_timer_remove(self->priv->reds, self->priv->write_to_dev_timer); + self->priv->write_to_dev_timer = NULL; + if (self->priv->sin == NULL) { return; } @@ -1081,10 +1078,9 @@ red_char_device_finalize(GObject *object) { RedCharDevice *self = RED_CHAR_DEVICE(object); - if 
(self->priv->write_to_dev_timer) { - reds_core_timer_remove(self->priv->reds, self->priv->write_to_dev_timer); - self->priv->write_to_dev_timer = NULL; - } + reds_core_timer_remove(self->priv->reds, self->priv->write_to_dev_timer); + self->priv->write_to_dev_timer = NULL; + write_buffers_queue_free(&self->priv->write_queue); write_buffers_queue_free(&self->priv->write_bufs_pool); self->priv->cur_pool_size = 0; diff --git a/server/inputs-channel.c b/server/inputs-channel.c index 99c2888..bea0ddf 100644 --- a/server/inputs-channel.c +++ b/server/inputs-channel.c @@ -625,9 +625,8 @@ inputs_channel_finalize(GObject *object) InputsChannel *self = INPUTS_CHANNEL(object); RedsState *reds = red_channel_get_server(RED_CHANNEL(self)); - if (self->key_modifiers_timer) { - reds_core_timer_remove(reds, self->key_modifiers_timer); - } + reds_core_timer_remove(reds, self->key_modifiers_timer); + G_OBJECT_CLASS(inputs_channel_parent_class)->finalize(object); } diff --git a/server/main-channel-client.c b/server/main-channel-client.c index 576b31f..15e168d 100644 --- a/server/main-channel-client.c +++ b/server/main-channel-client.c @@ -189,9 +189,7 @@ static void main_channel_client_finalize(GObject *object) RedsState *reds = red_channel_get_server(red_channel_client_get_channel(RED_CHANNEL_CLIENT(object))); - if (self->priv->ping_timer) { - reds_core_timer_remove(reds, self->priv->ping_timer); - } + reds_core_timer_remove(reds, self->priv->ping_timer); #endif G_OBJECT_CLASS(main_channel_client_parent_class)->finalize(object); } diff --git a/server/reds.c b/server/reds.c index d9af413..19af775 100644 --- a/server/reds.c +++ b/server/reds.c @@ -4212,7 +4212,9 @@ void reds_core_timer_remove(RedsState *reds, g_return_if_fail(reds != NULL); g_return_if_fail(reds->core.timer_remove != NULL); - return reds->core.timer_remove(&reds->core, timer); + if (timer) { + reds->core.timer_remove(&reds->core, timer); + } } void reds_update_client_mouse_allowed(RedsState *reds)
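The convention this patch adopts (a cleanup function that accepts a NULL handle and does nothing, just as free(3) does) is language-independent, and it is what lets every caller in the diff drop its own `if (timer)` guard. Below is a minimal Python analog of the pattern; all names are hypothetical and not part of the Spice API:

```python
# Sketch of the free(3)-style convention: the remove function tolerates an
# empty handle, so call sites need no guard before calling it.
def timer_remove(active_timers, timer):
    if timer is None:          # like reds_core_timer_remove after the patch
        return                 # removing "nothing" is a safe no-op
    active_timers.discard(timer)

active = {"ping", "tokens"}
timer_remove(active, None)     # no guard needed at the call site
timer_remove(active, "ping")
print(sorted(active))  # ['tokens']
```

Centralising the check in one place shrinks every caller and removes a class of copy-pasted guards, at the cost of hiding a no-op call that a reviewer might otherwise question.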
Reasoning about effectful programs and evaluation order

Type: Thesis

Abstract:
Program transformations have various applications, such as in compiler optimizations. These transformations are often effect-dependent: replacing one program with another relies on some restriction on the side-effects of subprograms. For example, we cannot eliminate a dead computation that raises an exception, or a duplicated computation that prints to the screen. Effect-dependent program transformations can be described formally using effect systems, which annotate types with information about the side-effects of expressions.

In this thesis, we extend previous work on effect systems and correctness of effect-dependent transformations in two related directions. First, we consider evaluation order. Effect systems for call-by-value languages are well-known, but are not sound for other evaluation orders. We describe sound and precise effect systems for various evaluation orders, including call-by-name. We also describe an effect system for Levy's call-by-push-value, and show that this subsumes those for call-by-value and call-by-name. This naturally leads us to consider effect-dependent transformations that replace one evaluation order with another. We show how to use the call-by-push-value effect system to prove the correctness of transformations that replace call-by-value with call-by-name, using an argument based on logical relations. Finally, we extend call-by-push-value to additionally capture call-by-need. We use our extension to show a classic example of a relationship between evaluation orders: if the side-effects are restricted to (at most) nontermination, then call-by-name is equivalent to call-by-need.

The second direction we consider is non-invertible transformations. A program transformation is non-invertible if only one direction is correct. Such transformations arise, for example, when considering undefined behaviour, nondeterminism, or concurrency. We present a general framework for verifying non-invertible effect-dependent transformations, based on our effect system for call-by-push-value. The framework includes a non-symmetric notion of correctness for effect-dependent transformations, and a denotational semantics based on order-enriched category theory that can be used to prove correctness.

Date: 2019-10-01
Advisors: Mycroft, Alan
Keywords: computational effects, evaluation order, call-by-push-value, call-by-need, categorical semantics
Qualification: Doctor of Philosophy (PhD)
Awarding Institution: University of Cambridge
Sponsorship: EPSRC (1789520)
What You Should Know About Prostate Cancer

Facts About Prostate Cancer

As with most cancers, there are plenty of statistics to help define the prevalence of prostate cancer. For instance, according to the American Cancer Society:

• Approximately 1 in 9 men will receive a prostate cancer diagnosis in their lifetime.
• Every year, there are roughly 175,000 new cases diagnosed.
• Approximately 1 in 41 men will die of prostate cancer.
• Every year, there are roughly 32,000 deaths from prostate cancer.

Aside from skin and lung cancer, prostate cancer is the most common cancer affecting men in the United States. Of course, these numbers mask the human toll that prostate or any other cancer takes, not just on the patient but also on those closest to them. Let's explore what exactly prostate cancer is, its symptoms and treatments, and whether the disease is treatable with the potential for positive outcomes.

What is Prostate Cancer?

To understand prostate cancer, we must first explore what the prostate is and what it does. The prostate is a small gland in the pelvic region that is part of the male reproductive system. The gland rests just below the bladder, in front of the rectum. Its primary purpose is to produce fluid for semen.

Prostate cancer develops when malignant cells form within tissue in the prostate. The development, however, is most often slow-moving. In some instances, autopsies revealed cancer developing at such a slow pace that older patients died of other causes without knowing or being affected by undiagnosed prostate cancer.

The vast majority of cases are adenocarcinomas, or cancers formed within the gland cells. Other types include neuroendocrine tumors, sarcomas, small cell carcinomas, and transitional cell carcinomas, although each is rare.

Even with its prevalence, doctors do not fully understand what causes prostate cancer. They do know that as a man ages, his chances of developing prostate cancer increase. The majority of cases occur in males over the age of 50. African-American men are at higher risk of developing prostate cancer, as are those with a family history of the disease, such as a father or brother who received a diagnosis. Lifestyle and environment are also considered potential contributing factors. Some studies even suggest a link between obesity and a recurrence of prostate cancer or an increased risk of death.

Prostate Cancer Symptoms and Diagnosis

As evidenced by the slow-moving nature of the disease, symptoms and warning signs can be few and far between. Tumors that develop don't necessarily have anything against which to put pressure, resulting in no pain. There are instances where certain symptoms may be cause for alarm and should prompt a visit to the doctor, including:

• Frequent or urgent need to urinate
• Problems with the flow of urine (unable to start or stop)
• Weak, inconsistent, or painful urination, or urination accompanied by a burning sensation
• Problems having or sustaining an erection
• Reduction in the amount of ejaculate produced, or painful ejaculation
• Blood within urine or semen
• Pressure or pain in the rectum
• Stiffness or any measure of pain in the pelvic region, lower back, hips, or thighs

Because outward warning signs are rare, these symptoms may have nothing to do with prostate cancer. They could indicate benign prostatic hyperplasia (BPH), a non-cancerous enlargement of the prostate that may increase your risk of cancer. The symptoms could also point to prostatitis, a painful condition that most often leads to urinary tract infection. As such, any abnormalities in urination or unexplained pain should remain a cause for concern. Seek the advice of a physician if any symptoms occur.

Screening

With prostate cancer being something akin to an introverted disease, the best form of detection is regular screening. Starting at age 50 – earlier for those with an established family history or in a high-risk group – men should seek out routine screening for prostate cancer. There are two tests used for routine screening: the digital rectal exam (DRE) and the prostate-specific antigen (PSA) test.

Digital Rectal Exam

The first prostate cancer test is the DRE. For this exam, a doctor inserts a lubricated, gloved finger into your rectum. With the prostate in front of the rectum, the physician can determine if the prostate is enlarged. The test is relatively painless. If the results indicate enlargement of the prostate, further tests would be necessary to confirm the scope and severity of the increased size.

Prostate-Specific Antigen (PSA)

The PSA test requires blood to be drawn and tested at a lab. The prostate produces an antigen that, when abnormally high, may indicate the presence of cancer. Although there is no generally accepted "normal" PSA level, the following is often used as a guide when interpreting test results:

• 0 to 2.5 nanograms per milliliter (ng/mL) is considered safe
• 2.6 to 4 ng/mL is considered safe for most
• 4 to 10 ng/mL indicates a potential 25% chance of prostate cancer
• Greater than 10 ng/mL indicates a potential 50% or greater chance of prostate cancer

In interpreting the test results, a doctor also takes into account the following:

• Age
• Size of prostate
• Current or prior medical conditions, including the above-mentioned BPH or prostatitis, that might increase PSA levels
• Any medications that might also increase PSA levels

An elevated PSA level doesn't always indicate the presence of cancer. Prostate enlargement is a common sign of general aging. Regular testing, though, can help identify if a problem is present and more quickly determine the next steps to take.

Additional tests include transrectal ultrasound and transrectal MRI. A transrectal biopsy may also be performed to determine not only whether cancer is present but, if so, the grade of the cancer. The grade applied is also referred to as the Gleason score.

Prostate Cancer Treatment

Should prostate cancer be diagnosed, several critical points help determine the severity of the cancer and the treatment options. First, after a positive diagnosis, tests are conducted to determine if the cancer has spread, either within the prostate itself or to other parts of the body. These tests may include additional MRIs or scans, pelvic lymphadenectomy (where lymph nodes are removed from the pelvis and the tissue examined for cancer cells), or seminal vesicle biopsy.

When it comes to treatment, multiple options exist. Treatment is determined by the size of the cancer and how far it has spread, the potential for the cancer to grow, and the patient's age and current level of health. The most common treatments include:

• Watchful Waiting or Active Surveillance
• Surgery
• Radiation Therapy
• Cryotherapy
• Hormone Therapy
• Chemotherapy
• Vaccine Treatment
• Prevention or Treatment of the Cancer Spreading to Bones

There are also several alternative methods that a patient may seek. There are, of course, risks and side effects associated with all treatments, including a reduction in sex drive, erectile dysfunction, or the inability to impregnate a woman. Bowel and bladder issues are also common with prostate cancer treatments and include a leaky bladder or loss of bladder control.

Can Prostate Cancer be Prevented?

Although there is no guaranteed way to prevent prostate cancer, experts do agree that you can reduce risks and promote better outcomes with changes in your lifestyle. First and foremost, a patient who smokes should quit. Smoking not only raises the risk of a recurring instance of prostate cancer, it also increases the risk of dying from it.

Exercise is also a critical factor for potentially keeping more aggressive cancer at bay. Of course, pairing a regular routine with healthy eating will further your chances of keeping your risk of prostate cancer low. Low-fat diets, with plenty of fruits and vegetables, certainly won't do you any harm even if research is uneven as to their impact on limiting cancer risks.

As for the potential link between prostate cancer and obesity, Dr. Stephen Freedland, director of the Cedars-Sinai Center for Integrated Research in Cancer and Lifestyle (CIRCL), notes:

"Patients often ask what they can do to combat their prostate cancers. The number one thing I talk to them about is weight loss. Among lifestyle factors, obesity is by far the strongest and clearest link to an aggressive and ultimately deadly course for this disease."

What are the Positive Outcomes for Prostate Cancer?

There's little debate that prostate cancer is a very serious disease – one that has made a profound impact on men and their families. Even though its direct causes remain unknown, and data is inconclusive as to robust methods of prevention, prostate cancer is a disease where favorable outcomes are possible. Through regular testing based on age and greater awareness of risk factors, prostate cancer does not have to be a death sentence.

A healthy lifestyle, including diet, exercise, maintaining a healthy body weight, and avoiding known risks such as smoking, is a positive step towards improving your overall health. Even if you are diagnosed with prostate cancer, you stand a far better chance of fighting it if you are otherwise healthy. If you believe you're at risk of prostate cancer, talk to your doctor. Even if you possess lower risk factors now, those risks can increase as you age, so it's critical to educate yourself.
What Is Salinity in Water?

Salinity is a word that comes up often when discussing water quality, but you may not be familiar with what the term means or how it impacts your drinking water. Salinity is the concentration of dissolved salts in a body of water, typically expressed in parts per thousand. While table salt is sodium chloride (NaCl), salinity in drinking water accounts for all mineral salts, including magnesium, calcium, potassium, and sulfate in addition to sodium and chloride.

While all drinking water supplies contain some level of salinity, homeowners and business owners add concentrated levels of salt to the water they discharge, which in turn hurts the environment and future water supplies. For example, salt-based water softeners and certain soaps, detergents, cleaning products, and shampoos increase the salinity of the water that flows from the drains in your home. Personal use increases the salinity of your water, which is very difficult to filter at most wastewater treatment plants. Often, the increased salinity is not effectively treated and is simply emitted into bodies of water along with the rest of the discharge. Mineral salts cannot easily be removed from treated wastewater because the salt is dissolved by the time it arrives at the treatment plant. Agricultural and industrial activities also contribute to this steady increase in water salinity throughout the nation.

How Does Salinity Affect Us?

When salinity rises in wastewater, it can permanently damage the ecosystems surrounding your community. Lake and river systems can break down over time, resulting in irreversible loss of wildlife and plant life. Increased salinity can also require additional costly systems to be installed at your local wastewater treatment plant in order to provide potable water. These systems would drive up your water and sewage bills. Taking action to reduce water salinity keeps costs down, prolongs the life of your local water infrastructure, and helps the planet.

How to Decrease Your Water Salinity

• Switch from liquid fabric softener to dryer sheets, and use liquid detergents for laundry and for the dishwasher instead of powders. These are simple and affordable lifestyle changes.
• Use cleaning products that are environmentally friendly and contain few mineral salts. Avoid cleaning products that contain chlorine, sodium, phosphates, or artificial colors and fragrances. Use less cleaning product when you clean – avoid spraying or pouring too much.
• Switch from salt-based water softeners to water softener alternatives with salt-free technology. This is the most important step, as salt-based water softeners significantly contribute to increased salinity in a body of water.

Our water softener alternatives with salt-free technology use no salt at all and operate without the use of electricity. By neutralizing the scaling and slippery effects of mineral ions without adding sodium ions, our systems provide an eco-friendly solution that keeps salt out of your drinking water. The ongoing expenses of salt bags and maintenance also make salt-based water softeners a less viable option for your home.

With that said, some homes have naturally salty water due to salt-water intrusion from the ocean, especially along the coast. In that case, a reverse osmosis (RO) system would be beneficial to filter out harmful contaminants and chemicals. Take the necessary steps to reduce the amount of salt minerals being added to your wastewater, and test your drinking water to ensure nothing else is lurking inside it.
#!/usr/bin/python3

import subprocess
import argparse
import difflib
import filecmp
import fnmatch
import json
import sys
import re
import os

fmtr_class = argparse.ArgumentDefaultsHelpFormatter
parser = argparse.ArgumentParser(prog = 'nasm-t.py',
                                 formatter_class = fmtr_class)

parser.add_argument('-d', '--directory',
                    dest = 'dir', default = './travis/test',
                    help = 'Directory with tests')

parser.add_argument('--nasm',
                    dest = 'nasm', default = './nasm',
                    help = 'Nasm executable to use')

parser.add_argument('--hexdump',
                    dest = 'hexdump', default = '/usr/bin/hexdump',
                    help = 'Hexdump executable to use')

sp = parser.add_subparsers(dest = 'cmd')
for cmd in ['run']:
    spp = sp.add_parser(cmd, help = 'Run test cases')
    spp.add_argument('-t', '--test',
                     dest = 'test',
                     help = 'Run the selected test only',
                     required = False)

for cmd in ['new']:
    spp = sp.add_parser(cmd, help = 'Add a new test case')
    spp.add_argument('--description',
                     dest = 'description', default = "Description of a test",
                     help = 'Description of a test',
                     required = False)
    spp.add_argument('--id',
                     dest = 'id',
                     help = 'Test identifier/name',
                     required = True)
    spp.add_argument('--format',
                     dest = 'format', default = 'bin',
                     help = 'Output format',
                     required = False)
    spp.add_argument('--source',
                     dest = 'source',
                     help = 'Source file',
                     required = False)
    spp.add_argument('--option',
                     dest = 'option', default = '-Ox',
                     help = 'NASM options',
                     required = False)
    spp.add_argument('--ref',
                     dest = 'ref',
                     help = 'Test reference',
                     required = False)
    spp.add_argument('--error',
                     dest = 'error',
                     help = '"y" if test is supposed to fail or "i" to ignore',
                     required = False)
    spp.add_argument('--output',
                     dest = 'output', default = 'y',
                     help = 'Output (compiled) file name (or "y")',
                     required = False)
    spp.add_argument('--stdout',
                     dest = 'stdout', default = 'y',
                     help = 'Filename of stdout file (or "y")',
                     required = False)
    spp.add_argument('--stderr',
                     dest = 'stderr', default = 'y',
                     help = 'Filename of stderr file (or "y")',
                     required = False)

for cmd in ['list']:
    spp = sp.add_parser(cmd, help = 'List test cases')

for cmd in ['update']:
    spp = sp.add_parser(cmd, help = 'Update test cases with new compiler')
    spp.add_argument('-t', '--test',
                     dest = 'test',
                     help = 'Update the selected test only',
                     required = False)

map_fmt_ext = {
    'bin':      '.bin',
    'elf':      '.o',
    'elf64':    '.o',
    'elf32':    '.o',
    'elfx32':   '.o',
    'ith':      '.ith',
    'srec':     '.srec',
    'obj':      '.obj',
    'win32':    '.obj',
    'win64':    '.obj',
    'coff':     '.obj',
    'macho':    '.o',
    'macho32':  '.o',
    'macho64':  '.o',
    'aout':     '.out',
    'aoutb':    '.out',
    'as86':     '.o',
    'rdf':      '.rdf',
}

args = parser.parse_args()

if args.cmd == None:
    parser.print_help()
    sys.exit(1)

def read_stdfile(path):
    with open(path, "rb") as f:
        data = f.read().decode("utf-8").strip("\n")
        f.close()
        return data

#
# Check if descriptor has mandatory fields
def is_valid_desc(desc):
    if desc == None:
        return False
    if 'description' not in desc:
        return False
    if desc['description'] == "":
        return False
    return True

#
# Expand ref/id in descriptors array
def expand_templates(desc_array):
    desc_ids = { }
    for d in desc_array:
        if 'id' in d:
            desc_ids[d['id']] = d
    for i, d in enumerate(desc_array):
        if 'ref' in d and d['ref'] in desc_ids:
            ref = desc_ids[d['ref']]
            own = d.copy()
            desc_array[i] = ref.copy()
            for k, v in own.items():
                desc_array[i][k] = v
            del desc_array[i]['id']
    return desc_array

def prepare_desc(desc, basedir, name, path):
    if not is_valid_desc(desc):
        return False
    #
    # Put private fields
    desc['_base-dir'] = basedir
    desc['_json-file'] = name
    desc['_json-path'] = path
    desc['_test-name'] = basedir + os.sep + name[:-5]
    #
    # If no target provided never update
    if 'target' not in desc:
        desc['target'] = []
        desc['update'] = 'false'
    #
    # Which code to expect when nasm finishes
    desc['_wait'] = 0
    if 'error' in desc:
        if desc['error'] == 'expected':
            desc['_wait'] = 1
    #
    # Walk over targets and generate match templates
    # if were not provided yet
    for d in desc['target']:
        if 'output' in d and not 'match' in d:
            d['match'] = d['output'] + ".t"
    return True

def read_json(path):
    desc = None
    try:
        with open(path, "rb") as f:
            try:
                desc = json.loads(f.read().decode("utf-8").strip("\n"))
            except:
                desc = None
            finally:
                f.close()
    except:
        pass
    return desc

def read_desc(basedir, name):
    path = basedir + os.sep + name
    desc = read_json(path)
    desc_array = []
    if type(desc) == dict:
        if prepare_desc(desc, basedir, name, path) == True:
            desc_array += [desc]
    elif type(desc) == list:
        expand_templates(desc)
        for de in desc:
            if prepare_desc(de, basedir, name, path) == True:
                desc_array += [de]
    return desc_array

def collect_test_desc_from_file(path):
    if not fnmatch.fnmatch(path, '*.json'):
        path += '.json'
    basedir = os.path.dirname(path)
    filename = os.path.basename(path)
    return read_desc(basedir, filename)

def collect_test_desc_from_dir(basedir):
    desc_array = []
    if os.path.isdir(basedir):
        for filename in os.listdir(basedir):
            if os.path.isdir(basedir + os.sep + filename):
                desc_array += collect_test_desc_from_dir(basedir + os.sep + filename)
            elif fnmatch.fnmatch(filename, '*.json'):
                desc = read_desc(basedir, filename)
                if desc == None:
                    continue
                desc_array += desc
        desc_array.sort(key = lambda x: x['_test-name'])
    return desc_array

if args.cmd == 'list':
    fmt_entry = '%-32s %s'
    desc_array = collect_test_desc_from_dir(args.dir)
    print(fmt_entry % ('Name', 'Description'))
    for desc in desc_array:
        print(fmt_entry % (desc['_test-name'], desc['description']))

def test_abort(test, message):
    print("\t%s: %s" % (test, message))
    print("=== Test %s ABORT ===" % (test))
    sys.exit(1)
    return False

def test_fail(test, message):
    print("\t%s: %s" % (test, message))
    print("=== Test %s FAIL ===" % (test))
    return False

def test_skip(test, message):
    print("\t%s: %s" % (test, message))
    print("=== Test %s SKIP ===" % (test))
    return True

def test_over(test):
    print("=== Test %s ERROR OVER ===" % (test))
    return True

def test_pass(test):
    print("=== Test %s PASS ===" % (test))
    return True

def test_updated(test):
    print("=== Test %s UPDATED ===" % (test))
    return True

def run_hexdump(path):
    p = subprocess.Popen([args.hexdump, "-C", path],
                         stdout = subprocess.PIPE,
                         close_fds = True)
    if p.wait() == 0:
        return p
    return None

def show_std(stdname, data):
    print("\t--- %s" % (stdname))
    for i in data.split("\n"):
        print("\t%s" % i)
    print("\t---")

def cmp_std(from_name, from_data, match_name, match_data):
    if from_data != match_data:
        print("\t--- %s" % (from_name))
        for i in from_data.split("\n"):
            print("\t%s" % i)
        print("\t--- %s" % (match_name))
        for i in match_data.split("\n"):
            print("\t%s" % i)

        diff = difflib.unified_diff(from_data.split("\n"), match_data.split("\n"),
                                    fromfile = from_name, tofile = match_name)
        for i in diff:
            print("\t%s" % i.strip("\n"))
        print("\t---")
        return False
    return True

def show_diff(test, patha, pathb):
    pa = run_hexdump(patha)
    pb = run_hexdump(pathb)
    if pa == None or pb == None:
        return test_fail(test, "Can't create dumps")
    sa = pa.stdout.read().decode("utf-8").strip("\n")
    sb = pb.stdout.read().decode("utf-8").strip("\n")
    print("\t--- hexdump %s" % (patha))
    for i in sa.split("\n"):
        print("\t%s" % i)
    print("\t--- hexdump %s" % (pathb))
    for i in sb.split("\n"):
        print("\t%s" % i)
    pa.stdout.close()
    pb.stdout.close()

    diff = difflib.unified_diff(sa.split("\n"), sb.split("\n"),
                                fromfile = patha, tofile = pathb)
    for i in diff:
        print("\t%s" % i.strip("\n"))
    print("\t---")
    return True

def prepare_run_opts(desc):
    opts = []

    if 'format' in desc:
        opts += ['-f', desc['format']]
    if 'option' in desc:
        opts += desc['option'].split(" ")
    for t in desc['target']:
        if 'output' in t:
            if 'option' in t:
                opts += t['option'].split(" ") + [desc['_base-dir'] + os.sep + t['output']]
            else:
                opts += ['-o', desc['_base-dir'] + os.sep + t['output']]
        if 'stdout' in t or 'stderr' in t:
            if 'option' in t:
                opts += t['option'].split(" ")
    if 'source' in desc:
        opts += [desc['_base-dir'] + os.sep + desc['source']]
    return opts

def exec_nasm(desc):
    print("\tProcessing %s" % (desc['_test-name']))
    opts = [args.nasm] + prepare_run_opts(desc)

    nasm_env = os.environ.copy()
    nasm_env['NASMENV'] = '--reproducible'

    desc_env = desc.get('environ')
    if desc_env:
        for i in desc_env:
            v = i.split('=')
            if len(v) == 2:
                nasm_env[v[0]] = v[1]
            else:
                nasm_env[v[0]] = None

    print("\tExecuting %s" % (" ".join(opts)))

    pnasm = subprocess.Popen(opts,
                             stdout = subprocess.PIPE,
                             stderr = subprocess.PIPE,
                             close_fds = True,
                             env = nasm_env)
    if pnasm == None:
        test_fail(desc['_test-name'], "Unable to execute test")
        return None, None, None

    stderr = pnasm.stderr.read(4194304).decode("utf-8").strip("\n")
    stdout = pnasm.stdout.read(4194304).decode("utf-8").strip("\n")

    pnasm.stdout.close()
    pnasm.stderr.close()

    wait_rc = pnasm.wait()

    if desc['_wait'] != wait_rc:
        if stdout != "":
            show_std("stdout", stdout)
        if stderr != "":
            show_std("stderr", stderr)
        test_fail(desc['_test-name'],
                  "Unexpected ret code: " + str(wait_rc))
        return None, None, None
    return pnasm, stdout, stderr

def test_run(desc):
    print("=== Running %s ===" % (desc['_test-name']))

    if 'disable' in desc:
        return test_skip(desc['_test-name'], desc["disable"])

    pnasm, stdout, stderr = exec_nasm(desc)
    if pnasm == None:
        return False

    for t in desc['target']:
        if 'output' in t:
            output = desc['_base-dir'] + os.sep + t['output']
            match = desc['_base-dir'] + os.sep + t['match']
            if desc['_wait'] == 1:
                continue
            print("\tComparing %s %s" % (output, match))
            if filecmp.cmp(match, output) == False:
                show_diff(desc['_test-name'], match, output)
                return test_fail(desc['_test-name'],
                                 match + " and " + output + " files are different")
        elif 'stdout' in t:
            print("\tComparing stdout")
            match = desc['_base-dir'] + os.sep + t['stdout']
            match_data = read_stdfile(match)
            if match_data == None:
                return test_fail(desc['_test-name'], "Can't read " + match)
            if cmp_std(match, match_data, 'stdout', stdout) == False:
                return test_fail(desc['_test-name'], "Stdout mismatch")
            else:
                stdout = ""
        elif 'stderr' in t:
            print("\tComparing stderr")
            match = desc['_base-dir'] + os.sep + t['stderr']
            match_data = read_stdfile(match)
            if match_data == None:
                return test_fail(desc['_test-name'], "Can't read " + match)
            if cmp_std(match, match_data, 'stderr', stderr) == False:
                return test_fail(desc['_test-name'], "Stderr mismatch")
            else:
                stderr = ""

    if stdout != "":
        show_std("stdout", stdout)
        return test_fail(desc['_test-name'], "Stdout is not empty")

    if stderr != "":
        show_std("stderr", stderr)
        return test_fail(desc['_test-name'], "Stderr is not empty")

    return test_pass(desc['_test-name'])

#
# Compile sources and generate new targets
def test_update(desc):
    print("=== Updating %s ===" % (desc['_test-name']))

    if 'update' in desc and desc['update'] == 'false':
        return test_skip(desc['_test-name'], "No output provided")

    if 'disable' in desc:
        return test_skip(desc['_test-name'], desc["disable"])

    pnasm, stdout, stderr = exec_nasm(desc)
    if pnasm == None:
        return False

    for t in desc['target']:
        if 'output' in t:
            output = desc['_base-dir'] + os.sep + t['output']
            match = desc['_base-dir'] + os.sep + t['match']
            print("\tMoving %s to %s" % (output, match))
            os.rename(output, match)
        if 'stdout' in t:
            match = desc['_base-dir'] + os.sep + t['stdout']
            print("\tMoving %s to %s" % ('stdout', match))
            with open(match, "wb") as f:
                f.write(stdout.encode("utf-8"))
                f.close()
        if 'stderr' in t:
            match = desc['_base-dir'] + os.sep + t['stderr']
            print("\tMoving %s to %s" % ('stderr', match))
            with open(match, "wb") as f:
                f.write(stderr.encode("utf-8"))
                f.close()

    return test_updated(desc['_test-name'])

#
# Create a new empty test case
if args.cmd == 'new':
    #
    # If no source provided create one
    # from (ID which is required)
    if not args.source:
        args.source = args.id + ".asm"

    #
    # Emulate "touch" on source file
    path_asm = args.dir + os.sep + args.source
    print("\tCreating %s" % (path_asm))
    open(path_asm, 'a').close()

    #
    # Fill the test descriptor
    #
    # FIXME: We should probably use Jinja
    path_json = args.dir + os.sep + args.id + ".json"
    print("\tFilling descriptor %s" % (path_json))
    with open(path_json, 'wb') as f:
        f.write("[\n\t{\n".encode("utf-8"))
        acc = []
        if args.description:
            acc.append("\t\t\"description\": \"{}\"".format(args.description))
        acc.append("\t\t\"id\": \"{}\"".format(args.id))
        if args.format:
            acc.append("\t\t\"format\": \"{}\"".format(args.format))
        acc.append("\t\t\"source\": \"{}\"".format(args.source))
        if args.option:
            acc.append("\t\t\"option\": \"{}\"".format(args.option))
        if args.ref:
            acc.append("\t\t\"ref\": \"{}\"".format(args.ref))
        if args.error == 'y':
            acc.append("\t\t\"error\": \"expected\"")
        elif args.error == 'i':
            acc.append("\t\t\"error\": \"over\"")
        f.write(",\n".join(acc).encode("utf-8"))
        if args.output or args.stdout or args.stderr:
            acc = []
            if args.output:
                if args.output == 'y':
                    if args.format in map_fmt_ext:
                        args.output = args.id + map_fmt_ext[args.format]
                acc.append("\t\t\t{{ \"output\": \"{}\" }}".format(args.output))
            if args.stdout:
                if args.stdout == 'y':
                    args.stdout = args.id + '.stdout'
                acc.append("\t\t\t{{ \"stdout\": \"{}\" }}".format(args.stdout))
            if args.stderr:
                if args.stderr == 'y':
                    args.stderr = args.id + '.stderr'
                acc.append("\t\t\t{{ \"stderr\": \"{}\" }}".format(args.stderr))
            f.write(",\n".encode("utf-8"))
            f.write("\t\t\"target\": [\n".encode("utf-8"))
            f.write(",\n".join(acc).encode("utf-8"))
            f.write("\n\t\t]".encode("utf-8"))
        f.write("\n\t}\n]\n".encode("utf-8"))
        f.close()

if args.cmd == 'run':
    desc_array = []
    if args.test == None:
        desc_array = collect_test_desc_from_dir(args.dir)
    else:
        desc_array = collect_test_desc_from_file(args.test)
    if len(desc_array) == 0:
        test_abort(args.test, "Can't obtain test descriptors")
    for desc in desc_array:
        if test_run(desc) == False:
            if 'error' in desc and desc['error'] == 'over':
                test_over(desc['_test-name'])
            else:
                test_abort(desc['_test-name'], "Error detected")

if args.cmd == 'update':
    desc_array = []
    if args.test == None:
        desc_array = collect_test_desc_from_dir(args.dir)
    else:
        desc_array = collect_test_desc_from_file(args.test)
    if len(desc_array) == 0:
        test_abort(args.test, "Can't obtain test descriptors")
    for desc in desc_array:
        if test_update(desc) == False:
            if 'error' in desc and desc['error'] == 'over':
                test_over(desc['_test-name'])
            else:
                test_abort(desc['_test-name'], "Error detected")
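The `ref`/`id` template mechanism in the script above is easy to miss in passing: a descriptor carrying a `ref` starts from a copy of the referenced descriptor, its own fields then override the inherited ones, and the inherited `id` is dropped. A standalone sketch of that same expansion logic, applied to a pair of made-up descriptors:

```python
def expand_templates(desc_array):
    # Same logic as in nasm-t.py: entries with "ref" inherit every field of
    # the referenced entry, then their own fields win; the inherited "id"
    # is removed so the expanded entry is not itself a reference target.
    desc_ids = {d['id']: d for d in desc_array if 'id' in d}
    for i, d in enumerate(desc_array):
        if 'ref' in d and d['ref'] in desc_ids:
            ref = desc_ids[d['ref']]
            own = d.copy()
            desc_array[i] = ref.copy()
            desc_array[i].update(own)
            del desc_array[i]['id']
    return desc_array


descs = [
    {'id': 'base', 'description': 'Base case', 'format': 'bin', 'option': '-Ox'},
    {'ref': 'base', 'description': 'Same source, -O0', 'option': '-O0'},
]
expand_templates(descs)
# descs[1] now carries format 'bin' from the template, but its own
# description and option; the inherited 'id' is gone.
print(descs[1])
```

Note that the second entry keeps its `ref` key after expansion, which is harmless because the runner never consults `ref` again.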
CloudWatch Internal Agent Configuration

0

We are a tier 1 service owner, and because of high traffic we want to push client logs to client-owned accounts. I am using the CloudWatch Internal Agent in Amazon and wanted to check if this is possible.

This is my agent configuration:

{
    "agent": {
        "metrics_collection_interval": 1,
        "region": "us-west-1",
        "logfile": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log",
        "debug": false,
        "run_as_user": "nobody"
    },
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "/apollo/env/SampleDOTAdopter/var/output/logs/website-log-pusher*",
                        "log_group_name": "SampleDOTAdopter/{stage}/application_log",
                        "log_stream_name": "{hostname}",
                        "timezone": "UTC",
                        "retention_in_days": 30
                    }
                ]
            }
        },
        "credentials": {
            "role_arn": "**:role/SampleDOTAdopter"
        }
    },
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "/apollo/env/SampleDOTAdopter/var/output/logs/service_log.*",
                        "log_group_name": "SampleDOTAdopter/{stage}/service_log.",
                        "log_stream_name": "{hostname}",
                        "timezone": "UTC",
                        "retention_in_days": 30
                    }
                ]
            }
        },
        "credentials": {
            "role_arn": "**:role/DOTPlayground"
        }
    }
}

asked a year ago · 300 views

1 Answer

0

I don't know with the CW Agent, I haven't tried that. But with something like Fluent Bit you most definitely can do that. I have started off a blog post in fact here to document just that. I typically output all the logs to CW Logs in the same account, but you can change that behaviour by using an IAM role to assume to publish the logs into another account. I prefer to use Firehose to send the logs into another account, which lands the log files to S3, but that pretty much achieves the same thing.

answered a year ago
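One thing worth noting about the configuration above, separate from the cross-account question: it declares the top-level `logs` key twice. Standard JSON parsers keep only the last duplicate key, so the first `collect_list` and its `role_arn` would be silently discarded before the agent ever sees them. A quick illustration in Python (the config format itself is agent-specific; the duplicate-key behaviour is generic JSON):

```python
import json

# Trimmed stand-in for the agent config above: two top-level "logs" keys.
doc = '{"agent": {"debug": false}, "logs": {"first": 1}, "logs": {"second": 2}}'

parsed = json.loads(doc)
# Only the last duplicate survives parsing:
print(parsed["logs"])  # {'second': 2}
```

Merging both `collect_list` entries under a single `logs` key would at least make both files visible to whatever tool consumes the config.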
What is a moment arm in torque?

In the context of torque, a moment arm, also known as a lever arm or torque arm, refers to the perpendicular distance between the axis of rotation and the line of action where the force is applied. It plays an important role in determining the magnitude of the torque, or rotational force, exerted on an object.

The formula to calculate torque is:

Torque = Force × Moment Arm

Where:
– Torque is the rotational force or moment, typically measured in units such as Newton-meters (Nm) or foot-pounds (lb-ft).
– Force is the applied force acting on the object, measured in units of force such as Newtons (N) or pounds (lb).
– Moment Arm is the perpendicular distance between the axis of rotation and the line of action where the force is applied. It is measured in units such as meters (m) or feet (ft).

The concept of the moment arm can be illustrated with a simple example: a wrench being used to tighten a bolt. When you apply a force to the handle of the wrench, the moment arm is the distance between the center of rotation (the bolt) and the point where you grip the handle. The longer the moment arm, the greater the leverage and torque you can exert on the bolt.

In summary, the moment arm in torque refers to the distance between the axis of rotation and the point where the force is applied. It determines the effectiveness of the force in producing rotational motion, or torque, on an object. A longer moment arm allows for greater leverage and torque, while a shorter moment arm results in less torque.
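The formula above is a one-line computation. A small sketch of the wrench example, with made-up force and handle lengths, assuming the force is applied perpendicular to the arm:

```python
def torque(force_newtons, moment_arm_m):
    """Torque (N*m) for a force applied perpendicular to the moment arm."""
    return force_newtons * moment_arm_m


# Same 40 N grip force on two wrench handles: doubling the moment arm
# (the distance from the bolt to your hand) doubles the torque.
short_handle = torque(40.0, 0.25)  # 10.0 N*m
long_handle = torque(40.0, 0.5)    # 20.0 N*m
print(short_handle, long_handle)
```

If the force is not perpendicular, only its perpendicular component contributes, so the general form multiplies by sin of the angle between the force and the arm.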
How antacids travel and are processed in your digestive system

Learn about your child's digestive system and whether they are suffering from GI disorders like IBS, IBD, fatty liver, or heartburn. The mouth starts the digestive process by producing saliva, even before eating begins.

Nutrition and your digestive system: the body squirts fluid into the intestines to help flush food along its path through the digestive tract. GERD is caused when acid from your stomach travels backwards into the esophagus.

Digestion begins in your mouth, where saliva starts to break down food, and continues in your stomach; it's a long process that involves many organs, which together form the digestive tract. Certain foods can trigger your heartburn, so don't lie down for at least 2 to 3 hours after eating.

Some foods will wreak long-term havoc on your digestive system and are best avoided: chocolate, for one, and spicy foods can be a pain thanks to heartburn and acid reflux.

Once swallowed, food enters the stomach and travels through the small and large intestines. To help cure digestion issues, try taking a digestive enzyme with your food to help your digestive system break food down.
If you've found that other foods negatively affect your digestive system, let us know in the comments; if you can't totally cut out processed foods from your diet, you may already suffer from heartburn. This process, called digestion, allows your body to get the nutrients and energy it needs from the food you eat.

What you take in must have the nutrients your cells need, and your digestive system must be able to extract them. If you have heartburn, something is wrong. If your doctor recommends them, probiotics will make your digestion more efficient and enhance lactose digestion.

In the normal digestive process, as you chew your food and swallow, food particles travel from your mouth to the esophagus. Food begins its journey through the digestive system in the mouth. How does food travel through the digestive system? Assimilation is the process by which nutrients from foods are taken into the cells of the body after the food has been digested.

Your gut needs real foods that are easy to break down rather than fast or processed foods ("6 foods that your digestive system loves", The Alternative Daily). Some foods are incredibly beneficial to the digestive tract: they have powerful antacid effects which help to protect the stomach from acid.

Key concept: the digestive system breaks down food. Your tongue pushes food down into your throat; food then travels down the esophagus to the stomach. Apply: does an antacid deal with physical or chemical digestion?

Problems of the digestive system include not eating enough fiber, not drinking enough water, certain medications, and changes in routine (such as travel). Several over-the-counter medications are available that may help reduce your symptoms; antacids reduce the acid content in the stomach.
Heartburn, inflammatory bowel disease, and irritable bowel syndrome all disrupt digestion; without the digestive process, the body isn't able to do all the other things needed to keep it going. What does the inside of your digestive system look like? Learn all about its mysteries.

If you are in the market for an over-the-counter antacid, remember that the digestive process starts even before you take a bite: your teeth are actually part of your digestive system and are responsible for breaking down food into small pieces.

In "The digestive tract: how the digestive system works", Melissa Jeffries describes how your liquefied sandwich is released into the small intestine in a process that takes more than an hour.

The use of essential oils for heartburn has been prevalent for centuries; they are extracted from plants using a unique distillation process. Sip slowly at the start of the day to give your digestive system a boost.

What happens to your body when you overeat? Everybody has a complex system in their brain and digestive system that communicates about how much they have eaten; satiety hormones travel to your brain and interact with receptors there.

• Digestive enzymes and probiotics can be used to help relieve heartburn and acid; they reduce transit time and help normalize the digestive process, leading to better digestion in the higher portions of the digestive system.
• Digestive system tour lab, page 4 (fill in the blanks): your teeth are specialized; an adult has _____ teeth. On your digestive system cartoon, label these parts: mouth, esophagus, small intestine.
• "Your stomach will hate you after eating these 4 foods" (Bridget Creel, Sep 11, 2015): certain foods negatively affect your digestive system more than others because of how your body reacts to those ingredients, and acid reflux can lead to uncomfortable heartburn.

Good digestive health is the ability to process nutrients through properly functioning digestive organs. Your digestive system breaks down the foods you eat into the nutrients your body needs, which will help prevent overeating and thereby improve digestive health.

Antacid side effects: there are potential problems associated with frequent or long-term use of antacids in general. When the digestive system is overburdened, your immune system suffers.

Thanks to this process, acid in the stomach of a heartburn sufferer is neutralized. There are some digestive disorders, such as ulcers, that cannot be cured by Alka-Seltzer or its many relatives. Food travels quickly through the esophagus, landing in the stomach, where the digestive system's digestion process continues.

If your heartburn or other symptoms don't improve with lifestyle changes and medication, see your doctor. The digestive system is made up of the gastrointestinal tract.
What is the MMR Vaccine and WHAT DOES MMR STAND FOR?

MMR stands for "Measles, Mumps and Rubella", which are types of viruses. A measles virus infection can cause symptoms of fever, cough, stuffy nose, conjunctivitis, and rash. Severe complications can lead to brain inflammation. Adults infected by the measles virus have an increased mortality risk compared to children, and measles during pregnancy can lead to premature labor or even miscarriage. A mumps virus infection typically causes swelling of the parotid gland. Other possible complications include inflammation of the testes and ovaries. Adults infected with the mumps virus are at greater risk for more serious complications such as brain inflammation. A fetal rubella virus infection can cause severe birth defects, or even fetal death. In children and adults, a rubella virus infection generally causes rash. Joint pain and seizures are more common complications seen in adult infections.

WHEN DOES A PATIENT NEED IT?

The measles, mumps and rubella (MMR) vaccine is administered, via injection typically into the upper arm, to prevent measles, mumps and rubella infections. In the United States, routine immunization with the MMR vaccine is recommended for all children, with the first dose administered at around 12-15 months of age, and the second dose administered at around 4-6 years of age. In general, at least one dose of MMR vaccine should be administered to adults born in 1957 or later, unless there is either evidence of immunity to all three viruses, or there is a medical contraindication to the vaccine. Because the MMR vaccine is a live-attenuated vaccine, women who are pregnant or soon plan to become pregnant should not get this vaccine.

The importance of MMR vaccination

Measles, mumps, and rubella are viruses which unfortunately can spread with ease. The CDC therefore highly recommends the MMR vaccine so that people are protected against all three.
The MMR vaccine is highly effective and quite safe to use, and it provides protection against three common viral diseases with the help of a single injection. Measles, mumps, and rubella are highly infectious diseases that can cause serious, potentially lethal complications; to get protection from them, the MMR vaccine is the best solution.

For better protection and a healthier life, the MMR vaccine is given in two doses. The CDC recommends giving the first dose to children at 12 to 15 months of age. The second dose can be given 3 months after the first and is recommended before 4 to 6 years of age. People who have received two doses of the MMR vaccine are generally considered protected for life against measles, mumps, and rubella.

Online Appointment

Stay Healthy! Stay Strong! To make an appointment online at Southern Nevada Occupational Health Center (SNOHC), fill out the form on our website or call us at (702) 380-3989.
What is grouting in construction | Purposes, and Application of grouting

What is grouting in construction? The process of filling gaps, spaces, or joints between two materials like stones, tiles, marbles, etc. is known as grouting in construction. Grout is a viscous material (a mixture of cement, water, sand, epoxy, acrylic, and polymer) used as a filler to fill the spaces and joints between ceramic and stone tiles.

In civil engineering, grouting refers to injecting a pumpable material into a structure such as a soil or rock formation to change its physical properties. In short, grouting is done to control groundwater during civil engineering works. Grouting is generally done for highly permeable soil which may cause seepage beneath a concrete structure.

Purpose of grouting in construction
• Grouting is done for repairing concrete cracks, filling gaps in tiles, and waterproofing.
• Grouting is done to give additional strength to the foundation of a load-bearing structure.
• Grouting is done to change the physical properties of the structure.
• Grouting is done for seepage control and preventing landslides.
• Grouting is done to reduce surface subsidence.

Application of grouting in construction
• It is used in filling cracks and voids in natural rock formations.
• It is used in pressure grouting in the case of cavities and fissures.
• It is used in preventing the collapse of granular soil.
• It is used in dams and reservoirs for curtain and compaction grouting.
• It increases soil stability, strength, and rigidity.

Advantages of grouting:
1. Grouting can be applied in almost every ground condition.
2. Grouting can be done in limited space.
3. Grouting doesn't produce vibration, and careful handling can avoid structural damage.
4. It helps to measure the improvement of in-ground structures.
5. It helps to control seepage and groundwater flow.
Types of grouting in construction

A) Types of grouting based on the material used:

1) Cement grouting:

Cement grouting is done to seal wide cracks, especially in gravity dams, canal linings, foundations, and thick concrete walls. This is the most common grouting in construction. It is composed of neat cement and water, or a mixture of sand (4 parts) to cement (1 part). Before injecting, holes are bored around the area to be excavated, and injection starts with a thin grout. The viscosity is then increased by reducing the water-cement ratio. To ensure complete grouting, secondary holes are bored between the primary holes.

Cement grouting is further divided into:
• Ordinary Portland cement grouting
• Microfine cement grouting
• Ultrafine cement grouting

1.1 Ordinary Portland cement grouting: It is commonly used for repairing concrete cracks. Since the particles are about 15 microns in size, it can fill the wider cracks.

1.2 Microfine cement grouting: Finely ground slag, fine fly ash, or Portland cement is mixed with water to allow penetration into fine cracks. The particle size is in the range of 6 to 10 microns.

1.3 Ultrafine cement grouting: This grout is used for sealing very fine, hairline-like cracks and has a particle size of 3 to 5 microns. It is used for stabilizing waste plumes.

How to use white cement to fill gaps?

White cement is used to seal joints and fill the voids and cracks between ceramic floor tiles and other materials attached to them. White cement is in constant use nowadays for filling voids and gaps, and as an alternative to painting material for ceilings. The white cement is mixed with water in the quantity required for the amount of tiling to be done.
The company's instructions on the package should be followed and the right mix ratio should be used. The mix is mixed properly and applied to the place where the voids and cracks are to be filled.

2) Chemical grouts:

This is a grout that consists of polymers like acrylic, polyurethane, sodium silicate, epoxy, or any other polymer. It can be introduced into soil pores without any change in the original soil volume and structure, and helps change the support capability of granular soils without disturbing them. This grouting is suitable for tunneling applications without over-excavation.

Some advantages of chemical grouts:
• They can easily permeate deep micro-cracks.
• They are stable and reliable.
• They are fast and can be used for emergency repairs.

Some drawbacks of chemical grouts:
• Only specific types of soil are suitable.
• They are likely to produce pollution.

3) Bituminous grouting:

In this method, hot bitumen is used as the grouting material, employed together with a cement-based suspension grout; this prevents the grout from spreading and builds the mechanical strength of the finished result. A hard, oxidized, environmentally friendly bitumen having a high solidification point is used for grouting.

Process of bituminous grouting

Firstly, the bitumen is heated up to 200 degrees Celsius. At this temperature the grout has a dynamic viscosity in the range of 15 to 100 cP. Unlike other grouting, hot bitumen's curing is thermally driven: it turns from its fluid state to a highly viscous elastoplastic state when it is injected into a water-saturated medium. Finally, once it is injected, the passage is plugged.

4) Resin grouting:

Traditional resin grout is a composition of epoxy resin mixed with a filler, but a new type of water-based resin has recently been developed that is better than the traditional ones.
Some advantages of resin grouting are:
• It sets harder.
• It does not break down easily.

Some disadvantages of resin grouting are:
• It is expensive.
• It may contain aggressive chemicals.

Permeation grouting:

It is also known as penetration grouting and is the most conventional grouting method in use. This grouting method is used in non-cohesive soil, sand, and other porous media for filling cracks and joints. It is injected inside the porous medium without disturbing its original structure. It is commonly used in soil and rock deposits to change their geotechnical properties.

There are two types of permeation grouting injection systems:
• Circulating grout system
• Direct grout system

Advantages of permeation grouting:
• It helps to give strength to sand and gravel.
• It helps in the solidification of unstable gravels and sands, at depths up to 60 m.
• It fills the voids in the sand.

B) Types of grouting based on the process

1) Compaction grouting:

Compaction grouting is done to strengthen the subsurface or surface of permeable soil to reduce voids and sinkholes. A casing is driven to depth through the drill. A mix of cement, sand, fly ash, and water is then placed from bottom to top according to the pressure criteria. After each step, the drill is lifted until it is fully withdrawn. This grouting is commonly called low-mobility grouting.
Uses of compaction grouting:
• It helps to improve the bearing capacity of soil.
• It helps to solve soil density problems.
• It helps in stabilizing underground formations for pipes.
• It helps to manage sites with sinkhole activity.

Advantages of compaction grouting:
• Rapid installation
• Structural foundation connections not required
• No spoil generation
• Reduced foundation settlements
• Mitigation of liquefaction potential

2) Bentonite grouting:

Bentonite is a clay with thixotropic properties: mixed with additives, it forms a highly water-resistant gel that becomes a permanent barrier to water flow. This method is used in soils that cannot accept cement grouting, and it is commonly used for plugging old wells. A typical mix is 50 pounds of powdered bentonite to 34 gallons of water, to which 50 pounds of washed sand is added.

3) Fracture grouting:

In this method, low-viscosity grouts split the ground by hydraulic fracture under high pressure and enter the cracks, creating lenses. It is also known as compensation grouting and is commonly used for structural releveling.

Procedure of hydraulic fracture:

In this method, a hydraulically pressurized liquid composed of water, sand, and a chemical mixture is used to fracture the rock. Artificial cracks are provided with pre-split holes. Then, the grout is passed down the holes. The casing is inserted into the fracture section and grouted. A pressurized fluid carrier is inserted into the opening casing and spread throughout the fractures. The casing remains open after fracturing.

4) Jet grouting in construction:

This is a process of creating a soil-concrete column (jet-grouted column) using a high-pressure jet through a nozzle in a borehole.
The specially designed drill stem and monitor are raised and rotated at a slow, smooth, and constant speed, cutting the soil with water and/or air at high pressure to create the soil-concrete column. The end product is a cemented round column. This grout is effective for almost all soils.

Procedure of jet grouting:
1. Initially, a hole is drilled at the required place and depth.
2. Drilling continues as far as weak subsoil exists; the hole may be 10 to 20 cm.
3. Then, equipment is placed in the hole to conduct the injection process, consisting of a jet grouting string of almost 7 to 10 cm.
4. The string carries a nozzle for high-velocity injection, having a diameter of 1 to 10 mm.
5. Then, the string is raised and rotated to seal the whole column with soil and the fluid system. Now the jetting starts: the string is raised while the fluid is injected, and for every rise a rotation is performed smoothly and constantly. This gives a perfectly refined grouting column.

Types of jet grouting system:
1. Single fluid system
2. Double fluid system
3. Triple fluid system

Application of jet grouting:
• Horizontal barriers
• Groundwater control
• Tunneling
• Supporting excavation
• Underpinning

I hope this post remains helpful for you. Happy Learning – Civil Concept

Contributed by Shreya Parajuli

Read Also,
8 Points to improve the Durability of Concrete structure
Aggregates for concrete | Aggregates sizes for concrete
Top 5 Difference between Segregation and Bleeding in Concrete
Top 10 Quick guides for Reinforced concrete Column design
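As a worked example of the bentonite grout mix described earlier (50 lb of powdered bentonite and 50 lb of washed sand per 34 gallons of water), here's a small sketch that scales the batch to an arbitrary amount of water. The ratio comes from the text; the function name and units are my own.

```python
def bentonite_batch(water_gallons: float) -> dict:
    """Scale the article's bentonite grout recipe
    (50 lb bentonite + 50 lb washed sand per 34 gal water)
    to an arbitrary amount of water."""
    scale = water_gallons / 34.0
    return {
        "water_gal": water_gallons,
        "bentonite_lb": 50.0 * scale,
        "sand_lb": 50.0 * scale,
    }

# A double batch: 68 gallons of water needs 100 lb each of bentonite and sand
print(bentonite_batch(68.0))
```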
Network Theory (Part 29)

I’m talking about electrical circuits, but I’m interested in them as models of more general physical systems. Last time we started seeing how this works. We developed an analogy between electrical circuits and physical systems made of masses and springs, with friction:

| Electronics | Mechanics |
|---|---|
| charge: Q | position: q |
| current: I = \dot{Q} | velocity: v = \dot{q} |
| flux linkage: \lambda | momentum: p |
| voltage: V = \dot{\lambda} | force: F = \dot{p} |
| inductance: L | mass: m |
| resistance: R | damping coefficient: r |
| inverse capacitance: 1/C | spring constant: k |

But this is just the first of a large set of analogies. Let me list some, so you can see how wide-ranging they are!

More analogies

People in system dynamics often use effort as a term to stand for anything analogous to force or voltage, and flow as a general term to stand for anything analogous to velocity or electric current. They call these variables e and f.

To me it’s important that force is the time derivative of momentum, and velocity is the time derivative of position. Following physicists, I write momentum as p and position as q. So, I’ll usually write effort as \dot{p} and flow as \dot{q}.

Of course, ‘position’ is a term special to mechanics; it’s nice to have a general term for the thing whose time derivative is flow, that applies to any context. People in systems dynamics seem to use displacement as that general term. It would also be nice to have a general term for the thing whose time derivative is effort… but I don’t know one. So, I’ll use the word momentum.

Now let’s see the analogies!
Let’s see how displacement q, flow \dot{q}, momentum p and effort \dot{p} show up in several subjects:

| | displacement: q | flow: \dot{q} | momentum: p | effort: \dot{p} |
|---|---|---|---|---|
| Mechanics: translation | position | velocity | momentum | force |
| Mechanics: rotation | angle | angular velocity | angular momentum | torque |
| Electronics | charge | current | flux linkage | voltage |
| Hydraulics | volume | flow | pressure momentum | pressure |
| Thermal physics | entropy | entropy flow | temperature momentum | temperature |
| Chemistry | moles | molar flow | chemical momentum | chemical potential |

We’d been considering the mechanics of systems that move along a line, via translation, but we can also consider mechanics for systems that turn round and round, via rotation. So, there are two rows for mechanics here.

There’s a row for electronics, and then a row for hydraulics, which is closely analogous. In this analogy, a pipe is like a wire. The flow of water plays the role of current. Water pressure plays the role of electrostatic potential. The difference in water pressure between two ends of a pipe is like the voltage across a wire. When water flows through a pipe, the power equals the flow times this pressure difference—just as in an electrical circuit the power is the current times the voltage across the wire.

A resistor is like a narrowed pipe:

An inductor is like a heavy turbine placed inside a pipe: this makes the water tend to keep flowing at the same rate it’s already flowing! In other words, it provides a kind of ‘inertia’ analogous to mass.

A capacitor is like a tank with pipes coming in from both ends, and a rubber sheet dividing it in two lengthwise:

When studying electrical circuits as a kid, I was shocked when I first learned that capacitors don’t let the electrons through: it didn’t seem likely you could do anything useful with something like that! But of course you can. Similarly, this gizmo doesn’t let the water through.
A voltage source is like a compressor set up to maintain a specified pressure difference between the input and output: Similarly, a current source is like a pump set up to maintain a specified flow. Finally, just as voltage is the time derivative of a fairly obscure quantity called ‘flux linkage’, pressure is the time derivative of an even more obscure quantity which has no standard name. I’m calling it ‘pressure momentum’, thanks to the analogy momentum: force :: pressure momentum: pressure Just as pressure has units of force per area, pressure momentum has units of momentum per area! People invented this analogy back when they were first struggling to understand electricity, before electrons had been observed: Hydraulic analogy, Wikipedia. The famous electrical engineer Oliver Heaviside pooh-poohed this analogy, calling it the “drain-pipe theory”. I think he was making fun of William Henry Preece. Preece was another electrical engineer, who liked the hydraulic analogy and disliked Heaviside’s fancy math. In his inaugural speech as president of the Institution of Electrical Engineers in 1893, Preece proclaimed: True theory does not require the abstruse language of mathematics to make it clear and to render it acceptable. All that is solid and substantial in science and usefully applied in practice, have been made clear by relegating mathematic symbols to their proper store place—the study. According to the judgement of history, Heaviside made more progress in understanding electromagnetism than Preece. But there’s still a nice analogy between electronics and hydraulics. And I’ll eventually use the abstruse language of mathematics to make it very precise! But now let’s move on to the row called ‘thermal physics’. We could also call this ‘thermodynamics’. It works like this. Say you have a physical system in thermal equilibrium and all you can do is heat it up or cool it down ‘reversibly’—that is, while keeping it in thermal equilibrium all along. 
For example, imagine a box of gas that you can heat up or cool down. If you put a tiny amount dE of energy into the system in the form of heat, then its entropy increases by a tiny amount dS. And they’re related by this equation:

dE = TdS

where T is the temperature. Another way to say this is

\displaystyle{ \frac{dE}{dt} = T \frac{dS}{dt} }

where t is time. On the left we have the power put into the system in the form of heat. But since power should be ‘effort’ times ‘flow’, on the right we should have ‘effort’ times ‘flow’. It makes some sense to call dS/dt the ‘entropy flow’. So temperature, T, must play the role of ‘effort’.

This is a bit weird. I don’t usually think of temperature as a form of ‘effort’ analogous to force or torque. Stranger still, our analogy says that ‘effort’ should be the time derivative of some kind of ‘momentum’. So, we need to introduce temperature momentum: the integral of temperature over time. I’ve never seen people talk about this concept, so it makes me a bit nervous.

But when we have a more complicated physical system like a piston full of gas in thermal equilibrium, we can see the analogy working. Now we have

dE = TdS - PdV

The change in energy dE of our gas now has two parts. There’s the change in heat energy TdS, which we saw already. But now there’s also the change in energy due to compressing the piston! When we change the volume of the gas by a tiny amount dV, we put in energy -PdV.

Now look back at the first chart I drew! It says that pressure is a form of ‘effort’, while volume is a form of ‘displacement’. If you believe that, the equation above should help convince you that temperature is also a form of effort, while entropy is a form of displacement.

But what about the minus sign? That’s no big deal: it’s the result of some arbitrary conventions. P is defined to be the outward pressure of the gas on our piston.
If this is positive, reducing the volume of the gas takes a positive amount of energy, so we need to stick in a minus sign. I could eliminate this minus sign by changing some conventions—but if I did, the chemistry professors at UCR would haul me away and increase my heat energy by burning me at the stake. Speaking of chemistry: here’s how the chemistry row in the analogy chart works. Suppose we have a piston full of gas made of different kinds of molecules, and there can be chemical reactions that change one kind into another. Now our equation gets fancier: \displaystyle{ dE = TdS - PdV + \sum_i \mu_i dN_i } Here N_i is the number of molecules of the ith kind, while \mu_i is a quantity called a chemical potential. The chemical potential simply says how much energy it takes to increase the number of molecules of a given kind. So, we see that chemical potential is another form of effort, while number of molecules is another form of displacement. But chemists are too busy to count molecules one at a time, so they count them in big bunches called ‘moles’. A mole is the number of atoms in 12 grams of carbon-12. That’s roughly 602,214,150,000,000,000,000,000 atoms. This is called Avogadro’s constant. If we used 1 gram of hydrogen, we’d get a very close number called ‘Avogadro’s number’, which leads to lots of jokes: (He must be desperate because he looks so weird… sort of like a mole!) So, instead of saying that the displacement in chemistry is called ‘number of molecules’, you’ll sound more like an expert if you say ‘moles’. And the corresponding flow is called molar flow. The truly obscure quantity in this row of the chart is the one whose time derivative is chemical potential! I’m calling it chemical momentum simply because I don’t know another name. Why are linear and angular momentum so famous compared to pressure momentum, temperature momentum and chemical momentum? I suspect it’s because the laws of physics are symmetrical under translations and rotations. 
When the assumptions of Noether’s theorem hold, this guarantees that the total momentum and angular momentum of a closed system are conserved. Apparently the laws of physics lack the symmetries that would make the other kinds of momentum be conserved.

This suggests that we should dig deeper and try to understand more deeply how this chart is connected to ideas in classical mechanics, like Noether’s theorem or symplectic geometry. I will try to do that sometime later in this series.

More generally, we should try to understand what gives rise to a row in this analogy chart. Are there lots of rows I haven’t talked about yet, or just a few? There are probably lots. But are there lots of practically important rows that I haven’t talked about—ones that can serve as the basis for new kinds of engineering? Or does something about the structure of the physical world limit the number of such rows?

Mildly defective analogies

Engineers care a lot about dimensional analysis. So, they often make a big deal about the fact that while effort and flow have different dimensions in different rows of the analogy chart, the following four things are always true:

• pq has dimensions of action (= energy × time)
• \dot{p} q has dimensions of energy
• p \dot{q} has dimensions of energy
• \dot{p} \dot{q} has dimensions of power (= energy / time)

In fact any one of these things implies all the rest. These facts are important when designing ‘mixed systems’, which combine different rows in the chart. For example, in mechatronics, we combine mechanical and electronic elements in a single circuit! And in a hydroelectric dam, power is converted from hydraulic to mechanical and then electric form:

One goal of network theory should be to develop a unified language for studying mixed systems! Engineers have already done most of the hard work.
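The dimensional facts above are easy to check mechanically. Here's a quick sketch (the representation of dimensions as exponent tuples over mass, length, time, and current is mine) verifying that pq has dimensions of action in both the mechanics and electronics rows:

```python
# Dimensional sanity check for the analogy chart: in every row, the product
# of 'momentum' p and 'displacement' q should have dimensions of action.
# A dimension is an exponent tuple over (mass, length, time, current).
from typing import Tuple

Dim = Tuple[int, int, int, int]

def mul(a: Dim, b: Dim) -> Dim:
    """Multiplying quantities adds their dimension exponents."""
    return tuple(x + y for x, y in zip(a, b))

ENERGY: Dim = (1, 2, -2, 0)          # kg·m²/s²
TIME:   Dim = (0, 0, 1, 0)
ACTION: Dim = mul(ENERGY, TIME)      # kg·m²/s

# Mechanics (translation): q = position, p = mass × velocity
q_mech: Dim = (0, 1, 0, 0)
p_mech: Dim = (1, 1, -1, 0)

# Electronics: q = charge = current × time, p = flux linkage = voltage × time
q_elec: Dim = (0, 0, 1, 1)
p_elec: Dim = (1, 2, -2, -1)

for name, p, q in [("mechanics", p_mech, q_mech),
                   ("electronics", p_elec, q_elec)]:
    assert mul(p, q) == ACTION, name
print("pq has dimensions of action in both rows")
```

The same check extends to any of the four facts, since each is the action identity multiplied or divided by time.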
And they’ve realized that thanks to conservation of energy, working with pairs of flow and effort variables whose product has dimensions of power is very convenient. It makes it easy to track the flow of energy through these systems. However, people have tried to extend the analogy chart to include ‘mildly defective’ examples where effort times flow doesn’t have dimensions of power. The two most popular are these:

             displacement: q    flow: \dot{q}    momentum: p             effort: \dot{p}
Heat flow    heat               heat flow        temperature momentum    temperature
Economics    inventory          product flow     economic momentum       product price

The heat flow analogy comes up because people like to think of heat flow as analogous to electrical current, and temperature as analogous to voltage. Why? Because an insulated wall acts a bit like a resistor! The current flowing through a resistor is a function of the voltage across it. Similarly, the heat flowing through an insulated wall is about proportional to the difference in temperature between the inside and the outside. However, there’s a difference. Current times voltage has dimensions of power. Heat flow times temperature does not have dimensions of power. In fact, heat flow by itself already has dimensions of power! So, engineers feel somewhat guilty about this analogy. Being a mathematical physicist, I see a possible way out: use units where temperature is dimensionless! In fact such units are pretty popular in some circles. But I don’t know if this solution is a real one, or whether it causes some sort of trouble. In the economic example, ‘energy’ has been replaced by ‘money’. In other words, ‘inventory’ times ‘product price’ has units of money. And so does ‘product flow’ times ‘economic momentum’! I’d never heard of economic momentum before I started studying these analogies, but I didn’t make up that term. It’s the thing whose time derivative is ‘product price’.
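The ‘units of money’ claim can be checked with the same kind of exponent bookkeeping. This is my own sketch, with my own unit choices (inventory in items, time in seconds), not something from the post:

```python
# Sketch (my own unit choices): in the economics row, 'energy' becomes money.
# With inventory q in items and price in money per item, both q * (dp/dt)
# and (dq/dt) * p come out in plain units of money.

def mul(*dims):
    """Multiply dimensional quantities by adding base-unit exponents."""
    out = {}
    for d in dims:
        for unit, exp in d.items():
            out[unit] = out.get(unit, 0) + exp
    return {u: e for u, e in out.items() if e != 0}

S_INV = {'s': -1}                              # d/dt divides by time

inventory = {'item': 1}                        # displacement q
momentum = {'money': 1, 's': 1, 'item': -1}    # 'economic momentum' p
price = mul(momentum, S_INV)                   # effort p': money per item
flow = mul(inventory, S_INV)                   # flow q': items per second

MONEY = {'money': 1}
assert mul(inventory, price) == MONEY          # inventory x price
assert mul(flow, momentum) == MONEY            # product flow x momentum
```

Under these choices, effort times flow comes out as money per unit time rather than power, which is exactly the ‘mild defect’ of this row.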
Apparently economists have noticed a tendency for rising prices to keep rising, and falling prices to keep falling… a tendency toward ‘conservation of momentum’ that doesn’t fit into their models of rational behavior. I’m suspicious of any attempt to make economics seem like physics. Unlike elementary particles or rocks, people don’t seem to be very well modelled by simple differential equations. However, some economists have used the above analogy to model economic systems. And I can’t help but find that interesting—even if intellectually dubious when taken too seriously.

An auto-analogy

Besides the analogy I’ve already described between electronics and mechanics, there’s another one, called ‘Firestone’s analogy’:

• F.A. Firestone, A new analogy between mechanical and electrical systems, Journal of the Acoustical Society of America 4 (1933), 249–267.

Alain Bossavit pointed this out in the comments to Part 27. The idea is to treat current as analogous to force instead of velocity… and treat voltage as analogous to velocity instead of force! In other words, switch your p’s and q’s:

Electronics     Mechanics (usual analogy)    Mechanics (Firestone’s analogy)
charge          position: q                  momentum: p
current         velocity: \dot{q}            force: \dot{p}
flux linkage    momentum: p                  position: q
voltage         force: \dot{p}               velocity: \dot{q}

This new analogy is not ‘mildly defective’: the product of effort and flow variables still has dimensions of power. But why bother with another analogy? It may be helpful to recall this circuit from last time, which is described by this differential equation:

L \ddot{Q} + R \dot{Q} + C^{-1} Q = V

We used the ‘usual analogy’ to translate it into a classical mechanics problem, and we got a problem where an object of mass m = L is hanging from a spring with spring constant k = 1/C and damping coefficient r = R, and feeling an additional external force F = V:

m \ddot{q} + r \dot{q} + k q = F

And that’s fine.
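Since the two equations are literally the same ODE with renamed constants (m = L, r = R, k = 1/C, F = V), the translation can be checked numerically. A minimal sketch, with toy parameter values of my own choosing:

```python
# Toy sketch: integrate L Q'' + R Q' + Q/C = V and m q'' + r q' + k q = F
# with matching constants; the 'charge' and 'position' histories must agree.
# All numbers here are invented for illustration.

def integrate(mass, damping, stiffness, force, dt=0.001, steps=5000):
    """Semi-implicit Euler for mass*x'' + damping*x' + stiffness*x = force."""
    x, v, history = 0.0, 0.0, []
    for _ in range(steps):
        a = (force - damping * v - stiffness * x) / mass
        v += a * dt
        x += v * dt
        history.append(x)
    return history

L_, R_, C_, V_ = 2.0, 0.5, 4.0, 1.0            # circuit parameters
charge = integrate(mass=L_, damping=R_, stiffness=1.0 / C_, force=V_)
position = integrate(mass=2.0, damping=0.5, stiffness=0.25, force=1.0)

assert charge == position    # same numbers, two physical interpretations
```

Nothing deep is happening numerically, and that is the point: the analogy is a change of names, so any tool for one system works verbatim on the other.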
But there’s an intuitive sense in which all three forces are acting ‘in parallel’ on the mass, rather than in series. In other words, all side by side, instead of one after the other. Using Firestone’s analogy, we get a different classical mechanics problem, where the three forces are acting in series: the spring is connected to a source of friction, which in turn is connected to an external force. This may seem a bit mysterious. But instead of trying to explain it, I’ll urge you to read his paper, which is short and clearly written. I instead want to make a somewhat different point, which is that we can take a mechanical system, convert it to an electrical one following the usual analogy, and then convert back to a mechanical one using Firestone’s analogy. This gives us an ‘auto-analogy’ between mechanics and itself, which switches p and q. And although I haven’t been able to figure out why from Firestone’s paper, I have other reasons for feeling sure this auto-analogy should contain a minus sign. For example:

p \mapsto q, \qquad q \mapsto -p

In other words, it should correspond to a 90° rotation in the (p,q) plane. There’s nothing sacred about whether we rotate clockwise or counterclockwise; we can equally well do this:

p \mapsto -q, \qquad q \mapsto p

But we need the minus sign to get a so-called symplectic transformation of the (p,q) plane. And from my experience with classical mechanics, I’m pretty sure we want that. If I’m wrong, please let me know! I have a feeling we should revisit this issue when we get more deeply into the symplectic aspects of circuit theory. So, I won’t go on now.

References

The analogies I’ve been talking about are studied in a branch of engineering called system dynamics. You can read more about it here:

• Dean C. Karnopp, Donald L. Margolis and Ronald C. Rosenberg, System Dynamics: A Unified Approach, Wiley, New York, 1990.

• Forbes T. Brown, Engineering System Dynamics: A Unified Graph-Centered Approach, CRC Press, Boca Raton, 2007.
• Francois E. Cellier, Continuous System Modelling, Springer, Berlin, 1991.

System dynamics already uses lots of diagrams of networks. One of my goals in weeks to come is to explain the category theory lurking behind these diagrams.

36 Responses to Network Theory (Part 29)

1. amarashiki says: Hi John: You say “(…) we need to introduce temperature momentum: the integral of temperature over time. I’ve never seen people talk about this concept, so it makes me a bit nervous. (…)” It is imprecise. I have talked about integrals of classical variables in Mechanics. I don’t know if you have read it or not, but I think this is one of the best posts I have ever written in my blog (it is one of the most visited entries): http://thespectrumofriemannium.wordpress.com/2012/11/10/log053-derivatives-of-position/ You can easily understand that I have thought about that issue in the “mirror space” of Mechanics. The integral of temperature w.r.t. time can be seen as the thermodynamical analogue of the “mechanical absement”! Usually, people believe that the only usual kinematical/dynamical variables are the n-th derivatives (especially the 1st and 2nd, which are the most important according to the galilean/newtonian/relativistic or even quantum theories). However, some “more interesting” variables do appear when you perform “integrals”. The analogue for the reciprocal of position (something like a quantum momentum p=1/x with h=1) is the presement! Perhaps the invention of infinitesimal calculus, and the fact that we focus only (generally speaking) on the first, second or third derivative of “position”, is just an accident of choosing “position” as the main variable. I have my own suspicions about these facts… Imagine an exotic exoplanet where the ETI discover calculus not by differentiation but by “integration”. How would ETI write the classical equations of motion? And the quantum counterparts?
Beyond interpreting A=\int T dt as “a momentum”, you can also interpret it as the “farness” of temperature during some time interval. Moreover, note that if you introduce the Boltzmann constant (natural units) there, then you get something that is pretty much like an action/angular momentum! In fact, making the full analogy with the mechanical definitions, another interesting magnitude would be the thermodynamical analogue of the presement, something like B=\int \dfrac{1}{T}\,dt. Essentially, introducing again a Boltzmann constant (in the denominator of course this time) we would get the time integral of the celebrated \beta=(k_B T)^{-1}. In this case, you have something like \sim \mbox{TIME}/\mbox{ENERGY}=\mbox{POWER}^{-1}. This magnitude would measure the “nearness” in the thermodynamical sense! In summary, we get: 1) Your idea of the integral of temperature with respect to time is the analogue of the mechanical absement. 2) That “momentum” of temperature somehow measures the “farness” from thermodynamical equilibrium as t flows! 3) The analogue of presement can also be worked out. It shows that the nearness of thermodynamical equilibrium is somehow related to the inverse of the energy flow, a.k.a. power. What do you think?

2. Aaron Denney says: Wouldn’t H -> -H reverse time, and thus give a sign flip to momentum?

3. Joerg Paul says: Very interesting article! I wonder if it is possible to find dualities in the other parts of physics, like Firestone’s analogy. In rotational mechanics this is easy: just swap angle and angular momentum. But are there useful dualities in hydraulics, thermal physics or chemistry?

4. Bossavit says: The idea [of Firestone’s analogy] is to treat current as analogous to force instead of velocity… and treat voltage as analogous to velocity instead of force! I believe your mechanical ‘auto-analogy’ will prove essential to understand this puzzle.
But reasoning on *lumped* dynamical systems may hide things that would look clearer in the context of the mechanics of continua. I mean this:

– Calling q the charge density and j the current density, we have (1) d_t q + div j = 0 (charge conservation)

– Calling p the density of momentum and s the stress tensor, we have (in the absence of body forces, which would otherwise appear in the right-hand side) (2) d_t p + div s = 0 (momentum conservation)

(The sign convention I use in (2) is not the standard one: the standard stress tensor is minus s. Doing this will enhance the analogy.)

The difference between (1) and (2) lies in q being scalar whereas p is vector-valued (or rather, covector-valued, since the force field f = d_t p is better conceived as a field of covectors). But apart from this difference, (1) and (2) are strikingly parallel: integrating them over a bounded domain D will give a balance of “stuff”: electric charge in the case of (1), momentum in the case of (2). Now, take the integral form of (1) and (2) and let D collapse to a point (“lumping”). One obtains

(1′) d_t [charge inside] = [current flowing in]
(2′) d_t [momentum inside] = [force exerted by outside matter]

and there we are: “current analogous to force”, indeed, and charge analogous to momentum.

5. Frederik De Roo says: “P is defined to be the outward pressure of the gas on our piston. If this is positive, reducing the volume of the gas takes a positive amount of energy, so we need to stick in a minus sign. I could eliminate this minus sign by changing some conventions…” If you use tension instead of pressure, you might get rid of the minus sign without provoking colleagues at UCR? Actually, even though you mention “on our piston”, I find the phrase ‘outward pressure’ somewhat confusing, because I would define outward and inward with respect to the (outward) normal of a small element inside the gas, so for me pressure would point inward and tension outward.

6.
Arrow says: My bet would be that the analogies are a consequence of the fact that all kinds of motion have the same fundamental origin – whatever it is.

• John Baez says: Yes, I agree. I’d say it’s mostly about Hamiltonian and Lagrangian mechanics, which are the usual ways of understanding motion at the classical (i.e. non-quantum) level… but with a few big extra twists: we’re studying open systems, we’re treating them as networks, and we’re allowing dissipation. What I’m doing is warming up to present a mathematical theory of this stuff… which I’m still busy trying to learn/invent.

• amarashiki says: I agree too, but with an addition… I believe that there is something BIG in all this analogy, and I think the key idea is to consider, or better understand, the origin of those kinds of motion. And my conjecture is that there is something else beyond Classical Hamiltonian/Lagrangian mechanics, even if we consider “dissipation”… If the origin of the analogies is that the origin of the motions is the same, what kind of physical principle “catches” all that? I mean, it can hardly be something related to classical mechanics, since it “extends” it. What kind of symmetry or invariance transformation is playing here? By the way, I have some questions for John… What about this “generalized momentum” of temperature: A^{(n)}=\int \int \cdots \int T\, d^nt or B^{(n)}=\int \int \cdots \int (1/T)\, d^nt? (1) What happens in the limit n\longrightarrow \infty? Can it make some sense? (2) What would happen in the case of the n-th jet derivatives d^n T/dt^n? Remark: In Cosmology, the statefinder variables in the Hubble luminosity distance relation relate important variables like the Hubble parameter, or some densities, with the derivatives of the scale factor! So it seems that higher order derivatives make sense too in a “cosmomechanical” framework!

• domenico says: I wrote this result in another blog, so it is a repeat of one of my theories (I try to write here only new ideas).
Each differential equation has a Hamiltonian, if we double the number of variables: 0=F(y,\dot y, \ddot y, \cdots ) 0=\frac{d}{dt} F(y,\dot y, \ddot y, \cdots )=\dot y \partial_y F+\ddot y \partial_{\dot y} F+\cdots \left\{ \begin{array}{l} y = y_0 \\ \dot y_{j} = y_{j+1} \\ \dot y_n = G(y_0, y_1, \cdots, y_{n-1}) \end{array} \right. H = p_n G(y_0, y_1, \cdots, y_{n-1})+ \sum^{n-1}_{j=0} p_j y_{j+1} \left\{ \begin{array}{l} \dot y_0 = \ \ \,\frac{\partial H}{\partial p_0} = y_{1} \\ \dot p_0 = -\frac{\partial H}{\partial y_0} = -p_{n} \partial_{0} G(y_0,y_1,\cdots,y_{n-1}) \\ \dot y_j = \ \ \,\frac{\partial H}{\partial p_j} = y_{j+1} \\ \dot p_j = -\frac{\partial H}{\partial y_j} = -p_{j-1}-p_{n} \partial_j G(y_0,y_1,\cdots,y_{n-1}) \\ \dot y_n = \ \ \,\frac{\partial H}{\partial p_n} = G(y_0, y_1, \cdots, y_{n-1}) \\ \dot p_n = -\frac{\partial H}{\partial y_n} = -p_{n-1} \end{array} \right. Then it is possible to write a Hamiltonian for each differential equation, which can describe a dissipative system. The interesting thing is that the Hamilton–Jacobi equation and the Schrödinger equation are equal for this differential equation (classical system = quantum system): $ 0 = \partial_t \Psi + H(t,y_j,\partial_j \Psi) $

7. John Baez says: Over on G+, Alex Golden wrote: This reminds me of the study of Dirac structures, a sort of generalization of Hamiltonian systems that allows I/O relationships. I replied: Yes, that’s a topic I plan to talk about! I found this book pretty helpful:

• Vincent Duindam, Alessandro Macchelli, Stefano Stramigioli and Herman Bruyninckx, eds., Modeling and Control of Complex Physical Systems: The Port-Hamiltonian Approach, Springer, Berlin, 2009.

If you know other good sources, I’d be happy to hear about them. I turned up this free paper:

• Arjan van der Schaft and J.
Cervera, Composition of Dirac structures and control of port-Hamiltonian systems, http://www3.nd.edu/~mtns/papers/10432_1.pdf

and Jess Robertson turned up an easier paper by one of the same authors:

• Arjan van der Schaft, Port-Hamiltonian systems: an introductory survey.

• Eugene says: I would treat the claim that port-Hamiltonian systems have Dirac geometry under the hood with caution. Until last summer I more or less accepted this claim at face value and spent some time and effort to understand it. For conservative systems with non-holonomic constraints, Dirac structures look like the right framework, but may be overkill. But for nonlinear systems with external forces and phase spaces with nontrivial topology I don’t think it works. For a bit I hoped that Courant algebroids would be enough, but again, I don’t see how to fit in external forces/dissipation nicely. Way back in the 1980s Brockett suggested that there should be something called a Hamiltonian control system. But in practice these systems are of the form H + \mu_1 G_1 + \cdots + \mu_n G_n, where H is your Hamiltonian, the G_i’s are other functions on your phase space, and the \mu_i’s are the control variables. A more general geometric definition doesn’t seem to exist.

• John Baez says: Eugene wrote: I would treat the claim that port-Hamiltonian systems have Dirac geometry under the hood with caution. Until last summer I more or less accepted this claim at face value and spent some time and effort to understand it. Thanks for the warning! Much as it may seem I’ve forgotten, I have your work in mind and I’m slowly creeping toward the point of studying it more carefully and discussing it here. I am going to start by doing some things with 1) electrical networks built from only linear resistors and 2) electrical networks built from only linear ‘passive’ elements, e.g. resistors, inductors and capacitors.
I get the impression that Dirac structures are able to handle these cases, even though the resistors introduce dissipation. Do you agree? I think there’s lots of fun left in linear systems, but I eventually want to do nonlinear ones, and then I’ll really need your help.

• Eugene says: I should add that what confused me were differences in terminology. Dirac structures were defined by Ted Courant and Alan Weinstein in the early 1990s as a simultaneous generalization of Poisson and symplectic geometry. But in the engineering/applied math literature “Dirac structure” is used more loosely. In particular, it seems to include Riemannian metrics and combinations of metrics and Poisson tensors. In particular, whenever you see dissipation there is a symmetric tensor involved (or, more precisely, a sum of a(n almost) Poisson tensor and a metric), and this is not a Dirac structure in the sense of Courant-Weinstein. The terms metriplectic and Leibniz also get thrown around and seem to roughly mean the same thing, as far as I can tell.

• John Baez says: Thanks. In the review article I cited, in Section 2.3.3, van der Schaft uses Dirac structures to describe a class of port-Hamiltonian systems with dissipation. I’m hoping this formalism is appropriate for describing linear systems with dissipation, like circuits made of linear capacitors, inductors and resistors. Do you know if it is?

• Eugene says: I used to be very confused by this paper. I still am. The paper does prominently feature Dirac structures (as defined by Courant and Weinstein and as commonly understood in Poisson geometry) but then there is equation (29) on p. 1349 which has this funny R(x) term. And it is not something you would/should see in a Hamiltonian system defined by a Dirac structure. Another confusing thing about the survey is that flows and efforts are not generalized velocities and momenta; they don’t live in the so-called Pontryagin bundle.
They live as sections of a certain vector bundle and its dual (a trivial vector bundle in van der Schaft’s set-up). So what is really being talked about, or so it seems to me, is a generalization of a Courant algebroid-like structure with a metric term. The closest description I have seen in the literature is that of a Leibniz-Dirac structure (cf. arXiv:1210.1042 [math.DG]). But the geometry seems to be that of a Leibniz-Courant algebroid, as yet undefined in the literature. Looks like you’ll have to invent it!

8. Uncle Al says: I suspect it’s because the laws of physics are symmetrical under translations and rotations. When the assumptions of Noether’s theorem hold, this guarantees that the total momentum and angular momentum of a closed system are conserved. Symmetry under rotation, vacuum isotropy, is rigorously true for massless boson photons. They detect no vacuum anisotropy, refraction, dispersion, dichroism, gyrotropy (arXiv:1208.5288, 0912.5057, 0905.1929, 0706.2031, 1006.1376, 1106.1068). Observation suggests the vacuum is trace chiral anisotropic toward fermionic matter: Baryogenesis Sakharov conditions obtain if cosmic inflation was pseudoscalar false vacuum decay, resolving the Weak interaction. Inflated spacetime is trace chiral anisotropic toward fermionic matter. 1) Parity “violations” are intrinsic. 2) Noetherian connection between vacuum isotropy and angular momentum conservation leaks, hence MOND’s 1.2×10^(-10) m/s^2 Milgrom acceleration, ending dark matter. 3) SUSY and quantum gravitation, despite rigorous persuasive mathematics, empirically fail as written. Strop Occam’s razor with observation. Opposite shoes fit into trace chiral vacuum with different energies. They locally vacuum free fall along trace non-identical minimum action trajectories, violating the Equivalence Principle.
Crystallography’s opposite shoes are chemically and macroscopically identical, single crystal test masses in enantiomorphic space groups: P3(1)21 versus P3(2)21 alpha-quartz or P3(1) versus P3(2) gamma-glycine. Run geometric Eötvös experiments. Microwave spectrometers are more accessible. Racemic chiral molecular rotors launched at identical spin temperatures diverge spin temperatures when moving through a vacuum chiral background. Vacuum supersonic expand helium-entrained vapor of racemic D_3-trishomocuban-4-one, an intensely geometrically chiral rigid rotor, to initial <5 kelvin rotation temperature in a chirped pulse FT microwave spectrometer. If enantiomers' rotation temperature spectra diverge, Einstein-Cartan-Sciama-Kibble gravitation's chiral spacetime torsion is validated. Somebody should look. The worst they can do is succeed, explaining everything. DOI: 10.1055/s-0031-1289708

9. dcorfield says: “Preece…disliked Heaviside’s fancy math”. Which just goes to show how relative such judgements are. Heaviside didn’t like the ‘fancy math’ of quaternions, describing them as “antiphysical and unnatural” in opposition to proponents such as Tait.

10. Daniel Mahler says: I wonder if there is an analogy like this for information theory? It might relate to the Thermal Physics analogy via entropy. This could tie in with Charles Bennett’s work on thermodynamics of computation and the cost of erasing information. Maybe the Cramér-Rao bound would turn up as well.

• Daniel Mahler says: What applying this analogy to information theory would mean is unclear, but that is a part of the question (I am fishing :)). Here is one thought on how it might play out in machine learning. Suppose we have a space of data and a space of models; then the model parameters would be the positions and the data variables would be the forces, i.e. data provides information on how to improve the model.
The loss function being optimized might then play the role of something like energy.

• John Baez says: Information is proportional to entropy, and this analogy is used in machine learning, especially in MaxEnt approaches. In particular, when we choose the probability distribution that maximizes entropy subject to some constraints, we are finding a Gibbs state, and that instantly gives us equations that generalize the one I mentioned in this blog article:

\displaystyle{ d E = T d S - P d V + \sum_i \mu_i dN_i }

So, I think we could adapt the ‘Thermal Physics’ row of the chart to include a lot of ideas from machine learning. My two posts on Classical Mechanics versus Thermodynamics, and my series on Information Geometry, should give some clues as to how this works. So yes, we should pursue this aspect of the analogy! I’ve been invited to teach a tutorial on information geometry at NIPS 2013, a conference on Neural Information Processing Systems that takes place at Lake Tahoe this December. So I’ve got a great excuse to think about how networks and information theory fit together.

11. Marcus Urruh says: Dear John Baez, I am very curious what you would think about this work by Thomas Etter, “Dynamical Markov States and the Quantum Core”, where he claims he can very simply produce the full quantum density matrix formalism from pure statistics of Markov processes. Or did you already consider these things? It may be the missing link for your theory, if it is not flawed somewhere. Please have a look and tell me if you can make something out of this. Slides from a talk: http://www.boundaryinstitute.org/bi/articles/Dynamical_Markov.pdf Here is a longer paper elaborating on some of his basic ideas.
PROCESS, SYSTEM, CAUSALITY, AND QUANTUM MECHANICS: A Psychoanalysis of Animal Faith, Tom Etter, http://www.boundaryinstitute.org/bi/articles/PSCQM.pdf

ABSTRACT: I shall argue in this paper that a central piece of modern physics does not really belong to physics at all but to elementary probability theory. Given a joint probability distribution D on a set of random variables containing x and y, define a link between x and y to be the condition x=y on D. Define the state S of a link x=y as the joint probability distribution matrix on x and y without the link. The two core laws of quantum mechanics are the Born probability rule, and the unitary dynamical law whose best-known form is Schrödinger’s equation. Von Neumann formulated these two laws in the language of Hilbert space as prob(P) = trace(PS) and S’T = TS respectively, where P is a projection, S and S’ are density matrices, and T is a unitary transformation. We’ll see that if we regard link states as density matrices, the algebraic forms of these two core laws occur as completely general theorems about links. When we extend probability theory by allowing cases to count negatively, we find that the Hilbert space framework of quantum mechanics proper emerges from the assumption that all S’s are symmetrical in rows and columns. On the other hand, Markovian systems emerge when we assume that one of every linked variable pair has a uniform probability distribution. By representing quantum and Markovian structure in this way, we see clearly both how they differ, and also how they can coexist in natural harmony with each other, as they must in quantum measurement, which we’ll examine in some detail. Looking beyond quantum mechanics, we see how both structures have their special places in a much larger continuum of formal systems that we have yet to look for in nature.

All the best, Marcus

12. Hamilton says: Hi John! Thanks for the excellent article!
Have you seen the work of Gabriel Kron, who applied similar reasoning to model Schrödinger’s equation using circuits?

• Gabriel Kron, Electric circuit models of the Schrödinger equation, Phys. Rev. 67 (1945), 39–43.

13. Jacques says: Here is a great article by Rosen about analogous systems: http://link.springer.com/article/10.1007/BF02476608 He seems to take the view that it is always possible to construct an analogy (at least between any physical system and any other subsystem), but that the analogy is not unique and not “special”, in the sense that there is no universal implication about nature in our analogies :(

14. Ali Moharrer says: As I understood Peter Rowlands (reading his Zero to Infinity physics book), he argued for a way to bridge our understanding of the limits of conservative systems (and classical fields) by allowing conservative and non-conservative systems (also fields) to form a dual pair, as part of a non-dual representation of Nature. It appears that Nature allows for a counter-intuitive co-existence of both measurable and non-measurable characterizations. What is the break in the symmetries that differentiates an ideal flow (described by the Euler equation) from the real one (the Navier–Stokes equations)? Can these two systems (a conservative and a dissipative one) somehow possess other kinds of unifying features, of which symmetry principles are special cases?
Zbl0497.35082 9. D. Chae, O.Yu. Imanuvilov and S.M. Kim, Exact controllability for semilinear parabolic equations with Neumann boundary conditions. J. Dyn. Contr. Syst.2 (1996) 449–483.  Zbl0946.93007 10. J. Cheng and M. Yamamoto, One new strategy for a priori choice of regularizing parameters in Tikhonov's regularization. Inverse Probl.16 (2000) L31–L38.  Zbl0957.65052 11. P.G. Danilaev, Coefficient Inverse Problems for Parabolic Type Equations and Their Application. VSP, Utrecht (2001).   12. A. Elayyan and V. Isakov, On uniqueness of recovery of the discontinuous conductivity coefficient of a parabolic equation. SIAM J. Math. Anal.28 (1997) 49–59.  Zbl0870.35124 13. M.M. Eller and V. Isakov, Carleman estimates with two large parameters and applications. Contemp. Math.268 (2000) 117–136.  Zbl0973.35042 14. C. Fabre, J.-P. Puel and E. Zuazua, Approximate controllability of the semilinear heat equation. Proc. Royal Soc. Edinburgh125A (1995) 31–61.  Zbl0818.93032 15. A.V. Fursikov and O.Yu. Imanuvilov, Controllability of Evolution Equations, in Lecture Notes Series34, Seoul National University, Seoul, South Korea (1996).   16. D. Gilbarg and N.S. Trudinger, Elliptic Partial Differential Equations of Second Order. Springer-Verlag, Berlin (2001).  Zbl1042.35002 17. R. Glowinski and J.L. Lions, Exact and approximate controllability for distributed parameter systems. Acta Numer.3 (1994) 269–378.  Zbl0838.93013 18. L. Hörmander, Linear Partial Differential Operators. Springer-Verlag, Berlin (1963).  Zbl0108.09301 19. O.Yu. Imanuvilov, Controllability of parabolic equations. Sb. Math.186 (1995) 879–900.   20. O.Yu. Imanuvilov and M. Yamamoto, Lipschitz stability in inverse parabolic problems by the Carleman estimate. Inverse Probl.14 (1998) 1229–1245.  Zbl0992.35110 21. O.Yu. Imanuvilov and M. Yamamoto, Global Lipschitz stability in an inverse hyperbolic problem by interior observations. Inverse Probl.17 (2001) 717–728.  Zbl0983.35151 22. O.Yu. Imanuvilov and M. 
Yamamoto, Carleman estimate for a parabolic equation in a Sobolev space of negative order and its applications, in Control of Nonlinear Distributed Parameter Systems, Marcel Dekker, New York (2001) 113–137.  Zbl0977.93041 23. O.Yu. Imanuvilov and M. Yamamoto, Determination of a coefficient in an acoustic equation with a single measurement. Inverse Probl.19 (2003) 151–171.  Zbl1020.35117 24. O.Yu. Imanuvilov and M. Yamamoto, Carleman inequalities for parabolic equations in Sobolev spaces of negative order and exact controllability for semilinear parabolic equations. Publ. RIMS Kyoto Univ.39 (2003) 227–274.  Zbl1065.35079 25. V. Isakov, Inverse Problems for Partial Differential Equations. Springer-Verlag, Berlin (1998), (2005).  Zbl0908.35134 26. V. Isakov and S. Kindermann, Identification of the diffusion coefficient in a one-dimensional parabolic equation. Inverse Probl.16 (2000) 665–680.  Zbl0962.35188 27. M. Ivanchov, Inverse Problems for Equations of Parabolic Type. VNTL Publishers, Lviv, Ukraine (2003).  Zbl1147.35110 28. A. Khaĭdarov, Carleman estimates and inverse problems for second order hyperbolic equations. Math. USSR Sbornik58 (1987) 267–277.   29. M.V. Klibanov, Inverse problems in the “large” and Carleman bounds. Diff. Equ.20 (1984) 755–760.  Zbl0573.35083 30. M.V. Klibanov, Inverse problems and Carleman estimates. Inverse Probl.8 (1992) 575–596.  Zbl0755.35151 31. M.V. Klibanov, Estimates of initial conditions of parabolic equations and inequalities via lateral Cauchy data. Inverse Probl.22 (2006) 495–514.  Zbl1094.35139 32. M.V. Klibanov and A.A. Timonov, Carleman Estimates for Coefficient Inverse Problems and Numerical Applications. VSP, Utrecht (2004).  Zbl1069.65106 33. M.V. Klibanov and M. Yamamoto, Lipschitz stability of an inverse problem for an accoustic equation. Appl. Anal.85 (2006) 515–538.  Zbl1274.35413 34. M.M. Lavrent'ev, V.G. Romanov and Shishat · skiĭ, Ill-posed Problems of Mathematical Physics and Analysis. 
American Mathematical Society, Providence, Rhode Island (1986).   35. J.L. Lions and E. Magenes, Non-homogeneous Boundary Value Problems and Applications. Springer-Verlag, Berlin (1972).   36. L.E. Payne, Improperly Posed Problems in Partial Differential Equations. SIAM, Philadelphia (1975).  Zbl0302.35003 37. A. Pazy, Semigroups of Linear Operators and Applications to Partial Differential Equations. Springer-Verlag, New York (1983).  Zbl0516.47023 38. J.C. Saut and B. Scheurer, Unique continuation for some evolution equations. J. Diff. Eq.66 (1987) 118–139.  Zbl0631.35044 39. E.J.P.G. Schmidt and N. Weck, On the boundary behavior of solutions to elliptic and parabolic equations – with applications to boundary control for parabolic equations. SIAM J. Contr. Opt.16 (1978) 593–598.  Zbl0388.93027 40. M. Yamamoto, Uniqueness and stability in multidimensional hyperbolic inverse problems. J. Math. Pures Appl.78 (1999) 65–98.  Zbl0923.35200 41. M. Yamamoto and J. Zou, Simultaneous reconstruction of the initial temperature and heat radiative coefficient. Inverse Probl.17 (2001) 1181–1202.  Zbl0987.35166
Testing and Debugging in Flutter Apps For Flutter developers to produce high-quality applications, testing and debugging are essential steps in the software development process. This blog aims to help you optimize your development process and produce better apps by examining some best practices and tools for testing and debugging Flutter apps. Unit Testing Individual units or components of your program can be tested separately through unit testing. Unit tests can be created and executed using Flutter's built-in test package. Among the top techniques for Flutter unit testing are: • Writing tests before introducing new features or modifications. • Testing potential pitfalls and error scenarios. • Organizing and maintaining test code. • Using dependency injection and mocking to isolate components. Widget Testing The behavior of individual widgets and their interaction with other widgets can be tested using widget testing. The Flutter testing package allows you to create and execute widget tests. Some suggestions for widget testing in Flutter: • Creating tests for each widget individually. • Evaluating widget behavior at each stage. • Simulating user interactions with the WidgetTester API. • Refactoring tests to keep them up to date as the application changes. Integration Testing The interaction between various parts of your software can be tested through integration testing. The Flutter driver package allows you to create and execute integration tests. Some best practices for Flutter integration testing are: • Testing user flows and end-to-end scenarios. • Encapsulating UI elements with the page object pattern. • Establishing test environments that closely resemble production. • Examining test results to identify and address problems. 
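The unit-testing practices above can be sketched with Flutter's built-in test package. Note that `applyDiscount` is a hypothetical function invented for this illustration, written test-first in the TDD spirit:

```dart
import 'package:flutter_test/flutter_test.dart';

// Hypothetical unit under test: a pure pricing helper.
int applyDiscount(int price, int percent) {
  if (percent < 0 || percent > 100) {
    throw ArgumentError('percent must be between 0 and 100');
  }
  return price - (price * percent) ~/ 100;
}

void main() {
  test('applies a percentage discount', () {
    expect(applyDiscount(200, 10), 180);
  });

  test('rejects out-of-range percentages (an error scenario)', () {
    expect(() => applyDiscount(200, -5), throwsArgumentError);
  });
}
```

Run it with `flutter test`; each `test` case is isolated, so a failure pinpoints one behavior.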
Debugging Tools To assist developers in finding and resolving problems in their apps, Flutter offers a variety of debugging tools. Some of the more practical tools are: • The Dart Observatory, which offers a web-based interface for examining memory utilization and performance. • Flutter DevTools, which offers a collection of tools for assessing and debugging Flutter apps. • The Flutter inspector, which lets you inspect and debug individual widgets in your app. • Logging and breakpoints, which let you pause the app's execution and print messages at precise points in the code. Best Practices for Testing and Debugging in a Flutter App 1. Test-Driven Development (TDD) Before writing any actual code, TDD entails writing automated tests for each section. By adhering to TDD, developers can make sure that their code is tested, maintainable, and less prone to errors. 2. Use the built-in Flutter test framework Developers can create and execute unit, widget, and integration tests using Flutter's integrated testing framework. The framework offers a number of tools, such as the flutter_test package, the test function, and the expect method, to enable the creation and execution of tests. These tools make it simple to write, run, and analyze tests for your Flutter project. 3. Use Flutter DevTools Flutter DevTools is a collection of performance and debugging tools that can help you raise the quality of your project. DevTools has a number of capabilities, such as a timeline view, memory view, and logging view. You can use these tools to find performance problems, memory leaks, and other issues in your app. 4. Use Flutter Driver for integration testing Developers can conduct integration testing for their Flutter apps using the testing tool Flutter Driver. 
Developers can create automated tests that interact with the app much like a user would by using Flutter Driver. Moreover, this tool can find flaws and mistakes in your app that unit or widget testing might miss. Tools for Testing and Debugging 1. Flutter Test Developers can run unit, widget, and integration tests for their Flutter apps using the command-line tool flutter test. The program offers a number of options for filtering and running tests based on particular criteria. Additionally, it can produce code-coverage data that can be used to pinpoint portions of your code that require additional testing. 2. Flutter DevTools Flutter DevTools offers a variety of performance and debugging tools to help you raise the quality of your project. DevTools is accessible via a Chrome extension or the Flutter SDK, and gives you immediate feedback on your app's performance, memory utilization, and other factors. 3. Flutter Driver Using the integration testing tool Flutter Driver, you can create automated tests that interact with your app much like a user. In addition to testing your app's navigation and state management, Flutter Driver tests can replicate user behaviors like tapping and swiping. 4. Firebase Test Lab Firebase Test Lab is a cloud-based testing platform that enables you to test your app on a variety of hardware and configurations. You can use Test Lab to run automated tests for your app on real or virtual devices, and to evaluate how well it performs in various network scenarios. 5. Sentry Sentry is an error-monitoring solution that can help you find and fix faults in your app. It offers real-time error reporting and can notify you when new issues appear. Sentry can also reveal the underlying causes of errors, which can speed up error correction.
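The widget-testing workflow with flutter_test described above looks roughly like this; `CounterPage` is a hypothetical widget assumed to render a counter starting at 0 and an add button:

```dart
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';

// Hypothetical app widget assumed for this sketch.
import 'package:my_app/counter_page.dart';

void main() {
  testWidgets('tapping + increments the counter', (WidgetTester tester) async {
    // Build the widget tree under test.
    await tester.pumpWidget(const MaterialApp(home: CounterPage()));

    expect(find.text('0'), findsOneWidget);

    // Simulate a user interaction via the WidgetTester API.
    await tester.tap(find.byIcon(Icons.add));
    await tester.pump(); // rebuild after the state change

    expect(find.text('1'), findsOneWidget);
  });
}
```

The same `flutter test` command runs both unit and widget tests; only Flutter Driver tests need a device or emulator.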
Why do I receive a "Host IDs Do Not Match" warning when installing the network license manager? 119 views (last 30 days) Why do I receive a "Host IDs Do Not Match" error when installing MATLAB R2021b or newer with a server license.dat file, with the error below? The Host ID in the license file must be for this computer. Obtain a new license file generated for the Host ID of this computer from your license administrator or from MathWorks License Center. Accepted Answer MathWorks Support Team on 15 Sep 2022 Edited: MathWorks Support Team on 15 Sep 2022 This warning appears when attempting to install the network license manager using a license file activated to an IP address. You can dismiss this message and continue through the installation; however, the license.dat in MATLABROOT\etc may need to be edited. The SERVER line in the license.dat file may be missing the "INTERNET=" syntax before the listed IP address. To resolve this, edit the license.dat file and add "INTERNET=" to the SERVER line. For example, the following SERVER line: SERVER test-server 192.168.86.31 27000 must be updated to: SERVER test-server INTERNET=192.168.86.31 27000 Once the license.dat file is updated, you will be able to start the license manager. Alternatively, a license file activated to the MAC address of the computer, instead of one activated to an IP address, will not encounter this warning. More Answers (0)
In a thin film with a refractive index n = 1.35, a parallel beam of white light is incident at an angle of 52°. At what film thickness is the reflected light most strongly colored by yellow light of 600 nm? Content: 40945.PNG 11.67 kB Description In a thin film with a refractive index n = 1.35, a parallel beam of white light is incident at an angle of 52°. At what film thickness is the reflected light most strongly colored by yellow light of 600 nm? Additional information Task 40945. Detailed solution with a short record of the problem conditions, the laws and formulas used in the solution, the derivation of the calculation formula, and the answer. If you have any questions about the solution, write to me. I will try to help.
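A standard way to set up this kind of problem (a sketch of the usual textbook approach, not the purchased solution itself): for reflection from a thin film in air there is a half-wave phase loss at the top surface, so the reflected intensity is maximal when the optical path difference equals an odd number of half-wavelengths:

```latex
2d\sqrt{n^{2}-\sin^{2}i} \;=\; (2m+1)\,\frac{\lambda}{2},\qquad m = 0,1,2,\dots
\]
\[
d \;=\; \frac{(2m+1)\,\lambda}{4\sqrt{n^{2}-\sin^{2}i}}
  \;=\; \frac{(2m+1)\cdot 600\ \text{nm}}{4\sqrt{1.35^{2}-\sin^{2}52^{\circ}}}
  \;\approx\; (2m+1)\cdot 0.137\ \mu\text{m},
```

so the smallest thickness that reflects yellow most strongly is d ≈ 0.14 μm, with thicker solutions at odd multiples of that value.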
Changeset 1375 for Deliverables/D4.2-4.3
Timestamp: Oct 14, 2011, 5:43:46 PM (9 years ago)
Author: mulligan
Message: changes, fixing typos etc
File: 1 edited
• Deliverables/D4.2-4.3/reports/D4-2.tex (r1374 → r1375)

 The Matita compiler's backend consists of five distinct intermediate languages: RTL, RTLntl, ERTL, LTL and LIN.
-A fifth language, RTLabs, serves as the entry point of the backend and the exit point of the frontend.
+A sixth language, RTLabs, serves as the entry point of the backend and the exit point of the frontend.
 RTL, RTLntl, ERTL and LTL are `control flow graph based' languages, whereas LIN is a linearised language, the final language before translation to assembly.

 \paragraph{RTLabs ((Abstract) Register Transfer Language)}
 As mentioned, this is the final language of the compiler's frontend and the entry point for the backend.
-This language uses pseudoregisters, not hardware registers.\footnote{There are an unbounded number of pseudoregisters.  Pseudoregisters are converted to hardware registers of stack positions during register allocation.}
+This language uses pseudoregisters, not hardware registers.\footnote{There are an unbounded number of pseudoregisters.  Pseudoregisters are converted to hardware registers or stack positions during register allocation.}
 Functions still use stackframes, where arguments are passed on the stack and results are stored in addresses.
-During the pass to RTL, these are eliminated, and instruction selection is carried out.
+During the pass to RTL instruction selection is carried out.

 \paragraph{RTL (Register Transfer Language)}

 RTLntl is not present in the O'Caml compiler.

-\paragraph{ERTL (Extended Register Transfer Language)}
-In this language most instructions still operate on pseudoregisters, apart from instructions that move data to, and from, the accumulator.
+\paragraph{ERTL (Explicit Register Transfer Language)}
+This is a language very similar to RTLntl.
+However, the calling convention is made explicit, in that functions no longer receive and return inputs and outputs via a high-level mechanism, but rather use stack slots or hadware registers.
 The ERTL to LTL pass performs the following transformations: liveness analysis, register colouring and register/stack slot allocation.

Note: See TracChangeset for help on using the changeset viewer.
Searched refs:iterations (Results 1 - 4 of 4) sorted by relevance

/PHP_5_6/sapi/isapi/stresstest/stresstest.cpp
    34  DWORD iterations = 1;
   478  " -i number of iterations per thread (default=1)\n"
   503  iterations = atoi(ap_optarg);
   596  for (DWORD j=0; j<iterations; j++) {

/PHP_5_6/ext/phar/phar/pharcommand.inc
   719  * @param string $func Function to call on the iterations

/PHP_5_6/ext/hash/hash.c
   612  /* {{{ proto string hash_pbkdf2(string algo, string password, string salt, int iterations [, int length = 0, bool raw_output = false])
   619  long loops, i, j, iterations, length = 0, digest_length;
   625  if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "sssl|lb", &algo, &algo_len, &pass, &pass_len, &salt, &salt_len, &iterations, &length, &raw_output) == FAILURE) {
   635  if (iterations <= 0) {
   636  php_error_docref(NULL TSRMLS_CC, E_WARNING, "Iterations must be a positive integer: %ld", iterations);
   702  for (j = 1; j < iterations; j++) {
  1195  ZEND_ARG_INFO(0, iterations)

/PHP_5_6/ext/openssl/openssl.c
   266  ZEND_ARG_INFO(0, iterations)
  3978  /* {{{ proto string openssl_pbkdf2(string password, string salt, long key_length, long iterations [, string digest_method = "sha1"])
  3982  long key_length = 0, iterations = 0;
  3993  &key_length, &iterations,
  4016  if (PKCS5_PBKDF2_HMAC(password, password_len, (unsigned char *)salt, salt_len, iterations, digest, key_length, out_buffer) == 1) {

Completed in 27 milliseconds
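The `hash_pbkdf2` prototype and the `iterations <= 0` guard shown in hash.c have direct analogues elsewhere; as an illustration (Python's standard library, not the PHP source), `hashlib.pbkdf2_hmac` takes the same (algorithm, password, salt, iterations, length) inputs:

```python
import hashlib

def pbkdf2_hex(algo: str, password: bytes, salt: bytes,
               iterations: int, length: int) -> str:
    """Rough analogue of PHP's hash_pbkdf2 with raw_output=false (illustrative)."""
    if iterations <= 0:
        # Mirrors the "Iterations must be a positive integer" check in hash.c.
        raise ValueError("iterations must be a positive integer: %d" % iterations)
    return hashlib.pbkdf2_hmac(algo, password, salt, iterations, length).hex()

# RFC 6070 test vector for PBKDF2-HMAC-SHA1, 1 iteration, 20-byte key.
print(pbkdf2_hex("sha1", b"password", b"salt", 1, 20))
# → 0c60c80f961f0e71f3a9b524af6012062fe037a6
```

Both the PHP and OpenSSL bindings ultimately run the same HMAC loop (`for (j = 1; j < iterations; j++)` in hash.c; `PKCS5_PBKDF2_HMAC` in openssl.c).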
Getting value of selected row in second table after another row is selected in first 219 May 24, 2017, at 10:00 AM Let's say I have a set of two tables that are generated by Javascript. The table values are pulled from an online database. As part of the table creation, an event listener is added to each row in each table. When a row is selected, the event listener automatically adds an ID and class called selected to the target row. In another function, I pull the content of the selected row from the first table and store it in a variable. However, since the selected class and ID are the same in both tables, I can't use it to grab the value of a selected row from the second table; instead, I keep getting the first table's row content. What's the best way to get the selected value of the second table? Answer 1 Add a unique ID to the selected row in each table, or scope your lookup to each table.
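One sketch of the usual fix: `id` values must be unique in a document, so rely on the shared `selected` class instead and scope the lookup to each table element (the `table1`/`table2` ids below are assumptions about the markup, not taken from the question):

```javascript
// Scope the lookup to one table instead of using a document-wide id.
function getSelectedRowText(table) {
  const row = table.querySelector('tr.selected'); // searches only this table
  return row ? row.textContent.trim() : null;     // null if nothing selected
}

// Usage, assuming <table id="table1"> and <table id="table2"> exist:
// const first  = getSelectedRowText(document.getElementById('table1'));
// const second = getSelectedRowText(document.getElementById('table2'));
```

Because `table.querySelector` only searches that table's subtree, each call returns that table's own selected row rather than the first match in the whole document.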
Quantification of magnetic force microscopy using a micronscale current ring Linshu Kong, Stephen Y. Chou Research output: Contribution to journal › Article › peer-review 59 Scopus citations Abstract Metal rings with inner diameters of 1 and 5 μm, fabricated using electron-beam lithography, were used to calibrate magnetic force microscopy (MFM). An MFM tip's effective magnetic charge, q, and effective magnetic moment along the tip's long axis, mz, can be determined from the MFM signal of the ring at different scan heights and different electric currents in the ring. The magnetic moments in the directions transverse to the tip's long axis were estimated by a straight current wire. It was found that for a Si tip coated with 65 nm of cobalt on one side, q is 2.8×10-6 emu/cm, mz is 3.8×10-9 emu, and mx and my are on the order of 10-13 emu, which is negligible compared with mz. Furthermore, the MFM's sensitivity to the second derivative of the magnetic field was determined, from the minimum ring current for a measurable MFM signal, to be 0.1 Oe/nm2. Original language: English (US) Pages (from-to): 2043-2045 Number of pages: 3 Journal: Applied Physics Letters Volume: 70 Issue number: 15 DOIs State: Published - Apr 14 1997 Externally published: Yes All Science Journal Classification (ASJC) codes • Physics and Astronomy (miscellaneous)
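The calibration works because a current ring produces a field that can be computed exactly. As a sketch of the underlying magnetostatics (standard textbook formulas, not code or numbers from the paper — the radius and current below are illustrative), the on-axis field of a circular loop is B_z(z) = μ0 I R² / [2(R² + z²)^{3/2}], and its curvature ∂²B_z/∂z² is the quantity quoted in the abstract's sensitivity figure:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def bz(z, radius, current):
    """On-axis field of a circular current loop (standard magnetostatics)."""
    return MU0 * current * radius**2 / (2 * (radius**2 + z**2) ** 1.5)

def d2bz_dz2(z, radius, current):
    """Analytic second z-derivative of bz."""
    r2 = radius**2 + z**2
    return (MU0 * current * radius**2 / 2) * (-3 * r2**-2.5 + 15 * z**2 * r2**-3.5)

# Illustrative numbers: a 1 um inner-diameter ring (R = 0.5 um) carrying 1 mA.
R, I = 0.5e-6, 1e-3
curvature = d2bz_dz2(0.0, R, I)  # T/m^2 at the ring centre; equals -3*MU0*I/(2*R^3)
print(curvature)
```

Scanning the tip over such a ring at known currents then lets q and mz be fitted from the measured signal, which is the calibration strategy the abstract describes.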