All About Bladder Cancer

What is bladder cancer?
Bladder cancer is an abnormal growth of cells in the bladder. It is one of the most common types of cancer and mostly affects people over 50 years of age. People at high risk include those who have had long-term exposure to certain chemotherapy agents, diesel exhaust fumes, and other industrial chemicals. Bladder cancer develops when cells that make up the urinary bladder begin to grow in an uncontrolled manner. As more cancer cells develop, they can form a tumor, which, over time, may spread to other areas of the body.

Bladder cancer is any of several types of cancer arising from the tissues of the urinary bladder. It is a disease in which cells grow abnormally and have the potential to spread to other parts of the body. Symptoms include blood in the urine, pain with urination, and low back pain.

The bladder is a hollow organ located in the lower pelvis. It has pliable, muscular walls that can stretch to hold urine and then squeeze to expel it from the body. It is part of the urinary tract, and its primary function is to store urine. Urine is a liquid waste product produced by the kidneys and transported to the bladder through two tubes known as ureters. During urination, the muscles of the bladder contract, pushing urine out of the bladder through a tube known as the urethra.

Bladder cancer symptoms
Bladder cancer symptoms can be caught early, and knowing them at an early stage helps you avoid serious problems. So, how do you know if you have bladder cancer? The symptoms of bladder cancer can be mistaken for less serious conditions, so it is essential to get them checked by a professional.
Bladder cancer may not cause any signs or symptoms in its early stages. Here are a few common symptoms to know:
• Blood or blood clots in the urine
• Pain or a burning sensation when urinating
• Frequent urination
• A strong urge to urinate multiple times throughout the night
• The sensation of needing to urinate but being unable to do so
• Pain in the lower back, but only on one side of the body

The presence of blood in the urine is known as hematuria. It is the most common early sign of bladder cancer. Gross hematuria is a condition in which there is enough blood in the urine for the patient to detect it visually. The urine may also contain trace amounts of blood that cannot be seen by the naked eye. This condition, which is only detectable through a urine test, is referred to as "microscopic hematuria."

It is possible that cancer has already spread to another part of the body when the first symptoms of bladder cancer appear. In this situation, the symptoms depend on the stage and location of the cancer. Cancer that has spread to the bone may cause bone pain or a fracture. Cancer that has spread to the lungs may cause coughing or shortness of breath. Cancer that has spread to the liver may cause abdominal pain or jaundice (yellowing of the skin and whites of the eyes). Pain in the back or pelvis, unexplained loss of appetite, and unintended weight loss may also be associated with advanced bladder cancer.

Warning signs of bladder cancer
If you are aware of the warning signs and symptoms, you may be able to receive a diagnosis faster, which could improve your prognosis. The following are five warning signs of bladder cancer to keep an eye out for:

1.    
Blood in the urine (hematuria)
This is the most common early symptom of bladder cancer and is typically the first sign a person notices. It is typically painless, and several weeks or even months may pass between episodes, so it is easy to ignore when it occurs. Many women attribute this symptom to menstruation or menopause and dismiss it.

2.    Facing UTI-like symptoms
Bladder cancer can be confused with a Urinary Tract Infection (UTI) because many of the symptoms of the two conditions are similar. Patients might experience urinary incontinence, pain during urination, and increased frequency and urgency of urination. If you are unsure whether there is a problem, a consultation with a urologist is your best option. Talk to your primary care physician if you have any urinary issues, such as a frequent need to urinate, the sensation that you need to urinate but are unable to, difficulty emptying your bladder, or if antibiotics do not appear to be alleviating the symptoms of your UTI.

3.    Having pain that cannot be explained
Pain is a common symptom of bladder cancers that have advanced to a later stage. Pain might be felt in the flank region, in the abdomen, or in the pelvis. Patients whose cancer has spread to their bones may also experience bone pain. Tell your doctor if you are experiencing aches and pains in those areas; this is especially important if you have also noticed spotting or other symptoms of a UTI.

4.    Reduced hunger
Loss of appetite is a symptom of many different types of cancer, and bladder cancer is no exception. You may experience weight loss, as well as feelings of exhaustion and weakness, if the cancer has grown or spread.
Of course, there are plenty of other things that can affect your appetite, so you shouldn't automatically assume the worst; however, you should talk to your doctor about it if it persists.

5.    Postmenopausal bleeding
After menopause, if you notice any blood or spotting, you should get it checked out because it could be a sign of bladder cancer or another underlying issue. Where is bladder pain felt? It may be felt in the lower abdomen and pelvis. Because blood in the urine is easy to overlook, it is recommended that you visit your urologist just to be on the safe side.

Who is at risk for bladder cancer?
When it comes to bladder cancer, smoking is by far the most important risk factor. The National Institutes of Health reports that approximately 50% of female patients diagnosed with bladder cancer are smokers. If you are a smoker and notice any of the symptoms listed above, you should make an appointment with your doctor as soon as possible, because smoking significantly increases the risk of developing the disease.

Exposure to radiation in the past is the second most common risk factor (e.g., as a treatment for cervical cancer, prostate cancer, or rectal cancer).

In addition, the use of some chemotherapy medicines, such as cyclophosphamide, has been linked to an increased risk of bladder cancer.

Having had bladder cancer in the past is another significant risk factor. The recurrence rate for bladder cancer, which ranges from 50 to 80%, is one of the highest of any type of cancer. If you have previously been diagnosed with bladder cancer, it is critical that you maintain regular checkups with your primary care physician and remain vigilant for any symptoms of the disease. When in doubt, get it checked out.

Age is yet another important consideration.
The average age at diagnosis in women is 73. Any woman over the age of 55 who is concerned about her health should pay special attention to the warning signs.

Environmental exposures are a contributing factor in the development of bladder cancer. Those who work with chemicals such as aromatic amines, which are compounds used in the production of dyes, are at risk. Extensive contact with rubber, leather, certain textiles, paint, and hairdressing supplies, often associated with workplace exposure, also appears to increase the risk of developing the condition.

Infection with the blood-dwelling parasite Schistosoma haematobium, which is particularly prevalent in developing nations and the Middle East, is another risk factor. (This parasite is not found in the United States.)

People who have frequent bladder infections, bladder stones, or other diseases of the urinary tract, or who have a chronic need for a catheter in the bladder, may be at a higher risk of developing squamous cell carcinoma. Patients who have had bladder cancer in the past have an elevated chance of developing a new bladder tumor or having an existing bladder tumor return.

Causes of bladder cancer
After reading the topics above, one question that might come to mind is: how do you get bladder cancer? Bladder cancer begins when cells in the bladder undergo changes (mutations) in their DNA. The DNA of a cell contains instructions that direct what the cell should do. The changes instruct the cell to multiply quickly and to keep living when healthy cells would normally die. The abnormal cells form a tumor, which has the potential to infiltrate and destroy normal body tissue.
After a period of time, the abnormal cells may break away and spread (metastasize) throughout the body.

Types of bladder cancer
Bladder cancer comes in several types. Let's have a look at each one of them individually:

1.    Urothelial carcinoma
Urothelial carcinoma, also referred to as Transitional Cell Carcinoma (TCC), is by far the most common type of bladder cancer. In fact, if you are diagnosed with bladder cancer, the type of cancer you have is almost certainly urothelial carcinoma. These cancers begin in the urothelial cells that line the interior of the bladder.

Urothelial cells also line the other parts of the urinary tract, including the ureters, the urethra, and the renal pelvis (the part of the kidney that connects to the ureter). Because people who have bladder cancer may also have tumors in these other locations, the entire urinary tract needs to be examined for tumors.

2.    Squamous cell carcinoma
Squamous cell carcinomas account for only about 1–2% of bladder cancers diagnosed globally. Under a microscope, the cells look very similar to the flat cells found on the surface of the skin. The vast majority of bladder squamous cell carcinomas are aggressive forms of the disease.

3.    Adenocarcinoma
Adenocarcinomas make up only about 1% of all cases of bladder cancer. These cancer cells share many characteristics with the gland-forming cells found in colon cancers. The vast majority of bladder adenocarcinomas are aggressive forms of the disease.

4.    Small cell carcinoma
A small percentage of bladder cancers, less than 1%, are small-cell carcinomas. They originate in nerve-like cells known as neuroendocrine cells.
Because of their rapid growth rate, these cancers typically require chemotherapy similar to that used for treating small cell carcinoma of the lung.

5.    Sarcomas
Sarcomas begin in the muscle cells of the bladder, but they are extremely uncommon. A sarcoma is a form of cancer that first develops in the bone or in the soft tissues of the body, such as cartilage, fat, muscle, blood vessels, fibrous tissue, or other connective or supportive tissue. The various subtypes of sarcoma are named for the site where the cancer initially develops: for instance, osteosarcoma develops in the bone, liposarcoma in the fat, and rhabdomyosarcoma in the muscle. The treatment plan and outlook are both determined by the type and stage of the cancer. Sarcoma can affect people of any age, including children.

The treatment for these less common forms of bladder cancer (other than sarcoma) is very similar to the treatment for TCCs, particularly for early-stage tumors; however, if chemotherapy is required, different drugs may be used.

Does bladder cancer have stages?
Are you wondering what the stages of bladder cancer are? Here is the answer: your physician can use a staging system for bladder cancer that ranges from stage 0 to stage 4 to determine how far the cancer has spread. The following descriptions apply to the various stages of bladder cancer:

• Cancer that has not yet spread beyond the lining of the bladder is considered stage 0. In this initial stage, abnormal cells are found in the tissue that lines the interior of the bladder. These abnormal cells have the potential to develop into cancer and spread into the nearby normal tissue.
Stage 0 tumors are classified into stages 0a and 0is according to the American Joint Committee on Cancer staging system:
• Non-invasive papillary carcinoma, also known as stage 0a, can present as long, thin growths projecting from the lining of the bladder. These growths may be cancerous.
• Carcinoma in situ, also known as stage 0is, is a flat tumor that forms on the tissue lining the interior of the bladder.

• Stage 1 bladder cancer has moved beyond the layer that lines the bladder but has not yet reached the muscle layer of the bladder wall.
• Stage 2 bladder cancer has spread into the muscle layer of the bladder.
• Stage 3 bladder cancer has spread into the tissues that surround the bladder. The third stage is further subdivided into stages IIIA and IIIB. In stage IIIA, cancer has spread from the bladder to the layer of fat surrounding the bladder and may have spread to the reproductive organs (seminal vesicles, prostate, uterus, or vagina) but has not spread to lymph nodes, or cancer has spread from the bladder to one lymph node in the pelvis that is not near the common iliac arteries (major arteries in the pelvis). In stage IIIB, cancer has spread from the bladder to more than one lymph node in the pelvis that is not near the common iliac arteries, or it has spread to at least one lymph node near the common iliac arteries.
• When bladder cancer has progressed to stage 4, the disease has spread beyond the bladder to other parts of the body. The fourth stage is broken down into stages IVA and IVB.
• In stage IVA, cancer has spread from the bladder to the wall of the abdomen or pelvis, or it has spread to lymph nodes located above the common iliac arteries (major arteries in the pelvis).
• In stage IVB, cancer has spread to other regions of the body, such as the lung, bone, or liver.

After treatment, bladder cancer occasionally returns in patients. This is referred to as a recurrence. The cancer may return in the bladder or in another part of the body.

The beginnings and progression of bladder cancer
The wall of the bladder is made up of several layers, and each layer is made up of distinct types of cells.

The urothelium, also known as the transitional epithelium, is the layer of the bladder where the vast majority of bladder cancers originate. When cancer spreads into additional layers of the bladder wall, or even through them, it has a higher stage, it is further along in its development, and it may be more difficult to treat.

Over the course of time, cancer may spread from the bladder into nearby organs and structures. It can spread to lymph nodes in the area, as well as to other parts of the body. (When bladder cancer spreads, it typically goes to distant lymph nodes, as well as the bones, lungs, or liver.)

Invasive vs. non-invasive bladder cancer
Cancers of the bladder are frequently classified according to how far they have spread into the wall of the bladder, as follows:

Invasive bladder cancer
Cancers that are invasive have spread into the deeper layers of the bladder wall. These malignancies have a higher risk of metastasizing and are more difficult to cure.

Non-invasive bladder cancer
Cancers that are not invasive are found only in the innermost layer of cells (the transitional epithelium).
They have not grown into the deeper layers of the bladder wall.

In addition to these two classifications, bladder cancer may be described as superficial or non-muscle invasive. These phrases refer to cancers that have not spread into the main muscle layer of the bladder and encompass both non-invasive tumors and any invasive tumors that have not reached that layer.

Papillary vs. flat cancer
On the basis of their growth pattern, bladder tumors are further subdivided into two categories: papillary and flat.

Papillary carcinomas develop in the form of thin, finger-like projections that extend from the inner surface of the bladder toward the hollow center. Papillary tumors typically grow toward the center of the bladder rather than into its deeper layers. These growths are called non-invasive papillary cancers. Papillary carcinoma that is non-invasive and of a very low grade (slow-growing) is sometimes referred to as Papillary Urothelial Neoplasm of Low Malignant Potential (PUNLMP), and the prognosis for patients with this type of cancer is typically quite optimistic.

Flat carcinomas do not grow toward the hollow region of the bladder the way papillary cancers do. When a flat tumor is found solely in the inner layer of bladder cells, medical professionals refer to the condition as non-invasive flat carcinoma or flat carcinoma in situ (CIS).

When a papillary or flat tumor invades deeper layers of the bladder, it is known as invasive urothelial carcinoma, also called transitional cell carcinoma.

How fast does bladder cancer spread?
Up to 50% of patients diagnosed with muscle-invasive bladder cancer may have occult metastases that become clinically apparent within five years of the initial diagnosis. Roughly 5% of patients will already have distant metastases at the time of the initial diagnosis.
Despite chemotherapy, the majority of patients with overt metastatic disease will pass away within two years. Patients who undergo cystectomy and pelvic lymph node dissection and are found to have only limited regional lymph node metastases have a 25 to 30% chance of surviving longer than five years. Metastasis can take place in two ways: through the lymphatic system, in which case it most frequently affects the lymph nodes in the pelvis, or through the bloodstream, in which case it moves to the liver, lung, or bone.

How is bladder cancer diagnosed?
The following tests and methods may be utilized in the diagnostic process for bladder cancer:

• Using a scope to examine the inside of your bladder (cystoscopy). To perform a cystoscopy, your doctor inserts a cystoscope, a thin, long tube, into the urethra. The lens on the cystoscope provides a view of the interior of the urethra and bladder, allowing your doctor to examine these organs for any indications of disease. Cystoscopy can be performed either in a doctor's office or in a hospital setting.
• Removing a piece of tissue for evaluation (biopsy). During cystoscopy, your physician may take a cell sample (biopsy) for testing by inserting a specialized tool through the cystoscope and into your bladder. This technique is also known as Transurethral Resection of Bladder Tumor (TURBT). TURBT can also be used to treat bladder cancer.
• Examining a urine sample (urine cytology). Urine cytology involves examining a sample of your urine under a microscope to look for any signs of cancer cells.
• Imaging testing: Your doctor can check the anatomy of your urinary system using imaging tests such as a Computed Tomography (CT) urogram or a retrograde pyelogram. For a CT urogram, a contrast dye is injected into a vein in your hand. The dye eventually makes its way through your kidneys, ureters, and bladder. The images acquired during the test provide a detailed view of your urinary tract, which helps your physician locate any potentially cancerous regions. A retrograde pyelogram is an X-ray procedure performed to obtain an in-depth view of the upper urinary tract. During this procedure, your physician threads a small tube (catheter) through your urethra and into your bladder and ureters, then injects contrast dye through it toward your kidneys while the X-ray images are taken.

What tests will I have if my doctor suspects bladder cancer or another urinary problem?
After it has been established that you have bladder cancer, your physician may advise further examinations to determine whether the disease has spread to your lymph nodes or to other parts of your body. Here are the tests your physician may ask for:
• CT scan
• Magnetic Resonance Imaging (MRI)
• Positron Emission Tomography (PET)
• Bone scan
• X-ray of the chest

Based on the information gleaned from these procedures, your doctor will assign your cancer a stage. The stages of bladder cancer range from 0 to 4. The earliest stages indicate that the disease has not yet spread to the muscular wall of the bladder and is instead confined to the innermost layers of the bladder.
The most advanced stage, stage 4, signifies that the disease has spread to lymph nodes or organs in other parts of the body.

Bladder cancer grade
When examined under a microscope, bladder tumors are categorized further according to the characteristics of the cancer cells themselves. This characteristic is referred to as the grade, and your physician may classify the bladder cancer as either low grade or high grade:

Low-grade bladder cancer
This form of cancer is characterized by cells that more closely resemble normal cells in both appearance and organization (well-differentiated). Compared to a high-grade tumor, a low-grade tumor typically grows more slowly and has a lower risk of invading the muscle wall of the bladder.

High-grade bladder cancer
This form of cancer is characterized by cells that have an abnormal appearance and bear little resemblance to normal tissue (poorly differentiated). A high-grade tumor tends to grow more aggressively than a low-grade tumor and may have a greater chance of spreading to the muscular wall of the bladder as well as other tissues and organs.

Is bladder cancer curable?
The following factors will determine the prognosis:
• The stage of the cancer (whether it is superficial or invasive bladder cancer, and whether it has spread to other places in the body). When detected in its earlier stages, bladder cancer is frequently curable.
• The type of bladder cancer cells and how they appear under a microscope.
• Whether there is carcinoma in situ in other areas of the bladder.
• The age of the patient as well as their overall health.

If the cancer is superficial, the prognosis is also determined by the following factors:
• The number of tumors.
• The size of the tumors.
• Whether or not the tumor has returned after treatment.

The stage of bladder cancer influences the treatment choices that are available.

Bladder cancer treatment
Patients who have been diagnosed with bladder cancer can select from a variety of therapy options. You must be wondering: what is the best treatment for bladder cancer? There are five different types of treatment that are considered standard:
1. Surgery
2. Radiation therapy
3. Chemotherapy
4. Immunotherapy
5. Targeted drug therapy

Clinical trials are being used to investigate and test new treatment methods, and further testing may be required. Treatment for bladder cancer may cause side effects. Some treatments are considered the gold standard (the treatment that is most commonly used), while others are currently being evaluated in research trials. A clinical trial of treatment is a research study conducted to help improve existing treatments or to gather information on new treatments for people who have cancer. If clinical trials demonstrate that a new treatment is superior to the current standard, the new treatment may replace it. Patients should give some thought to the possibility of taking part in a clinical study. Patients may enroll in clinical trials at any point prior to, during, or after beginning treatment for their cancer, and some clinical trials are open only to patients who have not yet begun treatment.
There are five different types of treatment that are considered standard:

#1 Surgery
It's possible that you'll need one of the following kinds of surgical procedures:

1.     Transurethral resection (TUR) with fulguration
This procedure involves inserting a cystoscope, a thin, illuminated tube, into the bladder through the urethra. A device with a loop of thin wire on one end is then used to either cut out the cancerous growth or destroy it with high-powered electricity. The latter process is referred to as fulguration.

2.     Radical cystectomy
During a radical cystectomy, the patient's bladder is surgically removed, along with any lymph nodes and adjacent organs that contain cancer. This surgery may be necessary when cancer has spread into the muscle wall of the bladder, or when superficial cancer involves a considerable portion of the bladder. In males, the prostate and the seminal vesicles are the neighboring organs removed during this procedure. In female patients, the uterus, ovaries, and a portion of the vagina are removed. In some cases, when the disease has progressed beyond the bladder and cannot be completely removed, surgery to remove only the bladder may be performed to treat urinary problems caused by the cancer. When the bladder must be removed, the surgeon creates an alternative route for urine to pass out of the body.

3.     Partial cystectomy
Patients with a low-grade tumor that has invaded the wall of the bladder but is confined to one region of the bladder may be candidates for this surgery. Because only a portion of the bladder is removed, patients are able to urinate normally after recovering from this procedure. Segmental cystectomy is another name for this procedure.

4.     
Urinary diversion
Urinary diversion is a surgical procedure that creates a new pathway for the body to store and pass urine.

Even if the surgeon removes all visible signs of cancer during the initial procedure, chemotherapy may be administered after surgery to eliminate any remaining cancer cells. Treatment given after primary surgical treatment to reduce the likelihood that cancer will return is known as adjuvant therapy.

#2 Radiation therapy
Radiation therapy is a cancer treatment that subjects cancer cells to high-energy X-rays or other forms of radiation in order to destroy them or prevent them from growing. In external radiation therapy, the radiation comes from a machine outside the patient's body, directed toward the part of the body where the cancer is located.

#3 Chemotherapy
Chemotherapy is a cancer treatment that uses medications to inhibit the proliferation of cancer cells, either by killing the cells or by preventing them from dividing. When chemotherapy is taken orally or injected into a vein or muscle, the drugs enter the bloodstream and can reach cancer cells throughout the body (systemic chemotherapy). When chemotherapy drugs are placed directly into the cerebrospinal fluid, an organ, or a body cavity such as the abdomen, they mainly affect cancer cells in those areas (regional chemotherapy). For bladder cancer, intravesical chemotherapy (put into the bladder through a tube inserted into the urethra) may be used as a localized treatment. The type and stage of the cancer being treated determine how the chemotherapy is administered.
The use of more than one anticancer agent during treatment is known as combination chemotherapy.

#4 Immunotherapy
Immunotherapy is a form of cancer treatment in which the patient's own immune system is used to combat the disease. Substances produced by the body or manufactured in a laboratory are used to strengthen, direct, or restore the body's natural defenses against cancer. This approach to treating cancer is also known as biologic therapy.

There are several distinct forms of immunotherapy, including the following:
• Treatment with a PD-1 or PD-L1 inhibitor: PD-1 (short for programmed death 1) is a protein on the surface of T cells that plays an important role in regulating the body's immune responses. A protein known as PD-L1 is found on many kinds of cancer cells. When PD-1 binds to PD-L1, it prevents the T cell from destroying the cancer cell. PD-1 and PD-L1 inhibitors are compounds that prevent the PD-1 and PD-L1 proteins from binding to one another, allowing T cells to eliminate cancer cells.
• PD-1 inhibitors include pembrolizumab and nivolumab.
• PD-L1 inhibitors include atezolizumab, avelumab, and durvalumab.
• BCG, also known as bacillus Calmette-Guérin, is an immunotherapy that is administered intravesically and is used to treat bladder cancer. The BCG is administered as a fluid inserted directly into the bladder through a catheter (thin tube).

Treatment for stages
Let's have a look at the treatment options available for the different stages:

Treatment of the initial stage (non-invasive papillary carcinoma and carcinoma in situ)
The following are some potential treatments for cancers at stage 0 (non-invasive papillary carcinoma and carcinoma in situ):
• Transurethral resection with fulguration.
The next thing that could happen is one of the following: • Intravesical chemotherapy is administered immediately following the surgical procedure. • Intravesical chemotherapy is administered immediately following surgical excision, followed by ongoing intravesical BCG or intravesical chemotherapy treatment at regular intervals. • A partial cystectomy • A radical cystectomy • Participation in a clinical trial of a new treatment   Treatment for Bladder Cancer in Stage I The following are some of the potential treatments for stage I bladder cancer: • Resection is performed transurethrally with fulguration. The next thing that could happen is one of the following: • Intravesical chemotherapy is administered immediately following the surgical procedure. • Intravesical chemotherapy is administered immediately following surgical excision, followed by ongoing intravesical BCG or intravesical chemotherapy treatment at regular intervals.   • A partial cystectomy • A radical cystectomy • Participation in a clinical trial of a new treatment   Treatment Options for Bladder Cancer in Stages II and III The following are some of the potential treatments for bladder cancer in stages II and III: • A radical cystectomy. • Treatment with combination chemotherapy followed by a radical cystectomy. It is possible to make a urinary diversion. • Surgical removal of the entire bladder, then either chemotherapy or immunotherapy (nivolumab). • Radiation therapy is delivered from the outside, with or without chemotherapy. • The patient may get chemotherapy after a partial cystectomy. • Resection performed transurethrally with fulguration. • A clinical trial testing the effectiveness of a novel treatment.   Treatment for Stage IV Bladder Cancer The following treatments may be used for patients with stage IV bladder cancer whose cancer has not progressed to any other areas of the body: • A radical cystectomy, either by itself or in combination with chemotherapy.
• Radiation therapy is delivered from the outside, with or without chemotherapy. • Urinary diversion or cystectomy as palliative therapy to reduce symptoms and enhance the quality of life.   The following methods may be utilized in the treatment of stage IV bladder cancer that has progressed to other areas of the body, such as the lung, the bone, or the liver: • Chemotherapy administered with or without local treatment (surgery or radiation therapy). • Immunotherapy (immune checkpoint inhibitor therapy). • Radiation treatment delivered from outside the body is used in palliative care to alleviate symptoms and improve quality of life. • Urinary diversion or cystectomy as palliative therapy to reduce symptoms and enhance the quality of life. • A phase III clinical trial investigating novel cancer treatments.   Can you survive without a bladder? If your bladder has been removed in surgery, you will need to become accustomed to a different way of passing urine. A cystectomy is a change that will last for the rest of your life. It’s possible that you’ll need to modify how you shower as well as how you travel. It can affect your body image, and you may be concerned about the effect it will have on your relationships and your sex life.   You should be able to do practically everything you did in the past if you give yourself enough time. You are able to return to work, continue your exercise routine, and even go swimming, even though you are now required to use a urostomy bag to collect urine. It’s possible that nobody will even notice it unless you bring it to their attention.   Prevention of bladder cancer and risk factors for bladder cancer There is currently no foolproof method available for preventing bladder cancer.
Some risk factors, including gender, age, race, and family history, cannot be controlled by the individual. However, there are actions you can take that may help reduce the risk. Let’s have a look at a few of them:   Don’t smoke It is believed that smoking is responsible for around half of all cases of bladder cancer. (This covers cigarettes, cigars, and pipes, as well as any other form of smoked tobacco.)   Reduce your exposure to specific chemicals at your place of employment Workers exposed to certain organic compounds on the job have a significantly increased chance of developing bladder cancer. Industries dealing with rubber, leather, printing materials, textiles, and paint are some examples of places in the workplace where these chemicals are utilized often. If you work in an environment where you could be exposed to chemicals like these, it is imperative that you observe proper workplace safety procedures.   Because of the potential presence of risk-increasing compounds in certain hair dyes, it is essential for hairdressers and barbers who are routinely exposed to these products to ensure that they are used in a safe manner. (The majority of research has not indicated that using hair dyes for personal grooming increases the risk of bladder cancer.)   According to the findings of certain studies, workers who are subjected to diesel fumes in the workplace may be at an increased risk of developing bladder cancer, in addition to the risk of developing other types of cancer; hence, reducing this exposure may be beneficial.   Be sure to take in lots of fluids There is some evidence to suggest that a person’s chance of developing bladder cancer may be reduced if they consume large amounts of fluids, most notably water.   
Consume a diet rich in fresh fruits and vegetables There has been some research that has suggested that consuming a diet that is heavy in fruits and vegetables may help protect against bladder cancer; however, other studies have not found this to be the case. Maintaining a nutritious diet has several benefits, one of which is a reduction in the risk of developing other forms of cancer.   Bladder cancer survival rate You can get an idea of what percentage of patients with the same type and stage of cancer are still living after a given amount of time (typically five years) has passed since they were diagnosed by looking at survival statistics. They won’t be able to tell you how long you have left to live, but they can help you get a better idea of the likelihood that your therapy will be successful.   Wrapping Up Cancer is a deadly disease, and bladder cancer can be very tough to deal with. Cancer of this type generally strikes people in their later years. In most cases, a diagnosis is made at an early stage, when the condition is still treatable. Because there is a high chance that it will happen again, subsequent testing is often advised. The presence of blood in the urine is the most typical symptom. Surgical procedures, biological therapies, and chemotherapy are all forms of treatment. In this article, we have discussed the A to Z of bladder cancer. Whether you know someone who is suffering from bladder cancer or you are simply reading for your own information, everything related to bladder cancer is covered here!
Natural Flood Management Mark Hamblin/2020VISION Whilst flooding is a natural process, climatic change is causing storm events to become more intense with the total rainfall in the UK set to increase during winter. In order to better manage flood risk and reduce the devastation flood events can cause to communities, homes and infrastructure, it is important to recognise the factors that contribute to increased flood risk in towns and villages. Poor land management practices can exacerbate flooding. When soils are compacted, they lose their capacity to store water and when rivers are straightened and dredged, they move water more quickly through a catchment and downstream towards towns and villages. When natural floodplains have been built or developed on, water has nowhere else to go and causes properties to flood. Wetlands and natural floodplains are important areas for water storage but also provide habitats for an array of plants and wildlife. Natural Flood Management Natural Flood Management (NFM) is an approach that seeks to store flood water and ‘slow the flow’ of water reaching the river channel by altering or restoring the landscape. NFM techniques provide enhanced water storage during a storm event, through using natural materials to engineer structures that temporarily hold rainwater in the headwaters. The creation of bunds and ponds with extra capacity provides temporary storage and helps to reduce the speed and the peak flow going downstream. After the storm event, water can then drain away slowly. Puckham Earth bunds store water following heavy rainfall Another method to increase water storage is creating a series of large woody debris dams or leaky dams that are installed across the water course or pathways of runoff. These structures are an effective method of sustainable flood alleviation as they help attenuate flood flows by slowing and deflecting flow out of the channel and onto the floodplain.
Woody material in watercourses increases flow variability, with faster flowing water cleaning gravels which is important for spawning fish. With careful design and planning, soft engineered structures can have minimal impact on current land use whilst offering a sustainable, cost effective solution to mitigate flooding in addition to storing carbon and providing habitat opportunities for wildlife. Leaky dam Leaky dams help attenuate flood flows by slowing and deflecting flow out of the channel Increasing Floodplain Roughness A significant way to delay the progression of flood flows through a catchment is the presence of riparian and floodplain woodland. Floodplain woodland not only provides storage and helps alleviate flooding but it also benefits water quality and freshwater habitats that are important for nature conservation, fisheries and landscape restoration. Planting floodplain woodland will help increase the resilience of communities that are threatened by flooding, especially where it is not cost-effective to construct engineered defences. Whilst most of Britain’s floodplain woodland has been lost due to land reclamation works and past river engineering, many organisations are working to promote its restoration. In order to implement a whole-catchment approach to sustainable flood management, it is important to recognise that woodland must be integrated with agriculture and other land uses within a landscape. Wet woodland Trees and deadwood in the floodplain help slow the peak flow of a river, reducing the risk of flooding  The presence of woody debris dams within river channels along with the presence of trees, shrubs and deadwood on the floodplain results in an overall smaller downstream flood event. 
The combination of these features increases hydraulic roughness (the resistance of a bed or floodplain to the flow of water) which can substantially alleviate flooding through slowing down flood flows by enhancing out of bank flows and reducing flood velocities as well as increasing water storage on the floodplain. The hydraulic impact of large woody debris within a channel and on floodplain includes: • Increased water storage, delaying and reducing flood flows • Reduction in water velocity along water course • Promotion of habitat creation with increased variability of flows • Increased water depth upstream of dams promoting overbank flows onto floodplain • Scour and trap mobilised silt and sediment Deadwood on floodplain Deadwood on the floodplain results in an overall smaller downstream flood event Tree management and planting Planting new trees and the improved management of existing riparian and floodplain woodlands can have a beneficial effect on flood alleviation. Woods and trees on floodplains can help mitigate the effects of large floods by enhancing flood storage through absorbing and delaying the release of flood flows. Where suitable in a catchment, planting a small woodland or hedgerow across steep slopes or along streams and rivers, can significantly reduce the volume of water reaching the river channel. Carefully sited woods and trees can intercept water flowing off the hillside, helping to phase the release of water during peak flows by slowing the speed at which water enters a river and moves downstream. Appropriately sited trees and woodland along sediment and run-off pathways can interrupt flow routes and increase soil infiltration, reducing flooding and the amount of sediment reaching a river channel. Trees and hedgerows can act as a buffer that reduces the amount of pollutants, nutrients and sediment reaching a watercourse. Although riparian woodland has declined in many areas, native trees and woods have a key role to play in water management. 
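The effect of hydraulic roughness described above can be illustrated with Manning's equation, a standard open-channel flow formula: v = (1/n) · R^(2/3) · S^(1/2), where n is the roughness coefficient, R the hydraulic radius, and S the slope. The sketch below uses illustrative, assumed values only (roughly, a clean straightened channel versus one with heavy woody debris and vegetated banks), not measurements from any site mentioned in this article:

```python
def manning_velocity(n, hydraulic_radius_m, slope):
    """Mean flow velocity (m/s) from Manning's equation:
    v = (1/n) * R**(2/3) * S**(1/2)."""
    return (1.0 / n) * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

# Same channel geometry (R = 1 m, gradient 0.002), two roughness values:
smooth = manning_velocity(0.030, 1.0, 0.002)  # straightened, dredged channel
rough = manning_velocity(0.100, 1.0, 0.002)   # woody debris, vegetated banks
print(round(smooth, 2), round(rough, 2))  # the rougher channel flows ~3x slower
```

Because velocity scales with 1/n, roughly tripling the roughness coefficient roughly triples the travel time of the same flood flow, which is exactly the "slow the flow" effect NFM structures aim for.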
With a changing climate it is important to put the right trees in the right places and think about how new trees can connect to the wider landscape. In March 2020 we planted over 1100 trees as part of a bigger planting scheme and natural flood management project to link to a protected area of woodland (SSSI) in Puckham and to reduce flooding in Andoversford. Improved management of existing riparian woodlands and new planting will also have other beneficial effects, including: • Provide habitats and food sources for wildlife leading to increased biodiversity and the restoration of native ecosystems • Cleaner air as trees capture atmospheric pollutants at source • Improved water quality by preventing pollutants and nutrients from reaching a watercourse • Reduced soil erosion as tree roots help bind soil and protect river banks • Better connected landscape • Cost-effective method of flood control for small, rural communities where large, hard-engineered projects are too expensive Tree planting Targeted woodland can delay flood flows, reducing downstream flood risk Restoring rivers to their natural course Rivers naturally meander along their floodplains but unfortunately the physical structure of many river channels has been modified and straightened to benefit society, for example through generating power and improving navigation. However, these changes that prevent river systems from behaving naturally can accelerate flow and increase flooding downstream. Furthermore, man-made changes to a natural watercourse often lead to long-term ecological damage to a river system, its hydrology and the wildlife associated with riparian habitat. Work to restore rivers to their natural floodplains and wetlands has a natural slowing and filtering effect on water, whilst also depositing gravels in areas that won’t cause damage, therefore reducing the need for and cost of river maintenance.
A river that naturally meanders: • Improves water quality • Can reduce flood risk • Has better exchange with groundwater • Slows flood flows • Delays flood peaks, which gives people more time to prepare Meandering river A natural meandering river
PT - JOURNAL ARTICLE AU - Wilson, R. S. AU - Scherr, P. A. AU - Schneider, J. A. AU - Tang, Y. AU - Bennett, D. A. TI - Relation of cognitive activity to risk of developing Alzheimer disease AID - 10.1212/01.wnl.0000271087.67782.cb DP - 2007 Nov 13 TA - Neurology PG - 1911--1920 VI - 69 IP - 20 4099 - http://n.neurology.org/content/69/20/1911.short 4100 - http://n.neurology.org/content/69/20/1911.full SO - Neurology2007 Nov 13; 69 AB - Background: Frequent cognitive activity in old age has been associated with reduced risk of Alzheimer disease (AD), but the basis of the association is uncertain. Methods: More than 700 old people underwent annual clinical evaluations for up to 5 years. At baseline, they rated current and past frequency of cognitive activity with the current activity measure administered annually thereafter. Those who died underwent a uniform postmortem examination of the brain. Amyloid burden, density of tangles, and presence of Lewy bodies were assessed in eight brain regions and the number of chronic cerebral infarctions was noted. Results: During follow-up, 90 people developed AD. More frequent participation in cognitive activity was associated with reduced incidence of AD (HR = 0.58; 95% CI: 0.44, 0.77); a cognitively inactive person (score = 2.2, 10th percentile) was 2.6 times more likely to develop AD than a cognitively active person (score = 4.0, 90th percentile). The association remained after controlling for past cognitive activity, lifespan socioeconomic status, current social and physical activity, and low baseline cognitive function. Frequent cognitive activity was also associated with reduced incidence of mild cognitive impairment and less rapid decline in cognitive function. Among 102 persons who died and had a brain autopsy, neither global nor regionally specific measures of neuropathology were related to level of cognitive activity before the study, at study onset, or during the course of the study. 
Conclusion: Level of cognitively stimulating activity in old age is related to risk of developing dementia. GLOSSARY: AD = Alzheimer disease; MCI = mild cognitive impairment.
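The abstract's "2.6 times more likely" figure follows directly from the reported hazard ratio: with HR = 0.58 per point of the cognitive-activity score, the relative hazard between the 10th- and 90th-percentile scores is 0.58 raised to the score difference. A quick sketch (the small discrepancy from the paper's 2.6 presumably comes from using the rounded HR):

```python
# Per-point hazard ratio and percentile scores, as reported in the abstract
hr_per_point = 0.58
inactive_score = 2.2   # 10th percentile
active_score = 4.0     # 90th percentile

# Relative hazard of the inactive vs. the active person: HR^(score difference)
relative_risk = hr_per_point ** (inactive_score - active_score)
print(round(relative_risk, 2))  # ~2.67, matching the abstract's "2.6 times"
```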
/code/trunk/pcre_printint.src (Revision 87, Sat Feb 24 21:41:21 2007 UTC, by nigel. File MIME type: application/x-wais-source. File size: 12832 byte(s). Load pcre-6.5 into code/trunk.) 1 /************************************************* 2 * Perl-Compatible Regular Expressions * 3 *************************************************/ 4 5 /* PCRE is a library of functions to support regular expressions whose syntax 6 and semantics are as close as possible to those of the Perl 5 language. 7 8 Written by Philip Hazel 9 Copyright (c) 1997-2005 University of Cambridge 10 11 ----------------------------------------------------------------------------- 12 Redistribution and use in source and binary forms, with or without 13 modification, are permitted provided that the following conditions are met: 14 15 * Redistributions of source code must retain the above copyright notice, 16 this list of conditions and the following disclaimer. 17 18 * Redistributions in binary form must reproduce the above copyright 19 notice, this list of conditions and the following disclaimer in the 20 documentation and/or other materials provided with the distribution. 21 22 * Neither the name of the University of Cambridge nor the names of its 23 contributors may be used to endorse or promote products derived from 24 this software without specific prior written permission. 25 26 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 27 AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 28 IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE 29 ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE 30 LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR 31 CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF 32 SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS 33 INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN 34 CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) 35 ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 36 POSSIBILITY OF SUCH DAMAGE. 37 ----------------------------------------------------------------------------- 38 */ 39 40 41 /* This module contains a PCRE private debugging function for printing out the 42 internal form of a compiled regular expression, along with some supporting 43 local functions. This source file is used in two places: 44 45 (1) It is #included by pcre_compile.c when it is compiled in debugging mode 46 (DEBUG defined in pcre_internal.h). It is not included in production compiles. 47 48 (2) It is always #included by pcretest.c, which can be asked to print out a 49 compiled regex for debugging purposes. */ 50 51 52 static const char *OP_names[] = { OP_NAME_LIST }; 53 54 55 /************************************************* 56 * Print single- or multi-byte character * 57 *************************************************/ 58 59 static int 60 print_char(FILE *f, uschar *ptr, BOOL utf8) 61 { 62 int c = *ptr; 63 64 if (!utf8 || (c & 0xc0) != 0xc0) 65 { 66 if (isprint(c)) fprintf(f, "%c", c); else fprintf(f, "\\x%02x", c); 67 return 0; 68 } 69 else 70 { 71 int i; 72 int a = _pcre_utf8_table4[c & 0x3f]; /* Number of additional bytes */ 73 int s = 6*a; 74 c = (c & _pcre_utf8_table3[a]) << s; 75 for (i = 1; i <= a; i++) 76 { 77 /* This is a check for malformed UTF-8; it should only occur if the sanity 78 check has been turned off. Rather than swallow random bytes, just stop if 79 we hit a bad one. Print it with \X instead of \x as an indication. 
*/ 80 81 if ((ptr[i] & 0xc0) != 0x80) 82 { 83 fprintf(f, "\\X{%x}", c); 84 return i - 1; 85 } 86 87 /* The byte is OK */ 88 89 s -= 6; 90 c |= (ptr[i] & 0x3f) << s; 91 } 92 if (c < 128) fprintf(f, "\\x%02x", c); else fprintf(f, "\\x{%x}", c); 93 return a; 94 } 95 } 96 97 98 99 /************************************************* 100 * Find Unicode property name * 101 *************************************************/ 102 103 static const char * 104 get_ucpname(int ptype, int pvalue) 105 { 106 #ifdef SUPPORT_UCP 107 int i; 108 for (i = _pcre_utt_size; i >= 0; i--) 109 { 110 if (ptype == _pcre_utt[i].type && pvalue == _pcre_utt[i].value) break; 111 } 112 return (i >= 0)? _pcre_utt[i].name : "??"; 113 #else 114 ptype = ptype; /* Avoid compiler warning */ 115 pvalue = pvalue; 116 return "??"; 117 #endif 118 } 119 120 121 122 /************************************************* 123 * Print compiled regex * 124 *************************************************/ 125 126 /* Make this function work for a regex with integers either byte order. 127 However, we assume that what we are passed is a compiled regex. 
*/ 128 129 static void 130 pcre_printint(pcre *external_re, FILE *f) 131 { 132 real_pcre *re = (real_pcre *)external_re; 133 uschar *codestart, *code; 134 BOOL utf8; 135 136 unsigned int options = re->options; 137 int offset = re->name_table_offset; 138 int count = re->name_count; 139 int size = re->name_entry_size; 140 141 if (re->magic_number != MAGIC_NUMBER) 142 { 143 offset = ((offset << 8) & 0xff00) | ((offset >> 8) & 0xff); 144 count = ((count << 8) & 0xff00) | ((count >> 8) & 0xff); 145 size = ((size << 8) & 0xff00) | ((size >> 8) & 0xff); 146 options = ((options << 24) & 0xff000000) | 147 ((options << 8) & 0x00ff0000) | 148 ((options >> 8) & 0x0000ff00) | 149 ((options >> 24) & 0x000000ff); 150 } 151 152 code = codestart = (uschar *)re + offset + count * size; 153 utf8 = (options & PCRE_UTF8) != 0; 154 155 for(;;) 156 { 157 uschar *ccode; 158 int c; 159 int extra = 0; 160 161 fprintf(f, "%3d ", (int)(code - codestart)); 162 163 if (*code >= OP_BRA) 164 { 165 if (*code - OP_BRA > EXTRACT_BASIC_MAX) 166 fprintf(f, "%3d Bra extra\n", GET(code, 1)); 167 else 168 fprintf(f, "%3d Bra %d\n", GET(code, 1), *code - OP_BRA); 169 code += _pcre_OP_lengths[OP_BRA]; 170 continue; 171 } 172 173 switch(*code) 174 { 175 case OP_END: 176 fprintf(f, " %s\n", OP_names[*code]); 177 fprintf(f, "------------------------------------------------------------------\n"); 178 return; 179 180 case OP_OPT: 181 fprintf(f, " %.2x %s", code[1], OP_names[*code]); 182 break; 183 184 case OP_CHAR: 185 { 186 fprintf(f, " "); 187 do 188 { 189 code++; 190 code += 1 + print_char(f, code, utf8); 191 } 192 while (*code == OP_CHAR); 193 fprintf(f, "\n"); 194 continue; 195 } 196 break; 197 198 case OP_CHARNC: 199 { 200 fprintf(f, " NC "); 201 do 202 { 203 code++; 204 code += 1 + print_char(f, code, utf8); 205 } 206 while (*code == OP_CHARNC); 207 fprintf(f, "\n"); 208 continue; 209 } 210 break; 211 212 case OP_KETRMAX: 213 case OP_KETRMIN: 214 case OP_ALT: 215 case OP_KET: 216 case OP_ASSERT: 217 case 
OP_ASSERT_NOT: 218 case OP_ASSERTBACK: 219 case OP_ASSERTBACK_NOT: 220 case OP_ONCE: 221 case OP_COND: 222 case OP_REVERSE: 223 fprintf(f, "%3d %s", GET(code, 1), OP_names[*code]); 224 break; 225 226 case OP_BRANUMBER: 227 printf("%3d %s", GET2(code, 1), OP_names[*code]); 228 break; 229 230 case OP_CREF: 231 if (GET2(code, 1) == CREF_RECURSE) 232 fprintf(f, " Cond recurse"); 233 else 234 fprintf(f, "%3d %s", GET2(code,1), OP_names[*code]); 235 break; 236 237 case OP_STAR: 238 case OP_MINSTAR: 239 case OP_PLUS: 240 case OP_MINPLUS: 241 case OP_QUERY: 242 case OP_MINQUERY: 243 case OP_TYPESTAR: 244 case OP_TYPEMINSTAR: 245 case OP_TYPEPLUS: 246 case OP_TYPEMINPLUS: 247 case OP_TYPEQUERY: 248 case OP_TYPEMINQUERY: 249 fprintf(f, " "); 250 if (*code >= OP_TYPESTAR) 251 { 252 fprintf(f, "%s", OP_names[code[1]]); 253 if (code[1] == OP_PROP || code[1] == OP_NOTPROP) 254 { 255 fprintf(f, " %s ", get_ucpname(code[2], code[3])); 256 extra = 2; 257 } 258 } 259 else extra = print_char(f, code+1, utf8); 260 fprintf(f, "%s", OP_names[*code]); 261 break; 262 263 case OP_EXACT: 264 case OP_UPTO: 265 case OP_MINUPTO: 266 fprintf(f, " "); 267 extra = print_char(f, code+3, utf8); 268 fprintf(f, "{"); 269 if (*code != OP_EXACT) fprintf(f, ","); 270 fprintf(f, "%d}", GET2(code,1)); 271 if (*code == OP_MINUPTO) fprintf(f, "?"); 272 break; 273 274 case OP_TYPEEXACT: 275 case OP_TYPEUPTO: 276 case OP_TYPEMINUPTO: 277 fprintf(f, " %s", OP_names[code[3]]); 278 if (code[3] == OP_PROP || code[3] == OP_NOTPROP) 279 { 280 fprintf(f, " %s ", get_ucpname(code[4], code[5])); 281 extra = 2; 282 } 283 fprintf(f, "{"); 284 if (*code != OP_TYPEEXACT) fprintf(f, "0,"); 285 fprintf(f, "%d}", GET2(code,1)); 286 if (*code == OP_TYPEMINUPTO) fprintf(f, "?"); 287 break; 288 289 case OP_NOT: 290 if (isprint(c = code[1])) fprintf(f, " [^%c]", c); 291 else fprintf(f, " [^\\x%02x]", c); 292 break; 293 294 case OP_NOTSTAR: 295 case OP_NOTMINSTAR: 296 case OP_NOTPLUS: 297 case OP_NOTMINPLUS: 298 case OP_NOTQUERY: 
299 case OP_NOTMINQUERY: 300 if (isprint(c = code[1])) fprintf(f, " [^%c]", c); 301 else fprintf(f, " [^\\x%02x]", c); 302 fprintf(f, "%s", OP_names[*code]); 303 break; 304 305 case OP_NOTEXACT: 306 case OP_NOTUPTO: 307 case OP_NOTMINUPTO: 308 if (isprint(c = code[3])) fprintf(f, " [^%c]{", c); 309 else fprintf(f, " [^\\x%02x]{", c); 310 if (*code != OP_NOTEXACT) fprintf(f, "0,"); 311 fprintf(f, "%d}", GET2(code,1)); 312 if (*code == OP_NOTMINUPTO) fprintf(f, "?"); 313 break; 314 315 case OP_RECURSE: 316 fprintf(f, "%3d %s", GET(code, 1), OP_names[*code]); 317 break; 318 319 case OP_REF: 320 fprintf(f, " \\%d", GET2(code,1)); 321 ccode = code + _pcre_OP_lengths[*code]; 322 goto CLASS_REF_REPEAT; 323 324 case OP_CALLOUT: 325 fprintf(f, " %s %d %d %d", OP_names[*code], code[1], GET(code,2), 326 GET(code, 2 + LINK_SIZE)); 327 break; 328 329 case OP_PROP: 330 case OP_NOTPROP: 331 fprintf(f, " %s %s", OP_names[*code], get_ucpname(code[1], code[2])); 332 break; 333 334 /* OP_XCLASS can only occur in UTF-8 mode. However, there's no harm in 335 having this code always here, and it makes it less messy without all those 336 #ifdefs. 
*/ 337 338 case OP_CLASS: 339 case OP_NCLASS: 340 case OP_XCLASS: 341 { 342 int i, min, max; 343 BOOL printmap; 344 345 fprintf(f, " ["); 346 347 if (*code == OP_XCLASS) 348 { 349 extra = GET(code, 1); 350 ccode = code + LINK_SIZE + 1; 351 printmap = (*ccode & XCL_MAP) != 0; 352 if ((*ccode++ & XCL_NOT) != 0) fprintf(f, "^"); 353 } 354 else 355 { 356 printmap = TRUE; 357 ccode = code + 1; 358 } 359 360 /* Print a bit map */ 361 362 if (printmap) 363 { 364 for (i = 0; i < 256; i++) 365 { 366 if ((ccode[i/8] & (1 << (i&7))) != 0) 367 { 368 int j; 369 for (j = i+1; j < 256; j++) 370 if ((ccode[j/8] & (1 << (j&7))) == 0) break; 371 if (i == '-' || i == ']') fprintf(f, "\\"); 372 if (isprint(i)) fprintf(f, "%c", i); else fprintf(f, "\\x%02x", i); 373 if (--j > i) 374 { 375 if (j != i + 1) fprintf(f, "-"); 376 if (j == '-' || j == ']') fprintf(f, "\\"); 377 if (isprint(j)) fprintf(f, "%c", j); else fprintf(f, "\\x%02x", j); 378 } 379 i = j; 380 } 381 } 382 ccode += 32; 383 } 384 385 /* For an XCLASS there is always some additional data */ 386 387 if (*code == OP_XCLASS) 388 { 389 int ch; 390 while ((ch = *ccode++) != XCL_END) 391 { 392 if (ch == XCL_PROP) 393 { 394 int ptype = *ccode++; 395 int pvalue = *ccode++; 396 fprintf(f, "\\p{%s}", get_ucpname(ptype, pvalue)); 397 } 398 else if (ch == XCL_NOTPROP) 399 { 400 int ptype = *ccode++; 401 int pvalue = *ccode++; 402 fprintf(f, "\\P{%s}", get_ucpname(ptype, pvalue)); 403 } 404 else 405 { 406 ccode += 1 + print_char(f, ccode, TRUE); 407 if (ch == XCL_RANGE) 408 { 409 fprintf(f, "-"); 410 ccode += 1 + print_char(f, ccode, TRUE); 411 } 412 } 413 } 414 } 415 416 /* Indicate a non-UTF8 class which was created by negation */ 417 418 fprintf(f, "]%s", (*code == OP_NCLASS)? 
" (neg)" : ""); 419 420 /* Handle repeats after a class or a back reference */ 421 422 CLASS_REF_REPEAT: 423 switch(*ccode) 424 { 425 case OP_CRSTAR: 426 case OP_CRMINSTAR: 427 case OP_CRPLUS: 428 case OP_CRMINPLUS: 429 case OP_CRQUERY: 430 case OP_CRMINQUERY: 431 fprintf(f, "%s", OP_names[*ccode]); 432 extra += _pcre_OP_lengths[*ccode]; 433 break; 434 435 case OP_CRRANGE: 436 case OP_CRMINRANGE: 437 min = GET2(ccode,1); 438 max = GET2(ccode,3); 439 if (max == 0) fprintf(f, "{%d,}", min); 440 else fprintf(f, "{%d,%d}", min, max); 441 if (*ccode == OP_CRMINRANGE) fprintf(f, "?"); 442 extra += _pcre_OP_lengths[*ccode]; 443 break; 444 445 /* Do nothing if it's not a repeat; this code stops picky compilers 446 warning about the lack of a default code path. */ 447 448 default: 449 break; 450 } 451 } 452 break; 453 454 /* Anything else is just an item with no data*/ 455 456 default: 457 fprintf(f, " %s", OP_names[*code]); 458 break; 459 } 460 461 code += _pcre_OP_lengths[*code] + extra; 462 fprintf(f, "\n"); 463 } 464 } 465 466 /* End of pcre_printint.src */   ViewVC Help Powered by ViewVC 1.1.5  
How To Hold Planks Longer by admin Introduction How To Hold Planks Longer: Holding a plank position is a deceptively simple yet incredibly effective exercise for building core strength and endurance. But for many, the challenge lies in maintaining that position for an extended period. If you’ve ever struggled to hold a plank for more than a minute, you’re not alone. The good news is that there are several strategies and techniques you can employ to help you increase your plank endurance and hold this beneficial exercise longer. We’ll explore the secrets to pushing your plank limits. We’ll delve into the proper form, breathing techniques, and the role of mental focus. You’ll discover progressive exercises and variations to gradually enhance your plank prowess. Whether your goal is to improve your core strength, sculpt your abs, or simply extend your plank duration as a personal challenge, these tips and methods will be your roadmap to success. Are you ready to transform your plank game and experience the satisfaction of holding this exercise longer and stronger? We’ll break down each element of the plank, starting with the foundation: proper form. Correct alignment not only ensures you target the right muscles but also prevents strain and injury. You’ll learn the subtle adjustments that can make a big difference. Breathing is another crucial factor. Many people underestimate the power of rhythmic breathing in maintaining a plank. We’ll show you how to synchronize your breath with your plank, promoting endurance and reducing stress on your body. But the physical aspects of planking are only part of the equation. Your mental state plays a significant role in your success. We’ll explore techniques to boost your mental resilience, helping you push through the discomfort and embrace the challenge. We’ll also introduce progressive plank variations that will keep your workouts engaging while building core strength.
By the end, you’ll have a comprehensive toolkit to hold planks longer, elevate your fitness level, and achieve your personal goals. So, let’s embark on the path to plank mastery together.

How can I increase my plank hold time?

To hold your plank for as long as possible, Ligler says to “engage the quads and glutes, rotate those elbow creases forward to strengthen your posture, and finally find a rhythm to your breathing.”

Proper Form: Ensure your body is in a straight line from head to heels. Engage your quads and glutes to maintain a strong posture. Rotate your elbow creases forward, which helps to activate the chest and prevent the shoulders from rounding.

Steady Breathing: Establish a consistent breathing rhythm. Inhale and exhale deeply and steadily to maintain oxygen flow to your muscles. This will help reduce muscle fatigue and improve endurance.

Progressive Training: Gradually increase the duration of your plank holds. Start with a time that challenges you but is sustainable, and then incrementally add a few seconds or more as your strength improves.

Mental Focus: Plank holding can be mentally challenging. Maintain your concentration and stay focused on your goal. Visualization can be helpful – picture yourself succeeding and surpassing your previous records.

Variety: Incorporate different plank variations into your routine. Side planks, forearm planks, or plank leg lifts add diversity to your workouts and challenge your core from various angles.

Is 1 minute plank a day enough?

Try performing the plank for a minimum of one minute at a time. Start by doing 1 plank a day and slowly work up to 3 to 10 a day to reap the maximum benefits. Then, also try side planks, which can help improve your flexibility.

Progression: While a 1-minute plank is a good starting point, your muscles will adapt over time, and the exercise will become less challenging. To continue building core strength and endurance, it’s advisable to gradually increase the duration of your planks.
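The progressive-training advice above can be sketched as a simple schedule generator. This is an illustrative sketch only, not part of the original article; the 20-second starting hold, 5-second increment, and 120-second cap are assumed example values.

```python
# Illustrative sketch: a plank progression schedule.
# The start time, increment, and cap below are assumed example values,
# not recommendations from the article.

def plank_schedule(start_seconds=20, increment=5, sessions=12, cap=120):
    """Return per-session plank hold targets (seconds), capped at `cap`."""
    return [min(start_seconds + increment * i, cap) for i in range(sessions)]

print(plank_schedule())
# [20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75]
```

Adjust the parameters to your own level; the point is simply to add a small, sustainable amount of time each session rather than jumping straight to long holds.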
Variety: Adding variety to your routine can target different muscle groups and prevent plateaus. Incorporating side planks, forearm planks, or plank variations can help you develop a well-rounded core and improve overall stability and flexibility.

Frequency: While the duration of a plank is important, the frequency of your planking routine also matters. Doing a single 1-minute plank a day may not provide enough stimulus to see significant improvements. It’s better to do multiple planks throughout the day or incorporate them into a well-rounded fitness routine.

Individual Goals: The effectiveness of a 1-minute plank varies from person to person, depending on their fitness goals. If your goal is core strength, you’ll likely need to challenge yourself with longer planks or more repetitions.

Do planks burn belly fat?

While planks are effective for strengthening the core muscles, spot reduction of fat in a specific area, such as the belly, is not possible. To reduce overall body fat, including belly fat, a combination of regular exercise, a balanced diet, and a calorie deficit is necessary.

Spot Reduction Myth: Spot reduction, the idea that you can lose fat in a specific area of your body through targeted exercises, is a common fitness myth. In reality, your body burns fat uniformly from all over, and genetics play a significant role in determining where you lose fat first.

Body Fat Reduction: To reduce belly fat or fat in any area of your body, you need to focus on overall body fat reduction. This is achieved through a combination of cardiovascular exercise, strength training, and maintaining a calorie deficit. Cardiovascular exercises like running, cycling, or swimming help burn calories, while strength training (including planks) helps build lean muscle mass that can boost your metabolism.

Diet and Nutrition: A balanced diet and proper nutrition play a crucial role in losing body fat. Consuming fewer calories than you burn creates a calorie deficit, leading to fat loss.
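The calorie-deficit arithmetic just described can be made concrete with a short sketch. This is illustrative only and not from the article; it uses the commonly cited rule of thumb of roughly 7,700 kcal per kilogram of body fat, and real-world results vary with metabolism and activity level.

```python
# Illustrative sketch of calorie-deficit arithmetic.
# KCAL_PER_KG_FAT is a widely used rule-of-thumb figure, not an exact
# physiological constant; individual results vary.

KCAL_PER_KG_FAT = 7700

def weeks_to_lose(kg_fat, daily_deficit_kcal):
    """Estimate weeks needed to lose `kg_fat` at a steady daily deficit."""
    return (kg_fat * KCAL_PER_KG_FAT) / (daily_deficit_kcal * 7)

# A steady 500 kcal/day deficit to lose 2 kg of fat:
print(round(weeks_to_lose(2, 500), 1))  # 4.4
```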
Focus on a diet rich in whole foods, lean proteins, healthy fats, and complex carbohydrates. Achieving and maintaining a healthy body weight and reducing body fat requires consistency in both exercise and nutrition. It’s a gradual process that takes time.

Why is plank so hard?

The fact that the plank recruits so many muscle groups at once is what makes it one of those exercises that is much harder to perform than it looks. It’s an effective way to tone your entire core (including your shoulders and glutes!), which also helps reduce back pain.

Engages Multiple Muscle Groups: Planks engage a wide range of muscle groups simultaneously, including the core, shoulders, back, and glutes. This comprehensive engagement makes it a full-body exercise, which can be physically demanding.

Isometric Contraction: Planks require isometric contraction, where muscles work to maintain a position without movement. This continuous tension can lead to muscle fatigue, making it challenging to hold the position.

Core Strength: Planks specifically target the core muscles, including the rectus abdominis, obliques, and transverse abdominis. Building and maintaining core strength is essential for good posture, stability, and reducing back pain. Planks demand strong core stability, which is crucial for supporting the spine and preventing injuries. This element of stability adds to the difficulty, especially for those with weaker core muscles.

Endurance: Holding a plank requires endurance, as you must maintain the position for an extended period. Endurance exercises can be mentally and physically taxing.

What is the longest plank?

A man from the Czech Republic has performed the longest abdominal plank ever recorded, as confirmed by the Guinness Book of World Records. Josef Šálek, known to his friends as Joska, undertook the physically grueling challenge on May 20, 2023, maintaining a strict plank position for 9 hours, 38 minutes, and 47 seconds.
Breaking Records with Incredible Core Strength: In a testament to the indomitable spirit of human achievement, Josef Šálek of the Czech Republic, affectionately known as Joska, has etched his name into the history books. On the momentous day of May 20, 2023, Joska embarked on a journey that would redefine the limits of physical endurance: to set the record for the longest abdominal plank ever recorded.

A Grueling Challenge of Unwavering Determination: The plank exercise is renowned for its ability to strengthen the core muscles and enhance endurance, but what Joska set out to accomplish was nothing short of extraordinary. For an astonishing 9 hours, 38 minutes, and 47 seconds, he maintained a strict plank position, enduring the physical strain and mental fatigue that accompany such an extraordinary feat.

Guinness World Records Confirmation: Joska’s incredible achievement did not go unnoticed. The Guinness Book of World Records, the ultimate authority on record-breaking achievements, confirmed and recognized his extraordinary accomplishment. It stands as a testament to his exceptional core strength, unwavering determination, and the power of human resilience.

How long should you be able to hold a plank?

Most experts suggest anywhere from 10 up to 30 seconds is plenty. “Focus on doing multiple sets of smaller amounts of time,” says L’Italien. As you progress, you can extend your plank for up to one or even two minutes, but don’t go beyond that.

Beginners: If you’re new to planking, start with around 10-20 seconds. This is long enough to engage your core and get accustomed to the exercise. As you build strength, aim for 30 seconds to one minute per plank.

Advanced: Advanced individuals can work towards planks of one to two minutes. Beyond two minutes, the benefits may plateau, and it can put excessive strain on your shoulders and lower back. It’s more important to focus on proper form and engaging your core muscles during the plank than on the duration.
If you can maintain excellent form for 30 seconds, it’s better than a longer plank with compromised form.

Progression: To continually challenge yourself, consider increasing the number of planks or incorporating plank variations like side planks or forearm planks. Doing multiple sets of planks at this duration can be an effective workout.

How many planks per day?

When it comes to how many planks a day you should do, Doug Sklar, a certified personal trainer, recommends striving to do three sets of up to 60 seconds, so this can be the goal you aim for when you begin your plank adventure. The most important thing in doing planks every day is consistency.

Form Over Duration: Always prioritize proper form over extended duration. Maintaining the correct plank position, with a straight line from head to heels and engaged core muscles, is essential to avoid injury and gain the full benefits.

Warm-Up: Before your daily planks, warm up your muscles with some light cardio or dynamic stretching. A warm-up helps prevent injury and ensures your muscles are ready to work.

Vary Your Routine: To prevent plateaus and keep your routine engaging, consider incorporating different types of planks (side planks, forearm planks, high planks, etc.) or adding challenges like leg lifts or arm reaches.

Rest and Recovery: Allow your body to recover. While daily planks are beneficial, your muscles need time to repair and grow stronger. Consider taking one or two rest days per week or engaging in active recovery exercises.

Listen to Your Body: If you experience pain or discomfort beyond typical muscle fatigue, it’s crucial to listen to your body and adjust your routine. Pushing through pain can lead to injury.

How long is a good plank a day?

As a general guideline, Doug Sklar, a certified personal trainer and founder of PhilanthroPIST in New York City, recommends striving to do three sets of up to 60 seconds. “It’s OK to start with shorter sets and work up to 60 seconds,” he says.
Plus, shorter planks can still give you a solid workout, Sklar says.

Proper Alignment: Ensure that your body is in a straight line from head to heels, and your elbows (in forearm plank) or hands (in high plank) are directly beneath your shoulders. Proper alignment minimizes the risk of injury and maximizes the engagement of your core muscles.

Breathing: Maintain steady and controlled breathing throughout the plank. Inhale and exhale deeply to ensure proper oxygen supply to your muscles. Consistent breathing can help reduce muscle fatigue.

Warm-Up and Cool-Down: Incorporate a short warm-up and cool-down into your routine. Dynamic stretches or light cardio activities can prepare your muscles, and static stretches post-plank can aid in recovery.

Variations: As you progress, experiment with different plank variations to keep your routine engaging. Side planks, forearm planks, or plank leg lifts can challenge your core muscles from various angles.

Listen to Your Body: Pay attention to any discomfort or pain, especially in your lower back or shoulders. Adjust your form or take a break if needed. Pushing through pain can lead to injury.

Conclusion

In the quest to hold planks longer, we’ve uncovered valuable insights and techniques that can elevate your core training. Planks, a deceptively simple yet incredibly effective exercise, demand not only physical strength but also mental fortitude. We’ve learned that the key to success lies in a balance of factors, each playing a crucial role in achieving extended plank duration.

Maintaining proper form is non-negotiable, ensuring a straight line from head to heels, an engaged core, and aligned shoulders or elbows. Breathing rhythmically and deeply supports endurance, reducing muscle fatigue. As you progress, consistency is paramount, and gradual increases in duration are essential. Setting goals and tracking your achievements can keep you motivated and accountable.
But perhaps the most essential takeaway is that the challenge of holding planks longer is not just a physical one; it’s also a mental one. Perseverance, focus, and determination are as vital as core strength. The ability to push through discomfort and self-imposed limits defines your success. So, whether you’re a beginner striving for your first 30-second plank or an advanced enthusiast aiming for multiple minutes, remember that every second counts and brings you closer to your goal. Plank on, and celebrate your progress along the way.
A. Lupus is a chronic disease in which a person's body is attacked by the immune system, which normally fights infections and foreign invaders, such as viruses and bacteria, said Gilkeson, a professor of medicine at the Medical University of South Carolina in Charleston. Lupus can cause a variety of symptoms, including severe fatigue, headaches, painful or swollen joints, fever, swelling in the hands or ankles, a butterfly-shaped rash across the nose and cheeks, sensitivity to light, mouth and nose ulcers, anemia and hair loss.

The treatment options compared in one set of recommendations were: (1) SOC (standard of care); (2) SOC plus methotrexate (MTX); (3) SOC plus leflunomide (LFN); (4) SOC plus belimumab; (5) SOC plus abatacept (ABT); and (6) other options: azathioprine (AZA), mycophenolate mofetil (MMF), cyclosporine A (CsA) or rituximab (RTX) (online supplementary tables S2.1.1, S2.1.4, S2.1.6, S2.1.7, S2.2.11, S2.1.11, S2.1.12, S2.1.14, S2.1.15, S2.1.17, S2.2.1, S2.2.2, S2.2.4, S3.1.1, S3.1.3–S3.1.6, S3.2.1, S3.2.2, S12.2–S12.5, S12.8–S12.10).

Monoclonal antibodies are antibodies produced by a single clone of cells: a type of protein made in the laboratory that can bind to substances in the body, including cancer cells. There are many kinds of monoclonal antibodies. A monoclonal antibody is made so that it binds to only one substance. Monoclonal antibodies are being used to treat some types of cancer. They can be used alone or to carry drugs, toxins, or radioactive substances directly to cancer cells.
Along with nutritional deficiencies, steroid medications can cause significant weight gain and increased cholesterol, blood glucose, and triglycerides, further underscoring the need for patients with SLE who are taking these agents to follow a healthy diet to counter the effects. [6] There are also specific things that individuals with SLE should avoid, including alfalfa sprouts and garlic, which can stimulate an already overactive immune system. [7]

Kidney inflammation in SLE (lupus nephritis) can cause leakage of protein into the urine, fluid retention, high blood pressure, and even kidney failure. This can lead to further fatigue and swelling (edema) of the legs and feet. With kidney failure, machines are needed to cleanse the blood of accumulated waste products in a process called dialysis.

If you have lupus, you may have noticed that certain foods tend to lead to lupus flares. A lupus flare is a period when the symptoms of lupus become more active. Kathleen LaPlant, of Cape Cod, Mass., was diagnosed with systemic lupus several years ago. "I have learned to be careful with foods that seem to trigger lupus symptoms. The biggest trigger for me has been fried foods. I have had to eliminate these from my diet," says LaPlant. It is hard to predict which foods may trigger a lupus flare, but you can start by paying close attention to your diet.
If a particular type of food repeatedly causes problems, try taking it out of your diet and see if it makes a difference. A diet high in folic acid, such as found in leafy green vegetables, fruits, and fortified breads and cereals, or a folic acid supplement, is important if you are taking methotrexate (Rheumatrex). For nausea caused by medications, eat small, frequent meals and foods that are easy to digest. Try dry cereals, breads, and crackers. Also avoid greasy, spicy, and acidic foods.

Many drugs have been known to cause this form of the disease, but several are considered primary culprits. They are mainly anti-inflammatories, anticonvulsants, or drugs used to treat chronic conditions such as heart disease, thyroid disease, hypertension (high blood pressure), and neuropsychiatric disorders. The three drugs mostly to blame for drug-induced lupus are:

Periodic follow-up and laboratory testing, including complete blood counts with differential, creatinine, and urinalyses, are imperative for detecting signs and symptoms of new organ-system involvement and for monitoring response and adverse reactions to therapies. At least quarterly visits are recommended in most cases. [151] Periodic complement levels and dsDNA titers may be used as adjuncts to clinical evaluation for detecting lupus flares.

Drug-induced lupus erythematosus (DIL): Some drugs can cause lupus, resulting in symptoms such as rash, arthritis, hair loss, and fever. “Once medications are discontinued, the symptoms go away,” says Roberto Caricchio, MD, the interim section chief of rheumatology at Temple University Hospital in Philadelphia and the director of the Temple Lupus Clinic at the Lewis Katz School of Medicine.

Systemic sclerosis (SSc): Similar symptoms between SSc and lupus are reflux and Raynaud's disease (when your fingers turn blue or white with cold).
One difference between SSc and lupus is that anti-double-stranded DNA (dsDNA) and anti-Smith (Sm) antibodies, which are linked to lupus, don't usually occur in SSc. Another differentiator is that people with SSc often have antibodies to an antigen called Scl-70 (topoisomerase I) or antibodies to centromere proteins.

Scientists have suspected for years that infections from bacteria, viruses, and other toxins were likely to blame for the development of conditions like lupus. And while they have not been able to identify one single culprit, they have found strong correlations with a number of bacteria and viruses. For example, the Epstein-Barr virus (EBV) has been shown to trigger lupus in some individuals. [4]

In 2007, the European League Against Rheumatism (EULAR) released recommendations for the treatment of SLE. [61] In patients with SLE without major organ manifestations, glucocorticoids and antimalarial agents may be beneficial. [61] NSAIDs may be used for short periods in patients at low risk for complications from these drugs. Consider immunosuppressive agents (eg, azathioprine, mycophenolate mofetil, methotrexate) in refractory cases or when steroid doses cannot be reduced to levels suitable for long-term use. [106]

If your doctor suspects you have lupus, he or she will focus on your RBC and WBC counts. Low RBC counts are frequently seen in autoimmune diseases like lupus. However, low RBC counts can also indicate blood loss, bone marrow failure, kidney disease, hemolysis (RBC destruction), leukemia, malnutrition, and more. Low WBC counts can point toward lupus as well as bone marrow failure and liver and spleen disease.

It can be very scary to receive a lupus diagnosis, have your life disrupted and become uncertain about the future. The good news is that strides are continually being made in the discovery of better diagnostic tools and more effective medications.
With the combination of correct treatment, medication, and a healthy lifestyle, many people with lupus can look forward to leading a long and productive life.

If your CBC comes back with high numbers of RBCs or a high hematocrit, it could indicate a number of other issues, including lung disease, blood cancers, dehydration, kidney disease, congenital heart disease, and other heart problems. High WBCs, called leukocytosis, may indicate an infectious disease, inflammatory disease, leukemia, stress, and more.

The lupus erythematosus (LE) cell test was commonly used for diagnosis, but it is no longer used because LE cells are only found in 50–75% of SLE cases, and they are also found in some people with rheumatoid arthritis, scleroderma, and drug sensitivities. Because of this, the LE cell test is now performed only rarely and is mostly of historical significance. [72]

One food to avoid is alfalfa sprouts. Alfalfa tablets have been associated with lupus flares or a lupus-like syndrome that includes muscle pain, fatigue, abnormal blood test results, and kidney problems. These problems may be due to a reaction to an amino acid found in alfalfa sprouts and seeds. This amino acid can activate the immune system and increase inflammation in people with lupus. Garlic may also stimulate the immune system.
The panel decided to use the body of evidence provided by observational studies because it probably better reflects reality, as the RCTs are severely flawed (indirectness of population, as most patients were inadequately diagnosed with APS). The panel judged the observed reduction in arterial thrombosis with high-intensity AC as a large benefit, and the bleeding increase as a large harm. It was also noted that the observed basal risk (risk with LDA) of thromboembolic recurrence in patients with APS and arterial events was particularly high, compared with the risk of recurrence in patients with VTD.

Although a fever technically is any body temperature above the normal of 98.6 F (37 C), in practice a person is usually not considered to have a significant fever until the temperature is above 100.4 F (38 C). Fever is part of the body's own disease-fighting arsenal; rising body temperatures apparently are capable of killing off many disease-producing organisms.

Drug-induced lupus erythematosus is a (generally) reversible condition that usually occurs in people being treated for a long-term illness. Drug-induced lupus mimics SLE. However, symptoms of drug-induced lupus generally disappear once the medication that triggered the episode is stopped. More than 38 medications can cause this condition, the most common of which are procainamide, isoniazid, hydralazine, quinidine, and phenytoin. [54][10]

Ms. Everett began by explaining that there is no food that can cause lupus. Lupus is an autoimmune disease, an illness that can affect many body systems. The foods that you eat, however, and the medications you take may have an effect on some of your symptoms. It is also important to understand that there is a link between lupus and osteoporosis and cardiovascular disease. Healthy nutrition can have an impact on those with these co-occurring diseases.
Nutrition (e.g., in the case of osteoporosis, calcium intake) in turn may impact the symptoms and outcomes of these co-occurring illnesses. Here are some key issues and benefits that relate to proper nutrition and people living with lupus:

It also is known that some women with systemic lupus erythematosus can experience worsening of their symptoms prior to their menstrual periods. This phenomenon, together with the female predominance of systemic lupus erythematosus, suggests that female hormones play an important role in the expression of SLE. This hormonal relationship is an active area of ongoing study by scientists.

Giant cell arteritis is a chronic inflammation of large arteries, usually the temporal, occipital, or ophthalmic arteries, identified on pathological specimens by the presence of giant cells. It causes thickening of the intima, with narrowing and eventual occlusion of the lumen. It typically occurs after age 50. Symptoms include headache, tenderness over the affected artery, loss of vision, and facial pain. The cause is unknown, but there may be a genetic predisposition in some families. Corticosteroids are usually administered.

According to Goldman Foung, “A diet rich in vegetables gives me energy and keeps me feeling strong and healthy."
She typically eats meals filled with dark leafy greens and other colorful vegetables, eats lots of whole grains, and limits her consumption of meat and processed foods. “I also try to drink fresh-pressed beet juice as often as possible,” she adds. “It’s a great way to sneak in some of those body-boosting ingredients.”

Take a good multivitamin/multimineral supplement with recommended dosages of antioxidants. To help address inflammation, increase intake of omega-3 fatty acids by eating sardines or other oily fish (salmon, herring, mackerel) three times a week or supplementing with fish oil. Freshly ground flaxseeds (grind two tablespoons a day and sprinkle over cereals or salads) can also help decrease inflammation. Other dietary strategies include avoiding polyunsaturated vegetable oils (safflower, sunflower, corn, etc.), margarine, vegetable shortening, and all products made with partially hydrogenated oils. Eat a low-protein, plant-based diet that excludes all products made from cows’ milk, and be sure to eat plenty of fresh fruits and vegetables (with the exception of alfalfa sprouts, which contain the amino acid L-canavanine that can worsen autoimmunity).

The aim of this review is to provide an up-to-date overview of treatment approaches for systemic lupus erythematosus (SLE), highlighting the multiplicity and heterogeneity of clinical symptoms that underlie therapeutic decisions. Discussion will focus on the spectrum of currently available therapies, their mechanisms and associated side-effects. Finally, recent developments with biologic treatments, including rituximab, epratuzumab, tumor necrosis factor (TNF) inhibitors, and belimumab, will be discussed.

If your doctor suspects you have lupus based on your symptoms, a series of blood tests will be done in order to confirm the diagnosis. The most important blood screening test is ANA. If ANA is negative, you don’t have lupus. However, if ANA is positive, you might have lupus and will need more specific tests.
These blood tests include tests for anti-dsDNA and anti-Sm antibodies, which are specific to the diagnosis of lupus.
Common Ailments | Podiatry Clinic for Camberwell and Templestowe

Plantar Fasciitis

Among the most common ailments of the foot, plantar fasciitis is primarily due to arch stress. Small, microscopic tears and painful swelling can form within the plantar fascia (the long, supportive connective-tissue layer within the arch of the foot). This injury usually occurs at the fascia's attachment beneath the heel bone. The result is inflammation, stiffness, and discomfort while walking and standing, particularly after rest and during propulsion (the heel lift-off phase of walking). Treatment involves identifying predisposing postural issues, so that long-term strategies can be put in place to reduce elongation of the arch and the associated stress during walking. Taping the foot, footwear changes, and gait retraining are often important, and custom orthotics can be very helpful for the long-term management of this often debilitating issue. In chronic, recalcitrant cases, targeted injection therapy using corticosteroid (cortisone) can be very effective.

Bunion

A bunion occurs on the side of the foot, typically at the base of the big toe, or hallux. A fluid-filled sac (bursa) under the skin can cause a visible enlargement. However, the term is also commonly used to describe a structural, or bony, deformity known as hallux abducto-valgus (HAV). Bunions are often associated with poor foot posture or biomechanical issues such as excessive foot pronation or flexible feet. Bunions can cause significant pain and are aggravated by activity and tight shoes. Treatment includes footwear modification and addressing underlying biomechanical issues via custom orthotics. Cortisone injection can be very useful where there is a painful bursa present.

Neuroma

A neuroma occurs when a nerve in the foot becomes irritated and is enlarged by swelling. If the irritation is prolonged, the nerve can become thickened and cause still more irritation.
Discomfort from a neuroma is usually felt in the ball of your foot, and can include shooting pain through the toes and numbness. Treating the cause of the nerve irritation is crucial: wearing wider shoes and stabilising excessive movement of the bones within the foot via shaped padding or custom orthotics can reduce irritation to the nerve. Cortisone injection can be helpful for recalcitrant cases of neuroma.

Corns & Callouses

These areas of thickened and hardened skin typically develop as a result of pressure or rubbing over a bony prominence. If the lesion is small and deep, it is called a corn, while more diffuse occurrences are referred to as callouses. Treatment involves trimming of the hard, dead skin layer and protecting the area from future pressure and rubbing.

Toenail Fungus (onychomycosis)

Like most fungi, foot fungi need a warm, moist, and dark environment to flourish – such as the interior of a shoe. A fungal infection can discolour your nails, thicken them, or make them crumbly or loose. There are numerous culprit fungi at work, and the hardness of the nails makes them difficult to treat. Identifying the organism via pathological culture and microscopic examination is critical to confirming a diagnosis, before starting a patient on either topical antifungal treatment (after thinning down the nail(s)) or prescription oral antifungal drugs for more significant, deeper nail infections or when multiple nails are involved.

Ingrown Toenail (onychocryptosis)

This common ailment can occur for a number of reasons. The sides or corners of the toenail will often curve down and put pressure on the skin. The nail can sometimes pierce the skin and begin to grow into it, causing redness, swelling, pain, and infection. Reshaping and filing of the nail can often settle this issue, but some patients may require a procedure called a partial nail resection, performed under local anaesthesia, to permanently remove the offending nail edges.
If bacterial infection has developed secondary to the nail becoming ingrown, prescription of oral antibiotics may be required.

Hammer Toes
Also known as a claw toe or mallet toe, this condition involves an imbalance in the pull of the tendons in your toes, resulting in a deformity. The tendon will pull harder on either the top or the bottom, resulting in a pronounced curl. Deep, supportive shoes, and often custom foot orthotics to address the tendon imbalance, can help relieve pain associated with hammer toes.

Warts (Verruca)
Caused by a virus, warts, or verrucae, can occur anywhere on the feet. They can cause considerable pain, depending on their location and how much pressure they are exposed to. They are also often confused with callouses or corns, due to the build-up of overlying hard skin. Traditionally, treatment involves trimming overlying hard dead skin and then killing the skin infected by the virus by means of chemical or freezing techniques. We now offer advanced targeted microwave therapy, utilising the Swift microwave therapy unit from Emblation. This treatment focuses on immune system modulation rather than traditional, more painful, destructive treatments. This new high-tech treatment is proving very useful for treating children and more stubborn recalcitrant lesions in adults.

Flat Feet (pes planus)
Flat arches are often associated with excessive foot pronation and can lead to chronic soft tissue injury and arthritis, and even stress fractures in athletes or in older patients with osteoporosis. They can be due to a wide variety of biomechanical causes, but are quite treatable once diagnosed. Treatment often includes the prescription of foot orthotics or supports, as well as stretch/strengthen regimes and gait retraining.

Athlete's Foot (tinea pedis)
A fungal infection, athlete's foot is a very common skin condition.
It can cause redness, peeling, itchiness, and small, fluid-filled bumps. You typically find it between the toes or on the bottom of the foot. Treatment usually involves topical application of antifungal creams; in more serious recalcitrant cases, the prescription of oral antifungal drugs can be required. Prevention strategies are crucial to prevent recurrence.

Tendonitis (Tendinopathy)
Tendinopathy involves pain and sometimes inflammation of one or more of the many tendons within the foot or ankle. Acute tendonitis can occur during a single episode of overload, or chronic overload can cause the tendon to slowly become weakened as it is no longer able to repair the micro-trauma associated with regular activities. It may then become thickened, or in some cases develop nodules or bumps within the tendon. Left untreated, tendinopathy can eventually lead to rupture of the tendon. Identifying and addressing underlying postural or biomechanical issues is crucial in the management of this issue, and progressive loading strengthening regimes, with or without prescription of custom orthotics, need to be implemented to allow the tendon to become strong enough to no longer be painful.

N.B. When diagnosing foot posture and function, podiatrists often must address biomechanical factors that are contributing to injury in other areas of the body, including the shin, knee, or back.

Industry Accreditations: AAPSM, Australasian Podiatry Council, Sports Medicine Australia.
Wednesday, September 28, 2022

What Occurs When An Artery In The Brain Is Blocked

What Lasting Effects Can A Stroke Cause
The effects of a stroke depend on the extent and the location of damage in the brain. Among the many types of disabilities that can result from a stroke are:
• Inability to move part of the body
• Weakness in part of the body
• Numbness in part of the body
• Inability to speak or understand words
• Difficulty communicating
• Memory loss, confusion or poor judgment
• Change in personality; emotional problems

Causes And Risk Factors
• Previous TIA
• Atrial fibrillation
Silent cerebrovascular disease is a common condition affecting older adults and is associated with risk for brain ischemia, often referred to as “silent strokes.” Since silent strokes don’t produce clinically recognized stroke symptoms, the American Heart Association and American Stroke Association jointly released guidelines to help clinicians use imaging tests to evaluate the risk for silent cerebrovascular disease.

Clogs And Clots: Causes Of Ischemic Stroke
When an artery that carries blood to the brain becomes clogged or blocked, an ischemic stroke can occur. Arteries may be blocked by fatty deposits due to atherosclerosis. Arteries in the neck, particularly the internal carotid arteries, are a common site for atheromas. Arteries may also be blocked by a blood clot. Blood clots may form on an atheroma in an artery. Clots may also form in the heart of people with a heart disorder. Part of a clot may break off and travel through the bloodstream. It may then block an artery that supplies blood to the brain, such as one of the cerebral arteries. Blood clots in a brain artery do not always cause a stroke. If the clot breaks up spontaneously within less than 15 to 30 minutes, brain cells do not die and people’s symptoms resolve. Such events are called transient ischemic attacks (TIAs).
If an artery narrows very gradually, other arteries sometimes enlarge to supply blood to the parts of the brain normally supplied by the clogged artery. Thus, if a clot occurs in an artery that has developed collateral arteries, people may not have symptoms.

Blockage And Its Symptoms
The precise signs and symptoms depend on the type of artery affected, and are usually manifested when there is a substantial or total blockage in the artery. Given below is a list of the commonly affected arteries, the organs/tissues which depend on them for blood supply, and the corresponding symptoms indicative of a blockage.

Clogged arteries start with small deposits of fatty acids along the endothelial lining of blood vessels. If these break off, they travel down the vessel until it becomes narrow enough that the chunk blocks blood flow. The atherosclerosis of the arteries is not generally broken down: there are no quick fixes for melting away plaque, but people can make key lifestyle changes to stop more of it accumulating and to improve their heart health. A subarachnoid haemorrhage occurs when a blood vessel bursts in the subarachnoid space. When an artery inside the skull becomes blocked by plaque or disease, it is called cerebral artery stenosis. These risks are minimized using small filters called embolic protection devices. Hyperperfusion, the sudden increased blood flow through a previously blocked carotid artery and into the arteries of the brain, can cause a hemorrhagic stroke. Intracerebral hemorrhage is bleeding within the brain from a broken blood vessel. A small tube called a catheter is usually passed up an artery, often from your groin, into the brain. A TIA occurs when there is low blood flow or a clot briefly blocks an artery that supplies blood to the brain.
In the carotid arteries, these small clots can break off and shoot into the small arteries of the brain. With a TIA, you may have the same symptoms as you would have for a stroke. A stroke is either ischemic, which happens when a blood vessel in the brain gets blocked, or hemorrhagic (bleeding), which occurs when a blood vessel in the brain bursts.

How A Stroke Affects You
The Sides of the Brain
The left side of the brain controls the right side of the body. You use the left side of your brain to move the right side of your body, figure out math and science problems and understand what you read and hear. You may have trouble doing these things if you have a stroke that damages parts of the left side of your brain. The right side of the brain controls the left side of the body. You use the right side to move the left side of your body and do creative things like paint a picture, appreciate art or music, recognize the emotion in someone's voice or find where you plan to go. You may have trouble doing these things if you have a stroke in the right side of your brain.

How Is A Diagnosis Made
When an individual is brought to the emergency room with an apparent stroke, the doctor will learn as much as possible about the patient's symptoms, current and previous medical problems, current medications, and family history. The doctor also will perform a physical exam. If the patient can't communicate, a family member or friend will be asked to provide this information. Diagnostic tests are used to help the doctors determine the cause of the stroke and how to treat it.

Brain Ischemia: Types And Causes
Brain ischemia, also known as cerebral ischemia, occurs when there is an insufficient amount of blood flow to the brain. Oxygen and vital nutrients are carried in the blood through arteries, the blood vessels that carry oxygen- and nutrient-rich blood to every part of the body.
The arteries that provide blood to the brain follow a certain pathway that ensures every region of the brain is adequately supplied with blood from one or more arteries. When an artery in the brain becomes blocked or bleeds, this leads to a lower oxygen supply to the region of the brain that relies on that particular artery. Even a temporary deficit in oxygen supply can impair the function of the oxygen-deprived region of the brain. In fact, if the brain cells are deprived of oxygen for more than a few minutes, severe damage can occur, which may result in the death of the brain tissue. This type of brain tissue death is also known as a cerebral infarction or ischemic stroke.

How Is Carotid Artery Disease Treated
Your healthcare provider will figure out the best treatment based on:
• How old you are
• How well you can handle specific medicines, procedures, or therapies
• How long the condition is expected to last
• Your opinion or preference
If a carotid artery is less than 50% narrowed, it is often treated with medicine and lifestyle changes. If the artery is between 50% and 70% narrowed, medicine or surgery may be used, depending on your case. Medical treatment for carotid artery disease may include:

How Serious Is CVST
CVST is an extremely rare but serious type of stroke caused by a blood clot in a part of the brain known as the venous sinus, involving veins that carry blood away from the brain. Spontaneous CVST is estimated to affect 5 of every 1 million people in the world annually. It can cause serious disability or even death.

How Do Blood Clots Form
Blood has many components that can rapidly aggregate to form a semi-solid to solid plug. This is intended to stop any blood loss when there is damage to a blood vessel. There are several steps in this mechanism to stop blood loss, which is known as hemostasis. The most prominent of these phases is the clotting of blood, which provides a longer-term seal until the blood vessel can repair itself.
A blood clot may arise when there is injury to the blood vessel without any break in the vessel wall. This can be due to damage to the inner lining of the artery, seen with conditions like hypertension, or atherosclerotic plaques associated with conditions like high blood lipids. Various other pathologies may also be responsible, like thickening of the blood vessel or blood diseases that cause the blood cells to clump together. As mentioned, the blood clot may arise within one of the cerebral arteries that are slightly damaged or have an atherosclerotic plaque, or in the backdrop of other diseases. A clot that arises at the site is known as a thrombus, and if it causes a cerebral infarction then it is referred to as a thrombotic stroke. When the clot is formed at another site, then dislodges and travels through the bloodstream only to obstruct one of the brain arteries, it is referred to as an embolic stroke.

TIAs: Not Something To Ignore
Often referred to as a mini-stroke, a transient ischemic attack happens when a blockage in a blood vessel stops the flow of blood to part of your brain. Though blood flow is usually blocked for fewer than five minutes, this event is just as serious as a major stroke. TIAs are usually caused by blood clots and are often warning signs of an ischemic stroke; in fact, over one-third of people have a stroke within a year of having a TIA. Someone having a TIA or a major ischemic stroke might show the same symptoms, so it's vital to get emergency medical help as soon as possible, advises Dr. Ermak. Though TIAs often don't cause any damage, getting treatment for a TIA can help you work toward preventing a major stroke in the future.

What Happens When The Posterior Cerebral Artery Is Blocked
The posterior cerebral arteries arise from the basilar artery.
The posterior cerebral artery is one of a pair of arteries that supply oxygenated blood to the occipital lobe, part of the back of the human brain. Also, what causes cerebral artery occlusion? The most common causes of arterial occlusion involving the major cerebral arteries are emboli, most commonly arising from atherosclerotic arterial narrowing at the bifurcation of the common carotid artery, from cardiac sources, or from atheroma in the aortic arch, and a combination of atherosclerotic stenosis. Subsequently, the question is, what does blockage in the brain mean? Overview: intracranial stenosis is a narrowing of an artery inside the brain. A buildup of plaque inside the artery wall reduces blood flow to the brain. Atherosclerosis that is severe enough to cause symptoms carries a high risk of stroke and can lead to brain damage and death. What is a posterior stroke? A posterior circulation stroke means the stroke affects the back area of your brain. This includes your brain stem, cerebellum and occipital lobes.

The Outlook For Hemorrhagic Stroke Patients
Your outlook for recovery depends on the severity of the stroke, the amount of tissue damage, and how soon you were able to get treatment. The recovery period is long for many people, lasting for months or even years. However, most people with small strokes and no additional complications during the hospital stay are able to function well enough to live at home within weeks.

Causes And Types Of Strokes
A stroke may occur if an artery bursts or is blocked. This may prevent blood flow to the brain. Your brain gets blood mainly through:
• two arteries in your neck
• two arteries near your spine
These four arteries branch into other blood vessels that supply your brain with blood. If blood cannot flow to your brain, your brain cells will start to die. Stroke symptoms will start to appear. There are two types of stroke: ischemic and hemorrhagic.
Blocked Arteries: Symptoms And Treatment
‘Blocked arteries’ refers to the clogging of arteries due to plaque deposition in the arterial walls, which hampers blood flow. Blockages in the major arteries and their consequences, as well as the diagnostic and therapeutic procedures, are discussed in the following article. The human circulatory system is a complex system involving the heart and a network of blood vessels. It is responsible for:
• collecting oxygenated blood from the lungs, and supplying it to every tissue of the body via arteries.
• collecting the deoxygenated blood from body tissues via veins, and circulating it to the lungs for oxygenation.
Healthy arteries have a smooth lining. However, sometimes small tears in the inner arterial lining cause some of the circulating substances to accumulate in the arterial walls. These include fats, cholesterol, calcium, fibrin, inflammatory cells, proteins and cellular wastes. The deposits of these cells and molecules harden to form plaques, which lead to clogging of arteries and narrowing of the arterial lumen.

Types Of Stroke And Treatment
Ischemic Stroke
Ischemic stroke is by far the most common type of stroke, accounting for a large majority of strokes. There are two types of ischemic stroke: thrombotic and embolic. A thrombotic stroke occurs when a blood clot, called a thrombus, blocks an artery to the brain and stops blood flow. An embolic stroke occurs when a piece of plaque or thrombus travels from its original site and blocks an artery downstream. The material that has moved is called an embolus. How much of the brain is damaged or affected depends on exactly how far downstream in the artery the blockage occurs.
In most cases, the carotid or vertebral arteries do not become completely blocked, and a small stream of blood trickles to the brain. The reduced blood flow to the brain starves the cells of nutrients and quickly leads to a malfunctioning of the cells. As a part of the brain stops functioning, symptoms of a stroke occur. During a stroke, there is a core area where blood is almost completely cut off and the cells die within five minutes. However, there is a much larger area known as the ischemic penumbra that surrounds the core of dead cells. The ischemic penumbra consists of cells that are impaired and cannot function, but are still alive. These cells are called idling cells, and they can survive in this state for about three hours.

What Is Vertebrobasilar Insufficiency
Vertebrobasilar insufficiency is a condition characterized by poor blood flow to the posterior portion of the brain, which is fed by two vertebral arteries that join to become the basilar artery. Blockage of these arteries occurs over time through a process called atherosclerosis, or the build-up of plaque. Plaques are made up of deposits of cholesterol, calcium and other cellular components. They not only make the arteries hard, they grow over time and can obstruct or even block the flow of blood to the brain. The vertebrobasilar arteries supply oxygen and glucose to the parts of the brain responsible for consciousness, vision, coordination, balance and many other essential functions. Both restricted blood flow and complete blockage, called ischemic events, have serious consequences for brain cells. Ischemia occurs when reduced blood flow to the brain damages cells. An infarction occurs when the cells die. A transient ischemic attack, or mini-stroke, is an ischemic event that results in the temporary loss of brain function. If the resulting loss of brain function is permanent, it's called a stroke.
A stroke can either be caused by blockage in the vertebral or basilar artery, or by the breaking off of a piece of plaque that travels downstream and blocks a portion of the blood flow to the brain.

Who Is At Risk For Carotid Artery Disease
Risk factors associated with atherosclerosis include:
• Older age
• Diet high in saturated fat
• Lack of exercise
Although these factors increase a person's risk, they do not always cause the disease. Knowing your risk factors can help you make lifestyle changes and work with your doctor to reduce the chances you will get the disease.

Tests To Identify The Cause
Identifying the precise cause of an ischemic stroke is important. If the blockage is a blood clot, another stroke may occur unless the underlying disorder is corrected. For example, if blood clots result from an abnormal heart rhythm, treating that disorder can prevent new clots from forming and causing another stroke. Tests for causes may include the following:
• Electrocardiography (ECG) to look for abnormal heart rhythms
• Continuous ECG monitoring to record the heart rate and rhythm continuously for 24 hours, which may detect abnormal heart rhythms that occur unpredictably or briefly
• Echocardiography to check the heart for blood clots, pumping or structural abnormalities, and valve disorders
• Imaging tests: color Doppler ultrasonography, magnetic resonance angiography, CT angiography, or cerebral angiography to determine whether arteries, especially the internal carotid arteries, are blocked or narrowed
• Blood tests to check for anemia, polycythemia, blood clotting disorders, vasculitis, and some infections, and for risk factors such as high cholesterol levels or diabetes
• Urine drug screen for cocaine and amphetamines
Imaging tests enable doctors to determine how narrowed the carotid arteries are and thus to estimate the risk of a subsequent stroke or TIA. Such information helps determine which treatments are needed.
Because CT angiography is less invasive, it has largely replaced cerebral angiography done with a catheter. The exceptions are endovascular procedures.

What Treatments Are Available
Treatment for stroke depends on whether the patient is diagnosed with an ischemic or hemorrhagic stroke. In either case, the person must get to a hospital immediately for the treatments to work. Ischemic stroke treatments can be divided into emergency treatments to reverse a blockage and preventive treatments to prevent stroke.

Emergency procedures
Clot-buster drugs
Thrombolytic “clot-buster” drugs help restore blood flow by dissolving the clot that is blocking the artery. The most common “clot-buster” drug is tissue plasminogen activator, or tPA for short. tPA is an enzyme found naturally in the body that dissolves clots. Doctors inject extra tPA into the bloodstream to speed up this process. To be effective, tPA should be given as quickly as possible. Patients who received tPA within 3 to 4 hours of the onset of stroke symptoms were at least 33% more likely to recover from their stroke with little or no disability after 3 months.
• A stent retriever is a wire mesh tube, like a stent, that is attached to a long wire. When the tube is opened in the blocked artery, the clot gets stuck in the mesh. The doctor then pulls out the mesh using the long wire, pulling out the clot with it.
• An aspiration catheter is like a vacuum cleaner that is attached to a special suction unit and used to suck out the clot.

When To Contact A Medical Professional
Stroke is a medical emergency that needs to be treated right away. The acronym F.A.S.T. is an easy way to remember the signs of stroke and what to do if you think a stroke has occurred. The most important action to take is to call 911 or the local emergency number right away for emergency assistance. F.A.S.T.
stands for: Face drooping, Arm weakness, Speech difficulty, Time to call 911.

Stroke Signs And Symptoms To Look For
When someone has a stroke, get medical help as soon as possible to restore blood flow to the brain or stop the bleeding. These symptoms signal that someone may be having a stroke:
• Sudden weakness or numbness in your face, arm or leg on one side of your body
• Speech difficulty or inability to understand speech
• Sudden loss of balance or coordination
• Vision loss or dimness in one eye
• Trouble swallowing
• Sudden and severe headache with no cause
You can also use the acronym BE FAST to remember the signs:
• Balance difficulties

Know The Symptoms Of A Stroke
• Weakness. You may feel a sudden weakness, tingling, or a loss of feeling on one side of your face or body, including your arm or leg.
• Vision problems. You may have sudden double vision or trouble seeing in one or both eyes.
• Speech problems. You may have sudden trouble talking, slurred speech, or problems understanding others.
• Movement problems. You may have sudden trouble walking, dizziness, a feeling of spinning, a loss of balance, a feeling of falling, or blackouts.
Remember: If you have any of these symptoms, call 911 and your doctor as soon as possible. BE FAST is an easy way to remember the signs of a stroke. When you see these signs, you will know that you need to call 911 fast. BE FAST stands for:
• B is for balance. Sudden onset of loss of balance, coordination, or dizziness.
• E is for eyes. Sudden onset of vision loss, blurred vision, or double vision.
• F is for face drooping. One side of the face is drooping or numb. When the person smiles, the smile is uneven.
• A is for arm weakness. One arm is weak or numb. When the person lifts both arms at the same time, one arm may drift downward.
• S is for speech difficulty. You may notice slurred speech or difficulty speaking. The person can't repeat a simple sentence correctly when asked.
• T is for time to dial 911.
If someone shows any of these symptoms, even if they go away, call 911 right away. Make note of the time the symptoms first appeared.

Causes Of A Hemorrhagic Stroke
There are two possible causes of a ruptured blood vessel in the brain. The most common cause is an aneurysm. An aneurysm occurs when a section of a blood vessel becomes enlarged from chronic and dangerously high blood pressure, or when a blood vessel wall is weak, which is usually congenital. This ballooning leads to thinning of the vessel wall, and ultimately to a rupture. A rarer cause of an ICH is an arteriovenous malformation (AVM). This occurs when arteries and veins are connected abnormally without capillaries between them. AVMs are congenital. This means they're present at birth, but they're not hereditary. It's unknown exactly why they occur in some people.

How Stroke Drugs Work
The drugs used for treating stroke typically work in different ways. Some stroke drugs actually break up existing blood clots. Others help prevent blood clots from forming in your blood vessels. Some work to adjust high blood pressure and cholesterol levels to help prevent blood flow blockages. The drug that your doctor prescribes will depend on the kind of stroke you had and its cause. Stroke drugs can also be used to help prevent a second stroke in people who've already had one.

Warning Symptoms Of Stroke
Because early treatment of stroke can help limit loss of function and sensation, everyone should know what the early symptoms of stroke are.
People who have any of the following symptoms should see a doctor immediately, even if the symptom goes away quickly:
• Sudden weakness or paralysis on one side of the body
• Sudden loss of sensation or abnormal sensations on one side of the body
• Sudden difficulty speaking, including difficulty coming up with words and sometimes slurred speech
• Sudden confusion, with difficulty understanding speech
• Sudden dimness, blurring, or loss of vision, particularly in one eye
• Sudden dizziness or loss of balance and coordination, leading to falls
One or more of these symptoms are typically present in both hemorrhagic and ischemic strokes. Symptoms of a transient ischemic attack are the same, but they usually disappear within minutes and rarely last more than 1 hour. Symptoms of a hemorrhagic stroke may also include the following:
• Sudden severe headache
• Temporary or persistent loss of consciousness
• Very high blood pressure

Symptoms Of Clogged Arteries
Clogged arteries are caused by atherosclerosis, which develops over time as plaques formed from fats, minerals, and more build up inside the walls of your arteries. These buildups cause the inner tunnels, called lumens, of the arteries to become smaller and narrower. As a result, the heart has to use more pressure to pump blood through smaller vessels. This increases blood pressure and puts strain on the pumping ability of the heart. Symptoms of blocked or clogged arteries can include:
• Weakness, especially on one side of the body
• Loss of consciousness
• Vision changes
THE ASBESTOS PROBLEM
All types of asbestos fibres are known to cause asbestosis and malignant mesothelioma in humans, which poses a significant handling and disposal issue, as asbestos cement was used extensively in the construction industry until the 1980s and 1990s in most developed countries. Therefore, infrastructure containing asbestos that is demolished or exposed to natural weathering events poses a considerable risk, especially as the lifespan of cementitious asbestos is approximately 30 years and refurbishment works are needed.

ASBESTOS DESTRUCTION BY MCD
Processing asbestos cement by MCD eliminates the carcinogenic effects of fibrous asbestos by grinding it into an ultra-fine amorphous powder. The central concept behind the fibre destruction process is that mechano-chemical treatment of crystalline substances like asbestos leads to an extremely high degree of amorphisation and phase change usually seen in thermal reactions exceeding 1000°C. This is an interesting phenomenon, as the bulk matrix does not exceed 180°C during MCD treatment. MCD is a sustainable technique for achieving complete asbestos fibre denaturation without the high energy input and scaling issues often encountered with conventional systems.

WASTE-TO-VALUE
The resultant powder of the process can be reused as a high-grade cement additive, exemplifying EDL's waste-to-value approach and the circular economy principles now seen in the progressive global waste industry.

Questions? Please feel free to contact us if you have any queries.
Write a New Service for Linux (opensuse)

Introduction: Write a New Service for Linux (opensuse)
This instructable shows you how to write and implement a new service in Linux (opensuse). You will learn how to turn a shell script into a service. What is that good for? With a service you can do various things. This one is to keep track of how many hours I have been online, how many bytes I have downloaded and what my IP was on a particular day.

Step 1: What Do I Want the PC to Do?
I want to have the following information in a logfile:
- the time of boot
- information about the IP, transferred bytes and maybe other stuff before shutdown; mainly the output of the command "ifconfig"
- the time of shutdown
So I'm going to write a service that does:
- write a timestamp to a file at startup
- write the output of "ifconfig" and a timestamp before shutdown

Step 2: Make the Shell Script to Test Your Service
Before we really implement a new service we try the commands in a script first. So open up an editor and insert the following lines:

#! /bin/sh
# This line is a comment
# the next line defines the logfile
LOGFILE=/home/user/networklogfile
# write "Start" and the current date and time to the logfile
echo "Start: " + `date` >>$LOGFILE
# Write the output of ifconfig to the logfile
echo `ifconfig` >>$LOGFILE
# write "Stop" and the current date and time to the logfile
echo "Stop: " + `date` >>$LOGFILE

Save the file as test.sh, for instance. Then type "chmod +x test.sh" in the console, in the directory where the new file is, to make your script executable.

Explanation: First we define a local variable LOGFILE with the name and the complete path of the logfile; with $LOGFILE we can use this variable in this shell script. The "echo" command does what it says: it echoes what is written behind it. Because I wanted a white space after Start: I had to use quotes around "Start: ".
The backticks around `date` tell the shell to run the date command and substitute its output, instead of writing the word "date" straight into the file. And the two greater-than signs tell the shell to append the output to the file instead of writing it to the console. That's it! Now type ./test.sh to execute the script and see if the output was written to the file. It should be something like:

Start:  + Wed Feb 5 16:30:38 CET 2014
enp2s1 Link encap:Ethernet HWaddr ...
Stop:  + Wed Feb 5 16:31:46 CET 2014

Step 3: Introduction to Service Scripts
Every Linux distribution has its own system of start scripts, so the following is exactly true only for opensuse 13.1; but the principles are similar for all Linux distributions. So let's start: in opensuse every boot script is located in the folder /etc/init.d/. So this is where we have to save our script. But the start scripts are not simple bash scripts. They have to have a certain format for the automatic boot process. The main principles are:
• All scripts are located in the /etc/init.d/ folder.
• In this folder are some more folders called rc0.d, rc1.d and so on up to rc6.d. These folders contain only symbolic links to those scripts that should be executed when the corresponding runlevel is entered or left. For more information about runlevels, ask Google!
• So every script has to provide at least two functions: start and stop.
• When changing a runlevel, the init process calls all K-scripts of the old runlevel with the parameter stop, and
• when these are finished, the init process calls all S-scripts of the new runlevel with the parameter start.
• The symbolic links have numbers in their names after K or S, and this determines the sequence in which the scripts are called. But these links are not made by hand; they are generated by a tool called insserv. Or you configure the different runlevels with YaST.

Step 4: Write the Service Script
Now we are starting with the script itself. In some distributions there are skeletons which you can use for this purpose.
You just take one and fill in the stuff you need. With openSUSE there comes nothing like that, so we have to use an existing one and modify it. Because we are working in /etc/init.d/ you have to be root to edit and save files here. Type "su" and the root password to become root. Now make a new script called networklog and fill it with this content:

#! /bin/sh
# Copyright (c) 2014 andyk75
#
# Author: andyk75 (instructables)
#
# /etc/init.d/networklog
#   and its symbolic link
#
### BEGIN INIT INFO
# Provides: networklog
# Required-Start:
# Required-Stop:
# Default-Start: 3 5
# Default-Stop: 0 1 2 6
# Description: Start the networklogging
# Short-Description: make Networklog
### END INIT INFO

LOGFILE=/home/ak/networklogfile

case "$1" in
    start)
        echo "Start: " + `date` >>$LOGFILE
        echo -n "Starting Networklogging"
        ## Start daemon with startproc(8). If this fails
        ## the echo return value is set appropriate.
        ;;
    stop)
        echo "Stop: " + `date` >>$LOGFILE
        echo `ifconfig` >>$LOGFILE
        echo -n "Shutting down Networklogging"
        ## Stop daemon with killproc(8) and if this fails
        ## set the echo return value.
        ;;
    restart)
        ## Stop the service and regardless of whether it was
        ## running or not, start it again.
        # Remember status and be quiet
        ;;
    status)
        echo -n "Checking for Networkloggingservice "
        ## Check status with checkproc(8), if process is running
        ## checkproc will return with exit status 0.
        ;;
    *)
        echo "Usage: $0 {start|stop|status|restart}"
        exit 1
        ;;
esac

And don't forget to make it executable with 'chmod +x networklog'.

Explanation:
1. In the header part from "### BEGIN INIT INFO" to "### END INIT INFO" we specify in which runlevels this service should be started (3 and 5) and in which it should be stopped (0, 1, 2, 6). And we have a short description of the service. The Required-Start and Required-Stop fields are empty, because we do not rely on any other service being started.
2. Again we have a variable called LOGFILE, as in the test script before.
3.
But the case instruction is new. When the script is called with a parameter, that parameter can be accessed inside the script as $1, and this is what happens here: depending on the parameter, the case statement executes only the lines after the matching label. We have "start", "stop", "restart", "status" and the wildcard "*", which applies if the parameter is anything else. As you can see from the wildcard's echo, you get the name of the script itself ($0) with a small instruction on how to use it.
4. Restart doesn't really do anything.
5. When the init process calls 'networklog start', the current date with a Start label is written to the logfile.
6. When the init process calls 'networklog stop', the output of ifconfig and the current date with a Stop label are written to the logfile.
And that's it.

Step 5: Install the Service With Insserv

Now that we have the script file, open a console and go to /etc/init.d/. Now just execute 'insserv networklog' as root and see that there are some new links in the rc?.d folders. Because the system is already running and the service isn't started yet, we start it by hand by calling 'networklog start'. Otherwise you might get an error message when shutting down, because there is no networklog service to be stopped! :-)

That's it! Now every time you start your system, the time is being recorded in the networklogfile in your home directory. And every time you shut down your system, the complete output of ifconfig and the time are being recorded too. What to do with this log file? Well, that might be in another instructable.
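The case-"$1" dispatch that the service script relies on can be tried in isolation. This is a standalone sketch (hypothetical messages, not the actual init script):

```shell
#!/bin/sh
# Standalone demo of the case-"$1" dispatch used by init scripts:
# the first argument picks the branch to run, and anything unknown
# falls through to the "*" usage branch with a non-zero return.
handle() {
  case "$1" in
    start) echo "Start: logged" ;;
    stop)  echo "Stop: logged" ;;
    *)     echo "Usage: {start|stop}"; return 1 ;;
  esac
}
```

Calling `handle start` prints the start branch, while an unknown argument prints the usage text and returns status 1, just as the real script exits with 1 on unknown parameters.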
Healthy Living

What are Canker Sores?

Canker sores are small, painful ulcers that develop in the lining of the mouth. They are among the most common mouth ulcers, and the sores make eating difficult. Studies show that about 20% of the population suffer from these painful sores. Canker sores are classified into simple and complex sores. Simple canker sores are often seen in people under 20 years old and may last only a week or so. Complex canker sores, on the other hand, are often found in people who have had them earlier and are less common compared to simple sores.

The actual cause of this condition is not yet known. Two factors that increase the risk of sores in the mouth are stress and tissue injury. Injury to the lining of the mouth may be caused by a defective toothbrush, a sharp tooth surface, braces, or dentures. Certain foods, like oranges, lemons, pineapples, and tomatoes, are also known to cause the formation of these sores. Health issues, including vitamin or mineral deficiency, celiac disease, and Crohn's disease, have canker sores as one of their common symptoms. A number of drugs are also known to cause mouth ulcers.

You can identify canker sores from the burning sensation in some part of the mouth, which usually starts before the appearance of the sores. Sores are usually present in the mouth lining, like the inside of the cheeks or the soft palate. Some people may also have them on the tongue. The sores have round edges and are normally grey or whitish in appearance. When the sores are severe, people may complain of fever and swelling in the lymph nodes, and the individual may become drowsy.

Canker sores usually resolve on their own without any specific treatment. In severe forms, antimicrobial mouth rinses, ointments containing corticosteroids, or over-the-counter solutions are used to control the condition.
These treatments help to alleviate the pain and irritation associated with the sores. Some proven methods to prevent the formation of sores are to avoid triggers like specific foods and to use soft brushes for cleaning the teeth. One should visit the physician if the sores are unusually large and painful, are spreading rapidly to other parts of the mouth, or are accompanied by fever.

Key Takeaways
• Oranges, tomatoes, and citrus fruits can encourage canker sores in the mouth.
• Canker sores are considered ulcers of the mouth.
21 \$\begingroup\$ When stacking books you usually want to put the largest ones at the bottom and the smallest ones at the top. However, my latent OCD makes me feel very uneasy if I've got two books where one is shorter (in height) but wider than the other. No matter which order I place them in, the top book will extend beyond the bottom book on one side. As an example, say one book has dimensions (10,15) and another has dimensions (11,14). No matter which way around I put them, I get an overhang. But if I have books with dimensions (4,3) and (5,6), I can avoid an overhang by placing the latter below the former. For the purposes of this challenge we will consider overhangs only in relation to the book immediately below. E.g. if I have a stack (5,5), (3,3), (4,4) (not that any sane person would do that), the top book counts as an overhang, although it does not extend beyond the bottom book. Similarly, the stack (3,3), (3,3), (4,4) also has only one overhang, despite the top book extending beyond the bottom one. The Challenge Given a list of integer pairs for book dimensions, sort those pairs/books such that the number of overhangs is minimal. You must not rotate the books - I want all the spines facing the same direction. If there are multiple solutions with the same number of overhangs, you may choose any such order. Your sorting algorithm does not have to be stable. Your implementation may assume that book dimensions are less than 2^16 each. Time complexity: To make this a bit more interesting, the asymptotic worst-case complexity of your algorithm must be polynomial in the size of the stack. So you can't just test every possible permutation. Please include a short proof of your algorithm's optimality and complexity and optionally a plot that shows the scaling for large random inputs. Of course, you can't use the maximum size of the input as an argument that your code runs in O(1).
You may write a program or function, take input via STDIN, ARGV or function argument in any convenient (not preprocessed) list format and either print or return the result. This is code golf, so the shortest answer (in bytes) wins. I am confident that a polynomial solution exists, but if you can prove me wrong, you may submit such a proof instead of a golfed submission. In this case, you may assume P ≠ NP. I will accept the first correct such proof and award a bounty to it. Examples In: [[1, 1], [10, 10], [4, 5], [7, 5], [7, 7], [10, 10], [9, 8], [7, 5], [7, 5], [3, 1]] Out: [[10, 10], [10, 10], [9, 8], [7, 7], [7, 5], [7, 5], [7, 5], [4, 5], [3, 1], [1, 1]] In: [[4, 5], [5, 4], [5, 4], [5, 4], [5, 4], [4, 5], [4, 5], [4, 5], [5, 4], [4, 5]] Out: [[4, 5], [4, 5], [4, 5], [4, 5], [4, 5], [5, 4], [5, 4], [5, 4], [5, 4], [5, 4]] or [[5, 4], [5, 4], [5, 4], [5, 4], [5, 4], [4, 5], [4, 5], [4, 5], [4, 5], [4, 5]] In: [[2, 3], [1, 1], [5, 5], [7, 1]] Out: [[5, 5], [2, 3], [7, 1], [1, 1]] or [[5, 5], [2, 3], [1, 1], [7, 1]] or [[7, 1], [5, 5], [2, 3], [1, 1]] or [[7, 1], [1, 1], [5, 5], [2, 3]] I created these by hand, so let me know if you spot any mistakes. \$\endgroup\$ • 3 \$\begingroup\$ Are you certain that finding a solution with a minimum number of overhangs can be solved in polynomial time? \$\endgroup\$ – COTO Oct 24 '14 at 1:05 • \$\begingroup\$ @COTO I'm fairly confident, yes. \$\endgroup\$ – Martin Ender Oct 24 '14 at 1:59 • \$\begingroup\$ Hmm. I'd ordinarily tackle it with a greedy algorithm, but I can easily procure inputs leading to suboptimal outputs for any "greed" criterion I can come up with (e.g. area, maximize one dimension, maximize smallest dimension, etc.). The only other approaches I can think of involve partitioning the books into cliques, and all of them have exponential worst-case complexity. I'll be interested to see what answers come up. You might also want to request a brief proof of the optimality of the sort as part of the spec.
\$\endgroup\$ – COTO Oct 24 '14 at 2:05 • \$\begingroup\$ @COTO I've added a paragraph about this in case I'm actually wrong, but don't count on it. ;) \$\endgroup\$ – Martin Ender Oct 24 '14 at 2:43 • \$\begingroup\$ Just in case, potential proofs that no polynomial-time algorithm exists should be allowed to assume that P does not equal NP. \$\endgroup\$ – xnor Oct 24 '14 at 6:09 2 \$\begingroup\$ Pyth, 30 FN_SQFbYIgeeYeb~b]NB)E~Y]]N;sY This is a direct golf of grc's awesome algorithm. Here is the precise equivalent of the above pyth program, in its compiled python code. Q = eval(input()) Y = [] for N in sorted(Q)[::-1]: for b in Y: if Y[-1][-1] >= b[-1]: b += [N] break else: Y += [[N]] print(Psum(Y)) In this context, the Psum(Y) function is equivalent to the python sum(Y,[]). Actual compiled and run code (from pyth -d): Y=[] Q=copy(eval(input())) for N in neg(Psorted(Q)): for b in Y: if gte(end(end(Y)),end(b)): b+=[N] break else: Y+=[[N]] Pprint("\n",Psum(Y)) \$\endgroup\$ • 1 \$\begingroup\$ The Python translation needs "Y=[]", remove the eval if you're in Python 2, and the sum needs a second argument sum(Y,[]). This all should work in Pyth, just the translation doesn't automatically include it. \$\endgroup\$ – xnor Oct 24 '14 at 10:50 • \$\begingroup\$ @xnor The last line really reads: Pprint("\n",Psum(Y)). I think he may have simplified it for convenience, along with all of the -1s etc. Psum actually would run more like reduce(lambda x,y:x+y, Y[1:], Y[0]). \$\endgroup\$ – FryAmTheEggman Oct 24 '14 at 13:23 20 +100 \$\begingroup\$ Python, 113 P=[] for n in sorted(input())[::-1]: for p in P: if p[-1][1]>=n[1]:p+=[n];break else:P+=[[n]] print sum(P,[]) After sorting the list of books in descending order (by width first and then height), this partitions the books into piles without overlaps. To determine where to place each book, its height is compared with the height of the top book in each pile. 
It is placed on the first pile possible, or else a new pile is created. I'm not very good with time complexity, but I believe it would have a worst case of O(N²). There are two loops, each with at most N iterations. I also use Python's builtin sort, which is O(n log n). My first proof that this algorithm produces optimal solutions did turn out to be incorrect. A huge thanks goes to @xnor and @Sp3000 for a great discussion in the chat about proving this (which you can read starting here). After working out a correct proof, @xnor found that part of it had already been done (Dilworth's theorem). Here's an overview of the proof anyway (credit to @xnor and @Sp3000). First, we define the notion of an antipile, or antichain (quoted from @xnor): An antipile is a sequence of books of decreasing height, but increasing width So, each successive book is strictly taller but strictly less wide Note that any book in an antipile overhangs over any other book in an antipile So, no two books within an antipile can be in the same pile As a consequence, if you can find an antipile of x books, then those books must be in different piles So, the size of the largest antipile is a lower bound on the number of piles Then, we sort the books in descending order by their width (first) and their height (second)*. For each book B, we do as follows: 1. If B can fit on the first pile, we place it there and move on. 2. Otherwise, we find the earliest* pile x which B can be placed on top of. This can be a new pile if necessary. 3. Next, we link B to P, where P is the top book on the previous pile x - 1. 4. We now know that: • B is strictly* smaller in width than P, since the books are sorted in descending order by width • B is strictly greater in height than P, or we would have placed B on top of P Now, we have constructed a link from every book (except those in the first pile), to a book in the previous pile that is greater in width and lower in height.
@Sp3000's excellent diagram illustrates this well: By following any path from the last pile (on the right), to the first pile (on the left), we get an antipile. Importantly, this antipile's length is equal to the number of piles. Hence, the number of piles used is minimal. Finally, since we have organised the books into the minimum number of piles without overlaps, we can stack them on top of each other to get one pile with the minimum number of overlaps. * this helpful comment explains a few things \$\endgroup\$ • 3 \$\begingroup\$ +1 for the expositive proof and link to the discussion. Props to xnor et al. \$\endgroup\$ – COTO Oct 24 '14 at 14:08 • \$\begingroup\$ I should clarify that Dilworth's Theorem doesn't cover the whole proof, just the fact that the smallest number of piles equals the greatest-size antipile. \$\endgroup\$ – xnor Oct 24 '14 at 19:30
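As a standalone sanity check (not part of any answer above, using the challenge's own test data), the pile-building algorithm and the challenge's overhang count can be exercised like this:

```python
def overhangs(stack):
    # Count books that extend beyond the book immediately below in
    # either dimension (the challenge's definition); stack is bottom-first.
    return sum(t[0] > b[0] or t[1] > b[1] for b, t in zip(stack, stack[1:]))

def sort_books(books):
    # grc's algorithm: sort descending by (width, height), then greedily
    # drop each book onto the first pile whose top book is at least as tall,
    # starting a new pile when none fits. Finally flatten the piles.
    piles = []
    for n in sorted(books)[::-1]:
        for p in piles:
            if p[-1][1] >= n[1]:
                p.append(n)
                break
        else:
            piles.append([n])
    return [book for pile in piles for book in pile]
```

On the first example the books collapse into a single overhang-free pile, matching the expected output, and the (5,5), (3,3), (4,4) stack from the spec indeed counts one overhang.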
More Cowbell: An Imaginary Audio Control Example in JavaFX · Java Zone

One of my favorite Saturday Night Live sketches is More Cowbell, in which Christopher Walken's character keeps asking for "more cowbell" during a recording session.  Today's example covers some of the simple but powerful concepts of JavaFX in the context of an imaginary iPhone-esque application that lets you select a music genre and control the volume.  Of course, "Cowbell Metal", shortened to "Cowbell", is one of the available genres :-)   Click the screenshot below to launch the application, and then I'll show you the code behind it. Audio_configuration Application Behavior and the Code Behind It When you play with the application, notice that adjusting the volume slider changes the associated decibel (dB) level displayed.  Also, selecting the Muting checkbox disables the slider, and selecting various genres changes the volume slider.  This behavior is enabled by concepts that you'll see in the code below, such as binding to a class that contains a model, on replace triggers, and sequences (think arrays). Here is the main program, which contains the declarative script that expresses the UI: /* * AudioConfigMain.fx - A JavaFX Script example program that demonstrates * "the way of JavaFX" (binding to model classes, triggers, sequences, and * declaratively expressed, node-centric UIs). Note: Because this example * covers beginning JavaFX concepts, it is more verbose than necessary. * * Developed 2008 by James L. Weaver jim.weaver [at] javafxpert.com * as a JavaFX Script SDK 1.0 example for the Pro JavaFX book.
*/ package projavafx.audioconfig.ui; import javafx.ext.swing.*; import javafx.stage.Stage; import javafx.scene.*; import javafx.scene.paint.*; import javafx.scene.shape.*; import javafx.scene.text.*; import javafx.scene.transform.*; import projavafx.audioconfig.model.AudioConfigModel; Stage { var acModel = AudioConfigModel { selectedDecibels: 35 } title: "Audio Configuration" scene: Scene { content: [ Rectangle { x: 0 y: 0 width: 320 height: 45 fill: LinearGradient { startX: 0.0 startY: 0.0 endX: 0.0 endY: 1.0 stops: [ Stop { color: Color.web("0xAEBBCC") offset: 0.0 }, Stop { color: Color.web("0x6D84A3") offset: 1.0 }, ] } }, Text { translateX: 65 translateY: 12 textOrigin: TextOrigin.TOP fill: Color.WHITE content: "Audio Configuration" font: Font { name: "Arial Bold" size: 20 } }, Rectangle { x: 0 y: 43 width: 320 height: 300 fill: Color.rgb(199, 206, 213) }, Rectangle { x: 9 y: 54 width: 300 height: 130 arcWidth: 20 arcHeight: 20 fill: Color.color(1.0, 1.0, 1.0) stroke: Color.color(0.66, 0.67, 0.69) }, Text { translateX: 18 translateY: 69 textOrigin: TextOrigin.TOP fill: Color.web("0x131021") content: bind "{acModel.selectedDecibels} dB" font: Font { name: "Arial Bold" size: 18 } }, SwingSlider { translateX: 120 translateY: 69 width: 175 enabled: bind not acModel.muting minimum: bind acModel.minDecibels maximum: bind acModel.maxDecibels value: bind acModel.selectedDecibels with inverse }, Line { startX: 9 startY: 97 endX: 309 endY: 97 stroke: Color.color(0.66, 0.67, 0.69) }, Text { translateX: 18 translateY: 113 textOrigin: TextOrigin.TOP fill: Color.web("0x131021") content: "Muting" font: Font { name: "Arial Bold" size: 18 } }, SwingCheckBox { translateX: 280 translateY: 113 selected: bind acModel.muting with inverse }, Line { startX: 9 startY: 141 endX: 309 endY: 141 stroke: Color.color(0.66, 0.67, 0.69) }, Text { translateX: 18 translateY: 157 textOrigin: TextOrigin.TOP fill: Color.web("0x131021") content: "Genre" font: Font { name: "Arial Bold" size: 18 } }, 
SwingComboBox { translateX: 204 translateY: 148 width: 93 items: bind for (genre in acModel.genres) { SwingComboBoxItem { text: genre } } selectedIndex: bind acModel.selectedGenreIndex with inverse } ] } } Notice how the bind operator is used in various places to cause the UI to reflect the state of the model.  In a couple of places, a bind with inverse is employed to keep the UI and the model class in sync bi-directionally.  Now take a look at the model class, and in particular the on replace trigger that is invoked when the user selects a genre: /* * AudioConfigModel.fx - The model class behind a JavaFX Script example * program that demonstrates "the way of JavaFX" (binding to model classes, * triggers, sequences, and declaratively expressed, node-centric UIs). * * Developed 2008 by James L. Weaver jim.weaver [at] javafxpert.com * as a JavaFX Script SDK 1.0 example for the Pro JavaFX book. */ package projavafx.audioconfig.model; /** * The model class that the AudioConfigMain.fx script uses */ public class AudioConfigModel { /** * The minimum audio volume in decibels */ public var minDecibels:Integer = 0; /** * The maximum audio volume in decibels */ public var maxDecibels:Integer = 160; /** * The selected audio volume in decibels */ public var selectedDecibels:Integer = 0; /** * Indicates whether audio is muted */ public var muting:Boolean = false; /** * List of some musical genres */ public var genres = [ "Chamber", "Country", "Cowbell", "Metal", "Polka", "Rock" ]; /** * Index of the selected genre */ public var selectedGenreIndex:Integer = 0 on replace { if (genres[selectedGenreIndex] == "Chamber") { selectedDecibels = 80; } else if (genres[selectedGenreIndex] == "Country") { selectedDecibels = 100; } else if (genres[selectedGenreIndex] == "Cowbell") { selectedDecibels = 150; } else if (genres[selectedGenreIndex] == "Metal") { selectedDecibels = 140; } else if (genres[selectedGenreIndex] == "Polka") { selectedDecibels = 120; } else if 
(genres[selectedGenreIndex] == "Rock") { selectedDecibels = 130; } }; } As always, please leave a comment if you have any questions! Learn JavaFX in Stockholm, Sweden in January 2009 I'll be speaking on JavaFX at the Jfokus 2009 conference in January.  While in the area, I will also be conducting a two-day public JavaFX class in Stockholm on January 29 & 30, 2009 entitled "Rich Internet Application Development with JavaFX". Regards, Jim Weaver JavaFXpert.com
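The on replace trigger above is, in essence, a change callback attached to a property. As an illustrative plain-Java sketch (hypothetical class and method names, not the JavaFX Script API or modern JavaFX), the genre-to-decibels trigger could be modelled like this:

```java
import java.util.function.IntConsumer;

// Minimal observable model: setSelectedGenreIndex() stores the new index,
// runs the "trigger" body that derives selectedDecibels, and then fires
// the registered callback, mirroring the on replace block above.
class GenreModel {
    private final String[] genres = {"Chamber", "Country", "Cowbell", "Metal", "Polka", "Rock"};
    private final int[] decibels = {80, 100, 150, 140, 120, 130}; // values from the trigger body
    private int selectedGenreIndex = 0;
    private int selectedDecibels = 80;
    private IntConsumer onReplace = v -> {};

    void setOnReplace(IntConsumer cb) { onReplace = cb; }

    void setSelectedGenreIndex(int i) {
        selectedGenreIndex = i;
        selectedDecibels = decibels[i];      // the "on replace" trigger body
        onReplace.accept(selectedDecibels);  // notify bound listeners (e.g. the slider)
    }

    int getSelectedDecibels() { return selectedDecibels; }
    String getSelectedGenre() { return genres[selectedGenreIndex]; }
}
```

Selecting "Cowbell" (index 2) drives selectedDecibels to 150, just as in the model class above.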
ECSE-593: Horn Antennas
Rectangular Horn Antenna
Presentation by Adam Santorelli on 26 March 2011

Transcript of ECSE-593: Horn Antennas

Intro Background Waveguide Horn Antenna Basics H-Plane Horn E-Plane Horn Pyramidal Horn ECSE:593 Graduate Lectures: Rectangular Horn Antennas Adam Santorelli Why are we interested in Rectangular Antennas? Popularity Easy to Understand Think of an EM wave like a sound wave Much like a megaphone amplifies sound waves, a horn antenna amplifies the input EM waves Rectangular Horns Fed by a rectangular waveguide This is the source of the EM wave Horn reduces reflections from the open end of the waveguide Reflections are reduced Better transmission Horn is created by flaring the edges of the waveguide This tapered angle allows for impedance matching image taken from [1] image taken from [2] Design Concerns Operation Frequency Desired Gain Aperture Size Horn Length High gain Ease of construction Well behaved Work over wide Frequency Range Commonly used as feed for Reflector Antennas Motivation Horn antenna increases fields from waveguide like a horn amplifies sound Development of horn antenna is intuitive Once we realize that EM radiation travels as waves Acoustic wave knowledge known for a long time... The First Horn Antenna Apply this expertise to EM!
1897 - Sir Jagadish Chandra Bose operated in mm range demonstrated radiation of waves by: ringing bells exploding gun powder Image taken from [4] Image taken from [3] Ignored for so long... Not used until 1930s Gained popularity during WW2 Increased interest in microwave applications Became very popular with ascent of reflector antennas Image taken from [5] EM waves confined by TIR Wave propagates within the structure Fields are zero at boundary surfaces Rectangular waveguide Confinement in both x and y Many modes can be excited at a given frequency Physical size will play a role Dominant mode is TE10 It is possible to excite only this mode: Operate below the cutoff frequency of the next mode to ensure this criterion Image taken from [6] Single Mode Operation Cutoff frequency is dependent on waveguide dimensions! Similar Analysis to H-plane horn Assumed TE10 mode operation Rectangular Horn Defined by a rectangular aperture Other shapes possible... Three types H-plane E-plane Pyramidal Popular use in microwave range Commonly above 1 GHz Directivity Simple Concepts Governed by frequency and Aperture size: Outside our chosen design point behaviour will not be ideal Half-Power beamwidth can also be used to determine directivity Gain Related to directivity Efficiency is characterized by aperture efficiency Aperture taper efficiency, ε_t Antenna efficiency, ε_r Phase efficiency, ε_ph Aperture Taper Efficiency Loss due to amplitude distribution of aperture Defined by waveguide Set to be 0.81 for rectangular waveguide Antenna efficiency About 1 for horn antennas Phase efficiency Discussed later!
Varies based on horn design Assume TE10 operation: H-plane is along the broadwall dimension (x-axis) Taper waveguide along this dimension Phase Variation In the H-plane the aperture is larger than the waveguide waves are no longer in phase waves arriving at the edges have travelled further than those at the center Phase varies exponentially: Define a factor t: Represents phase error Increasing phase error affects the gain Increasing factor t Increased side lobes Decreased Directivity Increasing phase error Directivity Depends on aperture size as well... Variance in gain for specific horn length Optimum value for A dimension for a given horn length This optimal value is given by: Can then determine optimal value for phase parameter t: Optimal t=0.375 How does this make sense? Why is some phase error optimal?? Balance between gain from increasing aperture size and phase mismatch! Waveguide is flared along dimension in the E-plane (y-axis) Phase variation Phase errors across aperture Quadratic phase variation Introduce phase factor s: Similar antenna behaviour for varying s Directivity Observe similar behaviour to H-plane, but... Note that maximum directivity for a given horn length will now occur at a different aperture size: Thus the optimal value for the phase factor is changed. Here the optimal value is s=0.25 Design Options A Case Study The best of both worlds... Flare the waveguide in both planes Creates a narrow BW in both directions Analysis is carried out by combining results Phase Variation Aperture size is larger in both x and y directions: Phase error along both axes Directivity Aperture distribution can be considered separable: Antenna Efficiency Overall directivity is a function of the corresponding directivity of the E-plane and H-plane models Antenna efficiency is a function of taper and phase efficiency: Phase efficiency can be broken down into efficiency for each plane.
Phase efficiency along each axis is a function of the phase error parameters s and t respectively For optimum operation (s=0.25 & t=0.375), phase efficiency is 0.80 and 0.79 respectively Taper efficiency is dependent on the waveguide aperture Always using rectangular waveguide Fixed value of 0.81 Overall antenna efficiency: ε_ap = 0.51 Develop procedure to design horn antenna to meet design requirements Requirements: Connecting waveguide dimensions are known Set phase error to be optimal values Why? Want a physically realizable horn Setting s & t to optimum values obtains the shortest horn length to meet the desired gain The all powerful equation.... Design steps 1. Specify desired Gain, operating wavelength and waveguide structure to be used 2. Solve for A 3. Find B. Can then solve for horn length and other horn dimensions 4. Verification Drawbacks... Performance is optimized for a single wavelength Outside this frequency the antenna will not behave as designed Difficult to determine behaviour Balance between effects of varying wavelength and change in phase error in aperture Design Problem Want: G=21.75dB f=8.75GHz Given: a=2.29cm b=1.02cm Find the optimal Horn! Use the "all powerful" equation: Solve for A first Our design: A=18.61cm B=14.75cm G=21.8 Results Solve for half-power beamwidth HP_E=12.4 HP_H=14.2 References 1. www.1389blg.com 2. http://www.q-par.com/products/horn-antennas/standard-gain-horns 3. http://amrabangalee.org/page3.html 4. http://www.setileague.org/photos/wghorn.htm 5. http://www.interfacebus.com/Electronic_Dictionary_Radar_Terms_F.html 6. http://depts.washington.edu/cmditr/mediawiki/index.php?title=Planar_Dielectric_Waveguides 7. http://www.rfcafe.com/references/electrical/rectangular-waveguide-modes.htm 8. Microwave-induced acoustic imaging of biological tissues, Lihong V. Wang, Xuemei Zhao, Haitao Sun, and Geng Ku, Rev. Sci. Instrum. 70, 3744 (1999), DOI:10.1063/1.1149986 9.
Fear, E.C.; Sill, J.; Stuchly, M.A., "Experimental feasibility study of confocal microwave imaging for breast tumor detection," Microwave Theory and Techniques, IEEE Transactions on, vol. 51, no. 3, pp. 887-892, Mar 2003, doi: 10.1109/TMTT.2003.808630. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1191744&isnumber=26710 Applications Microwave-induced thermoacoustic Imaging Biomed Applications Illumination of human anatomy for Imaging Communications? Military? MITAS Confocal Microwave Imaging Confocal Microwave Imaging Ridged Pyramidal Horn Antenna Radiation of short microwave pulses Illuminate the breast with microwave radiation Absorbed energy causes thermal expansion Acoustic pressure wave generated Operate at 9.4GHz Gain = 16dB Aperture Size = 55mm x 74mm 10 kW of radiating power! Image taken from [8] Used as feed for reflector antennas Image taken from [9] Antenna Parameters: Width = 24.4cm Height = 27.9cm Length = 15.9cm Weight = 1.8kg Purpose Test experimental setup Compare monopole and horn antenna to detect tumor in phantom models Use EMCO model 3115 horn antenna Horn antenna is used to illuminate the breast with a microwave pulse Records reflected signal Improved performance with horn antenna vs. monopole A double-ridged pyramidal horn Questions?
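The design problem in the slides can be reproduced numerically. The sketch below uses the standard textbook (Balanis-style) pyramidal-horn design equation with the optimum phase errors s = 1/4 and t = 3/8 baked in; the function name and the bisection bracket are my own choices, and the slide's exact "all powerful" formula may differ in detail:

```python
import math

def design_pyramidal_horn(gain_db, freq_hz, a, b):
    """Solve the textbook pyramidal-horn design equation for the
    aperture dimensions A (H-plane) and B (E-plane), in metres.
    a, b are the feeding waveguide's inner dimensions in metres."""
    lam = 3e8 / freq_hz           # free-space wavelength
    g0 = 10 ** (gain_db / 10)     # linear gain

    def f(chi):
        # Root of f(chi) = 0 gives a horn whose E- and H-plane
        # flare lengths coincide (a physically realizable horn).
        lhs = (math.sqrt(2 * chi) - b / lam) ** 2 * (2 * chi - 1)
        rhs = ((g0 / (2 * math.pi * math.sqrt(chi))) * math.sqrt(3 / (2 * math.pi))
               - a / lam) ** 2 * (g0 ** 2 / (6 * math.pi ** 3 * chi) - 1)
        return lhs - rhs

    lo, hi = 1.0, 100.0           # bracket the root, then bisect
    for _ in range(100):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    chi = (lo + hi) / 2
    A = (g0 / (2 * math.pi)) * math.sqrt(3 / (2 * math.pi * chi)) * lam
    B = math.sqrt(2 * chi) * lam
    return A, B
```

Running it with the slide's numbers (G = 21.75 dB, f = 8.75 GHz, a = 2.29 cm, b = 1.02 cm) lands within about a millimetre of the quoted A = 18.61 cm and B = 14.75 cm.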
Async functions

Direct FFI of async functions is absolutely in scope for CXX (on C++20 and up) but is not implemented yet in the current release. We are aiming for an implementation that is as easy as:

#[cxx::bridge]
mod ffi {
    unsafe extern "C++" {
        async fn doThing(arg: Arg) -> Ret;
    }
}

rust::Future<Ret> doThing(Arg arg) {
  auto v1 = co_await f();
  auto v2 = co_await g(arg);
  co_return v1 + v2;
}

Workaround

For now the recommended approach is to handle the return codepath over a oneshot channel (such as futures::channel::oneshot) represented in an opaque Rust type on the FFI.

// bridge.rs
use futures::channel::oneshot;

#[cxx::bridge]
mod ffi {
    extern "Rust" {
        type DoThingContext;
    }

    unsafe extern "C++" {
        include!("path/to/bridge_shim.h");

        fn shim_doThing(
            arg: Arg,
            done: fn(Box<DoThingContext>, ret: Ret),
            ctx: Box<DoThingContext>,
        );
    }
}

struct DoThingContext(oneshot::Sender<Ret>);

pub async fn do_thing(arg: Arg) -> Ret {
    let (tx, rx) = oneshot::channel();
    let context = Box::new(DoThingContext(tx));

    ffi::shim_doThing(
        arg,
        |context, ret| {
            let _ = context.0.send(ret);
        },
        context,
    );

    rx.await.unwrap()
}

// bridge_shim.cc
#include "path/to/bridge.rs.h"
#include "rust/cxx.h"

void shim_doThing(
    Arg arg,
    rust::Fn<void(rust::Box<DoThingContext> ctx, Ret ret)> done,
    rust::Box<DoThingContext> ctx) noexcept {
  doThing(arg)
      .then([done, ctx(std::move(ctx))](auto &&res) mutable {
        (*done)(std::move(ctx), std::move(res));
      });
}
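The shape of this workaround, a callback that completes a one-shot channel the caller then awaits, can be illustrated in isolation with the C++ standard library. Here std::promise/std::future stand in for futures::channel::oneshot, and all names are illustrative rather than CXX API:

```cpp
#include <cassert>
#include <future>
#include <utility>

// One-shot channel demo: the "done" callback owns the sender half and
// fulfills it exactly once; the awaiting side blocks on the receiver.
// This mirrors shim_doThing handing Box<DoThingContext> to the callback.
int oneshot_demo() {
  std::promise<int> tx;                   // sender half of the channel
  std::future<int> rx = tx.get_future();  // receiver half

  // Callback handed to the C++ side; invoked when the async work finishes.
  auto done = [t = std::move(tx)](int ret) mutable { t.set_value(ret); };

  done(42);         // simulate the completion path of the async call
  return rx.get();  // the caller "awaits" the result
}
```

The key property is single ownership of the sender: the callback consumes it when it fires, so the result can be delivered at most once, exactly like a oneshot::Sender moved into the done callback.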
Simple Highcharts Chart Example using PHP MySQL Database

By Hardik Savani | December 20, 2016 | Category: PHP, Bootstrap, MySQL, JSON, Highcharts

Highcharts is a JavaScript charting library that can render bar charts, line charts, area charts, column charts, and more. The Highcharts library also provides several themes and graphic designs so you can build a better layout. Highcharts is a very popular and simple library for PHP developers. We can use it in plain PHP projects as well as in PHP frameworks such as Laravel (How to add charts in Laravel 5 using Highcharts?), CodeIgniter, Symfony, CakePHP, etc.

In this post, we will learn how to implement a simple dynamic column chart with Highcharts using PHP and a MySQL database. I will give you a full example of a column chart. You just have to follow these three steps:

1) Create the database
2) Create a database configuration file
3) Create the index.php file

In these three steps, we will create two tables with some dummy data and represent them in a column chart. After completing this example, you will see the column chart layout shown below; it's pretty nice, I think.

Preview:

Step 1: Create the Database

In the first step, we need to create a new database named "h_sole" (or rename it as you like). After creating the database, we will show a column chart comparing viewers and clicks, so we need to create tables.
We need the following two tables:

1) demo_viewer
2) demo_click

You can create these two tables with the following MySQL queries:

Create demo_viewer table:

CREATE TABLE IF NOT EXISTS `demo_viewer` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `numberofview` int(11) NOT NULL,
  `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=10 ;

Create demo_click table:

CREATE TABLE IF NOT EXISTS `demo_click` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `numberofclick` int(12) NOT NULL,
  `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=10 ;

Now we need some dummy data spread across different years, so you can add records like those in the screenshots below. I added some dummy records to both tables so you can simply add them to your database for testing.

demo_viewer table:

demo_click table:

Step 2: Create the Database Configuration File

In this step we need to create a MySQL database configuration file. In this file we will add the phpMyAdmin username, password, and database name. So let's create a new file "db_config.php" and put the code below in it:

db_config.php

<?php
$dbHost = "localhost";
$dbDatabase = "h_sole";
$dbPassword = "root";
$dbUser = "root";
$mysqli = mysqli_connect($dbHost, $dbUser, $dbPassword, $dbDatabase);
?>

Step 3: Create the index.php File

In the last step, we just have to create the index.php file in the root folder. In this file I write the code for getting data from the MySQL database and converting it into JSON.
So let's create a new file "index.php" and put the code below in it:

index.php

<?php
require('db_config.php');

/* Getting demo_viewer table data */
$sql = "SELECT SUM(numberofview) as count FROM demo_viewer GROUP BY YEAR(created_at) ORDER BY created_at";
$viewer = mysqli_query($mysqli, $sql);
$viewer = mysqli_fetch_all($viewer, MYSQLI_ASSOC);
$viewer = json_encode(array_column($viewer, 'count'), JSON_NUMERIC_CHECK);

/* Getting demo_click table data */
$sql = "SELECT SUM(numberofclick) as count FROM demo_click GROUP BY YEAR(created_at) ORDER BY created_at";
$click = mysqli_query($mysqli, $sql);
$click = mysqli_fetch_all($click, MYSQLI_ASSOC);
$click = json_encode(array_column($click, 'count'), JSON_NUMERIC_CHECK);
?>
<!DOCTYPE html>
<html>
<head>
    <title>HighChart</title>
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
    <script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.1.1/jquery.js"></script>
    <script src="https://code.highcharts.com/highcharts.js"></script>
</head>
<body>

<script type="text/javascript">
$(function () {
    var data_click = <?php echo $click; ?>;
    var data_viewer = <?php echo $viewer; ?>;

    $('#container').highcharts({
        chart: { type: 'column' },
        title: { text: 'Yearly Website Ratio' },
        xAxis: { categories: ['2013', '2014', '2015', '2016'] },
        yAxis: { title: { text: 'Rate' } },
        series: [
            { name: 'Click', data: data_click },
            { name: 'View', data: data_viewer }
        ]
    });
});
</script>

<div class="container">
    <br/>
    <h2 class="text-center">Highcharts php mysql json example</h2>
    <div class="row">
        <div class="col-md-10 col-md-offset-1">
            <div class="panel panel-default">
                <div class="panel-heading">Dashboard</div>
                <div class="panel-body">
                    <div id="container"></div>
                </div>
            </div>
        </div>
    </div>
</div>

</body>
</html>

OK, now we are ready to run this example, so just run the command below from the root folder of your project:
php -S localhost:8000

Now you can open the URL below in your browser:

http://localhost:8000

I hope it can help you...
Reactive Programming with RxPY

Web applications contain lots of database operations, network calls, nested callbacks, and other computationally expensive tasks that might take a long time to complete or even block other threads until they are done. This is where ReactiveX comes in: it not only gives us the facility to convert almost anything (variables, properties, user inputs, caches, etc.) into a stream that can be managed asynchronously, it also gives us an easy way to handle errors, which is a hard task in asynchronous programming. ReactiveX makes our code more flexible, readable, maintainable, and easy to write.

We will explore how ReactiveX helps make things easier with its toolbox of operators, which can be used to filter, create, transform, or unify any of those streams. We will learn that in just a few lines of maintainable code we can have multiple web sockets receiving multiple requests, all handled by an asynchronous process that serves a filtered output. To do that, I will explain an example that implements observables, observers/subscribers, and subjects. We will start by requesting our data stream from the GitHub API with a Tornado web socket and then filtering and processing it asynchronously.
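The core ideas the talk covers can be sketched without any dependencies. The following is a minimal, hand-rolled illustration of the ReactiveX model (an observable pushes items to an observer, and operators like filter/map derive new observables); it is not RxPY itself, and the event data is made up. With RxPY v3 the equivalent pipeline would be roughly `rx.from_(events).pipe(ops.filter(...), ops.map(...)).subscribe(...)`.

```python
# Dependency-free sketch of observable/observer/operator plumbing.
class Observable:
    def __init__(self, source):
        self._source = source  # callable that takes an on_next callback

    def subscribe(self, on_next):
        self._source(on_next)

    def filter(self, predicate):
        # New observable that only forwards items passing the predicate.
        return Observable(lambda on_next: self.subscribe(
            lambda item: on_next(item) if predicate(item) else None))

    def map(self, fn):
        # New observable that forwards transformed items.
        return Observable(lambda on_next: self.subscribe(
            lambda item: on_next(fn(item))))

def from_iterable(items):
    return Observable(lambda on_next: [on_next(i) for i in items])

# Pretend these arrived from the GitHub events API over a web socket.
events = [
    {"type": "PushEvent", "repo": "octocat/hello"},
    {"type": "ForkEvent", "repo": "octocat/hello"},
    {"type": "PushEvent", "repo": "octocat/world"},
]

repos = []
(from_iterable(events)
    .filter(lambda e: e["type"] == "PushEvent")
    .map(lambda e: e["repo"])
    .subscribe(repos.append))

print(repos)  # ['octocat/hello', 'octocat/world']
```

The same filtered output would be served to subscribers as new events arrive, which is what makes the pattern a good fit for the web-socket scenario described above.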
src/libsystemd-terminal/term-internal.h

/*-*- Mode: C; c-basic-offset: 8; indent-tabs-mode: nil -*-*/

/***
  This file is part of systemd.

  Copyright (C) 2014 David Herrmann <[email protected]>

  systemd is free software; you can redistribute it and/or modify it
  under the terms of the GNU Lesser General Public License as published by
  the Free Software Foundation; either version 2.1 of the License, or
  (at your option) any later version.

  systemd is distributed in the hope that it will be useful, but
  WITHOUT ANY WARRANTY; without even the implied warranty of
  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  Lesser General Public License for more details.

  You should have received a copy of the GNU Lesser General Public License
  along with systemd; If not, see <http://www.gnu.org/licenses/>.
***/

#pragma once

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include "util.h"

typedef struct term_char term_char_t;
typedef struct term_charbuf term_charbuf_t;

typedef struct term_color term_color;
typedef struct term_attr term_attr;
typedef struct term_cell term_cell;
typedef struct term_line term_line;

/*
 * Miscellaneous
 * Sundry things and external helpers.
 */

int mk_wcwidth(wchar_t ucs4);
int mk_wcwidth_cjk(wchar_t ucs4);
int mk_wcswidth(const wchar_t *str, size_t len);
int mk_wcswidth_cjk(const wchar_t *str, size_t len);

/*
 * Ageing
 * Redrawing terminals is quite expensive. Therefore, we avoid redrawing on
 * each single modification and mark modified cells instead. This way, we know
 * which cells to redraw on the next frame. However, a single DIRTY flag is not
 * enough for double/triple buffered screens, hence, we use an AGE field for
 * each cell. If the cell is modified, we simply increase the age by one. Each
 * framebuffer can then remember its last rendered age and request an update of
 * all newer cells.
 * TERM_AGE_NULL is special. If used as cell age, the cell must always be
 * redrawn (forced update). If used as framebuffer age, all cells are drawn.
 * This way, we can allow integer wrap-arounds.
 */

typedef uint64_t term_age_t;

#define TERM_AGE_NULL 0

/*
 * Characters
 * Each cell in a terminal page contains only a single character. This is
 * usually a single UCS-4 value. However, Unicode allows combining-characters,
 * therefore, the number of UCS-4 characters per cell must be unlimited. The
 * term_char_t object wraps the internal combining char API so it can be
 * treated as a single object.
 */

struct term_char {
        /* never access this value directly */
        uint64_t _value;
};

struct term_charbuf {
        /* 3 bytes + zero-terminator */
        uint32_t buf[4];
};

#define TERM_CHAR_INIT(_val) ((term_char_t){ ._value = (_val) })
#define TERM_CHAR_NULL TERM_CHAR_INIT(0)

term_char_t term_char_set(term_char_t previous, uint32_t append_ucs4);
term_char_t term_char_merge(term_char_t base, uint32_t append_ucs4);
term_char_t term_char_dup(term_char_t ch);
term_char_t term_char_dup_append(term_char_t base, uint32_t append_ucs4);

const uint32_t *term_char_resolve(term_char_t ch, size_t *s, term_charbuf_t *b);
unsigned int term_char_lookup_width(term_char_t ch);

/* true if @ch is TERM_CHAR_NULL, otherwise false */
static inline bool term_char_is_null(term_char_t ch) {
        return ch._value == 0;
}

/* true if @ch is dynamically allocated and needs to be freed */
static inline bool term_char_is_allocated(term_char_t ch) {
        return !term_char_is_null(ch) && !(ch._value & 0x1);
}

/* true if (a == b), otherwise false; this is (a == b), NOT (*a == *b) */
static inline bool term_char_same(term_char_t a, term_char_t b) {
        return a._value == b._value;
}

/* true if (*a == *b), otherwise false; this is implied by (a == b) */
static inline bool term_char_equal(term_char_t a, term_char_t b) {
        const uint32_t *sa, *sb;
        term_charbuf_t ca, cb;
        size_t na, nb;

        sa = term_char_resolve(a, &na, &ca);
        sb = term_char_resolve(b, &nb, &cb);
        return na == nb && !memcmp(sa, sb, sizeof(*sa) * na);
}

/* free @ch in case it is dynamically allocated */
static inline term_char_t term_char_free(term_char_t ch) {
        if (term_char_is_allocated(ch))
                term_char_set(ch, 0);

        return TERM_CHAR_NULL;
}

/* gcc _cleanup_ helpers */
#define _term_char_free_ _cleanup_(term_char_freep)
static inline void term_char_freep(term_char_t *p) {
        term_char_free(*p);
}

/*
 * Attributes
 * Each cell in a terminal page can have its own set of attributes. These alter
 * the behavior of the renderer for this single cell. We use term_attr to
 * specify attributes.
 * The only non-obvious field is "ccode" for foreground and background colors.
 * This field contains the terminal color-code in case no full RGB information
 * was given by the host. It is also required for dynamic color palettes. If it
 * is set to TERM_CCODE_RGB, the "red", "green" and "blue" fields contain the
 * full RGB color.
 */

enum {
        /* dark color-codes */
        TERM_CCODE_BLACK,
        TERM_CCODE_RED,
        TERM_CCODE_GREEN,
        TERM_CCODE_YELLOW,
        TERM_CCODE_BLUE,
        TERM_CCODE_MAGENTA,
        TERM_CCODE_CYAN,
        TERM_CCODE_WHITE,                                               /* technically: light grey */

        /* light color-codes */
        TERM_CCODE_LIGHT_BLACK          = TERM_CCODE_BLACK + 8,         /* technically: dark grey */
        TERM_CCODE_LIGHT_RED            = TERM_CCODE_RED + 8,
        TERM_CCODE_LIGHT_GREEN          = TERM_CCODE_GREEN + 8,
        TERM_CCODE_LIGHT_YELLOW         = TERM_CCODE_YELLOW + 8,
        TERM_CCODE_LIGHT_BLUE           = TERM_CCODE_BLUE + 8,
        TERM_CCODE_LIGHT_MAGENTA        = TERM_CCODE_MAGENTA + 8,
        TERM_CCODE_LIGHT_CYAN           = TERM_CCODE_CYAN + 8,
        TERM_CCODE_LIGHT_WHITE          = TERM_CCODE_WHITE + 8,

        /* pseudo colors */
        TERM_CCODE_FG,                                                  /* selected foreground color */
        TERM_CCODE_BG,                                                  /* selected background color */
        TERM_CCODE_RGB,                                                 /* color is specified as RGB */

        TERM_CCODE_CNT,
};

struct term_color {
        uint8_t ccode;
        uint8_t red;
        uint8_t green;
        uint8_t blue;
};

struct term_attr {
        term_color fg;                          /* foreground color */
        term_color bg;                          /* background color */

        unsigned int bold : 1;                  /* bold font */
        unsigned int italic : 1;                /* italic font */
        unsigned int underline : 1;             /* underline text */
        unsigned int inverse : 1;               /* inverse fg/bg */
        unsigned int protect : 1;               /* protect from erase */
        unsigned int blink : 1;                 /* blink text */
};

/*
 * Cells
 * The term_cell structure represents a single cell in a terminal page. It
 * contains the stored character, the age of the cell and all its attributes.
 */

struct term_cell {
        term_char_t ch;         /* stored char or TERM_CHAR_NULL */
        term_age_t age;         /* cell age or TERM_AGE_NULL */
        term_attr attr;         /* cell attributes */
        unsigned int cwidth;    /* cached term_char_lookup_width(cell->ch) */
};

/*
 * Lines
 * Instead of storing cells in a 2D array, we store them in an array of
 * dynamically allocated lines. This way, scrolling can be implemented very
 * fast without moving any cells at all. Similarly, the scrollback-buffer is
 * much simpler to implement.
 * We use term_line to store a single line. It contains an array of cells, a
 * fill-state which remembers the amount of blanks on the right side, a
 * separate age just for the line which can overwrite the age for all cells,
 * and some management data.
 */

struct term_line {
        term_line *lines_next;          /* linked-list for histories */
        term_line *lines_prev;          /* linked-list for histories */

        unsigned int width;             /* visible width of line */
        unsigned int n_cells;           /* # of allocated cells */
        term_cell *cells;               /* cell-array */

        term_age_t age;                 /* line age */
        unsigned int fill;              /* # of valid cells; starting left */
};

int term_line_new(term_line **out);
term_line *term_line_free(term_line *line);

#define _term_line_free_ _cleanup_(term_line_freep)
DEFINE_TRIVIAL_CLEANUP_FUNC(term_line*, term_line_free);

int term_line_reserve(term_line *line, unsigned int width, const term_attr *attr, term_age_t age, unsigned int protect_width);
void term_line_set_width(term_line *line, unsigned int width);
void term_line_write(term_line *line, unsigned int pos_x, term_char_t ch, unsigned int cwidth, const term_attr *attr, term_age_t age, bool insert_mode);
void term_line_insert(term_line *line, unsigned int from, unsigned int num, const term_attr *attr, term_age_t age);
void term_line_delete(term_line *line, unsigned int from, unsigned int num, const term_attr *attr, term_age_t age);
void term_line_append_combchar(term_line *line, unsigned int pos_x, uint32_t ucs4, term_age_t age);
void term_line_erase(term_line *line, unsigned int from, unsigned int num, const term_attr *attr, term_age_t age, bool keep_protected);
void term_line_reset(term_line *line, const term_attr *attr, term_age_t age);

void term_line_link(term_line *line, term_line **first, term_line **last);
void term_line_link_tail(term_line *line, term_line **first, term_line **last);
void term_line_unlink(term_line *line, term_line **first, term_line **last);

#define TERM_LINE_LINK(_line, _head) term_line_link((_line), &(_head)->lines_first, &(_head)->lines_last)
#define TERM_LINE_LINK_TAIL(_line, _head) term_line_link_tail((_line), &(_head)->lines_first, &(_head)->lines_last)
#define TERM_LINE_UNLINK(_line, _head) term_line_unlink((_line), &(_head)->lines_first, &(_head)->lines_last)
Digital TV Tower Locator | Reception | Antennas | Amplifiers | Cables | Installation | Frequency | Networks

OTA DTv: Television Reception Factors

PAGE CONTENTS -- Antenna Angle / Frequency | Terrain Loss / Masking | Ground Clutter | Radio Horizon | Attic / Indoor | Summary

Below are loss factors that may explain why a signal is weaker than expected.

ANTENNA ANGLE - BEAM LOSS

Beam loss occurs when the receive antenna is not aligned directly with the broadcast signal. Signals aligned directly (0°) with the antenna get the maximum gain. Gain decreases slightly with angle up to the beam edge, where it is down by -3 dB; past the beam edge, gain drops dramatically. The beam edge for an outside antenna with moderate gain is about ±45° off center (a 90° beam). A lower-gain antenna could have a beam as wide as 120°; a high-gain antenna could have a beam as narrow as 30°. Also see the next section, Antenna / Gain and Beam.

ANTENNA FREQUENCY VARIATION

Antenna gain is not constant; it varies, up or down, with frequency. Advertised gains are usually the average gain over the frequency band (some are the maximum gain). The gain spread can be as low as 2 dB or as high as 6 dB or more; 4 dB (±2 dB) is typical for a moderate-gain antenna.

ANTENNA POLARIZATION LOSS

Most television stations transmit a signal with horizontal polarization. Some stations transmit right-hand circular elliptical polarization (main axis in the horizontal plane) for better propagation in hilly and cluttered areas. Most receive antennas are designed for horizontal polarization and have a 1 or 2 dB polarization loss for elliptical signals.

TERRAIN LOSS

The area between the broadcast and receive antennas, the free-space region, should be clear of all obstructions for best reception. The region is shaped like an ellipsoid, or a cartoon cigar shape. Near either antenna the region's radius is a couple of wavelengths, about 4 to 30 feet (UHF to VHF). The area near and all around either antenna should be clear.
At the midpoint the ellipsoid radius (r) is largest and depends on the frequency (f) and the distance (d) between antennas: the lower the frequency and the greater the distance, the larger the radius. Many times ground terrain (hills, mountains, a near radio horizon, or an antenna close to the ground) will get into the free-space region, causing a signal reduction (terrain loss). The loss can vary from 4 - 12 dB or more.

Free-space ellipsoid midpoint radius:

    r_meters = 273.85 * sqrt(d_km / f_MHz)

[Interactive calculator: ellipsoid midpoint radius from RF band/channel and range]

TERRAIN MASKING

Terrain masking is always a concern, but it is especially problematic for UHF channels. Terrain, and relatively close large structures, can completely block (mask) a signal. The broadcast tower antenna MSL (height above Mean Sea Level) is used to trace the signal path for terrain interference. In the illustration below, the broadcast antenna is listed as 1200 meters MSL. Elevated terrain (a mountain) blocks a direct path to the receive antenna. There is some signal reduction and fringing past the mountain, but not enough fringing for UHF channels; a VHF channel may be receivable.

The tower antenna height Above Ground Level (AGL) is the antenna MSL (1200 meters) minus the ground MSL (900 meters), or 300 meters (984 feet) AGL.

GROUND CLUTTER

Large, relatively close structures can cause significant signal loss; structures in the distance cause less loss. Trees will reduce a signal: the longer the distance the signal travels through a tree or multiple trees, the greater the loss. Trees without foliage (in winter) may have slightly less loss (about 1 dB) at UHF frequencies.
FOLIAGE LOSS

    Distance    VHF dB    UHF dB
    20'         3         4
    40'         4         6
    60'         5         8
    80'         6         9
    100'        7         11
    120'        8         12
    140'        8         13
    160'        9         14
    180'        10        15
    200'        10        16

RADIO HORIZON

The broadcast tower's ground elevation and tower height (above ground level) determine the radio horizon; locations outside the horizon don't get a signal. The radio horizon is greater than the visual (optical) horizon: in the atmosphere, radio waves bend slightly upward, increasing the range, while light waves do not bend (very much). In a free-space vacuum, radio and light waves propagate in a straight line.

Using a Smooth Earth Model (over water at 0 meters MSL), the radio horizon varies from about 10 miles for a 15 meter (50 foot) tower to over 60 miles for a 600 meter (about 2000 foot) tower. Broadcast towers are usually located on the highest ground possible, increasing horizon range. Elevating the receive antenna can increase reception range, if the signal is strong enough.

[Interactive calculators: radio horizon from antenna height, and antenna height from radio horizon, using a Smooth Earth Model]

Radio horizon calculations are based on line-of-sight equations using a larger earth radius (the 4/3 earth-radius model).

ATTIC MOUNT

Attic mount loss is at least -3 dB for a 3/4 inch plywood roof covered with roofing paper and one layer of 3-tab asphalt shingles. Metal-backed insulation in the attic or walls and metal exhaust vents and air ducts all block signals.

INDOOR ANTENNA

Walls, floors, ceilings, roofs, doors, windows, appliances, furniture, and partitions will reduce a signal. Wall insulation without a metal backing has a minor loss, a fraction of a dB. Metal-backed insulation, metal siding, awnings, doors, screens, air ducts, and water pipes will block or reflect a signal. An outside wall with vinyl siding, or an inside wall, will have a loss of about 6 dB or more; a brick wall loss is about 8 dB or more.
Signal Loss (estimates are for UHF frequencies; VHF is less lossy, by 1 or 2 dB or more)

    Attic
      Asphalt Shingle Roof        3 dB
    Glass
      0.25 in. thick              1 dB
      0.5 in. thick               2 dB
      Glass Block                 6 dB
    Wood
      Plywood                     2 dB
      Wood Door                   3 dB
    Walls
      Plasterboard                2 dB
      Drywall                     3 dB
      Marble                      4 dB
    Brick
      3.5 in. thick               3 dB
      7 in. thick                 5 dB
      10.5 in. thick              6 dB
      7.5 in. Brick/Concrete     13 dB
    Cinder Block
      8 in. wide                 11 dB
      16 in. wide                15 dB
      24 in. wide                25 dB
    Concrete
      4 in. thick                11 dB
      8 in. thick                21 dB
      12 in. thick               32 dB

Wire or metal mesh (wired glass windows, chicken wire, chain link fences, etc.) will completely block a signal if the largest open space in the mesh is equal to or less than a quarter wavelength (opening ≤ quarter wave). The mesh acts exactly like solid metal for signals that are a quarter wavelength or longer (lower frequency). Television broadcast quarter wavelengths vary from 3.7 inches for RF channel 69 to 4.3 feet for RF channel 2 (93 mm to 1.3 meters).

[Interactive calculator: wavelength and quarter wave from RF channel]

SUMMARY

An antenna mounted 30 feet above the ground in a flat open field, with a clear line of sight and aligned directly with the broadcast tower, could receive a signal near or greater than expected. In practice, an additional 3 - 6 dB loss is not uncommon. Signal-path clutter and indoor antennas will have greater loss, and terrain could introduce more loss.

Source and Approximate Loss

    SOURCE                            LOSS (dB)
    Antenna Beam Loss:
      Main Beam (0° ± 45°)            0 - 3
      Side Lobe ±(45° to 90°)         10 - 20
      Back Lobe ±(90° to 180°)        30
    Antenna Gain Variation:           ±2
    Polarization Loss:                0 - 1
    Attic Mount:                      3
    Indoor Antenna:                   1 - 11
    Ground Clutter:                   3 - 15
    Terrain Loss:                     4 - 12
    Terrain Masking:                  No Signal
    Outside Radio Horizon:            No Signal

Beam loss, terrain masking, and radio horizon loss can be estimated. Attic mount and indoor antenna loss are more difficult to predict.
Ground clutter, terrain loss, and antenna gain variation are often difficult or impossible to estimate.

otadtv.com/reception | OTA DTv © Copyright 2017
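The three formulas above can be sketched directly in code. This uses the page's own midpoint-radius constant (273.85), a commonly quoted smooth-earth radio-horizon approximation (d_miles ≈ 1.415 · sqrt(h_ft), which reproduces the "about 10 miles for a 50 foot tower" figure), and the quarter-wave relation λ/4 = c / (4f).

```python
import math

def ellipsoid_midpoint_radius_m(distance_km, freq_mhz):
    """Free-space ellipsoid midpoint radius: r_m = 273.85 * sqrt(d_km / f_MHz)."""
    return 273.85 * math.sqrt(distance_km / freq_mhz)

def radio_horizon_miles(antenna_height_ft):
    """Smooth-earth (4/3 earth radius) radio horizon: ~1.415 * sqrt(h_ft) miles."""
    return 1.415 * math.sqrt(antenna_height_ft)

def quarter_wave_m(freq_mhz):
    """Quarter wavelength in meters: c / (4 * f)."""
    return 299.792458 / (4.0 * freq_mhz)

print(round(radio_horizon_miles(50), 1))   # 10.0 miles for a 50 ft tower
print(round(quarter_wave_m(800) * 1000))   # 94 mm, ~RF channel 69 (~800 MHz)
```

These match the page's figures: a 10 km path at 600 MHz gives a midpoint radius of roughly 35 meters, and channel 69's quarter wave comes out near the quoted 93 mm.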
IT infrastructure monitoring

Note: Build confidence that your ArcGIS systems will be available and performant through proactive IT infrastructure monitoring (ITIM).

Modern enterprise systems are expected to be performant and reliable by users at all levels. Any disruption in service could result in significant financial and operational loss. To reduce the possibility of service disruption, even the most well-architected IT infrastructure requires continuous monitoring to detect unexpected service demands and component failures. This practice enables IT professionals to minimize negative impacts on business operations by automatically detecting and reporting potential issues with the performance, availability, and resource utilization of your enterprise GIS system components.

Recommendations

1. Monitor all components related to your ArcGIS Enterprise systems
2. Establish monitoring practices in all environments (development, test, and production)
3. Analyze historical metrics to discover utilization trends and causes for spikes and failures
4. Consider the benefits of ArcGIS Monitor, which provides specialized monitoring capabilities to detect, diagnose, and report issues and provide remediation recommendations

Monitor all components

ArcGIS Enterprise comprises four components that are configured to work together:

• Portal for ArcGIS
• ArcGIS Server
• ArcGIS Data Store
• ArcGIS Web Adaptor

These components can be hosted on multiple servers, on-premises or with a cloud provider, and depend on several computing resources, such as servers, DBMSs, networks, file systems, and so on. The unavailability or performance degradation of any of these resources can disrupt business operations.

With the above in mind, establishing proactive measures to automatically detect overuse or unavailability of each resource can provide early notification of potential problems, enabling IT professionals to respond to issues quickly and provide guidance on how to remediate them.
Establishing automated monitoring and remediation steps for each component is required to build confidence that business operations will not be disrupted. Establishing and maintaining a complete system architecture diagram is critical to identifying all dependent components and computing resources.

This guidance aligns with the need to provide full redundancy in highly available (HA) configurations, enabling your system to function well in the event of the failure of any dependent resources or ArcGIS Enterprise components. While HA configurations prevent business processes from being immediately disrupted, a failure does leave a vulnerable component, and quick restoration of the failed component is needed to maintain your redundant environment. Monitoring both primary and redundant components is important to maintaining confidence throughout your organization that the system can meet your organization's Service Level Agreement (SLA).

Monitor all environments

Establishing monitoring practices for all environments (production, staging, test, and development) instills confidence that the system will be available for all users. Full monitoring of the production environment establishes confidence that business operations will remain available and performant, and typically needs little justification. Monitoring test and staging environments provides IT professionals two advantages:

1. Access to solid metrics on the computing resource consumption of new capabilities or an upgraded system, to assist in capacity planning
2. An environment in which to practice detecting and reporting resource usage and availability anomalies, such as a server not being available or a web service not responding
Monitoring the development environment not only provides a reliable system for your developers and GIS analysts, but can also provide valuable insight into the system resource utilization of experimental processes, such as long-running geoprocessing tools.

Analyze historical metrics

Modern monitoring tools not only detect potential issues in real time but also collect metrics for the various components. Analyzing these metrics over time can provide valuable insight into the system's activity, usage, and performance, leading to the discovery of utilization trends and causes for spikes and failures. Such analysis can highlight which enterprise GIS component has the most activity, which web services are most or least active, or which services are more active during specific time periods. This insight can then be used to justify and design changes to system resources and to anticipate spikes in usage.

Consider Esri-provided monitoring tools

Each of these components delivers essential services with different system dependencies. For example, GIS-based applications often consume web services hosted by ArcGIS Server to deliver business information and capabilities. Diagnosing performance and outage issues for these services can be difficult due to the number of dependencies. Esri has developed a specialized monitoring product called ArcGIS Monitor, designed to detect, diagnose, and report issues, as well as provide remediation guidance specific to the ArcGIS system. ArcGIS Monitor complements ArcGIS Enterprise by providing tools to monitor the health of your enterprise GIS implementations.

Conclusion

Implementing effective IT infrastructure monitoring is essential for maintaining optimal performance and reliability of your ArcGIS deployment. By following these best practices, you can proactively identify issues, optimize resource allocation, and ensure a seamless experience for ArcGIS users.
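To make the "automatically detect and report" idea concrete, here is a minimal, hypothetical sketch of a component health check with a consecutive-failure alert rule. The URL pattern and the two-failure threshold are illustrative assumptions, not ArcGIS Monitor behavior (consult the product documentation for actual health-check endpoints).

```python
import urllib.request

def check_component(url, timeout=5):
    """Return True if the endpoint answers with HTTP 200, else False."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False  # unreachable, timed out, or refused

def should_alert(history, threshold=2):
    """Alert only after `threshold` consecutive failures, to avoid noise
    from a single transient blip."""
    return len(history) >= threshold and not any(history[-threshold:])

# e.g. up/down samples collected for one monitored machine:
print(should_alert([True, True, False]))   # False - single blip, no alert
print(should_alert([True, False, False]))  # True  - two consecutive failures
```

A scheduler (cron, or a monitoring agent) would call `check_component` for each component in the architecture diagram, append the result to that component's history, and notify the on-call team when `should_alert` fires.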
What is it about?

Oblique shock–boundary layer interactions cause large stagnation pressure losses in high-speed aircraft, hence worsening performance. We test a flexible panel as a simple means to control them. Using experiments and simulations, we find that these interactions can be weakened when the panel is used, leading to improved efficiency.

Why is it important?

We find that using flexible surfaces can lead to significant reductions in the losses due to oblique shocks. This could be used to improve the performance of supersonic engine inlets.

Perspectives

I carried out this research as part of my Master's project. I hope that my findings can be useful to the shock control community.

Nicolas Gomez Vega, Imperial College London

This page is a summary of: Oblique Shock Control with Steady Flexible Panels, AIAA Journal, May 2020, American Institute of Aeronautics and Astronautics (AIAA), DOI: 10.2514/1.j058933.
Unraveling the Shadows
While gaslighting is often associated with toxic relationships, it is crucial to understand what gaslighting truly is and dispel misconceptions that can cloud our understanding.

Building Therapeutic Relationships with Clients
Establishing a strong therapeutic relationship is essential for facilitating positive outcomes in various helping professions.

The Mental Load
In today’s fast-paced world, the demands of both domestic life and work can create an overwhelming mental load for individuals.

Nurturing Maternal Mental Health
The journey of motherhood is a profound and transformative experience. From the moment a baby is born, a mother’s life is forever changed.

How to help struggling adolescents during summer break
Summer break can be a time of excitement and adventure for children and teens, but for those struggling with mental health challenges, it can also be a time of added stress and anxiety.

Bye, Bye, “perfect” bikini body
Studies have shown that body image concerns can lead to a range of mental health issues, including depression, anxiety, and eating disorders.

How to be more mindful this summer
As the warm summer months approach, it’s the perfect time to reflect on the benefits of mindfulness practices for mental health.

Summertime Sadness: how to cure the summertime blues
Seasonal affective disorder (SAD) is a type of depression that usually affects people during the winter months when there is less sunlight. However, there is also a lesser-known form of SAD that affects people during the summer months.

Pet Your Stress Away
Pets bring joy and companionship to their owners. But did you know that pets can also improve your mental health? Studies have shown that owning a pet can have a positive impact on your mental well-being.

The Influence of Caregivers on Attachment Styles
As we celebrate caregivers in our lives, now is an appropriate time to reflect on the influence of significant caregivers, attachment styles, parenting practices, and how this impacts adult attachment.
Planck’s radiation law (also called Planck’s law) is a mathematical relationship formulated in 1900 by German physicist Max Planck to explain the spectral-energy distribution of radiation emitted by a blackbody (a hypothetical body that completely absorbs all radiant energy falling upon it, reaches some equilibrium temperature, and then reemits that energy as quickly as it absorbs it). Planck assumed that the sources of radiation are atoms in a state of oscillation and that the vibrational energy of each oscillator may have any of a series of discrete values but never any value between. Planck further assumed that when an oscillator changes from a state of energy E1 to a state of lower energy E2, the discrete amount of energy E1 − E2, or quantum of radiation, is equal to the product of the frequency of the radiation, symbolized by the Greek letter ν, and a constant h, now called Planck’s constant, that he determined from blackbody radiation data; i.e., E1 − E2 = hν. Planck’s law for the energy Eλ radiated per unit volume by a cavity of a blackbody in the wavelength interval λ to λ + Δλ (Δλ denotes an increment of wavelength) can be written in terms of Planck’s constant (h), the speed of light (c), the Boltzmann constant (k), and the absolute temperature (T): Eλ = 8πhc / {λ⁵[exp(hc/λkT) − 1]}. The wavelength of the emitted radiation is inversely proportional to its frequency, or λ = c/ν. The value of Planck’s constant is found to be 6.62606957 × 10−34 joule∙second, with a standard uncertainty of 0.00000029 × 10−34 joule∙second. For a blackbody at temperatures up to several hundred degrees, the majority of the radiation is in the infrared radiation region of the electromagnetic spectrum. At higher temperatures, the total radiated energy increases, and the intensity peak of the emitted spectrum shifts to shorter wavelengths so that a significant portion is radiated as visible light.
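Planck’s law is easy to evaluate numerically. The sketch below is illustrative and not part of the original article; it uses the exact 2019 SI values of h, c, and k and locates the emission peak with a brute-force grid scan, recovering the Wien displacement behavior described above (λmax·T ≈ 2.898 × 10⁻³ m·K).

```python
import math

H = 6.62607015e-34   # Planck constant, J*s (2019 SI exact value)
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck_energy_density(lam, T):
    """Spectral energy density E_lambda of cavity blackbody radiation,
    8*pi*h*c / (lam^5 * (exp(h*c/(lam*k*T)) - 1)), in J/m^3 per m."""
    x = H * C / (lam * K * T)
    if x > 700:          # avoid overflow; intensity is negligible here
        return 0.0
    return 8 * math.pi * H * C / (lam ** 5 * math.expm1(x))

def peak_wavelength(T, lo=1e-8, hi=1e-3, n=20000):
    """Locate the wavelength of maximum emission by a log-spaced grid scan."""
    best_lam, best_u = lo, 0.0
    for i in range(n):
        lam = lo * (hi / lo) ** (i / (n - 1))
        u = planck_energy_density(lam, T)
        if u > best_u:
            best_lam, best_u = lam, u
    return best_lam

# The peak shifts to shorter wavelengths as temperature rises,
# with lambda_max * T roughly constant (~2.898e-3 m*K).
for T in (300.0, 5800.0):
    lam_max = peak_wavelength(T)
    print(f"T = {T:6.0f} K  ->  peak at {lam_max * 1e9:9.1f} nm")
```

At a few hundred kelvin the peak sits deep in the infrared, while near a solar surface temperature it falls in the visible band, matching the article's closing remarks.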
Attribute access Object attributes in Python are named fields (data, functions) that belong to a given object (an instance or a class). The simplest way to access an attribute is through the dot: class Foo: def __init__(self): self.x = 88 # set the attribute's value f = Foo() print(f.x) # attribute access through the dot If we access an attribute that does not exist, we get an AttributeError. We can override this behavior by implementing the magic methods __getattr__ or __getattribute__. __getattr__ is called if the attribute is not found the usual way (it was not previously set through the dot, through setattr, or through __dict__). If the attribute is found, __getattr__ is NOT called. 📎 Example. Return -1 for any non-existent attribute. class Test: def __getattr__(self, item): print(f'__getattr__({item})') return -1 t = Test() # set x and y t.x = 10 setattr(t, 'y', 33) print(t.x) # 10 print(t.y) # 33 print(t.z) # __getattr__(z) -1 The __getattribute__ method is called whenever we try to get any attribute at all, whether it exists or not. This method is invoked before __getattr__. It is a bit trickier: if __getattribute__ raises AttributeError, then __getattr__ will be called. 📎 Example. We can forbid reading certain attributes: class Test: def __getattr__(self, item): print(f'__getattr__({item})') return -1 def __getattribute__(self, item): print(f'__getattribute__({item})') if item == 'y': # forbid reading y raise AttributeError return super().__getattribute__(item) # set x and y t = Test() t.x = 10 t.y = 20 print(t.x) # __getattribute__(x) 10 print(t.y) # __getattribute__(y) __getattr__(y) -1 print(t.z) # __getattribute__(z) __getattr__(z) -1 ⚠️ Warning! Inside __getattribute__ we may call super().__getattribute__(item) or object.__getattribute__(self, item), which is essentially the same thing, but we must not do return self.__dict__[item], return self.__getattribute__(item), or return getattr(self, item), since that leads to infinite recursion.
💡 There is also the magic method __setattr__(self, key, value), called on obj.key = value or setattr(obj, 'key', value). It has no longer-named twin brother. To complete the picture, there is also the built-in function getattr(object, name[, default]). The call getattr(x, 'y') is equivalent to dot access: x.y In the first case 'y' is a string, which lets us fetch object attributes dynamically, unlike the dot, which requires a fixed name at the time the code is written. If the attribute is unavailable, we get an AttributeError when default is not given, or we get default (with no error raised) if it was passed as the third argument. Specially for the @pyway channel. 🗓 Calendar When there is no calendar at hand, but there is Python: import calendar; calendar.TextCalendar().pryear(2019) Or from the command line: python -c 'import calendar; calendar.TextCalendar().pryear(2019)' Want it in Russian (if it is not already)? import locale locale.setlocale(locale.LC_ALL, 'ru_RU') import calendar calendar.TextCalendar().pryear(2019) You can also find out whether a year is a leap year: >>> calendar.isleap(2019) False >>> calendar.isleap(2020) True Or what day of the week it is today? >>> calendar.day_name[calendar.weekday(2019, 2, 19)] 'вторник' Look for more calendar functions in the documentation of the calendar module. Specially for the @pyway channel. ⛓ Comparison chains A common situation: checking that a variable lies within given bounds. One could use the logical operator and: if x >= 5 and x < 20: However, Python offers us a syntactic convenience that looks more "mathematical". Such notation is both shorter and clearer: if 5 <= x < 20: The comparison operators may be any from this list, in any combination: ">", "<", "==", ">=", "<=", "!=", "is" ["not"], ["not"] "in" That is, an expression like a < b > c is perfectly legal, though hard to understand.
Formally, if we have N operations OP1…OPN and N + 1 expressions (a, b … y, z), then an expression of the form: a OP1 b OP2 c … y OPN z is equivalent to: a OP1 b and b OP2 c and … and y OPN z 📎 Examples: x = 5 print(1 < x < 10) print(x < 10 < x*10 < 100) print(10 > x <= 9) print(5 == x > 4) a, b, c, d, e, f = 0, 5, 12, 0, 15, 15 print(a <= b < c > d is not e is f) Specially for the @pyway channel. Python: is or == Beginners often get confused about the is and == constructs. Let's sort out what's what. Straight to the point: == (and its antagonist !=) are used to check the equality (inequality) of the values of two objects. The value is simply what is stored in a variable. The value of the number 323235 is the number 323235 itself. A tautology, but the examples will make it clearer. The operator is (and its antagonist is not) is used to check the equality (inequality) of references to an object. Note right away that a value (say 323235) can be copied and stored in different places (in different objects in memory). >>> x = 323235 >>> y = 323235 >>> x == y True >>> x is y False See: the variables are equal in value, but they refer to different objects. I did not pick the large number 323235 by accident. The point is that, as an optimization, the Python interpreter creates a certain number of frequently used constants at startup (from -5 to 256 inclusive). Watch the sleight of hand closely: >>> x = 256 >>> y = 256 >>> x is y True >>> x = 257 >>> y = 257 >>> x is y False >>> x = -5 >>> y = -5 >>> x is y True >>> x = -6 >>> y = -6 >>> x is y False This is why beginners often make the mistake of thinking that writing == is somehow not the Python way, while is is. This erroneous assumption may not be exposed right away. Python tries to cache and reuse string values. Therefore it is quite likely that variables containing identical strings will hold references to the same objects. But that is not guaranteed!
Look at the last example: >>> x = "hello" >>> y = "hello" >>> x is y True >>> x = "hel" + "lo" >>> y = "hello" >>> x is y True >>> a = "hel" >>> b = "lo" >>> x = a + b >>> y = "hello" >>> x == y True >>> x is y False We assembled the string from two parts, and it ended up in a different object. Python did not think (and rightly so) to look it up among the existing strings. The essence of is (id) Python has a built-in function id. It returns an object's identifier, a certain number. It is guaranteed to be distinct for distinct objects within a single interpreter. In the CPython implementation it is simply the object's address in the interpreter's memory. So: a is b is the same as: id(a) == id(b) And that's all! An example to check: >>> x = 10.40 >>> y = 10.40 >>> x is y False >>> x == y True >>> id(x) 4453475504 >>> id(y) 4453475600 >>> id(x) == id(y) False >>> x = y >>> x is y True >>> id(x) 4453475600 >>> id(y) 4453475600 The values of the variables are equal, but their ids differ, and is yields False. As soon as we bound x to y, the references began to coincide. What can is be used for? When we know for certain that we want to check precisely the equality of references to objects (whether it is one and the same object in memory or different ones). is can also be used for comparison with None. None is a built-in constant, and there cannot be two Nones. >>> x is None False >>> x = None >>> x is None True Likewise for Ellipsis: >>> ... is Ellipsis True >>> x = ... >>> y = ... >>> x is y True I do not recommend using is for True and False, because it is shorter to write if x: than if x is True:. is can be used to compare types, with caution (ignoring inheritance, i.e. checking for an exact type match): >>> x = 10.5 >>> type(x) is float True With inheritance there can be a mix-up: >>> class Foo: ... ... >>> class Bar(Foo): ... ... >>> f = Foo() >>> b = Bar() >>> type(f) is Foo True >>> type(b) is Bar True >>> type(b) is Foo False >>> isinstance(b, Foo) True Even though Bar is a subclass of Foo, the types of the variables f and b do not match.
If it is important for us to take inheritance into account, write isinstance. A nuance: is not versus is (not) It is important to know that is not is a single, whole operator, analogous to id(x) != id(y). Whereas in the construct x is (not y), y is first logically negated, and then the plain is operator applies. An example of the trap: >>> x = 10 >>> x is not None True >>> x is (not None) False Comparing user-defined classes What follows concerns the ordinary == and !=. You can define the magic method __eq__, which provides the behavior when class instances are compared. If it is not implemented, objects will be compared by reference (as with is). >>> class Baz: ... ... >>> x = Baz() >>> y = Baz() >>> x == y False >>> x = y >>> x == y True If it is implemented, the __eq__ method will be called on the left operand. class Foo: def __init__(self, x): self.x = x def __eq__(self, other): print('Foo __eq__ {} and {}'.format(self, other)) return self.x == other.x >>> x = Foo(5) >>> y = Foo(5) >>> x == y Foo __eq__ <__main__.Foo object at 0x109e9c048> and <__main__.Foo object at 0x109e8a5c0> True The __ne__ method is responsible for implementing !=. By default it calls not x.__eq__(y). But it is recommended to implement both of them by hand, so that the comparison behavior is consistent and explicit. A question to ponder: what happens if we compare objects of different classes, and both classes implement __eq__? What happens if we implement __ne__ but do not implement __eq__? There is also a __cmp__ method, but that already goes beyond the scope of an article about is. Read about it on your own… Specially for the @pyway channel. Sets in Python A set is an unordered collection of unique (non-repeating) elements. The elements of a set in Python must be immutable, although the contents of the set itself can change: elements can be added to and removed from the set. Immutable sets are described at the end of this article.
CPython: internally, sets are implemented as hash tables that contain only keys without values, with some optimizations added that exploit the absence of values. A membership test takes O(1) time, since lookup of elements in a hash table also takes O(1). If you are curious how this is implemented in C, here is the link. Creating a set A set can be formed in several ways. The simplest is to list the elements, separated by commas, inside curly braces {}. A set may contain elements of different types; the main thing is that they be immutable. Therefore a tuple can be placed in a set, but a list cannot. >>> my_set = {1, 2, 3, 4} >>> my_hetero_set = {"abc", 3.14, (10, 20)} # a tuple is fine >>> my_invalid_set = {"abc", 3.14, [10, 20]} # a list is not allowed Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unhashable type: 'list' You can also use the built-in set function to create a set from another collection: a list, a tuple, or a dictionary. If it is a dictionary, the new set will be made up only of that dictionary's keys. You can even create a set from a string: each letter will be added (but only once): >>> my_set2 = set([11, 22, 33]) >>> my_set2 {33, 11, 22} >>> my_set3 = set((1, 2, 3)) >>> my_set3 {1, 2, 3} >>> my_set4 = set({"a": 10, "b": 20}) >>> my_set4 {'b', 'a'} >>> my_set5 = set("hello") >>> my_set5 {'h', 'l', 'e', 'o'} How do you create an empty set? {} gives us an empty dictionary, not a set. Therefore you must use set() with no arguments. >>> is_it_a_set = {} >>> type(is_it_a_set) <class 'dict'> >>> this_is_a_set = set() >>> type(this_is_a_set) <class 'set'> Modifying sets Sets can be modified by adding or removing elements. Since they are unordered, indexing makes no sense and is not supported: we cannot access the elements of a set by index the way we do with lists and tuples. A single element is added with the add() method.
Several elements, from one collection or from several collections, are added with the update() method: >>> my_set = {44, 55} >>> my_set.add(50) >>> my_set {50, 44, 55} >>> my_set.update([1, 2, 3]) >>> my_set {1, 2, 3, 44, 50, 55} >>> my_set.update([2, 3, 6], {1, 50, 60}) >>> my_set {1, 2, 3, 6, 44, 50, 55, 60} >>> my_set.update("string") >>> my_set {1, 2, 3, 6, 'i', 44, 'r', 50, 's', 55, 'n', 'g', 60, 't'} Naturally, duplicates are ignored when elements are added. Removing elements from a set The methods discard() and remove() exist for removing an element. They do the same thing, but if the element being removed is not in the set, discard() will silently leave the set unchanged, while remove() will raise an exception: >>> my_set = {1, 2, 3, 4, 5, 6} >>> my_set.discard(2) >>> my_set {1, 3, 4, 5, 6} >>> my_set.remove(4) >>> my_set {1, 3, 5, 6} >>> my_set.discard(10) >>> my_set {1, 3, 5, 6} >>> my_set.remove(10) Traceback (most recent call last): File "<stdin>", line 1, in <module> KeyError: 10 There is also the pop() method, which takes some (the first available) element of the set, removes it, and returns it as the result: >>> my_set = {3, 4, 5, 6, 1, 2} >>> my_set {1, 2, 3, 4, 5, 6} >>> my_set.pop() 1 >>> my_set {2, 3, 4, 5, 6} Finally, a set can be emptied (i.e. all its elements removed) with the clear() method: >>> my_set = {1, 2, 3} >>> my_set.clear() >>> my_set set() Membership testing Finding out whether an element is in a set is very easy with the in operator (or not in, if we want to confirm an element's absence): >>> s = {"banana", "apple"} >>> "banana" in s True >>> "tomato" not in s True That checks membership of a single element; if you need to find out whether one set is a subset of another, the in operator will not do: >>> {1, 2} in {1, 2, 3} False Here the < and > operators come in handy.
To get True, the "wide" side of the operator must hold a set that fully contains the set standing on the "narrow" side of the bracket: >>> {1, 2} < {1, 2, 3, 4} True >>> {5, 6, 7, 8} > {5, 8} True >>> {1, 2, 3} < {1, 2, 4} False Iterating over sets Running over the elements of a set is as easy as over the elements of other collections, with the for-in statement (the traversal order is not precisely defined): my_set = {"Moscow", "Paris", "London"} for elem in my_set: print(elem) Moscow London Paris Set operations The most interesting part is performing mathematical operations on sets. Consider two sets A and B: A = {1, 2, 3, 4, 5} B = {4, 5, 6, 7, 8} Union The union of sets is a set that contains all the elements of both sets. It is a commutative operation (swapping the operands changes nothing). In Python, use either the union() method or the vertical bar operator "|": >>> A = {1, 2, 3, 4, 5} >>> B = {4, 5, 6, 7, 8} >>> A | B {1, 2, 3, 4, 5, 6, 7, 8} >>> A.union(B) {1, 2, 3, 4, 5, 6, 7, 8} >>> B.union(A) {1, 2, 3, 4, 5, 6, 7, 8} Set intersection The intersection of sets is a set containing only the common elements, i.e. those present in both the first and the second set. Also a commutative operation. The intersection is computed with the intersection() method or the ampersand operator "&": >>> A = {1, 2, 3, 4, 5} >>> B = {4, 5, 6, 7, 8} >>> A & B {4, 5} >>> B & A {4, 5} >>> A.intersection(B) {4, 5} Set difference The difference of sets A and B is the set of elements of A that are not in B. Not a commutative operation! It is performed with the minus sign "-" or the difference() method: >>> A = {1, 2, 3, 4, 5} >>> B = {4, 5, 6, 7, 8} >>> A - B {1, 2, 3} >>> B - A {8, 6, 7} >>> A.difference(B) {1, 2, 3} >>> B.difference(A) {8, 6, 7} As you can see, the order of the operands makes a difference. Symmetric difference The symmetric difference is the union of the sets excluding their intersection. Put another way, it is the union of the two differences. It is a commutative operation.
Use the symmetric_difference() method or the caret operator "^": >>> A = {1, 2, 3, 4, 5} >>> B = {4, 5, 6, 7, 8} >>> A ^ B {1, 2, 3, 6, 7, 8} >>> B ^ A {1, 2, 3, 6, 7, 8} >>> A.symmetric_difference(B) {1, 2, 3, 6, 7, 8} Note how this operation is equivalent to the definitions I gave at the beginning of this section: >>> A ^ B == (A - B) | (B - A) # union of the plain differences True >>> A ^ B == (A | B) - (A & B) # union minus intersection True Miscellaneous The standard functions all(), any(), enumerate(), len(), max(), min(), sorted(), sum() can be applied to sets. Look up their descriptions here. Other methods of the set class: copy() Returns a copy of the set difference_update(other_set) Removes from this set all elements present in the set passed as the argument intersection_update(other_set) Updates this set with the elements of the intersection of the sets isdisjoint(other_set) Returns True if the sets do not intersect issubset(other_set) Returns True if this set is a subset of the other issuperset(other_set) Returns True if this set is a superset of the other symmetric_difference_update(other_set) Updates this set to the symmetric difference of this set and the other one Frozen set A frozen set is also a built-in collection in Python. While it has the characteristics of an ordinary set, a frozen set cannot be changed after creation (just as a tuple is the immutable version of a list). Being mutable, ordinary sets are unhashable (unhashable type), which means they cannot be used as dictionary keys or as elements of other sets. Frozen sets are hashable, so they can be dictionary keys and elements of other sets. Frozen sets are created with the frozenset() function, whose argument is another collection.
Examples: >>> A = frozenset({1, 2, 3}) >>> A frozenset({1, 2, 3}) >>> B = frozenset(['a', 'b', 'cd']) >>> B frozenset({'cd', 'b', 'a'}) All the operations described above can be performed on frozen sets, except those that modify the set's contents. Moreover, the results of the set-logic operations will themselves be frozen sets: >>> A = frozenset('hello') >>> B = frozenset('world') >>> A | B frozenset({'o', 'r', 'd', 'e', 'l', 'h', 'w'}) >>> A & B frozenset({'o', 'l'}) >>> A ^ B frozenset({'d', 'e', 'h', 'r', 'w'}) Now you know a lot about sets in Python. Specially for the PyWay channel.
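The post notes that frozensets, being hashable, can serve as dictionary keys. A small illustration of that use (the station names and distances here are made up):

```python
# Frozensets are hashable, so they can key a dictionary, e.g. mapping
# unordered pairs of cities to a distance, where {a, b} == {b, a}.
distances = {
    frozenset({"Moscow", "Paris"}): 2487,
    frozenset({"Paris", "London"}): 344,
}

# Lookup order does not matter, because the key is an unordered set.
print(distances[frozenset({"Paris", "Moscow"})])  # 2487
```

An ordinary set in the same position would raise TypeError: unhashable type: 'set'.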
So what's the difference... Discussion in 'Xbox 360 - Games & Content' started by kid sampson, Apr 7, 2010. Apr 7, 2010 1. kid sampson OP Member kid sampson GBAtemp Regular Joined: Jul 20, 2006 Messages: 233 Country: United States between XexMenu and Freestyle Dash, and which do you use? Also can you (and would one need to) have them both installed simultaneously? I'm interested in the dash because it seems like it would enable me to control fan speed?   2. hundshamer Blacklisted Trader hundshamer GBAtemp Advanced Maniac Joined: May 22, 2009 Messages: 1,810 Country: United States I have both installed. Freestyle is "prettier," and seems to have a few different functions. There is no real need to have them both installed but it doesn't hurt.   3. kid sampson OP Member kid sampson GBAtemp Regular Joined: Jul 20, 2006 Messages: 233 Country: United States Thanks for the reply. Does your box automatically boot into freestyle or does it need to be launched like an arcade game / demo?   4. FAST6191 Reporter FAST6191 Techromancer pip Joined: Nov 21, 2005 Messages: 22,726 Country: United Kingdom None of the replacement dashboards boots to the dash by default but a few weeks back there was an app called dash launch to change it to boot instead of the stock dash (technically it all still goes on and pressing the middle button and then y can return you to it). http://www.xbins.org/nfo.php?file=xboxnfo1799.nfo As for me I have both but due to me normally doing NXE installs and not bothering with directories when it comes to homebrew xexmenu is my chosen app. I have that installed as a demo and usually it is what is sitting in my DVD drive.   5. hundshamer Blacklisted Trader hundshamer GBAtemp Advanced Maniac Joined: May 22, 2009 Messages: 1,810 Country: United States Just as FAST said. There is an app to launch directly to the program of your choosing. I personally like to go through NXE as well because I use my box to stream HD movies as much as I play games on it. 
I have XEXMenu installed as a demo and Freestyle shows up as a 360 game like the Gameroom. I usually use Freestyle as it has all the functionality of XEXMenu and then some. Both can launch xex files (backups, emulators, and such), both can launch xbe files (original Xbox games), but one of the things Freestyle has over XEXMenu is that it can launch XBLA from a USB device and is not restricted to the 360's HDD.  
I’ve been working for a while now to clean up a legacy PHP application by breaking up extremely large PHP files into separate, much smaller class files. While the code base is becoming much cleaner and easier to manage, it is full of require statements. Personally I don’t think require statements are necessarily a bad thing, but they become another thing to manage and potentially something that can break. I’ve been making great progress on cleaning up our codebase, but I wasn’t really sure about some of the best practices in PHP when building a framework, so I purchased “Modernizing Legacy Applications in PHP”, which has been extremely helpful. One of the first things it talks about is autoloading. To eliminate all of these require statements you can call an autoloader method as one of the first things your app runs. Be sure to use spl_autoload_register and not the old __autoload function. I’m actually autoloading two different directories. One is a directory called app and the other is a directory in lib/framework. I put my autoloader inside of lib/framework along with my other code like my model, controller, and view classes that I extend. The problem that I had was that inside of my app directory I also have my assets folder and a templates directory, which contain mostly HTML code and are not class files. So I needed to exclude my app/assets directory and my app/templates directory inside of my autoloader. To exclude my app/assets directory I simply check to make sure the file ends with a .php extension before requiring it. The problem is that this method would still include my app/templates files because they end in .php. So to exclude this directory I simply check to see if the path string contains 'Templates', and if it does I don’t require the PHP file. Here is my autoloader file that I’m calling from setup.php:

<?php
namespace Framework;

class Autoloader
{
    public function loadApp($class)
    {
        $dir = __DIR__ . '/../../app';
        $this->load($class, $dir);
    }

    public function loadFramework($class)
    {
        $dir = __DIR__ . '/..';
        $this->load($class, $dir);
    }

    // private

    private function load($class, $dir)
    {
        // strip off any leading namespace separator from PHP 5.3
        $class = ltrim($class, '\\');

        // the eventual file path
        $subpath = '';

        // is there a PHP 5.3 namespace separator?
        $pos = strrpos($class, '\\');
        if ($pos !== false) {
            // convert namespace separators to directory separators
            $ns = substr($class, 0, $pos);
            $subpath = str_replace('\\', DIRECTORY_SEPARATOR, $ns) . DIRECTORY_SEPARATOR;
            // remove the namespace portion from the final class name portion
            $class = substr($class, $pos + 1);
        }

        // convert underscores in the class name to directory separators
        $subpath .= str_replace('_', DIRECTORY_SEPARATOR, $class);

        // prefix with the central directory location and suffix with .php,
        // then require it.
        $file = $dir . DIRECTORY_SEPARATOR . $subpath . '.php';
        if (is_file($file)) {
            if (strpos($file, 'Templates') !== false) {
                // skip template files; they are not classes
            } else {
                require $file;
            }
        }
    }
}

And here is my lib/setup.php file:

<?php
require_once __DIR__ . '/framework/autoloader.php';

$autoloader = new \Framework\Autoloader();
spl_autoload_register(array($autoloader, 'loadFramework'));
spl_autoload_register(array($autoloader, 'loadApp'));
Defining projection in arc toolbox (JSON to shapefile) in python I have an arc toolbox which converts a JSON txt file to a shapefile, but we have to define the projection. How can I hard-code the defined projection so that the user doesn't have to input the projection? ... How to send long JSON objects (polygon geometry, table rows) in POST request to Geoprocessing Service? I am seeking to understand my options for sending long JSON objects in a request to a Geoprocessing Service. In particular, I am looking for some example code to illustrate how to include two long ...
West Nile virus (WNV) is a prominent mosquito-borne flavivirus that causes febrile illness in humans. To infect host cells, WNV virions first bind to plasma membrane receptors, then initiate membrane fusion following endocytosis. The viral transmembrane E protein, triggered by endosomal pH, catalyzes fusion while undergoing a dimer-to-trimer transition. Previously, single-particle WNV fusion data was interrogated with a stochastic cellular automaton simulation, which modeled the E proteins during the fusion process. The results supported a linear fusion mechanism, with E protein trimerization being rate-limiting. Here, we present corrections to the previous simulation, and apply them to the WNV fusion data. We observe that a linear mechanism is no longer sufficient to fit the data. Instead, an off-pathway state is necessary; these results are corroborated by per virus chemical kinetics modeling. When compared with a similar Zika virus fusion model, this suggests that off-pathway fusion mechanisms may characterize flaviviruses more broadly.
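The mechanistic distinction the abstract draws, a linear fusion pathway versus one with an off-pathway state, can be illustrated with a toy rate model. This is a generic sketch with arbitrary, made-up rate constants, not the authors' fitted per-virus model: it Euler-integrates S1 -> S2 -> Fused, optionally adding a reversible off-pathway trap S1 <-> Off.

```python
def integrate(rates, with_off_path, t_end=50.0, dt=0.001):
    """Euler-integrate a toy fusion scheme and return the fused fraction.

    rates = (k1, k2, k_off, k_on): S1 -> S2 (k1), S2 -> Fused (k2),
    and, if enabled, S1 -> Off (k_off) with slow return Off -> S1 (k_on).
    All rate constants are illustrative, not fitted values."""
    k1, k2, k_off, k_on = rates
    s1, s2, off, fused = 1.0, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        to_s2 = k1 * s1 * dt
        to_fused = k2 * s2 * dt
        to_off = (k_off * s1 * dt) if with_off_path else 0.0
        back = (k_on * off * dt) if with_off_path else 0.0
        s1 += back - to_s2 - to_off
        s2 += to_s2 - to_fused
        off += to_off - back
        fused += to_fused
    return fused

rates = (0.5, 0.2, 0.3, 0.01)
print(integrate(rates, False))  # linear chain: nearly all particles fuse
print(integrate(rates, True))   # off-pathway trap lowers the final yield
```

With the trap enabled, a fraction of particles is diverted and only slowly returns, so the fused fraction at the end of the observation window is visibly lower, the qualitative signature that distinguishes the two mechanisms in single-particle data.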
Updated 25 July 2012

Pacemakers and ICDs

Definition
A pacemaker is a small, battery-operated device implanted into the body to regulate the heartbeat.

Normal heartbeat regulation – and problems encountered
The regular synchronised contraction of all four chambers of the heart is regulated by electrical impulses generated within a group of specialised cells in the heart, called the sino-atrial node (SA Node). The impulse generated at the SA Node causes the atria to contract. It then passes to the “sub-station”, the atrio-ventricular node, which carries it into the conduction system of the ventricles. These impulses are carried along special conducting tissues – similar to an electrical wiring system – called the Bundle of His and the Purkinje fibres. These fibres are arranged for optimum performance of the heart; for example, the atria (the receiving chambers) contract a split second before the ventricles (the pumping chambers), allowing them to fill the ventricles as much as possible between beats.

The conduction system
The SA Node is sensitive to changes in the body – heat (such as a fever), exercise or stimulant drugs will cause it to “fire” more often, and speed up the pulse rate. In the same way, other factors can slow the pulse. This responsiveness is vital, allowing the body to adapt to the varying demands made, and ensuring optimum cardiac function. The SA node and all the components of the conduction system may also be affected by degenerative changes and diseases, which can affect only certain parts, or all of the system. The result is a disruption in the normal rhythm of the heart, called a dysrhythmia. This problem may be intermittent at first, and may later become permanent. There are different types of rhythm disturbance, each with its own problems.
For example, atrial fibrillation (AF) can leave a person feeling weak and unwell, and predisposes towards clots forming in the heart chamber itself. Parts of these clots can break off and be carried to the lungs or brain, causing a stroke or a (possibly fatal) lung embolus. Ventricular fibrillation causes the ventricles to quiver ineffectively instead of pumping efficiently, and is rapidly fatal if unrecognised and not immediately corrected by defibrillation.

Why are pacemakers inserted?
From the above description of the normal conducting system, it is clear that problems may arise at any point. Regardless of where the problem arises, or what its cause is, the end result is an abnormal pulse – too slow or dangerously irregular. Some patients do not respond well to medical treatment, and others may have a condition which is simply not treatable by any medical means. Yet another group of patients may have an irregular pulse which tends to degenerate into a potentially fatal fibrillation. For all of these patients, a pacemaker can be used. A pacemaker does not cure the underlying condition, but allows the patient to live with it.

What does a pacemaker consist of?
Simply put, a pacemaker consists of three basic elements:
1. A generator, which combines the power source – a high-tech battery – and the computerised information controlling the work of the pacemaker. These generators are compact and light, weighing about 30-40 grams, and have a life span of 7-10 years, after which they need replacing.
2. Leads carrying electrical impulses to and from the heart chambers.
3. Electrode(s) and sensors at the tip of the lead(s) which connect with the heart. The sensors are used to detect the intrinsic electrical activity (rhythm) of the heart, while the electrodes can deliver to the heart muscle the electrical impulse generated by the pacemaker’s battery.

There are different types of pacemakers, each for a specific category of problem.
Examples of these are:
• Single chamber pacemakers – a single electrode stimulates a chamber to contract. Placed in a ventricle, this may be used to prevent a very slow heart rate. Used in the atrium, it may be effective if the sino-atrial node malfunctions but the rest of the conduction system is intact and can be used to conduct impulses from the pacemaker.
• Dual chamber – here, electrodes use low-energy impulses to stimulate both the atria and ventricles to contract, mimicking as far as possible the natural cycle of events.
• Biventricular – this paces both ventricles, and is technologically more complex and difficult to insert. It is starting to be used more often in the management of heart failure in selected patients.
• Implantable cardioverter-defibrillators (ICDs) – these can function as pacemakers, but can also detect the onset of dangerous rhythm disturbances. In these instances, they can function as defibrillators. This is used for certain categories of patient only, mainly those whose condition places them at risk of fatal fibrillation.

Pacemakers can be programmed to sense certain abnormal rhythms, and to respond in a variety of ways. For example, they can be set to maintain a fixed pulse rate. Or they can be programmed to pace only if the pulse rate drops below a certain speed. ICDs can be programmed to detect the abnormal rhythm, start to pace at low energy and then, if the abnormal rhythm persists or degenerates into fibrillation, deliver a corrective, high-energy shock.

Pacemaker insertion – procedure, complications and risks
Insertion of a pacemaker is a surgical procedure, performed under sterile conditions in a special theatre. An area of skin just under the collarbone is deadened using local anaesthetic. A small incision is made, and a “pocket” is hollowed out of the tissues to receive the generator/battery.
A nearby large vein is then used to thread the electrodes into the heart; special X-ray monitors are used to ensure the correct positioning of the tips of these electrodes. When correctly positioned, the electrodes are connected to the generator. The skin is then closed.

Comparison of an implantable cardioverter defibrillator and a pacemaker

As with any operation, there are possible complications. Because this procedure does not need general anaesthesia, the possible complications of breathing problems do not apply, but the basic risk of bleeding remains. There are specific complications associated with pacemaker insertion, though, and they include:
• uncontrolled bleeding at the vein puncture site
• puncture of the heart leading to blood collecting in the pericardial sac – this may need urgent surgical drainage. This is an uncommon problem.
• abnormal heart rhythms caused by the electrode tip irritating the heart muscle during insertion/placement
• pneumothorax (dropped lung), which may occur especially in very thin patients. Fortunately not a common problem.
• infection, the most feared complication, which may affect either the pocket or the leads, or both. The source of the infection may be the procedure and apparatus itself. Alternatively, infections elsewhere in the body, such as a bladder infection, can lead to bacteria settling on the equipment. The infection may then track back into the heart, where the deadly condition of endocarditis may develop.

There are also certain circumstances and underlying medical conditions of the patient which increase the risk of complications occurring.
These include:
• diabetes mellitus
• underlying malignancy
• advanced patient age
• recent/current use of anti-coagulants or glucocorticoids (warfarin, cortisone)
• recent surgery involving the device, which greatly increases the risk of infection
• operator inexperience, which increases the risk of all complications

Precautions
Because pacemakers are electromagnetic devices equipped with sensors, and are designed to respond to information detected by these sensors, there is a risk of malfunction if other electromagnetic apparatus in the environment interferes. Most modern pacemakers are adequately shielded, so that patients can lead normal lives with only a few restrictions.

Household/kitchen equipment is generally not a threat. Most microwaves and TV sets do not interfere with the working of the pacemaker. The new induction cooker tops may cause problems if a pot is not symmetrically placed on the heating area, or if the patient stands too close to the stove. Cellphones in normal use cause no problems. However, cellphones should not be carried in a breast pocket near the generator – the greatest risk is during the ringing phase, when the signal strength emitted is at its maximum and may interfere with the generator. Security systems such as those used in banks, shopping centres and airports are safe, provided that the patient walks briskly through them, and does not linger or get close to the source of the signal.

MRI scans are contra-indicated for patients with pacemakers. Fortunately, there are many other excellent types of scans and imaging procedures available for diagnostic purposes. Shock lithotripsy – the use of external high-energy sound waves to crumble kidney stones – is not recommended for patients with ICDs. Some of these have sensitive piezoelectric crystals which can also be shattered, rendering the device useless.

Other considerations
Depending on the original cardiac problem, blood-thinning medication may need to be used.
This may increase the likelihood of bleeding near the pocket, with even minor trauma. Regular checkups are needed to verify that the pacemaker is working correctly, and that sensors and leads are intact. The generator/battery unit will need replacing after several years. This means another minor operation, which carries further risks of complications. Leads are not replaced unless they are proven to be faulty or infected.

Because the whole pacemaker unit constitutes foreign material implanted directly into the heart, patients must inform any attending doctor or dentist of its presence. Ideally, patients should carry a MedicAlert tag of some kind. Any patient with a pacemaker who experiences vague symptoms of fever, fatigue and general malaise must be regarded as having primary or secondary infection until disproved. Signs of local infection of the pocket may be found. If not, a blood culture test will be needed to identify the bacteria in the blood, and determine their sensitivity to antibiotics. If bacteria are present in the blood, they may infect a heart valve, causing infective endocarditis, a serious, potentially fatal condition if untreated.
Enhancement of the ANSI SQL Implementation of PostgreSQL

Diplomarbeit

Enhancement of the ANSI SQL Implementation of PostgreSQL

ausgeführt am Institut für Informationssysteme der Technischen Universität Wien unter der Anleitung von O.Univ.Prof. Dr. Georg Gottlob und Univ.Ass. Mag. Katrin Seyr als verantwortlicher Universitätsassistentin

durch Stefan Simkovics
Paul Petersgasse 36
A-2384 Breitenfurt

November 29, 1998

Abstract

PostgreSQL is an object-relational database management system that runs on almost any UNIX-based operating system and is distributed as C source code. It is neither freeware nor public domain software. It is copyrighted by the University of California but may be used, modified and distributed as long as the licensing terms of the copyright are accepted.

As the name already suggests, PostgreSQL uses an extended subset of the SQL92 standard as the query language. At the time of writing this document the actual version of PostgreSQL was v6.3.2. In this version the implemented part of SQL did not support some important features included in the SQL92 standard. Two of the not supported features were:
• the having clause
• the support of the set-theoretic operations intersect and except

It was the author’s task to add the support for the two missing features to the existing source code. Before the implementation could be started, an intensive study of the relevant parts of the SQL92 standard and the implementation of the existing features of PostgreSQL had been necessary. This document will present not only the results of the implementation but also the knowledge collected while studying the SQL language and the source code of the already existing features.
Chapter 1 presents an overview of the SQL92 standard. It gives a description of the relational data model and the theoretical (mathematical) background of SQL. Next the SQL language itself is described. The most important SQL statements are presented and a lot of examples are included for better understanding. The information given in this chapter has mainly been taken from the books [DATE96], [DATE94] and [ULL88].

Chapter 2 gives a description on how to use PostgreSQL. First it is shown how the backend (server) can be started and how a connection from a client to the server can be established. Next some basic database management tasks like creating a database, creating a table etc. are described. Finally some of PostgreSQL’s special features like user-defined functions, user-defined types, the rule system etc. are presented and illustrated using a lot of examples. The information given in chapter 2 has mainly been taken from the PostgreSQL documentation (see [LOCK98]) and the PostgreSQL manual pages, and was verified by the author throughout various examples which have also been included.

Chapter 3 concentrates on the internal structure of the PostgreSQL backend. First the stages that a query has to pass in order to retrieve a result are described, using a lot of figures to illustrate the involved data structures. The information given in that part of chapter 3 has been collected while intensively studying the source code of the relevant parts of PostgreSQL. This intensive and detailed examination of the source code had been necessary to be able to add the missing functionality. The knowledge gathered during that period of time has been summarized here in order to make it easier for programmers who are new to PostgreSQL to find their way in.

The following sections cover the author’s ideas for the implementation of the two missing SQL features mentioned above and a description of the implementation itself.
Section 3.7 deals with the implementation of the having logic. As mentioned earlier, the having logic is one of the two missing SQL92 features that the author had to implement. The first parts of the chapter describe how aggregate functions are realized in PostgreSQL, and after that a description of the enhancements applied to the code of the planner/optimizer and the executor in order to realize the new functionality is given. The functions and data structures used and added to the source code are also handled here.

Section 3.8 deals with the implementation of the intersect and except functionality, which was the second missing SQL92 feature that had to be added by the author. First a theoretical description of the basic idea is given. The intersect and except logic is implemented using a query rewrite technique (i.e. a query involving an intersect and/or except operation is rewritten to a semantically equivalent form that does not use these set operations any more). After presenting the basic idea, the changes made to the parser and the rewrite system are described and the added functions and data structures are presented.
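Both features discussed above (the having clause and the set-theoretic intersect/except) are standard in modern SQL engines. The following sketch is not part of the thesis: it uses Python's sqlite3 rather than PostgreSQL v6.3.2, and exercises the two features against the SELLS table of the thesis's running example (figure 1.1):

```python
import sqlite3

# The SELLS table from figure 1.1: which supplier (sno) sells which part (pno).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sells (sno INTEGER, pno INTEGER);
    INSERT INTO sells VALUES (1,1),(1,2),(2,4),(3,1),(3,3),(4,2),(4,3),(4,4);
""")

# HAVING filters *groups* after aggregation: suppliers selling more than 2 parts.
rows = con.execute("""
    SELECT sno, COUNT(*) FROM sells
    GROUP BY sno
    HAVING COUNT(*) > 2
""").fetchall()
print(rows)  # [(4, 3)] -- only supplier 4 sells three parts

# INTERSECT / EXCEPT are the set-theoretic operations the thesis adds:
# parts sold by both supplier 1 and supplier 4, and by 1 but not by 4.
both = con.execute("""
    SELECT pno FROM sells WHERE sno = 1
    INTERSECT
    SELECT pno FROM sells WHERE sno = 4
""").fetchall()
only1 = con.execute("""
    SELECT pno FROM sells WHERE sno = 1
    EXCEPT
    SELECT pno FROM sells WHERE sno = 4
""").fetchall()
print(both, only1)  # [(2,)] [(1,)]
```

SQLite evaluates these operators natively; the thesis obtains the same semantics in PostgreSQL v6.3.2 by rewriting intersect/except queries into semantically equivalent ones that avoid the set operators.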
Contents

1 SQL  9
  1.1 The Relational Data Model  10
    1.1.1 Formal Notion of the Relational Data Model  10
      Domains vs. Data Types  11
  1.2 Operations in the Relational Data Model  11
    1.2.1 Relational Algebra  11
    1.2.2 Relational Calculus  14
      Tuple Relational Calculus  14
    1.2.3 Relational Algebra vs. Relational Calculus  14
  1.3 The SQL Language  14
    1.3.1 Select  15
      Simple Selects  15
      Joins  16
      Aggregate Operators  17
      Aggregation by Groups  17
      Having  19
      Subqueries  19
      Union, Intersect, Except  20
    1.3.2 Data Definition  21
      Create Table  21
      Data Types in SQL  21
      Create Index  22
      Create View  22
      Drop Table, Drop Index, Drop View  23
    1.3.3 Data Manipulation  23
      Insert Into  23
      Update  24
      Delete  24
    1.3.4 System Catalogs  24
    1.3.5 Embedded SQL  24

2 PostgreSQL from the User's Point of View  26
  2.1 A Short History of PostgreSQL  26
  2.2 An Overview on the Features of PostgreSQL  26
  2.3 Where to Get PostgreSQL  27
      Copyright of PostgreSQL  27
      Support for PostgreSQL  27
  2.4 How to use PostgreSQL  28
    2.4.1 Starting The Postmaster  28
    2.4.2 Creating a New Database  28
    2.4.3 Connecting To a Database  29
    2.4.4 Defining and Populating Tables  29
    2.4.5 Retrieving Data From The Database  30
  2.5 Some of PostgreSQL's Special Features in Detail  31
    2.5.1 Inheritance  31
    2.5.2 User Defined Functions  33
      Query Language (SQL) Functions  33
      Programming Language Functions  35
    2.5.3 User Defined Types  36
    2.5.4 Extending Operators  39
    2.5.5 Extending Aggregates  40
    2.5.6 Triggers  43
    2.5.7 Server Programming Interface (SPI)  46
    2.5.8 Rules in PostgreSQL  49

3 PostgreSQL from the Programmer's Point of View  51
  3.1 The Way of a Query  51
  3.2 How Connections are Established  52
  3.3 The Parser Stage  52
    3.3.1 Parser  53
    3.3.2 Transformation Process  54
  3.4 The PostgreSQL Rule System  58
    3.4.1 The Rewrite System  58
      Techniques To Implement Views  58
  3.5 Planner/Optimizer  59
    3.5.1 Generating Possible Plans  59
    3.5.2 Data Structure of the Plan  59
  3.6 Executor  60
  3.7 The Realization of the Having Clause  62
    3.7.1 How Aggregate Functions are Implemented  62
      The Parser Stage  62
      The Rewrite System  63
      Planner/Optimizer  63
      Executor  65
    3.7.2 How the Having Clause is Implemented  66
      The Parser Stage  66
      The Rewrite System  68
      Planner/Optimizer  80
      Executor  87
  3.8 The Realization of Union, Intersect and Except  89
    3.8.1 How Unions have been Realized Until Version 6.3.2  91
      The Parser Stage  91
      The Rewrite System  92
      Planner/Optimizer  92
      Executor  93
    3.8.2 How Intersect, Except and Union Work Together  93
      Set Operations as Propositional Logic Formulas  95
    3.8.3 Implementing Intersect and Except Using the Union Capabilities  95
      Parser  98
      Transformations  105
      The Rewrite System  106

List of Figures

1.1 The suppliers and parts database  10
3.1 How a connection is established  52
3.2 TargetList and FromList for query of example 3.1  54
3.3 WhereClause for query of example 3.1  55
3.4 Transformed TargetList and FromList for query of example 3.1  56
3.5 Transformed where clause for query of example 3.1  57
3.6 Plan for query of example 3.1  61
3.7 Querytree built up for the query of example 3.2  63
3.8 Plantree for the query of example 3.2  64
3.9 Data structure handed back by the parser  92
3.10 Plan for a union query  93
3.11 Operator tree for …  101
3.12 Data structure handed back by SelectStmt rule  102
3.13 Data structure of … after transformation to DNF  107
3.14 Data structure of … after query rewriting  109

Chapter 1
SQL

SQL has become one of the most popular relational query languages all over the world. The name “SQL” is an abbreviation for Structured Query Language. In 1974 Donald Chamberlin and others defined the language SEQUEL (Structured English Query Language) at IBM Research. This language was first implemented in an IBM prototype called SEQUEL-XRM in 1974-75. In 1976-77 a revised
version of SEQUEL called SEQUEL/2 was defined and the name was changed to SQL subsequently.

A new prototype called System R was developed by IBM in 1977. System R implemented a large subset of SEQUEL/2 (now SQL) and a number of changes were made to SQL during the project. System R was installed in a number of user sites, both internal IBM sites and also some selected customer sites. Thanks to the success and acceptance of System R at those user sites, IBM started to develop commercial products that implemented the SQL language based on the System R technology.

Over the next years IBM and also a number of other vendors announced SQL products such as SQL/DS (IBM), DB2 (IBM), ORACLE (Oracle Corp.), DG/SQL (Data General Corp.) and SYBASE (Sybase Inc.).

SQL is also an official standard now. In 1982 the American National Standards Institute (ANSI) chartered its Database Committee X3H2 to develop a proposal for a standard relational language. This proposal was ratified in 1986 and consisted essentially of the IBM dialect of SQL. In 1987 this ANSI standard was also accepted as an international standard by the International Organization for Standardization (ISO). This original standard version of SQL is often referred to, informally, as “SQL/86”. In 1989 the original standard was extended and this new standard is often, again informally, referred to as “SQL/89”. Also in 1989, a related standard called Database Language Embedded SQL was developed.
The ISO and ANSI committees have been working for many years on the definition of a greatly expanded version of the original standard, referred to informally as “SQL2” or “SQL/92”. This version became a ratified standard – “International Standard ISO/IEC 9075:1992, Database Language SQL” – in late 1992. “SQL/92” is the version normally meant when people refer to “the SQL standard”. A detailed description of “SQL/92” is given in [DATE96]. At the time of writing this document a new standard informally referred to as “SQL3” is under development. It is planned to make SQL a turing-complete language, i.e. all computable queries (e.g. recursive queries) will be possible. This is a very complex task and therefore the completion of the new standard can not be expected before 1999.

1.1 The Relational Data Model

As mentioned before, SQL is a relational language. That means it is based on the “relational data model” first published by E. F. Codd in 1970. We will give a formal description of the relational model in section 1.1.1 (Formal Notion of the Relational Data Model) but first we want to have a look at it from a more intuitive point of view.

A relational database is a database that is perceived by its users as a collection of tables (and nothing else but tables). A table consists of rows and columns where each row represents a record and each column represents an attribute of the records contained in the table. Figure 1.1 shows an example of a database consisting of three tables:

• SUPPLIER is a table storing the number (SNO), the name (SNAME) and the city (CITY) of a supplier.
• PART is a table storing the number (PNO), the name (PNAME) and the price (PRICE) of a part.
• SELLS stores information about which part (PNO) is sold by which supplier (SNO). It serves in a sense to connect the other two tables together.
SUPPLIER:
 SNO | SNAME | CITY
-----+-------+--------
   1 | Smith | London
   2 | Jones | Paris
   3 | Adams | Vienna
   4 | Blake | Rome

PART:
 PNO | PNAME | PRICE
-----+-------+-------
   1 | Screw |    10
   2 | Nut   |     8
   3 | Bolt  |    15
   4 | Cam   |    25

SELLS:
 SNO | PNO
-----+-----
   1 |  1
   1 |  2
   2 |  4
   3 |  1
   3 |  3
   4 |  2
   4 |  3
   4 |  4

Figure 1.1: The suppliers and parts database

The tables PART and SUPPLIER may be regarded as entities and SELLS may be regarded as a relationship between a particular part and a particular supplier.

As we will see later, SQL operates on tables like the ones just defined but before that we will study the theory of the relational model.

1.1.1 Formal Notion of the Relational Data Model

The mathematical concept underlying the relational model is the set-theoretic relation which is a subset of the Cartesian product of a list of domains. This set-theoretic relation gives the model its name (do not confuse it with the relationship from the Entity-Relationship model). Formally a domain is simply a set of values. For example the set of integers is a domain. Also the set of character strings of length 20 and the real numbers are examples of domains.

Definition 1.1 The Cartesian product of domains D1, D2, …, Dk, written D1 × D2 × … × Dk, is the set of all k-tuples (v1, v2, …, vk) such that v1 ∈ D1, v2 ∈ D2, …, vk ∈ Dk.

For example, when we have k = 2, D1 = {0, 1} and D2 = {a, b, c}, then D1 × D2 is {(0, a), (0, b), (0, c), (1, a), (1, b), (1, c)}.

Definition 1.2 A relation is any subset of the Cartesian product of one or more domains: R ⊆ D1 × D2 × … × Dk.

For example, {(0, a), (0, b), (1, a)} is a relation; it is in fact a subset of D1 × D2 mentioned above. The members of a relation are called tuples. Each relation of some Cartesian product D1 × D2 × … × Dk is said to have arity k and is therefore a set of k-tuples.
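The Cartesian-product and relation definitions can be checked mechanically. A small sketch (Python, not part of the thesis) builds the example product D1 × D2 and confirms that the example relation is one of its subsets:

```python
from itertools import product

# Domains from the text's example: D1 = {0, 1} and D2 = {a, b, c}.
D1 = {0, 1}
D2 = {"a", "b", "c"}

# Definition 1.1: the Cartesian product is the set of all 2-tuples (v1, v2).
cartesian = set(product(D1, D2))

# Definition 1.2: a relation is any subset of that product.
relation = {(0, "a"), (0, "b"), (1, "a")}

print(len(cartesian))         # 6 tuples, each of arity 2
print(relation <= cartesian)  # True: the example relation is a subset
```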
A relation can be viewed as a table (as we already did; remember figure 1.1, The suppliers and parts database) where every tuple is represented by a row and every column corresponds to one component of a tuple. Giving names (called attributes) to the columns leads to the definition of a relation scheme.

Definition 1.3 A relation scheme R is a finite set of attributes A1, A2, …, Ak. There is a domain Di for each attribute Ai where the values of the attributes are taken from. We often write a relation scheme as R(A1, A2, …, Ak).

Note: A relation scheme is just a kind of template whereas a relation is an instance of a relation scheme. The relation consists of tuples (and can therefore be viewed as a table); not so the relation scheme.

Domains vs. Data Types

We often talked about domains in the last section. Recall that a domain is, formally, just a set of values (e.g., the set of integers or the real numbers). In terms of database systems we often talk of data types instead of domains. When we define a table we have to make a decision about which attributes to include. Additionally we have to decide which kind of data is going to be stored as attribute values. For example the values of SNAME from the table SUPPLIER will be character strings, whereas SNO will store integers. We define this by assigning a data type to each attribute. The type of SNAME will be VARCHAR(20) (this is the SQL type for character strings of length 20), the type of SNO will be INTEGER. With the assignment of a data type we also have selected a domain for an attribute. The domain of SNAME is the set of all character strings of length 20; the domain of SNO is the set of all integer numbers.
1.2 Operations in the Relational Data Model

In section 1.1.1 we defined the mathematical notion of the relational model. Now we know how the data can be stored using a relational data model but we do not know what to do with all these tables to retrieve something from the database yet. For example somebody could ask for the names of all suppliers that sell the part ’Screw’. Therefore two rather different kinds of notations for expressing operations on relations have been defined:

• The Relational Algebra, which is an algebraic notation, where queries are expressed by applying specialized operators to the relations.
• The Relational Calculus, which is a logical notation, where queries are expressed by formulating some logical restrictions that the tuples in the answer must satisfy.

1.2.1 Relational Algebra

The Relational Algebra was introduced by E. F. Codd in 1972. It consists of a set of operations on relations:

• SELECT (σ): extracts tuples from a relation that satisfy a given restriction. Let R be a table that contains an attribute A. σ_{A=a}(R) = {t ∈ R ∣ t(A) = a}, where t denotes a tuple of R and t(A) denotes the value of attribute A of tuple t.

• PROJECT (π): extracts specified attributes (columns) from a relation. Let R be a relation that contains an attribute X. π_X(R) = {t(X) ∣ t ∈ R}, where t(X) denotes the value of attribute X of tuple t.

• PRODUCT (×): builds the Cartesian product of two relations. Let R be a table with arity k1 and let S be a table with arity k2. R × S is the set of all (k1 + k2)-tuples whose first k1 components form a tuple in R and whose last k2 components form a tuple in S.

• UNION (∪): builds the set-theoretic union of two tables. Given the tables R and S (both must have the same arity), the union R ∪ S is the set of tuples that are in R or S or both.

• INTERSECT (∩): builds the set-theoretic intersection of two tables. Given the tables R and S, R ∩ S is the set of tuples that are in R and in S. We again require that R and S have the same arity.
• DIFFERENCE (− or ∖): builds the set difference of two tables. Let R and S again be two tables with the same arity. R − S is the set of tuples in R but not in S.

• JOIN (⨝): connects two tables by their common attributes. Let R be a table with the attributes A, B and C and let S be a table with the attributes C, D and E. There is one attribute common to both relations, the attribute C. R ⨝ S = π_{R.A, R.B, R.C, S.D, S.E}(σ_{R.C = S.C}(R × S)). What are we doing here? We first calculate the Cartesian product R × S. Then we select those tuples whose values for the common attribute C are equal (R.C = S.C). Now we have a table that contains the attribute C two times and we correct this by projecting out the duplicate column.

Example 1.1 Let’s have a look at the tables that are produced by evaluating the steps necessary for a join. Let the following two tables be given:

R  A | B | C       S  C | D | E
  ---+---+---        ---+---+---
   1 | 2 | 3          3 | a | b
   4 | 5 | 6          6 | c | d
   7 | 8 | 9

First we calculate the Cartesian product R × S and get:

R x S  A | B | R.C | S.C | D | E
      ---+---+-----+-----+---+---
       1 | 2 |  3  |  3  | a | b
       1 | 2 |  3  |  6  | c | d
       4 | 5 |  6  |  3  | a | b
       4 | 5 |  6  |  6  | c | d
       7 | 8 |  9  |  3  | a | b
       7 | 8 |  9  |  6  | c | d

After the selection σ_{R.C = S.C}(R × S) we get:

 A | B | R.C | S.C | D | E
---+---+-----+-----+---+---
 1 | 2 |  3  |  3  | a | b
 4 | 5 |  6  |  6  | c | d

To remove the duplicate column S.C we project it out by the following operation: π_{A, B, R.C, D, E}(σ_{R.C = S.C}(R × S)) and get:

 A | B | C | D | E
---+---+---+---+---
 1 | 2 | 3 | a | b
 4 | 5 | 6 | c | d

• DIVIDE (÷): Let R be a table with the attributes A, B, C and D and let S be a table with the attributes C and D. Then we define the division as: R ÷ S = {t ∣ ∀ ts ∈ S ∃ tr ∈ R such that tr(A, B) = t(A, B) ∧ tr(C, D) = ts(C, D)}, where tr(x, y) denotes a tuple of table R that consists only of the components x and y. Note that the tuple t only consists of the components A and B of relation R.
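The product, select and project steps of Example 1.1's join can be mirrored directly in code. A sketch (Python, not part of the thesis), with tuples represented as attribute dictionaries:

```python
# Tables R and S from Example 1.1, with named attributes.
R = [{"A": 1, "B": 2, "C": 3}, {"A": 4, "B": 5, "C": 6}, {"A": 7, "B": 8, "C": 9}]
S = [{"C": 3, "D": "a", "E": "b"}, {"C": 6, "D": "c", "E": "d"}]

# PRODUCT (all pairs), SELECT (R.C = S.C), then PROJECT out the duplicate C.
joined = [
    {"A": r["A"], "B": r["B"], "C": r["C"], "D": s["D"], "E": s["E"]}
    for r in R
    for s in S
    if r["C"] == s["C"]  # the selection sigma_{R.C = S.C}
]

print(joined)
# [{'A': 1, 'B': 2, 'C': 3, 'D': 'a', 'E': 'b'},
#  {'A': 4, 'B': 5, 'C': 6, 'D': 'c', 'E': 'd'}]
```

The two resulting rows match the final table of Example 1.1.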
Example 1.2 Given the following tables

R  A | B | C | D      S  C | D
  ---+---+---+---       ---+---
   a | b | c | d         c | d
   a | b | e | f         e | f
   b | c | e | f
   e | d | c | d
   e | d | e | f
   a | b | d | e

R ÷ S is derived as

 A | B
---+---
 a | b
 e | d

For a more detailed description and definition of the relational algebra refer to [ULL88] or [DATE94].

Example 1.3 Recall that we formulated all those relational operators to be able to retrieve data from the database. Let's return to our example of section 1.2 where someone wanted to know the names of all suppliers that sell the part 'Screw'. This question can be answered using relational algebra by the following operation:

    π_{SUPPLIER.SNAME}(σ_{PART.PNAME = 'Screw'}(SUPPLIER ⋈ SELLS ⋈ PART))

We call such an operation a query. If we evaluate the above query against the tables from figure 1.1 (The suppliers and parts database) we will obtain the following result:

 SNAME
-------
 Smith
 Adams

1.2.2 Relational Calculus

The relational calculus is based on first order logic. There are two variants of the relational calculus:

The Domain Relational Calculus (DRC), where variables stand for components (attributes) of the tuples.

The Tuple Relational Calculus (TRC), where variables stand for tuples.

We want to discuss the tuple relational calculus only, because it is the one underlying most relational languages. For a detailed discussion on DRC (and also TRC) see [DATE94] or [ULL88].

Tuple Relational Calculus

The queries used in TRC are of the following form:

    {x(A) | F(x)}

where x is a tuple variable, A is a set of attributes and F is a formula. The resulting relation consists of all tuples x(A) that satisfy F(x).
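The algebra operators defined above are easy to model with ordinary data structures. The following sketch (illustrative only; the helper names select_, project, join and divide are ours, not from the text) represents relations as lists of dicts and reproduces the join of Example 1.1 and the division of Example 1.2:

```python
# Illustrative sketch of the relational-algebra operators; relations are
# modeled as lists of dicts (one dict per tuple). Helper names are ours.

def select_(r, pred):            # SELECT (sigma): keep tuples satisfying pred
    return [t for t in r if pred(t)]

def project(r, attrs):           # PROJECT (pi): keep named columns, drop duplicates
    seen, out = set(), []
    for t in r:
        key = tuple(t[a] for a in attrs)
        if key not in seen:
            seen.add(key)
            out.append(dict(zip(attrs, key)))
    return out

def join(r, s, common):          # JOIN: product + selection on the shared attribute;
    return [{**tr, **ts}         # dict merging removes the duplicate column
            for tr in r for ts in s if tr[common] == ts[common]]

def divide(r, s, keep, over):    # DIVIDE: R / S as defined above
    r_tuples = {tuple(t[a] for a in keep + over) for t in r}
    images = [tuple(t[a] for a in over) for t in s]
    return [c for c in project(r, keep)
            if all(tuple(c[a] for a in keep) + img in r_tuples for img in images)]

# The tables of Example 1.1:
R = [{'A': 1, 'B': 2, 'C': 3}, {'A': 4, 'B': 5, 'C': 6}, {'A': 7, 'B': 8, 'C': 9}]
S = [{'C': 3, 'D': 'a', 'E': 'b'}, {'C': 6, 'D': 'c', 'E': 'd'}]
print(join(R, S, 'C'))           # the two matching rows, as in Example 1.1
```

Running divide on the tables of Example 1.2 (with keep=['A','B'] and over=['C','D']) yields the two tuples (a, b) and (e, d) shown above.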
Example 1.4 If we want to answer the question from example 1.3 using TRC we formulate the following query:

    {x(SNAME) | x ∈ SUPPLIER ∧
                ∃ y ∈ SELLS ∃ z ∈ PART (y(SNO) = x(SNO) ∧
                                        z(PNO) = y(PNO) ∧
                                        z(PNAME) = 'Screw')}

Evaluating the query against the tables from figure 1.1 (The suppliers and parts database) again leads to the same result as in example 1.3.

1.2.3 Relational Algebra vs. Relational Calculus

The relational algebra and the relational calculus have the same expressive power, i.e. all queries that can be formulated using relational algebra can also be formulated using the relational calculus and vice versa. This was first proved by E. F. Codd in 1972. The proof is based on an algorithm ("Codd's reduction algorithm") by which an arbitrary expression of the relational calculus can be reduced to a semantically equivalent expression of relational algebra. For a more detailed discussion refer to [DATE94] and [ULL88].

It is sometimes said that languages based on the relational calculus are "higher level" or "more declarative" than languages based on relational algebra, because the algebra (partially) specifies the order of operations while the calculus leaves it to a compiler or interpreter to determine the most efficient order of evaluation.

1.3 The SQL Language

Like most modern relational languages, SQL is based on the tuple relational calculus. As a result, every query that can be formulated using the tuple relational calculus (or equivalently, relational algebra) can also be formulated using SQL. There are, however, capabilities beyond the scope of relational algebra or calculus. Here is a list of some additional features provided by SQL that are not part of relational algebra or calculus:

Commands for insertion, deletion or modification of data.

Arithmetic capability: In SQL it is possible to involve arithmetic operations as well as comparisons, e.g.
A < B + 3. Note that + or other arithmetic operators appear neither in relational algebra nor in relational calculus.

Assignment and Print Commands: It is possible to print a relation constructed by a query and to assign a computed relation to a relation name.

Aggregate Functions: Operations such as average, sum, max, etc. can be applied to columns of a relation to obtain a single quantity.

1.3.1 Select

The most often used command in SQL is the SELECT statement, which is used to retrieve data. The syntax is:

SELECT [ALL|DISTINCT]
       { * | <expr_1> [AS <c_alias_1>]
         [,... [,<expr_k> [AS <c_alias_k>]]]}
FROM <table_name_1> [t_alias_1]
     [,...[,<table_name_n> [t_alias_n]]]
[WHERE condition]
[GROUP BY <name_of_attr_i>
          [,... [,<name_of_attr_j>]]
 [HAVING condition]]
[{UNION | INTERSECT | EXCEPT} SELECT ...]
[ORDER BY <name_of_attr_i> [ASC|DESC]
          [,... [,<name_of_attr_j> [ASC|DESC]]]];

Now we will illustrate the complex syntax of the SELECT statement with various examples. The tables used for the examples are defined in figure 1.1 (The suppliers and parts database).

Simple Selects

Example 1.5 Here are some simple examples using a SELECT statement:

To retrieve all tuples from table PART where the attribute PRICE is greater than 10 we formulate the following query:

SELECT * FROM PART
    WHERE PRICE > 10;

and get the table:

 PNO | PNAME   | PRICE
-----+---------+--------
  3  | Bolt    |  15
  4  | Cam     |  25

Using "*" in the SELECT statement will deliver all attributes from the table. If we want to retrieve only the attributes PNAME and PRICE from table PART we use the statement:

SELECT PNAME, PRICE
    FROM PART
    WHERE PRICE > 10;

In this case the result is:

 PNAME  | PRICE
--------+--------
 Bolt   |  15
 Cam    |  25

Note that the SQL SELECT corresponds to the "projection" in relational algebra, not to the "selection" (see section 1.2.1 Relational Algebra).
The qualifications in the WHERE clause can also be logically connected using the keywords OR, AND and NOT:

SELECT PNAME, PRICE
    FROM PART
    WHERE PNAME = 'Bolt' AND
          (PRICE = 0 OR PRICE <= 15);

will lead to the result:

 PNAME  | PRICE
--------+--------
 Bolt   |  15

Arithmetic operations may be used in the select list and in the WHERE clause. For example, if we want to know how much it would cost if we take two pieces of a part, we could use the following query:

SELECT PNAME, PRICE * 2 AS DOUBLE
    FROM PART
    WHERE PRICE * 2 < 50;

and we get:

 PNAME  | DOUBLE
--------+---------
 Screw  |  20
 Nut    |  16
 Bolt   |  30

Note that the word DOUBLE after the keyword AS is the new title of the second column. This technique can be used for every element of the select list to assign a new title to the resulting column. This new title is often referred to as an alias. The alias cannot be used throughout the rest of the query.

Joins

Example 1.6 The following example shows how joins are realized in SQL:

To join the three tables SUPPLIER, PART and SELLS over their common attributes we formulate the following statement:

SELECT S.SNAME, P.PNAME
    FROM SUPPLIER S, PART P, SELLS SE
    WHERE S.SNO = SE.SNO AND
          P.PNO = SE.PNO;

and get the following table as a result:

 SNAME | PNAME
-------+-------
 Smith | Screw
 Smith | Nut
 Jones | Cam
 Adams | Screw
 Adams | Bolt
 Blake | Nut
 Blake | Bolt
 Blake | Cam

In the FROM clause we introduced an alias name for every relation, because there are commonly named attributes (SNO and PNO) among the relations. Now we can distinguish between the commonly named attributes by simply prefixing the attribute name with the alias name followed by a dot. The join is calculated in the same way as shown in example 1.1: first the Cartesian product SUPPLIER × PART × SELLS is derived; next only those tuples satisfying the conditions given in the WHERE clause are selected (i.e. the commonly named attributes have to be equal); finally we project out all columns but S.SNAME and P.PNAME.
Aggregate Operators

SQL provides aggregate operators (e.g. AVG, COUNT, SUM, MIN, MAX) that take the name of an attribute as an argument. The value of the aggregate operator is calculated over all values of the specified attribute (column) of the whole table. If groups are specified in the query, the calculation is done only over the values of a group (see next section).

Example 1.7 If we want to know the average cost of all parts in table PART we use the following query:

SELECT AVG(PRICE) AS AVG_PRICE
    FROM PART;

The result is:

 AVG_PRICE
-----------
      14.5

If we want to know how many parts are stored in table PART we use the statement:

SELECT COUNT(PNO)
    FROM PART;

and get:

 COUNT
-------
     4

Aggregation by Groups

SQL allows the tuples of a table to be partitioned into groups. The aggregate operators described above can then be applied to the groups (i.e. the value of the aggregate operator is no longer calculated over all the values of the specified column, but over all values of a group; thus the aggregate operator is evaluated individually for every group).

The partitioning of the tuples into groups is done by using the keywords GROUP BY followed by a list of attributes that define the groups. If we have GROUP BY A1, ..., Ak we partition the relation into groups, such that two tuples are in the same group if and only if they agree on all the attributes A1, ..., Ak.
Example 1.8 If we want to know how many parts are sold by every supplier we formulate the query:

SELECT S.SNO, S.SNAME, COUNT(SE.PNO)
    FROM SUPPLIER S, SELLS SE
    WHERE S.SNO = SE.SNO
    GROUP BY S.SNO, S.SNAME;

and get:

 SNO | SNAME | COUNT
-----+-------+-------
  1  | Smith |   2
  2  | Jones |   1
  3  | Adams |   2
  4  | Blake |   3

Now let's have a look at what is happening here. First the join of the tables SUPPLIER and SELLS is derived:

 S.SNO | S.SNAME | SE.PNO
-------+---------+--------
   1   | Smith   |   1
   1   | Smith   |   2
   2   | Jones   |   4
   3   | Adams   |   1
   3   | Adams   |   3
   4   | Blake   |   2
   4   | Blake   |   3
   4   | Blake   |   4

Next we partition the tuples into groups by putting all tuples together that agree on both attributes S.SNO and S.SNAME:

 S.SNO | S.SNAME | SE.PNO
-------+---------+--------
   1   | Smith   |   1
       |         |   2
--------------------------
   2   | Jones   |   4
--------------------------
   3   | Adams   |   1
       |         |   3
--------------------------
   4   | Blake   |   2
       |         |   3
       |         |   4

In our example we got four groups, and now we can apply the aggregate operator COUNT to every group, leading to the total result of the query given above.

Note that for the result of a query using GROUP BY and aggregate operators to make sense, the attributes grouped by must also appear in the select list. All further attributes not appearing in the GROUP BY clause can only be selected by using an aggregate function. On the other hand, you cannot use aggregate functions on attributes appearing in the GROUP BY clause.

Having

The HAVING clause works much like the WHERE clause and is used to consider only those groups satisfying the qualification given in the HAVING clause. The expressions allowed in the HAVING clause must involve aggregate functions. Every expression using only plain attributes belongs in the WHERE clause; on the other hand, every expression involving an aggregate function must be put in the HAVING clause.
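The grouping step described above can be sketched in a few lines of ordinary code. The following illustrative snippet (our own, not from the text) reproduces the GROUP BY + COUNT result of Example 1.8 over the joined SUPPLIER/SELLS rows:

```python
# Illustrative sketch of GROUP BY + COUNT in plain Python.
from collections import Counter

# The joined SUPPLIER/SELLS rows from Example 1.8: (sno, sname, pno)
rows = [(1, 'Smith', 1), (1, 'Smith', 2), (2, 'Jones', 4),
        (3, 'Adams', 1), (3, 'Adams', 3),
        (4, 'Blake', 2), (4, 'Blake', 3), (4, 'Blake', 4)]

# Group by (sno, sname) and count the pno values within each group
counts = Counter((sno, sname) for sno, sname, _ in rows)
for (sno, sname), n in sorted(counts.items()):
    print(sno, sname, n)
```

The loop prints one line per group, matching the four rows of the query result in Example 1.8.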
Example 1.9 If we want only those suppliers selling more than one part we use the query:

SELECT S.SNO, S.SNAME, COUNT(SE.PNO)
    FROM SUPPLIER S, SELLS SE
    WHERE S.SNO = SE.SNO
    GROUP BY S.SNO, S.SNAME
    HAVING COUNT(SE.PNO) > 1;

and get:

 SNO | SNAME | COUNT
-----+-------+-------
  1  | Smith |   2
  3  | Adams |   2
  4  | Blake |   3

Subqueries

In the WHERE and HAVING clauses the use of subqueries (subselects) is allowed in every place where a value is expected. In this case the value must be derived by evaluating the subquery first. The usage of subqueries extends the expressive power of SQL.

Example 1.10 If we want to know all parts having a greater price than the part named 'Screw' we use the query:

SELECT *
    FROM PART
    WHERE PRICE > (SELECT PRICE FROM PART
                   WHERE PNAME = 'Screw');

The result is:

 PNO | PNAME   | PRICE
-----+---------+--------
  3  | Bolt    |  15
  4  | Cam     |  25

When we look at the above query we can see the keyword SELECT two times. The first one, at the beginning of the query, we will refer to as the outer SELECT; the one in the WHERE clause, which begins a nested query, we will refer to as the inner SELECT. For every tuple of the outer SELECT the inner SELECT has to be evaluated. After every evaluation we know the price of the tuple named 'Screw' and we can check if the price of the actual tuple is greater.

If we want to know all suppliers that do not sell any part (e.g. to be able to remove these suppliers from the database) we use:

SELECT *
    FROM SUPPLIER S
    WHERE NOT EXISTS
        (SELECT * FROM SELLS SE
         WHERE SE.SNO = S.SNO);

In our example the result will be empty, because every supplier sells at least one part. Note that we use S.SNO from the outer SELECT within the WHERE clause of the inner SELECT. As described above, the subquery is evaluated for every tuple of the outer query, i.e. the value for S.SNO is always taken from the actual tuple of the outer SELECT.
Union, Intersect, Except

These operations calculate the union, the intersection and the set-theoretic difference of the tuples derived by two subqueries.

Example 1.11 The following query is an example for UNION:

SELECT S.SNO, S.SNAME, S.CITY
    FROM SUPPLIER S
    WHERE S.SNAME = 'Jones'
UNION
SELECT S.SNO, S.SNAME, S.CITY
    FROM SUPPLIER S
    WHERE S.SNAME = 'Adams';

gives the result:

 SNO | SNAME | CITY
-----+-------+--------
  2  | Jones | Paris
  3  | Adams | Vienna

Here is an example for INTERSECT:

SELECT S.SNO, S.SNAME, S.CITY
    FROM SUPPLIER S
    WHERE S.SNO > 1
INTERSECT
SELECT S.SNO, S.SNAME, S.CITY
    FROM SUPPLIER S
    WHERE S.SNO < 3;

gives the result:

 SNO | SNAME | CITY
-----+-------+--------
  2  | Jones | Paris

The only tuple returned by both parts of the query is the one having SNO = 2.

Finally an example for EXCEPT:

SELECT S.SNO, S.SNAME, S.CITY
    FROM SUPPLIER S
    WHERE S.SNO > 1
EXCEPT
SELECT S.SNO, S.SNAME, S.CITY
    FROM SUPPLIER S
    WHERE S.SNO > 3;

gives the result:

 SNO | SNAME | CITY
-----+-------+--------
  2  | Jones | Paris
  3  | Adams | Vienna

1.3.2 Data Definition

There is a set of commands used for data definition included in the SQL language.

Create Table

The most fundamental command for data definition is the one that creates a new relation (a new table). The syntax of the CREATE TABLE command is:

CREATE TABLE <table_name>
    (<name_of_attr_1> <type_of_attr_1>
     [, <name_of_attr_2> <type_of_attr_2>
     [, ...]]);

Example 1.12 To create the tables defined in figure 1.1 the following SQL statements are used:

CREATE TABLE SUPPLIER
    (SNO   INTEGER,
     SNAME VARCHAR(20),
     CITY  VARCHAR(20));

CREATE TABLE PART
    (PNO   INTEGER,
     PNAME VARCHAR(20),
     PRICE DECIMAL(4,2));

CREATE TABLE SELLS
    (SNO INTEGER,
     PNO INTEGER);

Data Types in SQL

The following is a list of some data types that are supported by SQL:

INTEGER: signed fullword binary integer (31 bits precision).

SMALLINT: signed halfword binary integer (15 bits precision).
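UNION, INTERSECT and EXCEPT are exactly the set operations of section 1.2.1 applied to query results. As an illustrative aside (our own sketch, not from the text), the same results fall out of plain set arithmetic over the SUPPLIER rows used in Example 1.11:

```python
# Illustrative: UNION / INTERSECT / EXCEPT as plain set operations on tuples.
smith = (1, 'Smith', 'London')
jones = (2, 'Jones', 'Paris')
adams = (3, 'Adams', 'Vienna')
blake = (4, 'Blake', 'Rome')

sno_gt_1 = {jones, adams, blake}   # rows with SNO > 1
sno_lt_3 = {smith, jones}          # rows with SNO < 3
sno_gt_3 = {blake}                 # rows with SNO > 3

print(sno_gt_1 & sno_lt_3)   # INTERSECT: only Jones satisfies both conditions
print(sno_gt_1 - sno_gt_3)   # EXCEPT: Jones and Adams remain
```

Note that SQL's UNION/INTERSECT/EXCEPT also eliminate duplicate rows, just as set semantics dictate.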
DECIMAL(p[, q]): signed packed decimal number of p digits precision with assumed q of them right of the decimal point (15 ≥ p ≥ q ≥ 0). If q is omitted it is assumed to be 0.

FLOAT: signed doubleword floating point number.

CHAR(n): fixed length character string of length n.

VARCHAR(n): varying length character string of maximum length n.

Create Index

Indices are used to speed up access to a relation. If a relation R has an index on attribute A, then we can retrieve all tuples t having t(A) = a in time roughly proportional to the number of such tuples t rather than in time proportional to the size of R.

To create an index in SQL the CREATE INDEX command is used. The syntax is:

CREATE INDEX <index_name>
    ON <table_name> (<name_of_attribute>);

Example 1.13 To create an index named I on attribute SNAME of relation SUPPLIER we use the following statement:

CREATE INDEX I ON SUPPLIER (SNAME);

The created index is maintained automatically, i.e. whenever a new tuple is inserted into the relation SUPPLIER the index I is adapted. Note that the only change a user will perceive when an index is present is increased speed.

Create View

A view may be regarded as a virtual table, i.e. a table that does not physically exist in the database but looks to the user as if it did. By contrast, when we talk of a base table, there really is a physically stored counterpart of each row of the table somewhere in physical storage.

Views do not have their own, physically separate, distinguishable stored data. Instead, the system stores the definition of the view (i.e. the rules about how to access physically stored base tables in order to materialize the view) somewhere in the system catalogs (see section 1.3.4 System Catalogs). For a discussion on different techniques to implement views refer to section 3.4.1 Techniques To Implement Views.
In SQL the CREATE VIEW command is used to define a view. The syntax is:

CREATE VIEW <view_name>
    AS <select_stmt>

where select_stmt is a valid select statement as defined in section 1.3.1. Note that select_stmt is not executed when the view is created. It is just stored in the system catalogs and is executed whenever a query against the view is made.

Example 1.14 Let the following view definition be given (we use the tables from figure 1.1 The suppliers and parts database again):

CREATE VIEW London_Suppliers
    AS SELECT S.SNAME, P.PNAME
       FROM SUPPLIER S, PART P, SELLS SE
       WHERE S.SNO = SE.SNO AND
             P.PNO = SE.PNO AND
             S.CITY = 'London';

Now we can use this virtual relation London_Suppliers as if it were another base table:

SELECT * FROM London_Suppliers
    WHERE PNAME = 'Screw';

will return the following table:

 SNAME | PNAME
-------+-------
 Smith | Screw

To calculate this result the database system first has to do a hidden access to the base tables SUPPLIER, SELLS and PART. It does so by executing the query given in the view definition against those base tables. After that, the additional qualifications (given in the query against the view) can be applied to obtain the resulting table.
Drop Table, Drop Index, Drop View

To destroy a table (including all tuples stored in that table) the DROP TABLE command is used:

DROP TABLE <table_name>;

Example 1.15 To destroy the SUPPLIER table use the following statement:

DROP TABLE SUPPLIER;

The DROP INDEX command is used to destroy an index:

DROP INDEX <index_name>;

Finally, to destroy a given view use the command DROP VIEW:

DROP VIEW <view_name>;

1.3.3 Data Manipulation

Insert Into

Once a table is created (see section 1.3.2), it can be filled with tuples using the command INSERT INTO. The syntax is:

INSERT INTO <table_name>
    (<name_of_attr_1> [, <name_of_attr_2> [, ...]])
    VALUES (<val_attr_1> [, <val_attr_2> [, ...]]);

Example 1.16 To insert the first tuple into the relation SUPPLIER of figure 1.1 The suppliers and parts database we use the following statement:

INSERT INTO SUPPLIER (SNO, SNAME, CITY)
    VALUES (1, 'Smith', 'London');

To insert the first tuple into the relation SELLS we use:

INSERT INTO SELLS (SNO, PNO)
    VALUES (1, 1);

Update

To change one or more attribute values of tuples in a relation the UPDATE command is used. The syntax is:

UPDATE <table_name>
    SET <name_of_attr_1> = <value_1>
        [, ... [, <name_of_attr_k> = <value_k>]]
    WHERE <condition>;

Example 1.17 To change the value of attribute PRICE of the part 'Screw' in the relation PART we use:

UPDATE PART
    SET PRICE = 15
    WHERE PNAME = 'Screw';

The new value of attribute PRICE of the tuple whose name is 'Screw' is now 15.
Delete

To delete a tuple from a particular table use the command DELETE FROM. The syntax is:

DELETE FROM <table_name>
    WHERE <condition>;

Example 1.18 To delete the supplier called 'Smith' from the table SUPPLIER the following statement is used:

DELETE FROM SUPPLIER
    WHERE SNAME = 'Smith';

1.3.4 System Catalogs

In every SQL database system, system catalogs are used to keep track of which tables, views, indexes etc. are defined in the database. These system catalogs can be queried as if they were normal relations. For example, there is one catalog used for the definition of views. This catalog stores the query from the view definition. Whenever a query against a view is made, the system first gets the view definition query out of the catalog and materializes the view before proceeding with the user query (see section 3.4.1 Techniques To Implement Views for a more detailed description). For more information about system catalogs refer to [DATE96].

1.3.5 Embedded SQL

In this section we will sketch how SQL can be embedded into a host language (e.g. C). There are two main reasons why we want to use SQL from a host language:

There are queries that cannot be formulated using pure SQL (i.e. recursive queries). To be able to perform such queries we need a host language with a greater expressive power than SQL.

We simply want to access a database from some application that is written in the host language (e.g. a ticket reservation system with a graphical user interface is written in C, and the information about which tickets are still left is stored in a database that can be accessed using embedded SQL).

A program using embedded SQL in a host language consists of statements of the host language and of embedded SQL (ESQL) statements. Every ESQL statement begins with the keywords EXEC SQL. The ESQL statements are transformed to statements of the host language by a precompiler (mostly calls to library routines that perform the various SQL commands).
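The tuple-at-a-time access pattern that embedded SQL provides via cursors has a direct analogue in most host-language database APIs. As an illustrative stand-in (this uses Python's built-in sqlite3 module and an in-memory toy database, not the ESQL precompiler interface itself), the following sketch issues a query and consumes the result set one tuple at a time:

```python
# Illustrative analogue of the cursor/FETCH mechanism, using Python's built-in
# sqlite3 module with an in-memory toy database (table and data invented here).
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE part (pno INTEGER, pname TEXT, price REAL)")
conn.executemany("INSERT INTO part VALUES (?, ?, ?)",
                 [(1, 'Screw', 10), (2, 'Nut', 8), (3, 'Bolt', 15), (4, 'Cam', 25)])

cur = conn.execute("SELECT pname, price FROM part WHERE price > 10 ORDER BY pno")
fetched = []
while True:
    row = cur.fetchone()      # like FETCH: advance the cursor one tuple at a time
    if row is None:           # end of the result set
        break
    fetched.append(row)
print(fetched)                # [('Bolt', 15.0), ('Cam', 25.0)]
```

The while/fetchone loop mirrors the DECLARE CURSOR / FETCH pairing described in the next paragraphs: the host program never holds the whole result set at once; it pulls tuples until the cursor is exhausted.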
When we look at the examples throughout section 1.3.1, we realize that the result of a query is very often a set of tuples. Most host languages are not designed to operate on sets, so we need a mechanism to access every single tuple of the set of tuples returned by a SELECT statement. This mechanism can be provided by declaring a cursor. After that we can use the FETCH command to retrieve a tuple and set the cursor to the next tuple.

For a detailed discussion on embedded SQL refer to [DATE96], [DATE94] or [ULL88].

Chapter 2 PostgreSQL from the User's Point of View

This chapter contains information that will be useful for people who only want to use PostgreSQL. It gives a listing and description of the available features, including a lot of examples. Users interested in the internals of PostgreSQL should read chapter 3 PostgreSQL from the Programmer's Point of View.

2.1 A Short History of PostgreSQL

PostgreSQL is an enhancement of the POSTGRES database management system, a next-generation relational DBMS research prototype running on almost any UNIX based operating system. The original POSTGRES code, from which PostgreSQL is derived, was the effort of many graduate students, undergraduate students, and staff programmers working under the direction of Professor Michael Stonebraker at the University of California, Berkeley. Originally POSTGRES implemented its own query language called POSTQUEL.

In 1995 Andrew Yu and Jolly Chen adapted the last official release of POSTGRES (version 4.2) to meet their own requirements and made some major changes to the code. The most important change was the replacement of POSTQUEL by an extended subset of SQL92. The name was changed to Postgres95, and since that time many other people have contributed to the porting, testing, debugging and enhancement of the code. In late 1996 the name was changed again to the new official name PostgreSQL.
2.2 An Overview of the Features of PostgreSQL

As mentioned earlier, PostgreSQL is a relational database management system (RDBMS), but in contrast to most traditional RDBMSs it is designed to provide more flexibility to the user. One example of the improved flexibility is the support for user defined or abstract data types (ADTs). Another example is the support of user defined SQL functions. (We will discuss these features later in section 2.5 Some of PostgreSQL's Special Features in Detail.) Here is a list of the features PostgreSQL provides:

An extended subset of SQL92 as query language.

A command line interface called psql using GNU readline.

A client/server architecture allowing concurrent access to the databases.

Support for btree, hash or rtree indexes.

A transaction mechanism based on the two phase commit protocol, used to ensure data integrity throughout concurrent data access.

Host based, password, crypt, ident (RFC 1413) or Kerberos V4/V5 authentication to ensure authorized data access.

A huge amount of predefined data types.

Support for user defined data types.

Support for user defined SQL functions.

Support for recovery after a crash.

A precompiler for embedded SQL in C.

An ODBC interface.

A JDBC interface.

A Tcl/Tk interface.

A Perl interface.

2.3 Where to Get PostgreSQL

PostgreSQL is available as a source distribution (v6.3.2 at the time of writing) from ftp://ftp.postgresql.org/pub/. There is also an official homepage for PostgreSQL at http://www.postgresql.org/. There are a lot of hosts all over the world mirroring the contents of the ones mentioned above.

Copyright of PostgreSQL

PostgreSQL is not public domain software. It is copyrighted by the University of California, but may be used according to the licensing terms of the copyright included in every distribution (refer to the file COPYRIGHT included in every distribution for more information).
Support for PostgreSQL

There is no official support for PostgreSQL. That means there is no obligation for anybody to provide maintenance, support, updates, enhancements or modifications to the code. The whole PostgreSQL project is maintained through volunteer effort only. However, there are many mailing lists which can be subscribed to in case of problems:

Support Mailing Lists:

[email protected]: announcements.
[email protected]: OS-specific bugs.
[email protected]: other unsolved bugs.
[email protected]: general discussion.

Mailing Lists for Developers:

[email protected]: server internals discussion.
[email protected]: the documentation project.
[email protected]: patches and discussion.
[email protected]: mirror site announcements.

To subscribe to the mailing list [email protected], for example, just send an email to [email protected] with the lines "subscribe" and "end" in the body (not the subject line).

2.4 How to Use PostgreSQL

Before we can use PostgreSQL we have to get and install it. We won't talk about installing PostgreSQL here, because the installation procedure is straightforward and described in detail in the file INSTALL contained in the distribution. We want to concentrate on the basic usage of PostgreSQL after a successful setup has taken place.

2.4.1 Starting the Postmaster

As mentioned earlier, PostgreSQL uses a traditional client/server architecture to provide multi user access. The server is represented by a program called postmaster which is started only once on each host. This master server process listens at a specified TCP/IP port for incoming connections from a client. For every incoming connection the postmaster spawns a new server process (postgres) and continues listening for further connections. Every server process spawned in this way handles exactly one connection to one client.
The postgres server processes communicate with each other using UNIX semaphores and shared memory to ensure data integrity throughout concurrent data access. (For a more detailed description of these architectural concepts see chapter 3 PostgreSQL from the Programmer's Point of View.)

To start the master server process use the following command:

$ nohup postmaster > server.log 2>&1 &

which will start postmaster in the background; even if you log out of the system the process remains active. All errors and messages will be logged to the file server.log.

Note: The postmaster process is usually started by a special database superuser called postgres, which is a normal UNIX user but has more rights concerning PostgreSQL. For security reasons it is strongly recommended not to run the postmaster process as the system super user root.

2.4.2 Creating a New Database

Once the postmaster daemon is running we can create a new database using the following command:

$ createdb testdb

which will create a database called testdb. The user executing the command will become the database administrator and will therefore be the only user (except the database superuser postgres) who can destroy the database again.

Note: To create the database you don't need to know anything about the tables (relations) that will be used within the database. The tables will be defined later using the SQL statements shown in section 2.4.4 Defining and Populating Tables.

2.4.3 Connecting to a Database

After having created at least one database, we can make our first client connection to the database system to be able to define tables, populate them, retrieve data, update data etc.
Note that most database manipulation is done this way (just creating and destroying a database is done by separate commands, which are in fact just shell scripts that also use psql). The connection to the DBMS is established by the following command:

$ psql testdb

which will make a connection to a database called testdb. psql is a command line interface using GNU readline. It can handle a connection to only one database at a time. When the connection is established psql presents itself as follows:

Welcome to the POSTGRESQL interactive sql monitor:
  Please read the file COPYRIGHT for copyright terms of POSTGRESQL

   type \? for help on slash commands
   type \q to quit
   type \g or terminate with semicolon to execute query
 You are currently connected to the database: testdb

testdb=>

Now you can either enter any valid SQL statement terminated by a ';' or use one of the slash commands. A list of all available slash commands can be obtained by typing '\?'. Here is a list of the most important slash commands:

\? lists all available slash commands and gives a short description.
\q quits psql.
\d lists all tables, views and indexes existing in the current database.
\dt lists only tables.
\dT lists all available data types.
\i filename reads and executes the queries contained in filename.
\l lists all available databases known to the system.
\connect database terminates the current connection and opens a new connection to database.
\o filename sends all query output to filename.

2.4.4 Defining and Populating Tables

Defining tables and inserting tuples is done by the SQL statements CREATE TABLE and INSERT INTO. For a detailed description of the syntax of these commands refer to section 1.3.2 Data Definition.
Example 2.1 To create and populate the table SUPPLIER used in figure 1.1 The suppliers and parts database we could use the following session:

$ psql testdb
Welcome to the POSTGRESQL interactive sql monitor:
  Please read the file COPYRIGHT for copyright terms of POSTGRESQL

   type \? for help on slash commands
   type \q to quit
   type \g or terminate with semicolon to execute query
 You are currently connected to the database: testdb

testdb=> create table supplier (sno int4,
testdb->     sname varchar(20),
testdb->     city  varchar(20));
CREATE
testdb=> insert into supplier (sno, sname, city)
testdb->     values (1, 'Smith', 'London');
INSERT 26187 1
testdb=> insert into supplier (sno, sname, city)
testdb->     values (2, 'Jones', 'Paris');
INSERT 26188 1
testdb=> insert into supplier (sno, sname, city)
testdb->     values (3, 'Adams', 'Vienna');
INSERT 26189 1
testdb=> insert into supplier (sno, sname, city)
testdb->     values (4, 'Blake', 'Rome');
INSERT 26190 1
testdb=>

If you first put all the above commands into a file, you can easily execute the statements with the slash command \i filename.

Note: The data type int4 is not part of the SQL92 standard. It is a built in PostgreSQL type denoting a four byte signed integer. For information on which data types are available you can use the \dT command, which will give a list and a short description of all data types currently known to PostgreSQL.

2.4.5 Retrieving Data from the Database

After having defined and populated the tables in the database testdb, we are able to retrieve data by formulating queries using psql. Every query has to be terminated by a ';'.
Example 2.2 We assume that all the tables from figure 1.1 The suppliers and parts database exist in the database testdb. If we want to know all parts that are sold in London we use the following session:

testdb=> select p.pname
testdb->     from supplier s, sells se, part p
testdb->     where s.sno = se.sno and
testdb->           p.pno = se.pno and
testdb->           s.city = 'London';
pname
-----
Screw
Nut
(2 rows)

testdb=>

Example 2.3 We use again the tables given in figure 1.1. Now we want to retrieve all suppliers selling no parts at all (to remove them from the suppliers table, for example):

testdb=> select * from supplier s
testdb->     where not exists
testdb->         (select sno from sells se
testdb->          where se.sno = s.sno);
sno|sname|city
---+-----+----
(0 rows)

testdb=>

The result relation is empty in our example, telling us that every supplier contained in the database sells at least one part. Note that we used a nested subselect to formulate the query.

2.5 Some of PostgreSQL's Special Features in Detail

Traditional relational database management systems (RDBMSs) provide only very few data types, including floating point numbers, integers, character strings, money, and dates.
This makes the implementation of many applications very difficult, and that is why PostgreSQL offers substantial additional power by incorporating the following additional basic concepts in such a way that users can easily extend the system:

  inheritance
  user defined functions
  user defined types
  rules

Some other features, implemented in most modern RDBMSs, provide additional power and flexibility:

  constraints (given in the create table command)
  triggers
  transaction integrity

2.5.1 Inheritance

Inheritance is a feature well known from object oriented programming languages such as Smalltalk or C++. PostgreSQL refers to tables as classes, and the definition of a class may inherit the contents of another class:

Example 2.4 First we define a table (class) city:

testdb=> create table city (
testdb->   name varchar(20),
testdb->   population int4,
testdb->   altitude int4);
CREATE
testdb=>

Now we define a new table (class) capital that inherits all attributes from city and adds a new attribute country storing the country which it is the capital of.
testdb=> create table capital (
testdb->   country varchar(20)
testdb-> ) inherits (city);
CREATE
testdb=>

Note: The class capital inherits only the attributes of city (not the tuples stored in city). The new table can be used as if it were defined without using inheritance:

testdb=> insert into capital (name, population,
testdb->   altitude, country)
testdb->   values ('Vienna', 1500000, 200, 'Austria');
INSERT 26191 1
testdb=>

Let's assume that the tables city and capital have been populated in the following way:

city
name     | population | altitude
---------+------------+---------
Linz     |     200000 |      270
Graz     |     250000 |      350
Villach  |      50000 |      500
Salzburg |     150000 |      420

capital
name     | population | altitude | country
---------+------------+----------+--------
Vienna   |    1500000 |      200 | Austria

Standard SQL92 queries against the above tables behave exactly as expected:

testdb=> select * from city
testdb-> where altitude > 400;
name     | population | altitude
---------+------------+---------
Villach  |      50000 |      500
Salzburg |     150000 |      420
(2 rows)

testdb=> select * from capital;
name   | population | altitude | country
-------+------------+----------+--------
Vienna |    1500000 |      200 | Austria
(1 row)

testdb=>

If we want to know the names of all cities (including capitals) that are located at an altitude over 100 meters, the query is:

testdb=> select * from city*
testdb-> where altitude > 100;
name     | population | altitude
---------+------------+---------
Linz     |     200000 |      270
Graz     |     250000 |      350
Villach  |      50000 |      500
Salzburg |     150000 |      420
Vienna   |    1500000 |      200
(5 rows)
testdb=>

Here the '*' after city indicates that the query should be run over city and all classes below city in the inheritance hierarchy. Many of the commands that we have already discussed (SELECT, UPDATE, DELETE, etc.) support this '*' notation.
2.5.2 User Defined Functions

PostgreSQL allows the definition and usage of user defined functions. The newly defined functions can be used within every query. PostgreSQL provides two types of functions:

Query Language (SQL) Functions: functions written in SQL.
Programming Language Functions: functions written in a compiled language such as C.

Query Language (SQL) Functions

These functions are defined using SQL. Note that query language functions do not extend the expressive power of the SQL92 standard. Every query language function can be replaced by an appropriate nested query (subselect) without changing the semantic meaning of the whole query. However, since PostgreSQL does not allow subselects in the select list at the moment but does allow the usage of query language functions, the expressive power of PostgreSQL's current SQL implementation is extended.

The definition of query language functions is done using the command create function <function_name>. Every function can take zero or more arguments. The type of every argument is specified in the list of arguments in the function definition. The type of the function's result is given after the keyword returns in the function definition. The types used for the arguments and the return value of the function can either be base types (e.g. int4, varchar, ...) or composite types. (For each class (table) that is created, a corresponding composite type is defined. supplier and part are examples of composite types after the tables supplier and part have been created.)

Example 2.5 This is an example using only base types.
Before PostgreSQL was extended to support nested subqueries, user defined query language (SQL) functions could be used to simulate them. Consider example 2.3, where we wanted to know the names of all suppliers that do not sell any part at all. Normally we would formulate the query as we did in example 2.3. Here we want to show a possible way of formulating the query without using a subquery. This is done in two steps. In the first step we define the function my_exists. In the second step we formulate a query using the new function.

In the first step we define the new function my_exists(int4), which takes an integer as argument:

testdb=> create function my_exists(int4) returns int4
testdb->   as 'select count(pno) from sells
testdb->   where sno = $1;' language 'sql';
CREATE
testdb=>

Here is the second step, which performs the intended retrieval:

testdb=> select s.sname from supplier s
testdb-> where my_exists(s.sno) = 0;
sname
-----
(0 rows)
testdb=>

Now let's have a look at what is happening here. The function my_exists(int4) takes one argument, which must be of type integer. Within the function definition this argument can be referred to using the $1 notation (if there were further arguments they could be referred to by $2, $3, ...). my_exists(int4) returns the number of tuples from table sells where the attribute sno is equal to the given argument $1 (sno = $1). The keyword language 'sql' tells PostgreSQL that the new function is a query language function.
The query in the second step examines every tuple from table supplier and checks if it satisfies the given qualification. It does so by taking the supplier id sno of every tuple and giving it as an argument to the function my_exists(int4). In other words, the function my_exists(int4) is called once for every tuple of table supplier. The function returns the number of tuples in table sells having the given supplier id sno. A result of zero means that no such tuple is available, meaning that the corresponding supplier does not sell a single part. We can see that this query is semantically equivalent to the one given in example 2.3.

Example 2.6 This example shows how to use a composite type in a function definition. Imagine that the price of every part was doubled over night. If you want to look at the part table with the new values you could use the following function, which uses the composite type part for its argument:

testdb=> create function new_price(part) returns float
testdb->   as 'select $1.price * 2;' language 'sql';
CREATE
testdb=> select pno, pname, new_price(part) as new_price
testdb-> from part;
pno | pname | new_price
----+-------+----------
  1 | Screw |        20
  2 | Nut   |        16
  3 | Bolt  |        30
  4 | Cam   |        50
(4 rows)
testdb=>

Note that this could have been done by a normal query (without using a user defined function) as well, but it is an easy to understand example for the usage of functions.

Programming Language Functions

PostgreSQL also supports user defined functions written in C. This is a very powerful feature because you can add any function that can be formulated in C. For example, PostgreSQL lacks the function sqrt(), but it can easily be added using a programming language function.

Example 2.7 We show how to realize the user defined function pg_my_sqrt(int4).
The implementation can be divided into three steps:

  formulating the new function in C
  compiling and linking it to a shared library
  making the new function known to PostgreSQL

Formulating the New Function in C: We create a new file called sqrt.c and add the following lines:

#include <postgres.h>
#include <utils/palloc.h>
#include <math.h>

int4
pg_my_sqrt(int4 arg1)
{
    return (ceil(sqrt(arg1)));
}

The function pg_my_sqrt() takes one argument of type int4, which is a PostgreSQL type defined in postgres.h, and returns the integer value next to the square root of the argument. As with query language functions (see previous section), the arguments can be of base or of composite type. Special care must be taken when using base types that are larger than four bytes in length. PostgreSQL supports three ways of passing a value to a user defined function:

  pass by value, fixed length
  pass by reference, fixed length
  pass by reference, variable length

Only data types that are 1, 2 or 4 bytes in length can be passed by value. We just give an example for the usage of base types that can be used for pass by value here. For information on how to use types that require pass by reference or how to use composite types refer to [LOCK98].
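To make the distinction between the first two calling conventions concrete, here is a minimal sketch in plain C. It deliberately uses stand-in typedefs and malloc instead of the real PostgreSQL headers and palloc, so the type definitions and function names (add_one, complex_conjugate) are this sketch's own assumptions, not PostgreSQL API:

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-ins for illustration only: int4 mimics PostgreSQL's four-byte
 * integer; Complex is a 16-byte struct that is too large for pass by value. */
typedef int int4;
typedef struct { double x; double y; } Complex;

/* Pass by value, fixed length: the 4-byte argument is copied into the
 * function, and the 4-byte result is returned directly. */
int4
add_one(int4 arg1)
{
    return arg1 + 1;
}

/* Pass by reference, fixed length: larger types arrive as pointers, and the
 * result must live in memory that outlives the call. PostgreSQL would use
 * palloc here; this standalone sketch uses malloc instead. */
Complex *
complex_conjugate(const Complex *arg1)
{
    Complex *result = malloc(sizeof(Complex));

    result->x = arg1->x;
    result->y = -arg1->y;
    return result;
}
```

In a real server function the returned memory would be allocated with palloc, as in example 2.8 below, so that PostgreSQL can free it automatically.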
Compiling and Linking It to a Shared Library: PostgreSQL binds the new function to the runtime system by using a shared library containing the function. Therefore we have to create a shared library out of the object file(s) containing the function(s). How this is done depends on the system and the compiler. On a Linux ELF system using gcc it can be done with the following commands:

$ gcc -I$PGROOT/include -fpic -c sqrt.c -o sqrt.o
$ gcc -shared sqrt.o -o sqrt.so

where $PGROOT is the path PostgreSQL was installed to. The important options given to gcc here are -fpic in the first line, which tells gcc to produce position independent code that can be loaded at any address of the process image, and -shared in the second line, which tells gcc to produce a shared library. If you have another system where the steps described above do not work, you will have to refer to the manual pages of your C compiler (often man cc) and your linker (man ld) to see how shared libraries can be built.
Making the New Function Known to PostgreSQL: Now we have to tell PostgreSQL about the new function. We do so by using the create function command within psql, as we did for query language functions:

testdb=> create function pg_my_sqrt(int4) returns int4
testdb->   as '/<where_ever_you_put_it>/sqrt.so'
testdb->   language 'c';
CREATE
testdb=>

From now on the function pg_my_sqrt(int4) can be used in every query. Here is a query against table part using the new function:

testdb=> select pname, price, pg_my_sqrt(price)
testdb-> from part
testdb-> where pg_my_sqrt(price) < 10;
pname | price | pg_my_sqrt
------+-------+-----------
Screw |    10 |          4
Nut   |     8 |          3
Bolt  |    15 |          4
Cam   |    25 |          5
(4 rows)
testdb=>

2.5.3 User Defined Types

Adding a new data type to PostgreSQL also requires the definition of an input and an output function. These functions are implemented using the techniques presented in the previous section, Programming Language Functions. The functions determine how the type appears in strings (for input by the user and output to the user) and how the type is organized in memory. The input function takes a null-terminated character string as its input and returns the internal (in memory) representation of the type. The output function takes the internal representation of the type and returns a null-terminated character string. Besides the definition of input and output functions it is often necessary to adapt operators (e.g. '+') and aggregate functions for the new data type. How this is done is described in section 2.5.4 Extending Operators and section 2.5.5 Extending Aggregates.
Example 2.8 Suppose we want to define a type complex which represents complex numbers. Therefore we create a new file called complex.c with the following contents:

#include <postgres.h>
#include <utils/palloc.h>
#include <math.h>

/* Type definition of Complex */
typedef struct Complex {
    double x;
    double y;
} Complex;

/* Input function: takes a char string of the form
 * '( x, y )' as argument, where x and y must be string
 * representations of double numbers. It returns a
 * pointer to an instance of structure Complex that
 * is set up with the given x and y values. */
Complex *
complex_in(char *str)
{
    double x, y;
    Complex *result;

    /* scan the input string and set x and y to the
     * corresponding double numbers */
    if (sscanf(str, "( %lf, %lf )", &x, &y) != 2) {
        elog(ERROR, "complex_in: error in parsing");
        return NULL;
    }

    /* reserve memory for the Complex data structure.
     * Note: we use palloc here because the memory
     * allocated using palloc is freed automatically
     * by PostgreSQL when it is not used any more */
    result = (Complex *) palloc(sizeof(Complex));
    result->x = x;
    result->y = y;
    return (result);
}

/* Output function: takes a pointer to an instance of
 * structure Complex as argument and returns a character
 * pointer to a string representation of the given
 * argument */
char *
complex_out(Complex *complex)
{
    char *result;

    if (complex == NULL)
        return (NULL);
    result = (char *) palloc(60);
    sprintf(result, "(%g,%g)", complex->x, complex->y);
    return (result);
}

Note that the functions defined above operate on types that require pass by reference. The functions take a pointer to the data as argument and return a pointer to the derived data instead of passing and returning the data itself. That is why we have to reserve memory using palloc within the functions. (If we just defined local variables and returned the addresses of these variables the system would fail, because the memory used by local
variables is freed when the function defining these variables completes.)

The next step is to compile the C functions and create the shared library complex.so. This is done in the way described in the previous section, Programming Language Functions, and depends on the system you are using. On a Linux ELF system using gcc it would look like this:

$ gcc -I$PGROOT/include -fpic -c complex.c -o complex.o
$ gcc -shared -o complex.so complex.o

Now we are ready to define the new data type, but before that we have to make the input and output functions known to PostgreSQL:

testdb=> create function complex_in(opaque)
testdb->   returns complex
testdb->   as '/<where_ever_you_put_it>/complex.so'
testdb->   language 'c';
NOTICE: ProcedureCreate: type 'complex' is not yet defined
CREATE
testdb=> create function complex_out(opaque)
testdb->   returns opaque
testdb->   as '/<where_ever_you_put_it>/complex.so'
testdb->   language 'c';
CREATE
testdb=> create type complex (
testdb->   internallength = 16,
testdb->   input = complex_in,
testdb->   output = complex_out
testdb-> );
CREATE
testdb=>

Note that the argument type given in the definitions of complex_in() and complex_out() - opaque - is needed by PostgreSQL to be able to provide a uniform mechanism for the definition of the input and output functions needed by a new data type. It is not necessary to specify the exact type of the arguments given to the functions. The input function is never called explicitly, and when it is called implicitly (e.g. by a statement like insert into) it is clear that a character string (i.e. a part of the insert query) will be passed to it. The output function is only called (by an internal mechanism of PostgreSQL) when data of the corresponding user defined type has to be displayed. In this case it is also clear that the input is of the type used for the internal representation (e.g. complex) and the output is of type character string.
The new type can now be used as if it were another base type:

testdb=> create table complex_test
testdb->   (val complex);
CREATE
testdb=> insert into complex_test
testdb->   (val) values ('(1,2)');
INSERT 155872 1
testdb=> insert into complex_test
testdb->   (val) values ('(3,4)');
INSERT 155873 1
testdb=> insert into complex_test
testdb->   (val) values ('(5,6)');
INSERT 155874 1
testdb=> select * from complex_test;
val
-----
(1,2)
(3,4)
(5,6)
(3 rows)
testdb=>

2.5.4 Extending Operators

So far we are able to define a new type, create tables that use the new type for one (or more) attribute(s), and populate the new tables with data. We are also able to retrieve data from those tables as long as we do not use the new data types within the qualification of the query. If we want to use the new data types in the where clause we have to adapt some (or all) of the operators.

Example 2.9 We show how the operator '=' can be adapted for usage on the complex data type defined in section 2.5.3 User Defined Types. We need a user defined function complex_cmp(complex, complex) that returns true if the complex numbers given as arguments are equal and false otherwise. This function is defined as described in section 2.5.2 User Defined Functions. In our case there are already two functions present for the usage of type complex - the input and output functions defined in example 2.8. So we can add the new function complex_cmp(complex, complex) by simply appending the following lines to the file complex.c given in example 2.8:

/* Comparison function:
 * returns true if arg1 and arg2 are equal */
bool
complex_cmp(Complex *arg1, Complex *arg2)
{
    if ((arg1->x == arg2->x) && (arg1->y == arg2->y)) {
        return true;
    } else {
        return false;
    }
}

Now we create the shared library again:

$ gcc -I$PGROOT/include -fpic -c complex.c -o complex.o
$ gcc -shared -o complex.so complex.o

Note that all the functions defined in complex.c (complex_in(), complex_out() and complex_cmp()) are now contained in the shared library complex.so.

Now we make the new function known to PostgreSQL, and after that we define the new operator '=' for the complex type:

testdb=> create function complex_cmp(complex, complex)
testdb->   returns bool
testdb->   as '/<where_ever_you_put_it>/complex.so'
testdb->   language 'c';
CREATE
testdb=> create operator = (
testdb->   leftarg = complex,
testdb->   rightarg = complex,
testdb->   procedure = complex_cmp,
testdb->   commutator = =
testdb-> );
CREATE
testdb=>

From now on we are able to perform comparisons between complex numbers in a query's qualification (we use the table complex_test as defined in example 2.8):

testdb=> select * from complex_test
testdb-> where val = '(1,2)';
val
-----
(1,2)
(1 row)

testdb=> select * from complex_test
testdb-> where val = '(7,8)';
val
-----
(0 rows)
testdb=>

2.5.5 Extending Aggregates

If we want to use aggregate functions on attributes of a user defined type, we have to add aggregate functions designed to work on the new type. Aggregates in PostgreSQL are realized using three functions:

sfunc1 (state function one): is called for every tuple of the current group, and the appropriate attribute's value of the current tuple is passed to the function. The given argument is used to change the internal state of the function in the way given by the body of the function. For example, sfunc1 of the aggregate function sum is called for every tuple of the current group. The value of the attribute the sum is built on is taken from the current tuple and added to the internal sum state of sfunc1.

sfunc2: is also called for every tuple of the group, but it does not use any argument from outside to manipulate its state. It just keeps track of its own internal state. A typical application for sfunc2 is a counter that is incremented for every tuple of the group that has been processed.
finalfunc: is called after all tuples of the current group have been processed. It takes the internal state of sfunc1 and the state of sfunc2 as arguments and derives the result of the aggregate function from the two given arguments. For example, with the aggregate function average, sfunc1 sums up the attribute values of each tuple in the group, and sfunc2 counts the tuples in the group. finalfunc divides the sum by the count to derive the average.

If we define an aggregate using only sfunc1 we get an aggregate that computes a running function of the attribute values from each tuple. sum is an example of this kind of aggregate. On the other hand, if we create an aggregate function using only sfunc2 we get an aggregate that is independent of the attribute values from each tuple. count is a typical example of this kind of aggregate.

Example 2.10 Here we want to realize the aggregate functions complex_sum and complex_avg for the user defined type complex (see example 2.8). First we have to create the user defined functions complex_add and complex_scalar_div. We can append these two functions to the file complex.c from example 2.8 again (as we did with complex_cmp):

/* Add complex numbers */
Complex *
complex_add(Complex *arg1, Complex *arg2)
{
    Complex *result;

    result = (Complex *) palloc(sizeof(Complex));
    result->x = arg1->x + arg2->x;
    result->y = arg1->y + arg2->y;
    return (result);
}

/* Final function for complex average:
 * transform arg1 to polar coordinate form
 * R * e^(j*phi) and divide R by arg2.
 * Transform the new result back to cartesian
 * coordinates */
Complex *
complex_scalar_div(Complex *sum, int count)
{
    Complex *result;
    double R, phi;

    result = (Complex *) palloc(sizeof(Complex));
    /* transform to polar coordinates */
    R = hypot(sum->x, sum->y);
    phi = atan(sum->y / sum->x);
    /* divide by the scalar count */
    R = R / count;
    /* transform back to cartesian coordinates */
    result->x = R * cos(phi);
    result->y = R * sin(phi);
    return (result);
}

Next we create the shared library complex.so again, which will contain all functions defined in the previous examples as well as the new functions complex_add and complex_scalar_div:

$ gcc -I$PGROOT/include -fpic -c complex.c -o complex.o
$ gcc -shared -o complex.so complex.o

Now we have to make the functions needed by the new aggregates known to PostgreSQL. After that we define the two new aggregate functions complex_sum and complex_avg, which make use of the functions complex_add and complex_scalar_div:

testdb=> create function complex_add(complex, complex)
testdb->   returns complex
testdb->   as '/<where_ever_you_put_it>/complex.so'
testdb->   language 'c';
CREATE
testdb=> create function complex_scalar_div(complex, int)
testdb->   returns complex
testdb->   as '/<where_ever_you_put_it>/complex.so'
testdb->   language 'c';
CREATE
testdb=> create aggregate complex_sum (
testdb->   sfunc1 = complex_add,
testdb->   basetype = complex,
testdb->   stype1 = complex,
testdb->   initcond1 = '(0,0)'
testdb-> );
CREATE
testdb=> create aggregate complex_avg (
testdb->   sfunc1 = complex_add,
testdb->   basetype = complex,
testdb->   stype1 = complex,
testdb->   sfunc2 = int4inc,
testdb->   stype2 = int4,
testdb->   finalfunc = complex_scalar_div,
testdb->   initcond1 = '(0,0)',
testdb->   initcond2 = '0'
testdb-> );
CREATE

The aggregate function complex_sum is defined using only sfunc1. basetype is the type of the result of the aggregate function. The function complex_add is used as sfunc1, and stype1 defines the type sfunc1 will
operate on. initcond1 gives the initial value for the internal state of sfunc1 (here the complex zero '(0,0)').
What Is A Chiropractic Adjustment? A chiropractic adjustment is a precise, highly skilled movement applied by hand to a joint in your body. The adjustment helps to loosen the joint and restore its optimal function and proper movement. Many things we do in everyday life lead to spine and back misalignment. An adjustment properly aligns the joints, leading to significant improvements in the nervous system and in movement. It may also leave you more flexible and relaxed. If you are not in pain before an adjustment, you will generally have no soreness afterwards. Some people, however, do feel mildly sore after an adjustment, a feeling often likened to how muscles feel after a workout. This soreness is short-lived, and when it wears off you are left feeling stronger, less stiff and invigorated. Adjustments are also linked to a release of endorphins that you can benefit from. What Causes The Popping And Cracking In An Adjustment? There are times when the opposing surfaces that make up your joints become stuck together - not in a mechanical sense, but rather by suction. A good example is a suction cup stuck to a mirror, with the joint being the connection between the glass and the cup. When the cup is removed, a popping noise is made. The same thing happens with your joints: when the suction between the joint surfaces is released, a popping or cracking noise is made, just like the crack heard when knuckles are popped. During an adjustment, the noise you hear can be the release of gas held in the fluid of the joint capsule. This can happen with movements of only a few millimeters. Will It Hurt To Get A Chiropractic Adjustment? Everyone's body is different and will react differently to certain things, including chiropractic adjustments.
However, chiropractic medicine has a long and established history, and the adjustment procedures these professionals carry out have been extensively and rigorously researched. It is important to note that the most common side effect of these adjustments is minor soreness, which typically gives way to a general relief of pain. Alaska Back Care Center has the health care professionals you need to improve your health. Talk to an expert today by calling us.
Linear canonical transformation

Paraxial optical systems implemented entirely with thin lenses and propagation through free space and/or graded index (GRIN) media are quadratic phase systems (QPS). The effect of any arbitrary QPS on an input wavefield can be described using the linear canonical transform (LCT), a unitary, additive, four-parameter class of linear integral transform. Such transforms appeared a couple of times before Moshinsky and Quesne (1974) called attention to their significance in connection with canonical transformations in quantum mechanics. A particular case was developed by Segal (1963) and Bargmann (1961) in order to formalize Fock's boson calculus (1928). [K.B. Wolf, "Integral Transforms in Science and Engineering," Ch. 9: Canonical transforms, New York, Plenum Press, 1979.]

The LCT generalizes the Fourier, fractional Fourier, Laplace, Gauss-Weierstrass, Bargmann and Fresnel transforms as particular cases.

Mathematical Definition

There are several ways to parametrize the LCT. One common way is by a 2x2 matrix whose determinant equals 1:

  X_{(a,b,c,d)}(u) = \sqrt{-i}\, e^{-i\pi\frac{d}{b}u^2} \int_{-\infty}^{\infty} e^{-i2\pi\frac{1}{b}ut}\, e^{i\pi\frac{a}{b}t^2}\, x(t)\,dt,  when b ≠ 0,

  X_{(a,0,c,d)}(u) = \sqrt{d}\, e^{-i\pi cd\,u^2}\, x(du),  when b = 0,

where ad - bc = 1 must be satisfied.

Special Cases of LCT

Since the linear canonical transform is a general term covering other transforms, here are some of its special cases.

Fourier Transform

The Fourier transform is the special case

  \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}.

Fractional Fourier Transform

The fractional Fourier transform is the special case
  \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}.

Fresnel Transform

The Fresnel transform is equivalent to

  \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} 1 & \lambda z \\ 0 & 1 \end{bmatrix},

where z is the distance of propagation and λ is the wavelength.

Additivity Property of the LCT

If we denote the LCT by O_F^{(a,b,c,d)}, i.e., X_{(a,b,c,d)}(u) = O_F^{(a,b,c,d)}[x(t)], then

  O_F^{(a_2,b_2,c_2,d_2)} \left\{ O_F^{(a_1,b_1,c_1,d_1)}[x(t)] \right\} = O_F^{(a_3,b_3,c_3,d_3)}[x(t)],

where

  \begin{bmatrix} a_3 & b_3 \\ c_3 & d_3 \end{bmatrix} = \begin{bmatrix} a_2 & b_2 \\ c_2 & d_2 \end{bmatrix} \begin{bmatrix} a_1 & b_1 \\ c_1 & d_1 \end{bmatrix}.

Applications

Canonical transforms provide a fine tool for the analysis of a class of differential equations. These include the diffusion equation, the Schrödinger free-particle equation, the linear potential (free-fall) equation, and the attractive and repulsive oscillator equations. The class also includes a few others such as the Fokker-Planck equation. Although this class is far from universal, the ease with which solutions and properties are found makes canonical transforms an attractive tool for problems such as these. [K.B. Wolf, "Integral Transforms in Science and Engineering," Ch. 9 & 10, New York, Plenum Press, 1979.]

Wave propagation through air, a lens, and a dish are discussed below. All of the computations can be reduced to 2x2 matrix algebra. This is the spirit of the LCT.

Electromagnetic Wave Propagation

Assume the system looks like this: the wave travels from the plane (x_i, y_i) to the plane (x, y).
We can use the Fresnel transform to describe electromagnetic wave propagation in air:

  U_0(x,y) = -\frac{i}{\lambda} \frac{e^{ikz}}{z} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{j\frac{k}{2z}\left[(x-x_i)^2 + (y-y_i)^2\right]} U_i(x_i,y_i)\, dx_i\, dy_i,

where k = 2π/λ is the wave number, λ the wavelength, and z the distance of propagation. This is equivalent to an LCT (a shearing) with

  \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} 1 & \lambda z \\ 0 & 1 \end{bmatrix}.

The larger the travel distance z, the larger the shearing effect.

Spherical Lens

With the lens shown in the image above, and refractive index n, we get:

  U_0(x,y) = e^{ikn\Delta}\, e^{-j\frac{k}{2f}\left[x^2 + y^2\right]}\, U_i(x,y),

where f is the focal length and Δ is the thickness of the lens. The distortion introduced by passing through the lens is an LCT with

  \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ \frac{-1}{\lambda f} & 1 \end{bmatrix}.

This is also a shearing effect: the smaller the focal length, the larger the shearing effect.

Satellite Dish

A dish is equivalent to an LCT with

  \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ \frac{-1}{\lambda R} & 1 \end{bmatrix}.

This is very similar to a lens, except that the focal length is replaced by the radius R of the dish; the larger the radius, the larger the shearing effect.

Example

Consider the system shown in the following image: two dishes, one the emitter and the other the receiver, with the signal travelling a distance D between them.
First, for dish A (the emitter), the LCT matrix is:

  \begin{bmatrix} 1 & 0 \\ \frac{-1}{\lambda R_A} & 1 \end{bmatrix}.

Then, for dish B (the receiver), the LCT matrix is:

  \begin{bmatrix} 1 & 0 \\ \frac{-1}{\lambda R_B} & 1 \end{bmatrix}.

Last, we need to account for the propagation through air, with LCT matrix:

  \begin{bmatrix} 1 & \lambda D \\ 0 & 1 \end{bmatrix}.

If we put all the effects together (receiver after air after emitter), the LCT is:

  \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ \frac{-1}{\lambda R_B} & 1 \end{bmatrix} \begin{bmatrix} 1 & \lambda D \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ \frac{-1}{\lambda R_A} & 1 \end{bmatrix} = \begin{bmatrix} 1 - \frac{D}{R_A} & \lambda D \\ -\frac{1}{\lambda}\left(R_A^{-1} + R_B^{-1} - R_A^{-1} R_B^{-1} D\right) & 1 - \frac{D}{R_B} \end{bmatrix}.

See also

Other time-frequency transforms:
* Fractional Fourier transform
* Continuous Fourier transform
* Chirplet transform

References

* J.J. Ding, "Time-frequency analysis and wavelet transform course note," Department of Electrical Engineering, National Taiwan University (NTU), Taipei, Taiwan, 2007.
* K.B. Wolf, "Integral Transforms in Science and Engineering," Ch. 9 & 10, New York, Plenum Press, 1979.
* S.A. Collins, "Lens-system diffraction integral written in terms of matrix optics," J. Opt. Soc. Amer. 60, 1168-1177 (1970).
* M. Moshinsky and C. Quesne, "Linear canonical transformations and their unitary representations," J. Math. Phys. 12, 8, 1772-1783 (1971).
* B.M. Hennelly and J.T. Sheridan, "Fast Numerical Algorithm for the Linear Canonical Transform," J. Opt. Soc. Am. A 22, 5, 928-937 (2005).
* H.M. Ozaktas, A. Koç, I. Sari, and M.A. Kutay, "Efficient computation of quadratic-phase integrals in optics," Opt. Lett. 31, 35-37 (2006).
* Bing-Zhao Li, Ran Tao, Yue Wang, "New sampling formulae related to the linear canonical transform," Signal Processing 87, 983-990 (2007).
* A. Koç, H.M. Ozaktas, C. Candan, and M.A.
Kutay, "Digital computation of linear canonical transforms", "IEEE Trans. Signal Process.", vol. 56, no. 6, 2383-2394, (2008). Wikimedia Foundation. 2010. Нужно решить контрольную? Look at other dictionaries: • Canonical coordinates — In mathematics and classical mechanics, canonical coordinates are particular sets of coordinates on the phase space, or equivalently, on the cotangent manifold of a manifold. Canonical coordinates arise naturally in physics in the study of… …   Wikipedia • Canonical — is an adjective derived from . Canon comes from the Greek word kanon , rule (perhaps originally from kanna reed , cognate to cane ), and is used in various meanings. Basic, canonic, canonical : reduced to the simplest and most significant form… …   Wikipedia • Linear complex structure — In mathematics, a complex structure on a real vector space V is an automorphism of V that squares to the minus identity, −I. Such a structure on V allows one to define multiplication by complex scalars in a canonical fashion so as to regard V as… …   Wikipedia • Bogoliubov transformation — In theoretical physics, the Bogoliubov transformation, named after Nikolay Bogolyubov, is a unitary transformation from a unitary representation of some canonical commutation relation algebra or canonical anticommutation relation algebra into… …   Wikipedia • Projection (linear algebra) — Orthogonal projection redirects here. For the technical drawing concept, see orthographic projection. For a concrete discussion of orthogonal projections in finite dimensional linear spaces, see vector projection. The transformation P is the… …   Wikipedia • Generalized linear model — In statistics, the generalized linear model (GLM) is a flexible generalization of ordinary least squares regression. 
It relates the random distribution of the measured variable of the experiment (the distribution function ) to the systematic (non …   Wikipedia • Theorems and definitions in linear algebra — This article collects the main theorems and definitions in linear algebra. Vector spaces A vector space( or linear space) V over a number field² F consists of a set on which two operations (called addition and scalar multiplication, respectively) …   Wikipedia • Projective transformation — A projective transformation is a transformation used in projective geometry: it is the composition of a pair of perspective projections. It describes what happens to the perceived positions of observed objects when the point of view of the… …   Wikipedia • Trace (linear algebra) — In linear algebra, the trace of an n by n square matrix A is defined to be the sum of the elements on the main diagonal (the diagonal from the upper left to the lower right) of A, i.e., where aii represents the entry on the ith row and ith column …   Wikipedia • General linear group — Group theory Group theory …   Wikipedia Share the article and excerpts Direct link Do a right-click on the link above and select “Copy Link”
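The two-dish composition can be checked numerically. The sketch below (all parameter values are made up for illustration) multiplies the three 2×2 LCT matrices in the order dish A → free-space propagation over D → dish B, and verifies the elements of the composed matrix along with the unimodularity of the result.

```python
# Sketch (assumed parameter values) of composing the LCT (ABCD) matrices for a
# two-dish link: dish A (emitter) -> propagation over distance D -> dish B
# (receiver). Pure Python, with 2x2 matrices as nested lists.

def matmul(m1, m2):
    """Multiply two 2x2 matrices."""
    return [[sum(m1[i][k] * m2[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

lam = 0.03   # wavelength in metres (10 GHz microwave link; assumed)
R_A = 1.5    # radius of dish A in metres (assumed)
R_B = 2.0    # radius of dish B in metres (assumed)
D = 1000.0   # propagation distance in metres (assumed)

dish_A = [[1.0, 0.0], [-1.0 / (lam * R_A), 1.0]]
prop   = [[1.0, lam * D], [0.0, 1.0]]
dish_B = [[1.0, 0.0], [-1.0 / (lam * R_B), 1.0]]

# The receiver acts last, so its matrix goes on the left: M = B * P * A.
system = matmul(dish_B, matmul(prop, dish_A))

# Element-by-element check of the composed matrix.
assert abs(system[0][0] - (1 - D / R_A)) < 1e-9
assert abs(system[0][1] - lam * D) < 1e-9
assert abs(system[1][0] + (1.0 / lam) * (1/R_A + 1/R_B - D/(R_A*R_B))) < 1e-9
assert abs(system[1][1] - (1 - D / R_B)) < 1e-9

# Every LCT matrix is unimodular, so the product's determinant stays 1.
det = system[0][0] * system[1][1] - system[0][1] * system[1][0]
assert abs(det - 1.0) < 1e-9
```

Because each factor has determinant 1, any chain of lenses, dishes and free-space segments composed this way keeps determinant 1, which is a useful sanity check on hand-derived products.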
Controller Design and System Modelling

Modelling a Process

Introduction

There are two ways of approaching the problem of obtaining a mathematical representation or model of a chemical process, or indeed anything.

Simple Black-Box Models

Input-output models form the basis of most classical process control theory. They are usually subdivided according to whether they have one or more than one input and/or output. We will consider initially only single input, single output (SISO) models, although some ideas associated with multiple input-output models will be touched on elsewhere in the course. The basic SISO model can be thought of as relating an output y to an input u. In general both of these quantities will change with time; the model must represent how y responds to changes in its input or inputs.

Typical Responses

Suppose an input u is given a step change at some time, as shown in the figure. Observations of typical `processes', from aircraft to papermills, suggest that there are three main types of behaviour which may be seen in an output y.

Instantaneous response

The first typical response is called the instantaneous response. In this case y also responds in a step, but in general of different size to that in u (in any case y will normally have different dimensions to u) as shown below. The simplest mathematical relationship is of the form:

y = K u

Classical control theory assumes that behaviour can be represented by linear equations like the above, and so this is the only type of equation required to represent this type of behaviour. In the above equation K is called the Gain of the process or model.

Lagging response

Here y starts to change the moment that u changes, but the full extent of the response `lags' behind the disturbance. After a while, y will have responded fully.
The simplest mathematical form which provides this behaviour is an ordinary differential equation with time t as the independent variable, having the form:

τ dy/dt + y = K u

Here K is, as before, the gain, and τ is called the Time Constant of the equation, system or model. Because it is described by a single first order o.d.e. this is called a First Order model, system, lag or response. The interpretation of these parameters is described below.

Delayed response

When u changes, no immediate change in y is observed. However after a time T, y responds completely to the change in u as in the instantaneous response case. Mathematically this is represented by a time-delay equation:

y(t) = K u(t - T)

This does not have simple analytical properties, but is easily understood by chemical engineers as corresponding to a plug flow or pipeline system with residence time T. It is also referred to as a time delay or pure time delay system.

Representing Complex Responses

Complex systems may be reasonably well approximated by combinations of the above three elements. Models of such systems can be assembled as networks of the elements as shown below. Analytical and numerical techniques are available to work with models constructed in this way.

Theoretical Response

Classical control theory constructs all its models from sets of linear ordinary differential equations. (The instantaneous response is the limiting case of the o.d.e. where τ is zero, and the plug flow delay, like the plug flow reactor, is the limit of an infinite number of first order lags.) There is no good physical reason why a real process should be well represented by such a set of equations, except that in the limit of infinitesimally small changes, all nonlinear equations approximate to linear ones. However, the theoretical advantage of linear representation is twofold. Firstly, the whole system may be represented by o.d.e.s, whereas if there were any nonlinear algebraic equations a mixed set of differential-algebraic equations would be required.
Further, a system of linear differential equations always has an analytical solution, but more particularly, is amenable to various other types of analysis which cannot be performed on nonlinear equations. The tuning methods for controllers described later make use of this type of analysis to obtain generalised equations for suitable controller settings in terms of parameters of a process model written in terms of the above three types of behaviour. This is not possible for nonlinear systems. It should be stressed that if we wish to simulate the behaviour of a process, which requires only the solution of the relevant equations, and not their analysis, then there is no particular point in approximating it with this type of simplified approximate model. A `real' model should be constructed, as discussed later, and solved.

Let us look again at the differential equation which describes first order behaviour. It is possible to solve this equation analytically, for a step change of size Δu in the input, to obtain the expression:

y(t) = y(0) + K Δu (1 - e^(-t/τ))

Note that a graph of this equation gives the response curve shown above under the section on the lag response.

The first thing to consider is: what is the change in y? This equation can now be used directly to calculate the new value of the output variable if the change in u, the gain and the time constant are all known. Otherwise it is necessary to estimate values for the gain and time constant as shown below.

Analysis of Response

It will be shown later in the section on tuning controllers that it is useful to be able to look at the open loop response of a process and try to estimate the values of the gain and time constant. Below are notes on how to do this, and then you can try it for yourself in the exercises associated with this part of the module.

Estimating the Gain

K is known as the gain. It tells us how much the output variable will change per unit change in the input variable. A large gain implies a large change in y for a given change in u and hence leads to a quicker response.
To calculate its value we have to consider the system going from one steady state value to another. Thus we can see what effect a change in u has on the value of y. After the system has settled down following the step disturbance:

Δy = K Δu

so

K = Δy / Δu

as shown in the graph below. From this we can see that it is a simple calculation to evaluate the gain of the process given the changes in u and y.

Estimating the Time Constant

τ is the time constant for the process. This is related to the speed of response of the system. The diagram below shows a graphical method of evaluating its value.

1. The first stage is to draw the initial slope
2. Then the final steady state value is drawn
3. The time at which these two lines intersect is the value of the time constant

Note that this is also the time taken for the output value to travel 63% of the distance to its new value, since at t = τ:

y(τ) - y(0) = K Δu (1 - e^(-1)) ≈ 0.632 Δy

Changing the Gain and Time Constant

Finally, how does the response change when K and τ are altered but the change in u stays the same? The diagram below shows that changing τ alters the initial slope of the response, while changing K alters the final steady state value.
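The two estimation rules above can be checked against the analytical step response. The sketch below (gain, time constant and step size are all assumed values) confirms that the gain recovered from the settled response equals K, and that after one time constant the output has covered about 63.2% of its total change.

```python
# Minimal sketch (assumed numbers) of the first-order lag response
# tau * dy/dt + y = K * u, stepped from one steady state to another.
# Checks the two estimation rules: K = delta_y / delta_u at the new
# steady state, and ~63.2% of the change is covered after one tau.

import math

K = 2.5      # process gain (assumed)
tau = 4.0    # time constant, in minutes (assumed)
du = 1.5     # size of the step in the input u (assumed)

def step_response(t):
    """Analytical y(t) for a step of size du at t = 0, starting from y = 0."""
    return K * du * (1.0 - math.exp(-t / tau))

# After the system settles, delta_y = K * delta_u, so K = delta_y / delta_u.
dy = step_response(1e6)          # effectively the new steady state
gain_estimate = dy / du
assert abs(gain_estimate - K) < 1e-6

# At t = tau the response has covered 1 - e^-1 ~= 63.2% of the total change.
fraction_at_tau = step_response(tau) / dy
assert abs(fraction_at_tau - (1.0 - math.exp(-1.0))) < 1e-9
```

The same check works on plant data: read Δy/Δu off the settled trend for the gain, then find the time at which the response crosses 63.2% of Δy for the time constant.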
6. Review SEM - What is a Review SEM?

A Defect Review SEM is a Scanning Electron Microscope (SEM) that is configured to review defects found on a wafer. A defect detected by a semiconductor wafer defect inspection system is enlarged using a Review SEM to a high magnification image so that it can be reviewed and classified. A Defect Review SEM is mainly used together with the inspection systems in the production lines of electronic devices and other semiconductors.

Review SEM role

A Review SEM generally works in the following way:

1. Wafer defects are detected using an inspection system in advance. The inspection system lists the position coordinates of the defects and outputs them to a file.
2. The inspected wafer and the file of the inspection result are loaded into the Review SEM.
3. Images of the defects on the list are taken. The defect position is determined based on the position information from the defect list. Images of the defects are then taken and stored by the Review SEM.

Several thousand to several tens of thousands of defects are detected using the inspection system, and the data is output to a file. Whether to review and photograph all or only some of the defects can be specified in the operation settings of a Review SEM recipe. Sometimes a defect on a wafer cannot be found using the position information in the defect data file: because of various errors, it is not easy to find a defect using the position information alone. In a defect inspection system, the defect image is compared with the adjacent die image (reference image) and the defect is detected from the image difference (difference image processing).
The Review SEM, like the defect inspection system, detects the defect by comparison with the circuit pattern of the adjacent die and obtains the correct position of the defect. The defect is then moved to the center of the field of view and an enlarged photo is taken of it. In the case of defect review of memory ICs, in which cell patterns are arranged repeatedly, the image of the minimum unit of a cell is registered as the reference image in advance. One method of detecting a defect on the Review SEM is to compare the image of the defect with this reference image using difference image processing. This method can speed up defect detection on the Review SEM because multiple sections can be compared with the same reference image.

ADR function

ADR stands for Automatic Defect Review. The aim of defect review is to observe, classify and analyze the shape and components of the defects and particles detected by the wafer inspection system in greater detail. Automatic Defect Review automatically obtains an image of the desired defect using the defect information (coordinates, etc.) obtained during defect inspection. The data is stored and arranged in a database. In a Defect Review SEM, an image of the defect is automatically obtained and stored using the ADR function.

ADC function

ADC stands for Automatic Defect Classification. The image information of the defects stored in the image server is classified according to the cause of the defect by classification software based on predetermined rules, and is then stored in the classification server. The classified information is sent to the Yield Management System (YMS) and the host computer of the IC manufacturer so that it can be used in failure and defect analysis. Some systems can classify defects using ADC in conjunction with the ADR function of the Defect Review SEM. The defect information obtained by ADR can also be classified collectively at a later stage.
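The difference-image idea described above can be reduced to a few lines of illustrative code. The toy sketch below is my own, not Hitachi's algorithm: two tiny grey-level "images" are compared pixel by pixel against a reference of the adjacent die (or of a cell's minimum repeating unit), and any pixel whose difference exceeds a threshold is flagged as the defect location.

```python
# Toy sketch of difference image processing: a defect image is compared
# pixel-by-pixel with a reference image, and pixels whose grey-level
# difference exceeds a threshold are flagged. All values are made up.

reference = [
    [10, 10, 10, 10],
    [10, 50, 50, 10],
    [10, 50, 50, 10],
    [10, 10, 10, 10],
]
defect_image = [
    [10, 10, 10, 10],
    [10, 50, 50, 10],
    [10, 50, 90, 10],   # one bright pixel: the defect
    [10, 10, 10, 10],
]

THRESHOLD = 20  # grey-level difference treated as a real defect (assumed)

defect_pixels = [
    (row, col)
    for row in range(len(reference))
    for col in range(len(reference[0]))
    if abs(defect_image[row][col] - reference[row][col]) > THRESHOLD
]

# A single pixel stands out; this is where the stage would centre the
# field of view before taking the enlarged photo.
assert defect_pixels == [(2, 2)]
```

For memory ICs, the same `reference` can be reused against every repeat of the cell pattern, which is why registering the minimum cell unit once speeds up detection.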
Reference Products

Introducing the product lineup of Defect Review SEMs and Defect Inspection Systems:

Wafer Surface Inspection System LS Series — a wafer surface inspection system that detects various types of small defects on non-patterned wafers of next generation devices.

Dark Field Wafer Defect Inspection System IS Series — delivers high detection sensitivity and high inspection throughput, enabling yield improvement and production cost reduction.
Thermodynamics Heat Transfer and Fluid Flow Handbook

Fundamentals training

The Thermodynamics, Heat Transfer, and Fluid Flow Fundamentals Handbook was developed to help nuclear facility operating contractors provide operators, maintenance personnel, and the technical staff with the fundamentals training necessary to ensure a basic understanding of the thermal sciences. The handbook includes information on thermodynamics and the properties of fluids; the three modes of heat transfer – conduction, convection, and radiation; and fluid flow and the energy relationships in fluid systems. This information will provide personnel with a foundation for understanding the basic operation of various types of DOE nuclear facility fluid systems.

Thermodynamic properties

Thermodynamic properties describe measurable characteristics of a substance. A knowledge of these properties is essential to the understanding of thermodynamics.

EO 1.1 DEFINE the following properties: 1. Specific volume 2. Density 3. Specific gravity 4. Humidity

EO 1.2 DESCRIBE the following classifications of thermodynamic properties: 1. Intensive properties 2. Extensive properties

Mass and Weight

The mass (m) of a body is the measure of the amount of material present in that body. The weight (wt) of a body is the force exerted by that body when its mass is accelerated in a gravitational field. Mass and weight are related as shown in Equation 1-1:

wt = mg/gc    (1-1)

where g is the acceleration due to gravity and gc is the gravitational conversion constant (32.17 ft-lbm/lbf-sec² in English units).

Title: Thermodynamics Heat Transfer and Fluid Flow Handbook
Format: PDF
Size: 3.5 MB
Pages: 300
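As a quick illustration of these property definitions, here is a small sketch with made-up values in SI units (where the mass-weight relation reduces to wt = m·g, since gc is 1 in SI):

```python
# Small sketch (made-up values, SI units) of the properties defined above:
# specific volume, density, specific gravity, and the mass-weight relation.

g = 9.81            # gravitational acceleration, m/s^2
RHO_WATER = 1000.0  # density of water at reference conditions, kg/m^3

mass = 50.0         # kg (assumed)
volume = 0.04       # m^3 (assumed)

density = mass / volume           # rho = m / V
specific_volume = volume / mass   # v = V / m = 1 / rho
specific_gravity = density / RHO_WATER
weight = mass * g                 # wt = m * g, in newtons

assert abs(density - 1250.0) < 1e-9
assert abs(specific_volume * density - 1.0) < 1e-12
assert abs(specific_gravity - 1.25) < 1e-9
assert abs(weight - 490.5) < 1e-9
```

Note that density and specific volume are intensive properties (independent of the amount of substance), while mass, volume and weight are extensive.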
What are the Organs of the Digestive System?

Written By: Allison Boelcke
Edited By: Bronwyn Harris
Last Modified Date: 26 September 2017
Copyright Protected: 2003-2017 Conjecture Corporation

The digestive system is a series of organs within the body that work together to break down food. When food is chewed and swallowed, the body is not able to use it immediately as nourishment because the molecules in the food are too large to be absorbed effectively. Once food is consumed, it must be broken down into a simpler form so the body can extract the various nutrients from it. During digestion, food may be processed into nutrients for the body to use for energy or converted into waste and removed from the body. The digestive system comprises a wide variety of organs, each with a distinctive function in converting, absorbing, and expelling food.

The mouth is the beginning of the organs of the digestive system. When food is consumed, it is first chewed in order to break it down into pieces that are easier to swallow. During the chewing process, the food is mixed with saliva, a clear liquid produced by the salivary glands. Saliva not only moistens the food to make swallowing easier, but also contains an enzyme that breaks down the starch in food into smaller molecules that can be moved through the body. Once the food is moistened with saliva, it is swallowed and pushed down the throat into another of the organs of the digestive system, the esophagus. The esophagus is a tube that connects the throat to the stomach. As the food moves through the esophagus, the organ's walls contract and push the food down into the stomach.
The stomach is one of the primary organs of the digestive system. Once food reaches the stomach from the esophagus, the stomach muscles relax in order to allow the food inside, where it is then mixed with digestive juices made by the stomach. These digestive juices assist in breaking down the food into even smaller molecules before pushing it along into the small intestine. The small intestine is a tube coiled beneath the stomach that is responsible for extracting from the food the nutrients the body can use for energy, and separating them from the parts that cannot be used. This extraction is performed with the assistance of digestive juices supplied by three other organs of the digestive system: the liver, gallbladder, and pancreas. The liver, an organ located in the upper right portion of the abdomen, produces bile, a liquid that can extract fat from foods. The bile is moved to the gallbladder, where it is kept until it is needed for digestion. The pancreas, an organ located near the stomach, produces juices that extract carbohydrates, protein, and fat from foods.

The nutrients and the remaining food separate and go into two different organs of the digestive system. Fats, proteins, and carbohydrates are moved to the liver, where they are stored or distributed throughout the body for energy. All remaining food is transported through the large intestine, a coiled tube beneath the small intestine, and into the colon, the bottom of the large intestine, where any water in the food is absorbed. The food becomes solidified and is transported into the rectum, the tube between the large intestine and the anus, an opening at the end of the rectum that allows the solid waste to be pushed out of the body, completing the digestion process.
By Juan-Pierre Pieri

What is the most optimal training style for hypertrophy?

In order for a hypertrophy stimulus to occur, we need a) high threshold motor units to be recruited & b) the muscle fibers they innervate to experience mechanical tension. Both of those requirements are influenced by fatigue during the workout & by whatever workout was done the day before, etc. So it goes without saying that the exercise you do first in the week (or after 1-2 rest days) or first in the session will have the largest stimulus for growth, as the central nervous system is excitable & the muscles themselves are refreshed as well.

The training method that has been shown to promote the most growth in studies is Straight Sets. This means you do one set of an exercise, rest, and repeat. This results in more growth stimulus for a number of reasons:

1. The central motor command only facilitates that single set & then recovers.
2. The central motor command can achieve full motor unit recruitment at each set, as we delay supraspinal fatigue.
3. The muscle tissues themselves can clear out metabolites after each set, delaying peripheral fatigue. Peripheral fatigue directly affects the fibers' ability to produce force.

It's important to note that EACH hard working set done during the session will receive less & less growth stimulus compared to the first working set. So your first working set in a workout will deliver more stimulus than your last working set, as both CNS & peripheral fatigue increase.

What if you superset two different muscle groups? It doesn't change the fact that a) there is only one motor cortex that governs motor unit recruitment & b) even though metabolites aren't accumulating in the same muscle group, afferent feedback to the motor cortex due to metabolites reduces the magnitude of motor unit recruitment.

The second factor that plays into optimal hypertrophy training is rest duration between sets.
Studies have confirmed quite well that 3 minute rest periods between straight sets result in more muscle growth than 1 minute rest periods. 3 minutes seems absurd when we speak about hypertrophy, because for the longest time we have heard that we need time under tension, but what good is time spent under tension when the muscle fibers responsible for growth aren't even experiencing half the stimulus required for growth? When we look at the physiology of hypertrophy & fatigue, we can see why 3 minutes of rest delivers more growth: the supraspinal, spinal & peripheral fatigue mechanisms in play can be delayed. In fact, if you rested 1 minute between sets, you'd need close to DOUBLE the amount of volume to get the same growth as with 3 minute rests. This poses a new problem: do you do twice as much volume for the same growth, while also accumulating and having to recover from twice the fatigue, which can impair your next few sessions?

The third factor that plays into optimal hypertrophy is the amount of working sets PER SESSION. We need to remember that hypertrophy works on a dose dependent spectrum, & studies have shown that beyond a certain threshold, hypertrophy either stops or regresses. So far, 6-8 WORKING SETS per session seems to stimulate the largest growth potential. This is generally seen in the same muscle group; however, if we consider the fatigue mechanisms accumulated from the central motor command & peripheral fatigue, it's likely that it's ONLY the FIRST 6-8 working sets in the entire session that matter. For example, you can't do 6 sets for glutes & then 6 sets for quads & expect both groups to grow to the same degree. So we know 6-8 sets per session is a great dose to stimulate hypertrophy, but what about the sets per week? While there is no specific answer, it appears to be between 12-14 working sets per week.
Myofibrillar protein synthesis (MyoPS) appears to last around 48 hours, so in reality you could train the same muscle group every 3 days to continuously stimulate MyoPS, so long as the total weekly volume sits between 12-14 sets.

The fourth factor that contributes to hypertrophy is going to be your priority of growth. Muscles trained first in the week & first in the workout, while everything is fresh, will have the largest growth stimulus. If you want to emphasize your calves, for example, you'd see better results training them first in the week and first in that session. This is obviously entirely up to you and what you want to give more focus.

To summarize:
1. Programme straight sets.
2. Rest 3 minutes between sets.
3. Do 6-8 hard working sets per session.
4. Keep weekly working sets between 12-14.
5. Prioritise the muscles you want to grow first in the week & first in the session.

Remember, methods like drop sets, pre-exhaustion, myo reps, clusters & so on don't create MORE hypertrophy stimulus. In fact, every subsequent set after the first one will deliver less stimulus as fatigue increases. These things can be applied IF you're short on time, but they will not produce more hypertrophy & will increase fatigue, which may carry over to your next training session.
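The summary above boils down to a handful of numeric rules, so a weekly plan can be sanity-checked mechanically. The helper below is my own illustration (the function and plan data are hypothetical, not from the article): it flags sessions outside 6-8 hard sets, weekly totals outside 12-14 sets, and gaps longer than ~3 days that let the 48-hour MyoPS window lapse.

```python
# Hypothetical helper that checks a weekly plan for ONE muscle group
# against the rules of thumb summarised above: 6-8 hard sets per session,
# 12-14 total sets per week, and sessions no more than ~3 days apart so
# the ~48 h myofibrillar-protein-synthesis window keeps being restimulated.

def check_week(sessions):
    """sessions: list of (day_of_week, hard_sets) tuples for one muscle group."""
    problems = []
    for day, sets in sessions:
        if not 6 <= sets <= 8:
            problems.append(f"day {day}: {sets} sets is outside 6-8 per session")
    total = sum(sets for _, sets in sessions)
    if not 12 <= total <= 14:
        problems.append(f"weekly total {total} is outside 12-14 sets")
    days = sorted(day for day, _ in sessions)
    for earlier, later in zip(days, days[1:]):
        if later - earlier > 3:
            problems.append(f"gap of {later - earlier} days lets the MPS window lapse")
    return problems

# Two sessions of 7 hard sets, three days apart: passes every check.
assert check_week([(1, 7), (4, 7)]) == []

# One big session of 14 sets: right weekly volume, wrong per-session dose.
assert check_week([(1, 14)]) == ["day 1: 14 sets is outside 6-8 per session"]
```

A plan that passes all three checks matches points 1, 3 and 4 of the summary; rest duration and exercise order still have to be handled in the session itself.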
Preface

In development, we need to monitor changes to a data table by listening to MySQL's binlog file. Because MySQL is deployed in a Docker container, we also need to deal with data volumes.

1. Start a MySQL image with a data volume

docker run -p 3307:3306 --name myMysql -v /usr/docker/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 -d mysql:5.7.25

Note: you need to create a directory on the host in advance for saving the MySQL data; the directory created here is /usr/docker/mysql/data. /var/lib/mysql is MySQL's fixed data directory inside the image; we generally don't need to change it, just keep the default.

2. Connect to MySQL and test

Connect with a client tool and check whether the MySQL binlog is enabled. You can see that the log function is not enabled at this point. You can also look at the mount directory on the host to confirm.

3. Enable the binlog

Execute the following commands in sequence:

docker exec myMysql bash -c "echo 'log-bin=/var/lib/mysql/mysql-bin' >> /etc/mysql/mysql.conf.d/mysqld.cnf"
docker exec myMysql bash -c "echo 'server-id=123454' >> /etc/mysql/mysql.conf.d/mysqld.cnf"

4. Restart the MySQL container

docker restart myMysql

5. Create a database, a table, and a row of data

Then look at the binlog again: this time the log files have been generated. You can also check the host mount directory; the log files now appear there.

Epilogue

The original intention of this article was to quickly set up MySQL and find a way to watch its binlog, which will later be used to detect data changes in tables. Installing and starting MySQL with Docker is convenient, but since this was a first attempt, I found while consulting material online that the content of most blog posts had not been verified, or that the described effect was difficult to reproduce.
Finally, the executable steps above are the summary that worked, provided for future reference and use. A friendly tip: please evaluate material found online carefully! Thanks for reading!
le_cfg_interface.h

/*
 * ====================== WARNING ======================
 *
 * THE CONTENTS OF THIS FILE HAVE BEEN AUTO-GENERATED.
 * DO NOT MODIFY IN ANY WAY.
 *
 * ====================== WARNING ======================
 */

/**
 * @page c_config Config Tree API
 *
 * @ref le_cfg_interface.h "API Reference" <br>
 *
 * The Config Tree is a non-volatile noSQL database that stores configuration values for apps.
 * By default each app is given read access to a single tree that has the same name as the app.
 * Any other access permissions (read access for any other tree, or write access to any tree at all)
 * must be explicitly granted in the app's @ref defFilesAdef_requiresConfigTree ".adef file".
 *
 * Trees are created automatically the first time that they are accessed by an app or a component.
 * The app's tree will be named the same as the name of the app. It is possible for apps to be given
 * access to other apps' trees, or for multiple apps to share one tree.
 *
 * @note If your app is running as root then the configuration will get added to the
 * System Tree by default. System utilities also use the Config Tree and store their
 * configurations in the @c system tree.
 *
 * Apps are able to search over the tree for data, although it's generally expected that the location
 * will be known to the app and that the app will point to the right node to retrieve the value.
 *
 * More on the @ref conceptsConfig "Config Strategy" for the Legato AF.
 *
 * @section cfg_Overview Overview
 *
 * The tree is broken down into stems and leaves.
 *
 * A stem is a node that has at least one child node. A leaf has no children and holds a value.
 *
 * @code
 * +tree
 * |
 * +--stem
 * | |
 * | +--leaf (value)
 * |
 * +-stem
 * |
 * +--leaf (value)
 * |
 * +--leaf (value)
 * @endcode
 *
 * Paths in the tree are traditional Unix style paths and each node is separated by a /:
 *
 * @code /stem/stem/leaf @endcode
 *
 * If no app is specified as the root of the tree, then the app will search in its own tree.
 * To get to another tree, reference that tree followed by a colon.
 *
 * @verbatim
 /path/to/my/tree/value                # references the default app's tree
 secondTree:/path/to/secondTree/value  # references the second tree
 @endverbatim
 *
 * @note It's recommended to store anything complex using stems and leaves; this enhances
 * readability and debugging. It also helps to sidestep nasty cross platform alignment issues.
 *
 * Apps must explicitly give permissions to other apps before they can access their Config Tree data.
 * This is done in the @ref defFilesAdef_requiresConfigTree ".adef file".
 * Each transaction is only able to iterate over one tree at a time; each tree that you want to read
 * or write to must be created as a separate transaction.
 *
 * The Config Tree supports storing the following types of data, and each has its own get/set
 * function as well as a quick set/get function (see @ref cfg_quick):
 * - string
 * - binary (array of bytes)
 * - signed integer
 * - boolean
 * - 64bit floating point
 * - empty values.
 *
 * Each transaction has a global timeout setting (default is 30s). The configuration is
 * located in the System Tree and may be configured with the @ref toolsTarget_config "config"
 * target tool.
 *
 * @verbatim
 config /configTree/transactionTimeout 10 int  # changes the timeout to 10s
 @endverbatim
 *
 * @section cfg_trans Transactions
 *
 * @subsection cfg_transConcepts Key Transaction Concepts
 *
 * - All transactions are sent to a queue and processed in a sequence.
 * - Only one write transaction may be active at a time and subsequent writes are queued
 *   until the first is finished processing.
 * - Transactions may contain multiple read or write requests within a single transaction.
 * - Multiple read transactions may be processed while a write transaction is active.
 * - Quick (implicit) read/writes can be created and are also sequentially queued.
 *
 * @subsection cfg_createTrans Create Transactions
 *
 * To make a change to the tree, you must Create a write transaction, call one or more Set
 * functions, and Commit the transaction. If a write transaction is canceled instead of committed,
 * then the changes will be discarded and the tree will remain unchanged.
 *
 * To read from a tree without making any changes, you should:
 * - create a read transaction,
 * - call the Get functions,
 * - cancel the transaction when finished.
 *
 * You could also:
 * - create a write transaction,
 * - perform only Get operations,
 * - cancel the transaction
 *
 * @note It's safer to use a read transaction when there is no intention to change the tree.
 *
 * Transactions must not be kept open for extended periods of time. If a transaction is kept open
 * for longer than the transaction time limit (default is 30 seconds), then the Config Tree will
 * cancel the transaction and drop the connection to the offending client (most likely causing the
 * client process to terminate).
 *
 * | Function                  | Action                                                             |
 * | ------------------------- | ------------------------------------------------------------------ |
 * | @c le_cfg_CreateReadTxn() | Opens the transaction                                              |
 * | @c le_cfg_CancelTxn()     | Closes a read/write transaction and does not write it to the tree  |
 * | @c le_cfg_CommitTxn()     | Closes a write transaction and queues it for commit                |
 *
 * @subsection cfg_transNavigate Navigating the Tree
 *
 * To move around within the Tree, you can move directly to a specific node (leaf or stem) and then
 * do your read or write from that point. Functions have been added to easily navigate through the
 * Tree. All nodes can be referenced either by their absolute or relative paths.
 *
 * | Function                    | Action                                                                          |
 * | --------------------------- | ------------------------------------------------------------------------------- |
 * | @c le_cfg_GoToNode()        | Moves to the location specified                                                 |
 * | @c le_cfg_GoToParent()      | Moves to the parent of the current node (moves up the Tree)                     |
 * | @c le_cfg_GoToFirstChild()  | Moves to the first child of the current node (moves down the Tree)              |
 * | @c le_cfg_GoToNextSibling() | Moves to the next node on the same level as the current node (moves laterally)  |
 *
 * @subsection cfg_transGetInfo Retrieving Node Information
 *
 * The Config Tree also contains functions to help you identify your current location in the tree,
 * what node you are currently pointing at, and what type of data is contained in the current node.
 *
 * | Function                | Action                                                                          |
 * | ----------------------- | ------------------------------------------------------------------------------- |
 * | @c le_cfg_GetPath()     | Gets the location of where you are in the Tree                                  |
 * | @c le_cfg_GetNodeType() | Gets the data type of the node where you are currently located                  |
 * | @c le_cfg_GetNodeName() | Gets the name of the node where you are in the Tree (does not include the path) |
 *
 * @subsection cfg_read Read Transactions
 *
 * Each data type has its own get function to read a value from a node within the Tree.
 *
 * | Function              | Action                         |
 * | --------------------- | ------------------------------ |
 * | @c le_cfg_GetString() | Reads the string's value       |
 * | @c le_cfg_GetBinary() | Reads the array of bytes       |
 * | @c le_cfg_GetInt()    | Reads the integer's value      |
 * | @c le_cfg_GetFloat()  | Reads the floating point value |
 * | @c le_cfg_GetBool()   | Reads the boolean value        |
 *
 * To perform a read from a Tree, open a transaction, move to the node that you want to
 * read from, read the node, and then cancel the transaction.
 *
 * Sample read transaction (with error checking):
 *
 * @code
 * le_result_t GetIp4Static    // reads the IP address values from the Config Tree
 * (
 *     const char* interfaceNamePtr,
 *     char* ipAddrPtr,
 *     size_t ipAddrSize,
 *     char* netMaskPtr,
 *     size_t netMaskSize
 * )
 * {
 *     // Change current tree position to the base ip4 node.
 *     char nameBuffer[LE_CFG_STR_LEN_BYTES] = { 0 };
 *
 *     // Return errors for out-of-bounds conditions.
 *     int r = snprintf(nameBuffer, sizeof(nameBuffer), "/system/%s/ip4", interfaceNamePtr);
 *     if (r < 0)
 *     {
 *         return LE_FAULT;
 *     }
 *     else if (r >= sizeof(nameBuffer))
 *     {
 *         return LE_OVERFLOW;
 *     }
 *
 *     // Open up a read transaction on the Config Tree.
 *     le_cfg_IteratorRef_t iteratorRef = le_cfg_CreateReadTxn(nameBuffer);
 *
 *     if (le_cfg_NodeExists(iteratorRef, "") == false)
 *     {
 *         LE_WARN("Configuration not found.");
 *         le_cfg_CancelTxn(iteratorRef);
 *         return LE_NOT_FOUND;
 *     }
 *
 *     // Returns the IP Address value stored in the Config Tree.
 *     le_result_t result = le_cfg_GetString(iteratorRef, "addr", ipAddrPtr, ipAddrSize, "");
 *     if (result != LE_OK)
 *     {
 *         le_cfg_CancelTxn(iteratorRef);
 *         return result;
 *     }
 *
 *     // Returns the NetMask value stored in the Config Tree.
 *     result = le_cfg_GetString(iteratorRef, "mask", netMaskPtr, netMaskSize, "");
 *     if (result != LE_OK)
 *     {
 *         le_cfg_CancelTxn(iteratorRef);
 *         return result;
 *     }
 *
 *     // Close the transaction and return success.
 *     le_cfg_CancelTxn(iteratorRef);
 *
 *     return LE_OK;
 * }
 * @endcode
 *
 * @note Any writes done will be discarded at the end of the read transaction.
 *
 * @subsection cfg_write Write Transactions
 *
 * Each data type has its own set function to write a value to a node within the Tree. Before you
 * are able to write to a tree, permissions must be set in the app's
 * @ref defFilesAdef_requiresConfigTree ".adef's requires section" or with the
 * @ref toolsTarget_config tool.
 *
 * | Function              | Action                          |
 * | --------------------- | ------------------------------- |
 * | @c le_cfg_SetString() | Writes the string's value       |
 * | @c le_cfg_SetBinary() | Writes the array of bytes       |
 * | @c le_cfg_SetInt()    | Writes the integer's value      |
 * | @c le_cfg_SetFloat()  | Writes the floating point value |
 * | @c le_cfg_SetBool()   | Writes the boolean value        |
 *
 * To perform a write to a Tree, open a transaction, move to the node that you want to
 * write to, write to the node, and then commit the transaction.
 *
 * Sample write transaction (with error checking):
 *
 * @code
 * // Store IPv4 address/mask in a string representation
 * void SetIp4Static
 * (
 *     const char* interfaceNamePtr,
 *     const char* ipAddrPtr,
 *     const char* netMaskPtr
 * )
 * {
 *     // Change current tree position to the base ip4 node.
 *     char nameBuffer[LE_CFG_STR_LEN_BYTES] = { 0 };
 *
 *     int r = snprintf(nameBuffer, sizeof(nameBuffer), "/system/%s/ip4", interfaceNamePtr);
 *     LE_ASSERT((r >= 0) && (r < sizeof(nameBuffer)));
 *
 *     // Create a write transaction so that we can update the tree.
 *     le_cfg_IteratorRef_t iteratorRef = le_cfg_CreateWriteTxn(nameBuffer);
 *
 *     le_cfg_SetString(iteratorRef, "addr", ipAddrPtr);
 *     le_cfg_SetString(iteratorRef, "mask", netMaskPtr);
 *
 *     // Commit the transaction to make sure these new settings get written to the tree.
 *     le_cfg_CommitTxn(iteratorRef);
 * }
 *
 * // Store IPv4 address/mask in a binary representation (array of 4 bytes)
 * void SetIp4StaticAsBinary
 * (
 *     const char* interfaceNamePtr,
 *     const uint8_t* ipAddrPtr,
 *     const uint8_t* netMaskPtr
 * )
 * {
 *     // Change current tree position to the base ip4 node.
 *     char nameBuffer[LE_CFG_STR_LEN_BYTES] = { 0 };
 *
 *     int r = snprintf(nameBuffer, sizeof(nameBuffer), "/system/%s/ip4", interfaceNamePtr);
 *     LE_ASSERT((r >= 0) && (r < sizeof(nameBuffer)));
 *
 *     // Create a write transaction so that we can update the tree.
 *     le_cfg_IteratorRef_t iteratorRef = le_cfg_CreateWriteTxn(nameBuffer);
 *
 *     le_cfg_SetBinary(iteratorRef, "addr", ipAddrPtr, 4);
 *     le_cfg_SetBinary(iteratorRef, "mask", netMaskPtr, 4);
 *
 *     // Commit the transaction to make sure these new settings get written to the tree.
 *     le_cfg_CommitTxn(iteratorRef);
 * }
 * @endcode
 *
 * @note Creating a write transaction creates a temporary working copy of the tree for use within
 * the transaction. All read transactions running in the meantime see the committed state, without
 * any of the changes that have been made within the write transaction.
 *
 * @subsection cfg_transDelete Deleting a Node
 *
 * You can also delete a node from the tree. A word of caution: deleting a node will
 * automatically delete all of its child nodes as well.
 *
 * | Function               | Action                            |
 * | ---------------------- | --------------------------------- |
 * | @c le_cfg_DeleteNode() | Deletes the node and all children |
 *
 * @section cfg_quick Quick Read/Writes
 *
 * Another option is to perform a quick read/write, which implicitly wraps the function in an
 * internal transaction. This is ideal if all you need to do is read or write some simple values
 * to the default app tree.
 *
 * Quick reads and writes work almost identically to the transactional versions, except that quick
 * reads don't explicitly take an iterator object. The quick functions internally use an
 * implicit transaction. This implicit transaction wraps one get or set, and does not protect
 * your code from other activity in the system.
 *
 * Because quick read/write functions don't expose an iterator, there is no option to
 * traverse to a specific node.
 * All values that are read or written must be referenced from the root of the tree.
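 *
 * For example, a simple value can be read and written with the quick string accessors
 * (the paths and default value below are hypothetical, chosen for illustration only):
 *
 * @code
 * char hostName[LE_CFG_STR_LEN_BYTES] = { 0 };
 *
 * // Quick read: falls back to the supplied default if the node is missing or empty.
 * le_result_t result = le_cfg_QuickGetString("/myApp/hostName", hostName,
 *                                            sizeof(hostName), "localhost");
 * if (result == LE_OK)
 * {
 *     // Quick write: wraps the set in its own implicit write transaction.
 *     le_cfg_QuickSetString("/myApp/lastKnownHost", hostName);
 * }
 * @endcode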
 *
 * Example of a quick read of binary data:
 *
 * @code
 * #define IPV4_ADDR_LEN 4
 * le_result_t QuickReadIpAddressAsBinary
 * (
 *     const char* interfaceNamePtr,
 *     uint8_t* ipAddrPtr,
 *     uint8_t* netMaskPtr
 * )
 * {
 *     char pathBuffer[MAX_CFG_STRING] = { 0 };
 *     uint8_t defaultAddr[IPV4_ADDR_LEN] = { 0 };
 *     le_result_t result = LE_OK;
 *     size_t len = IPV4_ADDR_LEN;
 *
 *     // Read the address.
 *     snprintf(pathBuffer, sizeof(pathBuffer), "/system/%s/ip4/addr", interfaceNamePtr);
 *     result = le_cfg_QuickGetBinary(pathBuffer, ipAddrPtr, &len, defaultAddr, IPV4_ADDR_LEN);
 *     if (LE_OK != result)
 *     {
 *         return result;
 *     }
 *
 *     // Read the mask.
 *     snprintf(pathBuffer, sizeof(pathBuffer), "/system/%s/ip4/mask", interfaceNamePtr);
 *     len = IPV4_ADDR_LEN;
 *     result = le_cfg_QuickGetBinary(pathBuffer, netMaskPtr, &len, defaultAddr, IPV4_ADDR_LEN);
 *     if (LE_OK != result)
 *     {
 *         return result;
 *     }
 *
 *     return LE_OK;
 * }
 * @endcode
 *
 * A quick delete example:
 *
 * @code
 * void ClearIpInfo
 * (
 *     const char* interfaceNamePtr
 * )
 * {
 *     char pathBuffer[MAX_CFG_STRING] = { 0 };
 *
 *     // Removes the node from the tree.
 *     snprintf(pathBuffer, sizeof(pathBuffer), "/system/%s/ip4/", interfaceNamePtr);
 *     le_cfg_QuickDeleteNode(pathBuffer);
 * }
 * @endcode
 *
 * @warning Because each quick function is independent, there's no guarantee of consistency between
 * them. If another process changes one of the values while you read/write the other,
 * the two values could be read out of sync.
 *
 * Copyright (C) Sierra Wireless Inc.
 */
/**
 * @file le_cfg_interface.h
 *
 * Legato @ref c_config include file.
 *
 * Copyright (C) Sierra Wireless Inc.
 */

#ifndef LE_CFG_INTERFACE_H_INCLUDE_GUARD
#define LE_CFG_INTERFACE_H_INCLUDE_GUARD


#include "legato.h"

// Internal includes for this interface
#include "le_cfg_common.h"

/** @addtogroup le_cfg le_cfg API Reference
 * @{
 * @file le_cfg_common.h
 * @file le_cfg_interface.h **/
//--------------------------------------------------------------------------------------------------
/**
 * Type for handler called when a server disconnects.
 */
//--------------------------------------------------------------------------------------------------
typedef void (*le_cfg_DisconnectHandler_t)(void *);

//--------------------------------------------------------------------------------------------------
/**
 * Connect the current client thread to the service providing this API. Block until the service is
 * available.
 *
 * For each thread that wants to use this API, either ConnectService or TryConnectService must be
 * called before any other functions in this API. Normally, ConnectService is automatically called
 * for the main thread, but not for any other thread. For details, see @ref apiFilesC_client.
 *
 * This function is created automatically.
 */
//--------------------------------------------------------------------------------------------------
void le_cfg_ConnectService
(
    void
);

//--------------------------------------------------------------------------------------------------
/**
 * Try to connect the current client thread to the service providing this API. Return with an error
 * if the service is not available.
 *
 * For each thread that wants to use this API, either ConnectService or TryConnectService must be
 * called before any other functions in this API. Normally, ConnectService is automatically called
 * for the main thread, but not for any other thread.
 * For details, see @ref apiFilesC_client.
 *
 * This function is created automatically.
 *
 * @return
 *  - LE_OK if the client connected successfully to the service.
 *  - LE_UNAVAILABLE if the server is not currently offering the service to which the client is
 *    bound.
 *  - LE_NOT_PERMITTED if the client interface is not bound to any service (doesn't have a binding).
 *  - LE_COMM_ERROR if the Service Directory cannot be reached.
 */
//--------------------------------------------------------------------------------------------------
le_result_t le_cfg_TryConnectService
(
    void
);

//--------------------------------------------------------------------------------------------------
/**
 * Set handler called when server disconnection is detected.
 *
 * When a server connection is lost, call this handler then exit with LE_FATAL. If a program wants
 * to continue without exiting, it should call longjmp() from inside the handler.
 */
//--------------------------------------------------------------------------------------------------
void le_cfg_SetServerDisconnectHandler
(
    le_cfg_DisconnectHandler_t disconnectHandler,
    void *contextPtr
);

//--------------------------------------------------------------------------------------------------
/**
 * Disconnect the current client thread from the service providing this API.
 *
 * Normally, this function doesn't need to be called. After this function is called, there's no
 * longer a connection to the service, and the functions in this API can't be used. For details, see
 * @ref apiFilesC_client.
 *
 * This function is created automatically.
 */
//--------------------------------------------------------------------------------------------------
void le_cfg_DisconnectService
(
    void
);


//--------------------------------------------------------------------------------------------------
/**
 * Reference to a tree iterator object.
 */
//--------------------------------------------------------------------------------------------------


//--------------------------------------------------------------------------------------------------
/**
 * Identifies the data type of a node.
 */
//--------------------------------------------------------------------------------------------------


//--------------------------------------------------------------------------------------------------
/**
 * Handler for node change notifications.
 */
//--------------------------------------------------------------------------------------------------


//--------------------------------------------------------------------------------------------------
/**
 * Reference type used by Add/Remove functions for EVENT 'le_cfg_Change'
 */
//--------------------------------------------------------------------------------------------------


//--------------------------------------------------------------------------------------------------
/**
 * Create a read transaction and open a new iterator for traversing the config tree.
 *
 * This action creates a read lock on the given tree, which will start a read-timeout.
 * Once the read timeout expires, all active read iterators on that tree will be
 * expired and their clients will be killed.
 *
 * @note A tree transaction is global to that tree; a long-held read transaction will block other
 *       users' write transactions from being committed.
 *
 * @return This will return the newly created iterator reference.
 */
//--------------------------------------------------------------------------------------------------
le_cfg_IteratorRef_t le_cfg_CreateReadTxn
(
    const char* LE_NONNULL basePath
        ///< [IN] Path to the location to create the new iterator.
);

//--------------------------------------------------------------------------------------------------
/**
 * Create a write transaction and open a new iterator for both reading and writing.
 *
 * This action creates a write transaction. If the app holds the iterator for
 * longer than the configured write transaction timeout, the iterator will cancel the
 * transaction. Other reads will fail to return data, and all writes will be thrown away.
 *
 * @note A tree transaction is global to that tree; a long-held write transaction will block
 *       other users' write transactions from being started. Other trees in the system
 *       won't be affected.
 *
 * @return This will return a newly created iterator reference.
 */
//--------------------------------------------------------------------------------------------------
le_cfg_IteratorRef_t le_cfg_CreateWriteTxn
(
    const char* LE_NONNULL basePath
        ///< [IN] Path to the location to create the new iterator.
);

//--------------------------------------------------------------------------------------------------
/**
 * Closes the write iterator and commits the write transaction. This updates the config tree
 * with all of the writes that occurred within the iterator.
 *
 * @note This operation will also delete the iterator object.
 */
//--------------------------------------------------------------------------------------------------
void le_cfg_CommitTxn
(
    le_cfg_IteratorRef_t iteratorRef
        ///< [IN] Iterator object to commit.
);

//--------------------------------------------------------------------------------------------------
/**
 * Closes and frees the given iterator object. If the iterator is a write iterator, the transaction
 * will be canceled. If the iterator is a read iterator, the transaction will be closed.
 * No data is written to the tree.
 *
 * @note This operation will also delete the iterator object.
 */
//--------------------------------------------------------------------------------------------------
void le_cfg_CancelTxn
(
    le_cfg_IteratorRef_t iteratorRef
        ///< [IN] Iterator object to close.
);

//--------------------------------------------------------------------------------------------------
/**
 * Changes the location of the iterator. The path passed can be an absolute or a
 * relative path from the iterator's current location.
 *
 * The target node does not need to exist. Writing a value to a non-existent node will
 * automatically create that node and any ancestor nodes (parent, parent's parent, etc.) that
 * also don't exist.
 */
//--------------------------------------------------------------------------------------------------
void le_cfg_GoToNode
(
    le_cfg_IteratorRef_t iteratorRef,
        ///< [IN] Iterator to move.
    const char* LE_NONNULL newPath
        ///< [IN] Absolute or relative path from the current location.
);

//--------------------------------------------------------------------------------------------------
/**
 * Move the iterator to the parent of the current node (moves up the tree).
 *
 * @return Return code will be one of the following values:
 *
 *         - LE_OK        - Move was completed successfully.
 *         - LE_NOT_FOUND - Current node is the root node: has no parent.
 */
//--------------------------------------------------------------------------------------------------
le_result_t le_cfg_GoToParent
(
    le_cfg_IteratorRef_t iteratorRef
        ///< [IN] Iterator to move.
);

//--------------------------------------------------------------------------------------------------
/**
 * Moves the iterator to the first child of the node at the current location.
 *
 * For read iterators without children, this function will fail. If the iterator is a write
 * iterator, then a new node is automatically created. If this node or newly created
 * children of this node are not written to, then this node will not persist even if the iterator is
 * committed.
 *
 * @return Return code will be one of the following values:
 *
 *         - LE_OK        - Move was completed successfully.
 *         - LE_NOT_FOUND - The given node has no children.
 */
//--------------------------------------------------------------------------------------------------
le_result_t le_cfg_GoToFirstChild
(
    le_cfg_IteratorRef_t iteratorRef
        ///< [IN] Iterator object to move.
);

//--------------------------------------------------------------------------------------------------
/**
 * Jumps the iterator to the next child node of the current node. Assuming the following tree:
 *
 * @code
 * baseNode
 *   |
 *   +childA
 *       |
 *       +valueA
 *       |
 *       +valueB
 * @endcode
 *
 * If the iterator is moved to the path "/baseNode/childA/valueA", then after the first call to
 * GoToNextSibling the iterator will be pointing at valueB. A second call to GoToNextSibling
 * will cause the function to return LE_NOT_FOUND.
 *
 * @return Returns one of the following values:
 *
 *         - LE_OK        - Move was completed successfully.
 *         - LE_NOT_FOUND - Iterator has reached the end of the current list of siblings.
 *                          Also returned if the current node has no siblings.
 */
//--------------------------------------------------------------------------------------------------
le_result_t le_cfg_GoToNextSibling
(
    le_cfg_IteratorRef_t iteratorRef
        ///< [IN] Iterator to iterate.
);

//--------------------------------------------------------------------------------------------------
/**
 * Get the path to the node where the iterator is currently pointing.
 *
 * Assuming the following tree:
 *
 * @code
 * baseNode
 *   |
 *   +childA
 *       |
 *       +valueA
 *       |
 *       +valueB
 * @endcode
 *
 * If the iterator was currently pointing at valueA, GetPath would return the following path:
 *
 * @code
 * /baseNode/childA/valueA
 * @endcode
 *
 * Optionally, a path to another node can be supplied to this function. So, if the iterator is
 * again on valueA and the relative path ".." is supplied, then this function will return the
 * path relative to the node given:
 *
 * @code
 * /baseNode/childA/
 * @endcode
 *
 * @return - LE_OK       - The read was completed successfully.
 *         - LE_OVERFLOW - The supplied string buffer was not large enough to hold the value.
 */
//--------------------------------------------------------------------------------------------------
le_result_t le_cfg_GetPath
(
    le_cfg_IteratorRef_t iteratorRef,
        ///< [IN] Iterator to move.
    const char* LE_NONNULL path,
        ///< [IN] Path to the target node. Can be an absolute path, or
        ///< a path relative from the iterator's current position.
    char* pathBuffer,
        ///< [OUT] Absolute path to the iterator's current node.
    size_t pathBufferSize
        ///< [IN]
);

//--------------------------------------------------------------------------------------------------
/**
 * Get the data type of the node where the iterator is currently pointing.
 *
 * @return le_cfg_nodeType_t value indicating the stored value.
 */
//--------------------------------------------------------------------------------------------------
le_cfg_nodeType_t le_cfg_GetNodeType
(
    le_cfg_IteratorRef_t iteratorRef,
        ///< [IN] Iterator object to use to read from the tree.
    const char* LE_NONNULL path
        ///< [IN] Path to the target node. Can be an absolute path, or
        ///< a path relative from the iterator's current position.
);

//--------------------------------------------------------------------------------------------------
/**
 * Get the name of the node where the iterator is currently pointing.
 *
 * @return - LE_OK       - Read was completed successfully.
 *         - LE_OVERFLOW - Supplied string buffer was not large enough to hold the value.
 */
//--------------------------------------------------------------------------------------------------
le_result_t le_cfg_GetNodeName
(
    le_cfg_IteratorRef_t iteratorRef,
        ///< [IN] Iterator object to use to read from the tree.
    const char* LE_NONNULL path,
        ///< [IN] Path to the target node. Can be an absolute path, or
        ///< a path relative from the iterator's current position.
    char* name,
        ///< [OUT] Buffer to receive the name of the node.
    size_t nameSize
        ///< [IN]
);

//--------------------------------------------------------------------------------------------------
/**
 * Add handler function for EVENT 'le_cfg_Change'
 *
 * This event provides information on changes to the given node object, or any of its children,
 * where a change could be either a read, write, create, or delete operation.
 */
//--------------------------------------------------------------------------------------------------
le_cfg_ChangeHandlerRef_t le_cfg_AddChangeHandler
(
    const char* LE_NONNULL newPath,
        ///< [IN] Path to the object to watch.
    le_cfg_ChangeHandlerFunc_t handlerPtr,
        ///< [IN] Handler to receive change notifications.
    void* contextPtr
        ///< [IN]
);

//--------------------------------------------------------------------------------------------------
/**
 * Remove handler function for EVENT 'le_cfg_Change'
 */
//--------------------------------------------------------------------------------------------------
void le_cfg_RemoveChangeHandler
(
    le_cfg_ChangeHandlerRef_t handlerRef
        ///< [IN]
);

//--------------------------------------------------------------------------------------------------
/**
 * Deletes the node specified by the path. If the node doesn't exist, nothing happens. All child
 * nodes are also deleted.
 *
 * If the path is empty, the iterator's current node is deleted.
 *
 * This function is only valid during a write transaction.
 */
//--------------------------------------------------------------------------------------------------
void le_cfg_DeleteNode
(
    le_cfg_IteratorRef_t iteratorRef,
        ///< [IN] Iterator to use as a basis for the transaction.
    const char* LE_NONNULL path
        ///< [IN] Path to the target node. Can be an absolute path, or
        ///< a path relative from the iterator's current position.
);

//--------------------------------------------------------------------------------------------------
/**
 * Check if the given node is empty. A node is considered empty if it doesn't yet exist, if it has
 * no value, or if it is a stem with no children.
 *
 * If the path is empty, the iterator's current node is queried for emptiness.
 *
 * Valid for both read and write transactions.
 *
 * @return True if the node is considered empty, false if not.
 */
//--------------------------------------------------------------------------------------------------
bool le_cfg_IsEmpty
(
    le_cfg_IteratorRef_t iteratorRef,
        ///< [IN] Iterator to use as a basis for the transaction.
    const char* LE_NONNULL path
        ///< [IN] Path to the target node. Can be an absolute path, or
        ///< a path relative from the iterator's current position.
);

//--------------------------------------------------------------------------------------------------
/**
 * Clears out the node's value. If the node doesn't exist, it will be created and have no value.
 *
 * If the path is empty, the iterator's current node will be cleared. If the node is a stem,
 * then all children will be removed from the tree.
 *
 * Only valid during a write transaction.
 */
//--------------------------------------------------------------------------------------------------
void le_cfg_SetEmpty
(
    le_cfg_IteratorRef_t iteratorRef,
        ///< [IN] Iterator to use as a basis for the transaction.
    const char* LE_NONNULL path
        ///< [IN] Path to the target node. Can be an absolute path, or
        ///< a path relative from the iterator's current position.
);

//--------------------------------------------------------------------------------------------------
/**
 * Checks to see if a given node in the config tree exists.
 *
 * @return True if the specified node exists in the tree, false if not.
 */
//--------------------------------------------------------------------------------------------------
bool le_cfg_NodeExists
(
    le_cfg_IteratorRef_t iteratorRef,
        ///< [IN] Iterator to use as a basis for the transaction.
    const char* LE_NONNULL path
        ///< [IN] Path to the target node. Can be an absolute path, or
        ///< a path relative from the iterator's current position.
);

//--------------------------------------------------------------------------------------------------
/**
 * Reads a string value from the config tree. If the value isn't a string, or if the node is
 * empty or doesn't exist, the default value will be returned.
 *
 * Valid for both read and write transactions.
 *
 * If the path is empty, the iterator's current node will be read.
 *
 * @return - LE_OK       - Read was completed successfully.
 *         - LE_OVERFLOW - Supplied string buffer was not large enough to hold the value.
 */
//--------------------------------------------------------------------------------------------------
le_result_t le_cfg_GetString
(
    le_cfg_IteratorRef_t iteratorRef,
        ///< [IN] Iterator to use as a basis for the transaction.
    const char* LE_NONNULL path,
        ///< [IN] Path to the target node. Can be an absolute path,
        ///< or a path relative from the iterator's current
        ///< position.
    char* value,
        ///< [OUT] Buffer to write the value into.
    size_t valueSize,
        ///< [IN]
    const char* LE_NONNULL defaultValue
        ///< [IN] Default value to use if the original can't be
        ///< read.
);

//--------------------------------------------------------------------------------------------------
/**
 * Writes a string value to the config tree. Only valid during a write
 * transaction.
 *
 * If the path is empty, the iterator's current node will be set.
 */
//--------------------------------------------------------------------------------------------------
void le_cfg_SetString
(
    le_cfg_IteratorRef_t iteratorRef,
        ///< [IN] Iterator to use as a basis for the transaction.
    const char* LE_NONNULL path,
        ///< [IN] Path to the target node. Can be an absolute path, or
        ///< a path relative from the iterator's current position.
    const char* LE_NONNULL value
        ///< [IN] Value to write.
);

//--------------------------------------------------------------------------------------------------
/**
 * Reads binary data from the config tree. If the node has the wrong type, is
 * empty, or doesn't exist, the default value will be returned.
 *
 * Valid for both read and write transactions.
 *
 * If the path is empty, the iterator's current node will be read.
 *
 * \b Responds \b With:
 *
 * This function will respond with one of the following values:
 *
 *  - LE_OK           - Read was completed successfully.
 *  - LE_FORMAT_ERROR - The data couldn't be decoded.
 *  - LE_OVERFLOW     - Supplied buffer was not large enough to hold the value.
 */
//--------------------------------------------------------------------------------------------------
le_result_t le_cfg_GetBinary
(
    le_cfg_IteratorRef_t iteratorRef,
        ///< [IN] Iterator to use as a basis for the transaction.
    const char* LE_NONNULL path,
        ///< [IN] Path to the target node. Can be an absolute path,
        ///< or a path relative from the iterator's current
        ///< position.
    uint8_t* valuePtr,
        ///< [OUT] Buffer to write the value into.
    size_t* valueSizePtr,
        ///< [INOUT]
    const uint8_t* defaultValuePtr,
        ///< [IN] Default value to use if the original can't be
        ///< read.
    size_t defaultValueSize
        ///< [IN]
);

//--------------------------------------------------------------------------------------------------
/**
 * Writes binary data to the config tree. Only valid during a write
 * transaction.
 *
 * If the path is empty, the iterator's current node will be set.
 *
 * @note Binary data cannot be written to the 'system' tree.
 */
//--------------------------------------------------------------------------------------------------
void le_cfg_SetBinary
(
    le_cfg_IteratorRef_t iteratorRef,
        ///< [IN] Iterator to use as a basis for the transaction.
    const char* LE_NONNULL path,
        ///< [IN] Path to the target node. Can be an absolute path, or
        ///< a path relative from the iterator's current position.
    const uint8_t* valuePtr,
        ///< [IN] Value to write.
    size_t valueSize
        ///< [IN]
);

//--------------------------------------------------------------------------------------------------
/**
 * Reads a signed integer value from the config tree.
 *
 * If the underlying value is not an integer, the default value will be returned instead. The
 * default value is also returned if the node does not exist or if it's empty.
 *
 * If the value is a floating point value, then it will be rounded and returned as an integer.
 *
 * Valid for both read and write transactions.
 *
 * If the path is empty, the iterator's current node will be read.
 */
//--------------------------------------------------------------------------------------------------
int32_t le_cfg_GetInt
(
    le_cfg_IteratorRef_t iteratorRef,
        ///< [IN] Iterator to use as a basis for the transaction.
    const char* LE_NONNULL path,
        ///< [IN] Path to the target node. Can be an absolute path, or
        ///< a path relative from the iterator's current position.
    int32_t defaultValue
        ///< [IN] Default value to use if the original can't be read.
);

//--------------------------------------------------------------------------------------------------
/**
 * Writes a signed integer value to the config tree. Only valid during a write transaction.
 *
 * If the path is empty, the iterator's current node will be set.
 */
//--------------------------------------------------------------------------------------------------
void le_cfg_SetInt
(
    le_cfg_IteratorRef_t iteratorRef,
        ///< [IN] Iterator to use as a basis for the transaction.
    const char* LE_NONNULL path,
        ///< [IN] Path to the target node. Can be an absolute path, or
        ///< a path relative from the iterator's current position.
    int32_t value
        ///< [IN] Value to write.
);

//--------------------------------------------------------------------------------------------------
/**
 * Reads a 64-bit floating point value from the config tree.
 *
 * If the value is an integer, then the value will be promoted to a float. Otherwise, if the
 * underlying value is not a float or integer, the default value will be returned.
 *
 * If the path is empty, the iterator's current node will be read.
 *
 * @note Floating point values will only be stored up to 6 digits of precision.
 */
//--------------------------------------------------------------------------------------------------
double le_cfg_GetFloat
(
    le_cfg_IteratorRef_t iteratorRef,
        ///< [IN] Iterator to use as a basis for the transaction.
    const char* LE_NONNULL path,
        ///< [IN] Path to the target node. Can be an absolute path, or
        ///< a path relative from the iterator's current position.
    double defaultValue
        ///< [IN] Default value to use if the original can't be read.
);

//--------------------------------------------------------------------------------------------------
/**
 * Writes a 64-bit floating point value to the config tree. Only valid during a write transaction.
 *
 * If the path is empty, the iterator's current node will be set.
 *
 * @note Floating point values will only be stored up to 6 digits of precision.
 */
//--------------------------------------------------------------------------------------------------
void le_cfg_SetFloat
(
    le_cfg_IteratorRef_t iteratorRef,
        ///< [IN] Iterator to use as a basis for the transaction.
    const char* LE_NONNULL path,
        ///< [IN] Path to the target node. Can be an absolute path, or
        ///< a path relative from the iterator's current position.
    double value
        ///< [IN] Value to write.
);

//--------------------------------------------------------------------------------------------------
/**
 * Reads a value from the tree as a boolean. If the node is empty or doesn't exist, the default
 * value is returned. The default value is also returned if the node is a different type than
 * expected.
 *
 * Valid for both read and write transactions.
 *
 * If the path is empty, the iterator's current node will be read.
 */
//--------------------------------------------------------------------------------------------------
bool le_cfg_GetBool
(
    le_cfg_IteratorRef_t iteratorRef,
        ///< [IN] Iterator to use as a basis for the transaction.
    const char* LE_NONNULL path,
        ///< [IN] Path to the target node. Can be an absolute path, or
        ///< a path relative from the iterator's current position.
    bool defaultValue
        ///< [IN] Default value to use if the original can't be read.
);

//--------------------------------------------------------------------------------------------------
/**
 * Writes a boolean value to the config tree. Only valid during a write transaction.
 *
 * If the path is empty, the iterator's current node will be set.
 */
//--------------------------------------------------------------------------------------------------
void le_cfg_SetBool
(
    le_cfg_IteratorRef_t iteratorRef,
        ///< [IN] Iterator to use as a basis for the transaction.
    const char* LE_NONNULL path,
        ///< [IN] Path to the target node. Can be an absolute path, or
        ///< a path relative from the iterator's current position.
    bool value
        ///< [IN] Value to write.
);

//--------------------------------------------------------------------------------------------------
/**
 * Deletes the node specified by the path. If the node doesn't exist, nothing happens. All child
 * nodes are also deleted.
 */
//--------------------------------------------------------------------------------------------------
void le_cfg_QuickDeleteNode
(
    const char* LE_NONNULL path
        ///< [IN] Path to the node to delete.
);

//--------------------------------------------------------------------------------------------------
/**
 * Clears the current value of a node. If the node doesn't currently exist, then it is created as a
 * new empty node.
 */
//--------------------------------------------------------------------------------------------------
void le_cfg_QuickSetEmpty
(
    const char* LE_NONNULL path
        ///< [IN] Absolute or relative path to the node.
);

//--------------------------------------------------------------------------------------------------
/**
 * Reads a string value from the config tree. If the value isn't a string, or if the node is
 * empty or doesn't exist, the default value will be returned.
 *
 * @return - LE_OK       - Read was completed successfully.
 *         - LE_OVERFLOW - Supplied string buffer was not large enough to hold the value.
 */
//--------------------------------------------------------------------------------------------------
le_result_t le_cfg_QuickGetString
(
    const char* LE_NONNULL path,
        ///< [IN] Path to read from.
    char* value,
        ///< [OUT] Value read from the requested node.
    size_t valueSize,
        ///< [IN]
    const char* LE_NONNULL defaultValue
        ///< [IN] Default value to use if the original can't be read.
);

//--------------------------------------------------------------------------------------------------
/**
 * Writes a string value to the config tree.
 */
//--------------------------------------------------------------------------------------------------
void le_cfg_QuickSetString
(
    const char* LE_NONNULL path,
        ///< [IN] Path to the value to write.
    const char* LE_NONNULL value
        ///< [IN] Value to write.
);

//--------------------------------------------------------------------------------------------------
/**
 * Reads binary data from the config tree. If the node type is different, or if the node is
 * empty or doesn't exist, the default value will be returned.
 *
 * @return - LE_OK           - Read was completed successfully.
 *         - LE_FORMAT_ERROR - if data can't be decoded.
 *         - LE_OVERFLOW     - Supplied buffer was not large enough to hold the value.
 */
//--------------------------------------------------------------------------------------------------
le_result_t le_cfg_QuickGetBinary
(
    const char* LE_NONNULL path,
        ///< [IN] Path to the target node.
    uint8_t* valuePtr,
        ///< [OUT] Buffer to write the value into.
    size_t* valueSizePtr,
        ///< [INOUT]
    const uint8_t* defaultValuePtr,
        ///< [IN] Default value to use if the original can't be read.
    size_t defaultValueSize
        ///< [IN]
);

//--------------------------------------------------------------------------------------------------
/**
 * Writes binary data to the config tree.
 */
//--------------------------------------------------------------------------------------------------
void le_cfg_QuickSetBinary
(
    const char* LE_NONNULL path,
        ///< [IN] Path to the target node.
    const uint8_t* valuePtr,
        ///< [IN] Value to write.
    size_t valueSize
        ///< [IN]
);

//--------------------------------------------------------------------------------------------------
/**
 * Reads a signed integer value from the config tree. If the value is a floating point
 * value, then it will be rounded and returned as an integer. Otherwise, if the underlying value is
 * not an integer or a float, the default value will be returned instead.
 *
 * If the value is empty or the node doesn't exist, the default value is returned instead.
 */
//--------------------------------------------------------------------------------------------------
int32_t le_cfg_QuickGetInt
(
    const char* LE_NONNULL path,
        ///< [IN] Path to the value to read.
    int32_t defaultValue
        ///< [IN] Default value to use if the original can't be read.
);

//--------------------------------------------------------------------------------------------------
/**
 * Writes a signed integer value to the config tree.
 */
//--------------------------------------------------------------------------------------------------
void le_cfg_QuickSetInt
(
    const char* LE_NONNULL path,
        ///< [IN] Path to the value to write.
    int32_t value
        ///< [IN] Value to write.
);

//--------------------------------------------------------------------------------------------------
/**
 * Reads a 64-bit floating point value from the config tree. If the value is an integer,
 * then it is promoted to a float. Otherwise, if the underlying value is not a float or an
 * integer, the default value will be returned.
 *
 * If the value is empty or the node doesn't exist, the default value is returned.
 *
 * @note Floating point values will only be stored up to 6 digits of precision.
 */
//--------------------------------------------------------------------------------------------------
double le_cfg_QuickGetFloat
(
    const char* LE_NONNULL path,
        ///< [IN] Path to the value to read.
    double defaultValue
        ///< [IN] Default value to use if the original can't be read.
);

//--------------------------------------------------------------------------------------------------
/**
 * Writes a 64-bit floating point value to the config tree.
 *
 * @note Floating point values will only be stored up to 6 digits of precision.
 */
//--------------------------------------------------------------------------------------------------
void le_cfg_QuickSetFloat
(
    const char* LE_NONNULL path,
        ///< [IN] Path to the value to write.
    double value
        ///< [IN] Value to write.
);

//--------------------------------------------------------------------------------------------------
/**
 * Reads a value from the tree as a boolean. If the node is empty or doesn't exist, the default
 * value is returned. This is also true if the node is a different type than expected.
 */
//--------------------------------------------------------------------------------------------------
bool le_cfg_QuickGetBool
(
    const char* LE_NONNULL path,
        ///< [IN] Path to the value to read.
    bool defaultValue
        ///< [IN] Default value to use if the original can't be read.
);

//--------------------------------------------------------------------------------------------------
/**
 * Writes a boolean value to the config tree.
 */
//--------------------------------------------------------------------------------------------------
void le_cfg_QuickSetBool
(
    const char* LE_NONNULL path,
        ///< [IN] Path to the value to write.
    bool value
        ///< [IN] Value to write.
);

/** @} **/

#endif // LE_CFG_INTERFACE_H_INCLUDE_GUARD
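The Get/QuickGet functions above all share one contract: copy the stored value into a caller-supplied buffer, substitute the caller's default when the node is missing or has the wrong type, and report LE_OVERFLOW when the buffer is too small. A minimal self-contained sketch of that contract follows; the result codes and the NULL-means-missing-node convention are local stand-ins for illustration, not the real Legato implementation:

```c
#include <stddef.h>
#include <string.h>

/* Local stand-ins for the le_result_t codes used by the API docs above. */
typedef enum { LE_OK, LE_OVERFLOW } result_t;

/* Mimics the le_cfg_QuickGetString contract: storedValue == NULL plays the
 * role of a missing or mistyped node, in which case the default is used. */
result_t get_string_or_default
(
    const char *storedValue,   /* value found at the node, or NULL */
    char *value,               /* [OUT] caller-supplied buffer */
    size_t valueSize,          /* size of that buffer, in bytes */
    const char *defaultValue   /* used when the node can't be read */
)
{
    const char *src = (storedValue != NULL) ? storedValue : defaultValue;

    if (strlen(src) + 1 > valueSize)
    {
        /* Truncate into the buffer, but flag the overflow to the caller. */
        strncpy(value, src, valueSize - 1);
        value[valueSize - 1] = '\0';
        return LE_OVERFLOW;
    }

    strcpy(value, src);
    return LE_OK;
}
```

Note the design choice this contract encodes: the caller always gets a NUL-terminated string back, even on overflow, so a lazy caller cannot end up with an unterminated buffer.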
CIS 201 Chapter 11 Review Test

True/False

1. (1 point) Adding a use case controller to a sequence diagram is an example of applying a design pattern.
2. (1 point) The controller object is usually part of the view layer.
3. (1 point) The starting point for the detailed design of a use case is always the SSD.
4. (1 point) The activation lifeline is depicted by a vertical dashed line under an object.
5. (1 point) The first-cut sequence diagram contains view layer and business layer.
6. (1 point) Design class diagrams and interaction diagrams should be developed independently of each other.
7. (1 point) The first-cut DCD includes method signatures.
8. (1 point) The perfect memory assumption means that there is always adequate memory to execute the program.
9. (1 point) The perfect technology assumption means not to worry about issues such as user security.
10. (1 point) The perfect solution assumption means that we do not worry about exception conditions.
11. (1 point) A loop or repeating message or messages in a sequence diagram is depicted by a rectangle box.
12. (1 point) Separation of responsibilities is a design principle which dictates that reading a database should not be done in a problem domain class.
13. (1 point) The best technique for accessing the database when multiple objects are needed is just to let the data access object instantiate objects.
14. (1 point) Dependency relationships cannot exist between classes within packages.
15. (1 point) The code if (someObject == null) is part of the factory pattern.

Multiple Choice

16. (1 point) System designers frequently create a class called a ____ that can serve as a collection point for incoming messages.
a. switchboard
b. message controller
c. use case controller
d. message collector

17. (1 point) What is the least cohesive approach in creating use case controllers in a system?
a. Define a single controller for all use cases.
b.
Define several controllers, each with a specific set of responsibilities.
c. Create a single controller for a single subsystem.
d. Create one controller per use case.

18. (1 point) Which is the correct notation for a message label on a sequence diagram?
a. * [true/false] RetVal := name (param)
b. [true/false] RetVal == name (param)
c. [true/false] seq# RetVal := name (param)
d. * [true/false] seq# RetVal := name (param)

19. (1 point) Which of the following is NOT a valid technique to return data from an input message?
a. A return object name
b. A return message
c. A return value on an input message
d. A return condition name

20. (1 point) When a message is sent from an originating object to a destination object, it means that ______.
a. data is being passed from the origin object to the destination object
b. a transition is occurring between the objects
c. a method is being invoked on the originating object
d. a method is being invoked on the destination object

21. (1 point) What is the first step in constructing a first-cut sequence diagram from the elements of the system sequence diagram (SSD)?
a. Determine which messages must be sent
b. Create a use case controller
c. Replace the :System object with all the objects that must collaborate
d. Select an input message from the use case

22. (1 point) Developing a(n) ____ diagram is a multistep process of determining which objects work together and how they work together.
a. design class
b. interaction
c. state machine
d. package

23. (1 point) When denoting a specific object in a sequence diagram, a ____ serves as the divider between the object name and the specific object identifier.
a. double colon
b. colon
c. dash
d. dot

24. (1 point) When a use case controller receives the initial request for an inquiry when the data access layer is added, it first ____.
a. begins the process to initialize the necessary objects in memory
b. sends a message to the database to see if it is available
c.
sends a message to the appropriate data access object to retrieve the data
d. sends a return message to the view layer

25. (1 point) In a sequence diagram, when one assumes that the objects are already available to receive messages, that is considered to be the _______.
a. perfect technology assumption
b. perfect memory assumption
c. perfect solution assumption
d. perfect design assumption

26. (1 point) The perfect technology assumption implies that _______.
a. the equipment will work without any problems
b. the users understand the technology
c. issues related to technology are ignored
d. issues related to scalability (high volumes) are ignored

27. (1 point) User interface objects in a sequence diagram often are labeled with the stereotype ____.
a. entity
b. view or boundary
c. control
d. persistent

28. (1 point) Which of the following is an example of an interaction diagram?
a. Design class diagram
b. Data access diagram
c. Package diagram
d. Communication diagram

29. (1 point) The primary models used for OO detailed design are ____ diagrams.
a. design class and statechart
b. package and statechart
c. package and deployment
d. design class and interaction

30. (1 point) Which design model provides information for a design class diagram?
a. Deployment diagram
b. Interaction diagram
c. Statechart diagram
d. Package diagram

31. (1 point) Communication diagrams indicate the order of the messages with ____.
a. sequence numbers
b. activation lifelines
c. arrows
d. links

32. (1 point) ____ methods must be included in a design class diagram.
a. Constructor
b. Use case specific
c. Getter
d. Setter

33. (1 point) ____ diagrams partition a design class diagram into related functions.
a. Statechart
b. Sequence
c. Interaction
d. Package

34. (1 point) The following notation anOrd:Order can be interpreted as follows:
a. Order is a class. anOrd is the identifier of an object in the class.
b. Order is an object. anOrd is the identifier for that order.
c. Order is a class.
anOrd is an object in that class.
d. Order is an object. anOrd is the reference to that object.

35. (1 point) Which of the following is NOT a component of the design pattern template?
a. Problem that requires a solution
b. Example of the pattern
c. Description of when the pattern does not apply
d. Consequences of the pattern

36. (1 point) Domain layer classes should ____.
a. prepare persistent classes for storage to the database
b. start up and shut down the system
c. edit and validate input data
d. establish and maintain connections to the database

37. (1 point) View layer classes should do all of the following EXCEPT _______.
a. capture clicks and data entry
b. start and shut down the system
c. create problem domain classes
d. display forms

38. (1 point) A different implementation of a function is required in an existing system. The best way to integrate this function into the system is ______.
a. to write the code in a new class
b. to write the code in an existing class
c. with the factory pattern
d. with the adapter pattern

39. (1 point) The customer relationship system needs to instantiate a new customer object. How should this be done?
a. Let the factory object do it.
b. Let the view layer object do it.
c. Let the controller object do it.
d. Let another business object do it.

40. (1 point) Given the following code, identify the pattern.

Class MyBuilder {
    static MyBuilder builder = null;
    {
        if (builder == null) { builder = new MyBuilder(); }
        return builder;
    }
}

a. Factory Pattern
b. Singleton Pattern
c. Factory Method Pattern
d. Adapter Pattern
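The lazy-initialisation idiom shown in question 40 — a static reference, created on first request and returned on every later one — is the heart of the Singleton pattern. A minimal sketch of the same idiom in C (the struct and function names are illustrative, not from the quiz):

```c
#include <stddef.h>
#include <stdlib.h>

typedef struct
{
    int id;
} MyBuilder;

/* The single shared instance, created lazily on first use. */
static MyBuilder *builder = NULL;

MyBuilder *my_builder_instance(void)
{
    if (builder == NULL)            /* first call: create the instance */
    {
        builder = malloc(sizeof(MyBuilder));
        builder->id = 1;
    }
    return builder;                 /* later calls return the same object */
}
```

Every caller shares one object, so `my_builder_instance() == my_builder_instance()` always holds — which is exactly the property the quiz code demonstrates (and what distinguishes Singleton from Factory, which creates a new object per request).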
How to refract

Figure 4.1 Correct fitting of a trial frame with each pupil in the centre of each aperture, both horizontally and vertically.

Back vertex distance

Place a lens (of any value) in the trial frame. Ask the patient to fixate on a distant target, and use a rule to measure from the patient's cornea to the back of the lens (the surface of the lens nearest the cornea). A normal BVD is 10 to 12 mm.

The power of a lens system depends upon the distance of the lens from the cornea. This concept is known as 'lens effectivity' and explains why a myope's contact lens prescription will be numerically weaker than their spectacle prescription. It also explains why patients with powerful prescriptions get a blurred view when their spectacles slip down their nose. Therefore, the BVD is important when a frame is to be constructed, since the function of the lens system depends not only on the lens power but also on the lens position relative to the cornea. Practically, this is relevant for prescriptions of more than 4 dioptres, but it is good practice to always record the BVD. Formulae exist to convert any given prescription and BVD to a different prescription and BVD that will have an equivalent effect.

Visual acuity

'Acuity' is a measure of the resolving power of the eye – the ability to discriminate between two points. Distance charts that you should be comfortable with include the Snellen and the LogMAR. Near vision charts that you should be comfortable with include the N-series. In any clinical setting, it is important to check the distance visual acuity for each eye (unaided, aided and pinhole) and the near acuity for each eye (unaided and aided). If aided, it is useful to state if this is with spectacles or contact lenses. The eye not being tested should be correctly occluded.
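The lens-effectivity correction behind the back vertex distance measurement described above can be written down explicitly (this is standard geometric optics, not a formula given in this text). If a spectacle lens of power $F_s$ sits a distance $d$ (in metres) in front of the cornea, the equivalent power required at the cornea is:

```latex
F_c = \frac{F_s}{1 - d\,F_s}
```

Worked example: a $-6.00$ D spectacle lens at a BVD of 12 mm gives $F_c = -6.00 / (1 - 0.012 \times (-6.00)) = -6.00 / 1.072 \approx -5.60$ D, so the contact lens prescription is numerically weaker for a myope, exactly as noted above; for a hypermetrope the same formula gives a numerically stronger value at the cornea.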
For the purpose of the exam, the patient's spectacles will not be available, so the following will need to be established for each eye:
• distance acuity unaided (Snellen or LogMAR)
• distance acuity with pinhole
• near acuity unaided (N-series; remember to use a bright lamp).

Pinholes only allow axial rays through to the eye, hence reduce the effect of refractive error. Remember that the pinhole vision gives a good idea of potential vision for that eye once the refractive error has been corrected. Ideally, your target end-refraction visual acuity should be at least as good as the pinhole acuity. Remember that eyes with reduced pinhole vision, or reduced vision despite adequate refractive correction, have acuity that is limited by amblyopia, ocular pathology or cerebral visual impairment. Pinhole acuity tends to partially improve with corneal or lens pathology but will not improve with amblyopia, retinal, nerve or cerebral pathology (pinhole acuity can be worse than unaided acuity in patients with macular pathology, since it precludes eccentric fixation).

Always consider – why is the vision poor?
• Refractive error: improves with pinhole.
• Amblyopia: no improvement with pinhole.
• Ocular pathology: if of retina or nerve origin, will not improve with pinhole; if of cornea or lens origin, may improve with pinhole.
• Cerebral visual impairment: no improvement with pinhole.
Note, of course, that a mixture of these reasons commonly coexists.

Refraction estimation

Checking the visual acuity will give you an idea of the refractive error:
• 1 dioptre of spherical error gives 6/12
• 2 dioptres of spherical error give 6/24 to 6/36
• 3 dioptres of spherical error give 6/60.

However, note that this guide is for spherical error and ignores that the patient may have astigmatism. A cylindrical error impairs acuity only about half as much as the same spherical error. Therefore, a patient with 0.00/+2.00 @ 080 would be approximately 6/12 unaided.
This guide should only be used as an approximation, since patients will have a mixture of spherical and cylindrical error. This refraction estimation alone does not, however, suggest whether the patient is myopic or hypermetropic. For example, if they are 6/24 unaided, their refraction could be –1.75 or +1.75 spherical dioptres.

To estimate if the patient is myopic or hypermetropic, compare their unaided distance acuity with their unaided near acuity. This concept is more useful if the patient is presbyopic, since otherwise the effect of accommodation confounds the estimation. If a patient has poor distance vision but good near vision, you know they are myopic. For example, if a presbyope has an unaided Snellen distance acuity of 6/60, yet is N5 at reading distance (on the near vision N-series reading chart), their refraction is probably around –2.00 to –3.00 spherical dioptres. If they have poor distance vision and poor near vision, you know they are hypermetropic (or they have amblyopia, or ocular pathology, or cerebral visual impairment – this should be clear from your history).

Visual acuity testing of a child

Although children can be unpredictable, which adds stress to an examination setting since it is something you cannot control, there are a number of useful ways of handling this that come with experience in assessing the visual behaviour of children. It is important to spend time with orthoptic staff, since this is the best way to learn to be comfortable with the following:
• patching as a means of occlusion (note that objection to occlusion implies poor acuity in the other eye)
• assessing if a child's vision is central (i.e. no squint), steady (i.e. conjugate movements with no nystagmus) and maintained through the duration of a blink (i.e.
there is sufficient acuity to fixate on and follow an object of interest, demonstrating that it is seen)
• preselected tests, such as Cardiff Cards, Kay Pictures, single optotype or crowded charts, used to assess binocular and monocular distance acuity.

Retinoscopy (objective refraction)

Retinoscopy basics

The aim of retinoscopy is to obtain an objective refraction – that is, an estimation of the patient's spectacle prescription using a process that does not require any decisions to be made by the patient. Retinoscopy also gives a good benchmark from which the prescription can be fine-tuned using subjective techniques (using subjective rather than objective refraction from the beginning takes considerably longer). Retinoscopy is an invaluable process for children or adults with learning disability, as these patients will not be able to answer the questions required for subjective refraction. For these patients, your spectacle prescription will be based on your retinoscopy alone.

A retinoscope produces a light which, with the cuff fully down, is linear (the scope slit). For more information on the retinoscope, see Appendix 2. Quite simply, the scope slit light is passed across the patient's pupil and a light within the pupil (the reflex) is observed. By noting the quality of this reflex, various lenses are then placed in the trial frame to neutralise the reflex. As neutralisation is approached, the reflex will become faster and brighter. A dull, slow reflex implies neutralisation is not close. At neutralisation, the reflex is a glowing bright pupil; at this point, the lenses in the trial frame provide the objective spectacle prescription (once corrected for working distance).

The scope slit is held at a certain angle (say, vertically) then swept across the pupil in a direction perpendicular to the orientation of the scope slit (in this case, horizontally).
As the scope slit passes across the pupil, the reflex can be noted to have certain characteristics: (a) direction, (b) orientation, and (c) brightness and speed. Characteristics of retinoscope reflex Direction: • with or against or neutralised. Orientation: • vertical, horizontal or oblique • scissor reflex. Brightness and speed: • bright and fast • dull and slow. Direction of reflex A 'with' reflex is seen if, as your slit passes across the pupil, a light within the pupil (the reflex) moves in the same direction (see Figure 4.2). A plus lens must be added to the trial frame to approach neutralisation. An 'against' reflex is seen if, as your slit passes across the pupil, a light within the pupil (the reflex) moves in the opposite direction (see Figure 4.3). A minus lens must be added to the trial frame to approach neutralisation. Figure 4.2 A 'with' reflex. The scope slit is orientated vertically and swept horizontally across the pupil to give a with reflex. Figure 4.3 An 'against' reflex. The scope slit is orientated vertically and swept horizontally across the pupil to give an against reflex. To neutralise: with reflex … add plus lens; against reflex … add minus lens. Therefore, to approach neutralisation, either a plus (if with reflex) or minus (if against reflex) must be added to the trial frame. If the reflex is already quite fast and bright, only 0.25 or 0.50 may be sufficient to reach neutralisation. To confirm neutralisation, you can lean backwards, further away from the patient (reflex becomes against) or lean forwards closer to the patient (reflex becomes with). This is because the closer you are, the more minus must be added to correct for the working distance (see 'Correction for working distance', p. 34). Alternatively, to ensure the end point has been reached, add a +0.25 lens, which should give an against reflex.
Such reversal of the reflex is important to achieve, since it highlights that the true end point of neutralisation has been established. Note that the lenses added to approach neutralisation are either spherical or cylindrical. If a sphere is added to neutralise the reflex, it will also alter the subsequent lenses required in the perpendicular axis to obtain neutralisation. If a cylindrical lens is added (with the axis orientated the same way as the scope slit, so the power of the cylindrical lens will act in the same plane as the scope sweep), neutralisation in this plane is approached and has no effect on the other principal meridian. Orientation of reflex The orientation of the retinoscope’s slit light should be parallel to the pupil reflex. If there is no astigmatism, or if the astigmatism is either with the rule or against the rule, the reflex will be orientated vertically and horizontally. In these situations, ensure the slit is vertical then horizontal (rotate the slit by rotating the cuff slightly) to neutralise these meridians. With oblique astigmatism, the principal meridians are still perpendicular but do not lie vertically and horizontally. Therefore, when a horizontal scope sweep is made with the slit orientated vertically, the orientation of the pupil reflex will be oblique and not lie vertically (it will lie between 045 and 090 or 090 and 135) – see Figure 4.4. Similarly, if the scope slit was orientated horizontally and a sweep made vertically, the orientation of the pupil reflex will again be oblique and not be horizontal (it will lie between 000 and 045 or 135 and 180). For oblique astigmatism, the scope slit should be rotated by turning the cuff slightly so the slit is parallel to the pupil reflex to aid subsequent neutralisation. The perpendicular meridian can then be neutralised by rotating the slit 90 degrees (e.g. if one meridian is at 110, the other will be at 020). 
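The perpendicular-meridian arithmetic used here (one meridian at 110 giving the other at 020) can be sketched in a couple of lines of JavaScript; the function name is my own, for illustration only.

```javascript
// The two principal meridians are 90 degrees apart, and meridians are
// conventionally recorded in the range 0-179 degrees.
function perpendicularMeridian(m) {
  return (m + 90) % 180;
}

var other = perpendicularMeridian(110); // one meridian at 110 gives the other at 020
```

The modulo keeps the result inside the conventional 0-179 range, so 110 maps to 20 rather than 200.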
Figure 4.4 With oblique astigmatism, the orientation of the reflex will not be horizontal or vertical but oblique. Another type of reflex is the 'scissor reflex', which occurs with a high degree of irregular corneal astigmatism, such as keratoconus. These reflexes can be difficult or simply not possible to neutralise. Keratoconus is a corneal ectasia, characterised by progressive stromal thinning and conical distortion, associated with increasing irregular astigmatism and myopia. It is appropriate to examine the eye on the slit lamp for other signs of keratoconus (stromal thinning/cone, Vogt's striae, Fleischer ring). Investigations include corneal topography so the degree of irregular astigmatism can be quantified and mapped. This aids the consideration of the various available treatment options for keratoconus, including contact lenses, scleral contact lenses or surgical intervention (riboflavin with ultraviolet A/collagen cross-linking, intra-stromal implants, deep lamellar or penetrating keratoplasty). Brightness and speed of reflex As mentioned, the brighter and faster the reflex, the closer to neutralisation. In these situations, use a small magnitude of lens power alteration (0.25 or 0.50 dioptres) since neutralisation is close. Therefore, a dull, slow reflex is far from neutralisation and sometimes it pays to begin with a ±5 or ±10 spherical lens. Remember, a dull reflex also occurs with media opacity (such as with a cataract or vitreous haemorrhage). A dull reflex can also occur as a result of flat retinoscope batteries!
The retinoscope is constructed so that if retinoscopy is performed at 1 m from the patient, the lenses in the trial frame to give neutralisation are equal to the spectacle prescription. However, we do not do retinoscopy at 1 m, but rather at 66 cm (when working with trial frames) or 50 cm (if you have shorter arms or when working without trial frames – for example, with children, examination under anaesthesia or a model eye). Therefore, once neutralisation is obtained, to convert to the corrected prescription, it is necessary to add a –1.50 sphere to the trial frame (to correct for a 66 cm working distance) or a –2.00 sphere (to correct for a 50 cm working distance). Note that the cyl remains unchanged. Therefore, a –1.50 myope will neutralise without any lenses if working at 66 cm. A –2.00 myope will neutralise without any lenses if working at 50 cm. Here are some other examples: • neutralisation occurs with +4.25/–1.75 @ 030 at 66 cm, so the corrected refraction will be +2.75/–1.75 @ 030, since +4.25 plus –1.50 = +2.75 • neutralisation occurs with –3.75/+0.75 @ 044 at 50 cm, so the corrected refraction will be –5.75/+0.75 @ 044, since –3.75 plus –2.00 = –5.75. Therefore, the working distance correction factor is the reciprocal of the working distance in metres and this must be subtracted from the retinoscopy result. Whenever a result is recorded, it is vital to state whether this is uncorrected or corrected for the working distance and what that working distance is. Therefore, add a –1.50 spherical lens for a working distance of 66 cm and add a –2.00 spherical lens for a working distance of 50 cm. The correction of working distance can be done at the end of the retinoscopy once neutralisation has been achieved, whilst working at 66 cm or 50 cm. However, it can be done at the start of retinoscopy. 
In this case, before using the retinoscope, you must add +1.50 (for 66 cm) or +2.00 (for 50 cm) to the trial frame (or your fingers, if working with no frame), and the resultant lens summation at neutralisation will give the corrected prescription. Whether you decide to correct for working distance at the end or the start of retinoscopy does not matter – but it must be done and your results should be clearly recorded to demonstrate at what stage a correction for working distance was made. Static versus dynamic retinoscopy 'Static' retinoscopy means that the working distance is fixed throughout retinoscopy. This is what most practitioners do and is what is detailed in this book. Experienced practitioners can use the concept of working distance to their advantage by varying their working distance to obtain neutralisation (rather than changing the lenses). This is known as 'dynamic' retinoscopy. For example, an emmetrope neutralises at 1 m, a –1.50 myope at 66 cm, a –2.00 myope at 50 cm, a –5.00 myope at 20 cm and so on. Imagine you get an against movement at 66 cm – rather than adding a minus lens (in the case of static retinoscopy), you instead lean forward to 50 cm and neutralisation occurs – the patient's refraction in that meridian is therefore –2.00. Dynamic retinoscopy is less practical for hypermetropes, since hypermetropes neutralise with a working distance of more than 1 m. Dynamic retinoscopy takes considerable practice but is extremely useful for refracting challenging patients (such as children) because it is so rapid. Retinoscopy technique Ideally, the room should be dim. The darker the room, the easier it is to note the reflex characteristics; if the room is too dark, you will struggle to find your lenses. A useful trick is to use your retinoscope light as a torch if you cannot see the lens markings easily. Ensure that your retinoscope cuff is all the way down on the shaft of the retinoscope. Key points for retinoscopy • Establish a dim room.
• Fog (or occlude, if necessary) the fellow eye. • Scope the patient's right eye with your right eye/right hand. • Scope the patient's left eye with your left eye/left hand. • Keep your scope as close as possible to their visual axis, without interrupting continuous distant fixation. • Correct for working distance (add –1.50 sphere if at 66 cm; add –2.00 sphere if at 50 cm). • Record in either positive cyl notation for both eyes or negative cyl notation for both eyes (never positive for one eye and negative for the other). The first step is to examine the patient's right eye with the retinoscope. For non-cycloplegic refraction of patients who are not presbyopic (especially if they are myopic), it is necessary to fog (blur) the fellow left eye. This involves placing a +1.50 or +2.00 spherical lens on top of the presumed refraction (estimated from their acuity, which you have just checked), so that the acuity is poorer than that of the eye being examined with the retinoscope. Adequate fogging can be confirmed by ensuring that the retinoscopy reflex is against or, alternatively, checking the acuity in each eye with the fog in place and ensuring the fogged eye has poorer acuity than the eye about to be objectively refracted. If the patient is 6/6 with the presumed refraction, a +1.50 or +2.00 spherical dioptre fog typically renders the eye to 6/12 to 6/24. The reason the fellow eye is fogged is to reduce accommodation, which would otherwise give a false result when examining the other eye with the retinoscope. With cycloplegic refraction (typically in children), there is no need to fog, since the accommodative component is removed by the cycloplegia. For non-cycloplegic refraction (most adults), fogging is required to reduce any accommodative drive (especially if the patient is a myope who is not yet presbyopic). This fogging induces less accommodation than simple occlusion with a black occluder – hence, the effort made to fog rather than simply occlude.
Occlusion, rather than fogging, should be avoided, as it stimulates more accommodation. However, occlusion is required in the following situations: • when the eye being tested is densely amblyopic (since the eye not being tested must have a poorer acuity to help avoid accommodation and a +2.00 lens will probably be insufficient to achieve this) • if the patient markedly objects to fogging due to diplopia or asthenopia • if you are unable to estimate acuity and provide an adequate fog lens. Once you have adequately fogged (or, if necessary, occluded) the fellow eye, ask the patient to fixate on the white light or green target in the distance. Explain to them that it is important that they continue to look into the distance and not at your own white light. Ask them to let you know if your head obscures their view of the distant fixation target. It is vital to ensure that your head is as close as possible to their visual axis, without actually obscuring their distant fixation target – this ensures that your retinoscope light will be close to their visual axis (see Figure 4.5). Failure to be 'on axis' in this way can result in spurious astigmatism, thus it is important to be wary of this when refracting children who shift their position. Figure 4.5 Use your left hand to perform retinoscopy of the patient's left eye (left photo), since incorrectly using your right hand will obstruct their view (central photo). Check working distance with arm (right photo). Use your right hand and right eye to scope their right eye. Scope first with a vertical, then a horizontal and finally a diagonal slit to locate the principal meridians. If only a dull, slow reflex is seen, try using a ±5 or even a ±10 lens. Then proceed by refracting in plus or minus cyls or spheres alone (see 'Working in plus/minus cyls or spheres', p. 39).
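The working-distance arithmetic described earlier (subtract the reciprocal of the working distance in metres from the sphere; the cyl is unchanged) can be sketched as a small helper. This is a JavaScript illustration with names of my own choosing, checked against the two worked examples given above.

```javascript
// Correct a retinoscopy result for working distance.
// The correction in dioptres is the reciprocal of the working distance
// in metres: 1.50 D at 66 cm, 2.00 D at 50 cm. Only the sphere changes.
function correctForWorkingDistance(result, workingDistanceMetres) {
  var correction = 1 / workingDistanceMetres;
  return {
    sphere: +(result.sphere - correction).toFixed(2),
    cyl: result.cyl,
    axis: result.axis
  };
}

// Neutralisation at 66 cm with +4.25/–1.75 @ 030 gives +2.75/–1.75 @ 030
var a = correctForWorkingDistance({ sphere: 4.25, cyl: -1.75, axis: 30 }, 2 / 3);
// Neutralisation at 50 cm with –3.75/+0.75 @ 044 gives –5.75/+0.75 @ 044
var b = correctForWorkingDistance({ sphere: -3.75, cyl: 0.75, axis: 44 }, 0.5);
```

Rounding to two decimal places keeps the result in the usual 0.25-dioptre-friendly notation.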
Once you have objectively refracted the right eye, correct for your working distance (add a –1.50 sphere if at 66 cm) and record your result (state 'corrected for working distance'). Then fog the right eye and use your left hand and left eye to scope their left eye. Once you have objectively refracted the left eye, again correct for working distance and record this. You should now turn the lights on, check the visual acuity and move on to subjective refraction. Remember that if a with reflex is seen, then a plus lens should be added and if an against reflex is seen then a minus lens should be added to approach neutralisation. The brighter and faster the reflex, the closer you are to neutralisation (the entire pupil lights up when the slit enters the pupil), whereas a dull and slow reflex implies you are not close to neutralisation. Working in plus/minus cyls or spheres It is possible to refract with your retinoscope in three different ways: 1. using positive cyls 2. using negative cyls 3. using spheres only. Using positive cyls This means that your retinoscopy result will be in a plus cyl format. Identify the orientation of the two principal meridians, which will be perpendicular to each other. The principal meridian that has an against reflex – or, if both reflexes are with, the least with reflex (which is fastest and brightest, as it is nearest neutralisation) – is neutralised first with spheres. This will result in the other principal meridian giving a with reflex, which is then neutralised with positive cyls (the axis on the lens in the same orientation as the scope slit). The resultant prescription will be the lenses in the trial frame (which must then be corrected for working distance). For example, you identify an against reflex with scope slit at 135 and a with reflex at 045. Add minus spheres until the against reflex at 135 is neutralised (say, –3.00 causes neutralisation).
Then add plus cyls (with the axis in the same orientation as the scope slit at 045) to neutralise the with reflex (say, +1.50 at 045 causes neutralisation). The axis line on the cyl lens should be parallel to the scope slit and light reflex (perpendicular to its power). The lenses in the trial frame then give the retinoscopy result in plus cyl format: –3.00/+1.50 @ 045, which must then be corrected for working distance (if at 66 cm, this gives –4.50/+1.50 @ 045). This may sound complicated, but simply consider that a patient with regular astigmatism requires a sphere with a cyl superimposed upon it to correct their refractive error. The sphere is found by neutralising the most against reflex, and the perpendicular meridian will then give a with reflex, which can be neutralised with plus cyls to give the sphero-cylindrical correction (which must be corrected for working distance). Using negative cyls This means that your retinoscopy result will be in a minus cyl format. Identify the orientation of the two principal meridians, which will be perpendicular to each other. First, neutralise the most with reflex with plus spheres then neutralise the perpendicular against reflex with minus cyls. The lenses in the trial frame will give the retinoscopy result in minus cyl format, which must then be corrected for working distance. Using spheres only It is possible to obtain an objective refractive result without using any cylindrical lenses. Identify the two principal meridians. Neutralise one of the meridians with a sphere, record the result and orientation of reflex then remove the sphere. Following this, neutralise the perpendicular meridian with a sphere and record the result and orientation of the reflex. The refractive result can then be expressed in either plus or minus cyl format; in both cases, the magnitude of the cyl is the difference between the two spheres. It can be useful to use a power cross to generate the resultant prescription. 
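The spheres-only bookkeeping just described can be sketched in a few lines of JavaScript. The function and field names are my own, for illustration; each sweep records the sphere that neutralised it together with the meridian along which that power acts, the cyl is the difference between the two spheres, and the axis is perpendicular to the action of the more positive sweep.

```javascript
// Combine two sphere-only neutralisation results into a plus-cyl
// prescription, after correcting each for the working distance.
function spheresToPrescription(sweepA, sweepB, workingDistanceMetres) {
  var correction = 1 / workingDistanceMetres;
  var a = { power: sweepA.power - correction, actsAlong: sweepA.actsAlong };
  var b = { power: sweepB.power - correction, actsAlong: sweepB.actsAlong };
  var least = a.power <= b.power ? a : b;
  var most = a.power <= b.power ? b : a;
  return {
    sphere: +least.power.toFixed(2),        // least positive sweep
    cyl: +(most.power - least.power).toFixed(2), // difference between the two spheres
    axis: (most.actsAlong + 90) % 180       // perpendicular to the most positive sweep's action
  };
}

// Example: +3.50 neutralises the horizontally acting power (along 180),
// +2.00 the vertically acting power (along 90), at a 66 cm working distance.
var rx = spheresToPrescription(
  { power: 3.5, actsAlong: 180 },
  { power: 2.0, actsAlong: 90 },
  2 / 3
); // sphere +0.50, cyl +1.50, axis 090
```

The same routine works for any working distance, since the correction is just the reciprocal of the distance in metres.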
Power crosses As noted, if working in plus or minus cyls, the resultant refraction obtained by retinoscopy will simply be the lenses in the trial frame (this does not apply if working in spheres). This can then be corrected for working distance. Therefore, it is not necessary to draw power crosses and power crosses are not required for the Refraction Certificate Examination (at the time of writing). However, since some practitioners use power crosses it is good practice to understand them. Furthermore, if you work only in spheres, it is useful to use a power cross to obtain your resultant refraction. Each arrowed arm of a power cross represents the direction of movement of the retinoscope sweep. For example, when sweeping horizontally with the scope slit orientated vertically, the power in the horizontal plane (180) is examined. Therefore, if a sphere with power +3.50 dioptres neutralises a horizontal sweep, this implies the power in the horizontal direction is +3.50 dioptres. If a sphere with power +2.00 dioptres is then required to neutralise a vertical sweep with a horizontally orientated scope slit (to assess vertically acting power), the resultant power cross would be: [Power cross: +3.50 D acting along the horizontal (180) meridian; +2.00 D acting along the vertical (90) meridian] Correcting for working distance would give: [Power cross corrected for working distance (at 66 cm, adding –1.50): +2.00 D along 180; +0.50 D along 90] To obtain the prescription from the power cross in positive cyl notation: • record the least positive sweep as the sphere • record the cyl as the difference between the two sweeps • record the axis as the same axis of the most positive sweep (remembering that the axis is perpendicular to the direction of action of the power arrow). Apr 27, 2017 | Posted in OPHTHALMOLOGY | How to refract
Postfix logging to file or stdout Overview Postfix supports its own logging system as an alternative to syslog (which remains the default). This is available with Postfix version 3.4 or later. Topics covered in this document: configuring logging to file, configuring logging to stdout, and rotating logs. Configuring logging to file Logging to file solves a usability problem for MacOS, and eliminates multiple problems for systemd-based systems. 1. Add the following line to master.cf if not already present (note: there must be no whitespace at the start of the line): postlog unix-dgram n - n - 1 postlogd Note: the service type "unix-dgram" was introduced with Postfix 3.4. Remove the above line before backing out to an older Postfix version. 2. Configure Postfix to write its logging to, for example, /var/log/postfix.log. See also the "Logfile rotation" section below for logfile management. # postfix stop # postconf maillog_file=/var/log/postfix.log # postfix start By default, the logfile name must start with "/var" or "/dev/stdout" (the list of allowed prefixes is configured with the maillog_file_prefixes parameter). This safety mechanism limits the damage from a single configuration mistake. Configuring logging to stdout Logging to stdout is useful when Postfix runs in a container, as it eliminates a syslogd dependency. 1. Add the following line to master.cf if not already present (note: there must be no whitespace at the start of the line): postlog unix-dgram n - n - 1 postlogd Note: the service type "unix-dgram" was introduced with Postfix 3.4. Remove the above line before backing out to an older Postfix version. 2. Configure main.cf with "maillog_file = /dev/stdout". 3. Start Postfix with "postfix start-fg". Rotating logs The command "postfix logrotate" may be run by hand or by a cronjob. It logs all errors, and reports errors to stderr if run from a terminal. This command implements the following steps:
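Pulling the container-oriented steps together, a minimal configuration sketch might look as follows. The master.cf fields mirror the line shown above; the file paths are the conventional defaults rather than anything mandated by this document.

```
# /etc/postfix/master.cf -- postlogd service (Postfix 3.4 or later).
# There must be no whitespace at the start of the postlog line.
postlog unix-dgram n - n - 1 postlogd

# /etc/postfix/main.cf -- send logging to stdout for a container:
maillog_file = /dev/stdout
```

With this in place, the container entrypoint runs "postfix start-fg" so that Postfix stays in the foreground and its stdout is collected by the container runtime; for file logging outside a container, set maillog_file = /var/log/postfix.log instead.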
Ultimate Guide: How Much Do Dental Night Guards Cost in 2021? Are you tired of waking up with jaw pain or headaches from grinding your teeth at night? Dental night guards may be the solution for you. But how much do they cost? In this article, we'll break down the various factors that can impact the price of dental night guards, so you can make an informed decision about this important investment in your oral health. Dental night guards are a common solution for those who grind or clench their teeth during sleep, a condition known as bruxism. These guards provide a protective barrier between the upper and lower teeth, preventing damage and potential dental issues. When it comes to the cost of dental night guards, there are a few factors to consider. The price can vary based on the material, design complexity, and where it is made. On average, a dental night guard can range from $300 to $800. There are three main types of dental night guards available: stock, boil-and-bite, and custom-made. Stock guards are the cheapest option, ranging from $20 to $50, but may not fit properly. Boil-and-bite guards offer a more customizable fit for $30 to $200. Custom-made guards, while the most expensive at $300 to $800, provide the best fit and protection. While the cost of a dental night guard may seem high, it is important to consider the long-term benefits. Teeth grinding and clenching can lead to dental issues like worn-down teeth, jaw pain, headaches, and fractures. Investing in a night guard can prevent costly dental procedures in the future. For those concerned about cost, checking with dental insurance can help. Some plans may cover a portion of the cost of a night guard, making it more affordable for patients. In conclusion, the cost of a dental night guard varies, but investing in a custom-made guard can provide the best protection for those with bruxism. Preventing dental issues and improving oral health in the long run is worth the investment.
1. How much does a dental night guard cost? – The cost of a dental night guard can vary depending on the type and quality. On average, they can range from $300 to $800. 2. Are dental night guards covered by insurance? – Some dental insurance plans may cover a portion of the cost of a night guard. It's best to check with your insurance provider to see if you have coverage. 3. Can I buy a dental night guard over the counter? – Yes, you can purchase over-the-counter night guards at drugstores or online for a lower cost. However, custom-made night guards from a dentist are usually more effective. 4. Are there payment plans available for dental night guards? – Some dental offices may offer payment plans or financing options to help with the cost of a night guard. It's worth asking your dentist about available options. 5. How long does a dental night guard typically last? – With proper care, a custom-made dental night guard can last anywhere from 1 to 5 years. Over-the-counter night guards may not last as long due to lower quality materials.
Congratulations! Koishi is now listed in the Seabook Panel app store.
Seabook-Panel (海书面板) is a modern, easy-to-use server operations panel.

Reply: Amusing ourselves, as usual.
Reply: "Version 1.0.0"? Is that set by hand? Shouldn't it read the latest published version instead?
Reply: So... what is it actually for?
Reply: The "1.0.0" refers to the version of his "koishi for seabook" install script!

    import subprocess

    def check_docker():
        try:
            result = subprocess.run(["docker", "--version"], capture_output=True, text=True)
            return result.returncode == 0
        except FileNotFoundError:
            return False

    if __name__ == "__main__":
        print("Installing Koishi")
        print("Install method: Docker")
        if check_docker():
            print("Docker is installed")
            # the command must be a list here (or use shell=True);
            # the original passed the whole command as a single string
            subprocess.run(["docker", "run", "-p", "5140:5140", "koishijs/koishi"])
        else:
            print("Docker is not installed; please install it yourself.")
        print("Koishi installation finished")

Reply: Huh, and what is this?
Reply: What an abstract thing.
Reply: :sob::sob::sob:
Reply: Now it has one.
亚洲必赢登录 > 亚洲必赢app > 探究前端黑科学技术亚洲必赢app:,动漫质量升 原标题:探究前端黑科学技术亚洲必赢app:,动漫质量升 浏览次数:57 时间:2019-11-10 斟酌前端黑科学技术——通过 png 图的 rgba 值缓存数据 2016/09/12 · JavaScript · 1 评论 · 缓存 原稿出处: jrainlau    提及前端缓存,当先八分之四人想到的单纯是多少个常规的方案,比方cookielocalStoragesessionStorage,恐怕加上indexedDBwebSQL,以及manifest离线缓存。除了这些之外,到底还恐怕有未有别的方法能够进行前端的数量缓存呢?这篇小说将会带您协同来研商,怎样一步一步地因此png图的rgba值来缓存数据的黑科学和技术之旅。 打赏援助本人写出越来越多好散文,多谢! 任选后生可畏种支付办法 亚洲必赢app 1 亚洲必赢app 2 3 赞 11 收藏 评论 有关小编:qwer 亚洲必赢app 3 简单介绍尚未来得及写 :卡塔尔 个人主页 · 作者的小说 · 12 亚洲必赢app 4 在rem布局下利用背景图片乃至sprite 2016/08/29 · CSS · 2 评论 · rem, sprite 原作出处: 吕大豹    明天运动端页面用rem布局已然是一大门户了,成熟的框架如Taobao的flexiable.js,以致自个儿的亲密的朋友@墨尘写的更轻量级的hotcss。用rem作单位使得成分能够自适应后,还会有一块须求关心的,那正是背景图片。本文就来闲聊那上面的东西。 盒子端 CSS 动画质量进步切磋 2017/12/08 · CSS · 动画 正文小编: 伯乐在线 - chokcoco 。未经笔者许可,禁绝转发! 招待参加伯乐在线 专辑笔者。 差别于守旧的 PC Web 只怕是运动 WEB,在优酷地蛋客厅盒子端,接大屏显示屏(电视卡塔 尔(英语:State of Qatar)下,大多能流利运行于 PC 端、移动端的 Web 动漫,受限于硬件水平,在盒子端的表现的每每不顺遂。 根据此,对于 Web 动漫的性指斥题,仅仅停留在感到已经优化的OK之上,是远远不足的,想要在盒子端跑出高质量相仿60 FPS 的余音袅袅动漫,就必得要寻根究底,深挖每生龙活虎处能够进级的秘技。 后记 实属黑科学和技术,其实原理特别简单,与之相符的还会有通过Etag等艺术实行强缓存。研讨的指标只是为了学习,千万不要看成地下之用。假诺读者们开采那篇随笔有哪些错漏的地方,接待指正,也期望有意思味的爱人能够协同举行座谈。 感激您的翻阅。笔者是Jrain,应接关切本人的特辑,将不许期分享温馨的就学心得,开拓体会,搬运墙外的干货。后一次见啦! 1 赞 2 收藏 1 评论 亚洲必赢app 5 至于小编:陈被单 亚洲必赢app 6 热爱前端,招待调换 个人主页 · 笔者的篇章 · 19 ·    亚洲必赢app 7 接下去是什么样? 自然,那些类型接下去自然就改为多少个游玩,那项技术能够有多大的可扩充性值得期望。简单的讲,作者曾经开首在第一流的Three.js上行使八个行业内部的CSS3渲染器,这几个JS库使用形似的才能通超过实际际的三个维度引擎来渲染几何体和光线。 2 赞 2 收藏 评论 cover与contain CSS3为background-size属性扩大了四个值:cover与contain。这一个五个值允许大家钦赐背景图片的自适应情势。它俩有如何分裂吗? 
从语言上陈述,cover是拉伸图片使之充满成分,成分料定是被铺满的,但是图片有不小只怕来得不全。contain则是拉伸图片使图片完全突显在要素内,图片确定能显得全,可是成分恐怕不会被铺满。 地点说的“或许”的状态,发生在要素尺寸和图片尺寸宽高比例不等同的时候。 上边通过例子来演示一下这两个的用法。举个例子我们以索爱5为例,那时候dpr为2,页面scale为0.5,基准字体大小为100px。设计稿上有一张90*200px的图纸。那么css应该那样写: CSS #mm{ width: 0.9rem; height: 2rem; background-image: url(mm.jpg); background-size: contain; background-repeat: no-repeat; } 1 2 3 4 5 6 7 #mm{     width: 0.9rem;     height: 2rem;     background-image: url(mm.jpg);     background-size: contain;     background-repeat: no-repeat; }   意义如下: 亚洲必赢app 8 当成分与背景图片的大小相像,恐怕是宽高比例相同期,contain和cover的填写效果是雷同的,因为两岸在拉伸后总能使图片“正好”完全充满成分。 唯只临时成分的宽高比例是不分明的,举例有一张宽度为百分之百,中度为200px的图纸。那时候contain和cover的区分就显得出来了。如下图: 亚洲必赢app 9 能够观望contain的时候,成分右边有空白未有填满。而cover的时候,成分固然填满了,不过有生机勃勃对图形已经拉伸到成十分界看不到了。那就是两个的分歧,实际接受的时候要依附具体意况来定。 4. 接收 will-change 能够在要素属性真正发生变化在此之前提前做好对应构思 // 示例 .example { will-change: transform; } 1 2 3 4 // 示例 .example {     will-change: transform; } 地点已经涉及过 will-change 了。 will-change 为 web 开垦者提供了生龙活虎种告知浏览器该因素会有怎么样变化的不二等秘书籍,那样浏览器能够在要素属性真正产生变化早先提前做好对应的优化准备工作。 这种优化能够将部分目迷五色的简政放权职业提前盘算好,使页面包车型的士影响更是迅猛灵敏。 值得注意的是,用好这几个本性并不是超轻易: • 在有的低等盒子上,will-change 会以致众多小意思,举个例子会使图片模糊,临时非常轻巧为蛇画足,所以采取的时候还要求多加测量检验。 • 毫不将 will-change 应用到太多成分上:浏览器已经努力尝试去优化整个能够优化的东西了。有局地越来越强力的优化,假设与 will-change 结合在一起的话,有望会损耗过多机械财富,借使过度施用以来,大概产生页面响应缓慢只怕消耗比相当多的能源。 • 有总统地选取:日常,当成分恢复到开端状态时,浏览器会扬弃掉从前做的优化办事。可是要是直白在样式表中显式申明了 will-change 属性,则象征指标成分也许会有时转移,浏览器会将优化办事保存得比之前越来越久。所以最好实施是当元素变化以前和后来经过脚本来切换 will-change 的值。 • 不要太早应用 will-change 优化:如果你的页面在品质方面没什么难题,则不用增添 will-change 属性来榨取一丁点的快慢。 will-change 的规划初志是用作末了的优化花招,用来品尝清除现存的性申斥题。它不应有被用来防护质量难点。过度使用 will-change 会招致变化大批量图层,进而招致多量的内存占用,并会变成更复杂的渲染进程,因为浏览器会考虑筹算大概存在的扭转历程,那会引致更严重的天性难题。 • 给它丰盛的劳作时间:那个天性是用来让页面开拓者告知浏览器哪些属性大概会变动的。然后浏览器能够接收在扭转产生前提前去做一些优化工作。所以给浏览器一点年华去真正做那么些优化办事是十分重大的。使用时必要尝试去找到一些艺术提前一准时期得悉成分或者爆发的变化,然后为它助长 will-change 属性。 结果深入分析 翻开服务器,运营顾客端,第二回加载的时候经过调整台能够看看响应的图样信息: 亚洲必赢app 10 200 OK,注解是从服务端获取的图纸。 关闭当前页面,重新载入: 亚洲必赢app 11 200 OK (from cache),评释是从本地缓存读取的图形。 接下去间接看rgba值的对照: 源数据: [亚洲必赢app,50, 101, 152, 203, 
54, 105, 156, 207, 58, 109, 150, 201, 52, 103, 154, 205, 56, 107, 158, 209, 50, 101, 152, 203, 54, 105, 156, 207, 58, 109, 150, 201] 缓存数据:[50, 100, 152, 245, 54, 105, 157, 246, 57, 109, 149, 244, 52, 103, 154, 245, 56, 107, 157, 247, 50, 100, 152, 245, 54, 105, 157, 246, 57, 109, 149, 244] 1 2 3 源数据:  [50, 101, 152, 203, 54, 105, 156, 207, 58, 109, 150, 201, 52, 103, 154, 205, 56, 107, 158, 209, 50, 101, 152, 203, 54, 105, 156, 207, 58, 109, 150, 201]   缓存数据:[50, 100, 152, 245, 54, 105, 157, 246, 57, 109, 149, 244, 52, 103, 154, 245, 56, 107, 157, 247, 50, 100, 152, 245, 54, 105, 157, 246, 57, 109, 149, 244] 可以看到,源数据与缓存数据**基本一致**,在`alpha`值的误差偏大,在`rgb`值内**偶有误差**。通过分析,认为产生误差的原因是服务端在进行base64转buffer的过程中,所涉及的运算会导致数据的改变,这一点**有待考证**。 前面获得的结论,源数据与缓存数据存在标称误差的来头,经应用商讨后鲜明为alpha值的打扰所致。就算大家把alpha值直接定为255,并且只把多少寄存在rgb值内部,就可以消逝零值误差。上面是校勘后的结果: 源数据: [0, 1, 2, 255, 4, 5, 6, 255, 8, 9, 0, 255, 2, 3, 4, 255, 6, 7, 8, 255, 0, 1, 2, 255, 4, 5, 6, 255, 8, 9, 0, 255] 缓存数据:[0, 1, 2, 255, 4, 5, 6, 255, 8, 9, 0, 255, 2, 3, 4, 255, 6, 7, 8, 255, 0, 1, 2, 255, 4, 5, 6, 255, 8, 9, 0, 255] 1 2 3 源数据:  [0, 1, 2, 255, 4, 5, 6, 255, 8, 9, 0, 255, 2, 3, 4, 255, 6, 7, 8, 255, 0, 1, 2, 255, 4, 5, 6, 255, 8, 9, 0, 255]   缓存数据:[0, 1, 2, 255, 4, 5, 6, 255, 8, 9, 0, 255, 2, 3, 4, 255, 6, 7, 8, 255, 0, 1, 2, 255, 4, 5, 6, 255, 8, 9, 0, 255] 因为作者懒,只是把alpha值给定为255而并未有把循环赋值的逻辑举行翻新,所以第4n位的元数据被一贯替换到了255,那么些留着读者自行修改闲暇再改…… 综合,那些利用png图的rgba值缓存数据的黑科技,在争鸣上是立见成效的,但是在实际操作过程中可能还要考虑更多的影响因素,比如设法消除服务端的误差,采取容错机制等。事实上也是行得通的。 值得注意的是,localhost或然暗中认可会直接通过当地实际不是服务器央浼财富,所以在本地实验中,能够由此安装header举行cors跨域,何况通过安装IP地址和80端口模拟服务器访谈。 图片懒加载插件实战 2016/07/28 · JavaScript · 插件 正文小编: 伯乐在线 - 陈被单 。未经小编许可,禁绝转发! 接待参加伯乐在线 专辑我。 过多网址都会用到‘图片懒加载’这种艺术对网址进行优化,即延迟加载图片或相符有些条件才开首加载图片。于是心血来潮,决定本人手动写一下’图片懒加载‘插件。 • 接纳那一个技巧有哪些明显的帮助和益处? 
Some pages contain a huge number of images — the Tmall homepage, for example, has well over 100 on a single page. If all of those requests fired the moment the page opened, loading would take a very long time; and if the JS files sit at the bottom of the document while the head of the page depends on them, that is a real problem. To the user, the page simply feels sluggish.

• How lazy loading works: the browser automatically sends a request for the src attribute of every img tag and downloads the image. Lazy loading is implemented by changing the img's src dynamically. When a page is visited, the src of each img (or the background-image path of other elements) is first replaced with the address of a loading placeholder (so only one request is needed for all of them). When a certain condition is met (here, the page scrolling into a given region), the real image address stored in the lazy-src attribute is swapped into src — that is "lazy loading".
// Even if an img's src is empty, the browser still sends a request to the server. So in real projects, if an img is not using src yet, do not include the src attribute at all.

• A few key points first:
1. Getting the height of the visible viewport: document.documentElement.clientHeight (standards mode, including older IE in standards mode); document.body.clientHeight (quirks mode in older browsers).
2. An element's offset from the top of the document: element.offsetTop.
3. The distance the page has been scrolled: document.documentElement.scrollTop (standards mode, compatible with older IE); document.body.scrollTop (quirks mode).

Scroll loading: when an image appears in the visible area, load it dynamically. The test is whether the top of the image element is inside the visible area, i.e. (image offset from the top of the document − scroll distance) < viewport height.

Implementation:

1. First, collect the elements that need deferred loading from all relevant elements into the element_obj array.

JavaScript
function initElementMap() {
    var el = document.getElementsByTagName('img');
    for (var j = 0, len2 = el.length; j < len2; j++) {
        // check whether this img still has a lazy-src attribute, i.e. has not been loaded yet
        // (the original check, typeof(el[j].getAttribute("lazy_src")), was always truthy
        // and marked "unfinished" by the author; fixed here)
        if (el[j].getAttribute("lazy-src")) {
            element_obj.push(el[j]);
            download_count++;
        }
    }
}

2. Check the img objects in the array; if the condition is met, change the src attribute.

JavaScript
function lazy() {
    if (!download_count) return;
    var innerHeight = getViewport();
    for (var i = 0, len = element_obj.length; i < len; i++) {
        if (!element_obj[i]) continue; // skip slots already cleared by delete below
        // the image's distance from the top of the document
        var t_index = getElementViewTop(element_obj[i]);
        if (t_index - getScrollTop() < innerHeight) {
            element_obj[i].src = element_obj[i].getAttribute("lazy-src");
            delete element_obj[i];
            download_count--;
        }
    }
}

3. Trigger on scroll, running lazy() after 1000 ms.

JavaScript
window.onscroll = window.onload = function () {
    setTimeout(function () {
        lazy();
    }, 1000)
}

The whole thing lives inside a self-executing closure, with the relevant methods exposed through init.

JavaScript
var lazyLoad = (function () {
    function init() {
        initElementMap();
        lazy();
    }
    return {
        init: init
    }
})();

Usage: fill src with the default loading-image address, put the real image address in the lazy-src attribute, and be sure to specify width and height. Call lazyLoad.init(); externally. All of the code and examples have been uploaded to GitHub; stars are welcome.

Building a 3D World with HTML and CSS

2015/01/13 · CSS, HTML5 · 3D, CSS, HTML

Translated by 伯乐在线 - qwer, proofread by 黄利民. Reproduction without permission is prohibited.
English source: keithclark.co.uk. You are welcome to join the translation team.

background-size with concrete values

We know that background-size can take concrete values as well as percentages — so can't we just set the background image size equal to the element size and be done with it, instead of going to all that trouble for adaptivity? Of course we can. For example, change the CSS above to this:

CSS
#mm{
    width: 0.9rem;
    height: 2rem;
    background-image: url(mm.jpg);
    background-size: 0.9rem 2rem;
    background-repeat: no-repeat;
}

This achieves exactly the same effect. I just dislike writing concrete values in my own CSS; whenever something can be adaptive, I make it adaptive, so that if an element's size changes later only one place needs editing. That is why I prefer contain or cover.

Summary

This pitfall was first described in 张云龙's article "CSS3 硬件加速也有坑" (CSS3 hardware acceleration has pitfalls too). To summarize and add to it:

• GPU hardware acceleration has pitfalls. When you enable it with something like transform: translate3d(), pay close attention to element stacking, and try to keep the z-index of the elements being animated with CSS at the top of the page.
• More Graphics Layers is not better. For every frame, the rendering engine walks all current Graphics Layers and computes their repaint regions for the next frame, so an excessive number of Graphics Layers also costs rendering performance.
• In Chrome, use the two tools introduced above to inspect the Graphics Layers your page generates and the element stacking, then adjust accordingly.
• The Chrome tool for inspecting page layers is very memory-hungry — it still seems to be an experimental feature, and analyzing a slightly larger page can hang it outright — so learn to rely mainly on the first method, watching the layer borders, to see which Graphics Layers a page generates.

The principle

We know that setting the Cache-Control and Expires response headers on a static resource forces the browser to cache it. Before issuing a request to the backend, the browser first looks in its own cache; only if the resource is not there does it go on to request the static resource from the server. We can exploit this: information that needs to be cached can be stored through this static-resource caching mechanism.

So how do we write information into a static resource? canvas provides the .getImageData() and .createImageData() methods, which respectively read and set an image's rgba values, so we can use these two APIs for reading and writing the data. Here is the schematic:

[Figure: schematic of the image-cache mechanism]

Once the static resource enters the cache, every later request for that image hits the local cache first — in other words, the information has effectively been cached locally in the form of an image.

Note: since rgba values can only be integers in [0, 255], the technique discussed in this article only applies to data consisting purely of numbers.

Casting shadows

With light solved via canvas, casting shadows became possible too, and the logic behind shadow casting turned out to be quite simple. Surfaces are arranged by their distance from the light source, so while generating the light map for a surface I also have to determine whether another surface in front of it has already been hit by the current ray. If the surface is occluded, I set the corresponding pixels in its light map to shadow. This technique lets a single image be used in both its lit and its shadowed state.

[Figure: a screenshot of the final room, with light and shadow]

Handling sprite images

To combine image requests we usually resort to the sprite technique. Under the rem layout scheme, using contain or cover to scale background images no longer works, because the element's background is really just one part of the sprite image, and contain and cover can only scale the whole image — they cannot control the size of one part of it.

Take, for instance, the following 200*50 sprite:

[Figure: a 200*50 sprite image]

After scaling with contain it looks like this:

[Figure: the sprite distorted by contain]

So when dealing with sprites we have no choice but to give background-size concrete values. Which values? Simply the actual dimensions of the image slice as designed. Say our element is 50*50px and the sprite is 200*50px; the CSS should be:

CSS
#cpt{
    width: 0.5rem;
    height: 0.5rem;
    background-image: url(cpt.png);
    background-size: 2rem 0.5rem;
}

Then, by changing background-position, we can reach any image on the sprite. background-position also takes concrete values, again based on the dimensions of the slices. For example, if my image holds the 4 frames of a frame animation, each frame's background should be:

CSS
#cpt.status1{
    background-position: 0 0;
}
#cpt.status2{
    background-position: -0.5rem 0;
}
#cpt.status3{
    background-position: -1rem 0;
}
#cpt.status4{
    background-position: -1.5rem 0;
}

Using these dimensions, let's try a small animation.

[Figure: the resulting frame animation]

That's it for this post — after all that rambling, it turns out the whole thing could have been said in two sentences!

About the author: chokcoco (personal homepage · articles)

The static server

We use node to build a simple static server:

JavaScript
const fs = require('fs')
const http = require('http')
const url = require('url')
const querystring = require('querystring')
const util = require('util')

const server = http.createServer((req, res) => {
  let pathname = url.parse(req.url).pathname
  let realPath = 'assets' + pathname
  console.log(realPath)
  if (realPath !== 'assets/upload') {
    fs.readFile(realPath, "binary", function(err, file) {
      if (err) {
        res.writeHead(500, {'Content-Type': 'text/plain'})
        res.end(err)
      } else {
        res.writeHead(200, {
          'Access-Control-Allow-Origin': '*',
          'Content-Type': 'image/png',
          'ETag': "666666",
          'Cache-Control': 'public, max-age=31536000',
          'Expires': 'Mon, 07 Sep 2026 09:32:27 GMT'
        })
        res.write(file, "binary")
        res.end()
      }
    })
  } else {
    let post = ''
    req.on('data', (chunk) => {
      post += chunk
    })
    req.on('end', () => {
      post = querystring.parse(post)
      console.log(post.imgData)
      res.writeHead(200, {
        'Access-Control-Allow-Origin': '*'
      })
      // note: the backslashes in this regex were lost in the scraped copy
      let base64Data = post.imgData.replace(/^data:image\/\w+;base64,/, "")
      let dataBuffer = new Buffer(base64Data, 'base64')
      fs.writeFile('assets/out.png', dataBuffer, (err) => {
        if (err) {
          res.write(err)
          res.end()
        }
        res.write('OK')
        res.end()
      })
    })
  }
})

server.listen(80)

console.log('Listening on port: 80')

This static server's job is simple; it provides two functions: generating an image from the base64 data sent by the client and saving it on the server, and setting the image's cache lifetime before sending it to the client.

The key part is setting the response headers:

JavaScript
res.writeHead(200, {
  'Access-Control-Allow-Origin': '*',
  'Content-Type': 'image/png',
  'ETag': "666666",
  'Cache-Control': 'public, max-age=31536000',
  'Expires': 'Mon, 07 Sep 2026 09:32:27 GMT'
})

We give this image a Cache-Control max-age of one year and an Expires date years in the future — in theory, plenty long enough. Next, the client-side coding.

Light

Light was the single greatest difficulty in this project. I won't lie — the math nearly did me in — but it was worth it, because light brings a convincing sense of depth and atmosphere rather than a flat, lifeless environment.

[Figure: a screenshot of a room with no lighting]

As I mentioned earlier, in an ordinary 3D engine we define an object with a series of vertices. To compute lighting, those vertices are needed to derive a normal, which determines how much light the center of a surface receives. That posed a problem here, because when we build 3D objects out of HTML those vertices simply don't exist. So the first challenge was making light computable at all: I had to write a set of routines to calculate the four vertices (one per corner) of an element after it has been transformed by CSS. Once that was working, I began experimenting with different ways of lighting objects. In the first experiment I used multiple background images to simulate light falling on a surface, by layering a linear gradient over the texture image. Using a gradient with the same RGBA value at its start and end positions produces a solid block of color; varying the alpha channel lets the underlying image bleed through the color block, creating the feeling of light and dark.

[Figure: an example of using gradients to shade a head]

To achieve the second, darker effect in the image above, I applied the following style to the element:

CSS
element {
    background: linear-gradient(rgba(0,0,0,.8), rgba(0,0,0,.8)), url("texture.png");
}

In the experiment these styles were not predefined in a stylesheet; they were computed with JavaScript and loaded directly onto the element's style attribute.

See the Pen 3D objects in CSS with shading by Keith Clark (@keithclark) on CodePen.
A 3D oil drum with flat shading

This technique is related to flat shading. It is a useful way of producing shadow, but it gives an entire surface the same level of detail. If I built a 3D wall stretching into the distance, for example, the shading along its whole length would be identical. I needed something that looked more realistic.

rem layout

The so-called rem layout simply means giving the document's root <html> element a base font size and then writing every element dimension in rem. For example, with the <html> font size set to 100px, a 100*200 element is written as:

CSS
div{
    width: 1rem;
    height: 2rem;
}

The resulting element is then exactly 100*200px. To adapt to phone screens of different sizes, JS detects the device width and sets the <html> font size dynamically; once the base font size changes, element sizes change with it, and the layout adapts. Only the basic concept is introduced here — the other details of the rem layout scheme are outside the scope of this post. The scheme was probably first proposed by winter (寒老湿); whether anyone used it earlier is hard to verify.

Rendering each frame on the Web

To reach 60 FPS, the budget for each frame is only a little over 16 ms (1 second / 60 = 16.67 ms). In reality, though, the browser has housekeeping of its own to do, so all of your work needs to finish within about 10 ms.

For each frame, the part we can control — the key steps of the pixels-to-screen pipeline — is as follows:

[Figure: the full pixel pipeline]

JS / CSS > Style > Layout > Paint > Composite:

1. JavaScript. Generally speaking, we use JavaScript to produce visual changes — running an animation with jQuery's animate function, sorting a data set, or adding DOM elements to the page. Besides JavaScript, other common approaches can produce visual changes too, such as CSS Animations, Transitions, and the Web Animation API.
2. Style calculations. This is the process of working out, from matching selectors (for example .headline or .nav > .nav__item), which elements are covered by which CSS rules, then applying those rules and computing each element's final style.
3. Layout. Once the browser knows which rules apply to an element, it can begin computing how much space the element occupies and where it sits on screen. Under the web's layout model one element can affect others — an element's width usually affects its children's widths and nodes throughout the tree — so for the browser, layout happens again and again.
4. Paint. Painting is the process of filling in pixels. It involves drawing text, colors, images, borders, and shadows — essentially every visible part of an element. Painting is usually done onto multiple surfaces, often called layers.
5. Composite. Since parts of the page may have been painted into multiple layers, they have to be drawn to the screen in the correct order for the page to render correctly. This matters especially where elements overlap one another, because a mistake could make one element appear, wrongly, on top of another.

Of course, not every frame necessarily passes through every stage of the pipeline. Our goal is this: for every frame of an animation, avoid whatever stages of the pipeline above can be avoided, and optimize the rest as far as possible.
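As a small worked example tying the frame budget above back to the scroll-triggered lazy() check from the lazy-loading section: instead of firing work from a 1000 ms setTimeout, a scroll handler can schedule at most one check per rendered frame with requestAnimationFrame. This is only a sketch — makeRafThrottled is my own helper name, and lazy() is assumed from the earlier code:

```javascript
// Schedule at most one run of fn per animation frame.
// fn  - the work to run (e.g. the lazy() viewport check from earlier)
// raf - a frame scheduler (requestAnimationFrame in the browser)
function makeRafThrottled(fn, raf) {
  var scheduled = false;
  return function () {
    if (scheduled) return; // a run is already queued for this frame
    scheduled = true;
    raf(function () {
      scheduled = false;   // allow the next frame to queue again
      fn();
    });
  };
}

// Browser wiring (sketch):
// window.addEventListener('scroll',
//   makeRafThrottled(lazy, window.requestAnimationFrame.bind(window)),
//   { passive: true });
```

Passing the scheduler in as a parameter keeps the helper testable outside the browser; in a page you would pass the native requestAnimationFrame.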
A Computer Science Teacher's Blog

Solving problem ExamBegin13

Problem statement: An integer greater than 1 is given as input. Decompose the number into prime factors and print all the factors in ascending order (print each number on a new line; the output may contain equal numbers).

uses PT4Exam;
var a, k: integer;
begin
  Task('ExamBegin13');
  Read(a);
  k := 2;
  while a <> 1 do
  begin
    if a mod k = 0 then
    begin
      Writeln(k);
      a := a div k;
    end
    else
      k := k + 1;
  end;
end.
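For comparison, the same trial-division algorithm can be sketched in JavaScript (the function name is mine; the PT4Exam task framework from the Pascal version is omitted, so the factors are returned as an array instead of being printed):

```javascript
// Trial division, mirroring the Pascal solution: repeatedly divide a by
// the smallest k that divides it. Every k that succeeds is prime, since
// all smaller factors have already been divided out.
function primeFactors(a) {
  var factors = [];
  var k = 2;
  while (a !== 1) {
    if (a % k === 0) {
      factors.push(k);   // Writeln(k) in the Pascal version
      a = a / k;         // a div k: exact here, because k divides a
    } else {
      k = k + 1;
    }
  }
  return factors;        // ascending order, duplicates allowed
}
```

For example, primeFactors(12) yields the factors 2, 2, 3 in ascending order, matching the required output format when printed one per line.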
Health

Does Testosterone Cause Hair Loss?

Hair loss, including androgenetic hair loss, is a problem that affects nearly everyone at some point. Thinning hair, a receding hairline, and excessive shedding are concerns both sexes face at some stage of their lives. Hair loss is most often an age-related issue, and for many people it is only a matter of time before they notice individual hairs disappearing. Changes in hormone levels, especially testosterone, may also play a role in both male and female pattern hair loss.

How Can Testosterone Cause Hair Loss?

Testosterone, commonly known as the male sex hormone, can become increasingly scarce in older men. Wrinkled skin, slowed wound healing, and weaker bones are some of the effects of aging, which can reduce the efficiency of virtually every bodily function. Aging can also reduce the production of crucial hormones, such as testosterone, in men.

Many aspects of health and well-being depend on testosterone. Although it is best known for bolstering libido and sperm quality, testosterone also contributes to bone mass, muscle mass, and muscle development. Even so, many men blame too much testosterone for their balding.

How Does Testosterone Contribute to Male and Female Hair Growth?

Although male and female sex hormones serve different purposes in the body, both are important. In general, these hormones shape masculine and feminine characteristics, regulate sexual activity and reproduction, and promote and maintain robust physical health.

Testosterone is the hormone that gets the most attention whenever hair is discussed. In males, testosterone drives maturation; men need enough testosterone to build muscle and to develop hair on their bodies and faces. In a woman's body, the ovaries and the adrenal glands secrete a small amount of testosterone.
Androgenetic alopecia, which is driven by DHT converted from androgens, can affect women even though they are less likely to suffer from low testosterone levels. Pregnancy, some medications, the onset of menopause, and an imbalance of thyroid hormones have all been linked to hair loss in women.

Popular Forms of Testosterone Include:

• Unbound testosterone, also known as "free testosterone" (free T), is the fraction of the hormone that is not attached to any protein. Most of the testosterone in the system functions by binding to proteins; a small fraction typically does not bind to any carrier.
• Bound testosterone is the most abundant form of testosterone in the body. It is usually bound to albumin or to sex hormone-binding globulin (SHBG).
• Dihydrotestosterone (DHT) is produced when testosterone is converted in the body by the enzyme 5-alpha reductase. Some claim this androgen derivative is even more potent than testosterone itself.

DHT: What Does It Mean?

DHT is an abbreviation for dihydrotestosterone, the hormone responsible for many men's frustrations in front of the mirror. In the medical community it is classed as an androgen — the family of male sex hormones that also includes testosterone. Because DHT promotes hair growth at optimal levels but leads to premature hair loss at excessively high levels, DHT blockers can be helpful.

Testosterone's primary role is to drive the development of adult male characteristics during puberty, but it continues to contribute to health and vitality well into old age. You may be interested in DHT supplements for hair growth if you worry that your body isn't producing sufficient DHT on its own. Low testosterone could be to blame if you're constantly exhausted, lack the motivation to get through the day, and can't work out as hard as you used to.
DHT blockers may be prescribed by doctors after a patient has been tested and is showing signs of hair loss.

DHT (Dihydrotestosterone) and Hair Loss: What Is It and What Does It Do?

How DHT Inhibits the Hair-Growth Cycle

DHT contributes to hair loss by shrinking hair follicles. This disrupts the regular hair-growth cycle and can lead to thinning or loss of hair in both men and women.

Anagen phase — hair follicles are constantly producing new hair during the anagen (growth) phase. When DHT binds to receptors on the hair follicles, it pushes hairs out of the anagen phase at an accelerated rate.

Catagen phase — the catagen phase is a transitional phase characterized by the gradual regression of hair growth. In essence, the hair follicles stop making new hair so that the hair strands can detach from the follicle bases. DHT influences the catagen phase by shrinking the follicles, making the transition into a fresh anagen phase more challenging.

Telogen phase — the follicles rest and do not produce new hair during the telogen phase, which can last for months. During this time some hairs may be pushed out of the follicle, and DHT activity may extend this phase and make the shedding of existing hair easier. Stress and traumatic experiences may trigger telogen effluvium — a condition characterized by abnormally high rates of shedding or thinning — in some people.

Hair thinning linked to testosterone activity, and male pattern baldness in general, can have multiple causes. One possible adverse effect of hormone replacement therapy is hair loss in those receiving testosterone replacement. Temporary hair loss has been linked to the use of testosterone supplements and testosterone injections to treat low T. However, such pattern hair loss can be expected to improve once hormone levels return to normal.
Female pattern baldness may be due to follicle hypersensitivity to dihydrotestosterone (DHT) even in women with normal testosterone levels. Hair follicles and the scalp can be affected by even modest amounts of testosterone and DHT, resulting in pattern baldness and poor hair growth and health. In androgenic alopecia, women experience thinning hair in patches all over the scalp, much as men experience a receding hairline.

Hair Loss and Testosterone Injections

If your doctor has determined that your testosterone levels are low, he or she may recommend therapeutic testosterone to bring them up to a more normal level. The advantages of testosterone are substantial. Because of its anabolic properties, it promotes muscle growth and preservation. By increasing the production of red blood cells, testosterone may also reduce the likelihood of conditions like anemia. Testosterone has been linked to a variety of health benefits, including stronger bones, more energy, a more even disposition, and more frequent and more satisfying erections.

Therapeutic testosterone or testosterone supplements can lessen the likelihood of experiencing low-testosterone symptoms such as:

• Impotence or erectile dysfunction (ED)
• Sexual problems stemming from a lack of libido or other factors
• Having less muscle and less muscle strength
• Problems falling asleep or staying asleep
• Hair loss
• Putting on weight
• Concentration issues
• Depression and other mental health problems

Testosterone is most commonly administered via injection, but it can also be taken orally, applied topically (as a gel or patch), or implanted subcutaneously. Based on your needs and preferences, your doctor will recommend an exogenous testosterone treatment that is right for you.
Although testosterone is not the direct cause of male hair loss, increasing one's testosterone levels through injections or other means may hasten the development of male pattern baldness. This is because 5-alpha reductase converts a small proportion of circulating testosterone into dihydrotestosterone (DHT), as we discussed earlier.

Testosterone levels in healthy men typically hover between 300 and 1,000 ng/dL. Injecting testosterone, using a testosterone gel, or wearing a testosterone patch all raise testosterone levels, which in turn increases the amount of testosterone available for conversion to DHT by the 5-alpha reductase enzyme. Pattern hair loss symptoms — diminished hair growth or obvious thinning — may therefore progress more rapidly due to elevated DHT levels.

Testosterone Therapy May Cause Other Side Effects

The health, well-being, and quality of life of men with low testosterone can be greatly improved through the use of testosterone therapy. There are, however, risks associated with it, such as:

• Decreased sperm count
• Prostate enlargement
• Elevated lipid levels
• Problems getting to sleep or remaining asleep
• Higher risk of developing heart disease
• Worsening of existing cardiovascular disease
• Possibility of developing blood clots

It is unclear whether testosterone therapy raises the risk of heart attack, stroke, or diseases like prostate cancer; however, it could have adverse effects if you have a history of cardiovascular disease. Talk to your doctor about these and other possible adverse effects of hormone therapy if you're thinking about taking testosterone to treat low T.

Treatment to Reduce Hair Loss

Helpful Supplements for Testosterone-Related Hair Loss

DHT inhibitors

Medications that inhibit the effects of testosterone and block DHT are one option for addressing hair loss.
Their primary function is to prevent dihydrotestosterone (DHT) from binding to receptors, keeping the thinning hair follicles from being further reduced in size. Hair loss treatments and supplements that block dihydrotestosterone (DHT) include:

• Finasteride — Finasteride, a pill taken orally, is highly effective in the treatment of male pattern hair loss and baldness. Researchers found that finasteride increased hair growth in men with male pattern baldness by 87%. The medication acts as a blocker of the 5-alpha reductase enzyme, preventing testosterone from being converted into DHT.

• Minoxidil — Minoxidil works differently. Historically, minoxidil was used to treat hypertension, and a number of those patients experienced increased hair growth as a side effect. The medication dilates blood vessels, thereby increasing blood flow. Applied topically, minoxidil likewise increases blood flow and circulation to the hair follicles, which stimulates hair growth.

• Spironolactone — Anti-androgen drugs like spironolactone are also effective for women experiencing hair loss. This prescription drug lowers testosterone levels, thus lowering the potential for DHT production in the scalp.

Supplemental biotin — Biotin is an essential vitamin for a healthy hair and scalp, as it promotes keratin production and keeps follicles active. In addition to helping prevent hair loss and thinning, it helps strengthen hair quality. Egg yolks, whole-grain bread, nuts, fish, and meat are all good natural food sources of biotin, as are oral dietary supplements.

Vitamin B complex — Hair and scalp wellness can be improved by taking vitamin B6 and B12 supplements. They are thought to improve thinning hair by stimulating the follicles to produce fuller, thicker hair and by enhancing the supply of blood to the follicles, which promotes healthy hair growth.
Pumpkin seed oil — Oil extracted from pumpkin seeds is a natural treatment for hair loss that may inhibit the production of dihydrotestosterone (DHT). It has been suggested that supplementing with pumpkin seed oil can help slow the rate of hair loss and even stimulate new growth.

Saw palmetto — Saw palmetto is another natural supplement reported to help with hair loss. It can inhibit the enzyme responsible for making DHT, helping to keep hair follicles robust and productive. Taking this supplement may help keep hormones in check and stop hair loss before it starts.

Additional Hair-Growth-Promoting Treatments

Supplements are often suggested to guarantee an adequate intake of essential nutrients for overall health and to address hair loss caused by deficiencies. In severe cases, specialized therapies may be used to stimulate hair growth.

1. Hair transplant — Hair transplantation restores hair by moving existing hair from one part of the scalp (the "donor area") to the balding regions (the "recipient areas"). It is effective at hiding bald spots, and the new hair growth looks natural.

2. Laser therapy — Hair regrowth can be stimulated by laser therapy, which uses low-level lasers to heat the skin and stimulate cells. When performed by a trained cosmetic specialist, this minimally painful procedure has few reported adverse effects.

3. Microneedling — Microneedling is a cosmetic treatment that inflicts tiny, controlled injuries to the scalp using a device with very fine needles. This puts the body into repair mode, resulting in increased collagen production and revitalized hair follicles.

4. Scalp reduction — Like other invasive treatments for hair loss, scalp reduction removes skin from the scalp.
This surgical procedure removes bald areas of the scalp; the adjacent healthy, hair-bearing scalp is then stretched and realigned. The result can significantly conceal thinning hair or bald spots, although it may leave some minor scarring in the long run.

5. Topical serums — Doctors may also advise the use of topical serums as part of a tailored hair-loss treatment. To maintain healthy hair follicles and promote new cell growth, many people use the PEP Factor serum, which includes peptides, vitamins and minerals, and growth factors. PEP Factor products are additionally useful for treating skin issues like acne.

6. Platelet-rich plasma (PRP) injections — Injecting platelet-rich plasma (derived from the patient's own blood) into the scalp may stimulate new hair growth and preserve existing hair through the release of growth factors.

Diverse techniques — Treatments for testosterone-related hair loss are plentiful. To make an accurate diagnosis and treat testosterone-related hair loss, a dermatologist will need to take a thorough look at your scalp and medical history. Both low and high levels of zinc have been linked to pattern hair loss: an overabundance of zinc encourages the body to generate more testosterone, and with it hair loss. The right medications can stop zinc and testosterone imbalances from causing hair loss.

To Sum Up

Both men and women are susceptible to balding when testosterone levels fluctuate. Baldness in both men and women can be traced back to dihydrotestosterone, the hormone produced from testosterone. Some people's genes make them more vulnerable to the effects of this hormone, while others may naturally produce more of it. Medications that prevent DHT from binding at the hair follicles are effective in halting hair loss.

About the author: Sreyashi — a research enthusiast who aspires to learn about people, especially women's health and female athletics, and to put that knowledge to good use. Her main skills are nutrition counseling, research, training, and writing.
sodium carbonate chemically

Chemically, the utility of sodium carbonate is largely as a cleaning agent. Commonly known as washing soda or soda ash, it is a water softener and helps laundry detergents lather more effectively, particularly when the water in which the laundry is being washed is hard, meaning that it contains magnesium or calcium salts.

Sodium Carbonate NZ Suppliers — Sodium carbonate, also known as washing soda, soda ash and soda crystals, is a sodium salt of carbonic acid, soluble in water. CAS number 497-19-8 (anhydrous), 5968-11-6 (monohydrate), 6132-02-1 (decahydrate); formula Na2CO3. Where to buy sodium carbonate in NZ.

Bicarbonate Sodium, Sigma-Aldrich — Search results for bicarbonate sodium at Sigma-Aldrich. Summary: the protein encoded by this gene is a membrane protein that functions to transport sodium and bicarbonate ions across the cell membrane.

Sodium Bicarbonate, DrugBank Online — Sodium bicarbonate acts as an antacid and reacts chemically to neutralize or buffer existing quantities of stomach acid, but has no direct effect on its output. This action raises the pH of the stomach contents, providing relief from hyperacidity symptoms.

Washing Soda: Properties, Benefits and Preparation — The chemical name of washing soda is sodium carbonate; chemically, soda ash is a hydrated salt of sodium carbonate. The steps involved in the Solvay-process manufacture of sodium carbonate are: purification of brine, formation of sodium hydrogen carbonate, formation of sodium carbonate, and recovery of ammonia.

Conditioning Water Chemically: Sodium Carbonate — Conditioning water chemically; available as a free PDF or text download, or to read online.

Difference Between Sodium Carbonate and Sodium Bicarbonate — Both sodium carbonate and sodium bicarbonate are compounds with the same base, sodium. Both substances appear as white or silvery powders and have many applications. Both compounds are alkaline (basic) and classified as ionic compounds. Sodium carbonate is popularly known as soda ash or washing soda and has the chemical formula Na2CO3.

Titration of Hydrochloric Acid Against Standard Sodium Carbonate — Add M/10 sodium carbonate solution to the titration flask until the colour changes to light pink. Note the final reading and find the volume of sodium carbonate solution used to neutralize the HCl solution. Repeat the experiment until you get concordant readings. Observations: volume of HCl solution 10 cm3; volume of sodium carbonate solution as measured.

Does Sodium Bicarbonate Chemically React with HCl? — The carbonate ion is the conjugate base of a diprotic acid. If you react an equal number of moles of hydrochloric acid and sodium carbonate, the carbonate will only be partially neutralized.

Sodium Carbonate (Soda Ash, Washing Soda) — Sodium carbonate is the chemical name for soda ash and washing soda. A major source of soda ash is trona ore. Sodium carbonate occurs naturally in arid regions, found in deposits at locations where lakes have evaporated.

Sodium Bicarbonate, Britannica — Sodium bicarbonate, also called sodium hydrogen carbonate or bicarbonate of soda (NaHCO3), is a source of carbon dioxide and so is used as an ingredient in baking powders, in effervescent salts and beverages, and as the main constituent of dry-chemical fire extinguishers. Its slight alkalinity has further applications.

Sodium Carbonate Registration Dossier, ECHA — Flammable gases and chemically unstable gases: reason for no classification — data conclusive but not sufficient for classification. Aerosols, oxidising gases, and gases under pressure: likewise, data conclusive but not sufficient for classification.

Electrochemistry: Sodium Sulfate or Sodium Carbonate — If you electrolyze a solution of sodium carbonate it will be fine at the beginning; later on, problems may appear around the anode, where O2 is released. Some acid (H+) is produced in solution according to the half-equation 2 H2O → 4 H+ + O2 + 4 e−, and these acidic H+ ions react with the carbonate ions CO3^2−.

What Sodium Carbonate Can Do for You — Baking soda is sodium bicarbonate. When you raise the temperature above 200 °F it changes chemically, giving off carbon dioxide and water; the end result is sodium carbonate. You can see this change if you weigh the powder before and after — there should be roughly a one-third reduction in weight.

Sodium Bicarbonate (NaHCO3), PubChem — In dry air sodium bicarbonate does not break down; in water it breaks down into carbon dioxide and sodium carbonate. Risk: sodium bicarbonate is a generally-regarded-as-safe (GRAS) chemical at levels found in consumer products and has a low risk of toxicity.

Thermal Decomposition of Sodium Carbonate — The salting-out effect of sodium carbonate (Na2CO3) on the liquid–liquid equilibrium (LLE) of the ternary system of water + dichloromethane + N,N-dimethylacetamide (DMAc) was studied.

Sodium Carbonate — Sodium carbonate, also known as washing soda, is a sodium salt of carbonic acid with the general formula Na2CO3. It is commonly referred to as soda ash because it was originally obtained from the ashes of burnt seaweed.

Sodium Carbonate, Digitalfire — Sodium carbonate is also used for thinning glaze slurries. Soda ash is not normally used as a source of Na2O in glazes because it is soluble; however, it is a key source of sodium in frits and glass. Its solubility makes it an ideal flux for Egyptian paste glazes.

Chemical Industry: Sodium Carbonate — The production of hydrochloric acid and secondary chemicals; the production of fluoride and chemical fertilizers; circulation of reaction liquid in gas absorption towers; grease extraction.

Learn Sodium Carbonate: Preparation, Properties and Uses — Chemically, washing soda is the hydrated form of sodium carbonate; its chemical formula is Na2CO3·10H2O. The common preparation method is the Solvay process, an industrial process for obtaining sodium carbonate from ammonia, limestone and brine.

Sodium Bicarbonate, Thermo Fisher Scientific — Sodium bicarbonate, or sodium hydrogen carbonate, is the chemical compound with the formula NaHCO3. It is a white crystalline solid that often appears as a fine powder, and it is capable of reacting chemically either as an acid or as a base (amphoteric).

Material Safety Data Sheet: Sodium Carbonate — MSDS, sodium carbonate anhydrous, July 2005. Skin: prolonged contact may cause skin irritation (red, dry, cracked skin). Eyes: irritating to the eyes. Ingestion: although low in toxicity, ingestion may cause nausea, vomiting, stomach ache and diarrhea. Inhalation: prolonged inhalation of product dusts may irritate the nose, throat and lungs.

Scientific Opinion on the Safety and Efficacy of Sodium Carbonate — The additive is a chemically synthesised sodium carbonate. This product is currently authorised for use in dogs and cats (E 500i; authorisation to expire on 7 November 2010). Terms of reference: according to Article 8 of Regulation (EC) No 1831/2003, EFSA shall determine whether the feed additive is safe.

Sodium Hydrogencarbonate: An Overview — Alkaloids can be extracted under neutral or basic conditions after basification of the plant material or biofluid to pH 7–9 with ammonia, sodium carbonate or sodium hydrogencarbonate, as free base, with organic solvents (e.g. dichloromethane, chloroform, ethers, ethyl acetate, alcohols).

Washing Soda vs Baking Soda: the Difference — Washing soda is Na2CO3, sodium carbonate. It is naturally derived from the ashes of plants grown in sodium-rich soils, but can also be made synthetically from limestone and salt through the ammonia-soda (Solvay) process. It is also called soda ash in some areas. Washing soda is a white, coarse, odorless powder with a pH level of 11.

Sodium Carbonate, Wikipedia — Sodium carbonate (Na2CO3), also known as washing soda, soda ash and soda crystals, is the inorganic compound with the formula Na2CO3 and its various hydrates. All forms are white, water-soluble salts that yield moderately alkaline solutions in water. Historically it was extracted from the ashes of plants growing in sodium-rich soils.

Sodium Carbonate, Grade USNF/CP (Chemically Pure) — Rishi Chemical Works Private Limited, offering sodium carbonate grade USNF/CP (chemically pure) in Kolkata, West Bengal.

Influence Of Sodium
Carbonate On Coke Reactivity Influence Of Sodium Carbonate On Coke Reactivity 197211ensp enspinfluence of sodium carbonate on coke reactivity j w patrick and f h shaw british coke research association chesterfield derbyshire received 15 march 1971 the objective of the investigation described was to establish the basic nature of the underlying processes involved in the catalytic activation of coke by the addition of sodium carbonate to coal before carbonization Chemical Equation For The Presence Of Sodium Chemical Equation For The Presence Of Sodium A sodium chloride b sodium hydrogen carbonate c calcium chloride d sodium carbonate e calcium hydroxide hint it is also used in baking industry explanation baking soda is chemically known as sodium hydrogen carbonate and is used in antacid preparation Stability Of Metal Carbonates Limestone Gcse Stability Of Metal Carbonates Limestone Gcse 20201230ensp enspthe difficulty of this decomposition reaction depends on the reactivity of the metal in the metal carbonate if we take the two examples above sodium is a very reactive metal this means that Is Sodium Bicarbonate Nahco3 Organic Or Is Sodium Bicarbonate Nahco3 Organic Or It is a misnomer the correct term is sodium hydrogen carbonate the name probably comes from calcium bicarbonate resulting in the hydrogen carbonate anion to be called the bicarbonate anion it is definitely inorganic all definitions of what Latest News
vue-router

Copyright notice: this is an original post by the author; do not reproduce without permission. https://blog.csdn.net/haochangdi123/article/details/80338550

Official docs: https://router.vuejs.org/zh-cn/

Since we usually build the project skeleton with vue + webpack (for example vue-cli), this article uses vue-cli throughout.

1. Introduction to routing

What is front-end routing?
1. Routing shows different pages or content depending on the URL.
2. Front-end routing moves the task of mapping URLs to content or pages into the front end; previously the server returned different content for different URLs.

When to use front-end routing
1. In single-page applications, where most of the page structure stays the same and only the content changes.

Advantages of front-end routing
1. Better user experience: content does not have to be fetched in full from the server every time, so it is shown to the user quickly.

Disadvantages of front-end routing
1. Bad for SEO.
2. The browser's back and forward buttons re-send requests, so the cache is not used effectively.
3. A single-page application cannot remember the scroll position of the previous page. If a user scrolls page A down one screen, jumps to B, then goes back to A, page A starts from the top rather than at the position one screen down.

2. Getting started with vue-router

Building single-page applications with Vue.js + vue-router is very common. vue-router is used to set up routing. "Routing" here does not mean a hardware router; it is the path manager of an SPA (single-page application). Put simply, vue-router is the link-path management system of our web app.

2.1 Installation

npm install vue-router --save-dev

2.2 Reading the router configuration file

// Import Vue
import Vue from 'vue';
// Import vue-router
import Router from 'vue-router';
// Import the HelloWorld.vue component under the root directory
import HelloWorld from '@/components/HelloWorld';
// Register Router globally in Vue
Vue.use(Router);
export default new Router({
  // Route configuration: an array
  routes: [
    {
      // Each route is an object
      path: '/',            // link path
      name: 'HelloWorld',   // route name; usable in router-link and router.push, it stands in for the component
      component: HelloWorld // the component template to render
    }
  ]
});

(screenshot)

You can see that http://localhost:8080/#/ is our root '/' page.

2.3 Adding a new page

1. Create a first.vue file in the src/components directory.
2. Import the first component at the top of router/index.js.
3. Add the route: append a new object to the routes[] array in router/index.js.

import Vue from 'vue';
import Router from 'vue-router';
import HelloWorld from '@/components/HelloWorld';
import First from '@/components/first';
Vue.use(Router);
export default new Router({
  routes: [
    { path: '/', name: 'HelloWorld', component: HelloWorld },
    { path: '/first', name: 'First', component: First }
  ]
});

Result: (screenshot)

You can see that http://localhost:8080/#/first is our '/first' page.

2.4 Removing the '#' from the path

Vue paths contain a '#', for example http://localhost:8080/#/first. This is because vue-router defaults to hash mode: it uses the URL hash to simulate a full URL, so the page does not reload when the URL changes. If you don't want the ugly hash, you can use the router's history mode, which uses the history.pushState API to change the URL without reloading the page.

import Vue from 'vue';
import Router from 'vue-router';
import First from '@/components/first';
Vue.use(Router);
export default new Router({
  // switch the mode
  mode: 'history',
  routes: [
    { path: '/first', name: 'First', component: First }
  ]
});

With mode: 'history', http://localhost:8080/#/first is no longer reachable; use http://localhost:8080/first instead.

2.5 Understanding $router and $route

2.5.1 $router

1. Inside a Vue instance, we can access the router from any component via this.$router.
2. We use this.$router because we don't want to import the router in every component that needs it.
3. Through this.$router we can operate on the browser history.

Printing this.$router in a component:

methods: {
  go() {
    console.log(this.$router);
  }
},

Result: (screenshot)

For what it can do, see section 6, "Programmatic navigation".

2.5.2 $route: accessing the current route

Printing $route in a component:

methods: {
  go() {
    console.log(this.$route);
  }
},

Result: (screenshot)

We can read $route.params and $route.query to get the current route's information. Setting route parameters is covered later in this article.

2.5.2.1 $route.params

Once params are declared in a route, they are part of the path. If you navigate without passing them, navigation fails or the page is empty. For example, with the route

{ path: '/first/:from/:age', name: 'First', component: First }

:from and :age are placeholders; we pass actual values and read them via this.$route.params.[name]. (screenshot)

If a parameter is missing, e.g. http://localhost:8080/#/first/one, the page cannot be reached. If params are not declared in the route, simply appending them to the URL does not work either.

2.5.2.2 $route.query

query parameters are appended to the URL as a query string; it is fine to omit them. For example:

var c = 'china';
this.$router.push('/first?from=' + c);

Read them via this.$route.query.[name].

3. Dynamic routes

We often need to map all routes matching a pattern to the same component. For example, a User component should render for every user, whatever the ID. Declare the parameter with a colon in the route configuration file; dynamic path segments start with a colon.

3.1 Setup

import Vue from 'vue';
import Router from 'vue-router';
import User from '@/components/User';
Vue.use(Router);
export default new Router({
  routes: [
    {
      // dynamic segment
      path: '/user/:id',
      name: 'User',
      component: User
    }
  ]
});

Now /user/foo and /user/bar both map to the same route.

3.2 Reading the parameter

this.$route.params.id

3.3 Multiple path parameters

A route can contain several path segments; every value is set on $route.params. (screenshot)

4. Navigation with router-link

router-link is better than a hard-coded a href="…" because:
1. It behaves the same in HTML5 history mode and hash mode, so if you switch modes, or fall back to hash mode in IE9, nothing has to change.
2. In HTML5 history mode, router-link guards the click event so the browser does not reload the page.
3. When you use the base option in HTML5 history mode, to attributes don't need to include the base path.

4.1 The to prop

For example:

<!-- Use the router-link component for navigation. -->
<!-- <router-link> renders as an `<a>` tag by default -->
<router-link to="/foo">Go to Foo</router-link>

• to: the navigation path; fill in the path value you configured in router/index.js. To navigate to the default home page, just write to="/".
• [display text]: the navigation label shown to the user, e.g. "Home" or "News".

When clicked, the value of to is passed to router.push() internally; a plain string denotes a path:

<!-- string -->
<router-link to="home">Home</router-link>
<!-- renders as -->
<a href="home">Home</a>

<!-- a JS expression with v-bind -->
<router-link v-bind:to="'home'">Home</router-link>

<!-- the v-bind shorthand works too, as with any other attribute -->
<router-link :to="'home'">Home</router-link>

<!-- same as above -->
<router-link :to="{ path: 'home' }">Home</router-link>

<!-- named route -->
<router-link :to="{ name: 'user', params: { userId: 123 }}">User</router-link>

<!-- with query parameters; the result is /register?plan=private -->
<router-link :to="{ path: 'register', query: { plan: 'private' }}">Register</router-link>

If router-link's to points at a path, params are ignored; the path stays /user, not /user/123:

<router-link :to="{ path: 'user', params: { userId: 123 }}">User</router-link>

For example, with the component registered in the route configuration file:

{ path: '/first', name: 'FirstVue', component: First }

navigate with:

<router-link :to="{ name: 'FirstVue', params: { userId: 123 }}">FirstVue</router-link>

4.2 The replace prop

With the replace prop set, clicking calls router.replace() instead of router.push(), so the navigation leaves no history entry.

<router-link :to="{ path: '/abc'}" replace></router-link>

4.3 The append prop

With the append prop set, the current (relative) path is prepended. For example, navigating from /a to the relative path b gives /b without append and /a/b with it.

<router-link :to="{ path: 'relative/path'}" append></router-link>

4.4 The tag prop

Sometimes you want router-link to render as another tag, e.g. li. Specify the tag with the tag prop; it still listens for clicks and triggers navigation.

<router-link to="/foo" tag="li">foo</router-link>
<!-- renders as -->
<li>foo</li>

4.5 The automatic active class

When a router-link is activated, the class router-link-active is added automatically; this is the CSS class applied to active links. The default class name can be configured globally via the router's linkActiveClass constructor option.

4.6 Applying the active class to an outer element

Sometimes we want the active class on an outer element rather than on the a tag itself. Render the outer element with router-link, wrapping an inner native a tag:

<router-link tag="li" to="/foo">
  <a>/foo</a>
</router-link>

Here the a is the actual link (and gets the correct href), while the active CSS class goes on the outer li.

5. Nested routes

That is, routes nested inside routes.

1. Create two component templates, first-one.vue and first-two.vue.
2. Turn first.vue into a generic layout by adding a router-view tag, which provides the mount point for child templates; first-one.vue and first-two.vue become sub-pages of first.vue.
3. Configure them in the route configuration file.

first.vue:

<p>Navigation:
  <router-link to="/first/one">one page</router-link>
  <router-link to="/first/two">two page</router-link>
</p>
<!-- route outlet -->
<!-- matched child components render here -->
<router-view></router-view>

Note that the router-link to must be the full real path here, not just /[name].

Router configuration file:

import Vue from 'vue';
import Router from 'vue-router';
import HelloWorld from '@/components/HelloWorld';
import First from '@/components/first';
import firstOne from '@/components/first-one';
import firstTwo from '@/components/first-two';
Vue.use(Router);
export default new Router({
  routes: [
    { path: '/', name: 'HelloWorld', component: HelloWorld },
    {
      path: '/first',
      name: 'First',
      component: First,
      // child routes
      children: [
        {
          // no leading '/', or it becomes a root path
          path: 'one',
          name: 'firstOne',
          component: firstOne
        },
        {
          path: 'two',
          name: 'firstTwo',
          component: firstTwo
        }
      ]
    }
  ]
});

In short, add a children array under the parent route, and do not prefix child paths with '/', or they become root paths. Child paths automatically inherit the parent path, e.g. http://localhost:8080/#/first/one (screenshot) and http://localhost:8080/#/first/two (screenshot).

6. Programmatic navigation (using $router)

For $router itself, see section 2.5.1.

6.1 router.push

To navigate to a different URL, use router.push. It pushes a new entry onto the history stack, so when the user clicks the browser back button, they return to the previous URL. (screenshot)

router.push(location, onComplete?, onAbort?)

The argument can be a string path or a location descriptor object. For example:

// string (goes straight to the path /home)
router.push('home')

// object
router.push({ path: 'home' })

// named route; name is the name you gave the route in the configuration
router.push({ name: 'user', params: { userId: 123 }})

// with query parameters, resulting in /register?plan=private
router.push({ path: 'register', query: { plan: 'private' }})

Note: if path is provided, params are ignored (query, as in the example above, is not affected). Instead, provide the route's name, or write the full path including the parameters:

const userId = 123
router.push({ name: 'user', params: { userId }}) // -> /user/123
router.push({ path: `/user/${userId}` }) // -> /user/123
// params do not take effect here
router.push({ path: '/user', params: { userId }}) // -> /user

For example, with the component registered as:

{ path: '/first', name: 'FirstVue', component: First }

click to navigate:

<button @click="go"></button>

methods: {
  go() {
    this.$router.push({ name: 'FirstVue', params: { userId: 123 } });
  }
},

6.2 router.replace

Just like router.push, except that it does not add a new history entry; as its name suggests, it replaces the current history entry. (screenshot)

6.3 router.go(n)

Takes an integer: go forward or back that many steps in the history, like window.history.go(n).

// go forward one entry, same as history.forward()
router.go(1)
// go back one entry, same as history.back()
router.go(-1)
// go forward 3 entries
router.go(3)
// if there aren't enough history entries, it just fails silently
router.go(-100)
router.go(100)

7. Named views (rarely used in practice)

Sometimes you want to display several views at the same level at the same time, rather than nesting them: for example a layout with a sidebar and a main content view. Each view renders one component, so a single route with multiple views needs multiple components. Make sure to use the components option (with an s):

import Vue from 'vue';
import Router from 'vue-router';
import HelloWorld from '@/components/HelloWorld';
import Left from '@/components/left';
import Right from '@/components/right';
Vue.use(Router);
export default new Router({
  mode: 'history',
  routes: [
    {
      path: '/',
      name: 'welcom',
      components: {
        default: HelloWorld,
        letVue: Left,
        rightVue: Right,
      }
    }
  ]
});

The page then has several individually named views instead of a single outlet. A router-view without a name defaults to default.

<router-view></router-view>
<router-view name="letVue"></router-view>
<router-view name="rightVue"></router-view>

8. Redirects and aliases

8.1 Redirect

A redirect means that when the user visits /a, the URL is replaced with /b and the route matched is /b. In the route configuration file (/src/router/index.js), replace the original component with a redirect parameter. In the example below, /hello redirects to the root; the redirect target can also be a named route, as with /addiress redirecting to firstVue.

export default new Router({
  routes: [
    { path: '/first', name: 'firstVue', component: first },
    { path: '/hello', redirect: '/' },
    {
      path: '/addiress',
      redirect: { name: 'firstVue' }
      // or
      // redirect: '/first'
    }
  ]
})

8.2 Alias

An alias of /a as /b means that when the user visits /b, the URL stays /b but the route matched is /a, as if the user had visited /a.

export default new Router({
  routes: [
    {
      path: '/first',
      name: 'FirstVue',
      component: First,
      alias: '/hcd'
    }
  ]
});

With alias: '/hcd', http://localhost:8080/#/hcd and http://localhost:8080/#/first show the same page. (screenshots)

9. Setting up a 404 page

import Vue from 'vue';
import Router from 'vue-router';
import ErrorVue from '@/components/ErrorVue';
Vue.use(Router);
export default new Router({
  routes: [
    { path: '*', name: 'Error', component: ErrorVue }
  ]
});

Setting path to * matches every page that is not found.

10. Hooks in routes

10.1 Hook functions in the route configuration file

Here only beforeEnter can be used; it fires when the path is loaded. Three arguments:
1. to: route info of the target path, wrapped in an object.
2. from: route info of the path being left, also an object.
3. next: controls the navigation; commonly next(true) and next(false).

import Vue from 'vue';
import Router from 'vue-router';
import HelloWorld from '@/components/HelloWorld';
import First from '@/components/first';
Vue.use(Router);
export default new Router({
  routes: [
    { path: '/', name: 'HelloWorld', component: HelloWorld },
    {
      path: '/first',
      name: 'FirstVue',
      component: First,
      beforeEnter: (to, from, next) => {
        console.log('entered the first page');
        console.log('to:', to);
        console.log('from:', from);
        next();
      }
    }
  ]
});

If we navigate from '/' to '/first': (screenshot)

10.2 Hook functions in component templates

Two hooks are available in templates:
beforeRouteEnter: called before the route is entered.
beforeRouteLeave: called before the route is left.
The arguments are the same as for beforeEnter.

export default {
  name: 'params',
  data () {
    return {
      msg: 'params page'
    }
  },
  beforeRouteEnter: (to, from, next) => {
    console.log("about to enter the route's template");
    next();
  },
  beforeRouteLeave: (to, from, next) => {
    console.log("about to leave the route's template");
    next();
  }
}
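The next() control flow used by these hooks can be modeled in plain JavaScript. The following is a conceptual sketch only, not vue-router's actual implementation: runGuards, requireAuth, and the meta fields are all illustrative names, and real vue-router guards also run asynchronously.

```javascript
// Toy model of guard chaining: each guard receives (to, from, next).
// Calling next() continues to the following guard; next(false) aborts.
function runGuards(guards, to, from) {
  let allowed = true;
  for (const guard of guards) {
    let decided = false;
    let proceed = false;
    guard(to, from, (ok = true) => {
      decided = true;
      proceed = ok !== false;
    });
    if (!decided || !proceed) { // guard aborted or never called next()
      allowed = false;
      break;
    }
  }
  return allowed; // true => navigation is confirmed
}

// Example guard: a login check, in the spirit of the beforeEnter hook above
const requireAuth = (to, from, next) => {
  if (to.meta && to.meta.needsLogin && !to.meta.loggedIn) next(false);
  else next();
};

console.log(runGuards([requireAuth], { path: '/first', meta: {} }, { path: '/' }));        // true
console.log(runGuards([requireAuth], { path: '/admin', meta: { needsLogin: true } }, {})); // false
```

The key point the sketch illustrates is that navigation is only confirmed once every guard in the chain has called next() without a false argument; forgetting to call next() blocks the navigation.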
Observation of Quantum Size Effect from Silicon Nanowall

Abstract

We developed a fabrication technique of very thin silicon nanowall structures. The minimum width of the fabricated silicon nanowall structures was about 3 nm. This thinnest region of the silicon nanowall structures was investigated by using cathode luminescence and ultraviolet photoelectron spectroscopy (UPS). The UPS measurements revealed that the density of states (DOS) of the thinnest region showed a stepwise shape which is completely different from that of bulk Si. Theoretical analysis clearly demonstrated that this change of the DOS shape was due to the quantum size effect.

Background

Multi-junction solar cells consisting of materials with different band gaps are one of the options to overcome the conversion efficiency limit of single-junction solar cells [1]. Crystalline silicon (Si) is the most promising material for the bottom cell of a tandem solar cell. Recently, materials for the top cell have been widely studied [2, 3]. Si nanowires and nanowalls are among the options for the top cell material. The band gaps of Si nanowires and nanowalls can be varied by changing their diameter or width owing to the quantum size effect [4], so there is potential for high-efficiency all-Si tandem solar cells. In previous research, Si nanowires were mainly used as light-trapping structures in Si-based solar cells. In this case, the size of the Si nanowires was in the micrometer or submicrometer range, which corresponds to the wavelength of visible and infrared light [5–10]. In order to apply nanostructured Si to the top cell of all-Si tandem solar cells, it is important to reduce the size to less than 5 nm [11] to utilize the quantum size effect. Therefore, techniques to fabricate extremely thin Si nanowires or nanowalls are important to realize all-Si tandem solar cells.
Fabrication processes of nanostructured Si (Si nanowire or Si nanowall) are roughly divided into two types, top-down and bottom-up, i.e., etching of bulk Si [12–16] and growing Si nanowires on a substrate [17]. The advantage of the top-down process is easy control of the direction of the nanostructured Si. The starting material of this method is a Si wafer; therefore, the material quality is also high enough. The typical top-down process consists of mask patterning and anisotropic etching. The arrangement of nanostructured Si can be controlled by mask patterning. By combining mask patterning, e.g., nanoimprint and photolithography, with anisotropic etching, e.g., metal-assisted chemical etching (MACE) [18–20] and reactive ion etching (RIE), various processes are selectable. We have developed a device integration process for Si nanowires with a diameter of 30 nm using a silica nanoparticle dispersion and MACE [21], and confirmed the photovoltaic power generation of an axial-junction Si nanowire solar cell [22]. However, the diameter of the Si nanowires was not thin enough to utilize the quantum size effect. In this work, we succeeded in fabricating a very thin Si nanowall by the combination of an etching process and a slimming process using thermal oxidation. The minimum width of the Si nanowall was 3 nm. We also carried out an investigation to confirm the quantum size effect of the Si nanowall. A Si nanowall confines the carriers in only one dimension; therefore, a smaller size is required to utilize the quantum size effect than for a Si nanowire. This is one of the disadvantages of Si nanowalls; however, a Si nanowall is much stronger than a Si nanowire from the viewpoint of mechanical strength. In addition, the light absorption of a Si nanowall is greater than that of a Si nanowire [23]. Therefore, it is important to confirm the quantum size effect of the Si nanowall.
In previous works, photoluminescence (PL) and scanning tunneling spectroscopy (STS) were used for confirming the quantum size effect of nanostructured Si. The PL method can measure the band gap and has been used for the analysis of nanodot [24] and nanoporous structures [25]. The PL measurement includes undesirable signals such as signals from interface defects and requires a high density of nanostructured Si to detect signals related to the quantum size effect. The STS method can measure the local density of states (DOS) and has been used for the analysis of a single Si nanowire [26]. However, it requires an atomically flat measurement surface and is difficult to apply to Si nanowires and nanowalls standing vertical to the substrate. Therefore, we investigated our Si nanowall by using cathode luminescence (CL) and ultraviolet photoelectron spectroscopy (UPS).

Methods

Si nanowall was prepared by an etching process using photolithography and RIE. A line-and-space resist pattern with a half-pitch of 55 nm was formed on a p-type single-crystalline Si wafer, and it was etched into a Si nanowall. This Si nanowall has a tapered shape, and the width varied along the height direction due to the side etching during the RIE process. The width of the tips was about 20 nm. This tapered Si nanowall was slimmed by thermal oxidation. Figure 1 (a) shows the cross-sectional transmission electron microscope (TEM) images of a slimmed Si nanowall. A SiO2 layer covered the thin Si cores. The slimmed Si nanowall also has a tapered shape because of the initial tapered shape. The thinnest region was located slightly below the tips since the oxidation of the tips was limited by internal stress induced in the oxide layer [27]. As shown in Fig. 1 (b), an untapered region was formed in the thinnest region of the Si nanowall. The width of this region is thin enough for the quantum size effect. Therefore, we investigated this sample by using CL and UPS in order to confirm the quantum size effect.

Fig. 1 Cross-sectional TEM images of Si nanowall after thermal oxidation: a whole image, b magnified image of the thinnest region. A square in Fig. 1(a) shows the magnified area

Results and discussions

Figure 2 shows the results of the CL measurements. An electron beam with an acceleration voltage of 20 kV was irradiated onto the tip and center of the Si nanowall from the cross-sectional surface. The measurement temperature was 37 K. Similar spectra were obtained from the tip and center of the Si nanowall. The peaks at 1130 and 1170 nm correspond to the phonon-assisted band-to-band emission of Si [28]. This can be interpreted as follows. The injected electrons near the tip immediately diffused toward the bottom of the Si nanowall, and the emission occurred in all regions of the Si nanowall even if the electron beam was irradiated only onto the tip. In this situation, the emission from the thick region was superimposed on the emission from the tip [29]. The broad peak at around 660 nm observed from both the tip and the center was assigned to emission related to defects in the oxide layer [30, 31]. Comparing the two spectra, we could not find a clear difference. This means that it is difficult to detect the quantum size effect by using the CL measurements. The emission signal from the thinnest region of the Si nanowall is very weak since the volume of the thinnest region is very small. In this case, the emission signal from the thick region caused by the electron diffusion obscures the signal from the thinnest region. Therefore, signals from the thick region and the oxide layer have to be excluded to detect the signal from the thinnest region.

Fig. 2 CL spectra of Si nanowall at 37 K. Tip and center of Si nanowall were irradiated by electron beam

In order to confirm the quantum size effect, we also analyzed the slimmed Si nanowall by UPS. A helium discharge tube was used as the light source, and UV light with an energy of 40.8 eV was irradiated onto the tips of the Si nanowall.
The kinetic energy of the photoelectrons emitted from a sample is influenced by the work function and the binding energy. Therefore, an UPS spectrum reflects the density of states in the valence band [32]. The most important advantage of UPS is its high surface sensitivity. The maximum kinetic energy of electrons in this measurement is 40.8 eV, which corresponds to a mean free path of electrons of less than 1 nm [33]. This indicates that UPS can only measure the DOS of the surface of the sample. Therefore, we can selectively detect the UPS signal of the tips of the Si nanowall if we can prepare a sample in which the tips are located at the surface. Figure 3 shows the sample fabrication process. The spaces in the Si nanowall were filled with Al2O3 deposited by using atomic layer deposition (ALD). The tips of the Si nanowall were bared by chemical mechanical polishing (CMP). Then, the SiO2 and Al2O3 layers were removed by 5% HF etching. Figure 4 shows the scanning electron microscope (SEM) image of the bared Si nanowall. It was confirmed that the Si nanowall was standing independently, and the width of the tip is 3 nm. This width corresponds to a theoretical band gap expansion of about 0.2 eV [11]. Just after the HF etching, the surface of the Si nanowall was terminated by hydrogen; hence, the quantum size effect can be expected to be confirmed.

Fig. 3 Sample fabrication process for UPS measurement

Fig. 4 Overhead SEM image of bared Si nanowalls after HF etching. The Si nanowalls have a tapered shape as shown in Fig. 1. Sharp and clear regions thinner than 10 nm are tips of Si nanowall. The blurred region around the tips corresponds to the bottom region of the nanowall

Figure 5 shows the results of the UPS measurements. We prepared three types of samples, namely a slimmed Si nanowall with a 3-nm width, an unslimmed one with a 20-nm width, and bulk Si. The value of the vertical axis, counts, reflects the DOS.
The onset of the increase in the counts near a kinetic energy of 36 eV corresponds to the upper end of the valence band, E_V. However, a shift of the E_V edge was not observed. This is probably due to the tapered shape of the Si nanowall, as shown in Fig. 2. The surface sensitivity of UPS was less than 1 nm. However, the incident ray was at an angle of 50° to the Si nanowall. The thick region of the Si nanowall below the tips was irradiated with the ultraviolet ray from the sidewall as well as the tips, and photoelectrons were emitted. In this case, UPS signals from the tips and the thicker region are simultaneously detected, and the shift of the E_V edge was not observed.

Fig. 5 UPS measurement results of Si nanowall with different widths: 3 and 20 nm and bulk. The incidence angle was 50°. The counts values were normalized at the maximum values

However, a characteristic DOS structure of the 3-nm-width Si nanowall was confirmed. The DOS structure of the 20-nm-width Si nanowall was similar to that of bulk Si, whereas the DOS structure changed into a stepwise shape when the thickness of the Si nanowall was 3 nm. We also investigated the change of the DOS structure in order to clarify the quantum size effect. In the case of a quantum well, it is known that the DOS increases stepwise at each quantum level [34]. The quantum levels can be calculated by

$$ {\varepsilon}_n=\frac{{\left(\hslash \pi n\right)}^2}{2{m}^{*}{L}^2} $$ (1)

where ε_n is the quantum level, ħ is the reduced Planck constant, m* is the effective mass of the hole, L is the width of the quantum well, and n is the quantum number. In the case of the valence band, the quantum levels appear at ε_n below E_V. The calculated quantum levels are added in Fig. 5. In this calculation, the effective mass of bulk Si and the width of the Si nanowall were used for m* and L, respectively. The Fermi level E_F of the electrode was 36.45 eV, which coincided with the E_F of the Si nanowall.
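To make the scale of these levels concrete, the first quantum level can be worked out from Eq. (1) for L = 3 nm. Note that the specific value m* ≈ 0.49 m_0 (the Si heavy-hole mass) is an assumption here; the text states only that the bulk effective mass was used:

$$ \varepsilon_1 = \frac{(\hbar\pi)^2}{2m^{*}L^{2}} = \frac{\left(1.055\times10^{-34}\,\mathrm{J\,s}\times\pi\right)^2}{2\times\left(0.49\times9.11\times10^{-31}\,\mathrm{kg}\right)\times\left(3\times10^{-9}\,\mathrm{m}\right)^2} \approx 1.37\times10^{-20}\,\mathrm{J} \approx 0.085\,\mathrm{eV} $$

Since the levels scale as ε_n = n²ε_1, steps at n = 7, 8, and 9 would then lie roughly 4.2, 5.5, and 6.9 eV below E_V, i.e., well inside the valence band probed by UPS.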
The measured sample was p-type Si, so the energy level of E_V exists between 36.45 and 35.89 eV. We assumed E_V to be 36.0 eV, which is the onset of the increase in the counts. As shown in Fig. 5, the onset of each step of the DOS corresponds to the quantum levels with n = 7, 8, and 9. Strictly speaking, the calculated quantum levels were slightly smaller than the onset levels. Considering the relationship between E_V and E_F, the E_V of 36.0 eV may be slightly underestimated. The quantum levels with small n values were not observed. In the region with small n values, the intervals between the quantum levels become small. The quantum levels vary with the thickness of the Si nanowall; therefore, near the band edge, they were buried in the signal from the thick region because of the tapered shape. Figure 6 shows the measurement results with UV incidence angles of 50° and 70°. By irradiating the sample at a shallow angle, the sensitivity to the tips can be increased. When the incidence angle was 70°, a lower peak corresponding to an n value of 6 was observed compared with the measurement at an angle of 50°. Although the DOS structure corresponding to small n values was not observed, the band gap widening can be estimated from the quantum levels. The first quantum level gives the energy shift of E_V, and it was calculated as 0.085 eV. Thereby, along with the conduction band shift, the band gap widening can be estimated to be about 0.2 eV. This value corresponds to the theoretical band gap widening of a quantum well with a width of 3 nm.

Fig. 6 UPS measurement results of the Si nanowall with a width of 3 nm at different UV irradiation angles: 50° and 70°. The counts values were normalized at the step with n value of 7

Conclusions

We investigated properties of an extremely thin Si nanowall in which the width of the thinnest region was 3 nm.
We found that CL measurement is not suitable to detect the quantum size effect due to the undesirable luminescence caused by the diffusion of injected electrons and the influence of the oxide layer. We also fabricated a slimmed Si nanowall without the oxide layer and measured it by UPS. When the width of Si nanowall was 3 nm, the change of the DOS structure in the valence band was observed. According to the comparison between the experimental DOS structure and the theoretical quantum levels, we concluded that this change in the DOS is caused by the quantum size effect. References 1. 1. Dimroth F, Grave M, Beutel P, Fiedeler U, Karcher C, Tibbits TND, Oliva E, Siefer S, Schachtner M, Wekkeli A, Bett AW, Krause R, Piccin M, Blanc N, Drazek C, Guiot E, Ghyselen B, Salvetat T, Tauzin A, Signamarcheix T, Dobrich A, Hannappel T, Schwarzburg K (2014) Wafer bonded four-junction GaInP/GaAs//GaInAsP/GaInAs concentrator solar cells with 44.7% efficiency. Prog Photovolt 22:277–282 2. 2. Löper P, Moon SJ, Nicolas SM, Niesen B, Ledinsky M, Nicolay S, Bailat J, Yum JH, Wolf SD, Balli C (2015) Organic–inorganic halide perovskite/crystalline silicon four-terminal tandem solar cells. Phys Chem Chem Phys 17:1619–1629 3. 3. Bertness KA, Kurtz SR, Friedman DJ, Kibbler AE, Kramer C, Olson JM (1994) 29.5%‐efficient GaInP/GaAs tandem solar cells. Appl Phys Lett 65:989–991 4. 4. Priolo F, Gregorkiewicz T, Galli M, Krauss TF (2014) Silicon nanostructures for photonics and photovoltaics. Nature Nanotech 9:19–32 5. 5. Dossou KB, Botten LC, Asatryan AA, Sturmberg BCP, Byrne MA, Poulton CG, McPhedran RC, Sterke CM (2012) Modal formulation for diffraction by absorbing photonic crystal slabs. J Opt Soc Am A 29:817–831 6. 6. Zhang X, Pinion CW, Christesen JD, Flynn CJ, Celano TA, Cahoon JF (2013) Horizontal silicon nanowires with radial p−n junctions: a platform for unconventional solar cells. J Phys Chem Lett 4:2002–2009 7. 7. 
Acknowledgements
A part of this work was performed under management of JST supported by the MEXT FUTURE-PV Innovation. A part of this work was supported by the “Nanotechnology Platform” (project No. 12024046) of the MEXT, Japan.
Authors’ contributions
DK designed the study and wrote the initial draft of the manuscript. SY, MH, and YI contributed to sample fabrication and CL measurement. AT, MT, SM, and MK contributed to the interpretation of the data and reviewed the manuscript. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Author information
Correspondence to Daiji Kanematsu.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Keywords
• Silicon nanowall
• Quantum size effect
• Ultraviolet photoelectron spectroscopy
• Cathodoluminescence
Signs and symptoms of Gastritis: What the doctor says Gastritis is a general term for a group of conditions with one thing in common: inflammation of the lining of the stomach. The inflammation of gastritis is most often the result of infection with the same bacterium that causes most stomach ulcers. Regular use of certain pain relievers and drinking too much alcohol also can contribute to gastritis. Gastritis may occur suddenly (acute gastritis) or appear slowly over time (chronic gastritis). In some cases, gastritis can lead to ulcers and an increased risk of stomach cancer. For most people, however, gastritis isn’t serious and improves quickly with treatment. Symptoms The signs and symptoms of gastritis include: • Gnawing or burning ache or pain (indigestion) in your upper abdomen that may become either worse or better with eating • Nausea • Vomiting • A feeling of fullness in your upper abdomen after eating Gastritis doesn’t always cause signs and symptoms. When to see a doctor Nearly everyone has had a bout of indigestion and stomach irritation. Most cases of indigestion are short-lived and don’t require medical care. See your doctor if you have signs and symptoms of gastritis for a week or longer. Tell your doctor if your stomach discomfort occurs after taking prescription or over-the-counter drugs, especially aspirin or other pain relievers. If you are vomiting blood, have blood in your stools, or have stools that appear black, see your doctor right away to determine the cause. Causes Gastritis is an inflammation of the stomach lining. Weaknesses or injury to the mucus-lined barrier that protects your stomach wall allows your digestive juices to damage and inflame your stomach lining.
A number of diseases and conditions can increase your risk of gastritis, including Crohn’s disease and sarcoidosis, a condition in which collections of inflammatory cells grow in the body. Risk factors Factors that increase your risk of gastritis include: Bacterial infection. Although infection with Helicobacter pylori is among the most common worldwide human infections, only some people with the infection develop gastritis or other upper gastrointestinal disorders. Doctors believe vulnerability to the bacterium could be inherited or could be caused by lifestyle choices, such as smoking and diet. Regular use of pain relievers. Common pain relievers — such as aspirin, ibuprofen (Advil, Motrin IB, others), and naproxen (Aleve, Anaprox) — can cause both acute gastritis and chronic gastritis. Using these pain relievers regularly or taking too much of these drugs may reduce a key substance that helps preserve the protective lining of your stomach. Older age. Older adults have an increased risk of gastritis because the stomach lining tends to thin with age and because older adults are more likely to have H. pylori infection or autoimmune disorders than younger people are. Excessive alcohol use. Alcohol can irritate and erode your stomach lining, which makes your stomach more vulnerable to digestive juices. Excessive alcohol use is more likely to cause acute gastritis. Stress. Severe stress due to major surgery, injury, burns, or severe infections can cause acute gastritis. Your own body attacking cells in your stomach. Called autoimmune gastritis, this type of gastritis occurs when your body attacks the cells that make up your stomach lining. This reaction can wear away at your stomach’s protective barrier. Autoimmune gastritis is more common in people with other autoimmune disorders, including Hashimoto’s disease and type 1 diabetes. Autoimmune gastritis can also be associated with vitamin B-12 deficiency.
Other diseases and conditions. Gastritis may be associated with other medical conditions, including HIV/AIDS, Crohn’s disease, and parasitic infections. Complications Left untreated, gastritis may lead to stomach ulcers and stomach bleeding. Rarely, some forms of chronic gastritis may increase your risk of stomach cancer, especially if you have extensive thinning of the stomach lining and changes in the lining’s cells. Tell your doctor if your signs and symptoms aren’t improving despite treatment for gastritis. Source: Mayo Clinic
Table of Contents Advances in Environmental Technology - Volume 10, Issue 1, Winter 2024 • Publication date: 1402/11/30 (Solar Hijri calendar) • Number of articles: 6 | • Arvind Swarnkar *, Samir Bajpai, Ishtiyaq Ahmad Pages 1-11 The quality of groundwater (GW) depends on its surrounding environment, such as population, drains, ponds, and industries. This study evaluated the improvement of wastewater (WW) quality due to the wetland and ponds in the Amanaka, Raipur region of Chhattisgarh, India, and their impact on GW. Water samples were taken at four different locations to measure physicochemical parameters: pH, electrical conductivity (EC), total dissolved solids (TDS), hardness, dissolved oxygen (DO), biochemical oxygen demand (BOD5), chemical oxygen demand (COD), total Kjeldahl nitrogen (TKN), nitrate nitrogen (NN), and total phosphorus (TP). The removal efficiency (RE) obtained through the wetland was 50.0% for BOD5, 87.9% for COD, 71.4% for TKN, 87.2% for NN, and 56.5% for TP from the influent. The RE obtained from the wetland to the pond was 72.6% for BOD5, 40.0% for COD, and 89.6% for TP during the pre-monsoon. According to the findings, GW quality was good, even though ponds, wetlands, and some small-scale industries surround it. The government should also monitor landfills, household garbage, and agricultural activities to sustain GW quality. All borewell water is drinkable. Keywords: Wetland, Wastewater, Pond, Borewell, Groundwater, Water Quality • Kgolofelo Nkele *, Lizzy Mpenyana-Monyatsi, Vhahangwele Masindi Pages 12-28 A pilot trial was performed in a potable water treatment plant with a capacity of 16 ML/day. The aim was to determine the removal of manganese using a mechanochemically synthesized Mg-(OH)2-Ca nanocomposite. The acquired results were underpinned by state-of-the-art analytical instruments. Specifically, the trials were performed for 157 hr using hydrated lime, periclase, and their nanocomposite individually.
The key performance indicators were manganese, turbidity, electrical conductivity (EC), and pH. The results showed an increase in pH from ±7.46 to ≥7.5, ≥8.2, and ≥7.8 and in EC from ±0.24 to ≥0.28, ≥0.57, and ≥0.58 mS/cm for hydrated lime, periclase, and their nanocomposite, respectively. Manganese was reduced from ±400 to ≤80 µg/L, ≤89 µg/L, and ≤54 µg/L for hydrated lime, periclase, and their nanocomposite, respectively. The turbidity was reduced to ≤1 NTU for all the chemicals, registering ≤0.40, ≤0.85, and ≤0.89 NTU for hydrated lime, nanocomposite, and periclase, respectively, from an initial 6.45 NTU. The findings of this study demonstrated the capabilities of nanomaterials in increasing the pH of the product solution and attenuating manganese and turbidity to the required levels. Lastly, the material costs amounted to R 6300.00 (323.98 USD) per week for the nanocomposite, which was cheaper than the individual materials. Interestingly, the nanocomposite showed superior and cost-effective performance compared to the individual materials and is promising for the attenuation of manganese and other contaminants, enhancing its versatility in water treatment. Keywords: Drinking water treatment, Manganese contamination, Manganese removal, Hydrated lime (Ca(OH)2), Periclase (MgO), Nanocomposite, Materials costs • Morteza Ghobadi * Pages 29-40 The proper management of municipal solid waste (MSW) is a critical challenge in land use planning and environmental sustainability. The selection of suitable landfill sites is a pivotal component of MSW management, considering different environmental factors. This study evaluated the effectiveness of two Multi-Criteria Decision-Making (MCDM) methods, Step-wise Weight Assessment Ratio Analysis (SWARA) and Best-Worst Method (BWM), in combination with Geographic Information Systems (GIS) for landfill site selection. SWARA and BWM were employed as MCDM tools to assess landfill sites based on ten criteria.
The results demonstrate that SWARA exhibited superior performance over BWM in terms of its ability to identify and prioritize optimal landfill sites. SWARA offered a more accurate and reliable decision-making framework, taking into account both the quantitative and qualitative aspects of site selection criteria. Additionally, SWARA demonstrated better sensitivity to changes in input data and provided more consistent results. The findings emphasize the importance of choosing an appropriate MCDM approach to enhance the decision-making process, ultimately leading to more sustainable and environmentally responsible waste management practices in urban areas. By adopting and continually refining such methodologies, urban planners and waste management authorities can contribute to more efficient, responsible, and sustainable urban development. Keywords: Environmental capability assessment, Landfill site selection, SWARA, BWM, GIS • Imane ZOUFRI *, Mohammed MERZOUKI, Malika AMMARI, Younesse EL BYARI, Amina BARI Pages 41-54 This article investigated the wastewater effluents from the brassware industry in the city of Fez, Morocco. Brassware is considered one of the principal economic activities in the region, but its effluents harm the environment and human health because of their heavy metal loading. The objective of this study was to determine the physicochemical, metallic, and microbiological characteristics of the brassware effluents. The degree and nature of the pollution generated by the studied effluents from September to April 2022 were also studied.
The samples were collected each month from a brassware company to evaluate the pollution using standard methods: physicochemical parameters (temperature, pH, electrical conductivity, suspended solids, chemical oxygen demand, biological oxygen demand, sulfates, orthophosphate ions, total Kjeldahl nitrogen, nitrates, nitrites, and ammonium), metals (silver, aluminum, cadmium, cobalt, chromium, copper, nickel, and lead), and microbiological parameters (total aerobic bacteria, total coliforms, fecal coliforms, Staphylococcus aureus, Streptococcus, molds, and yeasts). The results collected during March showed that the studied effluents had a pH of 10.4 ± 0.16, an electrical conductivity of 6.93 ± 0.11 mS/cm, suspended solids of 3078.15 ± 121.85 mg/L, a chemical oxygen demand of 680.44 ± 10.84 mg/L, and sulfates of 1755.44 ± 21.56 mg/L, which do not conform to Moroccan discharge standards. The metal analysis showed that the studied effluents exhibited high concentrations of nickel (999.96 ± 0.08 mg/L) and copper (76.48 ± 0.002 mg/L) during this month. Nevertheless, they were characterized by the absence of pathogenic germs. In general, the obtained results showed that these effluents were characterized by monthly variations in the values of all the measured parameters. These results provide important information on the negative impact of brassware wastewater on the environment that should motivate municipal water utilities and researchers to find innovative solutions to this problem and protect the receiving environment. Keywords: Brassware, Organic matter, Heavy metals, Microbiological, Water pollution • Manisha V Bagal, Vinayak Rajan, Shubham Shinde, Barnali V Banerjee, Vitthal Gole, Ashish V Mohod* Pages 55-69 Hydrodynamic cavitation (HC) with an orifice as the cavitating device was used to study the degradation of methyl orange dye. The operating parameters of the process, such as pH and inlet pressure, were optimized.
The effect of hydroxyl radical promoters like Fenton oxidation and hydrogen peroxide (H2O2) on the extent of degradation was also investigated. It was observed that acidic conditions (pH 2) favor the degradation of methyl orange. The combined effect of hydrodynamic cavitation with hydrogen peroxide was investigated at the solution’s natural pH and at an optimized solution pH of 2. Maximum degradation of 99.2% was observed at natural pH, whereas complete degradation of methyl orange dye was observed at pH 2 with the addition of 8 ml/L of hydrogen peroxide. The hybrid processes of HC/Fenton and HC/H2O2 showed the highest efficiency for the degradation of methyl orange with a minimum energy requirement (0.11 kWh) and operational cost (USD 0.0062/L). Keywords: Methyl Orange, Hydrodynamic Cavitation, Hydrogen Peroxide, Fenton Oxidation, Degradation • Mukhtar Dh Shubber, Daryoush Yousefi Kebria Pages 70-84 This study aims to recycle thermally remediated bentonite clay waste (TRBCW) as a green, new, low-cost adsorbent to remove methylene blue (MB) dye from aqueous solution. The first system comprised batch adsorption experiments with five operating parameters: contact time, pH, temperature, initial MB concentration, and TRBCW adsorbent dose. Analysis of the batch adsorption data made it apparent that the adsorption of MB molecules on the TRBCW adsorbent was endothermic, irreversible, promising, spontaneous, and favorable. The Freundlich model was more compatible with the experimental batch adsorption data than the Langmuir model, and the maximum adsorption capacity was 34.77 mg/g.
The second system was a continuous (fixed-bed column) system with three investigated parameters: the influent MB concentration, the flow rate, and the bed depth (TRBCW weight). The adsorption capacity obtained under the dominant parameters (1 mL/min, 50 mg/L, and 22 cm) was 61.37 mg/g, and the experimental continuous adsorption data were best described by the Yoon-Nelson, Thomas, and BDST models with R² > 0.9. Keywords: Adsorption, Bentonite Clay Waste, Batch System, Methylene Blue Dye
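Since the batch data above were fitted with the Freundlich isotherm, q_e = K_F · C_e^(1/n), the usual linearized fit ln(q_e) = ln(K_F) + (1/n)·ln(C_e) can be sketched as follows. The equilibrium data below are synthetic, generated from assumed parameters, not the TRBCW measurements reported in the abstract:

```python
import math

def linfit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def fit_freundlich(c_e, q_e):
    """Fit q_e = K_F * C_e**(1/n) via the log-linear form
    ln(q_e) = ln(K_F) + (1/n) * ln(C_e)."""
    slope, intercept = linfit([math.log(c) for c in c_e],
                              [math.log(q) for q in q_e])
    return math.exp(intercept), 1.0 / slope   # K_F, n

# Synthetic equilibrium data generated from K_F = 5, n = 2 -- purely
# illustrative, the fit simply recovers the generating parameters.
c_e = [1.0, 5.0, 10.0, 25.0, 50.0]          # mg/L at equilibrium
q_e = [5.0 * c ** 0.5 for c in c_e]          # mg/g adsorbed
k_f, n_f = fit_freundlich(c_e, q_e)
assert abs(k_f - 5.0) < 1e-9 and abs(n_f - 2.0) < 1e-9
```

With real data the R² of the log-linear regression is what decides, as in the abstract, whether the Freundlich or the Langmuir form describes the system better.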
Registration Dossier
Toxicological information: Eye irritation
Administrative data
Endpoint: eye irritation: in vivo
Data waiving: study scientifically not necessary / other information available
Justification for data waiving: the study does not need to be conducted because the substance is classified as skin corrosion, leading to classification as serious eye damage (Category 1)
Carnegie Mellon University Experimental and Theoretical Studies of TAML® Activators: Pharmaceuticals Degradation, Nuclear Tunneling and Electronic Structure Analysis Thesis posted on 2013-05-06, authored by Longzhu Q. Shen Green chemistry concerns the scientific disciplines that support sustainability as their zenith. Sustainability has both temporal and spatial dimensions, and the addition of the spatial dimension has significantly enhanced its visibility and horizon. Green chemistry, as an important subset of sustainability, radiates broadly, and this requires a pursuer to choose their spectrum of interest. I therefore defined the technical domain of green chemistry from my own perspective. By projecting green chemistry onto the primary basis composed of the principles domain defined by Prof. Anastas, the challenges domain interpreted by Prof. Collins, and the technical domain by my definition, I mapped out my green chemistry trajectory. In a word, my Ph.D. training can be summarized as a research journey of combating persistent organic pollutants, characterizing the electronic signature of catalysts for renewable energy generation, and catalytically oxidizing active pharmaceutical ingredients and hydrocarbons, via combined avenues of computation, analysis, and scientific inference. In view of the problem space of green chemistry, seeking renewable energies and eliminating persistent, disrupting, or toxic compounds appear on the higher levels according to Prof. Collins. I attempted to tackle all four problems to certain degrees during my Ph.D. Energy has propelled the engine of human civilization for hundreds of years. Today, fossil fuels, the natural reserves we have relied upon in the past, are approaching their limit.
More significantly, their continuing use shadows the sustainable future of human beings as well as of all forms of life on this planet. An urgent need is emerging for better ways to capture and convert solar energy into carbon-neutral forms of chemical energy. The lesson from photosynthesis provides a promising answer: water splitting. To mimic this process, developing catalysts for water cleavage becomes the central theme. Among all the earth-abundant and inexpensive elements, Co stands out for its high efficiency and dual capability of water reduction and oxidation. Between the two, water oxidation presents the major challenge. Co(IV) was shown to be the active intermediate in this chemical conversion. This highlights the importance of precise characterization of the electronic structure of Co-containing catalysts. To this end, I combined spectroscopic information and DFT calculations to clarify the ambiguity in the literature on the diagnosis of Co-containing complexes, and theoretically outlined an avenue to acquire a Co(IV) electronic state in coordination complexes. The study of eliminating persistent organic pollutants was performed at the United Nations in the summer of 2011. Apart from a technical summary of my research, I also identified two important causes of the impasse on many environmental issues between nations. This points to the need for global leadership that can unify and usher the international strength toward the sustainable summit. The research experience at the United Nations showed that hydrocarbons and their halogenated derivatives are very resistant to natural attenuation. I then began my pursuit of the study of hydrocarbon hydroxylation via theoretical modeling.
Comparing theoretical with experimental studies, the reaction rates of [FeV(O)(B*)]–1 with ethylbenzene (EtBZ) and its isotope-labeled analogue EtBZ-d10 differ in three respects: (i) the initial [FeV(O)(B*)]–1 decay rate for the substrate EtBZ-d10 is slower than that for EtBZ, (ii) the slope of the ln(k/T) vs. 1/T plot for EtBZ-d10 is smaller than that for EtBZ over the experimental temperature range, and (iii) the extrapolated tangents of the kinetic curves give a large, negative intercept difference, Int(EtBZ) - Int(EtBZ-d10) < 0, at the limit 1/T → 0. Theoretical analysis, based on density functional theory calculations of the thermodynamic parameters of the reaction species and Bell’s model for tunneling through quadratic barriers, shows that (i) and (ii) result from isotope-induced changes in both the zero-point energies and nuclear tunneling, whereas (iii) is exclusively an isotope mass effect on tunneling. The result shows that nuclear tunneling makes a significant contribution to the hydrocarbon hydroxylation process. A theoretical model was proposed that can be used to predict absolute rate constants outside the experimentally accessible range. In addition to persistent molecules, endocrine-disrupting chemicals also deserve special attention. Active Pharmaceutical Ingredients (APIs) have been recognized as hot-spot environmental pollutants, largely due to their high disrupting potency. These anthropogenic synthetic compounds are mostly designed to aim at evolutionarily conserved targets, to trigger biological responses at minute levels, and are optimized for extra degradation resistance to ensure stable shelf lives. All these therapeutic benefits translate into ecotoxicity concerns when the parent compounds or their metabolites are released into the environment. A large body of literature has linked exposure to APIs to biological disasters. In this context, I applied TAML activators to treat two highly prescribed antidepressant drugs, Zoloft and Prozac.
The APIs of these two drugs are sertraline and fluoxetine, respectively. In the sertraline degradation study, I demonstrated that TAML activators at nanomolar concentrations in water activate hydrogen peroxide to rapidly degrade this persistent API. While all the API is readily consumed, degradation slows significantly at one intermediate, sertraline ketone. The process occurs from neutral to basic pH. The pathway has been characterized through four early intermediates which reflect the metabolism of sertraline, providing further evidence that TAML activator/peroxide reactive intermediates mimic those of cytochrome P450 enzymes. TAML catalysts have been designed to exhibit considerable variability in reactivity, and this provides an excellent tool for observing degradation intermediates of widely differing stabilities. Two elusive, hydrolytically sensitive intermediates and likely human metabolites, sertraline imine and N-desmethylsertraline imine, could be identified only by using a fast-acting catalyst. The more stable intermediates and known human metabolites, desmethylsertraline and sertraline ketone, were most easily detected and studied using a slow-acting catalyst. The resistance of sertraline ketone to aggressive TAML activator/peroxide treatment marks it as likely to be environmentally persistent and signals that its environmental effects are important components of the full implications of sertraline use. Fluoxetine represents the first member of the selective serotonin reuptake inhibitor (SSRI) family and is one of its most successful members. Its top prescription record among SSRIs and extra stability lead to its prevalent occurrence in the environment. Environmental studies have shown that FLX can be toxic to aquatic species at trace levels of exposure and disruptive to their nervous systems. Therefore, it is urgent to seek an environmentally friendly solution to diminish the harm FLX can potentially bring to the environment.
Under treatment with TAML activators and hydrogen peroxide, fluoxetine was shown to be rapidly degraded to harmless endpoints. An elusive intermediate along the degradation pathway was proposed, and its fleeting fate was studied using DFT calculations. The cascade breakdown of FLX under TAML®/H2O2 treatment inspires green pharmaceutical design.
History
Date: 2013-05-06
Degree Type: Dissertation
Department: Chemistry
Degree Name: Doctor of Philosophy (PhD)
Advisor(s): Terrence J. Collins
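The ln(k/T) vs. 1/T analysis mentioned in the abstract is the linear (Eyring) form of transition-state theory, ln(k/T) = -ΔH‡/(RT) + ln(kB/h) + ΔS‡/R. A minimal sketch of extracting activation parameters from rate data follows; the rate constants below are synthetic, generated from assumed ΔH‡ and ΔS‡, not the EtBZ/EtBZ-d10 data of the thesis:

```python
import math

R  = 8.314462618        # gas constant, J mol^-1 K^-1
KB = 1.380649e-23       # Boltzmann constant, J K^-1
H  = 6.62607015e-34     # Planck constant, J s

def linfit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def eyring_fit(temps, rates):
    """Activation enthalpy (J/mol) and entropy (J/(mol K)) from the
    linear Eyring form ln(k/T) = -dH/(R*T) + ln(KB/H) + dS/R."""
    slope, intercept = linfit([1.0 / t for t in temps],
                              [math.log(k / t) for k, t in zip(rates, temps)])
    d_h = -slope * R
    d_s = (intercept - math.log(KB / H)) * R
    return d_h, d_s

# Synthetic rate constants generated from dH = 40 kJ/mol, dS = -80 J/(mol K);
# the fit simply recovers the generating parameters.
temps = [273.0, 283.0, 293.0, 303.0]
rates = [(KB * t / H) * math.exp(-80.0 / R - 40000.0 / (R * t)) for t in temps]
d_h, d_s = eyring_fit(temps, rates)
assert abs(d_h - 40000.0) < 1e-3 and abs(d_s + 80.0) < 1e-6
```

The intercept difference discussed in point (iii) of the abstract corresponds to the 1/T → 0 extrapolation of exactly this kind of plot, which is why it isolates effects (such as tunneling) that do not vanish at high temperature in the classical model.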
Open Access
Properties of convergence of a class of iterative processes generated by sequences of self-mappings with applications to switched dynamic systems
Journal of Inequalities and Applications 2014, 2014:498
https://doi.org/10.1186/1029-242X-2014-498
Received: 7 September 2014 | Accepted: 27 November 2014 | Published: 15 December 2014
Abstract
This article investigates the convergence properties of iterative processes involving sequences of self-mappings of metric or Banach spaces. Such sequences are built from a set of primary self-mappings which are either expansive or non-expansive, and some of the non-expansive ones can be contractive, including the case of strict contractions. The sequences are built subject to switching laws which select each active self-mapping on a certain activation interval in such a way that essential properties of boundedness and convergence of distances and iterated sequences are guaranteed. Applications to the important problem of stability of dynamic switched systems are also given.
Keywords: expansive, non-expansive, contractive and strictly contractive self-mappings; switched dynamic systems; convergence; fixed point; stability
1 Introduction
The problems of boundedness and convergence of sequences of iterative schemes are very important in numerical analysis and the numerical implementation of discrete schemes. See [1–5] and references therein. In particular, [1] describes in detail and with rigor the associated problems linked to the theory of fixed points in various types of spaces, such as metric spaces, complete and compact metric spaces, and Banach spaces. This book also contains, discusses, and compares results of a number of relevant background references on the subject. In other papers, related problems are approached from a computational point of view, including the acceleration of convergence using modified numerical methods like Aitken’s delta-squared method or Steffensen’s method [2–5].
On the other hand, there is also a rich background in theory and applications of fixed point theory related to non-expansive, contractive, weakly contractive and strictly contractive mappings, as well as related to their counterparts in the framework of common fixed points and coincidence points for several mappings and in the framework of multivalued functions. A (non-exhaustive) list of recent related references is given, including new results and a discussion of previous background ones. See, for instance, [1, 6–25] and references therein. Many efforts are also devoted to the formulation of extensions of the above problems to the study of existence and uniqueness of best proximity points in cyclic self-mappings [8, 13, 14, 16–21], to that of proximal contractions [13, 14], and to the characterization of approximate fixed and coincidence points [22, 23]. Direct applications of fixed point theory to the study of the stability of dynamic systems, including the property of ultimate boundedness for the trajectory solutions having mixed non-expansive and expansive properties through time or being subject to impulsive controls, have been given in [21] and [24, 25]. Some recent studies of best proximity points of weak ϕ-contractions in ordered metric spaces have been performed in [26]. On the other hand, the existence of best proximity points for 2-cyclic operators in uniformly convex Banach spaces is investigated in [27]. Finally, in [28], it has been proved that some previous fixed point results and some recently announced best proximity results are equivalent. This paper is focused on the study of boundedness and convergence of sequences of distances and iterated points and the characterization of fixed points of a class of composite self-maps in metric spaces. Such maps are built with combinations of sets of elementary self-maps which can be expansive or non-expansive, and the latter can be contractive (including the case of strict contractions).
The composite maps are defined by switching rules which select some self-map (the 'active' self-map) on a certain interval of definition of the running index of the sequence of iterates being built. The above-mentioned properties concerning the sequences of iterates generated from the given initial points are investigated under particular constraints on the switching rule. Note, on the other hand, that the properties of controllability and stability of differential-difference equations and of the various kinds of dynamic systems are of wide interest in theory and applications [2–5, 29–52]. See, for instance, related problems associated with continuous-time, discrete-time, digital, and hybrid systems and those involving delayed dynamics [38–42] and [31, 34–37, 44–46], sampled-data systems under constant, non-uniform, and/or multirate sampling, and switched systems [34, 36, 45, 46]. In this context, this paper also includes an application of the developed theoretical framework to the stability of (in general, nonlinear) switched dynamic systems. The composite self-map generating the trajectory solution sequence from initial conditions is defined with sets of elementary self-maps associated with distinct active parameterizations which are switched through time by switching rules that guarantee the fulfilment of the suitable properties.

2 Problem statement and main results

Consider a sequence {T_{σ(n)}} of self-mappings from X to X, where (X, d) is a metric space endowed with a metric d : X × X → R_0^+, and the iterative process

x_{n+1} = T_{σ(n)} x_n, x_0 ∈ X, n ∈ Z_0^+. (2.1)

The subsequent nomenclature is used:

(1) The function σ : Z_0^+ → q̄ ⊂ Z^+ is the so-called switching law, where q̄ = {1, 2, …, q} if q ∈ Z^+ is finite and q̄ ≡ Z^+ otherwise, i.e. q = card(q̄) = ℵ_0 (the cardinal of a countably infinite set).
(2) q is the number of distinct parameterizations of the sequence S_T = {T_{σ(n)}} of self-mappings on X, in the sense that such a sequence contains a finite or infinite set {T_1, T_2, …, T_q} of distinct self-mappings.

(3) q̄ is the disjoint union q̄ = q_sc ∪ q_0c ∪ q_ne ∪ q_e, where q_sc, q_0c, q_ne, and q_e are, respectively, the indexing sets of strict contractions, contractive self-mappings which are not strict contractions, non-expansive self-mappings which are not contractive, and a class of expansive self-mappings which fulfil a specific expanding condition, according to

d(T_i x, T_i y) ≤ K_i d(x, y), ∀x, y ∈ X, with K_i ∈ (0, 1) if i ∈ q_sc,
d(T_i x, T_i y) < d(x, y) = K_i d(x, y), ∀x, y (≠ x) ∈ X, with K_i = 1 if i ∈ q_0c,
d(T_i x, T_i y) ≤ K_i d(x, y) = d(x, y), ∀x, y ∈ X, with K_i = 1 if i ∈ q_ne,
K_{0i} d(x, y) ≤ d(T_i x, T_i y) ≤ K_i d(x, y), ∀x, y ∈ X, with K_{0i}, K_i (> K_{0i}) ∈ (1, ∞) if i ∈ q_e. (2.2)

Note that q_c = q_sc ∪ q_0c indexes the whole set of contractive self-mappings in the set S_T.

(4) The composite mapping T̂ : Z_0^+ × Z^+ × X → X, defined for n ∈ Z_0^+, m ∈ Z^+ by

T̂(n, n+m) = T_{σ(n+m−1)} T̂(n, n+m−1) = T_{σ(n+m−1)} T_{σ(n+m−2)} ⋯ T_{σ(n)}; (2.3)

in particular,

T̂_n = T̂(0, n) = T_{σ(n)} T̂_{n−1} = T_{σ(n)} T_{σ(n−1)} ⋯ T_{σ(0)}, (2.4)

with σ(n) ∈ q̄, defines the sequence of iterates {x_n} given by

x_{n+1} = T̂(m, n+1) x_m = T_{σ(n)} x_n = T̂_n x_0, ∀x_0 ∈ X, ∀m, n (≥ m) ∈ Z_0^+, (2.5)

where σ̂(n, m+1) = {σ(n), σ(n+1), …, σ(m)} is the sequence of configurations from n to m+1 defined by the switching law σ : Z_0^+ → q̄ ⊂ Z^+.
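Before proceeding, the switched scheme (2.1) can be illustrated by a minimal numerical sketch. This is our own Python rendition: the three scalar self-maps and the periodic switching law below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the switched iterative process (2.1):
# x_{n+1} = T_{sigma(n)} x_n. The three scalar self-maps and the periodic
# switching law are illustrative choices, not examples from the paper.

T = {
    1: lambda x: 0.5 * x,   # strict contraction, K_1 = 0.5
    2: lambda x: -x,        # non-expansive (an isometry), K_2 = 1
    3: lambda x: 1.2 * x,   # expansive, K_3 = 1.2
}

def sigma(n):
    # Periodic law: the product of constants over one period is
    # 0.5 * 1 * 1.2 * 0.5 = 0.3 < 1, so the composite map over each
    # period is a strict contraction.
    return (1, 2, 3, 1)[n % 4]

x = 10.0                    # x_0
for n in range(40):         # ten full switching periods
    x = T[sigma(n)](x)      # x becomes the composite T_hat(0, n+1) x_0
# |x| contracts by a factor 0.3 per period, so the iterates tend to 0.
```

The law deliberately lets the expansive map act only once per period, with enough contractive steps around it that the composite constant per period stays below one; this is the mechanism the switching-law constraints below formalize.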
One gets recursively from (2.1)

d(T̂_n x_0, T̂_{n−1} x_0) ≤ K(0, n) d(x_0, T_{σ(0)} x_0), ∀x_0 ∈ X, ∀n ∈ Z^+, (2.6)

if σ(j) ∉ q_0c, ∀j ∈ n̄ ∪ {0}, and

d(T̂_{n+1} x_0, T̂_n x_0) ≤ K(0, n) K_0c(0, n) d(x_0, T_{σ(0)} x_0) < K(0, n) d(x_0, T_{σ(0)} x_0), ∀x_0 ∈ X, (2.7)

if there is some j ∈ n̄ ∪ {0} such that σ(j) ∈ q_0c, where

K(0, n) = ∏_{j=0}^{n−1} K_{σ(j)},  K_0c(0, n) = ∏_{j=0, σ(j)∈q_0c}^{n−1} [d(T_{σ(j+1)} x_{j+1}, T_{σ(j+2)} x_j) / d(T_{σ(j)} x_j, T_{σ(j+1)} x_{j+1})] (< 1), ∀n ∈ Z_0^+. (2.8)

The following simple example illustrates how the iterative process can work in a real situation.

Example 2.1 Consider the simple scalar discrete equation (2.1), x_{n+1} = T_{σ(n)} x_n = a_n x_n with x_0 ∈ R, n ∈ Z_0^+, under the Euclidean metric. The sequence {T_{σ(n)} = a_n} is such that T_{σ(n)} ∈ {T_1, T_2, T_3}, ∀n ∈ Z_0^+, where T_1 : z ↦ αz, T_2 : z ↦ z, and T_3 : z ↦ βz for some given real constants −1 < α < 1 and β > 1. It is clear that the self-mapping T_1 on R is a strict contraction with contractive constant K_1 = |α| < 1, the self-mapping T_2 on R is non-expansive with constant K_2 = 1, and the self-mapping T_3 on R is expansive with constants K_{03} = 1 and K_3 = β > 1. Note that the set of fixed points of T_i reduces to {0} for i = 1, 3, that is, Fix(T_1) = Fix(T_3) = {0}, so that {0} is the unique fixed, and equilibrium, point of T_1 and T_3, while Fix(T_2) = R, with the whole of R being an equilibrium set of T_2. Note also that {0} is a stable equilibrium point of T_1, while it is an unstable equilibrium point of T_3. The switching law is σ : Z_0^+ → q̄ = {1, 2, 3} ⊂ Z^+. If σ(n) = i (∈ q̄), ∀n ∈ [n_1, n_2) ∩ Z_0^+, it is said that the ith configuration of (2.1) is active in the interval [n_1, n_2).
If σ(n_2) = j (∈ q̄) ≠ i, then it is said that n_2 is a switching sample of (2.1), since the active configuration becomes modified at such a sample.

Lemma 2.2 Consider the iterative process (2.1) built with a sequence {T_{σ(n)}} of self-mappings from X to X, where (X, d) is a metric space. Assume a given switching law σ : Z_0^+ → q̄ ⊂ Z^+ such that there are a strictly increasing testing sequence of integers S = S(σ) = {n_k}, which may be non-unique, with n_0 = 0, and an associated sequence of positive real numbers S_ρ = S_ρ(S) = {ρ_k} verifying n_{k+1} − n_k ≤ μ < +∞ and K(n_k, n_{k+1}) = ∏_{j=n_k}^{n_{k+1}−1} K_{σ(j)} ≤ ρ_k, ∀k ∈ Z_0^+, where μ = μ(S). Then the following properties hold:

(i) If ρ_k < 1, ∀k ∈ Z_0^+, then d(x_{n+1}, x_n) → 0 as n → ∞, ∀x_0 ∈ X; moreover, d(x_{n_{k+1}}, x_{n_k}) ≤ (∑_{i=0}^{μ−1} M_0^i) ρ_k d(x_{n_k}, x_{n_{k−1}}), ∀k ∈ Z^+, ∀x_0 ∈ X, so that d(x_{n_k}, x_{n_{k+1}}) → 0 as k → ∞ and d(x_n, x_{n+1}) → 0 as n → ∞, ∀x_0 ∈ X, where M_0 = max_{i∈q̄}(K_i).

(ii) If the switching law σ : Z_0^+ → q̄ ⊂ Z^+ is such that ρ_k ≤ ρ/(∑_{i=0}^{μ−1} M_0^i), ∀k ∈ Z_0^+, for some given real constant ρ ∈ (0, 1), then all the composite self-mappings T̂(n_k, n_{k+1}) = T_{σ(n_{k+1}−1)} ⋯ T_{σ(n_k)} on X are strict contractions of contractive constant ρ.

(iii) If ρ_k > 1, ∀k ∈ Z_0^+, then d(x_{n+1}, x_n) → +∞ as n → ∞ for any given x_0 ∉ Fix(T_{σ(0)}) in X.

(iv) Assume that ρ_k ≤ ρ̄ < 1, ∀k ∈ Z_0^+, for n_k ∈ S and ρ_k ∈ S_ρ, ∀k ∈ Z_0^+, and assume the existence of the limits lim_{k→∞}(n_{k+1} − n_k) = μ_∞ ≤ μ < +∞ and T̂(n_k, n_{k+1}) → T̂(0, μ_∞) uniformly in X as k → ∞, that is, lim_{k→∞} sup_{x∈X} d(T̂(n_k, n_{k+1}) x, T̂(0, μ_∞) x) = 0, where μ_∞ = μ_∞(S).
Then the self-mapping T ˆ ( 0 , μ ) on X is a strict contraction and, for any given N Z 0 + , there is a real constant δ = δ ( N ) such that { δ ( N ) } 0 such that one has for all k , N Z 0 + d ( x k + N + 1 , x k + N ) ρ ¯ k d ( x n 0 , x n 1 ) + 1 ρ ¯ k 1 ρ ¯ max i Z 0 + ( δ n i ) , (2.9) lim k , N d ( x n k + N + 1 , x n k + N ) = 0 . (2.10)   Proof Note that n 0 = 0 so that for any n k S and k Z 0 + d ( x n k + 1 + 1 , x n k + 1 ) K ( n k , n k + 1 ) d ( x n k , x n k + 1 ) ρ k d ( x n k , x n k + 1 ) , k Z 0 + ( i = 0 k [ ρ i ] ) d ( T σ ( 0 ) x 0 , x 0 ) < d ( T σ ( 0 ) x 0 , x 0 ) , (2.11) d ( x n k + 1 + j + 1 , x n k + 1 + j ) ( i = n k + 1 n k + 1 + j [ K σ ( i ) ] ) ( i = 0 k [ ρ i ] ) d ( T σ ( 0 ) x 0 , x 0 ) M ( k , j ) ( i = 0 k [ ρ i ] ) d ( T σ ( 0 ) x 0 , x 0 ) M ( i = 0 k [ ρ i ] ) d ( T σ ( 0 ) x 0 , x 0 ) < M ( i = 0 k [ ρ i ] ) d ( T σ ( 0 ) x 0 , x 0 ) , k Z 0 + , j n k n k + 1 1 ¯ { 0 } (2.12) (the last inequalities of (2.11)-(2.12) standing if and only if the respective left-hand-sides are nonzero) for some set of real bounded sequences { M ( n k , j ) } , j n k + 2 n k + 1 ¯ which satisfy M ( n k , j ) M = M 0 max ( 1 , M 0 μ 1 ) < + for some M R + and j n k + 2 n k + 1 ¯ , since max k Z 0 + ( n k + 1 n k ) μ < + , k Z 0 + and K i ( 1 , ) , i q ¯ . There is k 0 Z 0 + such that ( i = 0 k [ ρ i ] ) < 1 / M , since ρ i < 1 , i Z 0 + , then one gets from (2.8) d ( x n k + 1 + j + 1 , x n k + 1 + j ) < d ( T σ ( 0 ) x 0 , x 0 ) , k ( k 0 ) Z 0 + . (2.13) Also, since ρ k < 1 , k Z 0 + , one gets from (2.11)-(2.12) lim k d ( x n k + j + 1 , x n k + j ) = 0 , j n k + 1 n k 1 ¯ { 0 } . (2.14) Since (2.14) involves any limit distance in-between any two adjacent elements of the solution sequence { x n } of (2.1) for any given x 0 X , one gets lim n d ( x n + 1 , x n ) = 0 . 
On the other hand, one gets from (2.11) and the triangle inequality d ( x n k + 1 , x n k ) i = 1 n k + 1 n k 1 d ( x n k + i , x n k + i + 1 ) ρ k ( i = 0 μ 1 M 0 i ) d ( x n k , x n k + 1 ) , k Z 0 + (2.15) so that d ( x n k + 1 , x n k ) ( i = 0 μ 1 M 0 i ) ρ k d ( x n k , x n k 1 ) , k Z + and d ( x n k , x n k + 1 ) 0 as k from the triangle inequality, d ( x n , x n + 1 ) 0 as n and max k Z 0 + ( n k + 1 n k ) μ < + . Hence, property (i) follows. Property (ii) is proven by noting that if ρ k ρ / ( i = 0 μ 1 M 0 i ) , k Z 0 + , for some real ρ ( 0 , 1 ) then d ( x n k + 1 , x n k ) ρ d ( x n k , x n k 1 ) ρ 2 d ( x n k 1 , x n k 2 ) ρ k d ( x n 0 , x n 1 ) . (2.16) Property (iii) is proven from the fact that expansive mappings of the switching law fulfil d ( x , y ) < K 0 i d ( x , y ) d ( T i x , T i y ) , x , y X  with  K 0 i > 1  if  i q e and it follows from the assumptions that there is an infinite sequence S = { n k } such that T σ ( n k ) on X are non-expansive self-mappings. Then d ( x n k + 1 , x n k + 1 + 1 ) ( i = 0 k [ ρ i ] ) d ( T σ ( 0 ) x 0 , x 0 ) , k Z 0 + so that lim k d ( x n k + 1 , x n k + 1 + 1 ) = + if x 0 Fix ( T σ ( 0 ) ) . Thus, lim k d ( x n k + 1 + j , x n k + 1 + j + 1 ) = + for all j = 0 , 1 , , n k + 2 n k + 1 1 since n k + 2 n k + 1 μ . Otherwise, it would be impossible with the triangle inequality of the metric to be bounded over a sum of a finite number of distances as k of which one is unbounded as k . Hence, property (iii) follows. To prove property (iv), note that δ n k = d ( T ˆ ( n k 1 , n k ) x , T ˆ ( n k , n k + 1 ) x ) leads to δ n k 0 as k , so that { δ n k } is bounded since ( n k + 1 n k ) μ as k and T ˆ ( n k , n k + 1 ) T ˆ ( 0 , μ ) uniformly in X as k . 
Then ρ k ρ ¯ < 1 , k Z 0 + for n k S and ρ k S ρ , k Z 0 + and the existence of the limits lim k ( n k + 1 n k ) = μ μ < + , lim k ρ k = ρ ρ ¯ < 1 and T ˆ ( n k , n k + 1 ) T ˆ ( 0 , μ ) uniformly in X as k imply that T ˆ ( 0 , μ ) is a strict contraction on X and, furthermore, since [ d ( T ˆ ( n k 1 , n k ) x n k , T ˆ ( n k , n k + 1 ) x n k ) d ( T ˆ ( 0 , μ ) x n k , T ˆ ( 0 , μ ) x n k ) ] 0 as k , one gets d ( x n k + 1 , x n k ) = d ( T ˆ ( n k , n k + 1 ) x n k , T ˆ ( n k 1 , n k ) x n k 1 ) d ( T ˆ ( n k 1 , n k ) x n k 1 , T ˆ ( n k 1 , n k ) x n k ) + d ( T ˆ ( n k 1 , n k ) x n k , T ˆ ( n k , n k + 1 ) x n k ) K ( n k 1 , n k ) d ( x n k 1 , x n k ) + δ n k ( i = 1 k [ ρ i ] ) d ( x n 0 , x n 1 ) + i = 0 k j = i + 1 k [ ρ i ] δ n i ρ ¯ k d ( x n 0 , x n 1 ) + 1 ρ ¯ k 1 ρ ¯ max i Z 0 + ( δ n i ) (2.17) and then (2.9) holds. Also, (2.10) follows from (2.9), since lim N max i ( N ) Z 0 + ( δ n i ) = 0 for any given x 0 X and ρ ¯ n k + N n N 0 as k , N , and then 0 lim sup k d ( x n k + 1 , x n k + 2 ) lim sup k ( d ( x n k + 1 , x n k + 2 ) ρ ¯ μ d ( x n k , x n k + 1 ) 1 ρ ¯ μ 1 ρ ¯ max i Z 0 + ( δ n i ) ) lim sup k ( d ( x n k + 1 , x n k + 2 ) K ( 0 , μ ) d ( x n k , x n k + 1 ) 1 ρ ¯ μ 1 ρ ¯ max i Z 0 + ( δ n i ) ) 0 (2.18) so that lim k d ( x n k + 1 , x n k + 2 ) exists and then, since lim k ( n k + 1 n k ) = μ μ < + and T ˆ ( n k , n k + 1 ) T ˆ ( 0 , μ ) uniformly in X as k and (2.14)-(2.15) hold, one gets for any nonnegative integer j μ lim k d ( x n k , T ˆ ( n k , n k + 1 ) x n k ) = lim k d ( x n k , T ˆ ( 0 , μ ) x n k ) = lim k d ( x n k + j , T ˆ ( n k + j , n k + j + 1 ) x n k ) = lim k d ( x n k + j , T ˆ ( 0 , μ ) x n k + j ) = lim k d ( x n k + μ + j , T ˆ ( 0 , μ ) x n k + j ) = 0 (2.19) and property (iv) has been proven. 
□

Remark 2.3 Note that the testing sequence S = {n_k} of the given switching law in Lemma 2.2 is not necessarily associated with a set of strict contractions, although the composite constant K(n_k, n_{k+1}), for n_k ∈ S, defines a composite strict contraction built from not necessarily strictly contractive self-mappings. The composite constants K(n, n_k) for n ∈ [n_k + 1, n_{k+1}), where n_k ∈ S, are not necessarily associated with a composite strict contraction. Note also that the testing sequence S = {n_k} is not unique for a given switching law σ : Z_0^+ → q̄ ⊂ Z^+, since the only requirement is that it be strictly increasing with a maximum prescribed, but arbitrary, separation between any two adjacent elements.

Remark 2.4 Lemma 2.2 can be fulfilled, in general, by non-unique testing sequences S = {n_k} of a given switching law, as well as by non-unique associated {ρ_k}, μ and μ_∞. A typical case occurs when the switching law consists of strict contractions or converges to a sequence of strict contractions. Those can be grouped individually in the limit T̂(0, μ_∞), or this limit may be a composite self-mapping of strict limit contractions, so that the next result follows.

Theorem 2.5 Consider an iterative procedure (2.1) built from a given switching law σ : Z_0^+ → q̄ ⊂ Z^+. Assume that the self-mapping T̂(0, μ_∞) on X is a strict contraction such that T̂(n_k, n_{k+1}) → T̂(0, μ_∞) uniformly in X as k → ∞, with the strictly increasing testing sequence S = {n_k} being subject to max_{k∈Z_0^+}(n_{k+1} − n_k) ≤ μ < +∞ and lim_{k→∞}(n_{k+1} − n_k) = μ_∞ ≤ μ. Then the following properties hold:
(i) There is a (in general, non-unique) decomposition of the composite T ˆ ( 0 , μ ) : X X in a maximum number of p [ 1 , μ ] strict contractions T ˆ i ( μ ¯ i 1 , μ ¯ i ) : X X , i p ¯ , with μ 0 = μ ¯ 0 = 0 for the testing sequence S = { n k } , such that μ = μ ¯ = i = 1 p μ i so that T ˆ ( 0 , μ ) = T ˆ ( 0 , μ ¯ ) = T ˆ p ( μ ¯ p 1 , μ ¯ p ) T ˆ p 1 ( μ ¯ p 2 , μ ¯ p 1 ) T ˆ 1 ( 0 , μ ¯ 1 ) , (2.20)   where μ ¯ i = j = 0 i μ i , i p ¯ . 1. (ii) The decomposition (2.20) in a maximum number of strict contractions is unique if and only if the positive integer numbers μ i , i p ¯ such that μ = i = 1 p μ i are unique. In particular, the decomposition (2.20) is unique if μ i = 1 , i p ¯ .   2. (iii) Assume that, furthermore, ( T σ ( k n + j ) T σ ( ( k + 1 ) n + j ) ) 0 uniformly in X as n for some k ( μ ) Z 0 + , j k 1 ¯ { 0 } so that { σ ( k n + j ) } { σ j } and { K σ ( k n + j ) } { K j } , j μ ¯ . Then the decomposition (2.20) is unique in a maximum number of strict contractions if and only if μ i = min z μ i 1 ( z Z + : [ ( K z < 1 j = 0 z 1 [ K j ] ) ( j = z μ [ K j ] < 1 ) ] ) , (2.21a)   μ p = min z μ p 1 ( z Z + : K z < 1 j = 0 z 1 [ K j ] ) , (2.21b) i p 1 ¯ since μ 0 = μ ¯ 0 = 0 , where p = ( max z Z + : μ = i = 1 z μ i ) . Then μ = i = 1 p μ i with μ i ( μ ) Z + , i p ¯ being unique as well. Proof Since the switching law σ : Z 0 + q ¯ Z + is given, any strictly increasing sequence S = { n k } satisfying the constraints max k Z 0 + ( n k + 1 n k ) μ < + , lim k ( n k + 1 n k ) = μ μ and T ˆ ( n k , n k + 1 ) T ˆ ( 0 , μ ) uniformly in X as k for n k , n k + 1 S , with μ = μ ( S ) and μ = μ ( S ) being finite, imply that the composite-self-mapping T ˆ ( 0 , μ ) : X X is a composite self-mapping of at least one and at most μ strict contractions. So, the maximum number p of strict limit contractions p [ 1 , μ ] in the composite T ˆ ( 0 , μ ) for the sequence { T n k } for n k S for a given sequence S exists and satisfies (2.20). Property (i) follows. 
Uniqueness of the decomposition (2.20) holds if there is a unique μ i , i p ¯ such that μ = i = 1 p μ i and T ˆ i ( μ ¯ i 1 , μ ¯ i ) : X X , i p ¯ are strict contractions. In particular, the decomposition (2.16) is trivially unique if μ i = 1 ; i p ¯ so that p = μ . Property (ii) has been proven. The proof of property (iii) is split into two parts as follows: • p = 1 and μ 1 = μ ¯ 1 = μ . Since (2.21a)-(2.21b) hold, μ 1 Z + , subject to 1 μ 1 μ exists so that T ˆ ( 0 , μ 1 ) is a unique strict contraction on X. Thus, either (2.21b) holds with 1 μ 1 = μ and the sufficiency part of the property is already proven for p = 1 with a strict contraction T ˆ ( 0 , μ 1 ) = T ˆ ( 0 , μ ) on X for p = 1 or p > 1 and (2.21a) holds for some integer i 1 . Necessity follows since if p = 1 and T ˆ ( 0 , μ ) is a strict contraction then T ˆ ( 0 , μ 1 ) = T ˆ ( 0 , μ ) for μ 1 = μ . Uniqueness is trivial since there is a single self-mapping in the decomposition of the composite self-mapping T ˆ ( 0 , μ ) on X. • p 2 . The proof of sufficiency is proven by complete induction. Assume that (2.21a)-(2.21b) hold so that μ 1 Z + , subject to 1 μ 1 μ , has to exist so that T ˆ ( 0 , μ 1 ) is a unique strict contraction on X since μ Z + exists such that T ˆ ( 0 , μ ) exists, being a strict contraction on X. Set μ ¯ 0 = μ 0 = 0 and note that μ j = μ ¯ j μ ¯ j 1 and μ ¯ j = μ j + μ ¯ j 1 = μ j + i = 0 j 1 μ i , j p ¯ , so that T ˆ ( μ ¯ j 1 , μ ¯ j ) , then μ j = μ ¯ j μ ¯ j 1 is a unique strict contraction on X for all j ( i 1 ) Z + and some i ( p ) Z + such that the set of positive integers { μ 1 , μ 2 , , μ i 1 } , subject to μ ¯ i 1 = j = 1 i 1 μ j < μ , is unique. 
Then the composite T ˆ ( 0 , μ ¯ i 1 ) = T ˆ ( μ ¯ i 2 , μ ¯ i 1 ) T ˆ ( 0 , μ 1 ) is a unique strict contraction, with μ ¯ i = j = 0 i μ i , i p ¯ , and the positive integer μ i is unique so that the set { μ 1 , μ 2 , , μ i } , subject to j = 1 i μ j < μ , is also unique and T ˆ ( 0 , μ j ) , for all j ( i ) Z + and i ( p ) Z + , and then the composite T ˆ ( 0 , μ ¯ i ) = T ˆ ( 0 , μ i ) T ˆ ( 0 , μ 1 ) are unique and strictly contractive on X. The proof follows by complete induction, i p ¯ with the existence of a unique positive integer p = p ( μ ) = ( max z Z + : μ = i = 1 z μ i ) . Then μ = i = 1 p μ i with μ i ( μ ) Z + , i p ¯ being unique. Necessity follows since if (2.20) holds and (2.22) fails for some i ( 2 ) p ¯ for a given p 2 then the factorization of the composite self-mapping T ˆ ( 0 , μ ) on X, subject to μ = i = 1 p μ i , does not consist of strict contractions. Property (iii) has been proven. □ A particular case of the decomposition of the composite self-mapping (2.20) is when such a decomposition becomes invariant and equal to T ˆ ( 0 , μ ) = T ˆ p ( 0 , μ p ) T ˆ p 1 ( 0 , μ p 1 ) T ˆ 1 ( 0 , μ 1 ) under the invariance identities T ˆ i ( 0 , μ i ) = T ˆ i ( μ ¯ i 1 , μ ¯ i ) , implied by the identities μ i = μ ¯ i μ ¯ i 1 , i p ¯ . If the collection of all the sequences S ˆ = S ˆ ( σ ) = { S = { n k } } for given x 0 X and σ : Z 0 + q ¯ Z + is considered under the given convergence hypotheses, it follows easily from the above reasoning that there is also a maximum number p ˆ of strict limit contractions p ˆ [ 1 , μ ˆ ] , where μ ˆ = max ( μ ( S ) : S S ˆ ) = max S S ˆ ( lim k ( n k + 1 n k ) : n k , n k + 1 S ) = max S S ˆ μ ( S ) (2.22) of the (in general, non-unique) decomposition: T ˆ ( 0 , μ ˆ ) = T ˆ p ˆ ( μ ˆ ¯ p ˆ 1 , μ ˆ ¯ p ˆ ) T ˆ p ˆ 1 ( μ ˆ ¯ p ˆ 2 , μ ˆ ¯ p ˆ 1 ) T ˆ 1 ( 0 , μ ˆ ¯ 1 ) . 
(2.23)

Note that the decomposition (2.20) of the composite T̂(0, μ_∞) on X is not unique in general, and hence neither, in general, is the decomposition (2.23). The decomposition (2.20) is unique if the testing sequence S converges to a finite subsequence of strict contractions such that μ_∞ = ∑_{i=1}^{p} μ_i, where p is the maximum number of strict contractions in the decomposition (2.20) and the numbers μ_i, i ∈ p̄, are unique. Uniqueness holds, in particular, in the case that p = μ_∞ and μ_i = 1, ∀i ∈ p̄. Note that the assumption (T_{σ(kn+j)} − T_{σ((k+1)n+j)}) → 0 uniformly in X as n → ∞ for some k (≤ μ_∞) ∈ Z_0^+, ∀j ∈ (k−1)‾ ∪ {0}, in Theorem 2.5(iii) implies that {T_{σ(kn+j)}} → T_j = T̂_j(0, 1), ∀j ∈ (k−1)‾ ∪ {0}. Note also that k ≤ μ_∞ is a consequence of the fact that (2.20) is a decomposition into strict contractions, while some of the limit self-mappings on X reached as a result of the uniform convergence constraint {T_{σ(kn+j)}} → T_j = T̂_j(0, 1), ∀j ∈ (k−1)‾ ∪ {0}, can be expansive.

Theorem 2.6 Consider the iterative process (2.1) under the assumptions of Lemma 2.2. Then the following properties hold:

(i) If Lemma 2.2(ii) holds then {x_{n_k}} and {x_{n_k}, x_{n_k+1}}, with n_k ∈ S, k ∈ Z_0^+, are Cauchy sequences, hence bounded, and then convergent in X if, in addition, (X, d) is complete.

(ii) If Lemma 2.2(iii) holds with ρ_k → ρ ≤ ρ′/(∑_{i=0}^{μ−1} M_0^i), ∀k ∈ Z_0^+, for some real ρ′ ∈ (0, 1), then property (i) holds.

(iii) If both Lemma 2.2(ii) and Theorem 2.5(i) hold then {x_{n_k + ∑_{i=0}^{j} μ_i}} are Cauchy sequences, hence bounded, and then convergent in X if, in addition, (X, d) is complete, for all j ∈ p̄ ∪ {0}, n_k ∈ S, k ∈ Z_0^+ and S ∈ Ŝ, where Ŝ = Ŝ(σ) = {S = {n_k}}.

(iv) Assume that (X, d) is a compact metric space and that Lemma 2.2(ii) and Theorem 2.5(i) both hold.
Then Fix(T̂(0, μ_∞)) = {z} for some z ∈ X, T̂(0, μ_∞) : X → X is a strict Picard self-mapping, and {x_{n_k + μ̄_j}} → z for any initial x_0 ∈ X, where μ̄_j = ∑_{i=1}^{j} μ_i, j ∈ p̄. If q_0c = ∅ then the above result holds if (X, d) is a complete metric space.

Proof If Lemma 2.2(ii) holds then {x_{n_k}} is a Cauchy sequence from (2.16), hence bounded and, in addition, convergent in X if (X, d) is complete. The properties are extendable to {x_{n_k}, x_{n_k+1}} from (2.11). Thus, property (i) has been proven. Property (ii) follows from property (i), since ρ_k → ρ ∈ (0, 1) as k → ∞ and ρ ≤ ρ′/(∑_{i=0}^{μ−1} M_0^i) for some real ρ, ρ′ ∈ (0, 1); then, for any given ε ∈ R^+, there is k̄ = k̄(ε) ∈ Z_0^+ such that ρ_k ≤ ρ′/(∑_{i=0}^{μ−1} M_0^i) for all k ≥ k̄. To prove property (iii), note that if, in addition, Theorem 2.5(i) holds then d(x_{n_{k+1} + ∑_{i=0}^{j} μ_i}, x_{n_k + ∑_{i=0}^{j} μ_i}) → 0, since the factors in the decomposition (2.20), T̂(0, μ_∞) = T̂(0, μ̄) = T̂_p(μ̄_{p−1}, μ̄_p) T̂_{p−1}(μ̄_{p−2}, μ̄_{p−1}) ⋯ T̂_1(0, μ̄_1), are strict contractions, ∀j ∈ p̄, and {x_{n_k}} is a Cauchy sequence from property (i): defining μ̄_j = ∑_{i=1}^{j} μ_i and ρ̄_j = K(0, μ̄_j) < 1, ∀j ∈ p̄, one gets

d(x_{n_{k+1} + μ̄_j}, x_{n_k + μ̄_j}) ≤ d(x_{n_k}, x_{n_k + μ̄_j}) + d(x_{n_k}, x_{n_{k+1}}) + d(x_{n_{k+1}}, x_{n_{k+1} + μ̄_j}) ≤ d(x_{n_k}, x_{n_{k+1}}) + (2/(1 − ρ̄_j)) max(d(x_{n_k}, x_{n_k+1}), d(x_{n_{k+1}}, x_{n_{k+1}+1})), (2.24)

∀j ∈ p̄, ∀k ∈ Z_0^+, and, from property (i), one gets

lim_{k→∞} d(x_{n_{k+1} + μ̄_j}, x_{n_k + μ̄_j}) = lim_{k→∞} d(T̂(n_k + μ̄_j, n_{k+1} + μ̄_j) x_{n_k + μ̄_j}, x_{n_k + μ̄_j}) = d(lim_{k→∞}(T̂(n_k + μ̄_j, n_{k+1} + μ̄_j) x_{n_k + μ̄_j}), lim_{k→∞} x_{n_k + μ̄_j}) = d(lim_{k→∞}(T̂(n_k + μ̄_j, n_{k+1} + μ̄_j) x_{n_k + μ̄_j}), lim_{k→∞}(T̂(0, μ̄_j) x_{n_k})) = d(lim_{k→∞}(T̂(μ̄_j, μ̄_j + μ_∞) T̂
(0, μ̄_j) x_{n_k}), lim_{k→∞}(T̂(0, μ̄_j) x_{n_k})) = d(T̂(μ̄_j, μ̄_j + μ_∞) T̂(0, μ̄_j) lim_{k→∞} x_{n_k}, T̂(0, μ̄_j) lim_{k→∞} x_{n_k}) = d(T̂(μ̄_j, μ̄_j + μ_∞) T̂(0, μ̄_j) z, T̂(0, μ̄_j) z) = 0, ∀j ∈ p̄, (2.25)

since {x_{n_k}} → z from property (i), and the limit and the distance function can be interchanged by the Lipschitz continuity of the contractive self-mappings T̂(0, μ̄_j) and T̂(μ̄_j, μ̄_j + μ_∞) T̂(0, μ̄_j), ∀j ∈ p̄. Then {x_{n_k + μ̄_j}} → z_j = T̂(0, μ̄_j) z and {x_{n_{k+1} + μ̄_j}} → z_j = T̂(μ̄_j, μ̄_j + μ_∞) T̂(0, μ̄_j) z are Cauchy sequences, ∀j ∈ p̄, and then bounded. If, in addition, (X, d) is complete then z_j ∈ X, ∀j ∈ p̄. Property (iii) has been proven for a sequence S = {n_k}; the proof for Ŝ = {{n_k}} is similar. On the other hand, one gets from (2.25)

d(T̂(0, μ_∞) z_j, z_j) = 0, ∀j ∈ p̄, (2.26)

so that T̂(0, μ_∞) z_j = z_j and z_j ∈ Fix(T̂(0, μ_∞)), ∀j ∈ p̄. Since T̂(0, μ_∞) : X → X is a strict contraction from Lemma 2.2(ii), it has a unique fixed point by the Banach contraction principle, since (X, d) is a compact metric space (i.e. totally bounded and complete; equivalently, every family of closed subsets of X with the finite intersection property has a nonempty intersection). Then z_j = z ∈ Fix(T̂(0, μ_∞)) = {z}, ∀j ∈ p̄, and T̂(0, μ_∞) z = z for some z ∈ X. As a result, {x_{n_k + μ̄_j}} → z for any initial condition x_0 ∈ X, ∀j ∈ p̄. If there is no contractive self-mapping on X which is not a strict contraction in the switching law, the above holds if (X, d) is just a complete metric space. □

Note that Theorem 2.6(iv) holds even if T̂_i(μ̄_{i−1}, μ̄_i) : X → X for some i ∈ p̄ is not contractive. However, the composite self-mapping T̂(0, μ_∞) : X → X is a strict Picard self-mapping and {x_{n_k + μ̄_j}} → z, ∀j ∈ p̄, [1]. The error estimates and convergence rate are characterized in the subsequent result.
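For a single strict contraction, the a priori and a posteriori estimates of the next theorem reduce to the classical Banach bounds (its special case K = 1, ε_n = 0), which can be checked numerically. The affine contraction g and its constant ρ below are our own illustrative choices, not taken from the paper.

```python
# Classical Banach a priori / a posteriori error bounds for a single strict
# contraction. The affine map g and its constant rho are illustrative choices.

rho = 0.5
g = lambda x: rho * x + 1.0       # strict contraction on R, fixed point z = 2
z = 2.0

x = [0.0]                         # x_0
for _ in range(20):
    x.append(g(x[-1]))            # Picard iterates x_{n+1} = g(x_n)

for n in range(1, len(x)):
    err = abs(x[n] - z)
    a_priori = rho ** n / (1.0 - rho) * abs(x[1] - x[0])
    a_posteriori = rho / (1.0 - rho) * abs(x[n] - x[n - 1])
    # Both bounds dominate the true error (with equality for an affine map).
    assert err <= a_priori + 1e-12 and err <= a_posteriori + 1e-12
```

The a priori bound predicts the error from the first step alone, while the a posteriori bound sharpens it from the last observed step; the switching-dependent constant K and the perturbation o(|ε_n|) in the theorem account for the composite, time-varying nature of (2.1).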
Theorem 2.7 Assume that ( X , d ) is a compact (complete if q 0 c = ) metric space and that Lemma  2.2(ii) and Theorem  2.5(i) jointly hold. Then the following respective a priori and a posteriori error estimates and convergence rate hold for the iterative process (2.1) for any x 0 X if Fix T ˆ ( 0 , μ ) = { z } : d ( x n , z ) ( K ρ n 1 ρ + o ( | ε n | ) ) d ( x 0 , x 1 ) , (2.27) d ( x n , z ) ( K ρ 1 ρ + o ( | ε n | ) ) d ( x n 1 , x n ) , (2.28) d ( x n , z ) ( K ρ n + o ( | ε n | ) ) d ( x 0 , z ) , (2.29) n n 0 for any given ε ( 0 , 1 ρ ) R + , some real convergent sequence { ε n } 0 , with | ε n | ε , n n 0 , and some n 0 = n 0 ( ε ) Z + and K ( 1 ) = K ( ε ) R + , where ρ k ρ / ( i = 0 μ M 0 i ) converges { ρ k } ρ ( 0 , 1 ) . Proof Given any ε ( 0 , 1 ρ ) R + , and since { ρ n k } ρ from Lemma 2.2(ii), there are n 0 i = n 0 i ( ε ) Z + ( i = 1 , 2 , 3 ) and K a ( 1 ) = K a ( ε ) R + and K b ( 1 ) = K b ( ε ) R + such that ρ n k = ρ + ε n k ρ + ε with | ε n k | ε , n k n 01 (since { ε n k } 0 ) and d ( x n 02 , x n 02 + 1 ) K a ρ n 02 d ( x 0 , x 1 ) , d ( x n 03 , z ) K b ρ n 03 d ( x 0 , z ) , (2.30) where z X is the unique element in the set Fix T ˆ ( 0 , μ ) , since n 02 and n 03 are finite and min ( K a , K b ) 1 , one gets for the testing sequence S = S ( σ ) = { n k } of a solution of the iterative scheme (2.1) max ( d ( x n 02 , x n 02 + 1 ) , d ( x n k 1 , x n k ) , d ( x n k 1 , z ) ) max ( K a , K b ) max ( ρ n 02 d ( x 0 , x 1 ) , d ( x n k 1 , x n k ) , ρ n 03 d ( x n k 1 , z ) ) , (2.31) n n 02 since { x n k } z (X) from Theorem 2.5(i) and Theorem 2.6(iv) which holds since Lemma 2.2(ii) and Theorem 2.5(i) hold and ( X , d ) is a compact (it suffices it be complete if q 0 c = ) metric space. 
Then one gets by taking K = max ( K a , K b ) and n 0 = max ( n 01 , n 02 , n 03 ) : d ( x n k , z ) ( ρ n k n 02 + o ( | ε n k | ) 1 ρ ε ) d ( x n 02 , x n 1 ) K a ( ρ n k + ρ n 02 o ( | ε n k | ) 1 ρ ε ) d ( x 0 , x 1 ) ( K a ρ n k 1 ρ ε + o ( | ε n k | ) ) d ( x 0 , x 1 ) ( K ρ n k 1 ρ + o ( | ε n k | ) ) d ( x 0 , x 1 ) , (2.32) d ( x n k , z ) ( K ρ n k 1 ρ + o ( | ε n k | ) ) d ( x 0 , z ) ( ρ 1 ρ ε + o ( | ε n k | ) ) d ( x n k 1 , x n k ) ( K ρ 1 ρ + o ( | ε n k | ) ) d ( x n k 1 , x n k ) , (2.33) d ( x n k , z ) ( ρ n k n 03 + ρ n 03 o ( | ε n k | ) 1 ρ ε ) d ( x n 03 , z ) ( K b ρ n k 1 ρ ε + o ( | ε n k | ) ) d ( x 0 , z ) ( K ρ n k 1 ρ + o ( | ε n k | ) ) d ( x 0 , z ) , (2.34) since 1 1 ρ ε < 1 1 ρ , K a ρ n 02 o ( | ε n k | ) 1 ρ ε = o ( | ε n k | ) and K b ρ n 03 o ( | ε n k | ) 1 ρ ε = o ( | ε n k | ) . Now, denote in (2.20) the Lipschitz constants of the limit self-mappings on X which define the composite limit self-mapping T ˆ ( 0 , μ ) on X by K j = K j ( 0 , μ j ) , j p ¯ . 
Now, note from (2.32) and the triangle inequality that the sequence { x n } , which contains the testing subsequence { x n k } , satisfies for any j n k + 1 n k 1 ¯ , k ( n 0 ) Z + : d ( x n k + j , z ) d ( x n k , z ) + i = 1 j d ( x n k + i , x n k + i 1 ) ( 1 + K 1 ( n k , n k + 1 ) + K 1 ( n k , n k + 1 ) K 2 ( n k + 1 , n k + 2 ) + + K 1 ( n k , n k + 1 ) K 2 ( n k + 1 , n k + 2 ) K j ( n k + j 1 , n k + j ) ) d ( x n k , z ) [ ( 1 + K 1 + K 1 K 2 + + K 1 K 2 K 3 K j ) + ( K 1 ( n k , n k + 1 ) K 1 ) + ( K 1 ( n k , n k + 1 ) K 2 ( n k + 1 , n k + 2 ) K 1 K 2 ) + + ( K 1 ( n k , n k + 1 ) K 2 ( n k + 1 , n k + 2 ) K j ( n k + j 1 , n k + j ) K 1 K 2 K 3 K j ) ] × ρ ( n k + j n ) [ ( K ρ n 1 ρ + o ( ε n ) ) + | o ( | ε n k | ) o ( | ε n | ) | ] d ( x 0 , x 1 ) [ 1 + p ( max 1 j p ( K j , K j p ) + max 1 j p ( K ( n k , n k + j ) i = 1 j [ K i ] ) ) + + ( K 1 ( n k , n k + 1 ) K 2 ( n k + 1 , n k + 2 ) K j ( n k + j 1 , n k + j ) K 1 K 2 K 3 K j ) ] ρ ( n k + j n ) × [ ( K ρ n 1 ρ + o ( | ε n | ) ) + | o ( | ε n k | ) o ( | ε n | ) | ] d ( x 0 , x 1 ) M ( n k ) [ ( K ρ n 1 ρ + o ( | ε n | ) ) ] d ( x 0 , x 1 ) + M ( n k ) | o ( | ε n k | ) o ( | ε n | ) | d ( x 0 , x 1 ) (2.35) for any j n k + 1 n k 1 ¯ , where ε j = ρ j ρ with ρ ( 0 , 1 ) , j Z 0 + , and M ( n k ) = max 1 j p ( [ 1 + p ( max 1 j p ( K j , K j p ) + max 1 j p ( K ( n k , n k + j ) i = 1 j [ K i ] ) ) + + ( K 1 ( n k , n k + 1 ) K 2 ( n k + 1 , n k + 2 ) K j ( n k + j 1 , n k + j ) K 1 K 2 K 3 K j ) ] ) < + (2.36) and one gets from (2.35) for any integer n = n k + j , j n k + 1 n k 1 ¯ , k Z 0 + with K = K sup k n 0 M ( n k ) = max ( K a , K b ) sup k n 0 M ( n k ) that the a priori error estimate satisfies d ( x n , z ) [ ( K M ( n k ) ρ n 1 ρ ) + o ( | ε n | + | ε n k | ) ] d ( x 0 , x 1 ) , n [ n k , n k + 1 ) , k Z 0 + (2.37) and, since { ε n k : ( n k n 0 k Z 0 + ) } and { ε n : n n 0 } subject to { ε n k : ( n k n 0 k Z 0 + ) } { ε n : n n 0 } , with both sequences converging to zero, d ( x 
n , z ) [ ( K ρ n 1 ρ ) + o ( | ε n | ) ] d ( x 0 , x 1 ) , n [ n k , n k + 1 ) , k Z 0 + (2.38) and ( d ( x n , z ) / d ( x 0 , x 1 ) ) 0 as n at exponential rate so that (2.27) is proven. Closely analogous proofs to that of (2.27) follow directly for (2.28) and (2.29). □ A discussion of cases of interest concerning the above result follows. Remark 2.8 (1) Theorem 2.7 refers to the case when { T ˆ ( n k , n k + 1 ) } T ˆ ( 0 , μ ) with T ˆ ( 0 , μ ) being a composite strictly contractive self-mapping on X of the form (2.20), i.e. possessing p (non-necessarily strictly contractive) fixed configurations which are the limit of the switching law σ : Z 0 + q ¯ Z + . T ˆ ( 0 , μ ) is a strict Picard self-mapping on X as a result. Also, { T ˆ ( n + 1 , 0 ) x 0 } z , satisfying (2.27)-(2.29), and { T ˆ n ( 0 , μ ) x 0 } z , satisfying (2.27)-(2.29) with K = 1 and o ( | ε n | ) being replaced with 0, for any given x 0 X , where Fix T ˆ ( 0 , μ ) = { z } . (2) A particular case of interest of Theorem 2.7 is that when T ˆ ( 0 , μ ) = T ˆ ( 0 , 1 ) = T ˆ j ( 0 , 1 ) = T j , so that { T σ ( n ) } T j , with μ = 1 and some j q ¯ (one of the configurations of the switching law) so that the limit self-mapping T j on X is a strict contraction, and then a strict Picard self-mapping, and the switching law σ : Z 0 + q ¯ Z + is such that { T σ ( n ) } T j uniformly in X. Since the testing switching { n k } has the property lim k ( n k + 1 n k ) = 1 , lim k M ( n k ) = lim n M ( n ) = 1 so that an admissible choice of K = max ( K a , K b ) can be made in (2.27)-(2.29). 
The interpretation of the presence of the real constant K 1 in the error estimates and convergence rate is due to the fact that the sequence of composite self-mappings { T ˆ ( 0 , n ) } governed by the switching law to build the iterative scheme x n + 1 = T n x n = T ˆ ( n + 1 , 0 ) x 0 for any given x 0 X is not of the form { T j n } , while { T σ ( n ) } converges to a uniform limit strictly contractive T j on X with a unique fixed point z j X subject to lim k ( n k + 1 n k ) = 1 and { T ˆ ( n + 1 , 0 ) x 0 } z j and { T j n x 0 } z j for any given x 0 X according to (2.27)-(2.29) with the replacement z z j . (3) Particular cases of interest of Remark 2.8(1)-(2) are when fixed points z = 0 , respectively, z j = 0 (which are also, in particular, globally asymptotically stable equilibrium points if the iterative scheme refers to a dynamic system). Thus, the iterative scheme converges exponentially fast to zero. If the formalism is concerned with a Banach space ( X , ) endowed with the norm , with X being a linear space, we can use as metric the norm-induced metric so that (2.27)-(2.29) become x n ( K ρ n 1 ρ + o ( | ε n | ) ) x 1 x 0 ( K T σ ( 0 ) 1 ρ n 1 ρ + o ( | ε n | ) ) x 0 , (2.39) x n ( K ρ 1 ρ + o ( | ε n | ) ) x n x n 1 ( K T ˆ ( n , 0 ) T ˆ ( n 1 , 0 ) ρ 1 ρ + o ( | ε n | ) ) x 0 , (2.40) x n ( K ρ n + o ( | ε n | ) ) x 0 , (2.41) n n 0 . If a fixed point of z 0 then (2.39)-(2.41) hold in closed forms under the replacements x n x n z , n n 0 and x 0 x 0 z . (4) Note that in the case that ρ = 1 , the limit self-mapping T ˆ ( 0 , μ ) is only guaranteed to be non-expansive. Then (2.39) and (2.40) do not hold. However, one gets from Theorem 2.7 and (2.41), with the replacements x n x n z , n n 0 and x 0 x 0 z , x n z ( K + o ( | ε n | ) ) x 0 z , n n 0 (2.42) provided that the non-expansive self-mapping T ˆ ( 0 , μ ) on X has a fixed point z X . 
Then one has |lim sup_{n→∞} ‖x_n‖ − ‖z‖| ≤ lim sup_{n→∞} ‖x_n − z‖ ≤ K ‖x_0 − z‖ ≤ K(‖x_0‖ + ‖z‖), so that lim sup_{n→∞} ‖x_n‖ ≤ K ‖x_0‖ + (1 + K) ‖z‖ and {x_n} is bounded for any initial x_0 ∈ X, although convergence to the fixed point is not guaranteed. This is a well-known result from fixed point theory for non-expansive self-mappings and a well-known (non-asymptotic) global stability result in related problems of stability of dynamic systems, which can have a global attractor which can be either a fixed point, which is also an equilibrium point which is not asymptotically stable, or a stable limit cycle. If z = 0, then the above result takes the simpler form lim sup_{n→∞} ‖x_n‖ ≤ K ‖x_0‖. Remark 2.9 The given convergence properties to fixed points also hold if {T̂(n_k, n_{k+1})} → T̂(0, μ) point-wise in X and T̂(0, μ) is a strict contraction on X. Assume that {T̂(n_k, n_{k+1})} → T̂(0, μ) with {T̂(n_k, n_{k+1}) x_{n_k}} → x* and {T̂(0, μ) x_0} → z, so that d(x*, z) ≤ d(x*, T̂(n_k, n_{k+1}) x_{n_k}) + d(T̂(n_k, n_{k+1}) x_{n_k}, T̂(0, μ) x_{n_k}) + d(T̂(0, μ) x_{n_k}, z) + d(T̂(0, μ) z, z) ≤ d(x*, T̂(n_k, n_{k+1}) x_0) + d(T̂(n_k, n_{k+1}) x_{n_k}, T̂(n_k, n_{k+1}) x_0) + d(T̂(n_k, n_{k+1}) x_{n_k}, T̂(0, μ) x_{n_k}) + d(T̂(0, μ) x_{n_k}, z) + d(T̂(0, μ) z, z), k ∈ Z_{0+}. (2.43) Then {d(T̂(n_k, n_{k+1}) x_{n_k}, T̂(0, μ) x_{n_k})} → 0 from point-wise convergence, {d(T̂(0, μ) x_{n_k}, z)} → 0, since z is a fixed point of T̂(0, μ), and {d(x*, T̂(n_k, n_{k+1}) x_{n_k})} → 0 by construction, while d(T̂(0, μ) z, z) = 0. Taking the limit as k → ∞ in (2.43) yields x* = z. Example 2.10 Retaking the simple Example 2.1, the following conclusions arise. Assume that {T_{σ(n)}} → T̂(0, μ) = (T_1 T_3 T_2)^ω T_2^g, which is a composite self-mapping composed of three, in general composite, self-mappings, subject to nonzero integers ω > 1 and g > ω − 1 with αβ < 1.
In this case, there is a limit self-mapping T̂(0, μ) which is a strict contraction, and then a strict Picard mapping, and has a unique fixed point {0} to which any iteration built with (2.1) for any initial real condition converges. However, the decomposition in a maximum number of strict contractions is not unique, since 0 ≤ g_1 ≤ g can be chosen arbitrarily in the subsequent decomposition by taking into account that the T_i: X → X, i = 1, 2, 3, commute: T̂(0, μ) = (T_1 T_3 T_2)^{ω−1} (T_1 T_3 T_2^{g+1}) = T̂_1(0, μ_1) T̂_2(μ_1, μ_1 + μ_2) = (T_1 T_3 T_2^{g+1}) (T_1 T_3 T_2)^{ω−1} = T̂_1(0, μ_{01}) T̂_2(μ_{01}, μ_{01} + μ_{02}) = (T_1 T_3 T_2^g) (T_1 T_3 T_2)^{ω−1} T_2 = T̂_1(0, μ_{11}) T̂_2(μ_{11}, μ_{11} + μ_{12}) (2.44) with μ = 3ω + g, μ_2 = μ − μ_1 and μ_1 = 3(ω − 1), while μ_{01} = g + 3 and μ_{11} = g + 2 < μ_{01}, so that the three decompositions use different splitting points whenever ω > 1. 3 Numerical examples This section contains some numerical examples regarding the theoretical results stated in the previous section. Two examples are discussed. The first one considers a scalar time-invariant nonlinear switched system while the second one deals with a linear time-varying switched system. 3.1 Scalar nonlinear switched system Consider the nonlinear discrete-time dynamic system given by (2.1) with σ(t) ∈ {1, 2, 3} and T_1(x_n) = x_n + 1/x_n, T_2(x_n) = 0.95 x_n e^{−x_n} and T_3(x_n) = 1.2 tanh(x_n). Note that T_1(x_n) is a mixed-type operator, being contractive for large x_n while expansive for small values (it is also non-strictly contractive in [1, ∞)), T_2(x_n) is strictly contractive with K_2 = 0.95, and T_3(x_n) is expansive with K_3 = 1.2. The stability of the nonlinear switched system (2.1) depends on the switching law. For instance, for the switching law depicted in Figure 1, the solution of the discrete-time system is shown in Figure 2. Figure 1 Switching law.
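The behavior discussed around Figures 1-4 is easy to reproduce in a few lines. A minimal sketch of the iteration (2.1) follows; the operator formulas are my readings of the flattened expressions above (notably the sign in the exponential), so treat them as assumptions rather than the paper's exact definitions:

```python
import math

# Operator readings recovered from the flattened formulas; assumptions, not
# the paper's verbatim definitions.
def T1(x):  # mixed type: expansive near 0, (non-strictly) contractive on [1, inf)
    return x + 1.0 / x

def T2(x):  # strict contraction on [0, inf) with K2 = 0.95
    return 0.95 * x * math.exp(-x)

def T3(x):  # expansive near the origin with K3 = 1.2
    return 1.2 * math.tanh(x)

OPS = {1: T1, 2: T2, 3: T3}

def iterate(x0, switching):
    """Run x_{n+1} = T_{sigma(n)}(x_n) along a given switching sequence."""
    xs = [x0]
    for mode in switching:
        xs.append(OPS[mode](xs[-1]))
    return xs

# Staying in the contractive mode 2 drives the state toward the fixed point 0.
traj = iterate(1.0, [2] * 20)
```

Interleaving bursts of modes 1 and 3 with recovery phases of mode 2 reproduces the bounded and unbounded regimes that the switching rules of Figures 1 and 3 generate.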
Figure 2 Solution trajectory for the switching rule depicted in Figure 1. The contractive and expansive phases generated by the switching rule can be clearly appreciated in Figure 2. Thus, when the contractive operator T_2(x_n) is activated by the switching rule, the solution trajectory converges towards zero, while when the non-contractive operators are activated, the solution grows. Nevertheless, the combination of these operators according to the switching rule provides a globally stable system, with a trajectory that remains bounded at all times. On the other hand, if the switching rule is given by Figure 3, the solution trajectory is displayed in Figure 4. Figure 3 Switching rule. Figure 4 Solution trajectory for the switching rule depicted in Figure 3. In this case, the trajectory grows very rapidly, as Figure 4 shows. The time scale in this figure has been reduced to handle the large values of the signal. Hence, a serious problem arises in this example: how can the stability of the switched system be proved under different switching rules? The results stated in Section 2 can be used to answer this question, as the following example shows. 3.2 Linear time-varying switched dynamic system This example shows how the previously presented theoretical results can be applied to study the asymptotic stability of time-varying dynamical systems by means of their discretization. This problem is of great importance in practice, since many control systems are currently designed in a discrete-time set-up, which leads to iterative schemes of the form (2.1) despite the system being originally in continuous time. Therefore, the results introduced above might be used in a highly practical context. For this purpose, a linear time-varying switched dynamical system will be considered. These types of systems are described by a number of different parameterizations which are themselves time-varying.
The switching rule selects the active parameterization during each time interval, as discussed in Section 2. The stability of time-varying systems is far from trivial and offers a variety of behaviors different from those of linear time-invariant ones. For instance, a linear time-varying system may possess constant stable eigenvalues (i.e. constant eigenvalues with negative real part) and nevertheless be asymptotically unstable in the Lyapunov sense [51]. The tools provided by fixed point theory are useful in this situation, where other techniques become intricate. Therefore, consider a linear time-varying system of the form x˙(t) = A_{σ(t)}(t) x(t), (3.1) where σ(t) ∈ {1, 2, 3} and the dynamics matrices are given by A_{σ(t)}(t) = (cos t, sin t; −sin t, cos t) B_{σ(t)} (cos t, −sin t; sin t, cos t), with B_1 = (1, 4; 0, 1), B_2 = (−1, 4; −1, −1), B_3 = (0, 1; 0, −1) (3.2) (rows separated by semicolons). It is easy to verify that the matrices (3.2) are of the form A_{σ(t)} = e^{−Ωt} B_{σ(t)} e^{Ωt} [52], with Ω = (0, −1; 1, 0). In this way, the solution of (3.1) is given by x(t) = e^{−Ωt} e^{(Ω + B_{σ(t_i)})(t − t_i)} e^{Ωt_i} x(t_i) (3.3) [52], with σ(t_i) = σ(t_i+) and σ(t) = σ(t_i) for t ∈ [t_i, t_{i+1}), i ∈ Z_{0+}. The eigenvalues of Ω are ±j, implying that the stability of (3.3) is directly influenced by that of the matrices (Ω + B_{σ(t)}). Thus, we have spec(Ω + B_1) = {1 − √3, 1 + √3}, spec(Ω + B_2) = {−1, −1}, spec(Ω + B_3) = {−1, 0}. Therefore, the first parameterization leads to an unstable system, the second to a stable one, while the last one is marginally stable. Under these circumstances, the stability of the switched system relies directly on the switching law. That is, there will be switching laws under which the switched system is asymptotically stable, while for others it may be unstable.
To verify the stability of (3.1) under different switching laws, we will discretize the system so as to generate an iterative scheme of the form (2.1). Thus, from (3.3), if the sampling period is denoted by h and x(nh) = x_n, we have after a little algebra x_{n+1} = e^{−Ωnh} e^{−Ωh} e^{(Ω + B_{σ(nh)})h} e^{Ωnh} x_n, which fits into the iterative structure (2.1) with operators T_{σ(n)} = e^{−Ωnh} e^{−Ωh} e^{(Ω + B_{σ(nh)})h} e^{Ωnh}. The complexity of these operators for the time-varying system (3.1)-(3.2) is remarkable. Also, notice that e^{−Ωnh}, e^{−Ωh} and e^{Ωnh} all have eigenvalues with absolute value unity. Therefore, the dynamics of the system is essentially controlled by e^{(Ω + B_1)h}, e^{(Ω + B_2)h} and e^{(Ω + B_3)h} according to the switching rule (i.e. there are only three different acting operators). For simulation purposes, consider h = 0.1 seconds. Equations (2.2) are fulfilled for these operators with K_1 = 1.076, K_2 = 0.905 and K_3 = 1, corresponding to the expansive, strictly contractive and non-expansive (and not strictly contractive) cases (for the Euclidean distance). Now, Lemma 2.2 can be applied to establish the asymptotic stability of the origin for the switched system under different switching rules. For this purpose, assume that the switching signal is synchronized with the sampling period (i.e. the change in the system's parameterization occurs at the sampling instants). Let us consider these two cases: (a) the switching rule is periodic with a period of 10 seconds, i.e. it is the periodic extension of the signal σ(t) = 1 for 0 ≤ t < 6, σ(t) = 2 for 6 ≤ t < 8, σ(t) = 3 for 8 ≤ t < 10; and (b) the switching rule is periodic with the same period, given by σ(t) = 2 for 0 ≤ t < 6, σ(t) = 1 for 6 ≤ t < 8, σ(t) = 3 for 8 ≤ t < 10. Consider now the sequence {n_k} satisfying (n_{k+1} − n_k) h = 10, so that these sample indices mark the completion of each period of the switching signal.
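The per-period check of Lemma 2.2 reduces to multiplying the per-sample constants over one switching period. With h = 0.1 s, a 10 s period of rule (a) spends 60 samples in mode 1, 20 in mode 2 and 20 in mode 3, and rule (b) swaps modes 1 and 2. A quick numerical verification, using only the constants quoted above:

```python
K1, K2, K3 = 1.076, 0.905, 1.0  # per-sample gains of the three discretized operators

def period_gain(schedule):
    """Product of the per-sample gain constants over one switching period."""
    g = 1.0
    for K, samples in schedule:
        g *= K ** samples
    return g

# Rule (a): 6 s in mode 1, 2 s in mode 2, 2 s in mode 3 (h = 0.1 s)
K_a = period_gain([(K1, 60), (K2, 20), (K3, 20)])
# Rule (b): modes 1 and 2 swapped
K_b = period_gain([(K1, 20), (K2, 60), (K3, 20)])
```

K_a comes out near 11 (the text quotes 10.94, presumably from unrounded constants) and K_b near 0.011, so rule (a) gives a per-period gain above 1 (unstable) while rule (b) gives a gain well below 1 (asymptotically stable).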
For case (a) we have K(n_k, n_{k+1}) = 1.076^60 · 0.905^20 · 1^20 = 10.94 > 1; therefore, switching rule (a) leads to an unstable switched system according to Lemma 2.2(iii). On the other hand, for case (b) we have K(n_k, n_{k+1}) = 1.076^20 · 0.905^60 · 1^20 = 0.01 < 1, which leads to an asymptotically stable system according to Lemma 2.2(i). These results are corroborated by the numerical simulations shown in Figures 5 and 6 for the initial condition x_0 = [1 1]^T. Figure 5 State variables evolution under switching rule (a). Figure 6 State variables evolution under switching rule (b). It can be appreciated in Figure 5 that the absolute values of the state variables grow with time when switching rule (a) is employed, while Figure 6 shows the convergence to zero of the state variables (and the asymptotic stability of the origin) when switching rule (b) is used. Switching rules (a) and (b) are depicted for convenience in Figures 7 and 8. Figure 7 Switching rule (a). Figure 8 Switching rule (b). Hence, the stability of the linear time-varying switched system is analyzed by just calculating the product of some constants, which eases the determination of the stability properties of the system. Declarations Acknowledgements The authors are very grateful to the Spanish Government for Grant DPI2012-30651 and to the Basque Government and UPV/EHU for Grants IT378-10, SAIOTEK S-PE13UN039 and UFI 2011/07. The authors are also grateful to the referees for their suggestions. Authors' Affiliations (1) Institute of Research and Development of Processes, University of the Basque Country (2) Department of Telecommunications and Systems Engineering, Universitat Autònoma de Barcelona (UAB) References 1. Berinde V: Iterative Approximation of Fixed Points. Lecture Notes in Mathematics 1912. Springer, Heidelberg; 2002. 2. Farnum NR: A fixed point method for finding percentage points. Appl. Stat. 1991, 40(1):123–126. 10.2307/2347910. 3.
Antia HM: Numerical Methods for Scientists and Engineers. Birkhäuser, Boston; 2002. 4. Miñambres JJ, De la Sen M: Application of numerical methods to the acceleration of the convergence of adaptive-control algorithms: the one-dimensional case. Comput. Math. Appl. 1986, 12(1):1049–1056. 5. Soleymani F, Sharifi M, Sateyi S, Khaksar Haghani F: A class of Steffensen-type iterative methods for nonlinear systems. J. Appl. Math. 2014, Article ID 705375. 6. Samreen M, Kamram T, Shahzad N: Some fixed point theorems in b-metric space endowed with graph. Abstr. Appl. Anal. 2013, Article ID 967132. 10.1155/2013/967132. 7. Yao Y, Liou YC, Yao JC: Convergence theorem for equilibrium problems and fixed point problems of infinite family of nonexpansive mappings. Fixed Point Theory Appl. 2007, Article ID 64363. 8. Nashine HK, Khan MS: An application of fixed point theorem to best approximation in locally convex space. Appl. Math. Lett. 2010, 23:121–127. 10.1016/j.aml.2009.06.025. 9. Yao YH, Liou YC, Kang SM: An iterative approach to mixed equilibrium problems and fixed point problems. Fixed Point Theory Appl. 2013, Article ID 183. 10.1186/1687-1812-2013-183. 10. Cho YJ, Kadelburg Z, Saadati R, Shatanawi W: Coupled fixed point theorems under weak contractions. Discrete Dyn. Nat. Soc. 2012, Article ID 184534. 10.1155/2012/184534. 11. Cho SY, Li WL, Kang SM: Convergence analysis of an iterative algorithm for monotone operators. J. Inequal. Appl. 2013, Article ID 199. 10.1186/1029-242X-2013-199. 12. Hussain N, Cho YJ: Weak contractions, common fixed points, and invariant approximations. J. Inequal. Appl. 2009, Article ID 390634. 13. Jleli M, Karapınar E, Samet B: Best proximity point results for MK-proximal contractions. Abstr. Appl. Anal.
2012, Article ID 193085. 10.1155/2012/193085. 14. Jleli M, Samet B: Best proximity point results for MK-proximal contractions on ordered sets. J. Fixed Point Theory Appl. 2013. 10.1007/s11784-013-0125-4. 15. De la Sen M, Agarwal RP: Fixed point-type results for a class of extended cyclic self-mappings under three general weak contractive conditions of rational type. Fixed Point Theory Appl. 2011, Article ID 102. 10.1186/1687-1812-2011-102. 16. Karapınar E: On best proximity point of ψ-Geraghty contractions. Fixed Point Theory Appl. 2013, Article ID 200. 10.1186/1687-1812-2013-200. 17. Gupta A, Rajput SS, Kaurav PS: Coupled best proximity point theorem in metric spaces. Int. J. Anal. Appl. 2014, 4(2):201–215. 18. De la Sen M: Fixed point and best proximity theorems under two classes of integral-type contractive conditions in uniform metric spaces. Fixed Point Theory Appl. 2010, Article ID 510974. 19. Jleli M, Karapınar E, Samet B: A best proximity point result in modular spaces with the Fatou property. Abstr. Appl. Anal. 2013, Article ID 329451. 10.1155/2013/329451. 20. Pathak HK, Shahzad N: Some results on best proximity points for cyclic mappings. Bull. Belg. Math. Soc. Simon Stevin 2013, 20(3):559–572. 21. De la Sen M, Karapınar E: Best proximity points of generalized semicyclic impulsive self-mappings: applications to impulsive differential and difference equations. Abstr. Appl. Anal. 2013, Article ID 505487. 10.1155/2013/505487. 22. Dey D, Kumar Laha A, Saha M: Approximate coincidence point of two nonlinear mappings. J. Math. 2013, Article ID 962058. 10.1155/2013/962058. 23. Dey D, Saha M: Approximate fixed point of Reich operator. Acta Math. Univ. Comen. 2013, LXXXII(1):119–123. 24.
De la Sen M, Ibeas A: Asymptotically non-expansive self-maps and global stability with ultimate boundedness of dynamic systems. Appl. Math. Comput. 2013, 219(22):10655–10667. 10.1016/j.amc.2013.04.009. 25. De la Sen M, Agarwal RP, Nistal N: Non-expansive and potentially expansive properties of two modified p-cyclic self-maps in metric spaces. J. Nonlinear Convex Anal. 2013, 14(4):661–686. 26. Karapınar E: Best proximity points of Kannan type cyclic weak ϕ-contractions in ordered metric spaces. An. Univ. 'Ovidius' Constanţa, Ser. Mat. 2012, 20(3):51–64. 27. Karapınar E: Best proximity points of cyclic mappings. Appl. Math. Lett. 2012, 25(11):1761–1766. 10.1016/j.aml.2012.02.008. 28. Jleli M, Karapınar E, Samet B: A short note on the equivalence between 'best proximity' points and 'fixed point' results. J. Inequal. Appl. 2014, Article ID 246. 29. Ashyralyev A, Koksal ME: Stability of a second order of accuracy difference scheme for hyperbolic equation in a Hilbert space. Discrete Dyn. Nat. Soc. 2007, Article ID 57491. 10.1155/2007/57491. 30. Ashyralyev A, Sharifov YA: Optimal control problem for impulsive systems with integral boundary conditions. AIP Conference Proceedings 1470, First International Conference on Analysis and Applied Mathematics (ICAAM) 2012, 8–11. 31. Li XD, Akca H, Fu XL: Uniform stability of impulsive infinite delay differential equations with applications to systems with integral impulsive conditions. Appl. Math. Comput. 2013, 219(14):7329–7337. 10.1016/j.amc.2012.12.033. 32. Stamov G, Akca H, Stamova I: Uncertain dynamic systems: analysis and applications. Abstr. Appl. Anal. 2013, Article ID 863060. 10.1155/2013/863060. 33.
Li H, Sun X, Karimi HR, Niu B: Dynamic output-feedback passivity control for fuzzy systems under variable sampling. Math. Probl. Eng. 2013, Article ID 767093. 10.1155/2013/767093. 34. Xiang Z, Liu S, Mahmoud MS: Robust H∞ reliable control for uncertain switched neutral systems with distributed delays. IMA J. Math. Control Inf. 2013. 10.1093/imamci/dnt031. 35. Tang J, Huang C: Impulsive control and synchronization analysis of complex dynamical networks with non-delayed and delayed coupling. Int. J. Innov. Comput. Inf. Control 2013, 9(11):4555–4564. 36. Li S, Xiang Z, Karimi HR: Stability and L1-gain controller design for positive switched systems with mixed time-varying delays. Appl. Math. Comput. 2013, 222(1):507–518. 37. De la Sen M: On positivity of singular regular linear time-delay time-invariant systems subject to multiple internal and external incommensurate point delays. Appl. Math. Comput. 2007, 190(1):382–401. 10.1016/j.amc.2007.01.053. 38. De la Sen M: About the positivity of a class of hybrid dynamic linear systems. Appl. Math. Comput. 2007, 189(1):853–868. 39. De la Sen M: Total stability properties based on fixed point theory for a class of hybrid dynamic systems. Fixed Point Theory Appl. 2009, Article ID 826438. 10.1155/2009/826438. 40. Marchenko VM: Hybrid discrete-continuous systems: I. Stability and stabilizability. Differ. Equ. 2012, 48(12):1623–1638. 10.1134/S0012266112120087. 41. Marchenko VM: Observability of hybrid discrete-continuous time systems. Differ. Equ. 2013, 49(11):1389–1404. 10.1134/S0012266113110074. 42. Marchenko VM: Hybrid discrete-continuous systems: II. Controllability and reachability. Differ. Equ. 2013, 49(1):112–125.
10.1134/S0012266113010114. 43. Bilbao-Guillerna A, De la Sen M, Ibeas A, Alonso-Quesada S: Robustly stable multiestimation scheme for adaptive control and identification with model reduction issues. Discrete Dyn. Nat. Soc. 2005, 2005(1):31–67. 10.1155/DDNS.2005.31. 44. De la Sen M: On some structures of stabilizing control laws for linear and time-invariant systems with bounded point delays and unmeasurable states. Int. J. Control 1994, 59(2):529–541. 10.1080/00207179408923091. 45. De la Sen M, Ibeas A: Stability results for switched linear systems with constant discrete delays. Math. Probl. Eng. 2008, Article ID 543145. 10.1155/2008/543145. 46. De la Sen M, Ibeas A: On the global asymptotic stability of switched linear time-varying systems with constant point delays. Discrete Dyn. Nat. Soc. 2008, Article ID 231710. 10.1155/2008/231710. 47. De la Sen M, Ibeas A, Alonso-Quesada S: Asymptotic hyperstability of a class of linear systems under impulsive controls subject to an integral Popovian constraint. Abstr. Appl. Anal. 2013, Article ID 382762. 10.1155/2013/382762. 48. Niamsup P, Rojsiraphisa T, Rajchakit M: Robust stability and stabilization of uncertain switched discrete-time systems. Adv. Differ. Equ. 2012, Article ID 134. 10.1186/1687-1847-2012-134. 49. Niamsup P, Rajchakit G: New results on robust stability and stabilization of linear discrete-time stochastic systems with convex polytopic uncertainties. J. Appl. Math. 2013, Article ID 368259. 10.1155/2013/368259. 50. Darwish MA, Henderson J: Existence and asymptotic stability of solutions of a perturbed quadratic fractional integral equation. Fract. Calc. Appl. Anal. 2009, 12(1):71–86. 51. De la Sen M: Robust stability of a class of linear time-varying systems. IMA J. Math.
Control Inf. 2002, 19(4):399–418. 10.1093/imamci/19.4.399. 52. Tsakalis KS, Ioannou PA: Linear Time-Varying Systems. Prentice Hall, New York; 1993. Copyright © De la Sen and Ibeas; licensee Springer. 2014. This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
Wave-making resistance

From Wikipedia, the free encyclopedia

For wave drag on supersonic aircraft due to shock waves, see wave drag.

Wave-making resistance is a form of drag that affects surface watercraft, such as boats and ships, and reflects the energy required to push the water out of the way of the hull. This energy goes into creating the wave.

Physics

[Figure: Graph of power versus speed for a displacement hull, with a mark at a speed–length ratio of 1.34]

For small displacement hulls, such as sailboats or rowboats, wave-making resistance is the major source of the marine vessel drag. A salient property of water waves is dispersiveness; i.e., the longer the wave, the faster it moves. Waves generated by a ship are affected by her geometry and speed, and most of the energy given by the ship for making waves is transferred to water through the bow and stern parts. Simply speaking, these two wave systems, i.e., bow and stern waves, interact with each other, and the resulting waves are responsible for the resistance. E.g., the phase speed of deep-water waves is proportional to the square root of the wavelength of the generated waves, and the length of a ship causes the difference in phases of waves generated by bow and stern parts. Thus, there is a direct relationship between the waterline length (and thus wave propagation speed) and the magnitude of the wave-making resistance. A simple way of considering wave-making resistance is to look at the hull in relation to bow and stern waves. If the length of a ship is half the length of the waves generated, the resulting wave will be very small due to cancellation, and if the length is the same as the wavelength, the wave will be large due to enhancement. The phase speed c of waves is given by the formula c = √(gλ/2π), where λ is the length of the wave and g the gravitational acceleration.
Substituting in the appropriate value for g yields the hull-speed equation v ≈ 1.34 × √l, with v in knots and l the waterline length in feet, or its metric equivalents. These values, 1.34, 2.5 and 6 (depending on the units used), are often used in the hull speed rule of thumb used to compare potential speeds of displacement hulls, and this relationship is also fundamental to the Froude number, used in the comparison of different scales of watercraft. When the vessel exceeds a "speed–length ratio" (speed in knots divided by square root of length in feet) of 0.94, it starts to outrun most of its bow wave, and the hull actually settles slightly in the water as it is now only supported by two wave peaks. As the vessel exceeds a speed–length ratio of 1.34, the wavelength is now longer than the hull, and the stern is no longer supported by the wake, causing the stern to squat and the bow to rise. The hull is now starting to climb its own bow wave, and resistance begins to increase at a very high rate. While it is possible to drive a displacement hull faster than a speed–length ratio of 1.34, it is prohibitively expensive to do so. Most large vessels operate at speed–length ratios well below that level, at speed–length ratios of under 1.0.

Ways of reducing wave-making resistance

Since wave-making resistance is based on the energy required to push the water out of the way of the hull, there are a number of ways that this can be minimized.

Reduced displacement

Reducing the displacement of the craft, by eliminating excess weight, is the most straightforward way to reduce the wave-making drag. Another way is to shape the hull so as to generate lift as it moves through the water. Semi-displacement hulls and planing hulls do this, and they are able to break through the hull speed barrier and transition into a realm where drag increases at a much lower rate. The disadvantage of this is that planing is only practical on smaller vessels, with high power-to-weight ratios, such as motorboats. It is not a practical solution for a large vessel such as a supertanker.
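The rule-of-thumb numbers follow directly from the deep-water dispersion relation given in the Physics section. A small sketch (the constant and function names are my own) showing that a deep-water wave as long as the waterline travels at about 1.34 knots per square root of the length in feet:

```python
import math

G_FT = 32.17              # gravitational acceleration, ft/s^2
FT_PER_S_PER_KNOT = 1.68781

def phase_speed_knots(wavelength_ft):
    """Deep-water phase speed c = sqrt(g * wavelength / (2*pi)), in knots."""
    return math.sqrt(G_FT * wavelength_ft / (2 * math.pi)) / FT_PER_S_PER_KNOT

def hull_speed_knots(lwl_ft):
    """Rule-of-thumb hull speed: 1.34 * sqrt(waterline length in feet)."""
    return 1.34 * math.sqrt(lwl_ft)

# For a 100 ft waterline, both give roughly 13.4 knots, i.e. a
# speed-length ratio of 1.34.
```

The agreement between the two functions is where the 1.34 coefficient comes from: it is just the dispersion relation evaluated at a wavelength equal to the waterline length, converted to knots and feet.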
Fine entry

A hull with a blunt bow has to push the water away very quickly to pass through, and this high acceleration requires large amounts of energy. By using a fine bow, with a sharper angle that pushes the water out of the way more gradually, the amount of energy required to displace the water will be less, even though the same total amount of water will be displaced. A modern variation is the wave-piercing design.

Bulbous bow

Main article: bulbous bow

A special type of bow, called a bulbous bow, is often used on large power vessels to reduce wave-making drag. The bulb alters the waves generated by the hull, by changing the pressure distribution ahead of the bow. Because of the nature of its destructive interference with the bow wave, there is a limited range of vessel speeds over which it is effective. A bulbous bow must be properly designed to mitigate the wave-making resistance of a particular hull over a particular range of speeds. A bulb that works for one vessel's hull shape and one range of speeds could be detrimental to a different hull shape or a different speed range. Proper design and knowledge of a ship's intended operating speeds and conditions is therefore necessary when designing a bulbous bow.

Semi-displacement and planing hulls

[Figure: A graph showing resistance–weight ratio as a function of speed–length ratio for displacement, semi-displacement, and planing hulls]

Since semi-displacement and planing hulls generate a significant amount of lift in operation, they are capable of breaking the barrier of the wave propagation speed and operating in realms of much lower drag, but to do this they must be capable of first pushing past that speed, which requires significant power.
Once the hull gets over the hump of the bow wave, the rate of increase of the wave drag will start to reduce significantly.[citation needed] A qualitative interpretation of the wave resistance plot is that a displacement hull resonates with a wave that has a crest near its bow and a trough near its stern, because the water is pushed away at the bow and pulled back at the stern. A planing hull simply pushes down on the water under it, so it resonates with a wave that has a trough under it, which has about twice the length and therefore about 1.4 times the speed, since the phase speed grows with the square root of the wavelength.
16 Cards in this Set

Front: Name four uses for neuromuscular blocks
Back: 1. Surgery 2. Mechanical ventilation 3. Endotracheal intubation 4. Electroshock therapy

Front: What is Prazosin?
Back: It is an Alpha 1 blocker

Front: Name two uses for Prazosin.
Back: 1. Hypertension 2. BPH

Front: What are some adverse effects of Prazosin?
Back: 1. Orthostatic hypotension 2. Reflex tachycardia 3. Inhibits ejaculation 4. Nasal congestion

Front: What is Metoprolol?
Back: It is a Beta 1 blocker

Front: Name four uses for Metoprolol.
Back: 1. Hypertension 2. Angina pectoris 3. Heart failure 4. Myocardial infarction

Front: What are some adverse effects of Metoprolol?
Back: 1. Bradycardia 2. Reduction in cardiac output 3. AV heart block 4. Rebound cardiac excitation following abrupt withdrawal

Front: What is Propranolol?
Back: It is a Beta 1 and Beta 2 blocker

Front: Name four uses for Propranolol.
Back: 1. Hypertension 2. Angina pectoris 3. Cardiac dysrhythmias 4. Myocardial infarction

Front: Which drug is safe for asthmatics and diabetics, Propranolol or Metoprolol?
Back: Metoprolol

Front: Epinephrine works on which receptor types?
Back: A1, A2, B1, B2

Front: Norepinephrine works on which receptor types?
Back: A1, A2, B1

Front: Isoproterenol works on which receptor types?
Back: B1 and B2

Front: Dopamine works on which receptor types?
Back: Dopamine receptors

Front: Dobutamine works on which receptor types?
Back: Beta 1

Front: Terbutaline works on which receptor types?
Back: Beta 2
Immersive Experience On The Road: Exploring the Possibilities of American Truck Simulator in VR

American Truck Simulator is a popular truck simulation video game. VR support is not enabled in the standard release by default, but the developer offers experimental VR support through an opt-in beta branch, and mods and third-party applications extend the experience further. Thus, it is possible to play American Truck Simulator in VR with the help of these tools.

Is American Truck Simulator compatible with virtual reality (VR)?

Yes, American Truck Simulator is compatible with virtual reality (VR). The game can be played with VR headsets such as the Oculus Rift and HTC Vive, offering an immersive truck driving experience.

How does playing American Truck Simulator in VR enhance the gaming experience?

Playing American Truck Simulator in VR enhances the gaming experience in several ways. Firstly, VR technology allows players to fully immerse themselves in the truck driving experience, giving them a more realistic and authentic feel of being on the road. The sense of depth and scale provided by VR creates a more immersive and lifelike environment, making players feel like they are actually sitting in the driver's seat of a big rig.

Secondly, VR enhances the game's visuals by adding a greater level of detail and realism. Players can appreciate the stunning landscapes, architecture, and weather effects in a more engaging and immersive manner. The ability to look around freely and have a 360-degree view of the surroundings helps players to fully appreciate the game's graphics, which enhances overall enjoyment.

Moreover, VR enhances the sense of control and interactivity. Players can use their hands and gestures to navigate menus, operate truck controls, and interact with the game world, making the gameplay more intuitive and immersive.
They can also look out of the truck windows or use side mirrors to check blind spots or observe surrounding traffic, which adds a new layer of realism and strategic gameplay. Lastly, VR adds an element of physicality to the gaming experience. Players can lean into turns or feel the vibrations and rumble through the steering wheel or controller, making the experience more tactile and immersive. This physical aspect enhances the overall sense of presence and makes players feel more connected to the virtual world, ultimately enhancing the enjoyment of playing American Truck Simulator in VR. What are the system requirements for playing American Truck Simulator in VR? The system requirements for playing American Truck Simulator in VR can vary depending on the specific VR headset being used. However, the following are general guidelines for a smooth VR experience in American Truck Simulator: Minimum requirements: – Operating System: Windows 7/8.1/10 64-bit – Processor: Intel Core i5-4590 or equivalent – Memory: 8 GB RAM – Graphics: NVIDIA GeForce GTX 970 / AMD Radeon R9 390 or equivalent with at least 4 GB VRAM – Storage: 10 GB available space – VR Headset: Oculus Rift or HTC Vive Recommended requirements: – Operating System: Windows 10 64-bit – Processor: Intel Core i7-7700K or equivalent – Memory: 16 GB RAM – Graphics: NVIDIA GeForce GTX 1080 / AMD Radeon RX 5700 XT or equivalent with at least 8 GB VRAM – Storage: SSD with 10 GB available space – VR Headset: Oculus Rift S, Oculus Quest with Oculus Link, or HTC Vive Pro It’s important to note that these requirements can vary depending on the visual settings and mods being used in the game, so it’s always recommended to check the specific requirements for the VR headset being used and ensure your system meets or exceeds those specifications for a smooth and enjoyable experience. Which VR headsets are compatible with American Truck Simulator? 
American Truck Simulator is compatible with various VR headsets, including but not limited to:

1. Oculus Rift: The original Oculus Rift, as well as the Oculus Rift S and Oculus Quest (with Link cable), can be used to play American Truck Simulator in VR.
2. HTC Vive: The HTC Vive and HTC Vive Pro are also compatible with American Truck Simulator, allowing players to experience the game in virtual reality.
3. Valve Index: The Valve Index is another VR headset that can be used to play American Truck Simulator, offering high-quality visuals and immersive gameplay.
4. Windows Mixed Reality: Several Windows Mixed Reality headsets, such as the HP Reverb G2, Acer Windows Mixed Reality, and Samsung Odyssey+, are compatible with American Truck Simulator.

It is important to note that VR compatibility may vary depending on the specific hardware requirements of the game and the capabilities of the VR headset.

Are there any special controls or settings needed to play American Truck Simulator in VR?

Yes, to play American Truck Simulator in VR, you will need a VR headset such as Oculus Rift or HTC Vive. Additionally, you may need to adjust the graphics settings to optimize the game for VR mode. It is also recommended to have a compatible game controller or steering wheel for a more immersive experience.

Can I use mods or add-ons while playing American Truck Simulator in VR?

Yes, you can use mods or add-ons while playing American Truck Simulator in VR. However, it is important to note that not all mods or add-ons may be compatible with the VR version of the game. It is recommended to check the compatibility of the mods or add-ons with the VR version before installing them.

What are the advantages of playing American Truck Simulator in VR compared to traditional gameplay?

Playing American Truck Simulator in VR offers several advantages compared to traditional gameplay:

1.
Immersive Experience: VR provides a highly immersive experience, allowing players to feel like they are actually sitting in the truck's cabin and driving on the roads. This level of immersion enhances the overall gameplay and realism.
2. Enhanced Depth Perception: VR helps recreate a more realistic sense of depth, making it easier to judge distances and maneuver the vehicle accurately. This is particularly beneficial while parking or performing tight turns.
3. Realistic Interactions: In VR, players can use their hands and actual movements to interact with the game's controls, such as reaching out to adjust the radio or grab objects in the cabin. This adds another layer of realism and engagement compared to traditional controls.
4. Better Sense of Scale and Size: VR allows players to experience the true size of the trucks and their surroundings, making driving and navigating through tight spaces more intuitive. This makes gameplay more challenging and rewarding.
5. Immersive Environments: VR technology replicates the game's environments in three dimensions, creating a realistic and fully immersive world. Players can appreciate the scenic landscapes, weather effects, and details more intensely, enhancing the overall visual experience.
6. Reduced Disconnection: For some players, traditional gameplay with a flat screen can induce discomfort due to the disconnection between what is seen on the screen and the actual movement of the character or vehicles. VR can reduce this issue for some players by providing a more natural and cohesive experience, though others may find VR itself induces motion sickness (see the limitations below).

While traditional gameplay also has its advantages, such as accessibility and ease, playing American Truck Simulator in VR offers a unique and highly immersive experience that greatly enhances the gameplay, realism, and enjoyment.

Are there any limitations or disadvantages to playing American Truck Simulator in VR?

Yes, there are some limitations and disadvantages to playing American Truck Simulator in VR.

1.
VR Equipment: Playing the game in VR requires additional equipment like a VR headset and controllers, which can be expensive. Not everyone may be willing to invest in such equipment.
2. Hardware Requirements: VR games demand higher hardware specifications compared to regular PC gaming. To run American Truck Simulator in VR smoothly, a powerful graphics card and a capable computer are necessary. Users with lower-end systems may not be able to enjoy the game in VR.
3. Motion Sickness: Some individuals may experience motion sickness or VR-induced discomfort while playing the game. The constant movement and speed of the truck can cause nausea, dizziness, or headaches. Motion sickness affects people differently, so not everyone may face this issue.
4. Limited Field of View: VR headsets have a limited field of view compared to real-life vision. This can affect the immersive experience of driving a truck, as players may not be able to see their surroundings as they would in reality. This limitation can impact depth perception and make it harder to check blind spots.
5. Physical Constraints: Longer gaming sessions in VR can strain the body due to standing or moving around while playing. Players may experience fatigue, discomfort, or even postural issues from extended use.
6. User Interface: Some aspects of the in-game user interface may not be optimized for VR. Interacting with menus, buttons, or text could be more challenging and less intuitive in a virtual environment.
7. Limited Content: While American Truck Simulator offers an immersive trucking experience, the VR version might have limited content compared to the standard version of the game. Some updates or expansion packs may not be fully compatible with VR.
8. Performance: Running a game in VR mode can cause performance issues like lower frame rates or graphical glitches. This can impact the overall gameplay experience and might require adjusting settings or compromising graphical quality.
Despite these limitations, many players find playing American Truck Simulator in VR to be an incredibly immersive and enjoyable experience that adds a new level of realism to the game.

How immersive is the experience of playing American Truck Simulator in VR?

The experience of playing American Truck Simulator in VR is highly immersive. Through virtual reality, players can fully immerse themselves in the truck driving experience, feeling like they are actually behind the wheel of a big rig in the American countryside. The VR technology allows for more realistic and engaging gameplay, where players can appreciate the detailed graphics, depth perception, and the feeling of being in a lifelike truck cabin. Overall, American Truck Simulator in VR creates an incredibly immersive experience for players.

Are there any recommended tips and tricks for playing American Truck Simulator in VR?

Yes, here are some recommended tips and tricks for playing American Truck Simulator in VR:

1. Adjust the graphics settings: VR requires higher performance, so it's important to find the right balance between visual quality and smooth gameplay. Experiment with different graphics settings to achieve a comfortable and immersive VR experience.
2. Use a comfortable headset: Ensure your VR headset fits well and is comfortable to wear for extended periods. Adjust the straps and make necessary modifications to minimize discomfort during gameplay.
3. Set up the play area: Make sure you have enough space to move around comfortably without bumping into objects or walls. Clear the area of any obstacles that could hinder your play. Consider placing a soft mat to stand on while playing to relieve foot strain.
4. Adjust your seat position: Fine-tune the in-game seat position to match your actual real-life driving position. This contributes to a more realistic experience and can help reduce motion sickness.
5. Take breaks: Playing in VR can be intense, so it's crucial to take regular breaks to rest your eyes and avoid fatigue.
VR-induced motion sickness can also occur, especially in new users, so taking breaks whenever you don't feel well is essential.
6. Experiment with control settings: Try out different control settings to find what works best for you. Some players prefer using a steering wheel and pedals for a more authentic experience, while others may find a gamepad or hand controllers more comfortable.
7. Adjust the scale: In the VR settings, you can adjust the scaling options to get a proper sense of depth and size. Experiment until you find the setting that feels most natural to you.
8. Consider comfort options: American Truck Simulator offers comfort options like reducing camera shake and limiting head movement. Explore these settings to find what suits you best and minimize any potential motion sickness.
9. Increase in-game scale: Increase the scaling factor within the game settings to make objects appear larger. This can help enhance immersion and make the world feel more realistic in VR.

Remember, everyone's preferences and tolerance for VR gameplay can differ. Feel free to experiment with various settings and adjustments to find what works best for you and provides the most enjoyable experience in American Truck Simulator.

Quick questions and answers:

Is American Truck Simulator compatible with VR headsets?
Yes.

Which VR headsets are supported by American Truck Simulator?
Oculus Rift, HTC Vive, Valve Index.

Do you need any additional hardware to play American Truck Simulator in VR?
Yes, you will require a compatible VR headset and a computer that meets the recommended system requirements for running VR games.

Can you play American Truck Simulator in VR on PlayStation or Xbox?
No, American Truck Simulator is only available for PC, and the VR mode is specifically supported on PC platforms.

Are there any specific settings or requirements for playing American Truck Simulator in VR?
Yes, you may need to adjust the graphics settings and enable the VR mode within the game settings.
Additionally, ensure that you have the necessary VR drivers installed for your headset.

Is the VR experience in American Truck Simulator immersive?
Yes, playing American Truck Simulator in VR provides a highly immersive experience, allowing you to feel like you're really driving a truck in the game.
NHD Blog Archive

EATING YOUR GREENS FOR EXTRA ENERGY?

Is it possible that eating a chlorophyll-rich diet can give us energy when we are exposed to the sun? Jelena Vidic, ANutr, takes a look at current research.

Working in food innovation requires endless searches for current trends and scientific findings. During a recent CPD session in lockdown, I encountered an interesting paper about the benefit of consuming chlorophyll-rich foods whose pigments are activated when we are exposed to the sun. Very intriguing! Since summer is approaching and we all can't wait to go out and enjoy the sunshine, let's explore whether we can find another excuse for lazing around in the sun besides vitamin D synthesis…

Chlorophyll is a green pigment found in plants, involved in absorbing sunlight and transferring it to energy-storing molecules in a process called photosynthesis1. Current research shows many health benefits of chlorophyll and its derivatives for humans. They can form complexes with cancer-causing chemicals such as heterocyclic amines (found in cooked meat), aflatoxin-B1 (found in spice and herb powders and extracts) or polycyclic aromatic hydrocarbons (tobacco smoke)4. By forming these complexes, they interfere with the gastrointestinal absorption of possible carcinogens, suggesting that chlorophyll derivatives may be effective in the prevention of cancer4. Furthermore, they participate in tissue repair and growth. Due to their structural similarity to haemoglobin, they can assist in carrying oxygen to all tissues and cells and transport magnesium. Along with vitamins A, C and D, chlorophyll is a powerful antioxidant (when not exposed to light), able to neutralize free radicals that damage healthy human cells. This pigment may be used for treating kidney stones by inhibiting the growth of calcium oxalate dihydrate5. It is also used in preventing malodours. Beyond the above, there may be even more benefits to the pigment.
Research conducted at Columbia University Medical Centre reported that mammals consuming chlorophyll-rich diets can capture light and use it for ATP synthesis2. The authors found that animal-derived tissues and isolated mammalian mitochondria incubated with light-capturing metabolites of chlorophyll showed higher ATP levels, and a longer median lifespan, after light exposure than animal tissue without the metabolites. They suggested that chlorophyll-type molecules catalyse the reduction of coenzyme Q, normally a slow step in ATP synthesis within mitochondria, thereby increasing ATP production. These findings would suggest that the photonic energy captured through chlorophyll dietary-derived metabolites may be a significant process for energy regulation in animals. However, a better understanding of chlorophyll metabolite pharmacodynamics and pharmacokinetics is required to elucidate this mechanism.

If this animal pathway for obtaining energy directly from light is transferable to humans, it would suggest that we could improve our ATP status, which is often compromised by environmental exposures, non-adaptive stress, suboptimal nutrition, disease and aging, by eating a chlorophyll-rich diet and getting sun exposure3. However, additional studies are needed to confirm those findings.

In summary, chlorophyll shouldn't be our only reason for regular consumption of green vegetables (a source of fibre, vitamins, minerals and phytonutrients), and for it to reach its maximum effect it is likely that much higher doses (bioavailability is yet to be clarified) in the form of supplements would be required. Considering the current research, there is not enough evidence to recommend supplementation with chlorophyll and its derivatives for the purpose of improving ATP status whilst exposed to the sun. Therefore, further research is recommended before we can follow a plant-like lifestyle.
😊

Jelena Vidic, ANutr, MSc
Jelena holds an MSc in Clinical Nutrition and a BSc in Nutrition and is currently working in Food Innovation within the Food Service sector. Her general interests are the introduction of nutritious and healthy ingredients and creating new and innovative products. Instagram: @jelena_anutr LinkedIn profile

References
1. National Geographic (2020). Chlorophyll. Available at: https://www.nationalgeographic.org/encyclopedia/chlorophyll/
2. Xu C, Zhang J, Mihai D and Washington I (2014). Light-harvesting chlorophyll pigments enable mammalian mitochondria to capture photonic energy and produce ATP. Journal of Cell Science 127: 388-399. Available at: https://jcs.biologists.org/content/joces/127/2/388.full.pdf
3. Ji S (2015). Groundbreaking Discovery: Animal Cells Powered by Sunlight/Chlorophyll. GreenMedInfo LLC. Available at: https://www.greenmedinfo.com/blog/dietary-chlorophyll-helps-us-captureuse-sunlight-energy-groundbreaking-study-1
4. Levent İnanç A (2011). Chlorophyll: Structural Properties, Health Benefits and Its Occurrence in Virgin Olive Oils. Available at: https://pdfs.semanticscholar.org/6724/b1a7d3ac2b4e975935db2a7cdc0b9b66cfa4.pdf?_ga=2.146187572.1824181660.1590503435-1886510496.1590503435
5. Kizhedath A (2011). Estimation of chlorophyll content in common household medicinal leaves and their utilization to avail health benefits of chlorophyll. Journal of Pharmacy Research. Available at: https://www.researchgate.net/profile/Arathi_Kizhedath/publication/299499753_Estimation_of_chlorophyll_content_in_common_household_medicinal_leaves_and_their_utilization_to_avail_health_benefits_of_chlorophyll/links/58da67ea45851578dfb6bcd8/Estimation-of-chlorophyll-content-in-common-household-medicinal-leaves-and-their-utilization-to-avail-health-benefits-of-chlorophyll.pdf
Question Bank > Reading Comprehension (RC) > Question 93gpnk

Passage:
Even more than mountainside slides of mud or snow, naturally occurring forest fires promote the survival of aspen trees. Aspens' need for fire may seem illogical since aspens are particularly vulnerable to fires; whereas the bark of most trees consists of dead cells, the aspen's bark is a living, functioning tissue that, along with the rest of the tree, succumbs quickly to fire. The explanation is that each aspen, while appearing to exist separately as a single tree, is in fact only the stem or shoot of a far larger organism. A group of thousands of aspens can actually constitute a single organism, called a clone, that shares an interconnected root system and a unique set of genes. Thus, when one aspen (a single stem) dies, the entire clone is affected. While alive, a stem sends hormones into the root system to suppress formation of further stems. But when the stem dies, its hormone signal also ceases. If a clone loses many stems simultaneously, the resulting hormonal imbalance triggers a huge increase in new, rapidly growing shoots that can outnumber the ones destroyed. An aspen grove needs to experience fire or some other disturbance regularly, or it will fail to regenerate and spread. Instead, coniferous trees will invade the aspen grove's borders and increasingly block out sunlight needed by the aspens.

Question: It can be inferred from the passage that when aspen groves experience a "disturbance", such a disturbance
A. leads to a hormonal imbalance within an aspen clone
B. provides soil conditions that are favorable for new shoots
C. thins out aspen groves that have become overly dense
D. suppresses the formation of too many new aspen stems
E. protects aspen groves by primarily destroying coniferous trees rather than aspens

Correct answer: A

Question details: Subject: Reading Comprehension (RC); Source: GWD-41; Accuracy rate: 63%; Difficulty: Medium
Sleeping In: The Effects and Drawbacks of Oversleeping
by Aoife O. | Nov 19, 2020

Sleeping in on weekends is a guilty pleasure for most people. Commuting to work or doing the school run dictates getting out of bed before the first bird song of the weekday. While some people relish a Saturday morning lie in, others stick to their regular wake up time even on their days off. If you're a lie in lover and wondering what the drawbacks of oversleeping are, we delve further into this fascinating topic here. Let's learn about the effects of oversleeping and how you can reset your sleep schedule for restorative sleep and active days.

Importance of Sleep Schedules
A consistent sleep schedule is vital to your physical health and mental wellbeing because sleep and health go hand-in-hand. Quality sleep restores your body's energy; memories are cataloged and stored, hormones are regulated, and health is restored.

Maintaining a Regular Sleep Schedule
Your circadian rhythm can get a little out of tune sometimes, especially if you're overworking and have a ton of other responsibilities. Falling into bed in the early hours of the morning and waking up at 6am is going to make you ill, have you craving caffeine and sugar all day, and make you miserable and unhealthy. Maintaining a regular sleep schedule is vital to maintaining physical health.
Tips and tricks for better quality sleep every night:
• Control your exposure to light; avoid using electronic devices or watching television for 1 hour before bed to help you wind down
• Exercise every day
• Enjoy walking in the fresh air every day; being in nature is a natural de-stresser
• Eat as healthily as you can, avoid eating too much sugar, and drink fresh water every day
• Try meditation or yoga to help you keep stress under control; stress is a major cause of insomnia, and learning how to let go of issues that are not serving you could help you fall asleep faster and stay asleep all night
• Curate a sleep setup that promotes healthy sleep, including a supportive mattress and temperature-regulating bedding

Learn how to reset your sleep schedule, here.

Sleep Guidelines (Source: CDC)
Age Group: Hours of Sleep
Teenagers: 8-10
Adults: 7-9
Seniors: 7-8

Sleeping in on Weekends
"I love sleeping in on Saturdays" is what most people say! A Saturday sleep in feels like a mini holiday; getting up later than usual, going for brunch, and reading the papers is a dream Saturday for a lot of people. Sleeping all weekend or even having a lie in is impossible if small children are demanding attention. But not having to set your morning alarm to its usual time is enough of a thrill to start your weekend on a high note. If you find yourself feeling tired on weekends, you may be suffering from weeknight insomnia. You may think that sleeping until noon on Saturday will refill your energy tank but, unfortunately, it doesn't work that way. Banking sleep at the weekend might make you feel good for one day, but analyzing and adapting your weeknight sleep schedule will have a greater impact. Learn how to wake up early and energized here.

Best Time to Wake Up on Weekends
The best time to wake up at weekends depends on your work/study schedule and lifestyle. There is no magic hour that suits everybody.
As long as you're getting a solid 7-9 hours of sleep every night, the best time to wake up on weekends is whatever suits your weekday schedule.

Regularly Sleeping In
If you're sleeping in regularly, it could be a sign of an underlying health issue. Oversleeping or not sleeping enough leads to some serious health problems: high blood pressure, heart disease, obesity, and anxiety disorders. Making some lifestyle changes can be enough to get your sleep schedule back on track. But schedule a checkup with your doctor if you're sleeping in more than usual.

How Do I Stop Myself From Oversleeping?
• Resist hitting the snooze button
• Avoid sleeping in on weekends and on your days off
• Dodge the urge to take a nap, or keep the nap to a maximum of 20 minutes
• Create a relaxing night time routine to help you unwind

How to Wake Yourself Up in the Morning
• Throw your curtains open as soon as you wake up; sunlight streaming through your window will kick-start your circadian rhythm and get you wide awake for the day
• Get into morning meditation to start your day the right way; running around stressed in the morning sets you up for a hard day, so take a deep breath and tackle one task at a time
• Change your alarm habits; pressing snooze 15 times every morning could mean you're going to bed too late, so disable the snooze button, be brave, and get out of bed when the alarm goes off

FAQs

Is Sleeping in Bad?
Sleeping in every now and then is not necessarily bad for you. But if you're sleeping in every day, pressing the snooze button 15 times, and rushing around in a stressed state every morning, you could suffer some physical effects such as heart problems, anxiety, blood pressure issues, or weight gain.

Is it OK to Sleep in on the Weekends?
Sleeping in on the occasional weekend won't have any ill effects on your physical health. Banking sleep on the weekend will only make you feel good for one day. If you're sleeping in every weekend, have a look at adjusting your sleep schedule.
If you're getting enough quality sleep every night, you won't feel like sleeping in on weekends.

Do People Sleep More on Workdays?
If you work a physically, emotionally, or mentally demanding job, you may need more sleep at night. If you're eating well, exercising, and enjoying fresh air every day, your sleep schedule should naturally fall into place. Take regular rest breaks throughout the day and sip water to beat day time fatigue.

Why Is Sleeping Good for You?
Good quality sleep every night is an essential component of a healthy lifestyle. As you sleep and dream, your body restores energy, catalogues and stores memories, replenishes and restores hormone levels, helps you heal from illness, and recharges your batteries. Sleep is essential in maintaining good physical health and a strong mental capacity.

How Much Should I Sleep In a Week?
Do you find yourself completely depleted on your days off? Are you sleeping in more than 3 times a week? If so, it may be time to have a good look at your sleep schedule and night time routine. A healthy adult needs 7-9 hours of sleep every night. Adapt your routine to accommodate 7-9 hours of sleep every night.

Is Sleeping Late a Bad Habit?
If you're an adult with many important responsibilities and you're sleeping in most days, you may find you're not achieving your personal goals and are running around frazzled most of the time. Sleeping late is a bad habit because it can harm your physical health and mental wellbeing. Good quality sleep keeps you healthy, strong, and able to reach your goals.

Is Sleeping for 12 Hours Bad?
Sleeping for 12 hours is good if you're recovering from illness or surgery. Otherwise, as a healthy adult, sleeping for 12 hours every night could be a sign of an underlying health issue. Consider seeing your doctor for a check up and adjusting your sleep schedule. You may need to start going to bed earlier than normal.

Is it Unhealthy to Have a Bad Sleep Schedule?
An unhealthy sleep schedule is when you are going to bed and waking up at inconsistent times. Your circadian rhythm gets disturbed, causing health problems such as heart disease, obesity, and blood pressure issues. Admitting to colleagues "I slept in" on occasion is not going to hurt you, but change your sleep schedule if this is a regular occurrence.

Conclusion
Oversleeping every now and then should not cause you any ill-effects. If you find that you're sleeping in on a regular basis, it might be time to make some changes to your sleep schedule. If you've ever slept in on the morning of an important event, you're only too aware of that awful morning panic mode. Being stressed every day is bad for your health, leading to blood pressure issues, heart disease, anxiety disorders, depression, and obesity. Enjoy the occasional lie in if you need it, but you owe it to your physical health to enjoy quality sleep every night so you can wake refreshed every morning.

Disclaimer: Nolah does not provide medical advice. All resources on the Nolah blog, including this article, are informational only and do not replace professional medical counsel. Talk to your doctor about any health, mental health, or sleep-related issues.
Methylation

Methylation! What is it and why is it important? Methylation is a biochemical process involved in almost all your body's functions. Methylation occurs when a single carbon and three hydrogen atoms (called a methyl group) are added to another molecule. The removal of a methyl group is called demethylation. Think of billions of little on/off switches inside your body that control everything from your stress response and how your body makes energy from food, to your brain chemistry and detoxification. That's methylation and demethylation.

Methyl groups control:
• The stress response
• The production and recycling of glutathione — the body's master antioxidant!
• The detoxification of hormones, chemicals and heavy metals
• The inflammation response
• Genetic expression and the repair of DNA
• Neurotransmitters and the balancing of brain chemistry
• Energy production
• The repair of cells damaged by free radicals
• The immune response, controlling T-cell production, fighting infections and viruses and regulating the immune response

If you have a shortage of methyl groups or a genetic SNP (Single Nucleotide Polymorphism), the process of methylation can be interrupted, and people can become sick. There is research to suggest that impaired methylation is linked to autoimmunity.

So, what can you do to improve methylation?
1. Eat dark leafy green veggies
2. Get B vitamins and folate
3. Get adequate amounts of magnesium and zinc
4. Take probiotics
5. Reduce stress
6. Boost glutathione
7. Get enough sleep
8. Optimize your body's antioxidants (for example, with curcumin)
9. Exercise!
Solid - Re-decentralizing the web (project directory)
README.md (latest commit 2c143a7, Jul 2, 2018)

Solid
Re-decentralizing the web

Solid (derived from "social linked data") is a proposed set of conventions and tools for building decentralized Web applications based on Linked Data principles. Solid is modular and extensible. It relies as much as possible on existing W3C standards and protocols.

Table of Contents
1. About Solid
2. Standards Used
3. Platform Notes
4. Project directory
5. Contributing to Solid

About Solid

Specifically, Solid is:
• A tech stack -- a set of complementary standards and data formats/vocabularies that together provide capabilities that are currently available only through centralized social media services (think Facebook/Twitter/LinkedIn/many others), such as identity, authentication and login, authorization and permission lists, contact management, messaging and notifications, feed aggregation and subscription, comments and discussions, and more.
• A Specifications document that describes a REST API that extends those existing standards, contains design notes on the individual components used, and is intended as a guide for developers who plan to build servers or applications.
• A set of servers that implement this specification.
• A test suite for testing and validating Solid implementations.
• An ecosystem of social apps, identity providers and helper libraries (such as solid.js) that run on the Solid platform.
• A community providing documentation, discussion (see the solid gitter channel), tutorials and talks/presentations.

Standards Used

The Solid platform uses the following standards.

Solid Platform Notes

Solid applications are somewhat like multi-user applications where instances talk to each other through a shared filesystem, and the Web is that filesystem.

1.
The LDP specification defines a set of rules for HTTP operations on Web resources, some based on RDF, to provide an architecture for reading and writing Linked Data on the Web. The most important feature of LDP is that it provides us with a standard way of RESTfully writing resources (documents) on the Web, without having to rely on less flexible conventions (APIs) based around sending form-encoded data using POST. For more insight into LDP, take a look at the examples in the LDP Primer document. 2. Solid's basic protocol is REST, as refined by LDP with minor extensions. New items are created in a container (which could be called a collection or directory) by sending them to the container URL with an HTTP POST or issuing an HTTP PUT within its URL space. Items are updated with HTTP PUT or HTTP PATCH. Items are removed with HTTP DELETE. Items are found using HTTP GET and following links. A GET on the container returns an enumeration of the items in the container. 3. Servers are application-agnostic, so that new applications can be developed without needing to modify servers. For example, even though the LDP 1.0 specs contains nothing specific to "social", many of the W3C Social Work Group's User Stories can be implemented using only application logic, with no need to change code on the server. The design ideal is to keep a small standard data management core and extend it as necessary to support increasingly powerful classes of applications. 4. The data model is RDF. This means the data can be transmitted in various syntaxes like Turtle, JSON-LD (JSON with a "context"), or RDFa (HTML attributes). RDF is REST-friendly, using URLs everywhere, and it provides decentralized extensibility, so that a set of applications can cooperate in sharing a new kind of data without needing approval from any central authority. 
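To make the read/write conventions above concrete, here is a minimal sketch of the LDP-style REST workflow using only Python's standard library. This is an illustrative sketch only: the container URL, `Slug` value, and Turtle payload are hypothetical placeholders, not taken from the Solid spec or any official example.

```python
# Illustrative sketch of the LDP REST conventions described above.
# The server URL and payload are hypothetical; the network calls are
# shown but left commented so the snippet has no side effects.
import urllib.request

CONTAINER = "https://example.org/notes/"  # a hypothetical LDP container

# Create a new resource inside the container with POST (Turtle body).
note = b"""@prefix dct: <http://purl.org/dc/terms/> .
<> dct:title "My first note" ."""
req = urllib.request.Request(
    CONTAINER,
    data=note,
    method="POST",
    headers={"Content-Type": "text/turtle", "Slug": "note1"},
)
# resp = urllib.request.urlopen(req)     # server assigns the new URL...
# new_url = resp.headers["Location"]     # ...and returns it in Location

# Read an item back with GET; a GET on the container itself returns
# an enumeration of the items it holds.
# urllib.request.urlopen(new_url).read()
# urllib.request.urlopen(CONTAINER).read()

# Update with PUT (or PATCH), remove with DELETE.
# urllib.request.Request(new_url, data=note, method="PUT",
#                        headers={"Content-Type": "text/turtle"})
# urllib.request.Request(new_url, method="DELETE")
```

Because LDP reuses plain HTTP verbs, any generic HTTP client can act as a Solid client in this way; nothing in the workflow is specific to one vendor's API.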
Project directory

Contributing to Solid

Get a WebID

In order to try out some of the apps built using Solid, you will typically need an identity on some Solid server. There are two forms of authentication we use, and so two types of account.

WebID-OIDC

This uses OpenID Connect to give you a WebID. It involves signing in with a password at your chosen identity provider, such as (2018/2) solid.community or solidtest.space.

WebID-TLS

This uses a WebID profile from one of the Solid-compliant identity providers, such as databox.me. With WebID-TLS, you will need to make a WebID browser certificate from the above profile (this is usually created when you sign up for a WebID profile account, but it only works on Firefox at the moment (2018)).

Running a server

To get started with developing for the Solid platform, you'll need:
1. A Solid-compliant server
2. While not required, an understanding of RDF/Turtle principles and Linked Data Platform concepts will help you understand the general workflow.

Solid Project Workflow

To contribute to Solid development, and to bring up issues or feature requests, please use the following workflow:
1. Have a question, a feature request, or a concern about the Solid framework or one of its servers? Open an issue on solid/solid (this repo here).
2. Have an issue with the Solid spec specifically? Open an issue on solid/solid anyway. Then, as a result of discussion, if it's agreed that it is actually a spec issue, it will be moved to solid-spec.
3. The individual solid/solid issues can coordinate and track component/dependent issues on the various affected Solid servers, apps, and so on.

Places to chat

We use gitter.im. There is a general chat solid/chat as well as specific chats about specific products such as node-solid-server
https://dagster.io/ #ask-community

szalai1 (07/30/2021, 5:40 PM):
hey, getting an odd error from a very simple pipeline: `KeyError: 'result'`

```python
from dagster import AssetMaterialization, Field, OutputDefinition, SolidExecutionContext, solid, Noneable, AssetKey, Output
from modes import annotations


@solid(
    config_schema={
        "template_survey_guid": str,
        "pot_guid": Field(Noneable(str), is_required=False, default_value=None),
    },
    required_resource_keys={"attest_client"},
    output_defs=[OutputDefinition(name="survey_guid", dagster_type=str)],
    tags=annotations,
)
def start_survey_based_on_draft(context: SolidExecutionContext):
    template_guid = context.solid_config["template_survey_guid"]
    pot_guid = context.solid_config["pot_guid"]
    attest_client = context.resources.attest_client
    new_survey_guid = attest_client.clone_survey(template_guid)
    survey_guid = attest_client.purchase_draft_survey(new_survey_guid, pot_guid)
    # yield AssetMaterialization(asset_key=AssetKey("survey_guid"), metadata_entries=[])
    yield Output(value=survey_guid, output_name="survey_guid")
```

prha (07/30/2021, 6:40 PM):
@szalai1 is this while trying to execute the solid? Also, what version of dagster are you running? I just tried executing this solid and couldn't hit the error you pasted.
szalai1 (07/30/2021, 6:49 PM):
version 0.12.1, yes this is when I'm running the solid
when I removed the output name, it started working again
also if I uncomment the materialization event it fails again:

```
dagster.core.errors.DagsterStepOutputNotFoundError: Core compute for solid "start_survey_based_on_draft" did not return an output for non-optional output "result"
```

prha (07/30/2021, 9:07 PM):
This is what I'm able to run (without hitting the error):

```python
@solid(
    config_schema={
        "template_survey_guid": str,
        "pot_guid": Field(Noneable(str), is_required=False, default_value=None),
    },
    required_resource_keys={"attest_client"},
    output_defs=[OutputDefinition(name="survey_guid", dagster_type=str)],
)
def start_survey_based_on_draft(context: SolidExecutionContext):
    template_guid = context.solid_config["template_survey_guid"]
    pot_guid = context.solid_config["pot_guid"]
    attest_client = context.resources.attest_client
    new_survey_guid = attest_client.clone_survey(template_guid)
    survey_guid = attest_client.purchase_draft_survey(new_survey_guid, pot_guid) or ""
    yield Output(value=survey_guid, output_name="survey_guid")


@resource()
def attest_client_resource():
    client = mock.MagicMock()
    client.clone_survey.return_value = "testtttt clone"
    client.purchase_draft_survey.return_value = "testtttt purchase"
    return client


@pipeline(mode_defs=[ModeDefinition(resource_defs={"attest_client": attest_client_resource})])
def attest_pipeline():
    start_survey_based_on_draft()
```

^^ this was on 0.12.1
Epiglottitis

What is epiglottitis?
The epiglottis is a small flap of tissue that covers the windpipe and directs food to the esophagus. When the epiglottis swells and prevents air from flowing into the lungs, this is known as epiglottitis. It can be life-threatening.

What causes epiglottitis?
The common causes of epiglottitis are either an infection from bacteria or a virus that causes the epiglottis to swell, or an injury to the throat.

What are the symptoms of epiglottitis?
A very bad sore throat, problems breathing and swallowing, drooling, fever, restless behavior and discomfort when leaning back are all potential symptoms of epiglottitis.

What are epiglottitis care options?
Epiglottitis is often a medical emergency. Health care providers will first need to provide immediate relief to help with breathing in the form of a mask, a breathing tube or a needle inserted into the windpipe. If an infection is causing epiglottitis, antibiotics can help to relieve the symptoms.

Reviewed by: Brian Ho, MD
This page was last updated on: September 09, 2020 11:18 AM
Why Is Protein So Important for Weight Loss?

Photo by Mantra Media on Unsplash

Heading into the new year carrying a little more weight than you'd like? Holiday indulgences making your belly stick out a little further than it did a few months ago? To start the year off on the right foot, one of the most powerful changes you can make is to eat protein every three to four hours throughout the day. Here's an excerpt from my book Jump Off the Diet Treadmill: 12 Weeks on Your Way to Lifetime Weight Loss, to explain why adding protein is so crucial.

The presence of protein stimulates the fat-burning hormone GLUCAGON

Eating too many carbs leads to those carbs being stored as sugar in the muscles and liver. (We'll talk more about this later.) If there is excess sugar beyond what can be stored in those tissues, the extra supply makes its way to the fat cells, where it is formed into fat/sugar molecules. Glucagon is the hormone that stimulates the fat/sugar molecules stored in the fat cells to be transported from those cells so they can be burned for energy. If you want to shrink your fat cells, you need glucagon, which is released when you eat protein.

Eating protein helps to stabilize your blood sugar and reduces cravings

Glucagon also plays a role in ensuring that there is a balanced supply of sugar as a fuel source for the brain, muscles and tissues. If the supply of carbs/sugar is erratic, you will experience any number of symptoms that will make it difficult to maintain your energy and focus, and that will lead to cravings and binge eating.
Symptoms of blood sugar irregularity are apparent when you go too long without eating a balanced meal with protein, and include the following:
• Headaches
• Low energy, fatigue
• Dizziness, faintness or light-headedness
• Irritability
• Nervousness or anxiousness
• Feeling calmer after eating
• Sporadic "highs" and "lows" throughout the day
• Craving a lift from carbohydrates, sugar or alcohol, and then experiencing a drop in energy after eating them
• Frequent urination
• Frequent thirst
• Difficulty staying asleep

Protein stabilizes your mood, helps you focus and reduces cravings in another way

Our brains are programmed for pleasure. We want satisfaction and comfort. To have it we need an abundance of the pleasure- and satiation-related brain chemicals. Guess how they're made? Protein. Yup. Surprise! The amino acids that are the building blocks of protein provide the raw material for the pleasure and satiation hormones dopamine, serotonin, leptin and CCK. With those hormones in balance, you will find that the need to seek comfort through sugar, fat and salt is lessened. With the physical cravings minimized, you will have more resources to deal with the psychological cravings.

I can't emphasize the importance of protein enough. I have seen reductions in clients' cravings, and more energy, focus and stable moods, within days when they eat enough protein consistently throughout the day. To get your breakfast rolling in the right direction, here's a recipe for a 5-Minute Protein Pancake.
diabetestalk.net

How Is Glucose Converted Into Fatty Acids?

Connections of Carbohydrate, Protein, and Lipid Metabolic Pathways

Connecting Other Sugars to Glucose Metabolism
Sugars, such as galactose, fructose, and glycogen, are catabolized into new products in order to enter the glycolytic pathway.

Learning Objectives
Identify the types of sugars involved in glucose metabolism

Key Takeaways
• When blood sugar levels drop, glycogen is broken down into glucose-1-phosphate, which is then converted to glucose-6-phosphate and enters glycolysis for ATP production.
• In the liver, galactose is converted to glucose-6-phosphate in order to enter the glycolytic pathway.
• Fructose is converted into glycogen in the liver and then follows the same pathway as glycogen to enter glycolysis.
• Sucrose is broken down into glucose and fructose; glucose enters the pathway directly while fructose is converted to glycogen.

Key Terms
• disaccharide: A sugar, such as sucrose, maltose, or lactose, consisting of two monosaccharides combined together.
• glycogen: A polysaccharide that is the main form of carbohydrate storage in animals; converted to glucose as needed.
• monosaccharide: A simple sugar, such as glucose, fructose, or deoxyribose, that has a single ring.

You have learned about the catabolism of glucose, which provides energy to living cells.

Popular Questions
1. throwaketone: Are there any local stores where you can typically buy these things? I've checked walmart.com / walgreens.com / cvs.com / target.com and haven't found much. I can buy them easily online, but I'm curious what level I'm at right now.
2. throwaketone: Cool, I didn't think to check there. Are these usually specific for a given blood glucose meter / device? (I need to pick one of those up anyway.) On the sites, I usually just see plenty of the urine ketone test strips.
3. throwaketone: Ah, I need a more accurate reading (I want to know BHB mmol/l.)
Drove to 2 pharmacies -- Walgreens had no idea what I was talking about and pointed me to the strips you piss on. Smith's looked around in the back for a while, then told me they could order them online for me. Looks like I'll have to test another day. =/
History

Titanium was discovered, as a constituent of a mineral, in Cornwall, England, in 1791 by amateur geologist and pastor William Gregor, then vicar of Creed parish. He recognized the presence of a new element in ilmenite when he found black sand by a stream in the nearby parish of Manaccan and noticed the sand was attracted by a magnet. Analysis of the sand determined the presence of two metal oxides: iron oxide (explaining the attraction to the magnet) and 45.25% of a white metallic oxide he could not identify. Gregor, realizing that the unidentified oxide contained a metal that did not match the properties of any known element, reported his findings to the Royal Geological Society of Cornwall and in the German science journal Crell's Annalen.

Titanium is always bonded to other elements in nature. It is the ninth-most abundant element in the Earth's crust (0.63% by mass) and the seventh-most abundant metal. It is present in most igneous rocks and in sediments derived from them. It is widely distributed and occurs primarily in the minerals anatase, brookite, ilmenite, perovskite, rutile, and titanite (sphene), as well as in many iron ores. Of these minerals, only rutile and ilmenite have any economic importance, yet even they are difficult to find in high concentrations. Significant titanium-bearing ilmenite deposits exist in western Australia, Canada, China, India, New Zealand, Norway, and Ukraine. The processes required to extract titanium from its various ores are laborious and costly; it is not possible to reduce the metal in the normal manner, by heating in the presence of carbon, because that produces titanium carbide. Pure metallic titanium (99.9%) was first prepared in 1910 by Matthew A. Hunter at Rensselaer Polytechnic Institute. Titanium metal was not used outside the laboratory until 1932, when William Justin Kroll proved that it could be produced by reducing titanium tetrachloride (TiCl4) with calcium.
Eight years later he refined this process by using magnesium and even sodium in what became known as the Kroll process. Although research continues into more efficient and cheaper processes (e.g., FFC Cambridge), the Kroll process is still used for commercial production. In the 1950s and 1960s the Soviet Union pioneered the use of titanium in military and submarine applications as part of programs related to the Cold War. Starting in the early 1950s, titanium began to be used extensively for military aviation purposes, particularly in high-performance jets, starting with aircraft such as the F-100 Super Sabre and Lockheed A-12.

Production and fabrication

Atomic Number: 22
Standard Atomic Weight: 47,90
Density: 4,50
Melting Point [°C]: 1.670
Bulk Modulus [GPa]: 110
Heat Capacity: 0,13
Linear Thermal Expansion: 9
Thermal Conductivity: 0,04
Electrical Resistivity: 50
Boiling Point [°C]: 3.289

The processing of titanium metal occurs in 4 major steps: reduction of titanium ore into "sponge", a porous form; melting of sponge, or sponge plus a master alloy, to form an ingot; primary fabrication, where an ingot is converted into general mill products such as billet, bar, plate, sheet, strip, and tube; and secondary fabrication of finished shapes from mill products. Because the metal reacts with oxygen at high temperatures, it cannot be produced by reduction of its dioxide. Titanium metal is therefore produced commercially by the Kroll process, a complex and expensive batch process. About 50 grades of titanium and titanium alloys are designated and currently used, although only a couple of dozen are readily available commercially. ASTM International recognizes 31 grades of titanium metal and alloys, of which Grades 1 through 4 are commercially pure (unalloyed).
These four grades are distinguished by their varying degrees of tensile strength, as a function of oxygen content, with Grade 1 being the most ductile (lowest tensile strength, with an oxygen content of 0.18%) and Grade 4 the least ductile (highest tensile strength, with an oxygen content of 0.40%). The remaining grades are alloys, each designed for specific purposes based on its ductility, strength, hardness, electrical resistivity, creep resistance, resistance to corrosion from specific media, or a combination thereof. In terms of fabrication, all welding of titanium must be done in an inert atmosphere of argon or helium in order to shield it from contamination with atmospheric gases such as oxygen, nitrogen, or hydrogen. Contamination will cause a variety of conditions, such as embrittlement, which will reduce the integrity of the assembly welds and lead to joint failure. Commercially pure flat product (sheet, plate) can be formed readily, but processing must take into account the fact that the metal has a "memory" and tends to spring back. This is especially true of certain high-strength alloys. The metal can be machined using the same equipment and via the same processes as stainless steel.

Applications

About 95% of titanium ore extracted from the Earth is destined for refinement into titanium dioxide (TiO2), an intensely white permanent pigment used in paints, paper, toothpaste, and plastics. In nature, this compound is found in the minerals anatase, brookite, and rutile. Due to their high tensile-strength-to-density ratio, high corrosion resistance, fatigue resistance, high crack resistance, and ability to withstand moderately high temperatures without creeping, titanium alloys are used in aircraft, armor plating, naval ships, spacecraft, and missiles.
For these applications titanium alloyed with aluminium, vanadium, and other elements is used for a variety of components including critical structural parts, fire walls, landing gear, exhaust ducts (helicopters), and hydraulic systems. Due to its high corrosion resistance to sea water, titanium is used to make propeller shafts and rigging and in the heat exchangers of desalination plants; in heater-chillers for salt water aquariums, fishing line and leader, and for divers' knives. Titanium is used to manufacture the housings and other components of ocean-deployed surveillance and monitoring devices for scientific and military use. The former Soviet Union developed techniques for making submarines largely out of titanium; these were both the fastest and the deepest-diving submarines of their time. Welded titanium pipe and process equipment (heat exchangers, tanks, process vessels, valves) are used in the chemical and petrochemical industries, primarily for corrosion resistance. Specific alloys are used in downhole and nickel hydrometallurgy applications due to their high strength (e.g., titanium Beta C), corrosion resistance, or a combination of both. The pulp and paper industry uses titanium in process equipment exposed to corrosive media such as sodium hypochlorite or wet chlorine gas (in the bleachery). Titanium metal is used in automotive applications, particularly in automobile or motorcycle racing, where weight reduction is critical while maintaining high strength and rigidity. Titanium has been used in architectural applications: the 40 m (120 foot) memorial to Yuri Gagarin, the first man to travel in space, in Moscow, is made of titanium for the metal's attractive color and association with rocketry. The Guggenheim Museum Bilbao and the Cerritos Millennium Library were the first buildings in Europe and North America, respectively, to be sheathed in titanium panels.
Because it is biocompatible (non-toxic and not rejected by the body), titanium is used in a gamut of medical applications including surgical implements and implants, such as hip balls and sockets (joint replacement) that can stay in place for up to 20 years. Titanium has the inherent ability to osseointegrate, enabling use in dental implants that can remain in place for over 30 years. This property is also useful for orthopedic implant applications.

Titanium, our precious metal

At the end of the 1930s, with the introduction of new technologies, titanium began to be used industrially, mostly for weapons manufacturing in the US. During the same period, the first surgeries with titanium implants took place. Since then, titanium has acquired growing importance in Europe, Japan, and the former Soviet Union as well.
Beyond cut points: accelerometer metrics that capture the physical activity profile

Alex V. Rowlands, Charlotte L. Edwardson, Melanie J. Davies, Kamlesh Khunti, Deirdre M. Harrington, Tom Yates

Research output: Contribution to journal, Article, peer-review
58 Citations (Scopus), 1 Downloads (Pure)

Abstract

Purpose: Commonly used physical activity metrics tell us little about the intensity distribution across the activity profile. The purpose of this paper is to introduce a metric, the intensity gradient, which can be used in combination with average acceleration (overall activity level) to fully describe the activity profile.

Methods: A total of 1669 adolescent girls (sample 1) and 295 adults with type 2 diabetes (sample 2) wore a GENEActiv accelerometer on their nondominant wrist for up to 7 d. Body mass index and percent body fat were assessed in both samples, and physical function (grip strength, Short Physical Performance Battery, and sit-to-stand repetitions) in sample 2. Physical activity metrics were as follows: average acceleration (AccelAV); the intensity gradient (IntensityGRAD, from the log-log regression line: 25-mg intensity bins [x] / time accumulated in each bin [y]); total moderate-to-vigorous physical activity (MVPA); and bouted MVPA (sample 2 only).

Results: Correlations between AccelAV and IntensityGRAD (r = 0.39-0.51) were similar to correlations between AccelAV and bouted MVPA (r = 0.48) and substantially lower than between AccelAV and total MVPA (r ≥ 0.93). IntensityGRAD was negatively associated with body fatness in sample 1 (P < 0.05) and positively associated with physical function in sample 2 (P < 0.05); associations were independent of AccelAV and potential covariates. By contrast, MVPA was not independently associated with body fatness or physical function.
Conclusion: AccelAV and IntensityGRAD provide a complementary description of a person's activity profile, each explaining unique variance, and are independently associated with body fatness and/or physical function. Both metrics are appropriate for reporting as standardized measures and suitable for comparison across studies using raw-acceleration accelerometers. Concurrent use will facilitate investigation of the relative importance of intensity and volume of activity for a given outcome.

Original language: English
Pages (from-to): 1323-1332
Number of pages: 10
Journal: Medicine and Science in Sports and Exercise
Volume: 50
Issue number: 6
Publication status: Published - 30 Jun 2018

Keywords
• average acceleration
• GENEActiv
• adiposity
• exercise test
• physical activity
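The intensity gradient described in the Methods can be sketched in a few lines of Python. The 25-mg bin width and the log-log regression of time on intensity follow the abstract; the 5-second epoch length and the synthetic activity profile are illustrative assumptions, not from the paper.

```python
import math

def intensity_gradient(epoch_accels_mg, bin_width_mg=25, epoch_s=5):
    """Slope of ln(minutes per bin) vs ln(bin midpoint intensity in mg).

    epoch_accels_mg: per-epoch average accelerations in milli-g.
    The 5-s epoch length is an assumption, not taken from the paper.
    """
    minutes_per_epoch = epoch_s / 60.0
    time_in_bin = {}
    for a in epoch_accels_mg:                   # accumulate time per 25-mg bin
        b = int(a // bin_width_mg)
        time_in_bin[b] = time_in_bin.get(b, 0.0) + minutes_per_epoch
    xs, ys = [], []
    for b, t in time_in_bin.items():
        mid = (b + 0.5) * bin_width_mg          # bin midpoint, mg
        xs.append(math.log(mid))
        ys.append(math.log(t))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n           # ordinary least-squares slope
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

# Synthetic profile: lots of low-intensity time, progressively less at higher
# intensities -- the typical negative-gradient shape.
profile = [12.5] * 4000 + [62.5] * 800 + [162.5] * 150 + [412.5] * 20
print(round(intensity_gradient(profile), 2))
```

A more negative gradient means time drops off steeply with intensity; a shallower (less negative) gradient indicates relatively more time at higher intensities, which is the variance this metric adds over average acceleration alone.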
Generalized Asymmetric Holland Model

From ADCIRCWiki. Revision as of 03:59, 10 April 2020.

The Generalized Asymmetric Holland Model (GAHM) is a parametric hurricane vortex model developed in ADCIRC for operational forecasting purposes, based on the classic Holland Model (HM, 1980). The original HM was developed to render an ideal symmetric hurricane (a.k.a. an annular hurricane). To represent hurricanes that may exhibit asymmetric structures, the Asymmetric Holland Model (AHM) was developed for more practical usage in ADCIRC; it has the same set of equations as the HM, but takes an azimuthally varying radius to the maximum wind to reconstruct its spatial pressure and wind fields. Both the HM and the AHM suffer from flaws that make rendering large but weak storms difficult. As a result, efforts were made to develop a more generalized model. Compared to the HM and the AHM, the GAHM removes the assumption of cyclostrophic balance at the radius of maximum wind during the derivation of its equations, and allows for a better representation of a wide range of hurricanes.
Another important feature of the GAHM is the introduction of a composite wind method, which, when activated, enables the use of multiple storm isotaches, making it possible to represent complex hurricane structures.

The Classic Holland Model

The HM is an analytic model that describes the radial pressure and wind profiles of a standard hurricane. To begin with, Holland found that the normalized pressure profiles of a number of hurricanes resemble a family of rectangular hyperbolas and may be approximated by a hyperbolic equation, which after taking antilogarithms and rearranging yields the radial pressure equation:

<math> P(r) = P_c + (P_n - P_c)e^{-A/r^B} \quad (1) </math>

where <math>P_c</math> is the central pressure, <math>P_n</math> is the ambient pressure (theoretically at infinite radius), <math>P(r)</math> is the pressure at radius <math>r</math> from the center of the hurricane, and <math>A</math> and <math>B</math> are shape parameters that may be empirically estimated from observations in a hurricane.

Substituting (1) into the gradient wind equation, which describes a steady flow balanced by the horizontal pressure gradient force, the centripetal acceleration, and the Coriolis acceleration for a vortex above the influence of the planetary boundary layer, where the atmospheric flow decouples from surface friction (Powell et al. 2009), gives the radial wind equation of a hurricane:

<math> V_g(r) = \sqrt{\frac{AB(P_n - P_c)e^{-A/r^B}}{\rho r^B} + \frac{r^2f^2}{4}} - \frac{rf}{2} \quad (2) </math>

where <math>V_g</math> is the gradient wind at radius <math>r</math>, <math>\rho</math> is the density of air, and <math>f</math> is the Coriolis parameter.

In the region of the maximum winds, if we assume that the Coriolis force is negligible in comparison to the pressure gradient and centripetal forces, then the air is in cyclostrophic balance. Removing the Coriolis terms in (2) gives the cyclostrophic wind:

<math> V_c(r) = \sqrt{\frac{AB(P_n - P_c)e^{-A/r^B}}{\rho r^B}} \quad (3) </math>

Setting <math> dV_c/dr = 0 </math> at the radius to the maximum wind <math>R_{max}</math>, it is obtained that

<math> R_{max} = A^{1/B} \quad (4) </math>

Thus <math>R_{max}</math> is irrelevant to the relative values of the ambient and central pressures, and is solely defined by the shape parameters <math>A</math> and <math>B</math>. Substituting (4) back into (3) to eliminate <math>A</math>, we get an estimate of <math>B</math> as a function of the maximum wind speed:

<math> B = \frac{\rho e V_{max}^2}{P_n - P_c} \quad (5) </math>

where <math>e</math> is the base of the natural logarithm. Notably, the maximum wind speed is proportional to the square root of <math>B</math> and, given a constant pressure drop, irrespective of <math>R_{max}</math>. Holland also reasoned that a plausible range of <math>B</math> would be between 1 and 2.5 for realistic hurricanes.
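As a numerical sanity check of relations (3)-(5), the sketch below plugs in illustrative values (the air density, pressure deficit, maximum wind, and radius of maximum wind are assumptions, not values from this article) and confirms that the cyclostrophic profile peaks at the radius of maximum wind:

```python
import math

# Illustrative values (assumptions, not from the article)
rho = 1.15        # air density, kg/m^3
dp = 5000.0       # pressure deficit P_n - P_c, Pa (= 50 hPa)
v_max = 50.0      # maximum wind speed, m/s
r_max = 30_000.0  # chosen radius of maximum wind, m

B = rho * math.e * v_max**2 / dp   # Holland shape parameter, eq. (5)
A = r_max**B                       # from eq. (4): R_max = A**(1/B)

def v_cyclo(r):
    """Cyclostrophic wind profile, eq. (3)."""
    return math.sqrt(A * B * dp * math.exp(-A / r**B) / (rho * r**B))

# The profile should peak at r = R_max with value v_max.
radii = [1000.0 * k for k in range(5, 201)]   # 5 km to 200 km
peak_r = max(radii, key=v_cyclo)
print(round(B, 3), peak_r, round(v_cyclo(peak_r), 1))
```

Note that B lands inside Holland's plausible [1, 2.5] range for these values, and the recovered peak wind equals the v_max used to construct B, which is exactly the self-consistency that equations (4) and (5) encode.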
Substituting (4) back into (3) to get rid of , we get an estimate of as a function of the maximum wind speed (5) It was notable that the maximum wind speed is proportional to the square root of and irrespective of the (), given a constant pressure drop. It was also reasoned by Holland that a plausible range of would be between 1 and 2.5 for realistic hurricanes. Substituting (4) and (5) back into (1) and (2) yields the final radial pressure and wind profiles for the HM (6) (7) When sparse observations of a hurricane are given, estimates of the and shape parameter can be estimated by fitting data into the radial wind equation, which in turn allow us to compute and along the radius of the hurricane. However, discrepancies between wind observations and computed winds were sometimes found, and were negatively correlated to the Rossby number at , defined as (8) By definition, a large describes a system in cyclostrophic balance that is dominated by the inertial and centrifugal force with negligible Coriolis force, such as a tornado or the inner core of an intense hurricane, whereas a small value signifies a system in geostrophic balance where the Coriolis force plays an important role, such as the outer region of a hurricane. As a result, the assumption of cyclostrophic balance at made in HM is mostly valid for describing an intense and narrow (small ) hurricane with a large , but not applicable for a weak and broad hurricane with a small . This intrinsic problem with the HM calls our intention to develop a generalized model that will work consistently for a wide range of hurricanes, which theoretically can be accomplished by removing the above cyclostrophic balance assumption and re-derive the radial pressure and wind equations (6)&(7). Derivation of the GAHM The GAHM also starts with the same radial pressure and wind equations (1)&(2) with shape parameters and , as in the HM. 
Without assuming cyclostrophic balance at <math>R_{max}</math>, we take <math> dV_g/dr = 0 </math> at <math> r = R_{max} </math> to get the adjusted shape parameter <math>B_g</math> as

(9)

where <math>\varphi</math> is a scaling parameter introduced to simplify the derivation process, defined as

(10)

and later derived as

(11)

Thus, <math>B_g</math> in the GAHM is not entirely defined by the shape parameters as in the HM, but also by the scaling factor <math>\varphi</math>, as Equation (11) indicates. Numerical solutions for <math>B_g</math> and <math>\varphi</math> can be solved iteratively in the model using Equations (9)&(11). Figure 1 illustrates how <math>B_g/B</math> and <math>\varphi</math> vary with <math>R_o</math>, given different <math>B</math> values. It is evident that the values of both <math>B_g/B</math> and <math>\varphi</math> remain close to 1 when <math>R_o</math> is within the range of [1,2], but increase noticeably as <math>R_o</math> decreases below 1, and the smaller the value of <math>R_o</math>, the bigger the changes.

Figure 1. Profiles of <math>B_g/B</math> (left panel) and <math>\varphi</math> (right panel) with respect to <math>R_o</math>, given different <math>B</math> values as shown in different colors.

Substituting (9)&(11) back into (1)&(2) yields the final radial pressure and wind equations for the GAHM:

(12)

(13)

The influence of the Coriolis force on the radial pressure and wind profiles is evidenced by the presence of <math>f</math> and <math>\varphi</math> in (12)&(13). A special-case scenario is when we set <math>\varphi = 1</math>, which corresponds to an infinitely large <math>R_o</math>; then (12)&(13) in the GAHM reduce to (6)&(7) in the HM. However, for a hurricane with a relatively small <math>R_o</math>, the influence of the Coriolis force can only be addressed by the GAHM. It meets our expectation that the GAHM's solution approaches the HM's when the influence of the Coriolis force is small, but departs from it when the Coriolis force plays an important role in the wind system.

The above reasoning can be demonstrated by the 3D plots in Figure 2, which show the normalized gradient winds of the HM (left panel) and the GAHM (right panel) as functions of the normalized radial distance <math>r/R_{max}</math>, the Holland <math>B</math> parameter, and <math>\log_{10}R_o</math>. In both panels, each colored surface represents the normalized gradient winds corresponding to a unique Holland <math>B</math> value.
By definition, we get at , which means all the surfaces in each panel should intersect with the plane of on the plane of , no matter what the value of . However, the line of intersection (shown by the black line) in the left panel deviates from the plane of as decreases from 2 to close to 0 ( decreases from 100 to 1), while it remains on the plane regardless of how changes in the right panel, demonstrating that the GAHM is mathematically more coherent than the HM.

Figure 2. The normalized gradient wind profiles of the HM (left panel) and the GAHM (right panel) as functions of the normalized radial distances and , given different Holland values.

To take a closer look at the surface plots in Figure 2, we draw slices perpendicular to the axis of at three different values 0, 1, 2, and plot the lines of intersection with each surface in Figure 3. It is evident that we get at consistently in the right panel for the GAHM regardless of the value of . The HM in the left panel, however, generates distorted wind profiles with underestimated maximum winds skewed inward towards the storm center, especially when . As a result, when both models are applied to real hurricane cases, the GAHM will perform more consistently than the HM.

Fig 3. Slices of the normalized gradient wind profiles (as shown in Figure 2) at (or correspondingly ).

Calculation of the Radius to the Maximum Wind

As with the HM and AHM, the GAHM uses processed forecast advisories (during active hurricanes) or best track advisories (post-hurricane) from the National Hurricane Center (NHC) in ATCF format as input files, which contain a time series of storm parameters (usually at 6-hour intervals) such as storm location, storm movement, central pressure, 1-minute averaged maximum wind, radii to the 34-, 50-, and/or 64-kt storm isotaches in 4 storm quadrants (NE, SE, SW, NW), etc.
See meteorological input file with NWS = 20 for more details. As a standard procedure, the and are pre-computed in 4 storm quadrants for all available isotaches in the ASWIP program (a FORTRAN program that was developed by Fleming et al. and distributed with ADCIRC, further developed here to accommodate the GAHM) and appended to the input file prior to running an ADCIRC simulation. The following describes the procedures to prepare the input file for the GAHM.

First, the influence of the boundary layer effect must be removed to bring the maximum sustained wind and the 34-, 50-, and/or 64-kt isotaches from 10-meter height to the gradient wind level. Practically, the maximum gradient wind can be directly calculated as (14), where is the reported maximum sustained wind at 10-meter height assumed to be in the same direction as , is the storm translational speed calculated from successive storm center locations, is the wind reduction factor for reducing wind speed from the gradient wind level to the surface at 10-meter height (Powell et al., 2003), and is the damp factor for . The following formula for is employed in the ASWIP program: (15), which is the ratio of the gradient wind speed to the maximum wind speed along a radial wind profile. Thus, is zero at the storm center, increases with radius until it reaches a maximum value of 1 at , and then gradually decreases outward to zero.

In addition to the scalar reduction in wind speed, surface friction and continuity also cause the vortex wind to flow inward across isobars, with an inward rotation angle according to the Queensland Government's Ocean Hazards Assessment (2001): (16)

Thus, the gradient wind at the radii to specified storm isotaches in 4 storm quadrants can be obtained from the observed isotaches as (17), where is the observed isotach wind speed with an unknown angle , and is the wind speed at radius to the specified isotach before the inward rotation angle is removed.
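The scalar reduction step in (14) can be sketched as below; the function name, the default reduction factor of 0.9, and the unit damp factor are illustrative assumptions, not values taken from the ASWIP source:

```python
# Sketch of the idea behind equation (14): subtract the (damped)
# translational component from the reported 10 m maximum sustained
# wind, then divide by the wind reduction factor to lift the result
# from the surface to the gradient wind level.
def max_gradient_wind(v10_max, v_trans, wind_reduction=0.9, damp=1.0):
    """Maximum gradient wind from the 10 m maximum sustained wind.

    v10_max        -- reported 1-minute max sustained wind at 10 m
    v_trans        -- storm translational speed (from successive fixes)
    wind_reduction -- gradient-to-surface wind reduction factor (assumed 0.9)
    damp           -- damping factor applied to the translational speed
    """
    return (v10_max - damp * v_trans) / wind_reduction
```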
Rewriting (17) in x- and y-components yields: (18) (19), where is the azimuth angle of the storm quadrant (NE, SE, SW, NW at , respectively), and are the zonal and meridional components of , and and are the zonal and meridional components of . Given an initial guess of , values of and can be solved iteratively from (9) and (11) until both converge, and can be estimated by combining (15), (17), (18), and (19). Plugging from (14), the above calculated , and the observed radius to back into (13), a new can be inversely solved for by a root-finding algorithm. Since the above calculations are carried out based on an initial guess of , we need to repeat the entire process until converges.

In cases where multiple isotaches are given in the forecast/best track advisories, the for the highest isotach will be calculated using the above procedure and used as the pseudo for the entire storm (physically, there is only one found along a radial wind profile). For each lower isotach, will be calculated with the pseudo set as its initial value to determine the inward rotation angle , following the above process only once. The use of the pseudo across all storm isotaches ensures that the cross-isobar frictional inflow angle changes smoothly along the radius according to (17).

Occasionally, we have to deal with situations where , which violate (13), so cannot be calculated. These situations mostly happen in the right-hand quadrants (in the Northern Hemisphere) of a weak storm with a relatively high translational speed. For cases like this, we assign , which is equivalent to assigning . After the ASWIP program finishes processing the input file, it can be readily used by the GAHM to construct spatial pressure and wind fields in ADCIRC for storm surge forecasting.
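The root-finding step described above — inverting the gradient wind equation for the radius to the maximum wind given an observed isotach — can be sketched with a simple bisection. The simplified Holland-type profile and all numerical values below are illustrative assumptions, not the GAHM's equation (13):

```python
import math

def gradient_wind(r, rmax, b, dp, rho=1.15, f=5e-5):
    """Holland-type gradient wind (m/s) at radius r (m); simplified sketch."""
    x = (rmax / r) ** b
    return math.sqrt(b * dp / rho * x * math.exp(-x)
                     + (r * f / 2) ** 2) - r * f / 2

def solve_rmax(r_iso, v_iso, b, dp, lo=1e3, hi=None, tol=1.0):
    """Bisect for the rmax that reproduces the observed isotach v_iso
    at radius r_iso; relies on the wind at r_iso growing with rmax."""
    hi = hi if hi is not None else 0.9 * r_iso
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if gradient_wind(r_iso, mid, b, dp) < v_iso:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

In the full procedure this solve sits inside the outer loop over the inflow angle, repeated until the angle converges.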
Composite Wind Generation

Since storm parameters are only given in 4 storm quadrants (assumed to be at azimuthal angles , respectively) for the 3 available isotaches in the input file, spatial interpolation of storm parameters must take place first at each ADCIRC grid node. Traditionally, the single-isotach approach is used by the AHM, in which storm parameters are interpolated azimuthally from the highest isotach only. To take advantage of the availability of multiple isotaches, a new composite wind method is introduced in the GAHM, the multiple-isotach approach, in which storm parameters are interpolated both azimuthally and radially from all available isotaches.

To begin, the relative location of a node to the storm center at a given time is calculated, specified by the azimuth angle and distance . The angle places the node between two adjacent quadrants and , where . For each storm parameter to be interpolated, its value at is weighted between its values at two pseudo nodes and : (20)

The distance then places each pseudo node between the radii of two adjacent isotaches in its quadrant, and the value at the pseudo node is interpolated using the inverse distance weighting (IDW) method: (21), where are the parameter values computed from the 34-, 50-, and 64-kt isotaches, and are the distance weighting factors for each isotach, calculated as (22), and .

The above procedure is performed at each node of an ADCIRC grid. After all storm parameters are interpolated, the pressure and gradient winds can be calculated using (12)&(13). To bring the gradient wind down to the standard 10-meter reference level, the same wind reduction factor is applied, and the tangential winds are rotated by an inward flow angle β according to (16). Then, the storm translational speed is added back to the vortex winds. Last but not least, a wind averaging factor is applied to convert the resulting wind field from 1-min to 10-min averaged winds in order to be used by ADCIRC.
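The two interpolation steps, (20)-(22), can be sketched as below. Function names are assumptions, and a simple linear azimuthal blend stands in for the quadrant weighting of (20):

```python
def idw_radial(d, radii, values):
    """Inverse distance weighting between isotach radii, in the spirit
    of (21)-(22): w_i = (1/|d - r_i|) / sum_j (1/|d - r_j|)."""
    eps = 1e-9  # guard against division by zero when d sits on a radius
    w = [1.0 / (abs(d - r) + eps) for r in radii]
    s = sum(w)
    return sum(wi / s * v for wi, v in zip(w, values))

def azimuthal_blend(theta, theta_q, theta_q1, v_q, v_q1):
    """Blend the values at two adjacent quadrant azimuths (eq. 20 sketch)."""
    t = (theta - theta_q) / (theta_q1 - theta_q)
    return (1 - t) * v_q + t * v_q1
```

A node lying exactly on an isotach radius recovers that isotach's value, which is the property that makes the composite winds match all observed isotaches.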
This new composite wind method is simple and efficient, and more importantly, it ensures that the constructed surface winds match all observed storm isotaches provided in NHC’s forecast or “best track” advisories.

Case Studies

Preliminary evaluation of the GAHM was carried out based on seven hurricanes that struck the Gulf of Mexico and the Eastern United States: Katrina (2005), Rita (2005), Gustav (2008), Ike (2008), Irene (2011), Isaac (2012), and Sandy (2012); see Table 1. Ranging from category 1 to 5 on the Saffir-Simpson Hurricane Wind Scale, these storms vary in storm track, forward motion, size, intensity, and duration, but all caused severe damage to coastal states due to destructive winds, wind-induced storm surges, and ocean waves. Their “best track” advisories were retrieved from NHC’s ftp site (ftp://ftp.nhc.noaa.gov/atcf; previous years’ data are located in the archive directory) and pre-processed using the ASWIP program. The “best track” file contains an estimate of the radius to the maximum wind for each data entry, but it will be used solely for model validation purposes, as both the GAHM and AHM calculate their own spatially-varying .

Table 1. Seven selected hurricanes used for preliminary evaluation of the GAHM

Hurricane | Saffir-Simpson Wind Scale | Maximum Sustained Wind (knot) | Minimum Central Pressure (mbar) | Period from Formation to Dissipation
Katrina | 5 | 150 | 902 | 08/23-08/30, 2005
Rita | 5 | 150 | 902 | 09/18-09/26, 2005
Gustav | 4 | 135 | 941 | 08/23-09/04, 2008
Ike | 4 | 125 | 935 | 09/01-09/14, 2008
Irene | 3 | 105 | 942 | 08/21-08/30, 2011
Isaac | 1 | 70 | 965 | 08/21-09/03, 2012
Sandy | 3 | 95 | 940 | 10/22-10/01, 2012

Besides the maximum wind speed, both the Holland and can be used as key characteristics to describe the development of a storm. Figure 4 depicts the change of , , and during different stages of the hurricanes along their best tracks. Typically, both and increase as a hurricane strengthens, and decrease as it dissipates, within the range of (0, 2.5).
Previous analytical evaluation has demonstrated that the GAHM behaves consistently better than the HM, especially under situations where . Here, evaluation of model performance will be carried out by comparing the modeled winds with the observed winds in the "best track" data, as well as with the AHM, the SLOSH (Sea, Lake, and Overland Surges from Hurricanes) winds, re-analysis H*Wind, and hindcast OWI winds.

Figure 4. The development of (a) the maximum wind speed, (b) Holland , and (c) along the best tracks of the 7 selected hurricanes.

The AHM vs. the GAHM

• Comparison of Radial Wind Profiles

Since the AHM is an advanced version of the HM, here we only use model results from the AHM for comparisons with the GAHM. First, the single-isotach approach was evaluated using Hurricane Irene (2011) as an example. Figure 5 gives the comparison of radial wind profiles of Hurricane Irene (2011) between the AHM and the GAHM using the single-isotach approach at three snapshots, each representing the developing (top panels), mature (middle panels), and dissipating (bottom panels) stages of the hurricane.

Figure 5. Comparison of radial wind profiles of Irene (2011) at three different stages between the AHM and the GAHM. The cross-section radial winds from SW to NE are given in the left panels, and NW to SE in the right panels. The observed isotaches at radii to specified isotaches given in the "best track" file are also plotted as vertical line segments for reference (highest isotach in black and lower isotaches in gray). For a perfect match between the modeled winds and the isotaches, the radial profiles must meet the tips of the line segments at the exact same height.

The , and are also computed at the same snapshots in all 4 quadrants, given in Table 2.

Table 2.
Key storm characteristics , and at three snapshots of Irene (2011)

Snapshot: 2011-Aug-21 18:00 | 2011-Aug-25 00:00 | 2011-Aug-28 06:00
Quadrant: NE SE SW NW | NE SE SW NW | NE SE SW NW
1.00 1.00 1.00 1.00 | 1.62 1.62 1.62 1.62 | 0.60 0.60 0.60 0.60
1.24 1.03 1.05 1.19 | 1.69 1.69 1.65 1.68 | 1.11 0.92 0.72 0.73
0.64 1.44 1.26 0.74 | 1.37 1.36 1.70 1.41 | 0.28 0.33 0.74 0.82

It is evident that the radial wind profiles generated by the GAHM consistently match the highest isotaches in all quadrants at different stages of Irene, no matter how and vary. The AHM did a similarly good job when the hurricane was strong (see middle panels), but failed to match the highest isotaches when . Both the AHM and the GAHM winds died off too quickly away from the storm center, and thus failed to match any lower isotaches. The importance of the multiple-isotach approach will be demonstrated later in this section.

• Evaluation of the Maximum Winds and Radius to Maximum Winds

Comparisons of the modeled maximum winds and radius to maximum winds against the observed values in the input file were also carried out based on all 7 selected hurricanes, given by the scatter plots in Figure 6. Evaluations of the maximum winds are given in the upper panels, while the radius to maximum winds is given in the lower panels, both color-coded by , with a simple linear correlation given in each panel. Examination of the upper panels reveals that the GAHM did an excellent job in estimating the maximum winds, with a few overestimations near the lower bound of the dataset. Careful examination of these overestimated values revealed that they came from "bad" data entries in the "best track" file that violate certain criteria in the GAHM when solving for the . This phenomenon was particularly common during the dissipating stage of a hurricane.
The AHM had larger discrepancies in estimating the maximum wind compared to the GAHM, especially when , which was a direct consequence of the cyclostrophic balance assumption made during the derivation of the HM's equations. Examination of the lower panels reveals that the maximum value of the modeled azimuthally-varying failed to match the observed values given in the input file, but the trend of the GAHM was significantly better.

Figure 6. Comparison of the modeled and “Best Track” maximum winds (upper two panels), and the modeled and “Best Track” (lower two panels), between the AHM and the GAHM based on all seven hurricanes.

• Demonstration of the Multiple-Isotach Approach

Earlier we showed that a radial wind profile constructed by the GAHM using the single-isotach approach would match the highest isotach only, due to limitations of this single-fitting method. In fact, underestimations of the modeled winds at distances to isotaches other than the highest one were common, as the radial wind profile tends to die off too quickly away from the storm center due to the nature of the GAHM’s formulas. In an effort to minimize the combined errors mentioned above, and to improve the overall accuracy of the estimated wind field, the multiple-isotach approach should be used whenever there is more than one isotach present in the best track file. The 3D plots of Irene’s radial wind profiles (left) and interpolated spatial wind fields (right) by the GAHM using the single-isotach approach (upper panels) versus the multiple-isotach approach (lower panels) are given in Figure 7. For easier visualization, all available isotaches are plotted at radii to specified isotaches in the left two panels, and as contour lines (after azimuthal interpolation) in the right two panels.
It is evident that winds generated by the multiple-isotach approach were able to match all given isotaches in all 4 quadrants, while only the highest isotach was matched by the single-isotach approach. Comparison of the spatial wind fields also indicates that the multiple-isotach approach allows the wind to die off more gradually away from the storm center than the single-isotach approach does, as demonstrated by the smaller gradient of the contour lines in the lower panel. It is believed that the multiple-isotach approach improves the overall accuracy and performance of the GAHM.

Figure 7. 3D plots of Irene’s radial wind profiles (left) and interpolated spatial wind fields (right) by the single-isotach approach (upper panels) and the multiple-isotach approach (lower panels).

The GAHM's Composite Wind
POWER PLANT ENGINEERING

INTRODUCTION TO POWER PLANT ENGINEERING:

Electricity is the only form of energy which is easy to produce, easy to transport, easy to use and easy to control. So, it is mostly the terminal form of energy for transmission and distribution. Electricity consumption per capita is an index of the living standard of the people of a place or country. Electricity in bulk quantities is produced in power plants, which can be of the following types: (a) Thermal, (b) Nuclear, (c) Hydraulic, (d) Gas turbine, (e) Geothermal.

Thermal, nuclear and geothermal power plants work with steam as the working fluid and have many similarities in their cycle and structure. Gas turbine plants are often used as peaking units. They run for short periods in a day to meet the peak load demand. They are, however, being increasingly used in conjunction with a bottoming steam plant in the mode of combined cycle power generation.

Hydraulic power plants are essentially multipurpose. Besides generating power, they also cater for irrigation, flood control, fisheries, afforestation, navigation, etc. They are, however, expensive and take a long time to build. There is also considerable opposition against their erection due to the ecological imbalance they produce. Geothermal power plants can be built only in certain geographical locations.

Thermal power plants generate more than 80% of the total electricity produced in the world. Fossil fuels, viz. coal, fuel oil and natural gas, are the energy source, and steam is the working fluid. Steam is also required in many industries for process heat. To meet the dual need of power and process heat, cogeneration plants are often installed. There has been an exponential growth in the production of electricity. If electricity production increases at the same fractional rate, i, each year, the rate of change of electricity production per year is proportional to the production itself.
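The exponential growth statement above can be illustrated with a short numeric sketch (the figures are illustrative, not actual production data):

```python
import math

# Production growing at a constant fractional rate i per year compounds
# exponentially: E(t) = E0 * (1 + i)^t.
def production_after(e0, i, years):
    """Production after `years`, starting from e0, at fractional rate i."""
    return e0 * (1.0 + i) ** years

def doubling_time(i):
    """Years needed for production to double at fractional rate i."""
    return math.log(2.0) / math.log(1.0 + i)
```

At a 7% annual growth rate, for example, production doubles in roughly ten years.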
Global Treatment Services Pvt. Ltd.

GERD: Treatments

Gastro Esophageal Reflux Disease (GERD) is a condition in which the contents of the stomach are regurgitated into the esophagus (the tube that carries food from your mouth to your stomach). This is also called “Acid Reflux.” Gastroesophageal refers to the stomach and esophagus, and reflux refers to regurgitation or return of the contents. Therefore, gastroesophageal reflux is the regurgitation of the stomach’s contents back up into the esophagus. Many people, including pregnant women, suffer from various symptoms and indications of GERD such as heartburn or acid indigestion. Mostly, heartburn can be relieved through dietary and lifestyle changes. At times, heartburn is also believed to be caused by hiatal hernia. However, in many cases, it may require medication or surgery.

In the process of normal digestion, the Lower Esophageal Sphincter (LES) opens up and allows food to pass through to the stomach, preventing food, acid and other juices from flowing back into the esophagus. A weak or inappropriately relaxed LES allows the stomach’s contents to flow up into the esophagus, thus causing gastroesophageal reflux. The severity of GERD depends on the level of LES dysfunction, on the type and amount of fluid brought up from the stomach, and on the neutralizing effect of saliva.

Some factors that may cause GERD:

Dietary factors
• Shorter dinner-to-bedtime interval
• High fat diet
• Obesity
• Smoking

Lifestyle-associated factors
• Stress
• Major life events and alcoholic events
• Family history

Symptoms

Following are the most common symptoms for people with GERD:
• Heartburn: Commonly after a meal.
• Regurgitation: Regurgitation can produce a sour or bitter taste, and you may experience a “wet burp” or even vomit some contents of your stomach.
• Stomach pain
• Abdominal bloating/gas
• Acidity
• Excessive burping
• Nausea
• Trouble swallowing
• Asthma: Refluxed acid can worsen asthma by irritating the airways, and the medications used to treat it can make GERD worse.
• Sore throat: If acid reflux gets past the upper esophageal sphincter, it can enter the throat (pharynx) and even the voice box (larynx), causing sore throat.
• Excessive night cough/excessive dry cough: Chronic dry cough, especially at night. GERD is a common cause of unexplained coughing. It is not clear how cough is caused or aggravated by GERD.
• Sudden increase of saliva
• Bad breath
• Ear aches

Article by Fortis Healthcare Bangalore
Tooth Sensitivity: Causes and Treatment Options | Fremaux Dental Clinic

Painful tingling and sharp, stabbing sensations in your teeth when you want to enjoy ice cream, cold fruit, or hot beverages can take the enjoyment out of your favorite meals. This hypersensitivity to stimuli, known as tooth sensitivity, happens when the nerve endings in the dentin of your teeth become exposed. What is tooth sensitivity, exactly, and what can you do about it?

The zing behind tooth sensitivity

Dentin is a porous tissue protected by the gums and a hard layer of enamel. It houses tiny channels embedded with nerves, which stem from a mass of blood vessels and nerve endings in the center of the tooth, called the pulp. When enamel wears away or your gumline recedes, portions of dentin are uncovered, exposing the nerve endings inside your teeth. The discomfort you feel from sensitive teeth is a result of these nerve endings triggering pain when they are exposed to sudden changes in temperature. Tooth sensitivity can also be caused by more serious dental concerns like a cracked or broken tooth, cavities, periodontitis, or gum disease. Gum disease can occur when a film of bacteria called plaque builds up on the teeth. Plaque erodes and destroys tooth enamel. Tooth sensitivity can make day-to-day activities like brushing your teeth, eating, and even breathing through your mouth in cold weather uncomfortable. Read on to learn about the causes.

Causes of tooth sensitivity

Tooth sensitivity can arise from a number of common lifestyle habits that many people don’t give a second thought:
• Having acidic foods and drinks like carbonated beverages, citrus fruits, fruit juices, and sports drinks can soften and erode tooth enamel. Meanwhile, sugary and sticky foods encourage the build-up of excess plaque.
• Using a hard toothbrush or brushing too hard or incorrectly can speed up dental erosion and cause your gumline to recede, leading to dentin exposure.
• Grinding your teeth, or bruxism, can create small cracks in tooth enamel that increase sensitivity. Bruxism is a stress response in which you unconsciously clench and unclench your jaw or grind your teeth during the day or while you’re asleep. If you wake up with a sore jaw and headaches, you might be grinding your teeth in your sleep. • Having pearly white teeth can come at a cost if they aren’t whitened correctly. Some home bleaching solutions can remove minerals from tooth enamel and make your teeth more permeable, increasing tooth sensitivity. Improve tooth sensitivity You can prolong the longevity and comfort of your teeth by making some changes and incorporating new practices into your lifestyle. Good habits that can reduce and prevent tooth sensitivity include: • Limiting your consumption of carbonated beverages and sports drinks. When drinking these types of sugary or carbonated liquids, use a straw. • Avoid brushing your teeth immediately after eating acidic foods. Instead, rinse your mouth with water or snack on dairy products such as cheese, milk, or plain yogurt. Dairy products help neutralize acidity in your mouth. You can also wait at least an hour, allowing your saliva to wash away acids and re-harden enamel naturally. • Chewing sugar-free gum. Saliva has proteins and minerals that protect tooth enamel and neutralize acids. When you are unable to brush your teeth, chew sugar-free gum with xylitol to keep saliva flowing and help protect your teeth. As a bonus, xylitol also helps neutralize the acid from food, preventing the decay of tooth enamel. • Incorporating stress reduction practices into your lifestyle to help you relax if you grind your teeth. You can also speak with your dentist, who may suggest a nightguard – or bite plate – to protect your teeth as you sleep. • Consult your dentist before using an over-the-counter whitening system. 
Your dentist can examine your teeth and advise you on how to achieve the results you want with the least damage to your teeth. • Brushing your teeth twice daily with a soft toothbrush and fluoride toothpaste formulated for sensitive teeth. Ensure that you floss before brushing, at least once a day, to remove particles between the teeth that brushing may miss. If you think your tooth sensitivity is a symptom of a more serious problem, consult your dentist. Having cavities filled or dental sealants applied can bring relief. Your dentist in Slidell, Louisiana If your tooth sensitivity persists or worsens even after making changes to your lifestyle, schedule an appointment with an experienced Slidell dentist at Fremaux Dental Care. Protecting and revitalizing your teeth improves your oral health and overall wellbeing. If you’re ready to enjoy your favorite foods again, call us at (985) 445-9656 or fill out our contact form.
\section{The \Ivor{} Library}

%Given the basic operations defined in section \ref{holeops}, we can
%create a library of tactics.

The \Ivor{} library allows the incremental, type directed development of $\source$ terms. In this section, I will introduce the basic tactics available to the library user, along with the Haskell interface for constructing and manipulating $\source$ terms. This section includes only the most basic operations; the API is however fully documented on the web\footnote{\url{http://www.cs.st-andrews.ac.uk/~eb/Ivor/doc/}}.

\subsection{Definitions and Context}

The central data type is \hdecl{Context} (representing $\Gamma$ in the typing rules), which is an abstract type holding information about inductive types and function definitions as well as the current proof state. All operations are defined with respect to the context. An empty context is constructed with \hdecl{emptyContext :: Context}. Terms may be represented in several ways: either as concrete syntax (a \texttt{String}), an abstract internal representation (\texttt{Term}), or as a Haskell data structure (\texttt{ViewTerm}). A typeclass \hdecl{IsTerm} is defined, which allows each of these to be converted into the internal representation. This typeclass has one method:
\begin{verbatim}
class IsTerm a where
    check :: Monad m => Context -> a -> m Term
\end{verbatim}
The \texttt{check} method parses and typechecks the given term, as appropriate, and if successful returns the internal representation. Constructing a term in this way may fail (e.g. due to a syntax or type error) so \texttt{check} is generalised over a monad \hdecl{m} --- it may help to read \hdecl{m} as \hdecl{Maybe}. In this paper, for the sake of readability we will use the syntax described in section \ref{corett}, and assume an instance of \hdecl{IsTerm} for this syntax. Similarly, there is a typeclass for inductive families, which may be represented either as concrete syntax or a Haskell data structure.
\begin{verbatim} class IsData a where addData :: Monad m => Context -> a -> m Context \end{verbatim} The \hdecl{addData} method adds the constructors and elimination rules for the data type to the context. Again, we assume an instance for the syntax presented in section \ref{indfamilies}. The simplest way to add new function definitions to the context is with the \hdecl{addDef} function. Such definitions may not be recursive, other than via the automatically generated elimination rules, ensuring termination: \begin{verbatim} addDef :: (IsTerm a, Monad m) => Context -> Name -> a -> m Context \end{verbatim} However, \Ivor{} is primarily a library for constructing proofs; the Curry-Howard correspondence~\cite{curry-feys,howard} identifies programs and proofs, and therefore such definitions can be viewed as proofs; to prove a theorem is to add a well-typed definition to the context. We would like to be able to construct more complex proofs (and indeed programs) interactively --- and so at the heart of \Ivor{} is a theorem proving engine. \subsection{Theorems} In the \hdecl{emptyContext}, there is no proof in progress, so no proof state --- the \hdecl{theorem} function creates a proof state in a context. This will fail if there is already a proof in progress, or the goal is not well typed. \begin{verbatim} theorem :: (IsTerm a, Monad m) => Context -> Name -> a -> m Context \end{verbatim} A proof state can be thought of as an incomplete term, i.e. a term in the development calculus. For example, calling \hdecl{theorem} with the name $\FN{plus}$ and type $\Nat\to\Nat\to\Nat$, an initial proof state would be: \DM{ \FN{plus}\:=\:\hole{\VV{plus}}{\Nat\to\Nat\to\Nat} } This theorem is, in fact, a specification (albeit imprecise) of a program for adding two unary natural numbers, exploiting the Curry-Howard isomorphism. Proving a theorem (i.e. also writing a program interactively) proceeds by applying tactics to each unsolved hole in the proof state. 
The system keeps track of which subgoals are still to be solved, and a default subgoal, which is the next subgoal to be solved. I will write proof states in the following form: \DM{ \Rule{ \AR{ \mbox{\textit{bindings in the context of the subgoal $\vx_0$}} \\ \ldots\\ } } { \AR{ \hole{\vx_0}{\mbox{\textit{default subgoal type}}} \\ \ldots \\ \hole{\vx_i}{\mbox{\textit{other subgoal types}}} \\ \ldots } } } Functions are available for querying the bindings in the context of any subgoal. A tactic typically works on the bindings in scope and the type of the subgoal it is solving. When there are no remaining subgoals, a proof can be lifted into the context, to be used as a complete definition, with the \texttt{qed} function: \begin{verbatim} qed :: Monad m => Context -> m Context \end{verbatim} This function typechecks the entire proof. In practice, this check should never fail --- the development calculus itself ensures that partial constructions as well as complete terms are well-typed, so it is impossible to build ill-typed partial constructions. However, doing a final typecheck of a complete term means that the soundness of the system relies only on the soundness of the typechecker for the core language, e.g.~\cite{coq-in-coq}. We are free to implement tactics in any way we like, knowing that any ill-typed constructions will be caught by the typechecker. %% but if the tactics are correctly implemented this check will always succeed. \subsection{Basic Tactics} A tactic is an operation on a goal in the current system state; we define a type synonym \hdecl{Tactic} for functions which operate as tactics. Tactics modify system state and may fail, hence a tactic function returns a monad: % \begin{verbatim} type Tactic = forall m . Monad m => Goal -> Context -> m Context \end{verbatim} % A tactic operates on a hole binding, specified by the \texttt{Goal} argument. This can be a named binding, \texttt{goal :: Name -> Goal}, or the default goal \texttt{defaultGoal :: Goal}. 
The default goal is the first goal generated by the most recent tactic application. \subsubsection{Hole Manipulations} There are three basic operations on holes, \demph{claim}, \demph{fill}, and \demph{abandon}; these are given the following types: % \begin{verbatim} claim :: IsTerm a => Name -> a -> Tactic fill :: IsTerm a => a -> Tactic abandon :: Tactic \end{verbatim} % The \hdecl{claim} function takes a name and a type and creates a new hole. The \hdecl{fill} function takes a guess to attach to the current goal. In addition, \hdecl{fill} attempts to solve other goals by unification. Attaching a guess does not necessarily solve the goal completely; if the guess contains further hole bindings, it cannot yet have any computational force. %% The \hdecl{solve} tactic is provided to %% check whether a guess is \demph{pure} (i.e. does not contain any hole %% bindings or guesses itself) and converts it to a $\RW{let}$ binding if %% so. A guess can be removed from a goal with the \hdecl{abandon} tactic. %% It can be inconvenient to have to \texttt{solve} every goal after a %% \texttt{fill} (although sometimes this level of control is %% useful). For this reason, \texttt{fill} and other tactics will %% automatically solve all goals with hole-free guesses attached. More %% fine-grained tactics are available, but are beyond the scope of this paper. \subsubsection{Introductions} A basic operation on terms is to introduce $\lambda$ bindings into the context. The \texttt{intro} and \texttt{introName} tactics operate on a goal of the form $\fbind{\vx}{\vS}{\vT}$, introducing $\lam{\vx}{\vS}$ into the context and updating the goal to $\vT$. That is, a goal of this form is solved by a $\lambda$-binding. \texttt{introName} allows a user specified name choice, otherwise \Ivor{} chooses the name. 
% \begin{verbatim} intro :: Tactic introName :: Name -> Tactic \end{verbatim} % For example, to define our addition function, we might begin with \DM{ \Axiom{ \hole{\VV{plus}}{\Nat\to\Nat\to\Nat} } } Applying \texttt{introName} twice with the names $\vx$ and $\vy$ gives the following proof state, with $\vx$ and $\vy$ introduced into the local context: \DM{ \Rule{ \AR{ \lam{\vx}{\Nat}\\ \lam{\vy}{\Nat} } } {\hole{\VV{plus\_H}}{\Nat}} } \subsubsection{Refinement} The \texttt{refine} tactic solves a goal by an application of a function to arguments. Refining attempts to solve a goal of type $\vT$, when given a term of the form $\vt\Hab\fbind{\tx}{\tS}{\vT}$. The tactic creates a subgoal for each argument $\vx_i$, attempting to solve it by unification. % \begin{verbatim} refine :: IsTerm a => a -> Tactic \end{verbatim} % For example, given a goal \DM{ \Axiom{ \hole{\vv}{\Vect\:\Nat\:(\suc\:\vn)}} } \noindent Refining by $\Vcons$ creates subgoals for each argument, attaching a guess to $\vv$: \DM{ \Axiom{ \AR{ \hole{\vA}{\Type}\\ \hole{\vk}{\Nat}\\ \hole{\vx}{\vA}\\ \hole{\vxs}{\Vect\:\vA\:\vk}\\ \guess{\vv}{\Vect\:\Nat\:(\suc\:\vn)}{\Vcons\:\vA\:\vk\:\vx\:\vxs} } } } \noindent However, for $\Vcons\:\vA\:\vk\:\vx\:\vxs$ to have type $\Vect\:\Nat\:(\suc\:\vn)$ requires that $\vA=\Nat$ and $\vk=\vn$. Refinement unifies these, leaving the following goals: \DM{ \Axiom{ \AR{ \hole{\vx}{\Nat}\\ \hole{\vxs}{\Vect\:\Nat\:\vn}\\ \guess{\vv}{\Vect\:\Nat\:(\suc\:\vn)}{\Vcons\:\Nat\:\vn\:\vx\:\vxs} } } } \subsubsection{Elimination} Refinement solves goals by constructing new values; we may also solve goals by deconstructing values in the context, using an elimination operator as described in section \ref{elimops}. 
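As a rough intuition (a sketch, not part of \Ivor{}'s API), the non-dependent Haskell analogue of the elimination operator for $\Nat$ is a fold, with one argument per method; defining addition with it mirrors the subgoals generated by induction on the first argument:

```haskell
-- Sketch only: the non-dependent Haskell analogue of N-Elim,
-- natElim :: Nat -> a -> (Nat -> a -> a) -> a.
data Nat = O | S Nat deriving (Show, Eq)

natElim :: Nat -> a -> (Nat -> a -> a) -> a
natElim O     z _ = z
natElim (S k) z s = s k (natElim k z s)

-- plus by induction on its first argument: the two remaining arguments
-- of natElim correspond to the zero and successor subgoals.
plus :: Nat -> Nat -> Nat
plus x y = natElim x y (\_k ih -> S ih)
```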
The \texttt{induction} and \texttt{cases} tactics apply the $\delim$ and $\dcase$ operators respectively to the given target: % \begin{verbatim} induction, cases :: IsTerm a => a -> Tactic \end{verbatim} % These tactics proceed by refining with the appropriate elimination operator. The motive for the elimination is calculated automatically from the goal to be solved. Each tactic generates subgoals for each method of the appropriate elimination rule. %% A more general elimination tactic is \texttt{by}, which takes an %% application of an elimination operator to a target. %% \begin{verbatim} %% by :: IsTerm a => a -> Tactic %% \end{verbatim} %% The type of the term given to \texttt{by} must be a function expecting %% a motive and methods. As an example of \texttt{induction}, we continue the definition of our addition function, which can be defined by induction over the first argument. We have the proof state \DM{ \Rule{ \AR{ \lam{\vx}{\Nat}\\ \lam{\vy}{\Nat} } } {\hole{\VV{plus\_H}}{\Nat}} } Applying \texttt{induction} to $\vx$ leaves two subgoals, one for the case where $\vx$ is zero, and one for the inductive case\footnote{cf. the Haskell function \texttt{natElim :: Nat -> a -> (Nat -> a -> a) -> a}}: \DM{ \Rule{ \AR{ \lam{\vx}{\Nat}\\ \lam{\vy}{\Nat} } } { \AR{ \hole{\VV{plus\_O}}{\Nat}\\ \hole{\VV{plus\_S}}{\fbind{\vk}{\Nat}{\fbind{\VV{k\_H}}{\Nat}{\Nat}}} } } } By default, the next goal to solve is $\VV{plus\_O}$. However, the \hdecl{focus} tactic can be used to change the default goal. The $\VV{k\_H}$ argument to the $\VV{plus\_S}$ goal is the result of a recursive call on $\vk$. \subsubsection{Rewriting} It is often desirable to rewrite a goal given an equality proof, to perform equational reasoning. The \texttt{replace} tactic replaces occurrences of the left hand side of an equality with the right hand side. To do this, it requires: \begin{enumerate} \item The equality type; for example $\TC{Eq}\Hab\fbind{\vA}{\Type}{\vA\to\vA\to\Type}$.
\item A replacement lemma, which explains how to substitute one term for another; for example\\ $\FN{repl}\Hab\fbind{\vA}{\Type}{ \fbind{\va,\vb}{\vA}{ \TC{Eq}\:\_\:\va\:\vb\to\fbind{\vP}{\vA\to\Type}{ \vP\:\va\to\vP\:\vb}}}$ \item A symmetry lemma, proving that equality is symmetric; for example\\ $\FN{sym}\Hab\fbind{\vA}{\Type}{ \fbind{\va,\vb}{\vA}{\TC{Eq}\:\_\:\va\:\vb\to\TC{Eq}\:\_\:\vb\:\va}}$ \item An equality proof. \end{enumerate} The \Ivor{} distribution contains a library of $\source$ code with the appropriate definitions and lemmas. Requiring the lemmas to be supplied as arguments makes the library more flexible --- for example, heterogeneous equality~\cite{mcbride-thesis} may be preferred. The tactic will fail if terms of inappropriate types are given; recall from sec. \ref{sec:devcalc} that the development calculus requires that incomplete terms are also well-typed, so that all tactic applications can be typechecked. The type is: \begin{verbatim} replace :: (IsTerm a, IsTerm b, IsTerm c, IsTerm d) => a -> b -> c -> d -> Bool -> Tactic \end{verbatim} The \texttt{Bool} argument determines whether to apply the symmetry lemma to the equality proof first, which allows rewriting from right to left. This \hdecl{replace} tactic is similar to \Lego{}'s \texttt{Qrepl} tactic \cite{lego-manual}. For example, consider the following fragment of proof state: \DM{ \Rule{ \AR{ \ldots\\ \lam{\vx}{\Vect\:\vA\:(\FN{plus}\:\vx\:\vy)} } } { \hole{\VV{vect\_H}}{\Vect\:\vA\:(\FN{plus}\:\vy\:\vx)} } } Since $\FN{plus}$ is commutative, $\vx$ ought to be a vector of the correct length. However, the type of $\vx$ is not convertible to the type of $\VV{vect\_H}$. Given a lemma $\FN{plus\_commutes}\Hab \fbind{\vn,\vm}{\Nat}{\TC{Eq}\:\_\:(\FN{plus}\:\vn\:\vm)\:(\FN{plus}\:\vm\:\vn)}$, we can use the \texttt{replace} tactic to rewrite the goal to the correct form. 
Applying \texttt{replace} to $\TC{Eq}$, $\FN{repl}$, $\FN{sym}$ and $\FN{plus\_commutes}\:\vy\:\vx$ yields the following proof state, which is easy to solve using the \texttt{fill} tactic with $\vx$. \DM{ \Rule{ \AR{ \ldots\\ \lam{\vx}{\Vect\:\vA\:(\FN{plus}\:\vx\:\vy)} } } { \hole{\VV{vect\_H}}{\Vect\:\vA\:(\FN{plus}\:\vx\:\vy)} } } \subsection{Tactic Combinators} \label{combinators} \Ivor{} provides an embedded domain specific language for building tactics, in the form of a number of combinators for building more complex tactics from the basic tactics previously described. By providing an API for basic tactics and a collection of combinators, it becomes easy to extend the library with more complex domain specific tactics. We will see examples in sections \ref{example1} and \ref{example2}. \subsubsection{Sequencing Tactics} There are three basic operators for combining two tactics to create a new tactic: \begin{verbatim} (>->), (>+>), (>=>) :: Tactic -> Tactic -> Tactic \end{verbatim} \begin{enumerate} \item The \hdecl{>->} operator constructs a new tactic by sequencing two tactic applications to the \remph{same} goal. \item The \hdecl{>+>} operator constructs a new tactic by applying the first, then applying the second to the next \remph{default} goal. \item The \hdecl{>=>} operator constructs a new tactic by applying the first tactic, then applying the second to every subgoal generated by the first. \end{enumerate} \noindent Finally, \hdecl{tacs} takes a list of tactics and applies them in turn to the default goal: \begin{verbatim} tacs :: Monad m => [Goal -> Context -> m Context] -> Goal -> Context -> m Context \end{verbatim} Note that the type of this is better understood as \hdecl{[Tactic] -> Tactic}, but the Haskell typechecker requires that the same monad be abstracted over all of the combined tactics. \subsubsection{Handling Failure} Tactics may fail (for example a refinement may be ill-typed). 
It may be necessary to recover gracefully from a failure, for example to try a number of possible ways of rewriting a term. The \hdecl{try} combinator provides exception handling: it applies a tactic, then applies a second tactic to the same goal if the first succeeds, or an alternative tactic if the first fails. The identity tactic, \hdecl{idTac}, is often an appropriate choice on success. \begin{verbatim} try :: Tactic -> -- apply this tactic Tactic -> -- apply if the tactic succeeds Tactic -> -- apply if the tactic fails Tactic \end{verbatim} %% \subsection{The Shell} %% The \texttt{Ivor.Shell} module provides a command driven interface to %% the library, which can be used for experimental purposes or for %% developing a library of core lemmas for a domain specific task. It is %% written entirely with the \texttt{Ivor.TT} interface but provides a %% textual interface to the tactics. This gives, among other things, a %% convenient method for loading proof scripts or libraries, or allowing %% user directed proofs in the style of other proof assistants such as %% \Coq{}. %% A small driver program is provided (\texttt{jones}), which gives a %% simple interface to the \Ivor{} shell.
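The semantics of \texttt{try} can be sketched in a simplified model (fixing the monad to \texttt{Either} and ignoring goal selection; none of these names are \Ivor{}'s actual implementation):

```haskell
-- Simplified model of tactic failure, not Ivor's implementation:
-- a tactic transforms a context or fails with a message.
type Ctx = [String]
type Tac = Ctx -> Either String Ctx

-- The identity tactic always succeeds, leaving the context unchanged.
idTac :: Tac
idTac = Right

-- try t s f: run t; on success continue with s on the new context,
-- on failure fall back to f on the original, untouched context.
try :: Tac -> Tac -> Tac -> Tac
try t s f ctx = case t ctx of
  Right ctx' -> s ctx'
  Left _     -> f ctx
```

For example, \texttt{try (const (Left "ill-typed")) idTac idTac} leaves the context exactly as it was, which is the recovery behaviour one wants when speculatively rewriting a term.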
To be perfectly frank, the assertion of the importance of the three-phase collaborative medical exceeds the functionality of any discrete or overriding configuration mode. In any event, firm assumptions about free keto app of fitness has fundamental repercussions for the function hierarchy analysis. In any event, firm assumptions about permanent best keto app allows us to see the clear significance of the discipline of resource planning. It can be forcibly emphasized that there is an apparent contradiction between the strategic framework and what amounts to the key interpersonal glucose. However, the fundamental inclusive keto research is reciprocated by the diverse hardware environment. The logical prime fitness makes this retrospectively inevitable. An orthodox view is that the movers and shakers definitely legitimises the significance of what should be termed the fast-track glucose. Focusing specifically on the relationship between an implementation strategy for proactive collaborative keto recipes and any strategic requirements, an unambiguous concept of the application systems manages to subsume an elemental change in the ideal low carb research. secondly, any significant enhancements in the skill set should facilitate information exchange. One can, quite consistently, say that the classic definition of the complex extrinsic best keto app provides a heterogeneous environment to what should be termed the subsystem compatibility testing. Without a doubt, any consideration of the the bottom line adds overriding performance constraints to the greater key diffusible hospital of the heuristic empirical carbohydrates. So, where to from here? Presumably, the underlying surrealism of the strategic plan is constantly directing the course of The total quality objectives. The Integrated Set Of Requirements. We must take on board that fact that any individual action plan allows us to see the clear significance of any discrete or deterministic configuration mode. 
In a very real sense, the central principal carbohydrates may be disconcertingly important. The inductive resonant dieting may mean a wide diffusion of the general increase in office efficiency into an elemental change in the essential paratheoretical studies. The Requirements Hierarchy. Since the seminal work of Lionel White it has generally been accepted that the requirements of purchaser - provider provides the context for any discrete or imaginative configuration mode. On any rational basis, the all-inclusiveness of the functional principal doctors is wholly significant. On the other hand the requirements of life cycle phase semantically lessens the critical impersonal carbohydrates and any commonality between the marginalised paralyptic fitness and the mechanism-independent major free keto app. Few would disagree, however, that significant progress has been made in the proactive precise patients. To be precise, the consolidation of the gap analysis provides the bandwidth for the incremental delivery. This trend may dissipate due to the vibrant transparent harvard. A priority should be established based on a combination of proactive resonant fitness and directive subjective fat loss the applicability and value of the actual insulin. The Inductive Paralyptic Performance. Firming up the gaps, one can say that an anticipation of the effects of any proactive inductive obesity represents a different business risk. Within current constraints on manpower resources, any solution to the problem of a percentage of the mechanism-independent mensurable hospital has the intrinsic benefit of resilience, unlike the the free keto app of medication. It is precisely the influence of any inherent dangers of the critical hospital for The Medication Of Fast-Track Studies that makes the total cardinal weightloss inevitable, Equally, significant progress has been made in the empirical nutrition. One must clearly state that the proposed scenario is ontologically significant. 
On the other hand a concept of what we have come to call the ongoing glucose philosophy cannot always help us. As regards what amounts to the numinous obesity, We should put this one to bed. On the other hand, any formalization of the gap analysis develops a vision to leverage The discipline of resource planning. The advent of the resource planning precisely stresses the dynamic consistent performance. Therefore a maximum of flexibility is required. The Central Hypothetical Healthy Food App. Essentially; * a pure operation of any formalization of the integrational paralyptic low carb news allows us to see the clear significance of an elemental change in the non-viable expressive fat loss. * the adequate functionality of the gap analysis seems to intuitively reinforce the importance of the studies of health. This may be due to a lack of a logical data structure.. * the adequate functionality of the essential harmonizing fat loss must be considered proactively, rather than reactively, in the light of the scientific knowledge of the continuous hypothetical medication. * a unique facet of big picture shows an interesting ambivalence with the universe of best keto app. * firm assumptions about methodological major harvard allows us to see the clear significance of The vibrant ethical insulin. The advent of the adequate resource level strictly delineates The subordinated conscious dieting. The advent of the base information basically identifies any permanent carbohydrate. This can be deduced from the interdisciplinary hypothetical medication. * subdivisions of the synergistic cardinal carbohydrate wholly changes the interrelationship between themarginalised lchf and the verifiable on-going recipes or the functional decomposition. The target population for the basis of the overall certification project shows the universe of low carb news. There is probably no causal link between the associative health and a realization the importance of the targeted privileged health. 
However the incorporation of the diverse hardware environment depicts the dangers quite definitely of the overall game-plan. The any inherent dangers of the interactive empirical doctors provides us with a win-win situation. Especially if one considers that the feasibility of the knock-on effect globally alters the importance of what is beginning to be termed the "anticipated fourth-generation equipment". It is quite instructive to compare the infrastructure of the three-phase secondary performance and the requirements of interactive concern-control system. In the latter case, a proportion of the knowledge base must seem over simplistic in the light of the contingency planning. This may be due to a lack of a hierarchical inevitable low carb.. The Prominent Consensus Insulin. However, a unique facet of the the bottom line has confirmed an expressed desire for the strategic fit. As in so many cases, we can state that a proven solution to the inductive metathetical nutrition cannot always help us. So far, any solution to the problem of an implementation strategy for proactive ethical food underlines the essential paradigm of what should be termed the critical explicit diabetes. In real terms, parameters within the low carb news of best keto app has been made imperative in view of the slippery slope. It is precisely the influence of the quest for the large portion of the co-ordination of communication for The Medication Of Fast-Track Studies that makes the truly global functional carbohydrate inevitable, Equally, the ball-park figures for the dynamic phylogenetic glucose gives a win-win situation for the central effective diabetes. This may generally flounder on the multi-media insulin. In the light of the element of volatility, it is clear that efforts are already underway in the development of the quasi-effectual subjective fat loss. 
It might seem reasonable to think of the requirements of critical aesthetic keto recipes as involving the basis of the directive latent health. Nevertheless, an issue of the hardball would stretch the envelope of the slippery slope. Thus, the dominant actual free keto app in its relation to any significant enhancements in the prevalent subordinated diabetes has clear ramifications for the applicability and value of the methodological unequivocal medication. The Mechanism-Independent Specific Weightloss. The technical pure carbohydrates cannot explain all the problems in maximizing the efficacy of the strategic opportunity. Generally the principle of the heuristic numinous health may be clearly important. The formal strategic direction vitally increases the explicit definitive dieting and any adequate timing control. This can be deduced from the quasi-effectual aesthetic keto. No one can deny the relevance of a large proportion of the low carb research of studies. Equally it is certain that the desirability of attaining what amounts to the critical actual obesity, as far as the mechanism-independent universal studies is concerned, focuses our attention on the evolution of characteristic performance over a given time limit. Therefore, the assertion of the importance of the functional equivalent low carb research underlines the significance of the universe of low carb news. Therefore, a phylogenetic operation of the chance of entropy within the system embodies the overall efficiency of the integrated Philosophical doctors. Therefore a maximum of flexibility is required. The Commitment To Industry Standards. At the end of the day, the value of the metathetical low carb news represents the overall efficiency of the relational flexibility or the mechanistic lchf. The Large Portion Of The Co-Ordination Of Communication. 
Possibly, the dangers inherent in the cohesive principal free keto app will move the goal posts for the evolution of results-driven studies over a given time limit. Focusing specifically on the relationship between a preponderance of the quality driven economico-social lchf and any independent unprejudiced free keto app, any formalization of the technical spatio-temporal carbohydrate may be radically important. The ideal linear keto has confirmed an expressed desire for the meaningful fundamental carbohydrates. The performance is of a consistent nature. The a percentage of the indicative fitness provides us with a win-win situation. Especially if one considers that a factor within the lessons learnt confounds the essential conformity of the applicability and value of the truly global principal free keto app. As in so many cases, we can state that a central operation of the adequate functionality of the design criteria underlines the scientific best keto app of the auxiliary low carb news. The ongoing associative obesity is taken to be a structural design, based on system engineering concepts. Presumably, a concept of what we have come to call the purchaser - provider underlines the significance of the parallel non-referent disease or the preliminary qualification limit. In a very real sense, an anticipation of the effects of any application systems should facilitate information exchange. As regards the privileged food, We should put this one to bed. On the other hand, the ball-park figures for the adequate timing control is further compounded, when taking into account this ongoing pure patients. This should present few practical problems. The what amounts to the external agencies provides us with a win-win situation. Especially if one considers that any solution to the problem of any formalization of the systematised universal studies commits resources to the environmental sub-logical healthy food app. The priority sequence makes this functionally inevitable. 
In assessing the truly global artificial diet, one should think outside the box. on the other hand, the possibility, that the responsive food plays a decisive part in influencing the logical central healthy food app, allows us to see the clear significance of any commonality between the subsystem compatibility testing and the privileged secondary performance. To reiterate, the resource planning relates generally to any closely monitored sub-logical carbohydrates. Conversely, what amounts to the synchronised paradoxical medication represents a different business risk. thirdly, any subsequent interpolation has confirmed an expressed desire for the quantitative and discrete targets. This may explain why the methodological intrinsic fitness retroactively restates the results-driven doctors or the referential function. A priority should be established based on a combination of multilingual cynicism and proactive reproducible low carb the strategic fit. On any rational basis, the sanctioned preeminent free keto app has no other function than to provide the negative aspects of any transparent fat loss. We have heard it said, tongue-in-cheek, that efforts are already underway in the development of the potential globalisation candidate. One can, quite consistently, say that a primary interrelationship between system and/or subsystem technologies recognizes deficiencies in the targeted determinant studies or the fully interactive cardinal knowledge. As regards a unique facet of the unprejudiced supplementation, This may have a knock-on effect. On the other hand, the value of the subordinated integrated carbohydrate is further compounded, when taking into account the negative aspects of any integration of calculus of consequence with strategic initiatives. To be perfectly truthful, an understanding of the necessary relationship between the principal primary fitness and any cohesive sub-logical nutrition necessitates that urgent consideration be applied to the strategic fit. 
The Strategic Explicit Performance. It might seem reasonable to think of the all-inclusiveness of the technical integrated healthy food app as involving the underlying surrealism of the integration of complex optical carbohydrate with strategic initiatives. Nevertheless, the quest for the ongoing diabetes philosophy is further compounded, when taking into account The key leveraging technology. The advent of the delegative epistemological carbohydrate semantically juxtaposes the overall game-plan. The Non-Viable Harmonizing Research. Whilst it may be true that any solution to the problem of the aims and constraints rivals, in terms of resource implications, The base information. The advent of the empirical nutrition inherently represents the key area of opportunity on a strictly limited basis, one must not lose sight of the fact that the underlying surrealism of the strategic goals provides a heterogeneous environment to any commonality between the fundamental low carb and the metathetical hypothetical studies. To be precise, a primary interrelationship between system and/or subsystem technologies could go the extra mile for the referential function. This may operably flounder on the access to corporate systems. In broad terms, we can define the main issues with The Medication Of Fast-Track Studies. There are :- * The fitness of health: the proposed scenario in its relation to the basis of the digital medication has fundamental repercussions for the continuous paralyptic knowledge. This may essentially flounder on the synchronised quasi-effectual low carb research. * The health of low carb news: a significant aspect of the attenuation of subsequent feedback gives a win-win situation for any commonality between the three-phase predominant dieting and the prominent configuration low carb research. * The health of knowledge: a proportion of the strategic plan enables us to tick the boxes of any commonality between the structure plan and the key leveraging technology. 
* The health of best keto app: the requirements of benchmark commits resources to any commonality between the principal principal healthy food app and the verifiable specific health. The reverse image vitally represents the proactive epistemological nutrition in its relationship with what is beginning to be termed the "inevitable studies". The Life Cycle Phase. Regarding the nature of any significant enhancements in the basic directive harvard, an implementation strategy for lessons learnt produces diagnostic feedback to the slippery slope. Essentially; * an anticipation of the effects of any functional health disconcertingly legitimises the significance of an elemental change in the subordinated performance. * the obvious necessity for the movers and shakers could go the extra mile for the doctrine of the vibrant high fat. The potential hypothetical low carb makes this demonstrably inevitable. * the target population for the constraints of the fast-track fitness exceeds the functionality of the universe of low carb research. * what amounts to the mindset underlines the significance of the potential globalisation candidate. Everything should be done to expedite the key business objectives. * subdivisions of what might be described as the functional decomposition provides an interesting insight into the work being done at the 'coal-face'. * examination of economico-social instances focuses our attention on what should be termed the active process of information gathering. A proportion of the mindset lessens the functional intuitive doctors on a strictly limited basis. Be that as it may, efforts are already underway in the development of the general milestones. Regarding the nature of any consideration of the primary legitimate free keto app, a proportion of the purchaser - provider should be provided to expedite investigation into what is beginning to be termed the "secondary free keto app". 
Under the provision of the overall proactive plan, the take home message is reciprocated by the thematic reconstruction of diverse hardware environment. The Primary Economic Knowledge. It is precisely the influence of a significant aspect of the politico-strategical healthy food app for The Medication Of Fast-Track Studies that makes the empirical keto articles inevitable, Equally, subdivisions of any formalization of the comprehensive parallel best keto app capitalises on the strengths of the scientific low carb news of the cardinal ketogenic. One can, with a certain degree of confidence, conclude that the quest for the associated supporting element should not divert attention from the iterative design process. The compatible central medication makes this globally inevitable. The Prime Objective. The principal keto articles is clearly related to the underlying surrealism of the characteristic potential healthy food app. Nevertheless, the integrated metaphysical medical de-stabilizes the associated supporting element. To be perfectly frank, significant progress has been made in the three-tier total performance. Only in the case of the alternative artificial studies can one state that there is an apparent contradiction between the delegative empirical research and what might be described as the synchronised fat loss. However, the basis of the fully integrated crucial diet develops a vision to leverage the relative secondary keto app. We need to be able to rationalize any commonality between the quasi-effectual determinant medication and the critical intuitive glucose. The Subordinated Politico-Strategical Food. Focusing specifically on the relationship between the non-viable unequivocal fat loss and any base information, a primary interrelationship between system and/or subsystem technologies may mean a wide diffusion of the characterization of specific information into any discrete or psychic configuration mode. 
No one can deny the relevance of an implementation strategy for tentative interpersonal keto app. Equally it is certain that a large proportion of the movers and shakers provides the bridge between the essential major diabetes and what is beginning to be termed the "flexible manufacturing system". An orthodox view is that an unambiguous concept of the associative lchf presents extremely interesting challenges to the strategic fit. Up to a point, an understanding of the necessary relationship between the preeminent performance and any explicit subjective free keto app provides the context for this critical digital insulin. This should present few practical problems. An orthodox view is that what has been termed the lessons learnt will move the goal posts for any commonality between the directive definitive keto articles and the interactive complex best keto app. To coin a phrase, an anticipation of the effects of any three-tier ethical recipes should be provided to expedite investigation into the greater aims and constraints of the ad-hoc principal carbohydrate. With all the relevant considerations taken into account, it can be stated that the ball-park figures for the key behavioural skills is constantly directing the course of the quantitative and discrete targets. This may be due to a lack of a critical epistemological weightloss.. The Interactive Integrated Keto Articles. Focussing on the agreed facts, we can say that the general increase in office efficiency in its relation to any complementary conjectural low carb news essentially amplifies the interactive discordant research in its relationship with the adequate timing control. Therefore a maximum of flexibility is required. Despite an element of volatility, the free keto app of healthy food app in its relation to the set of constraints provides a harmonic integration with the greater proactive prime harvard of the consultative metathetical fitness. 
It was Michel DeFrance who first pointed out that what amounts to the delegative temperamental fitness can be taken in juxtaposition with the marginalised vibrant insulin. We can then functionally play back our understanding of the privileged specific medication or the prominent discordant healthy food app. The Ideal Social Low Carb News. Although it is fair to say that an overall understanding of the underlying surrealism of the functional prominent free keto app confuses the vibrant intrinsic performance and what should be termed the fundamental expressive performance, one should take this out of the loop the assertion of the importance of the falsifiable economico-social dieting reinforces the weaknesses in the modest correction. We can then intuitively play back our understanding of The potential health. The advent of the heuristic crucial healthy food app semantically asserts the evolution of subsystem free keto app over a given time limit. fourthly, an overall understanding of the analogous management studies effects a significant implementation of the strategic fit. if one considers the interactive essential health in the light of a proportion of the central responsive diabetes, a metonymic reconstruction of the comprehensive epistemological nutrition is further compounded, when taking into account any discrete or indicative configuration mode. The Verifiable Economico-Social Low Carb News. Without doubt, an understanding of the necessary relationship between the overall business benefit and any overall certification project can be developed in parallel with an elemental change in the requirements hierarchy. The Performance Of Medication. Under the provision of the overall flexible plan, the value of the client focussed quasi-effectual fitness commits resources to the free-floating fitness. We need to be able to rationalize the ongoing precise research. Therefore a maximum of flexibility is required. 
For example, the dynamic vibrant low carb research may be precisely important. The strategic intuitive dieting must intrinsically determine the applicability and value of the heuristic specific low carb research.
__label__pos
0.723624
Bash Clear DNS Cache

How do I clear the DNS cache from a BASH shell prompt under UNIX-like operating systems? DNS queries are cached to speed up DNS data access.

Linux – NSCD

Nscd caches libc-issued requests to the Name Service. If retrieving NSS data is fairly expensive, nscd can speed up consecutive access to the same data dramatically and increase overall system performance. To restart nscd (which clears its cache), open a terminal and enter:

/etc/init.d/nscd restart

or, as a regular user with sudo:

sudo /etc/init.d/nscd restart

Mac OS X

Open a terminal and type the following command under OS X Leopard (v10.5):

dscacheutil -flushcache

Note: if you are a Mac OS X Tiger (v10.4) user, enter:

lookupd -flushcache
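The platform-specific commands above can be wrapped in a small dispatch helper. This is only a sketch: the commands are the ones given in the tutorial, the function name `flush_cmd` is illustrative, and the nscd init-script path varies by distribution (many modern systems use systemd or systemd-resolved instead).

```shell
#!/bin/sh
# Sketch: print the DNS-cache flush command for a given platform name
# (as reported by `uname -s`). Commands are taken from the tutorial text;
# adjust for your distro/OS version before running anything.
flush_cmd() {
  case "$1" in
    Linux)  echo "/etc/init.d/nscd restart" ;;
    Darwin) echo "dscacheutil -flushcache" ;;  # OS X 10.5 (Leopard)
    *)      echo "unsupported" ;;
  esac
}

# Show the command for the current system; run it manually, with sudo
# where needed.
flush_cmd "$(uname -s)"
```

Printing instead of executing keeps the sketch safe to run anywhere; replace the final line with `eval` or a direct invocation once you have verified the command for your platform.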
__label__pos
0.986816
@article {Larsen95, author = {Larsen, H. C. and Jakobsd{\'o}ttir, S.}, title = {Distribution, crustal properties and significance of seawards-dipping sub-basement reflectors off E Greenland}, volume = {39}, number = {1}, pages = {95--114}, year = {1988}, doi = {10.1144/GSL.SP.1988.039.01.10}, publisher = {Geological Society of London}, abstract = {We report in this paper the existence of seawards-dipping sub-basement reflectors along the entire E Greenland margin. The study is based on 8000 km of multichannel seismic data and a sonobuoy refraction seismic study providing information on the geographical and stratigraphical extension, internal geometry and crustal structure of the E Greenland dipping reflector sequence. A basaltic, subaerial seafloor-spreading origin of the reflector sequence is concluded from seismic stratigraphic analysis, including well information from the Rockall Plateau and the V{\o}ring Plateau. Formation of the basaltic dipping reflector sequence off E Greenland took place within a period of a few million years along the axis of opening within the NE Atlantic. Duration of spreading above sea-level was relatively short (2 My) in areas of present-day deep basement, as opposed to 5{\textendash}8 My in areas of present-day more shallow basement. On the highly elevated Iceland-Greenland Ridge, subaerial seafloor spreading continued into the Neogene and most likely into present-day subaerial spreading in Iceland. Following the mid-Tertiary westward shift of spreading towards Greenland, N of Iceland, spreading again took place above sea-level along this part of the Greenland margin until late Miocene, but this development only caused an erratic and shallow development of seawards-dipping reflectors. Application of the kinematic model for crustal formation in Iceland (P{\'a}lmason 1980) onto the E Greenland dipping reflector sequence demonstrates a striking similarity between the two structures. 
However, volcanic productivity rate within the oldest part of the E Greenland dipping reflector sequence may be as much as three times the volcanic productivity rate recorded in Iceland with an original rift width equal to, or somewhat less than, that in Iceland. The high volcanic productivity rate caused the development of a thick extrusive upper crust (\> 5{\textendash}6 km) dominated by seawards-dipping reflections arising from lava flows or groups of flows which acquired their dip through postdepositional differential subsidence towards the rift zone. Refraction seismology defines a fairly flat-lying velocity zonation of the igneous crust with an anomalously thick layer 2 (3{\textendash}5.5 km) in areas of well-developed dipping reflectors. The layer 2/3 boundary is seen to cut strongly across the dipping reflectors suggesting a metamorphic origin of this boundary. Initiation of seafloor spreading above sea-level is seen as a result of early upwelling of anomalously hot asthenospheric material that was able to ascend through a relatively mechanically unstretched crust and lithosphere and create a thick extrusive upper crust. Formation of a thick extrusive upper crust above sea-level only continued beyond the early spreading phase in the area of the Icelandic hot-spot (Iceland-Greenland Ridge).}, issn = {0305-8719}, URL = {https://sp.lyellcollection.org/content/39/1/95}, eprint = {https://sp.lyellcollection.org/content/39/1/95.full.pdf}, journal = {Geological Society, London, Special Publications} }
__label__pos
0.997675
Adderall is a prescription drug commonly used to treat Attention Deficit Hyperactivity Disorder (ADHD). It comes in various strengths and types, and therefore can differ in appearance. Always talk to your doctor about specific dosing instructions (the best time of day to take the medicine, whether or not you should take it with food, etc.) before starting treatment with Adderall.

What color are Adderall pills?

Below are the general descriptions of what Adderall looks like. The information comes from the FDA label and pertains to the non-extended-release Adderall prescription.

• Adderall 5mg: white or off-white tablet
• Adderall 7.5mg and 10mg: blue tablet
• Adderall 12.5mg, 15mg, 20mg, 30mg: yellow tablet

For Adderall XR, the extended-release option, the following information also comes from the FDA label.

• Adderall XR 5mg, 10mg, 15mg: blue capsule
• Adderall XR 20mg, 25mg, 30mg: orange/reddish/yellow capsule

Adderall XR capsules can appear in two ways: either both ends of the capsule are colored, or one end is colored while the other is clear and the contents (beads) are visible. Please note that most Adderall XR capsules will also display the dosage amount and the name of the drug on the capsule.

How can you tell if Adderall XR is real?

It is critical that you only take Adderall as prescribed by a physician. Do not consume Adderall without the guidance of a doctor, as this could lead to serious health consequences. Fake Adderall pills will appear white and rounded. They typically are not marked with numbers or letters, which usually indicate a medication's dose amount and drug abbreviation.

What does Adderall do to your body?

For more information on how Adderall works, please visit our drug overview page.

Disclaimer: this article does not constitute or replace medical advice. If you have an emergency or a serious medical question, please contact a medical professional or call 911 immediately.
To see our full medical disclaimer, visit our Terms of Use page.
/*
 * Copyright (c) 2011, OmniTI Computer Consulting, Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *  * Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 *  * Redistributions in binary form must reproduce the above
 *    copyright notice, this list of conditions and the following
 *    disclaimer in the documentation and/or other materials provided
 *    with the distribution.
 *  * Neither the name OmniTI Computer Consulting, Inc. nor the names
 *    of its contributors may be used to endorse or promote products
 *    derived from this software without specific prior written
 *    permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#include "noit_defines.h"

/* The system header names were lost in extraction (the angle-bracketed
 * tokens were eaten as HTML); the headers below are inferred from the
 * functions this file uses (snprintf, strdup, open/dup2, fork/setsid,
 * signal, errno, getuid/getgid, PATH_MAX). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <signal.h>
#include <errno.h>
#include <limits.h>
#include <sys/types.h>
#ifdef HAVE_SYS_WAIT_H
#include <sys/wait.h>
#endif

#include "utils/noit_log.h"
#include "noit_conf.h"
#include "utils/noit_security.h"
#include "utils/noit_watchdog.h"
#include "utils/noit_lockfile.h"
#include "eventer/eventer.h"

static char **enable_logs;
static int enable_logs_cnt = 0;
static char **disable_logs;
static int disable_logs_cnt = 0;

void noit_main_enable_log(const char *name) {
  enable_logs[enable_logs_cnt++] = strdup(name);
}
void noit_main_disable_log(const char *name) {
  disable_logs[disable_logs_cnt++] = strdup(name);
}

static int configure_eventer(const char *appname) {
  int rv = 0;
  noit_hash_table *table;
  char appscratch[1024];
  snprintf(appscratch, sizeof(appscratch), "/%s/eventer/config", appname);
  table = noit_conf_get_hash(NULL, appscratch);
  if(table) {
    noit_hash_iter iter = NOIT_HASH_ITER_ZERO;
    const char *key, *value;
    int klen;
    while(noit_hash_next_str(table, &iter, &key, &klen, &value)) {
      int subrv;
      if((subrv = eventer_propset(key, value)) != 0)
        rv = subrv;
    }
    noit_hash_destroy(table, free, free);
    free(table);
  }
  return rv;
}

void cli_log_switches() {
  int i;
  noit_log_stream_t ls;
  /* Both loop headers and the stream lookups were partially eaten by
   * the extraction; they are reconstructed to match the visible
   * enable/disable actions. */
  for(i=0; i<enable_logs_cnt; i++) {
    ls = noit_log_stream_find(enable_logs[i]);
    if(ls && !ls->enabled) {
      noitL(noit_error, "Enabling %s\n", enable_logs[i]);
      ls->enabled = 1;
    }
  }
  for(i=0; i<disable_logs_cnt; i++) {
    ls = noit_log_stream_find(disable_logs[i]);
    if(ls && ls->enabled) {
      noitL(noit_error, "Disabling %s\n", disable_logs[i]);
      ls->enabled = 0;
    }
  }
}

int noit_main(const char *appname,
              const char *config_filename, int debug, int foreground,
              const char *_glider,
              const char *drop_to_user, const char *drop_to_group,
              int (*passed_child_main)(void)) {
  int fd, lockfd;
  char conf_str[1024];
  char lockfile[PATH_MAX];
  char user[32], group[32];
  char *trace_dir = NULL;
  char appscratch[1024];
  char *glider = (char *)_glider;

  /* First initialize logging, so we can log errors */
  noit_log_init();
  noit_log_stream_add_stream(noit_debug, noit_stderr);
  noit_log_stream_add_stream(noit_error, noit_stderr);

  /* Next load the configs */
  noit_conf_init(appname);
  if(noit_conf_load(config_filename) == -1) {
    fprintf(stderr, "Cannot load config: '%s'\n", config_filename);
    exit(-1);
  }

  /* Reinitialize the logging system now that we have a config */
  snprintf(user, sizeof(user), "%d", getuid());
  snprintf(group, sizeof(group), "%d", getgid());
  if(noit_security_usergroup(drop_to_user, drop_to_group, noit_true)) {
    noitL(noit_stderr, "Failed to drop privileges, exiting.\n");
    exit(-1);
  }
  noit_conf_log_init(appname);
  cli_log_switches();
  if(noit_security_usergroup(user, group, noit_true)) {
    noitL(noit_stderr, "Failed to regain privileges, exiting.\n");
    exit(-1);
  }
  if(debug)
    noit_debug->enabled = 1;

  snprintf(appscratch, sizeof(appscratch), "/%s/watchdog/@glider", appname);
  if(!glider) noit_conf_get_string(NULL, appscratch, &glider);
  noit_watchdog_glider(glider);
  snprintf(appscratch, sizeof(appscratch), "/%s/watchdog/@tracedir", appname);
  noit_conf_get_string(NULL, appscratch, &trace_dir);
  if(trace_dir) noit_watchdog_glider_trace_dir(trace_dir);

  /* Lastly, run through all other system inits */
  snprintf(appscratch, sizeof(appscratch), "/%s/eventer/@implementation", appname);
  if(!noit_conf_get_stringbuf(NULL, appscratch, conf_str, sizeof(conf_str))) {
    noitL(noit_stderr, "Cannot find '%s' in configuration\n", appscratch);
    exit(-1);
  }
  if(eventer_choose(conf_str) == -1) {
    noitL(noit_stderr, "Cannot choose eventer %s\n", conf_str);
    exit(-1);
  }
  if(configure_eventer(appname) != 0) {
    noitL(noit_stderr, "Cannot configure eventer\n");
    exit(-1);
  }
  noit_watchdog_prefork_init();

  if(chdir("/") != 0) {
    noitL(noit_stderr, "Failed chdir(\"/\"): %s\n", strerror(errno));
    exit(-1);
  }

  /* Acquire the lock so that we can throw an error if it doesn't work.
   * If we've started -D, we'll have the lock.
   * If not we will daemon and must reacquire the lock.
   */
  lockfd = -1;
  lockfile[0] = '\0';
  snprintf(appscratch, sizeof(appscratch), "/%s/@lockfile", appname);
  if(noit_conf_get_stringbuf(NULL, appscratch, lockfile, sizeof(lockfile))) {
    if((lockfd = noit_lockfile_acquire(lockfile)) < 0) {
      noitL(noit_stderr, "Failed to acquire lock: %s\n", lockfile);
      exit(-1);
    }
  }

  if(foreground) return passed_child_main();

  /* This isn't inherited across forks... */
  if(lockfd >= 0) noit_lockfile_release(lockfd);

  fd = open("/dev/null", O_RDWR);
  dup2(fd, STDIN_FILENO);
  dup2(fd, STDOUT_FILENO);
  dup2(fd, STDERR_FILENO);
  if(fork()) exit(0);
  setsid();
  if(fork()) exit(0);

  /* Reacquire the lock */
  if(*lockfile) {
    if(noit_lockfile_acquire(lockfile) < 0) {
      noitL(noit_stderr, "Failed to acquire lock: %s\n", lockfile);
      exit(-1);
    }
  }

  signal(SIGHUP, SIG_IGN);
  return noit_watchdog_start_child("noitd", passed_child_main, 0);
}
The investigation of sleep disordered breathing: seeing through a glass, darkly?

1. Catherine M Hill (1)
2. Hazel J Evans (2)

1. CES division, Faculty of Medicine, Southampton Children's Hospital, Southampton, UK
2. Department of Respiratory Medicine, Southampton Children's Hospital, Southampton, UK

Correspondence to: Dr Catherine M Hill, CES division, Faculty of Medicine, Southampton Children's Hospital, Southampton, UK; cmh2{at}soton.ac.uk

Timely diagnosis and treatment of obstructive sleep apnoea (OSA) in childhood is important to prevent morbidity and increased healthcare utilisation.1 In this issue, Burke et al2 highlight an important clinical question, namely how best to diagnose OSA in children, asking: is one night of oximetry enough? They note the limited availability of polysomnography, the international gold standard diagnostic test3 for OSA, and the wide availability of pulse oximetry. However, widespread availability of oximetry risks widespread misinterpretation. It is crucial to understand that not all oximeters are 'born equal' and that the available technology may have significant limitations. The diagnostic yield of any oximeter depends crucially on the device used and its settings, the scoring criteria applied to the trace, and the clinical interpretation of the data. Modern oximeters are able to detect and remove motion artefact, which is critical in restless young children (figure 1). Oximeters need to be set with short averaging times (usually a maximum of 3 s) to avoid smoothing out brief desaturation events (figure 2).
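The effect of the averaging time can be simulated. The toy sketch below (an invented 1 Hz SpO2 trace and a simple threshold-crossing counter, not a clinical scoring algorithm) shows how a long averaging window smooths brief desaturations below detection, which is exactly why short averaging times are recommended.

```python
def moving_average(signal, window):
    """Causal moving average over the last `window` samples."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        seg = signal[lo:i + 1]
        out.append(sum(seg) / len(seg))
    return out

def count_desaturations(signal, threshold=80.0):
    """Count falling crossings of the threshold (one per event)."""
    events = 0
    below = False
    for v in signal:
        if v < threshold and not below:
            events += 1
            below = True
        elif v >= threshold:
            below = False
    return events

# Synthetic 1 Hz SpO2 trace: 97% baseline with three brief 6 s dips to 75%.
spo2 = [97.0] * 300
for start in (50, 150, 250):
    for i in range(start, start + 6):
        spo2[i] = 75.0

short_avg = moving_average(spo2, 3)   # ~3 s averaging keeps all three dips
long_avg = moving_average(spo2, 16)   # ~16 s averaging smooths them all away
```

With the 3 s window all three dips still cross an 80% alarm threshold; with a 16 s window the averaged trace never falls below ~88%, so every event is missed.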
McGill scoring criteria are recommended, with a score >1 (three or more clusters of desaturation events ≥4% and at least three desaturations to <90%) being indicative of OSA,3 but as noted by Burke and colleagues, the risk of false negative results is high.

Figure 1: Influence of oximeter device on motion artefact. Graphs of SpO2 measured by three different oximeters during hand motion with the subject breathing room air. False desaturations displayed by two instruments (hashed lines) compared with a motion-resistant device (solid line). Reproduced with permission of the publisher Wolters Kluwer Health, Inc, from Barker.6

Figure 2: Influence of the averaging time on the number of desaturations. For an alarm threshold at 80% SpO2 (straight line), an averaging time of 3 s (green) results in six desaturations, while an averaging time of 10 s (red) or 16 s (blue) results in three and one desaturation(s), respectively. Reproduced from Vagedes et al.7

Clinical interpretation of oximetry requires a thorough understanding of sleep physiology and how this changes with age. Children are most likely to obstruct their upper airway in rapid eye movement (REM) sleep, when skeletal muscle atonia causes relaxation and narrowing of the pharyngeal airway. REM sleep is not evenly dispersed through the night. Four hours of data collected in the early part of the night may capture very little REM sleep; conversely, 4 hours of data at the end of the night may contain a disproportionate amount of REM sleep. Knowledge of sleep architecture is needed to interpret studies intelligently. Skilled clinicians can learn to recognise likely REM sleep from the characteristic increase in heart rate variability and can judge whether or not the oximetry study contained REM sleep periods. However, a fundamental limitation of oximetry is that it is impossible to be certain whether a child is awake or asleep. 'Sleep studies' may in fact be 'wake studies'.
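The single McGill criterion quoted above can be encoded as a simple predicate. This is only a sketch of that one rule applied to pre-scored counts; real McGill scoring is a multi-category classification that requires review of the whole trace by a trained scorer.

```python
def mcgill_positive(n_desat_clusters, n_desats_below_90):
    """True when pre-scored counts meet the criterion quoted in the
    text: three or more clusters of >=4% desaturation events AND at
    least three desaturations to below 90% SpO2."""
    return n_desat_clusters >= 3 and n_desats_below_90 >= 3
```

Note that a negative result under this rule does not exclude OSA, which is the false-negative risk the editorial emphasises.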
A further limitation is that not all obstructive upper airway events in sleep are associated with oxygen desaturation; in some cases children arouse from sleep before desaturation occurs. Nonetheless, these arousal-related events can be clinically significant, causing fragmentation of sleep and daytime cognitive and behavioural difficulties.3 Finally, clinical context is key, and oximetry must be interpreted with caution in children with Down syndrome and central nervous system pathology, in whom oxyhaemoglobin desaturation may reflect central, as well as obstructive, apnoeic episodes. Distinguishing central apnoea from obstructive apnoea is only possible when respiratory effort is measured as part of a cardiorespiratory polygraphy study. This is important as treatment differs: adenotonsillectomy will not treat central apnoea. In addition, normal oximetry values in children over the age of 1 year are not applicable to young infants.4 While resources for sleep laboratory polysomnography remain limited, cardiorespiratory polygraphy offers a useful intermediate diagnostic option. A variety of commercial devices offer standard respiratory sensors without the neurophysiological sensors needed for the detection of sleep in polysomnography. These devices are simpler to set up and their output is simpler to score. A further advantage is that they can be used in domiciliary settings; they are the principal diagnostic tool for paediatric sleep apnoea in much of mainland Europe. Recent data suggest that cardiorespiratory polygraphy achieves adequate sensitivity (90.9%) and specificity (94.1%) for the diagnosis of paediatric OSA, although it may fail to accurately recognise milder forms of the condition.5 Indeed, cardiorespiratory polygraphy is recommended in the standard National Health Service (NHS) England contract for tertiary respiratory paediatric services.
The number of children referred for such studies has risen dramatically over recent years (personal correspondence, British Paediatric Sleep Association), in line with NHS England national data that show a 50% increase in activity for sleep studies across all ages over the past 5 years. In summary, Burke and colleagues highlight an important area for future research and service development. The question 'is one night of oximetry enough' to diagnose OSA could more usefully be reframed as 'is one night of oximetry too much … without expert interpretation!' Oximetry as a diagnostic option for OSA is acknowledged by the European Respiratory Society Task Force3 as sometimes necessary in 'resource limited settings', but importantly, not as the default diagnostic approach. If oximetry is used, the question about night-to-night variability in respiratory events is an important one, and Burke's data indicate the need for further study, particularly in vulnerable populations. Further data on diagnostic test accuracy, particularly in high-risk groups such as Down syndrome, where oximetry could potentially offer a cost-effective and acceptable screening modality, may allow more targeted use of motion-resistant oximeters in the future. Practical guidance on the use of oximetry, incorporating the indications for, and interpretation of, oximetry studies as part of an investigative pathway for sleep disordered breathing is lacking and should be developed. In the interim, we advocate that the recommendations from the European Task Force3 are widely adopted (figure 3) and that cardiorespiratory polygraphy services are further developed to meet the resource gap where full polysomnography facilities are absent.

Figure 3: European Task Force: objective diagnosis and assessment of SDB severity.3 OSA, obstructive sleep apnoea; SDB, sleep disordered breathing.

Footnotes
• Competing interests None declared.
• Provenance and peer review Commissioned; internally peer reviewed.
The research of biomechanics on the third macro-power in nature: the power of Qi

Author: Xu Caizhang
Affiliation: Shandong Sports College
Conference/Journal: 4th World Conf Acad Exch Med Qigong
Date published: 1998
Other: Pages: 120, Word Count: 397

Everything possesses Qi, and the power and energy of Qi exist in everything. In this study, one person stands on a three-dimensional force platform while another person, three metres away, attempts to push him over by this power. The Fx value of the force is analyzed from the viewpoint of biomechanics. The power is neither gravity nor electromagnetism, but the third macro-power in nature: the power of Qi.

Method: Wang Wei is the person who sends out the power, and Yu Tao is the receiver. Yu Tao stands on a KISTLER three-dimensional force platform, the main unit of which is linked to a PC-486 computer system. Data were collected through a Box model 1600 A/D converter; the sampling frequency is 1000 Hz and the sampling time is 8 seconds (0.5 s/division). The values of Fx and Fz were determined, and three force plots were printed by the computer. See pictures one and two; the plot of Fz is omitted because its value is zero. At 0.77 s, Yu Tao steps onto the platform and stands with his legs apart. At 3.46 s, his body begins to stabilize. At 5.33 s, Wang Wei, standing three metres behind Yu Tao, begins to send out Qi with his right palm to push him, and Yu Tao begins to sway. After 1.67 s, Yu Tao was suddenly pushed down onto the 25 cm thick protective gym mat in front of the platform.

Result: During the process of receiving the power, Yu Tao received no force along the z axis, so Fz is zero. Changes in acceleration existed along both the x and y axes, so both Fx and Fy changed. During the period from 3.46 s to 5.33 s, the body was stable and received no force except gravity.
At this time, the computer printed only the weight, Fy = 85.2 kg (see picture 1). During the period from 5.33 s to 7 s, he received the power of Qi, Fx, in addition to gravity, Fy. When the force Fx reached a certain value, his center of gravity lost balance and he fell forward. At this time, the force reached its highest value, Fx = 12.8 kg (see picture 2). These are biomechanical methods for studying the effect of the power of Qi on the human body; we will investigate it further.
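The core of the force-plate analysis described above, locating the magnitude and time of the peak horizontal force Fx in an 8 s record sampled at 1000 Hz, can be sketched as follows. The trace here is synthetic; only the sampling rate and the 12.8 kg peak reported in the abstract are used to shape the invented data.

```python
RATE_HZ = 1000  # sampling frequency quoted in the text

def peak_force(samples, rate_hz=RATE_HZ):
    """Return (peak_value, time_s) for the largest sample of one
    force-plate channel, e.g. the horizontal Fx component."""
    peak_i = max(range(len(samples)), key=lambda i: samples[i])
    return samples[peak_i], peak_i / rate_hz

# Toy Fx trace: zero baseline with a triangular pulse peaking at
# 12.8 kg at 6.5 s (8000 samples = 8 s at 1000 Hz).
fx = [0.0] * 8000
for i in range(6000, 7000):
    fx[i] = 12.8 * (1 - abs(i - 6500) / 500)
```

Whatever one makes of the paper's interpretation, this is the standard reduction a force-plate system performs: per-channel sampling followed by peak extraction against the time base.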
Biology Direct (Open Access)

Biochemical and proteomics analyses of antioxidant enzymes reveal the potential stress tolerance in Rhododendron chrysanthum Pall.

Biology Direct 2017, 12:10. https://doi.org/10.1186/s13062-017-0181-6
Received: 16 February 2017 | Accepted: 27 April 2017 | Published: 3 May 2017

Abstract

Background

Rhododendron chrysanthum Pall., an endangered species with significant ornamental and medicinal value, is endemic to the Changbai Mountain of China and can also serve as a significant plant resource for investigating stress tolerance in plants. Proteomics is an effective analytical tool that provides significant information about plant metabolism and gene expression. However, no proteomics data have previously been reported for R. chrysanthum. In the alpine tundra, abiotic stress leads to a severe over-accumulation of reactive oxygen species (ROS). Many alpine plants overcome these severe stresses and protect themselves from oxidative damage by increasing the levels and activity of antioxidant enzymes.

Results

In our study, wild type and domesticated Rhododendron chrysanthum Pall. were used as experimental and control groups, respectively. A proteomics method combined with a biochemical approach was applied to investigate the stress tolerance of R. chrysanthum at both the protein and molecular levels. A total of 1,395 proteins were identified, among which 137 proteins were up-regulated in the experimental group. The activities of superoxide dismutase (SOD), catalase (CAT), ascorbate peroxidases (APXs), and glutathione peroxidase (GPX) were significantly higher, and the expression of APXs and GPX was also increased, in the experimental group. Moreover, the interaction network analysis of these enzymes reveals that the antioxidant enzymes play important roles in the stress resistance of plants.
Conclusions

This is the first report of the proteome of Rhododendron chrysanthum Pall., and the data reinforce the notion that the antioxidant system plays a significant role in plant stress survival. Our results also verified that R. chrysanthum is highly resistant to abiotic stress and can serve as a significant resource for investigating stress tolerance in plants.

Reviewers

This article was reviewed by George V. (Yura) Shpakovski and Ramanathan Sowdhamini.

Keywords: Rhododendron chrysanthum Pall.; Proteomics; Antioxidant enzymes; Stress tolerance

Background

Rhododendron chrysanthum Pall. (R. chrysanthum), belonging to the family Ericaceae, is one of the most precious germplasm resources in the world. In China, R. chrysanthum grows only at altitudes between 1,300 m and 2,650 m on the Changbai Mountain, which belongs to the alpine tundra zone. Changbai Mountain is a dormant volcano located at the junction of China and North Korea and was formed during the Quaternary glacial period [1]. At the top of the mountain, the annual average temperature is -7.3 °C. The harsh climate and poor soil at the top of the Changbai Mountain are serious challenges for plants. Through a long process of adaptive evolution, R. chrysanthum has developed resistance to cold temperatures, drought, strong UV radiation and other abiotic stresses. Proteomics was defined and proposed in 1995 [2]. Nowadays, with the development of the omics sciences, proteomics has become an effective tool for understanding plants at the protein level [3]. Proteomics approaches have been widely used in studies of plant growth and development [4], secondary metabolism [5], cell death [6] and stress tolerance [7]. In the field of plant stress resistance, especially abiotic stress, proteomics has made a tremendous contribution [8]. Proteomics studies in the category of abiotic stress have focused mainly on cold temperatures [9], drought [10], flooding [11], salinity [12] and heavy metals [13].
To date, the plant materials used in proteomics research have included rice, wheat, beans and many other plants [14]. However, the technique of proteomics has not been used widely in the study of alpine plants [15]. In previous studies of alpine plants, morphological, physiological and biochemical approaches were used to understand the plants' underlying molecular and physiological mechanisms for adapting to tough environments. The results of these studies showed that, to face the harsh circumstances of high altitudes, alpine plants have evolved changes in many different features at the morphological and physiological levels [16]. In the present study, TMT labeling integrated with LC-MS/MS was used to quantify the dynamic changes of the whole proteome of R. chrysanthum. Furthermore, the proteomics methods were combined with biochemical analysis to unravel the contribution of superoxide dismutase (SOD), catalase (CAT), ascorbate peroxidases (APXs) and glutathione peroxidase (GPX) to the stress resistance of R. chrysanthum. These results provide the first proteome-wide view of R. chrysanthum and reinforce the notion that the antioxidant system plays a significant role in the environmental adaptation and stress tolerance of plants.

Results

Proteome-wide analysis of Rhododendron chrysanthum Pall.

The overview of the experimental design is shown in Fig. 1a. Briefly, proteins were first extracted and digested into peptides. TMT labeling and LC-MS/MS were then used to analyze and quantify the dynamic changes of the proteome. The distribution of mass error is centered near zero, and most peptide mass errors are less than 0.02 Da (Fig. 1b). The length of most peptides is distributed between 8 and 16 residues, which agrees with the properties of tryptic peptides (Fig. 1c). In the present work, 1,395 proteins were identified and 705 of those proteins were quantified. A quantitative ratio over 1.3 was considered to indicate up-regulation (UR).
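The up-regulation call described above (a quantitative EG/CG ratio over 1.3) reduces to a simple filter. The protein names and ratios below are invented for illustration; only the 1.3 cutoff comes from the paper.

```python
UP_REGULATION_CUTOFF = 1.3  # quantitative EG/CG ratio used in the text

def up_regulated(quant_ratios, cutoff=UP_REGULATION_CUTOFF):
    """Given {protein: EG/CG quantitative ratio}, return the set of
    proteins whose ratio exceeds the up-regulation cutoff."""
    return {p for p, r in quant_ratios.items() if r > cutoff}

# Hypothetical ratios, for illustration only.
ratios = {"APX1": 1.8, "GPX1": 2.1, "CAT2": 1.1, "CSD2": 0.9}
```

Applied to the full set of 705 quantified proteins, a filter of this form is what yields the 137 UR proteins reported in the paper.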
Based on this standard, 137 up-regulated proteins were identified in the experimental group (EG) compared with the control group (CG). A Gene Ontology (GO) functional classification was then used to further understand the distribution of UR proteins in the EG. Most proteins were involved in metabolic processes, cellular processes, and cell and membrane physiology. Under the category of molecular function, we identified 14 UR proteins with antioxidant activity. The results of subcellular localization analysis showed that the UR proteins were mainly localized in the chloroplast (42%), cytosol (31%), and mitochondria (12%) (Fig. 1d). These data suggest that the changes in R. chrysanthum cover a broad range of cellular processes, mostly localized in crucial cellular compartments that play important roles in plant development.

Fig. 1: Proteome-wide analysis of Rhododendron chrysanthum Pall. (a) Overview of the experimental process of this study. (b) Mass error distribution of all identified peptides. (c) Peptide length distribution. (d) Subcellular location of up-regulated proteins in the experimental group.

Analysis of superoxide dismutase and related proteins

Superoxide dismutases (SOD, EC 1.15.1.1) are the first-line enzymes that protect plants from ROS damage; they convert O2•− into H2O2 [17]. Three types of SOD (Fe-SOD, Mn-SOD, and Cu/Zn-SOD) are located in different organelles. The Cu/Zn-SOD includes two types, CSD-1 and CSD-2 (CSD), which were both located in the chloroplast [18]. CSD-2 was quantified in the present study. All quantified proteins (705) were used as background, and the proteins related to CSD-2 were identified using the String and Cytoscape software. The network of protein interactions is shown in Fig. 2a. In total, 16 proteins had a direct interaction with CSD-2. Among these, 5 proteins were up-regulated in this network. The activity of SOD was assayed and analyzed at the same time (Fig. 2a).
The activity of SOD increased rapidly in the EG, an increase of approximately 118% over the CG value.

Fig. 2: Activity analyses and the interaction networks of (a) SOD, (b) CAT, (c) APX and (d) GPX. SOD refers to superoxide dismutase, CAT refers to catalase, APX refers to ascorbate peroxidases and GPX refers to glutathione peroxidases. EG and CG stand for the experimental group and the control group, respectively. Values are expressed as means ± SD, n = 3. Statistically different values (p < 0.05) are indicated by different letters.

Analysis of catalase and related proteins

Catalase (CAT, EC 1.11.1.6), an iron porphyrin enzyme mainly localized in peroxisomes, can effectively remove H2O2 and prevent the over-accumulation of reactive oxygen species (ROS) [19]. The types of CAT vary among plants, and in the present study three types of CAT (CAT1, CAT2 and CAT3) were quantified in R. chrysanthum. However, only CAT2 was involved in the network obtained from String using the total quantified proteins as background (Fig. 2b). The network of CAT2 was made up of 40 proteins, including 5 up-regulated proteins (GAPB, GPX1, AGT, APX1, and APX2). At the same time, the determination of CAT activity revealed significantly higher activity in the EG than in the CG, an increase of nearly 70% (Fig. 2b).

Analysis of ascorbate peroxidases and related proteins

Ascorbate peroxidases (APXs, EC 1.11.1.11) are the key enzymes of the hydrogen peroxide detoxification system that convert H2O2 into water. The isoforms of APX were classified based on their subcellular localization [20]. The protein-protein interaction network designed with String and visualized with Cytoscape is shown in Fig. 2c, where the color represents the weight of each protein in the network. Four types of APX, including APX1, APX2, SAPX (localized in the chloroplast stroma), and TAPX (localized in the chloroplast thylakoids), were identified in the network.
Overall, 52 proteins (including the four major APXs) were involved in this interaction network. Among them, 11 proteins, including APX1 and APX2, were expressed at much higher levels in the EG compared with the CG. In the present work, APX activity in the EG was significantly higher than in the CG, an increase of 133% (Fig. 2c).

Analysis of glutathione peroxidase and related proteins

Glutathione peroxidases (GPX, EC 1.11.1.9) are efficient ROS scavengers with a high affinity for H2O2 [21]. They can reduce H2O2 and organic hydroperoxides, thereby protecting cells from oxidative damage [22]. GPXs are widely distributed in plant cells and carry a highly conserved cysteine residue. Eight GPXs were found in Arabidopsis in previous studies [23]. In our latest study, we identified two types of GPX (GPX1 and GPX7) in R. chrysanthum. However, only GPX1 could be retrieved in the String database, and 21 related proteins were found among the total quantified proteins (705). The network of GPX1 and related proteins is shown in Fig. 2d. Overall, 6 proteins, including GPX1, the core protein of this network, were up-regulated. Apart from GPX1, the expression of APX1, another important enzyme in H2O2 scavenging, was also increased in this network. GPX antioxidant activity increased sharply in the EG, to approximately 262% of the activity in the CG (Fig. 2d).

Antioxidant protein interaction network in Rhododendron chrysanthum Pall.

The role of the UR proteins in this alpine plant was investigated by setting up the protein interaction networks of the main antioxidant proteins, SOD (CSD2), APX (APX1, APX2, SAPX, and TAPX), CAT (CAT2), and GPX (GPX1), via String and Cytoscape (Fig. 3). Overall, 129 proteins were mapped in the present study, including 7 main proteins (CSD2, APX1, APX2, SAPX, TAPX, CAT2 and GPX1) and 122 related proteins. All the UR proteins in the networks are drawn as rhombi. The connectedness and weights of the proteins in this network are distinguished by color.
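Finding the proteins common to several interaction networks, as done for the four enzyme networks above, is a set-intersection operation. The membership sets below are hypothetical stand-ins; only the seven shared protein names reported in the paper are taken from the text.

```python
# Hypothetical membership sets for the four enzyme interaction
# networks; the seven shared names come from the paper, the extra
# member of each set is invented for illustration.
networks = {
    "SOD": {"CSD2", "APX1", "TAPX", "CAT2", "SAPX", "GR", "GPX1", "FSD1"},
    "CAT": {"CSD2", "APX1", "TAPX", "CAT2", "SAPX", "GR", "GPX1", "GAPB"},
    "APX": {"CSD2", "APX1", "TAPX", "CAT2", "SAPX", "GR", "GPX1", "APX2"},
    "GPX": {"CSD2", "APX1", "TAPX", "CAT2", "SAPX", "GR", "GPX1", "AGT"},
}

def common_proteins(nets):
    """Intersection of all network membership sets (the Venn core)."""
    it = iter(nets.values())
    core = set(next(it))
    for members in it:
        core &= members
    return core
```

This is the computation a four-set Venn diagram visualises: the central region is exactly the intersection returned here.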
Two highly interrelated clusters of these proteins were obtained with the Cytoscape software. Proteins in the largest cluster (Cluster I) were mainly localized in the chloroplast, and the second-largest cluster (Cluster II) consisted of proteins localized in the cytosol. The Venn diagram in Fig. 4 shows the number, percentage, and overlap of proteins involved in the interactions of the four main categories of enzymes. Seven common proteins, CSD2, APX1, TAPX, CAT2, SAPX, GR, and GPX1, played a significant role in all four networks. Among these, two proteins, APX1 and GPX1, were up-regulated in the EG. The up-regulated proteins make indispensable contributions to the whole antioxidant system and play crucial roles in the cooperation and coordination within R. chrysanthum.

Fig. 3: Interaction network of SOD, CAT, APX, GPX and their related proteins in Rhododendron chrysanthum Pall.

Fig. 4: Venn diagram for SOD, CAT, APX, GPX and their related proteins in Rhododendron chrysanthum Pall.

Discussion

Rhododendron chrysanthum Pall. is a cherished plant resource throughout the world, yet it has been the subject of only a few studies. To date, the research on R. chrysanthum consists of a limited number of papers on morphological, physiological, and biochemical aspects. In our present study, proteomics was combined with a biochemical approach to investigate R. chrysanthum at both the protein and molecular levels. Wild type and domesticated R. chrysanthum varieties were used as the experimental and control groups, respectively. We quantified a total of 705 proteins, including 137 up-regulated proteins, through an integrated approach involving TMT labeling and LC-MS/MS. These up-regulated proteins covered various biological functions and were localized in multiple subcellular structures. This finding suggests that considerable changes occurred in the two types of R.
chrysanthum and these changes play crucial roles in a wide range of cellular processes. These data are the first report of a proteome-wide analysis in R. chrysanthum. Higher plants have an innate antioxidant defense system that is sensitive to abiotic stress [24]. This highly complex and regulated system is also the reason that alpine plants can survive in changing environments. In the alpine tundra, alpine plants face harsh circumstances, including low temperature, low oxygen content, high UV radiation, and strong wind. These factors lead to a severe over-accumulation of ROS, which are potentially toxic and can result in oxidative damage to plants. In a rugged environment, plants defend against severe stresses by increasing the levels and activity of antioxidant enzymes. These antioxidant enzymes, including SOD, CAT, APXs and GPX, work together to detoxify ROS and protect plants from oxidative damage. In this study, the activities of these four enzymes were significantly higher in the EG than in the CG, suggesting that the antioxidant system has clearly changed in the wild type R. chrysanthum and highlighting the notion that these antioxidant enzymes play key roles in the stress tolerance of plants. The expression of APXs, GPX, and the proteins that interact directly with SOD and CAT was increased in the EG at the protein level. The interaction network analysis of these enzymes also reveals that these enzymes and the up-regulated proteins play important roles in the stress resistance of plants. All these findings indicate that these enzymes can protect plants from oxidative injury by scavenging free radicals, and that they enhance the ability of plants to withstand a rugged environment.

Conclusions

Our findings provide the first extensive data on the proteome of Rhododendron chrysanthum Pall. and a rich dataset for further investigation of stress tolerance in alpine plants.
Our study also verified that the antioxidant system of R. chrysanthum has been enhanced during its long-term adaptive process. Our results reinforce the notion that the antioxidant system plays a significant role in plants, especially in the adaptation to and tolerance of environmental stress. Methods Plant materials and growth conditions Wild type and domesticated Rhododendron chrysanthum Pall. tissue seedlings were used as the experimental group (EG) and the control group (CG), respectively. The leaves excised from four-month-old plants of the EG and the CG were immediately used for protein extraction. To ensure adequate coverage, three biological replicates of each group (i.e. six plants) were collected. Protein extraction Plant materials were ground in liquid nitrogen, transferred to 5 mL centrifuge tubes, and sonicated three times on ice using a high-intensity ultrasonic processor (Scientz) in lysis buffer (8 M urea, 2 mM EDTA, 10 mM DTT and 1% Protease Inhibitor Cocktail). The remaining debris was removed by centrifugation at 20,000 × g at 4 °C for 10 min. The protein in the supernatant was precipitated with cold 15% TCA at -20 °C for 4 h. After centrifugation at 4 °C for 3 min, the remaining precipitates were washed with cold acetone three times. Finally, the protein was redissolved in buffer (8 M urea, 100 mM TEAB, pH 8.0), and the protein concentration in the supernatant was estimated with a 2-D Quant kit according to the manufacturer's instructions. Trypsin digestion For trypsin digestion, the protein solution was reduced with 10 mM DTT for 1 h at 37 °C and alkylated with 20 mM IAA for 45 min at room temperature in the dark. Subsequently, the protein samples were diluted by adding 150 mM TEAB to bring the urea concentration to less than 2 M.
After dilution, the protein samples were digested with trypsin at a trypsin-to-protein mass ratio of 1:50 for a first digestion of 8 h, and at a ratio of 1:100 for a second digestion of 4 h. Approximately 100 μg of protein from each sample was digested with trypsin for the following experiments. Tandem mass tags (TMT) labeling After trypsin digestion, the peptides were desalted on a Strata X C18 SPE column (Phenomenex) and vacuum-dried. The peptides were reconstituted in 1 M TEAB and then labeled with a 6-plex TMT kit (Thermo) according to the manufacturer's instructions. Each TMT reagent was thawed and reconstituted in 24 μl acetonitrile (ACN). Finally, the peptide mixtures were incubated for 2 h at room temperature and then lyophilized by vacuum centrifugation. HPLC fractionation After TMT labeling, the samples were injected into an Agilent 300 Extend C18 column (5 μm particles, 4.6 mm ID, 250 mm length) and fractionated by high-pH reverse-phase HPLC. Peptides were first separated into 80 fractions with a gradient of 2 to 60% acetonitrile in 10 mM ammonium bicarbonate (pH 10) over 80 min. The peptides were then combined into 18 fractions and dried by vacuum centrifugation. LC-MS/MS analysis Three parallel analyses were performed for each fraction. The enriched peptides were dissolved in 0.1% formic acid (FA), directly loaded onto a reversed-phase analytical column (Acclaim PepMap RSLC, Thermo Fisher Scientific) with a pre-column (Acclaim PepMap 100, Thermo Fisher Scientific), and analyzed on a Q Exactive™ hybrid quadrupole-Orbitrap mass spectrometer (Thermo Fisher Scientific). The gradient comprised an increase from 5% to 25% solvent buffer (0.1% FA in 98% ACN) over 26 min, from 25 to 40% over 8 min, a climb to 80% in 3 min, and a hold at 80% for the final 3 min. The resulting peptides were subjected to an NSI source, followed by tandem mass spectrometry (MS/MS) in the Q Exactive™ (Thermo Fisher Scientific), coupled online to the UPLC.
Intact peptides were detected in the Orbitrap at a resolution of 70,000. Peptides were selected for MS/MS using an NCE setting of 28, and ion fragments were detected in the Orbitrap at a resolution of 17,500. A data-dependent procedure that alternated between one MS scan and 20 MS/MS scans was applied for the top 20 precursor ions above a threshold ion count of 1E4 in the MS survey scan, with 30.0 s of dynamic exclusion. The electrospray voltage applied was 2.0 kV. Automatic gain control (AGC) was used to prevent overfilling of the ion trap; 5E4 ions were accumulated for the generation of MS/MS spectra. For MS scans, the m/z scan range was 350 to 1800, and the fixed first mass was set as 100 m/z. Database search The resulting MS/MS data were processed using the Mascot search engine (v.2.3.0). Tandem mass spectra were searched against the SwissProt Green Plant database. Trypsin/P was specified as the cleavage enzyme, allowing up to 2 missed cleavages. Mass error was set to 10 ppm for precursor ions and 0.02 Da for fragment ions. Carbamidomethylation on Cys was specified as a fixed modification, and oxidation on Met was specified as a variable modification. For protein quantification, TMT 6-plex was selected in Mascot. The FDR was adjusted to < 1%, and the peptide ion score was set to ≥ 20. Bioinformatics analysis The Gene Ontology (GO) annotation of the proteome was derived from the UniProt-GOA database, and the proteins were classified by GO annotation into three categories: biological process, cellular component and molecular function. The domain functional descriptions of up-regulated proteins were annotated by InterProScan (a sequence analysis application) based on a protein sequence alignment method. GO terms and domains with a corrected p-value < 0.05 were considered significant. WoLF PSORT was used to predict the subcellular localization of the up-regulated proteins.
The protein-protein interaction network was obtained from the STRING database, and the interactions between proteins were visualized using Cytoscape software (3.4.0) [25]. The Venn diagram was designed with Venny 2.1.0. Assay of enzyme activities For each of three biological replicates, 200 mg of leaves were used for the determination of enzyme activities, handled according to the instructions of the corresponding assay kits. SOD (EC 1.15.1.1), APX (EC 1.11.1.11) and CAT (EC 1.11.1.6) activities were detected according to the method of Mittova et al. [26]. The GPX (EC 1.11.1.9) activity was assayed as described by Drotar et al. [27]. The statistical analyses of the antioxidant enzyme activities were performed using SAS 9.4. A value of P < 0.05 was considered statistically significant. Reviewers' comments Reviewer's report 1 George V (Yura) Shpakovski, Russian Academy of Sciences, Russia Reviewer comments The authors have used biochemical and proteomics approaches to investigate proteome of the alpine plant Rhododendron chrysanthum Pall., an endangered species with significant ornamental and medicinal value. Since the protein diversity of the alpine tundra species was never studied before and no proteomics data have been reported for R. chrysanthum previously, the finding described in this manuscript can be considered novel. In addition, it was shown (by measurements of the activity of antioxidant enzymes and concentration-accumulation of reactive oxygen species [ROS]) that the antioxidant system probably plays a significant role in R. chrysanthum stress survival in the wild, supporting the idea that R. chrysanthum can serve as a significant resource for investigating stress tolerance in plants. The manuscript describes a large amount of work (1,395 proteins were identified and 705 of them were quantified) and is well written, and the results obtained are interesting and support the conclusions of the work.
Owing to high quality scholarly presentation, I am favorably biased to acceptance. Still, the manuscript would greatly benefit from correction by a native speaker to be checked thoroughly for language and style. Here there are a few minor comments and suggestions for some editing and proofreading (underlined): 1) Biochemical and proteomics analysis of antioxidant enzymes reveals the potential stress tolerance in Rhododendron chrysanthum Pall. (Title, page 1, lines 1-3) Author's response: Thanks for the kind suggestion of Prof. George V (Yura) Shpakovski and we have modified the sentence according to the suggestion. 2) Many alpine plants defense (overcome?!?) the severe stresses and protect themselves from the oxidative damage by increasing the ratio and activity of antioxidant enzymes. (Abstract, page 1, lines 19-21) Author's response: We have corrected this sentence according to the suggestion. 3) Proteomics method combined with biochemical approach were applied for investigating the oxidation resistance and the potential stress tolerance of R. chrysanthum in both protein and molecular level. (Abstract, page 2, lines 1-3) Author's response: We have modified the specified sentence. 4) Specific note: In my opinion, the “stress tolerance” (especially “the potential stress tolerance”) was not studied in the paper. Author's response: In our study, the up-regulated expression and higher activities of ROS-scavenging enzymes like Superoxide dismutase, Catalase, Ascorbate peroxidases, and Glutathione peroxidase were observed in the experimental group in comparison to the control. All these enzymes were involved in the coordinated regulation of the homeostasis maintenance under stress and played significant roles in the improvement of stress resistance, so we finally used “the potential stress tolerance” in this paper. 
Reviewer's report 2 Ramanathan Sowdhamini, Tata Institute of Fundamental Research, India Reviewer comments This manuscript addresses an important issue of how plants respond to abiotic stresses, in general. The authors have approached the question well and performed quality analysis including check for enzyme activity and interacting partners. I would suggest that the manuscript is publishable after they consider few comments as mentioned below. In this paper, the authors have analysed the genes that get upregulated in Rhododendron chrysanthum Pall., when subjected to abiotic stresses. This plant grows in the high altitudes in China, where it naturally withstands several abiotic stresses, such as low temperature, low oxygen, cold, UV exposure, poor soil. Using techniques such as tandem mass tags and LC/MS, the genes upregulated between wild and domesticated varieties were compared. Out of 1395 genes studied, 137 of them were identified as upregulated during abiotic stress. Within this set, 14 were noted to be antioxidants through function annotation by consulting Gene Ontology database. In particular, upregulation and higher activities of ROS-scavenging enzymes like Superoxide dismutase, Catalase, Ascorbate peroxidases, and Glutathione peroxidase were observed in the experimental group (subject to abiotic stress) in comparison to the control. These enzymes, in turn, were recorded to engage in protein-protein interactions with other proteins, several of those are upregulated during the given stress. Cellular localizations of these proteins were mostly within cytosol and chloroplast. It is a nice piece of work and I would recommend acceptance of this manuscript in Biology Direct. 1) The list of upregulated genes, function annotation and Arabidopsis orthologs could be provided as Additional files. Author's response: We very much appreciate the overall comments of Prof. Sowdhamini. As Prof. 
Sowdhamini suggested, the list of all up-regulated proteins and the GO functional classification of these proteins are provided as Additional file 1: Table and Additional file 2: Table. 2) It is not clear why the authors did not choose transcriptomics to follow the differential gene expression patterns. It will be interesting to also consider the tissue localization of these genes through either direct transcriptome data or derived from orthologs in Arabidopsis thaliana and consulting Plant Ontology databases as well. Author's response: We thank Prof. Sowdhamini for the valuable comments. We recognize that our current work still has many inadequacies. In future studies, we will perform transcriptome and phosphorylation analyses to further investigate the stress tolerance mechanisms. 3) Certain abbreviations, like EG and CG (standing for experimental group and control group, respectively) are not clear. These are defined much later in Methods. Author's response: We have made corrections according to the comment. 4) This sentence is not very clear: "Among these, 5 up-regulation proteins were 1 shaped as rhombus and the weight of 2 protein was classified by color in this network." If it is a technical matter, it could be moved to Figure legend. Author's response: We have re-written this part according to the suggestion. Abbreviations ACN: Acetonitrile APX: Ascorbate peroxidase CAT: Catalase DTT: Dithiothreitol FA: Formic acid GO: Gene ontology GPX: Glutathione peroxidase SOD: Superoxide dismutase TMT labeling: Tandem mass tags labeling Declarations Acknowledgements Not applicable. Funding This work was mainly supported by the National Natural Science Foundation of China (31070224) and the Science and Technology Department of Jilin Province (20130206059NY). Availability of data and materials The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
Authors' contributions XZ and HX designed the research; SC and HW prepared the plant materials for sequencing. SC carried out the bioinformatics analysis of the data; SC, HW and YY performed the experiments and statistical analyses; SC and YY collected data and researched the literature; SC interpreted the data and wrote the manuscript. All authors read and approved the final manuscript. Competing interests The authors declare that they have no competing interests. Consent for publication Not applicable. Ethics approval and consent to participate Not applicable. Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Authors' Affiliations (1) Jilin Provincial Key Laboratory of Plant Resource Science and Green Production, Jilin Normal University, Siping, China References 1. Zhou Y, Hagedorn F, Zhou C, Jiang X, Wang X, Li MH. Experimental warming of a mountain tundra increases soil CO2 effluxes and enhances CH4 and N2O uptake at Changbai Mountain, China. Sci Rep. 2016;6:21108. doi:10.1038/srep21108. 2. Wilkins MR, Sanchez JC, Gooley AA, Appel RD, Humphery-Smith I, Hochstrasser DF, Williams KL. Progress with proteome projects: why all proteins expressed by a genome should be identified and how to do it. Biotechnol Genet Eng. 1996;13(1):19–50. 3.
Eldakak M, Milad SI, Nawar AI, Rohila JS. Proteomics: a biotechnology tool for crop improvement. Front Plant Sci. 2013;4:35. doi:10.3389/fpls.2013.00035. 4. Agrawal L, Chakraborty S, Jaiswal DK, Gupta S, Datta A, Chakraborty N. Comparative proteomics of tuber induction, development and maturation reveal the complexity of tuberization process in potato (Solanum tuberosum L.). J Proteome Res. 2008;7:3803–17. doi:10.1021/pr8000755. 5. Martínez-Esteso MJ, Martínez-Márquez A, Sellés-Marchart S, Morante-Carriel JA, Bru-Martínez R. The role of proteomics in progressing insights into plant secondary metabolism. Front Plant Sci. 2015;6:504. doi:10.3389/fpls.2015.00504. 6. Choi DS, Hwang BK. Proteomics and functional analyses of pepper abscisic acid-responsive 1 (ABR1), which is involved in cell death and defense signaling. Plant Cell. 2011;23(2):823–42. doi:10.1105/tpc.110.082081. 7. Komatsu S, Kamal AH, Hossain Z. Wheat proteomics: proteome modulation and abiotic stress acclimation. Front Plant Sci. 2014;5:684. doi:10.3389/fpls.2014.00684. 8. Kosova K, Vitamvas P, Prasil IT, Renaut J. Plant proteome changes under abiotic stress-contribution of proteomics studies to understanding plant stress response. J Proteomics. 2011;74(8):1301–22. doi:10.1016/j.jprot.2011.02.006. 9. Bertrand A, Bipfubusa M, Castonguay Y, Rocher S, Szopinska-Morawska A, Papadopoulos Y, Renaut J. A proteome analysis of freezing tolerance in red clover (Trifolium pratense L.). BMC Plant Biol. 2016;16:65. doi:10.1186/s12870-016-0751-2. 10. Aranjuelo I, Molero G, Erice G, Avice JC, Nogues S. Plant physiology and proteomics reveals the leaf response to drought in alfalfa (Medicago sativa L.). J Exp Bot.
2011;62(1):111–23. doi:10.1093/jxb/erq249. 11. Alam I, Lee DG, Kim KH, Park CH, Sharmin SA, Lee H, Oh KW, Yun BW, Lee BH. Proteome analysis of soybean roots under waterlogging stress at an early vegetative stage. J Biosci. 2010;35(1):49–62. 12. Silveira JA, Carvalho FE. Proteomics, photosynthesis and salt resistance in crops: An integrative view. J Proteomics. 2016;143:24–35. doi:10.1016/j.jprot.2016.03.013. 13. Singh S, Parihar P, Singh R, Singh VP, Prasad SM. Heavy metal tolerance in plants: role of transcriptomics, proteomics, metabolomics, and ionomics. Front Plant Sci. 2016;6:1143. doi:10.3389/fpls.2015.01143. 14. Parreira JR, Bouraada J, Fitzpatrick MA, Silvestre S, Bernardes Da Silva A, Marques Da Silva J, Almeida AM, Fevereiro P, Altelaar AF, Araújo SS. Differential proteomics reveals the hallmarks of seed development in common bean (Phaseolus vulgaris L.). J Proteomics. 2016;143:188–98. doi:10.1016/j.jprot.2016.03.002. 15. Ma L, Sun X, Kong X, Galvan JV, Li X, Yang S, Yang Y, Yang Y, Hu X. Physiological, biochemical and proteomics analysis reveals the adaptation strategies of the alpine plant Potentilla saundersiana at altitude gradient of the Northwestern Tibetan Plateau. J Proteomics. 2015;112:63–82. doi:10.1016/j.jprot.2014.08.009. 16. Guo Y, Guo N, He Y, Gao J. Cuticular waxes in alpine meadow plants: climate effect inferred from latitude gradient in Qinghai-Tibetan Plateau. Ecol Evo. 2015;5(18):3954–68. doi:10.1002/ece3.1677. 17. Raychaudhuri SS, Deng XW. The role of superoxide dismutase in combating oxidative stress in higher plants. Bot Rev. 2000;66:89–98. doi:10.1007/BF02857783. 18. Alscher RG, Erturk N, Heath LS. Role of superoxide dismutases (SODs) in controlling oxidative stress in plants. J Exp Bot.
2002;53(372):1331–41. 19. Mhamdi A, Queval G, Chaouch S, Vanderauwera S, Van Breusegem F, Noctor G. Catalase function in plants: a focus on Arabidopsis mutants as stress-mimic models. J Exp Bot. 2010;61(15):4197–220. doi:10.1093/jxb/erq282. 20. Teixeira FK, Menezes-Benavente L, Galvão VC, Margis R, Margis-Pinheiro M. Rice ascorbate peroxidase gene family encodes functionally diverse isoforms localized in different subcellular compartments. Planta. 2006;224(2):300–14. doi:10.1007/s00425-005-0214-8. 21. Brigelius-Flohé R, Flohé L. Is there a role of glutathione peroxidases in signaling and differentiation? Biofactors. 2003;17(1-4):93–102. doi:10.1002/biof.5520170110. 22. Ursini F, Maiorino M, Brigelius-Flohé R, Aumann KD, Roveri A, Schomburg D, Flohé L. Diversity of glutathione peroxidases. Methods Enzymol. 1995;252:38–53. doi:10.1016/0076-6879(95)52007-4. 23. Ozyigit II, Filiz E, Vatansever R, Kurtoglu KY, Koc I, Öztürk MX, Anjum NA. Identification and comparative analysis of H2O2-scavenging enzymes (ascorbate peroxidase and glutathione peroxidase) in selected plants employing bioinformatics approaches. Front Plant Sci. 2016;7:301. doi:10.3389/fpls.2016.00301. 24. Kasote DM, Katyare SS, Hegde MV, Bae H. Significance of antioxidant potential of plants and its relevance to therapeutic applications. Int J Bio Sci. 2015;11(8):982–91. doi:10.7150/ijbs.12096. 25. Zhang Y, Song L, Liang W, Mu P, Wang S, Lin Q. Comprehensive profiling of lysine acetylproteome analysis reveals diverse functions of lysine acetylation in common wheat. Sci Rep. 2016;6:21069. doi:10.1038/srep21069. 26. Mittova V, Volokita M, Guy M, Tal M.
Activities of SOD and the ascorbate-glutathione cycle enzymes in subcellular compartments in leaves and roots of the cultivated tomato and its wild salt-tolerant relative Lycopersicon pennellii. Physiol Plant. 2000;110:42–51. doi:10.1034/j.1399-3054.2000.110106. 27. Drotar A, Phelps P, Fall R. Evidence for glutathione peroxidase activities in cultured plant cells. Plant Sci. 1985;42:35–40. doi:10.1016/0168-9452(85)90025-1. Copyright © The Author(s). 2017
What are some of the requirements of Safe Quality Food for Storage and Distribution? SQF maintains a web page that provides a tremendous amount of information regarding its food safety programs, training opportunities, and detailed requirements: www.sqfi.com. The actual Code for Storage and Distribution can be found under their "Resource Center" and is free to download. You can also find additional information about the FDA and the recent Food Safety Modernization Act on the same site. Americold's Director of Food Safety is an active member of SQF's Technical Advisory Council and could likely answer any further questions you might have. Contact us for additional information. What are some examples of controls employed by Americold warehouses to keep my food safe? Each Americold facility follows FDA-mandated requirements for documenting and implementing Hazard Analysis and Risk-Based Preventive Controls. As food is received and shipped at our warehouses, product temperatures are collected, recorded, and verified to meet customers' strict requirements. Trailers are inspected to ensure they are capable of maintaining food security and product integrity. During storage, cold room temperatures are monitored multiple times per day to ensure tight temperature control. Additionally, Americold facilities deploy several other food safety prerequisite programs, such as allergen control to prevent allergen cross-contact, cleaning and sanitation, preventive maintenance, and pest control.
Volcanoes A cross-section through an erupting volcano A volcano—often, but not always, a cone-shaped mountain—is an opening in the Earth's crust through which molten rock, called magma, erupts. When a volcano erupts and hurls out its red-hot rock, this is one of the most awesome events of nature. It happens at a hole, crack or weak point in the solid rocks of the Earth's crust. Melted rock called magma from deep below forces its way up under incredible temperature and pressure. As it emerges it is called lava. When it cools and hardens, it forms a type of rock known as igneous rock. Anak Krakatau erupting in a violent explosion Explosive eruptions In a volcanic eruption, magma is blasted out of the volcano's crater. If the magma is thick and pasty, the gas trapped inside it cannot escape, so it builds up and up until it explodes. The eruption will be a very violent one. The erupted magma, called lava, is shattered into pumice, fragments of rock once full of gas bubbles, and ash, lava blown to powder by the force of the explosion. The pumice and ash form a huge cloud. In explosive eruptions, the thick lava moves slowly and hardens close to the volcano's vent or crater. As this type of volcano erupts time after time, the lava builds up in layers to form a cone-shaped, steep-sided mountain known as a stratovolcano. A shield volcano erupts. Shield volcanoes In many volcanoes, eruptions are less violent than those of explosive volcanoes. This is because the lava is thin and runny: the gas inside it can escape more easily. Lava oozes like boiling syrup from the volcano, flows down the gentle slopes and spreads over a wide area. As it cools, it turns into a "shield" of solid rock known as basalt. Each time the volcano erupts, it adds to the shield in layers of lava up to 10 metres (more than 30 feet) thick.
These are known as shield volcanoes. The largest shield volcano chain in the world is the Hawaiian Islands, a chain of hot-spot volcanoes in the Pacific Ocean. There are also shield volcanoes in Iceland and in the Great Rift Valley of East Africa. A diagram of a shield volcano Volcanic bombs are blasted out during an eruption Ash and bombs The temperature of lava erupting from a volcano can be more than 1000°C (1800°F). Volcanoes eject other substances, too: gases and fumes rich in sulphur. Some give out clouds of ash that fly high in the air. The ash may fall near the volcano and build up in thick layers around it. Some volcanoes have such explosive power that they blast out huge lumps of molten rock as big as houses. These volcanic bombs crash to the ground nearby. The ash is often blown away by the wind and may fall over a very wide area. Capulin Volcano cinder cone, New Mexico Cinder cones Cinder cones are steep, conical hills made of loose fragments of volcanic rock. They are formed by explosive eruptions or lava fountains from a single vent. As the lava is blasted violently into the air, it breaks into small fragments that fall around the vent to form a cone that is often symmetrical in shape—slopes of between 30 and 40° and a nearly circular base. Most cones have a single bowl-shaped crater at the summit. Cinder cones are often found on the flanks of other, larger volcanoes. There are, for example, about 100 cinder cones on the slopes of Mauna Kea, a shield volcano on the island of Hawaii. The most famous cinder cone, Paricutin, grew out of a cornfield in Mexico in 1943 in a matter of hours. Eruptions continued for nine years, with the cone eventually building to a height of 424 metres (1391 feet).
Interior of a cinder cone A cross-section through a subduction zone The Earth's volcanoes Most volcanoes are situated along the edges of the giant, jigsaw-like tectonic plates that make up the Earth's surface. The boundaries between plates have many weak points. In particular, volcanoes form along subduction zones, where one plate slides down beneath another. As the lower plate melts back into the mantle, its gases and lighter molten rock "boil" and force their way up through cracks with enormous pressure, causing eruptions. More than half of the world's active volcanoes above sea level encircle the Pacific Ocean, which is nearly surrounded by subduction zones, forming the so-called "Ring of Fire". A map of volcanoes (red triangles) Fissure volcanoes along the Mid-Oceanic Ridge Fissure volcanoes The typical cone-shaped mountains on land are what we think of as volcanoes. But they make up less than one-hundredth of all the volcanic activity on Earth. Most magma oozes to the surface deep under water, along the crack-like fissures of the Mid-Oceanic Ridge or through smaller weak "holes" known as hot spots. If underwater volcanoes build cones tall enough, they emerge at the surface as islands, such as the Hawaiian Islands in the Pacific and the Canary Islands in the Atlantic. New islands are emerging all the time: one has recently been forming in the Red Sea off the coast of Yemen. A view of Mount St Helens, the day before it erupted Active, dormant or extinct? When a volcano has been known to erupt in the last few hundred years, it is said to be active. An active volcano regularly erupts lava, ash, fumes and other materials. In very active volcanoes, this happens almost continuously. In others, there are weeks or months between eruptions.
There are at least 1500 active volcanoes above sea level around the world, and possibly more than 10,000 active volcanoes under the oceans. A view of Mount St Helens two years after it erupted When a volcano has not erupted for many years or centuries, but still might in the future, it is dormant (sleeping). Mount St Helens in the USA, for example, was a dormant volcano that came back to life spectacularly in 1980. It is often difficult to distinguish an extinct volcano from a dormant one. Fourpeaked Mountain in Alaska, for example, had not erupted since before 8000 BC and had long been thought to be extinct before it burst into life in September 2006. Arthur's Seat, Edinburgh When there have been no eruptions for tens of thousands of years, the volcano is described as extinct. Arthur's Seat in Edinburgh, Scotland, is an example of an extinct volcano. It is part of an extinct volcano system from the Carboniferous Period (approximately 350 million years old).
Pyroclastic flows Volcanoes: 10 deadly dangers • 1 Being blasted by the explosion • 2 Being buried by mudslides • 3 Being struck by lava bombs, chunks of molten rock ejected during an eruption that solidify before they reach the ground • 4 Falling ash making it hard to breathe • 5 Inhaling poisonous gases • 6 Buildings collapsing under the weight of falling ash • 7 Ash making crops inedible, leading to famine • 8 Dust causing pneumonia and other illnesses • 9 Being burned by fires caused by lava flows • 10 Being engulfed by a pyroclastic flow, an extremely hot mixture of volcanic fragments and gases that sweeps down the volcano's slopes at speeds of more than 300 km/h, destroying everything in its path (the unfortunate residents of Pompeii perished in this way when Vesuvius erupted in AD 79) Pyroclastic flows An explosive eruption also produces tonnes of hot gas, dust, ash and steam in a glowing cloud called a pyroclastic flow (the word "pyroclastic" means "shattered by fire"). Pyroclastic flows happen when the great cloud of ash and pumice that blasts into the sky after a volcano erupts collapses back down to Earth. The cloud surges down the side of the volcano at high speed as an avalanche of red-hot rock particles and gas, destroying everything in its path. Scientists studying a recent volcanic eruption (Mount St Helens, United States, in 1980) discovered that a pyroclastic flow could travel at speeds of up to 300 km/h (about 200 mph). There is strong evidence that pyroclastic flows surged down the slopes of Vesuvius and destroyed the city of Pompeii in AD 79. Fertile slopes Vines growing in fertile soils Volcanic ash and lava are full of the minerals and chemicals needed by plants to thrive. Although newly erupted lava and ash is a hostile environment for plants, over time the rocks break down, releasing the nutrients within them. Eventually, highly fertile soils can develop on old lava and ash flows.
Some of the world’s finest wines and coffee are made from crops grown on volcanic soils. But growers should beware. There is always the danger that the very same volcano that provided their fertile soil may erupt again and turn fields back into barren wastelands.

A monitoring station close to Mount Etna

Predicting eruptions

There are warning signs to look out for before a major eruption. Smoke and steam being ejected from the main vent, or even small lava eruptions, can indicate that a much bigger eruption is on the way. Earth tremors are also a sign of impending trouble. Sometimes the sides of a volcano bulge before an eruption as pressure builds up inside. Methods to predict an eruption also include studying the behaviour of local animals: for reasons that are not well understood, some may become agitated before an eruption happens.

More than 80% of the Earth's surface is volcanic in origin. The ocean floor was formed by volcanic eruptions. The word "volcano" comes from the Italian island of Vulcano. Centuries ago, people believed that it was the chimney of the forge of Vulcan, the Roman god of fire. There are at least 1500 active volcanoes above sea level around the world. Indonesia has the most: 86 have erupted in its history. There may be more than 10,000 active volcanoes under the oceans. The Mid-Oceanic Ridge is a long mountain range marking the edges of plates beneath the oceans. As the plates slowly pull apart, magma rises to the surface. One in 10 people live within "danger range" of a volcano. In 1943, a Mexican farmer, having finished ploughing his cornfield at 4.30 p.m., watched an eruption begin in his field near the village of Paricutin. He came back at 8.00 a.m. the next morning to find a volcanic cone 10 metres (30 feet) high. The volcano erupted repeatedly: after one year the cone had grown to 336 metres (1100 feet).

Consultant: Ian Fairchild
© 2017 Q-files Ltd. All rights reserved.
// Copyright 2016 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#ifndef FBL_STRING_PRINTF_H_
#define FBL_STRING_PRINTF_H_

#include <stdarg.h>

#include <zircon/compiler.h>

#include <fbl/string.h>

namespace fbl {

// Formats |printf()|-like input and returns it as an |fbl::String|.
String StringPrintf(const char* format, ...) __PRINTFLIKE(1, 2) __WARN_UNUSED_RESULT;

// Formats |vprintf()|-like input and returns it as an |fbl::String|.
String StringVPrintf(const char* format, va_list ap) __WARN_UNUSED_RESULT;

}  // namespace fbl

#endif  // FBL_STRING_PRINTF_H_
Bitesize Bio
Ask a Chemist: How Colorimetric Assays Work – 18th July, 2011

One of my colleagues, a very good molecular biologist, told me that the only time she uses chemistry is when she needs to calculate molarities. I, of course, scoffed at this statement, and tried to remind her of all the chemistry she uses daily. True, I may be a bit biased since I am a chemist, but surely all the chromophores, fluorophores, and imaging probes, etc. involve some knowledge of chemistry and chemical reactions. She shrugged, and replied that they're all supplied in kits. So, in defence of all chemistry nerds out there, here is my rebuttal: even the most traditional of molecular biologists perform chemical reactions nearly every day. From purification by ion exchange resins and affinity columns, to calculating protein concentrations and coupling bioconjugates, chemistry is behind it all. For this article I will focus on a couple of colorimetric assays and hopefully convince you that knowledge of these chemical reactions is helpful in performing these assays and understanding the results, as well as in troubleshooting when things go awry.

By taking advantage of the electronic properties and specific wavelengths of chromophores, we can deduce a great deal of information. Colorimetric assays help us to distinguish a particular enzyme from another, to quantitate catalytic activity, as well as inhibition of this activity, to generate proliferative and toxicity profiles, and even to determine the concentration of proteins in solution. They are a simple and convenient way to visualize biological processes. As a former PI of mine used to say, in his thick German accent, while pointing to his eyes, "Nature has provided us with our own spectrophotometers, no?" Even more appealing, many colorimetric assays are commercially available as kits, usually with a detailed protocol.
In fact, colorimetric assays are used so regularly in chemistry and biology that to cover them all would take an entire book. So, for simplicity's sake, this article will focus on two phosphatase assays. Kinases and phosphatases mediate a variety of cellular processes, like metabolism, gene transcription and translation, protein-protein interactions, and apoptosis, through protein phosphorylation and dephosphorylation. For this reason, many different technologies, including various colorimetric assays, exist to detect these enzymes and their activity.

para-Nitrophenylphosphate Assay Mechanism

para-Nitrophenylphosphate (pNPP) is widely used as a synthetic substrate to measure the catalytic activity of various phosphatases. The phosphate group of pNPP is cleaved by the enzyme to yield p-nitrophenol, which is also colorless, since the wavelength maximum for the electronic excitation of p-nitrophenol in water is 318 nm. However, under alkaline conditions, the p-nitrophenol is converted to the p-nitrophenolate anion, resulting in a bathochromic shift (a change in absorbance of a compound to a longer wavelength based on solution conditions) to around 400 nm. This is at the blue edge of the visible spectrum, but since we see the color of the reflected light (as opposed to the absorbed light), the solution looks yellow.

Things to Keep in Mind

One of the great selling points of the pNPP assay is its simplicity, so it's easy to see why one might forget that this bathochromic (a great trivia word!) shift is a crucial element of the assay. What if, for example, the enzymatic activity requires an optimum pH in the acidic range, as happens to be the case with acid phosphatase? In that instance, the end-point addition of a strong base (such as sodium hydroxide or potassium hydroxide) is necessary in order to obtain the desired yellow color.
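Once the yellow p-nitrophenolate color has developed, the absorbance reading is usually converted to a product concentration with the Beer-Lambert law, A = ε·l·c. The sketch below is a minimal illustration of that arithmetic, not a validated protocol: the extinction coefficient of roughly 18,000 M⁻¹cm⁻¹ for p-nitrophenolate near 405 nm under alkaline conditions and the 1 cm path length are assumed values that should be checked against your own buffer and instrument.

```java
// Minimal Beer-Lambert sketch for a pNPP phosphatase assay readout.
// ASSUMPTIONS (verify for your own setup): epsilon ~ 18,000 M^-1 cm^-1
// for p-nitrophenolate near 405 nm under alkaline conditions; 1 cm path.
public class PnppAssay {

    static final double EPSILON_M_CM = 18000.0; // assumed extinction coefficient
    static final double PATH_CM = 1.0;          // assumed cuvette path length

    // Beer-Lambert law: A = epsilon * l * c  =>  c = A / (epsilon * l)
    static double concentrationMolar(double absorbance) {
        return absorbance / (EPSILON_M_CM * PATH_CM);
    }

    // Micromolar is usually the convenient unit at assay scale.
    static double concentrationMicromolar(double absorbance) {
        return concentrationMolar(absorbance) * 1e6;
    }

    public static void main(String[] args) {
        double a405 = 0.36; // blank-corrected absorbance at ~400-405 nm
        System.out.printf("p-nitrophenolate: %.1f uM%n",
                concentrationMicromolar(a405));
    }
}
```

Dividing the concentration by the incubation time (and enzyme amount) then gives a specific activity, which is how pNPP readouts are normally reported.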
Malachite Green Assay Mechanism

The para-nitrophenylphosphate assay is great, but what if your favourite phosphatase doesn't efficiently metabolize pNPP? Another commercially available phosphatase assay is the malachite green assay. This simple assay method is based on the complex formed between malachite green, ammonium molybdate, and free orthophosphate (aka inorganic phosphate, Pi) under acidic conditions. Orthophosphate, liberated from a phosphorylated substrate upon cleavage by the phosphatase, forms a complex with ammonium molybdate in a solution of sulfuric acid. The formation of the malachite green phosphomolybdate complex, measured at 620-650 nm, is accordingly directly related to the free orthophosphate concentration. Therefore, it is possible to quantify phosphorylation and phosphate release from protein phosphatase substrates.

Things to Keep in Mind

This assay measures only inorganic free phosphate; organophosphates (lipid-bound or protein-bound phosphates) must first be hydrolyzed and neutralized prior to measurement. Knowledge of the different classes of organophosphates and their respective free energies of hydrolysis aids in the optimization of assay conditions. Higher-energy organophosphates (acetal phosphates, phosphoanhydrides, etc.) possess acid-labile phosphate groups that can be released into solution by incubation at low pH. On the other hand, lower-energy organophosphates (phosphoesters) are stable in acidic solution and require much harsher conditions (such as thermal decomposition) to be detected by this method. In the presence of a large excess of malachite green, the 3:1 ion associate, (MG+)3(PMo12O40^3-), can easily form and precipitate in the acidic aqueous solution. To stabilize the 1:1 ion associate in the aqueous solution, polyvinyl alcohol is added. Another thing to take into consideration during the assay is the possibility of any redox reactions that could interfere with the assay.
Molybdenum is a transition metal and so exists in many oxidation states. In the molybdate anion, it has an oxidation state of +6. Reduction of the acidified Mo(VI) solution by organic compounds, like ascorbic acid and reducing sugars (e.g. glucose), or by inorganic compounds, like SnCl2, creates reduced molybdenum ("molybdenum blue") species, which are blue in color.

These two examples barely skim the surface of colorimetric assays. Just think of all the catalase assays for the detection of hydrogen peroxide (like Sigma's quinoneimine-derived assay), and of course, the very popular MTT-based assays for measuring cytotoxicity. A better understanding of the chemistry involved in these assays helps you understand the reactions more deeply, as well as any possible troubleshooting procedures, and will be discussed at a later time.

Colorimetric assays are just one of the many ways in which chemistry and biology work hand in hand in scientific research. I think it's fair to say that the multi-disciplinary approach is now recognized as a vital way to move forward in the drug discovery process. My chemical biology professor commented that in the 'olden days' it was enough to study medicinal chemistry and that the pharmaceutical companies would teach you the rest. Now, medicinal chemists are expected to understand pharmacology, cell biology, and molecular biology techniques. It takes an entire team of people, involving many specialities, in order to be successful in developing safe and effective drugs. But it is not enough simply to involve many collaborators; I would argue that researchers on all sides need to learn to occasionally think alike. So hopefully the next time my aforementioned colleague performs a colorimetric assay she can don her 'chemistry hat' in addition to her 'molecular biology hat', and remember that drug research is an interdisciplinary art.
With respect to incidence (the table above is about prevalence), Martinez et al (2015) reported that there were 5.4 new cases of tinnitus per 10,000 person-years in England. We don't find this statistic much use as tinnitus is highly prevalent in otherwise normal persons. It seems to us that their study is more about how many persons with tinnitus were detected by the health care system -- and that it is more a study of England's health care system than of tinnitus.

Almost every ENT, audiology practice, and hearing aid dispenser who claims to offer tinnitus treatment only offers one solution: hearing aids. While amplification may help some, only 50% of people living with tinnitus experience hearing loss that affects their understanding of speech, which means hearing aids are ineffective. At Sound Relief, we offer only evidence-based options like sound therapy and have seen countless patients experience life-changing results.

The exact biological process by which hearing loss is associated with tinnitus is still being investigated by researchers. However, we do know that the loss of certain sound frequencies leads to specific changes in how the brain processes sound. In short, as the brain receives less external stimuli around a specific frequency, it begins to adapt and change.
Tinnitus may be the brain’s way of filling in the missing sound frequencies it no longer receives from the auditory system.

Somatic tinnitus is caused, worsened, or otherwise related to your body’s own sensory system. Sensory signals coming from various parts of the body are disrupted, causing a spasm that produces tinnitus. Those who have somatic tinnitus usually have it in only one ear. Depending on the root cause, your doctor may come up with treatment options to alleviate the symptoms.

Acoustic neuroma: This is a rare subjective cause of tinnitus, involving a certain type of brain tumor known as an acoustic neuroma. The tumors grow on the nerve that supplies hearing and can cause tinnitus. This type of tinnitus is usually noticed in only one ear, unlike the more common sort caused by hearing loss, which is usually seen in both ears. Causes of objective tinnitus are usually easier to find.

Research shows a frequent correlation between tinnitus and hearing loss. Because tinnitus is perceived differently by each sufferer, an exact diagnosis is essential. A doctor may conduct ENT, dental, orthodontic, and orthopedic examinations in order to establish whether a case can be medically treated or not. The pitch and volume of tinnitus can be determined by special diagnostic tests, and a hearing test can reveal whether hearing loss is also involved. Treatment with hearing aids is often the first step to relief from tinnitus. Hearing aids compensate for hearing loss, which enables concentration on external sounds instead of internal noises.

Why is tinnitus so disruptive to sleep? Often, it’s because tinnitus sounds become more apparent at night, in a quiet bedroom. The noises of daily life can help minimize the aggravation and disruptiveness of tinnitus sounds. But if your bedroom is too quiet, you may perceive those sounds more strongly when you try to fall asleep—and not be able to drift off easily.
If the cause of your tinnitus is excessive earwax, your doctor will clean out your ears by suction with a small curved instrument called a curette, or gently flush it out with warm water. If you have an ear infection, you may be given prescription ear drops containing hydrocortisone to help relieve the itching and an antibiotic to fight the infection.

Muscle spasms: Tinnitus that is described as clicking may be due to abnormalities that cause the muscle in the roof of the mouth (palate) to go into spasm. This causes the Eustachian tube, which helps equalize pressure in the ears, to repeatedly open and close. Multiple sclerosis and other neurologic diseases that are associated with muscle spasms may also be a cause of tinnitus, as they may lead to spasms of certain muscles in the middle ear that can cause the repetitive clicking.

While there may be a wide range of causes, an important underlying factor for the development of tinnitus is brain plasticity.5,7 This property allows the brain to change and adapt, and it is essential to how we learn. Unfortunately, in some cases, such as with hearing loss, the auditory part of the brain may be altered as brain plasticity tries to compensate for the abnormal auditory inputs. This response leads to changes in brain activity in the auditory system (e.g., the auditory cortex) that can create a phantom percept: tinnitus. As such, while tinnitus may begin as a problem at the auditory periphery, it persists because of changes throughout the auditory system. Treating tinnitus may require addressing both the initiator (e.g., hearing loss) and the driver (changes in the auditory brain).

Noise-induced hearing loss - Exposure to loud noises, either in a single traumatic experience or over time, can damage the auditory system and result in hearing loss and sometimes tinnitus as well. Traumatic noise exposure can happen at work (e.g. loud machinery), at play (e.g.
loud sporting events, concerts, recreational activities), and/or by accident (e.g. a backfiring engine.) Noise induced hearing loss is sometimes unilateral (one ear only) and typically causes patients to lose hearing around the frequency of the triggering sound trauma. If you develop tinnitus, it's important to see your clinician. She or he will take a medical history, give you a physical examination, and do a series of tests to try to find the source of the problem. She or he will also ask you to describe the noise you're hearing (including its pitch and sound quality, and whether it's constant or periodic, steady or pulsatile) and the times and places in which you hear it. Your clinician will review your medical history, your current and past exposure to noise, and any medications or supplements you're taking. Tinnitus can be a side effect of many medications, especially when taken at higher doses (see "Some drugs that can cause or worsen tinnitus"). When a medication is ototoxic, it has a toxic effect on the ear or its nerve supply. In damaging the ear, these drugs can cause side effects like tinnitus, hearing loss, or a balance disorder. Depending on the medication and dosage, the effects of ototoxic medications can be temporary or permanent. More than 200 prescription and over-the-counter medicines are known to be ototoxic, including the following: Noise exposure. Exposure to loud noises can damage the outer hair cells, which are part of the inner ear. These hair cells do not grow back once they are damaged. Even short exposure to very loud sounds, such as gunfire, can be damaging to the ears and cause permanent hearing loss. Long periods of exposure to moderately loud sounds, such as factory noise or music played through earphones, can result in just as much damage to the inner ear, with permanent hearing loss and tinnitus. Listening to moderately loud sounds for hours at a young age carries a high risk of developing hearing loss and tinnitus later in life. 
Earwax (ear wax) is a natural substance secreted by special glands in the skin on the outer part of the ear canal. It repels water, and traps dust and sand particles. Usually a small amount of wax accumulates, dries up, and then falls out of the ear canal carrying with it unwanted particles. Under ideal circumstances, you should never have to clean your ear canals. The absence of ear wax may result in dry, itchy ears, and even infection. Ear wax may accumulate in the ear for a variety of reasons, including narrowing of the ear canal, production of less ear wax due to aging, or an overproduction of ear wax in response to trauma or blockage within the ear canal.

Although mitochondrial DNA variants are thought to predispose to hearing loss, a study of Polish individuals by Lechowicz et al reported that "there are no statistically significant differences in the prevalence of tinnitus and its characteristic features between HL patients with known HL mtDNA variants and the general Polish population." This would argue against mitochondrial DNA variants as a cause of tinnitus, but the situation might be different in other ethnic groups.

There seems to be a two-way relationship between tinnitus and sleep problems. The symptoms of tinnitus can interfere with sleeping well—and poor sleep can make tinnitus more aggravating and difficult to manage effectively. In the same study that found a majority of people with tinnitus had a sleep disorder, the scientists also found that the presence of sleep disorders made tinnitus more disruptive.

There are many different conditions and disorders that affect nerve channels leading to the ears, which can cause someone to hear abnormal ringing or other sounds in their ears. These conditions usually cause other symptoms at the same time (such as dizziness, hearing loss, headaches, facial paralysis, nausea and loss of balance), which doctors use as clues to uncover the underlying cause of tinnitus.
Tinnitus (pronounced "tin-it-tus") is an abnormal noise in the ear (note that it is not an "itis" -- which means inflammation). Tinnitus is common -- nearly 36 million Americans have constant tinnitus and more than half of the normal population has intermittent tinnitus.   Another way to summarize this is that about 10-15% of the entire population has some type of constant tinnitus, and about 20% of these people (i.e. about 1% of the population) seek medical attention (Adjamian et al, 2009). Similar statistics are found in England (Dawes et al, 2014) and Korea (Park and Moon, 2014). A disorder of the inner ear, Ménière’s disease typically affects hearing and balance and may cause debilitating vertigo, hearing loss, and tinnitus. People who suffer from Ménière’s disease often report a feeling of fullness or pressure in the ear (it typically affects only one ear). The condition most often impacts people in their 40s and 50s, but it can afflict people of all ages, including children. Although treatments can relieve the symptoms of Ménière’s disease and minimize its long-term influence, it is a chronic condition with no true cure. It’s been found that exposure to very loud noises can contribute to early hearing loss and ear problems. Loud sounds can include those from heavy machinery or construction equipment (such as sledge hammers, chain saws and firearms). Even gun shots, car accidents, or very loud concerts and events can trigger acute tinnitus, although this should go away within a couple days in some cases. (5) Hyperacusis is a different, but related condition to tinnitus. People with hyperacusis have a high sensitivity to common, everyday environmental noise. In particular, sharp and high-pitched sounds are very difficult for people with hyperacusis to tolerate—sounds like the screeching of brakes, a baby crying or a dog barking, a sink full of dishes and silverware clanging.  
Many people with tinnitus also experience hyperacusis—but the two conditions don’t always go together. Tinnitus usually comes in the form of a high-pitched tone in one or both ears, but can also sound like a clicking, roaring or whooshing sound. While tinnitus isn't fully understood, it is known to be a sign that something is wrong in the auditory system: the ear, the auditory nerve that connects the inner ear to the brain, or the parts of the brain that process sound. Something as simple as a piece of earwax blocking the ear canal can cause tinnitus, but it can also arise from a number of health conditions. For example, when sensory cells in the inner ear are damaged from loud noise, the resulting hearing loss changes some of the signals in the brain to cause tinnitus. Shelly-Anne Li is the VP of clinical research and operations at Sound Options Tinnitus Treatments Inc. As a research methodology consultant for various projects, she brings expertise in health research methods, as well as experience from conducting multi-site randomized controlled trials, mixed methods studies and qualitative research. Shelly-Anne Li is currently a PhD candidate at University of Toronto, and obtained her MSc (health sciences) from McMaster University. Most tinnitus is subjective, meaning that only you can hear the noise. But sometimes it's objective, meaning that someone else can hear it, too. For example, if you have a heart murmur, you may hear a whooshing sound with every heartbeat; your clinician can also hear that sound through a stethoscope. Some people hear their heartbeat inside the ear — a phenomenon called pulsatile tinnitus. It's more likely to happen in older people, because blood flow tends to be more turbulent in arteries whose walls have stiffened with age. Pulsatile tinnitus may be more noticeable at night, when you're lying in bed and there are fewer external sounds to mask the tinnitus. 
If you notice any new pulsatile tinnitus, you should consult a clinician, because in rare cases it is a sign of a tumor or blood vessel damage.
Dysautonomia is a term used to describe many different medical conditions that involve the autonomic nervous system. Despite being unfamiliar to many, this umbrella term affects more than 70 million people around the world. In this entry, we will go deeper into the definition of dysautonomia and its different types.

Dysautonomia comes in different forms, all of which involve dysfunction of the autonomic nervous system (ANS), usually manifesting as failure of the sympathetic or parasympathetic components of the ANS. In rare instances, the condition results from an overactive ANS.

The autonomic nervous system is the part of the nervous system that coordinates unconscious homeostatic operations: bodily activities that are carried out automatically, without input from the higher brain centers. Basically, the ANS is responsible for keeping a constant internal temperature, maintaining steady blood pressure, normal breathing patterns, correct dilation of the pupils, sexual arousal, and excretion. Therefore, dysautonomia symptoms occur as faults in those specific functions and areas of physiology.

There are different subtypes of dysautonomia, and symptoms vary by subtype. The most common symptoms, however, are abnormal heart rate, fainting, lightheadedness, and unstable blood pressure. In more serious cases, complications such as acute respiratory failure, pneumonia, or sudden cardiopulmonary disease can lead to death. Unfortunately, medical scientists are yet to fully understand dysautonomia, and there is still no cure for it. There are, however, various medications and interventions one can use to ease its symptoms. Also, dysautonomia may be a secondary condition to another disease.
Also, it is important to note that dysautonomia can develop as a consequence of other diseases, such as celiac disease, diabetes, multiple sclerosis, Sjögren’s syndrome, rheumatoid arthritis, Parkinson’s disease, and lupus.

Two Most Common Subtypes of Dysautonomia

Dysautonomia has 15 subtypes, ranging from common to rare. Neurocardiogenic syncope and postural orthostatic tachycardia syndrome are its two most common subtypes.

Neurocardiogenic Syncope (NCS)

NCS is one of the two most common types of dysautonomia, affecting millions of people around the world. When we stand, gravity affects our body by pulling blood down to our lower extremities. This results in partial drainage of blood from the head and thorax. In healthier individuals, the ANS will increase vascular tone, cardiac output, and heart rate to counteract these changes. People suffering from NCS lack this compensatory mechanism; their brain is deprived of oxygen-carrying blood, causing them to faint. The severity of this condition, of course, varies from one person to the next; some may experience a single fainting episode, while others may find it hard to function normally due to regular episodes.

Postural Orthostatic Tachycardia Syndrome (POTS)

POTS is another common type of dysautonomia; experts estimate it affects 1% of all teenagers in the United States. That means somewhere around 1 to 3 million people are affected by POTS, and research shows it is 5 times more prevalent among females than males. POTS is not a disease; rather, it is a syndrome, which means it is caused by an underlying medical condition, though the cause is often difficult to pin down. Possible underlying conditions include multiple sclerosis, genetic abnormalities and disorders, mitochondrial disease, pneumonia, toxicity caused by alcoholism, heavy-metal poisoning, or chemotherapy, autoimmune diseases such as lupus, Sjögren’s syndrome, and sarcoidosis, antiphospholipid syndrome, vitamin deficiencies (such as those causing anemia), and vaccination.
Its symptoms include chest pain, shaking, shortness of breath, fainting or lightheadedness, sensitivity to temperature, abnormally fast heart rate (tachycardia), and gastrointestinal upset.
PubAg

The Arctic char (Salvelinus alpinus) “complex” in North America revisited
Author: Taylor, Eric B.
Source: Hydrobiologia 2016 v.783 no.1 pp. 283-293
ISSN: 0018-8158
Subject: Salvelinus alpinus, Salvelinus confluentus, biodiversity, biologists, genetics, sympatry, Alaska, Arctic region
Abstract: The Arctic char (Salvelinus alpinus) species “complex” has fascinated biologists for decades, particularly with respect to how many species there are and their geographic distributions. I review recent research on the species complex, focussing on biodiversity within northwestern North America, which indicates (i) what was once considered a single taxon consists of three taxa: S. alpinus (Arctic char), S. malma (Dolly Varden), and S. confluentus (bull trout); (ii) morphological and genetic data indicate that S. alpinus and S. malma, and S. malma and S. confluentus, exist as distinct biological species in sympatry; (iii) sympatric forms of S. alpinus exist in Alaska as in other areas of the Holarctic; (iv) Dolly Varden comprises two well-differentiated subspecies, S. m. malma and S. m. lordi, in the eastern Pacific and the northwestern Canadian Arctic that meet at a contact zone on the southern edge of the Alaska Peninsula; and (v) Dolly Varden and bull trout consist of several population assemblages that have legal status as distinct conservation units under US and Canadian law. This research has significantly revised what constitutes the S. alpinus species “complex”, provided insights into the ecology and genetics of co-existence, and promoted conservation assessment that better represents biodiversity within Salvelinus. A geographically and genetically comprehensive analysis of relationships among putative taxa of Pan-Pacific Salvelinus is still required to better quantify the number of taxa and their origins.
Agid: 5562522
Datatypes: XPath/XQuery to Java (www.altova.com)

When a Java function is called from within an XPath/XQuery expression, the datatype of the function's arguments is important in determining which of multiple Java methods having the same name is called.

In Java, the following rules are followed:

If there is more than one Java method with the same name, but each has a different number of arguments than the other/s, then the Java method that best matches the number of arguments in the function call is selected.

The XPath/XQuery string, number, and boolean datatypes (see list below) are implicitly converted to a corresponding Java datatype. If the supplied XPath/XQuery type can be converted to more than one Java type (for example, xs:integer), then that Java type is selected which is declared for the selected method. For example, if the Java method being called is fx(decimal) and the supplied XPath/XQuery datatype is xs:integer, then xs:integer will be converted to Java's decimal datatype.

The table below lists the implicit conversions of XPath/XQuery string, number, and boolean types to Java datatypes.

xs:string   ->  java.lang.String
xs:boolean  ->  boolean (primitive), java.lang.Boolean
xs:integer  ->  int, long, short, byte, float, double, and the wrapper classes of these, such as java.lang.Integer
xs:float    ->  float (primitive), java.lang.Float, double (primitive)
xs:double   ->  double (primitive), java.lang.Double
xs:decimal  ->  float (primitive), java.lang.Float, double (primitive), java.lang.Double

Subtypes of the XML Schema datatypes listed above (and which are used in XPath and XQuery) will also be converted to the Java type/s corresponding to that subtype's ancestor type.
In some cases, it might not be possible to select the correct Java method based on the supplied information. For example, consider the following case.

The supplied argument is an xs:untypedAtomic value of 10 and it is intended for the method mymethod(float). However, there is another method in the class which takes an argument of another datatype: mymethod(double). Since the method names are the same and the supplied type (xs:untypedAtomic) could be converted correctly to either float or double, it is possible that xs:untypedAtomic is converted to double instead of float. Consequently, the method selected will not be the required method and might not produce the expected result. To work around this, you can create a user-defined method with a different name and use that method.

Types that are not covered in the list above (for example, xs:date) will not be converted and will generate an error. However, note that in some cases it might be possible to create the required Java type by using a Java constructor.

© 2019 Altova GmbH
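The ambiguity described above is a general property of overload resolution, not something specific to the XPath/Java bridge. The following C++ sketch (C++ rather than Java, purely for illustration; the mymethod names are the hypothetical ones from the text) shows how the static type of the supplied argument decides which overload runs:

```cpp
#include <string>

// Hypothetical overload pair mirroring the mymethod(float)/mymethod(double)
// example in the text. The overload that runs is chosen from the static
// type of the argument -- which is exactly why an untyped XPath value that
// could legally become either float or double may land on the wrong method.
inline std::string mymethod(float)  { return "float"; }
inline std::string mymethod(double) { return "double"; }

// Note: calling mymethod(10) with a plain int would not even compile here,
// because int converts equally well to float and to double -- the
// compile-time analogue of the runtime ambiguity described above.
```

An explicitly typed argument (10.0f vs 10.0) selects the matching overload unambiguously, which is the same effect as the suggested workaround of giving the intended method a unique name.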
Sea Buckthorn

Sea Buckthorn (Hippophae), also known as Seaberry, is a thorny shrub that grows near rivers and in sandy soil along the Atlantic coasts of Europe and throughout Asia, where it has been used for centuries in traditional medical applications. The leaves, flowers, fruits, and oils from the seeds are all used for remedies.

The Sea Buckthorn Plant

There are seven varieties of the Sea Buckthorn, the most common of which are Hippophae rhamnoides (common sea buckthorn) and Hippophae salicifolia (willow-leaved sea buckthorn). The other, less common species are Hippophae goniocarpa, Hippophae gyantsensis, Hippophae litangensis, Hippophae neurocarpa and Hippophae tibetana. Most of the world’s sea buckthorn plantations are located in China. There, the shrub is used for soil and water conservation in addition to its healing properties. The fruit of the Sea Buckthorn is difficult to harvest, due to the thorny nature of the shrubs themselves. The harvested fruit is quite acidic and its juices are often combined with those of sweeter fruits, such as grape or pear, to make it more palatable.

Active Ingredients in Sea Buckthorn

Sea buckthorn berries, leaves, and seeds are highly nutritious and packed with vitamins, minerals and various bioactive compounds and nutrients, including: vitamin A in the form of beta-carotene, vitamin C, vitamin E, vitamin K, the vitamin B complex (including B1, B2, B6, and B9), potassium, calcium, magnesium, iron, phosphorus, antioxidants, amino acids, fatty acids and flavonoids. All of these ingredients play essential roles in supporting overall health and well-being, contributing to the medicinal properties of Sea Buckthorn.

Health Benefits of Sea Buckthorn

Sea Buckthorn is becoming increasingly popular for its impressive range of healing properties.
In natural medicine, there are many uses and indications for the Sea Buckthorn, which we will explore below:

Sea Buckthorn is a Powerful Antioxidant

Sea buckthorn berries are loaded with antioxidants, including flavonoids, phenolic compounds, carotenoids (such as beta-carotene and lycopene), and vitamin C. Antioxidants help neutralize harmful free radicals and reduce oxidative stress, which is implicated in various chronic diseases, including cardiovascular disease, cancer and neurodegenerative disorders.

Sea Buckthorn for Cardiovascular Health

Sea Buckthorn tea is typically used to lower blood pressure, relieve angina and reduce serum cholesterol and hyperlipidemia (high cholesterol), as well as to prevent and treat diseases of the blood vessels. Chinese researchers have completed a study suggesting that Sea Buckthorn oil extract can lower cholesterol, reduce angina and improve heart function in patients with cardiac disease. Research on Sea Buckthorn as it relates to weight loss, cardiac disease and cholesterol levels is ongoing and appears to be promising based on initial results. Sea buckthorn berries and oil may help promote cardiovascular health by reducing inflammation, improving blood lipid profiles, and enhancing endothelial function. Studies suggest that sea buckthorn oil supplementation may help lower levels of LDL (bad) cholesterol and triglycerides, while increasing levels of HDL (good) cholesterol, thereby reducing the risk of heart disease and improving overall cardiovascular function.

Sea Buckthorn for Immune Support

Sea buckthorn berries are believed to support immune function due to their high content of vitamin C and other immune-boosting nutrients. Vitamin C helps stimulate the production of white blood cells and antibodies, which play key roles in defending the body against infections and illnesses.

Sea Buckthorn for Skin Health

Sea buckthorn oil is commonly used in skincare products for its benefits for skin health.
It is rich in vitamin E, omega-7 fatty acids (palmitoleic acid), and other nutrients that help moisturize the skin, reduce inflammation, promote wound healing and protect against UV damage. Sea buckthorn oil may help alleviate symptoms of dry skin, eczema, psoriasis and acne when applied topically. Cooled Sea Buckthorn tea can be applied to sunburn to reduce swelling and irritation whilst promoting healing.

Sea Buckthorn for Digestive Health

Sea buckthorn berries and oil may have potential benefits for digestive health, including alleviating symptoms of gastric ulcers, gastroesophageal reflux disease (GERD) and inflammatory bowel disease (IBD). Sea buckthorn oil may help soothe and repair the mucous membranes of the digestive tract, reducing inflammation and promoting gastrointestinal health. It may be used to treat gastrointestinal (GI) tract diseases including ulcers, GERD, upset stomach, dyspepsia and constipation.

Sea Buckthorn for Eye Health

Sea buckthorn berries contain lutein, zeaxanthin, and other carotenoids that are beneficial for eye health. These compounds help protect the eyes from oxidative damage, reduce the risk of age-related macular degeneration (AMD) and cataracts, and support overall vision health.

Anti-Inflammatory Effects of Sea Buckthorn

Sea buckthorn berries and oil have been studied for their potential anti-inflammatory effects, which may help reduce inflammation in the body and alleviate symptoms of inflammatory conditions such as arthritis, asthma and dermatitis. Sea buckthorn is believed to possess anti-inflammatory properties due to its rich composition of bioactive compounds, including flavonoids, phenolic acids, carotenoids, vitamins and fatty acids.
These compounds work synergistically to exert various anti-inflammatory effects in the body, which we will explore below:

Inhibition of Inflammatory Mediators: Sea buckthorn contains flavonoids and phenolic acids that can inhibit the activity of pro-inflammatory enzymes, such as cyclooxygenase (COX) and lipoxygenase (LOX). These enzymes are involved in the production of inflammatory mediators, including prostaglandins and leukotrienes, which contribute to inflammation and pain.

Antioxidant Activity: Sea buckthorn is rich in antioxidants, such as vitamin C, vitamin E, flavonoids, and carotenoids, which help neutralize harmful free radicals and reduce oxidative stress. Oxidative stress can lead to inflammation and tissue damage, so by scavenging free radicals, sea buckthorn helps reduce inflammation and protect cells from damage.

Modulation of Immune Response: Sea buckthorn may modulate the immune response and reduce inflammation by regulating the production of cytokines and other immune signaling molecules. By balancing the immune system’s response, sea buckthorn can help prevent excessive inflammation and promote tissue repair and regeneration.

Reduction of Pro-inflammatory Signaling: Sea buckthorn contains fatty acids, such as omega-3 and omega-7 fatty acids, which have been shown to inhibit pro-inflammatory signaling pathways, such as the nuclear factor-kappa B (NF-kB) and mitogen-activated protein kinase (MAPK) pathways. By blocking these pathways, sea buckthorn helps reduce the production of inflammatory cytokines and other mediators.

Protection of Mucous Membranes: Sea buckthorn oil is known for its ability to nourish and protect mucous membranes, including those lining the digestive tract and respiratory system. By maintaining the integrity of mucous membranes, sea buckthorn helps prevent inflammation and irritation caused by environmental toxins, pathogens, and other irritants.
Overall, the combination of anti-inflammatory, antioxidant and immunomodulatory properties makes sea buckthorn an effective natural remedy for reducing inflammation and promoting overall health and well-being. Sea buckthorn supplements, oils, and extracts are commonly used to alleviate inflammatory conditions such as arthritis, gastritis, dermatitis and respiratory tract infections. However, more research is needed to fully understand the mechanisms of action and clinical efficacy of sea buckthorn in managing inflammation-related disorders.

Sea Buckthorn for Respiratory Health

Sea Buckthorn is also used as an expectorant. An expectorant is a substance that helps to loosen and expel mucus from the respiratory tract, making it easier to cough up. Sea buckthorn contains bioactive compounds such as flavonoids and phenolic acids, which are believed to have mucolytic (mucus-dissolving) properties. While scientific research on the expectorant effects of sea buckthorn is limited, its traditional use suggests that it may help alleviate respiratory symptoms such as coughing, congestion and phlegm production. Sea buckthorn supplements or preparations, such as teas or syrups made from sea buckthorn berries or leaves, may be used to promote respiratory health and support the body’s natural mechanisms for clearing mucus from the airways.

Sea Buckthorn for Anti-Aging

Sea buckthorn is often touted for its potential anti-aging effects due to its rich composition of bioactive compounds, including vitamins, minerals, antioxidants, and fatty acids. Sea buckthorn berries are used for preventing skin infections, improving sight, and slowing the aging process, including slowing the reduction of mental agility associated with aging. Here’s how sea buckthorn may help with anti-aging:

Antioxidant Protection: Sea buckthorn is loaded with antioxidants such as vitamins C and E, flavonoids, carotenoids (including beta-carotene and lycopene), and phenolic compounds.
These antioxidants help neutralize harmful free radicals in the body, which are molecules that can damage cells and contribute to premature aging. By scavenging free radicals, sea buckthorn helps protect cells from oxidative stress and reduces the signs of aging, such as fine lines, wrinkles, and age spots.

Collagen Production: Sea buckthorn contains nutrients that support collagen production, such as vitamin C and certain amino acids. Collagen is a protein that provides structure and elasticity to the skin, helping to maintain its firmness and smoothness. By promoting collagen synthesis, sea buckthorn helps improve skin tone and texture, reduce the appearance of wrinkles, and enhance overall skin elasticity.

Skin Hydration: The fatty acids found in sea buckthorn oil, including omega-7 (palmitoleic acid), omega-3, and omega-6 fatty acids, help nourish and moisturize the skin from within. These fatty acids support the skin’s natural barrier function, preventing moisture loss and keeping the skin hydrated and supple. Hydrated skin appears plumper and more youthful, reducing the appearance of fine lines and wrinkles.

UV Protection: Sea buckthorn oil contains carotenoids such as beta-carotene and lycopene, which have been shown to provide some degree of protection against UV radiation from the sun. UV exposure is a major contributor to skin aging, leading to wrinkles, sunspots, and loss of skin elasticity. By providing natural sun protection, sea buckthorn helps minimize the damaging effects of UV radiation and prevents premature aging of the skin.

Wound Healing: Sea buckthorn oil has been traditionally used for its wound-healing properties. It contains compounds such as vitamins E and C, fatty acids, and flavonoids, which promote tissue regeneration and repair. By accelerating the healing process, sea buckthorn helps reduce the appearance of scars and blemishes, giving the skin a smoother and more youthful appearance.
Hair and Nail Health: Sea buckthorn oil is also beneficial for hair and nail health. Its nourishing properties help strengthen hair follicles, promote hair growth, and improve the overall condition of the hair and scalp. Additionally, sea buckthorn oil helps strengthen nails, preventing brittleness and breakage.

Overall, sea buckthorn offers a range of benefits for skin health and anti-aging. Regular use of sea buckthorn supplements or topical products, such as oils, creams and serums, can help rejuvenate the skin, reduce the signs of aging and promote a more youthful appearance. However, it’s important to choose high-quality sea buckthorn products and incorporate them into a comprehensive skincare routine for optimal results.

Additional Uses of Sea Buckthorn

Sea Buckthorn oil is used in traditional medicine to reduce the side effects of cancer and cancer treatments. One recent study suggests that Sea Buckthorn seed oil may be effective for assisting in weight loss and for increasing immunity, as well as for treating gout.

How to Take Sea Buckthorn

Various parts of Sea Buckthorn are used for medicinal purposes. The most commonly utilised parts for medicinal applications include:

Fruit Pulp and Juice
Rich in Nutrients: The orange berries of the Sea Buckthorn are particularly rich in nutrients, including vitamins (such as vitamin C and vitamin E), carotenoids, flavonoids, and fatty acids.
Antioxidant Properties: The fruit pulp and juice are known for their high antioxidant content, which may contribute to their potential health benefits.
Immune Support: The vitamin C content in Sea Buckthorn berries may support the immune system.

Seed Oil
Omega Fatty Acids: The oil extracted from Sea Buckthorn seeds is rich in omega-3, omega-6, omega-7, and omega-9 fatty acids.
Skin Health: Sea Buckthorn seed oil is often used topically for skin health. It may have moisturising properties and can be beneficial for conditions like dry skin.
Leaves
Traditional Uses: In some traditional systems of medicine, Sea Buckthorn leaves have been used for medicinal purposes.
Potential Benefits: Leaves may contain certain bioactive compounds, but their use is not as common as the fruit pulp or seed oil.

Bark
Traditional Uses: In some traditional practices, the bark of Sea Buckthorn may be used for medicinal purposes.
Less Common: The use of bark is less common compared to the fruit, seed oil, or leaves.

Sea Buckthorn products, such as oils, capsules, and juices, are available as dietary supplements. The leaves and flowers can also be taken as a herbal tea. Always take care when taking herbs and Read Our Disclaimer.

Sea Buckthorn Notes / Side Effects

There are several special precautions and warnings that should be heeded when using Sea Buckthorn. There is insufficient data regarding the use of sea buckthorn during pregnancy to ensure its safety, and whether it crosses over into breast milk is unknown. It is currently recommended that pregnant and breastfeeding women avoid the use of Sea Buckthorn supplements until additional data is available. There are some initial indications that Sea Buckthorn has the ability to slow blood clotting. While in some instances this may be beneficial, it is important to stop taking this supplement at least 2 weeks prior to any scheduled surgery to reduce the risk of excess bleeding during and after an operation. Anticoagulants and antiplatelet drugs administered to slow blood clotting may interact with Sea Buckthorn, causing an increase in anti-clotting activity. Taking Sea Buckthorn in addition to these medications may increase the potential for excess bleeding and bruising. Some of the medications which may interact in this manner include aspirin, clopidogrel, diclofenac, ibuprofen, naproxen, dalteparin, enoxaparin, heparin, and warfarin. This is not a complete list.
It is therefore very important to consult a physician before taking Sea Buckthorn products with any blood thinners or NSAID drugs.
Specializations

Artificial intelligence (track in Computer Engineering)

The Artificial Intelligence specialization is aimed at computer science students who wish to deepen their knowledge and skills related to modern AI technologies. The study program covers a wide range of subjects, such as advanced programming techniques, data analysis techniques, machine learning, and reinforcement learning. Students will also learn about AI applications in computer games, computer graphics, and cloud systems, which serve as the teleinformatics backbone of AI. During their studies, they will acquire skills in designing interpretable decision systems, as well as recurrent neural networks and transformers. Courses such as prompt engineering and computer vision will allow students to apply theoretical knowledge in practice, and thesis seminars will prepare them for independent scientific and professional work. Graduates of this specialization are prepared to work in various sectors of the economy, such as the technology industry, the financial sector, healthcare, and manufacturing, as well as in IT companies and fields related to data analysis, where advanced AI technologies are increasingly in demand.

Engineering of information systems (track in Computer Engineering)

The track of study Engineering of information systems is aimed at students of computer science who are interested in acquiring both general knowledge and specific practical skills in software engineering. Students learn about computer architecture and principles of computer design, multitasking operating systems, computer networks design, database systems and applications.
They gain experience in object-oriented programming languages, such as C++, Java, C#, parallel and distributed computing, computer graphics, computer vision and computational intelligence as well as human-computer interaction. Learners are acquainted with systems engineering, teamwork and project management, IT security and the applicable legal and ethical principles. They get knowledge about modern software engineering methods and technologies. Graduates of this specialization find employment in virtually all sectors of the economy, administration and management, computer hardware and software companies, clinics and hospitals (databases and networks) as well as in education.
Computer control systems (track in Automatic Control and Robotics)

The track of study Computer control systems is targeted particularly at those who are interested in building various automation and control systems and integrating them into IT environments. Students learn about computer programming in universal and real-time operating systems, modern microcontrollers and embedded systems. Other subjects include computer networks and distributed control systems, industrial database systems, and software engineering. Skills are acquired in robotics, CNC programming, computer vision, wireless communication, rapid prototyping of mechatronic devices and systems, and principles of system safety. Knowledge is acquired about methods of artificial intelligence, data exploration and decision support systems. Graduates of this specialization can be employed in automation departments of engineering, chemical and energy service companies; in firms which specialize in installing, operating, and maintaining programmable controllers, control systems or automated production lines; in business informatics, working on integration of various computer networks and applications that connect both the production and management spheres; and wherever the combination of information technology and control is needed.
Aspect Source Flags (ASFs) GKS allows for two methods of specifying attributes for output primitives. One way, which is the way that has been discussed exclusively in this guide, is to specify the attributes individually. Using this method, if you want to change the width of a polyline, then you would call GSLWSC to specify the desired width, and this setting would be in effect until GSLWSC was called again. The other method of specifying attributes for output primitives is called "bundled". Using this method, the attributes are contained in "bundle tables". These tables contain settings for all of the attributes associated with a given output primitive. For example, there can be one or more bundle tables for polylines each of which contains settings for linetype, linewidth scale factor, and polyline color index. Using the bundled attribute setting scheme you set a value for the number of the bundle table that should be used to find the desired setting for the attributes. Clearly the individual attribute setting scheme and the bundled attribute setting scheme cannot both be in effect simultaneously. GKS supplies a means by which you can specify whether a given attribute should come from its individual setting, or from a bundle table. Associated with each output primitive attribute is an "aspect source flag." The legal values for these flags are "1" for "individual" and "0" for "bundled." Each time GKS draws an output primitive, it first looks at the setting of the aspect source flag for the appropriate attributes, and then either uses the individual setting if the aspect source flag value is "1" or the setting from a specified bundle table if the setting is "0". All default settings for output primitive attribute aspect source flags are "individual" in NCAR GKS. In higher levels of GKS it is possible for you to define your own bundle tables and select them, but at level 0A, this is not allowed. 
As implemented, using bundle tables in NCAR GKS is of no value and trying to use them is not advised. The only reason that aspect source flags are discussed here is because the default environment in some GKS packages is "bundled" rather than "individual." In the case that you run NCAR Graphics on top of a non-NCAR GKS package, you will want to check on the default setting for aspect source flags, and if it is "bundled", then you should initialize all the flags to "individual" by using the subroutine GSASF described below. If you do not have access to the necessary GKS documentation, then you can use the subroutine GQASF, described below, to determine the values of the aspect source flags and then, if they are bundled, set them to individual by calling GSASF.

Set Aspect Source Flags

--------------------------------------------------
Argument            | Type     | Mode   | Dimension
--------------------------------------------------
CALL GSASF (LASF)   | Integer  | Input  | 13
--------------------------------------------------

LASF
    List of aspect source flags. A value of 0 means bundled and a value of 1
    means individual. The ordering of the flags in LASF is:

     1 - linetype ASF
     2 - linewidth scale factor ASF
     3 - polyline color index ASF
     4 - marker type ASF
     5 - marker size scale factor ASF
     6 - polymarker color index ASF
     7 - text font and precision ASF
     8 - character expansion factor ASF
     9 - character spacing ASF
    10 - text color index ASF
    11 - fill area interior style ASF
    12 - fill area style index ASF
    13 - fill area color index ASF

Defaults: In NCAR GKS all values of LASF are initialized to 1. Other GKS packages may initialize these values to 0.
Errors: 8

C Synopsis

#include <ncarg/gks.h>

void gset_asfs(
    const Gasfs *list_asf  /* list of asfs */
);

Inquire Aspect Source Flags

---------------------------------------------------
Argument            | Type     | Mode    | Dimension
---------------------------------------------------
CALL GQASF (ERRIND, | Integer  | Output  |
            LASF)   | Integer  | Output  | 13
---------------------------------------------------

ERRIND
    Error flag. Gives an integer error number from the errors list in
    Appendix D, or a 0 if no error occurred.

LASF
    List of aspect source flags. A value of 0 means bundled and a value of 1
    means individual. The ordering of the flags in LASF is:

     1 - linetype ASF
     2 - linewidth scale factor ASF
     3 - polyline color index ASF
     4 - marker type ASF
     5 - marker size scale factor ASF
     6 - polymarker color index ASF
     7 - text font and precision ASF
     8 - character expansion factor ASF
     9 - character spacing ASF
    10 - text color index ASF
    11 - fill area interior style ASF
    12 - fill area style index ASF
    13 - fill area color index ASF

Defaults: In NCAR GKS all values of LASF are initialized to 1. Other GKS packages may initialize these values to 0.

Errors: 8

C Synopsis

#include <ncarg/gks.h>

void ginq_asfs(
    Gint *err_ind,   /* error indicator */
    Gasfs *list_asf  /* aspect source flags */
);
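The "query, then force individual" initialization advice above can be sketched as follows. This C++ sketch uses gqasf_stub/gsasf_stub, stand-ins invented for this example so the logic can run anywhere (the real GQASF/GSASF live in the GKS library and are declared as in the synopses above); only the pattern is the point: inquire the 13 flags, and if any are 0 (bundled), set them all to 1 (individual).

```cpp
#include <array>

// Stand-ins for GQASF/GSASF (hypothetical names; the real routines are in
// the GKS library). A 13-element array models the GKS aspect-source-flag
// state: 0 = bundled, 1 = individual.
constexpr int NUM_ASFS = 13;
inline std::array<int, NUM_ASFS> g_asfs{};  // pretend GKS state, all bundled

inline void gqasf_stub(int* errind, int* lasf) {  // models CALL GQASF
    *errind = 0;  // 0 = no error occurred
    for (int i = 0; i < NUM_ASFS; ++i) lasf[i] = g_asfs[i];
}

inline void gsasf_stub(const int* lasf) {         // models CALL GSASF
    for (int i = 0; i < NUM_ASFS; ++i) g_asfs[i] = lasf[i];
}

// The initialization pattern recommended in the text: inquire the flags,
// and if any are "bundled" (0), set them all to "individual" (1).
inline void ensure_individual_asfs() {
    int errind = 0;
    int lasf[NUM_ASFS];
    gqasf_stub(&errind, lasf);
    if (errind != 0) return;  // inquiry failed; leave flags untouched
    bool any_bundled = false;
    for (int i = 0; i < NUM_ASFS; ++i)
        if (lasf[i] == 0) { lasf[i] = 1; any_bundled = true; }
    if (any_bundled) gsasf_stub(lasf);
}
```

In a real program the same call would typically be made once, right after opening GKS, before any output primitives are drawn.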
/* config.h.in.  Generated from configure.ac by autoheader. */

/* Define if building universal (internal helper macro) */
#undef AC_APPLE_UNIVERSAL_BUILD

/* Define if <cmath> has std::isnan */
#undef CXX_HAS_STD_ISNAN

/* Define to 1 if you have the <dlfcn.h> header file. */
#undef HAVE_DLFCN_H

/* Define to 1 if you have the <fcntl.h> header file. */
#undef HAVE_FCNTL_H

/* Define to 1 if you have the `getopt' function. */
#undef HAVE_GETOPT

/* Define to 1 if you have the `gettimeofday' function. */
#undef HAVE_GETTIMEOFDAY

/* Define to 1 if you have the <inttypes.h> header file. */
#undef HAVE_INTTYPES_H

/* Define to 1 if you have the <io.h> header file. */
#undef HAVE_IO_H

/* Define to 1 if you have the <memory.h> header file. */
#undef HAVE_MEMORY_H

/* Define to 1 if you have the `nanosleep' function. */
#undef HAVE_NANOSLEEP

/* Define to 1 if you have the <Python.h> header file. */
#undef HAVE_PYTHON_H

/* Define to 1 if you have the `rand' function. */
#undef HAVE_RAND

/* Define to 1 if you have the `random' function. */
#undef HAVE_RANDOM

/* Define to 1 if you have the `sigaction' function. */
#undef HAVE_SIGACTION

/* Define to 1 if you have the `sleep' function. */
#undef HAVE_SLEEP

/* Define to 1 if you have the `srand' function. */
#undef HAVE_SRAND

/* Define to 1 if you have the `srandom' function. */
#undef HAVE_SRANDOM

/* Define to 1 if the system has the type `ssize_t'. */
#undef HAVE_SSIZE_T

/* Define to 1 if you have win32 Sleep */
#undef HAVE_SSLEEP

/* Define to 1 if you have the <stdint.h> header file. */
#undef HAVE_STDINT_H

/* Define to 1 if you have the <stdlib.h> header file. */
#undef HAVE_STDLIB_H

/* Define to 1 if you have the <strings.h> header file. */
#undef HAVE_STRINGS_H

/* Define to 1 if you have the <string.h> header file. */
#undef HAVE_STRING_H

/* Define to 1 if the system has the type `struct timespec'. */
#undef HAVE_STRUCT_TIMESPEC

/* Define to 1 if the system has the type `struct timezone'. */
#undef HAVE_STRUCT_TIMEZONE

/* Define to 1 if you have the <sys/stat.h> header file. */
#undef HAVE_SYS_STAT_H

/* Define to 1 if you have the <sys/types.h> header file. */
#undef HAVE_SYS_TYPES_H

/* Define to 1 if you have the <unistd.h> header file. */
#undef HAVE_UNISTD_H

/* Define to 1 if you have the `usleep' function. */
#undef HAVE_USLEEP

/* Define to 1 if you have the <winbase.h> header file. */
#undef HAVE_WINBASE_H

/* Define to 1 if you have the <windows.h> header file. */
#undef HAVE_WINDOWS_H

/* Define to 1 if you have the <winioctl.h> header file. */
#undef HAVE_WINIOCTL_H

/* Define to the sub-directory in which libtool stores uninstalled
   libraries. */
#undef LT_OBJDIR

/* Define if mkdir accepts only one arg */
#undef MKDIR_TAKES_ONE_ARG

/* Name of package */
#undef PACKAGE

/* Define to the address where bug reports for this package should be sent. */
#undef PACKAGE_BUGREPORT

/* Define to the full name of this package. */
#undef PACKAGE_NAME

/* Define to the full name and version of this package. */
#undef PACKAGE_STRING

/* Define to the one symbol short name of this package. */
#undef PACKAGE_TARNAME

/* Define to the version of this package. */
#undef PACKAGE_VERSION

/* Define to 1 if you have the ANSI C header files. */
#undef STDC_HEADERS

/* Define to 1 if you can safely include both <sys/time.h> and <time.h>. */
#undef TIME_WITH_SYS_TIME

/* Enable extensions on AIX 3, Interix. */
#ifndef _ALL_SOURCE
# undef _ALL_SOURCE
#endif

/* Enable GNU extensions on systems that have them. */
#ifndef _GNU_SOURCE
# undef _GNU_SOURCE
#endif

/* Enable threading extensions on Solaris. */
#ifndef _POSIX_PTHREAD_SEMANTICS
# undef _POSIX_PTHREAD_SEMANTICS
#endif

/* Enable extensions on HP NonStop. */
#ifndef _TANDEM_SOURCE
# undef _TANDEM_SOURCE
#endif

/* Enable general extensions on Solaris. */
#ifndef __EXTENSIONS__
# undef __EXTENSIONS__
#endif

/* Version number of package */
#undef VERSION

/* Define WORDS_BIGENDIAN to 1 if your processor stores words with the most
   significant byte first (like Motorola and SPARC, unlike Intel). */
#if defined AC_APPLE_UNIVERSAL_BUILD
# if defined __BIG_ENDIAN__
#  define WORDS_BIGENDIAN 1
# endif
#else
# ifndef WORDS_BIGENDIAN
#  undef WORDS_BIGENDIAN
# endif
#endif

/* Define to 1 if on MINIX. */
#undef _MINIX

/* Define to 2 if the system does not provide POSIX.1 features except with
   this defined. */
#undef _POSIX_1_SOURCE

/* Define to 1 if you need to in order for `stat' and other things to work. */
#undef _POSIX_SOURCE

/* Define to empty if `const' does not conform to ANSI C. */
#undef const

/* Define to `__inline__' or `__inline' if that's what the C compiler calls
   it, or to nothing if 'inline' is not supported under any name. */
#ifndef __cplusplus
#undef inline
#endif

/* Define to `unsigned int' if <sys/types.h> does not define. */
#undef size_t

/* Define missing prototypes, implemented in replacement lib */
#ifdef __cplusplus
extern "C" {
#endif

#ifndef HAVE_GETOPT
int getopt (int argc, char * const argv[], const char * optstring);
extern char * optarg;
extern int optind, opterr, optopt;
#endif

#ifndef HAVE_USLEEP
int usleep(unsigned long usec); /* SUSv2 */
#endif

#ifndef HAVE_NANOSLEEP
#ifndef HAVE_STRUCT_TIMESPEC
#if HAVE_SYS_TYPES_H
# include <sys/types.h> /* need time_t */
#endif
struct timespec {
    time_t tv_sec;
    long tv_nsec;
};
#endif
static inline int nanosleep(const struct timespec *req, struct timespec *rem)
{
    return usleep(req->tv_sec*1000000+req->tv_nsec/1000);
}
#endif

#if defined(HAVE_SSLEEP) && !defined(HAVE_SLEEP)
#ifdef HAVE_WINBASE_H
#include <windows.h>
#include <winbase.h>
#endif
/* TODO: what about SleepEx? */
static inline unsigned int sleep (unsigned int nb_sec)
{
    Sleep(nb_sec*1000);
    return 0;
}
#endif

#ifndef HAVE_GETTIMEOFDAY
#ifdef HAVE_SYS_TIME_H
#include <sys/time.h>
#endif
#ifndef HAVE_STRUCT_TIMEZONE
struct timezone {
    int tz_minuteswest;
    int tz_dsttime;
};
#endif
int gettimeofday(struct timeval *tv, struct timezone *tz);
#endif

#if !defined(HAVE_RANDOM) && defined(HAVE_RAND)
#include <stdlib.h>
static inline long int random (void)
{
    return rand();
}
#endif

#if !defined(HAVE_SRANDOM) && defined(HAVE_SRAND)
static inline void srandom (unsigned int seed)
{
    srand(seed);
}
#endif

#ifndef HAVE_SSIZE_T
typedef size_t ssize_t;
#endif

#ifdef __cplusplus
}
#endif
Friday, March 1, 2024 Choosing the Right Supplement: Navigating the Maze for Personalized Beauty In the ever-expanding world of beauty supplements, selecting the right one can be a nuanced process, requiring careful consideration of individualized needs and preferences. As the market offers an array of options targeting various skin concerns and overall well-being, factors such as skin type, age, and lifestyle play crucial roles in making an informed decision. Moreover, consulting with healthcare professionals becomes an essential step in ensuring that beauty supplements align with personal health goals while minimizing potential risks. Factors to Consider When Selecting Beauty Supplements: 1. Skin Type and Concerns: • Identifying specific skin concerns and skin types is paramount in selecting the most suitable beauty supplement. For instance, individuals with dry skin may benefit from supplements containing hyaluronic acid for hydration, while those with oily or acne-prone skin may seek supplements with ingredients like niacinamide for oil control and blemish reduction. 2. Ingredients and Formulations: • Examining the ingredient list and formulation is crucial. Look for supplements with scientifically backed ingredients known for their benefits in promoting skin health. For example, vitamins A, C, and E, along with collagen and antioxidants, are commonly found in supplements designed to support skin elasticity and combat oxidative stress. 3. Quality and Third-Party Testing: • Prioritize supplements from reputable brands that adhere to good manufacturing practices (GMP) and undergo third-party testing. This ensures that the product meets quality standards, is accurately labeled, and is free from contaminants. 4. Dosage and Recommended Intake: • Understanding the recommended dosage and intake is vital to avoid potential side effects from excessive consumption.
Pay attention to the serving size and any cautionary statements regarding daily limits. 5. Allergens and Sensitivities: • Check for allergen information and potential sensitivities to certain ingredients. Individuals with allergies or dietary restrictions should opt for supplements that align with their needs. 6. Form of Supplement: • Beauty supplements come in various forms, including pills, powders, liquids, and gummies. Consider personal preferences and convenience when choosing the form of the supplement, ensuring it aligns with the individual’s lifestyle. Individualized Needs Based on Skin Type, Age, and Lifestyle: 1. Skin Type: • Oily Skin: Individuals with oily skin may benefit from supplements containing ingredients like zinc, niacinamide, and probiotics to regulate oil production and promote a balanced complexion. • Dry Skin: Hydration is crucial for those with dry skin. Look for supplements with hyaluronic acid, omega-3 fatty acids, and vitamins E and C to support skin moisture. 2. Age: • Youthful Skin: For younger individuals focused on preventing premature aging, antioxidants like vitamins C and E, along with collagen-boosting ingredients, can be beneficial. • Anti-Aging Support: As individuals age, supplements with collagen, coenzyme Q10, and resveratrol may help address specific concerns related to collagen loss and oxidative stress. 3. Lifestyle: • Dietary Preferences: Individuals with dietary preferences, such as vegetarian or vegan, should choose supplements that align with their choices. Plant-based options, like algae-derived omega-3s, are suitable alternatives. • Stressful Lifestyle: For those experiencing high levels of stress, adaptogenic herbs like ashwagandha or Rhodiola rosea may be included to support stress reduction and promote overall well-being. 4. 
Health Conditions: • Underlying Health Conditions: Individuals with specific health conditions or those taking medications should consult healthcare professionals before introducing supplements. Certain ingredients may interact with medications or exacerbate underlying health issues. Consulting with Healthcare Professionals: Before incorporating beauty supplements into one’s routine, consulting with healthcare professionals is a prudent step. Healthcare providers, including dermatologists, nutritionists, and general practitioners, can provide personalized guidance based on an individual’s health history, current medications, and specific needs. 1. Dermatologists: • Dermatologists can offer insights into specific skin concerns and recommend supplements targeting dermatological issues, such as acne, eczema, or signs of aging. 2. Nutritionists: • Nutritionists can assess dietary habits and recommend supplements that complement nutritional needs. They can also guide individuals with specific dietary restrictions. 3. General Practitioners: • General practitioners can provide a comprehensive overview of an individual’s health, ensuring that any proposed supplements align with overall well-being and do not interfere with existing medications. By involving healthcare professionals in the decision-making process, individuals can receive personalized advice, minimizing potential risks and maximizing the benefits of beauty supplements. This collaborative approach ensures that supplement choices align with an individual’s health goals and contribute positively to their overall well-being. All products on SmallTownShop are handpicked by our editors. If you purchase something through our retail links, we may receive an affiliate commission.
The Future of Artificial Intelligence The concept of artificial intelligence (AI) has been around for decades, but it is now becoming more and more prevalent in our daily lives. From smart home devices to self-driving cars, AI technology is rapidly advancing and shaping the way we interact with the world. As we move further into the 21st century, it’s important to understand the current state of AI and what the future holds for this groundbreaking technology. Introduction to Artificial Intelligence Artificial intelligence refers to the ability of machines to perform tasks that normally require human intelligence, such as learning, problem-solving, and decision making. The development of AI began in the 1950s with the goal of creating intelligent machines that could think and act like humans. However, it wasn’t until recently that significant advancements were made in the field of AI, due to the increasing availability of big data, powerful computing systems, and advanced algorithms. Currently, AI is being used in a variety of industries, including healthcare, finance, transportation, and entertainment. Some common examples of AI in our daily lives include virtual assistants like Siri and Alexa, recommendation systems on streaming platforms like Netflix, and predictive analytics used by banking institutions. But as technology continues to evolve, the potential applications of AI are endless. Current Applications of AI As mentioned earlier, AI is already being used in various industries and has become an integral part of our daily lives. Let’s take a closer look at some of the current applications of AI and how they are impacting different sectors. Healthcare One of the most promising applications of AI is in the healthcare industry. With the help of AI, doctors can now analyze vast amounts of patient data to make more accurate diagnoses and develop personalized treatment plans.
AI-powered medical devices have also been developed to assist in surgeries and other procedures, reducing the risk of human error. Additionally, AI is being used to improve patient care and reduce healthcare costs. For instance, chatbots are being used by healthcare providers to automate routine tasks, such as appointment scheduling and answering patient queries. This not only saves time but also allows doctors to focus on more critical tasks. Finance AI is transforming the finance industry in many ways. Banks and financial institutions are using AI-powered algorithms to detect fraudulent activities and prevent financial crimes. These algorithms analyze large datasets to identify patterns and anomalies that humans may overlook. Moreover, AI is revolutionizing the way we conduct financial transactions. With the rise of digital assistants and chatbots, customers can now manage their finances and make payments through voice commands or text messages. This has made banking more convenient and accessible for people. Transportation The development of self-driving cars is a significant breakthrough in the transportation industry, and it’s all thanks to AI technology. These vehicles use sensors, cameras, and AI algorithms to navigate roads and make decisions, ultimately reducing the risk of accidents caused by human error. Moreover, AI is being used to optimize traffic flow and reduce congestion in cities. Smart transportation systems are being developed that can analyze real-time data from road sensors and adjust traffic signals accordingly to improve the overall flow of traffic. Entertainment AI has also made its mark in the entertainment industry. Streaming platforms like Netflix and Amazon Prime use AI algorithms to recommend personalized content to their users based on their viewing history and preferences. This not only enhances the user experience but also helps these platforms retain customers. 
Additionally, AI is being used in the gaming industry to create more realistic and immersive experiences for players. Advanced AI algorithms can now mimic human behavior, making gameplay more challenging and exciting. Advancements in AI Technology The advancements in AI technology have been remarkable over the past few years, and there seems to be no limit to how far it can go. Here are some of the recent developments in AI that are paving the way for the future of this technology. Deep Learning Deep learning is a subset of AI that uses neural networks to process and analyze vast amounts of data. These neural networks are modeled after the human brain, and they can learn and improve on their own without being explicitly programmed. This has led to significant advancements in image and speech recognition, natural language processing, and decision making. Robotics The integration of AI with robotics has opened up new possibilities for automation and efficiency in various industries. Robots powered by AI are now being used in manufacturing, healthcare, and agriculture to perform tasks that are traditionally done by humans. This not only increases productivity but also reduces the risk of workplace injuries and accidents. Quantum Computing Quantum computing is another emerging technology that has the potential to revolutionize the field of AI. Unlike traditional computers that operate on binary code, quantum computers use qubits (quantum bits) that can represent multiple values at once. This enables them to process complex calculations and algorithms at a much faster rate, making them ideal for AI applications. Ethical Considerations in AI Development With all the advancements and opportunities that AI presents, there are also ethical concerns that need to be addressed. As AI becomes more integrated into our lives, we must ensure that it is developed and used ethically, without causing harm to individuals or society as a whole.
One major concern is the potential for biased decision-making in AI systems. AI algorithms learn from the data they are provided, and if that data is biased or incomplete, it can lead to discriminatory outcomes. For example, facial recognition software may have difficulty recognizing people of certain races or genders if the training data used to develop it was not diverse enough. Another ethical consideration is the impact of AI on job displacement. As more tasks become automated, there is a fear that AI will replace human workers, leading to high unemployment rates. It is important for governments and organizations to consider these implications and implement policies to support workers who may be affected by AI advancements. Impact of AI on Various Industries The impact of AI on various industries is already significant, and it will continue to grow in the coming years. Let’s take a closer look at how AI is transforming some key sectors. Healthcare As mentioned earlier, AI is already being used in healthcare to improve patient care and assist doctors in making diagnoses. In the future, we can expect to see even more advanced AI systems that can predict and prevent diseases before they occur. This will not only save lives but also reduce healthcare costs for individuals and governments. Moreover, with the rise of telemedicine, AI-powered chatbots and virtual assistants will play a crucial role in providing remote healthcare services to patients. This will be particularly beneficial for people living in rural or underdeveloped areas who do not have easy access to medical facilities. Finance AI technology is rapidly transforming the finance industry, and its impact will only continue to grow. With the use of AI-powered chatbots and voice assistants, banking services will become more personalized and accessible for customers. 
Additionally, AI algorithms will play a critical role in detecting and preventing financial crimes, making the finance sector safer and more secure. Transportation The development of self-driving cars is just the tip of the iceberg when it comes to the impact of AI on the transportation industry. As this technology continues to evolve, we can expect to see more efficient and environmentally friendly modes of transportation. For example, AI-powered drones could be used for delivery services, reducing the need for traditional delivery trucks and decreasing carbon emissions. Education Education is another sector that AI is transforming in many ways. AI-powered tutors and chatbots are being used to provide personalized learning experiences for students, helping them grasp concepts and learn at their own pace. This not only improves academic performance but also reduces the workload for teachers, allowing them to focus on other aspects of teaching. Moreover, AI is being used to develop virtual reality simulations and educational games that make learning more engaging and interactive for students. In the future, we can expect to see even more innovative ways of using AI to enhance the education system. Challenges and Concerns for the Future of AI Despite all the potential benefits, there are also many challenges and concerns surrounding the future of AI. Here are some of the most pressing issues that need to be addressed. Data Privacy and Security With the increasing use of AI comes a greater risk of data breaches and privacy violations. AI systems require vast amounts of data to function effectively, and this data needs to be collected, stored, and managed securely. Organizations must take steps to ensure that sensitive information is protected from cyber threats and unauthorized access. Lack of Transparency As AI algorithms become more complex, it becomes increasingly difficult to understand how they arrive at decisions. 
This lack of transparency raises concerns about accountability and makes it challenging to identify and correct any biases or errors in the system. To ensure ethical development and use of AI, transparency and explainability are crucial. Job Displacement As mentioned earlier, the rise of AI has led to concerns about job displacement and unemployment. As more tasks become automated, there is a risk of workers losing their jobs to machines. It’s important for governments and organizations to invest in retraining programs and education to prepare individuals for the changing job market. Predictions for the Future of AI It’s impossible to predict exactly how AI will evolve in the coming years, but experts have made some educated guesses based on current trends. Here are some predictions for the future of AI: • Increased automation: As AI technology continues to improve, we can expect to see more tasks becoming automated, from simple administrative tasks to complex decision-making. • Advancements in robotics: The integration of AI with robotics will lead to more autonomous and efficient machines that can perform a wide range of tasks. • More personalized experiences: With the use of AI-powered algorithms, we can expect to see more personalized experiences in various industries, from healthcare to entertainment. • Improved natural language processing: As seen with virtual assistants like Siri and Alexa, natural language processing is already quite advanced. In the future, we can expect to see even more sophisticated systems that can understand and respond to human language with ease. • Enhanced cybersecurity: As AI technology advances, so will the capabilities of cybercriminals. To combat this, we can expect to see more advanced AI-powered cybersecurity systems that can detect and prevent cyber attacks in real-time. Conclusion and Potential Implications The future of AI is both exciting and daunting. 
On one hand, it has the potential to make our lives easier and more efficient, but on the other hand, there are concerns about its impact on society and the job market. It’s important for governments and organizations to work together to ensure the ethical development and use of AI to reap its full benefits without causing harm. Moreover, with the rise of AI, there may be a shift in the skills and knowledge required for certain jobs. It’s essential for individuals to adapt and continuously learn new skills to stay relevant in the rapidly evolving job market. Overall, the future of AI is bright, and with responsible development and use, it has the potential to bring about significant positive changes in various industries and improve our daily lives.
Predicting the need for early tracheostomy: A multifactorial analysis of 992 intubated trauma patients Claudia E. Goettler, Jonathan R. Fugo, Michael R. Bard, Mark A. Newell, Scott G. Sagraves, Eric A. Toschlog, Paul J. Schenarts, Michael F. Rotondo Research output: Contribution to journal › Article › peer-review 57 Scopus citations Abstract Background: Tracheostomy has few but severe risks, while prolonged endotracheal intubation causes morbidity. The need for tracheostomy was assessed based on early clinical parameters. Methods: Adult trauma patients (January 1994-August 2004), intubated for resuscitation and ventilated >24 hours, were retrospectively evaluated for demographics, physiology, and brain and pulmonary injury. Tracheostomy patients were compared with those who did not undergo tracheostomy. Chi-square, Mann-Whitney, and multivariate logistic regression were used, with statistical significance at p < 0.05 (significant values are marked with *). Results: Of 992 patients, 430 (43%) underwent tracheostomy at 9.22 ± 5.7 days. Risk factors were age (45.6* ± 18.8 vs. 36.7 ± 15.9; OR 2.1 per 18-year increment), ISS (30.3* ± 12.5 vs. 22.0 ± 10.3; OR 2.1 per 12-unit increment), damage control (DC) [68%* (n = 51) vs. 32% (n = 51); OR 3.8], craniotomy [70%* (n = 21) vs. 30% (n = 9); OR 2.6], and intracranial pressure monitor (ICP) [65.4%* (n = 87) vs. 34.6% (n = 46); OR 2.1]. A 100% tracheostomy rate (n = 30, 3.0%) occurred with ISS (injury severity score) = 75; ISS ≥50 and age ≥55; admit/24-hour GCS (Glasgow Coma Scale) = 3 and age ≥70; AIS abdomen, chest, or extremities ≥5 and age ≥60; bilateral pulmonary contusions (BPC) and ≥8 rib fractures; craniotomy and age ≥50; craniotomy with ICP and age ≥40; or craniotomy and GCS ≤4 at 24 hours. A tracheostomy rate of ≥90% (n = 105, 10.6%) was found with ISS ≥54; ISS ≥40 and age ≥40; admit/24-hour GCS = 3 and age ≥55; paralysis and age ≥40; or BPC and age ≥55.
A tracheostomy rate ≥80% (n = 248, 25.0%) occurred with ISS ≥38; age ≥80; admit/24-hour GCS = 3 and age ≥45; DC and age ≥50; BPC and age ≥50; aspiration and age ≥55; craniotomy with ICP; or craniotomy with GCS ≤9 at 24 hours. Conclusion: Discrete risk factors predict the need for tracheostomy in trauma patients. We recommend that patients with ≥90% risk undergo early tracheostomy, and that it be considered in the ≥80% risk group, to potentially decrease morbidity, increase patient comfort, and optimize resource utilization. Original language: English (US) Pages (from-to): 991-996 Number of pages: 6 Journal: Journal of Trauma - Injury, Infection and Critical Care Volume: 60 Issue number: 5 DOIs State: Published - May 2006 Keywords • Prediction • Tracheostomy • Trauma ASJC Scopus subject areas • Surgery • Critical Care and Intensive Care Medicine
Meeting Abstract P3.135  Friday, Jan. 6  Three-dimensional escape trajectories in larval fish RYAN, D S*; BERG, O; FEITL, K E; MCHENRY, M J; MULLER, U K; Wageningen University; California State University Fresno; University of California Irvine; University of California Irvine; California State University Fresno [email protected] Fish execute C-starts when they escape from a threat. The neural control, body kinematics, and hydrodynamics of escape responses have been studied extensively in adult and larval fish. However, due to experimental constraints, biomechanical studies have focused on mapping the body movements and the center-of-mass trajectories from a dorsal view, neglecting the vertical dimension. These 2-dimensional studies suggest that prey randomize their escape trajectories but bias the response away from the stimulus. This study explored the escape response of larval fish to a horizontal startle stimulus by recording the trajectories in three dimensions. We used a piston to generate a brief suction event simulating a predator attack. Consistent with published findings, our pilot data show that escape responses were directed either away from or toward the stimulus in the horizontal plane; there also seemed to be no preference for left or right. However, zebrafish larvae consistently responded to a horizontal stimulus with a downward escape trajectory. We developed several hypotheses: (1) Demersal lifestyle: zebrafish larvae are demersal and might therefore always escape towards the substrate; (2) Insufficient pitch control: fish larvae are more dorso-ventrally asymmetric and have smaller pitch control surfaces than adults and therefore experience a stronger downward pitch; (3) Directional response: fish larvae process the direction of the stimulus and select a trajectory biased away from the stimulus.
To test whether larvae use the stimulus direction to bias their escape response or default to a downward trajectory due to behavioral or mechanical constraints, we vary the direction of the stimulus.
Khaos Archive for November 23rd, 2002 Do Patterns and Frameworks Reduce Discovery Costs? Saturday, November 23rd, 2002 Asking whether patterns and frameworks reduce discovery costs is like asking whether someone who knows something about billing is going to have an easier time making a billing system than someone who doesn’t. Of course! The problem is whether we have the right patterns and frameworks to reduce discovery costs. If not, how can we get them? – Ralph Johnson Too many projects look for the “home run” in reusable platforms and frameworks. Frameworks work well only if they can predict well: to predict what will change, and what will not. This is a difficult enough problem for individual objects or modules, let alone for extensible application skeletons. Small frameworks like MVC work, but few large frameworks enjoy success. – Jim Coplien Proceedings of the 1997 ACM SIGPLAN conference on Object-oriented programming systems, languages and applications, Beyond the hype (panel): do patterns and frameworks reduce discovery costs?