Question: How Do I Create a PR in Bitbucket?

How do I create a PR in Bitbucket? To create a pull request, you need to have made your code changes on a separate branch or forked repository. From the open repository, click + in the global sidebar and select Create a pull request under Get to work. Fill out the rest of the pull request form. Click Create pull request.

Why is it called a pull request? Pull requests were popularized by GitHub. They provide a simple, web-based way to submit your work (often called "patches") to a project. It's called a pull request because you're asking the project to pull changes from your fork. … You might also find GitHub's article about pull requests helpful.

Is a pull request a Git feature? While pull requests are not a core feature of Git, they are commonplace when it comes to collaborating with Git hosting services. They are especially necessary when working with open-source projects. … Most open-source projects have a maintainer who can control which changes are approved and merged into the project.

What is a PR in Bitbucket? Pull requests are a feature that makes it easier for developers to collaborate using Bitbucket. … Once their feature branch is ready, the developer files a pull request via their Bitbucket account. This lets everybody involved know that they need to review the code and merge it into the master branch.

How do I make a PR? In summary, if you want to contribute to a project, the simplest way is to:
1. Find a project you want to contribute to.
2. Fork it.
3. Clone it to your local system.
4. Make a new branch.
5. Make your changes.
6. Push the branch back to your repo.
7. Click the Compare & pull request button.
8. Click Create pull request to open a new pull request.
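The branch-and-commit part of that workflow can be sketched with git on the command line. This is a local simulation (the repository, file names, and the `my-feature` branch are placeholders) so it runs without a hosted remote; in a real workflow you would clone your fork and push the branch before opening the PR in the web UI:

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Stand-in for the forked repository you would normally clone.
git init -q project
cd project
git config user.email "you@example.com"
git config user.name "You"
echo "hello" > README.md
git add README.md
git commit -qm "initial commit"

# Make a new branch, change something, and commit.
git checkout -qb my-feature
echo "change" >> README.md
git commit -qam "describe the change"

# With a real remote you would now push and open the PR in the UI:
#   git push origin my-feature
git branch --show-current
```

The printed branch name (`my-feature`) is the branch you would select as the source when filling out the Create pull request form.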
__label__pos
0.999623
The Domain of the Function F from x to 4x^3

Introduction
Have you ever come across the mathematical function F(x) = 4x^3? This function belongs to the family of degree-three, or cubic, polynomials. A cubic function has the general form F(x) = ax^3 + bx^2 + cx + d, where a, b, c, and d are constants. In this article, however, we will discuss the domain of the function F(x) = 4x^3. The domain is the set of all x values that the function accepts.

Cubic Functions
Before discussing the domain of F(x) = 4x^3, we first need to understand what a cubic function is. A cubic function is a mathematical function whose variable appears raised to the third power. The general form of a cubic function is F(x) = ax^3 + bx^2 + cx + d, where a, b, c, and d are constants. A cubic function can have one, two, or three roots or stationary points. A stationary point is a point where the first derivative of the function equals zero.

The Domain of F(x) = 4x^3
Back to the main topic: the domain of F(x) = 4x^3. To find the domain of a function, we need to consider two things: the numerator and the denominator. In F(x) = 4x^3, the numerator is 4x^3, which can take any real value. The denominator is 1, so it never vanishes. Therefore, the domain of this function is all real numbers. In mathematical notation, the domain of F(x) = 4x^3 is (-∞, ∞).

Worked Example
Here is an example problem to clarify the concept of the domain of F(x) = 4x^3. If F(x) = 4x^3 - 3x^2 + 2x - 1, determine the domain of F(x).
Answer: The numerator of F(x) is 4x^3 - 3x^2 + 2x - 1 and the denominator is 1. Therefore, the domain of F(x) is all real numbers. In mathematical notation, the domain of F(x) = 4x^3 - 3x^2 + 2x - 1 is (-∞, ∞).

Conclusion
In this article we discussed the domain of the mathematical function F(x) = 4x^3. The domain of a function depends on its numerator and denominator. For F(x) = 4x^3, the domain is all real numbers, or (-∞, ∞). Thank you for reading, and see you in the next article.
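A quick numerical check of this conclusion in Python: since F(x) = 4x^3 is a polynomial, no real input is ever rejected.

```python
# F(x) = 4x^3 is a polynomial, so it is defined for every real input.
def F(x):
    return 4 * x**3

# Evaluate across widely spread real values -- none raises an error.
for x in [-1e6, -1.5, 0.0, 2.0, 1e6]:
    F(x)

print(F(2))  # 32
```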
__label__pos
0.999521
University College London — deposit.zip (2.72 MB)

Cholesteryl Ester Transfer Protein (CETP) as a Drug Target for Cardiovascular Disease

Version 3: 2021-07-30, 12:19 · Version 2: 2021-05-04, 15:15 · Version 1: 2021-02-23, 14:55
Dataset posted on 2021-07-30, 11:18, authored by Floriaan Schmidt

Contains data and computer scripts to generate the figures underlying the manuscript: Cholesteryl Ester Transfer Protein (CETP) as a Drug Target for Cardiovascular Disease.

The README.md:

# Scripts and deposited data

This directory contains R and python scripts along with the data necessary to generate the results described in the manuscript: 'Cholesteryl ester transfer protein (CETP) as a drug target for cardiovascular disease'.

## Directory content

```
.
├── README.md
├── scripts
├── figures
├── MR_data
├── TRIAL_data
└── PICKLED_data
```

The `scripts` directory contains the computer code necessary to generate the figures, which get saved to the `figures` directory. The scripts source data from the `MR_data` and `TRIAL_data` directories. The `PICKLED_data` directory contains the python pickled files from which we extracted the .csv files in `MR_data` -- these files can be ignored.

## Computer scripts

The directory root contains various R (.R) and python (.py) scripts that will process the data files. It also contains a `cetp_conda_env.yml` file to conda install the necessary R and python modules:

```
cd scripts
conda env create -f cetp_conda_env.yml
```

This creates the `cetp` conda environment. To generate the figures, please run the following `bash` script:

```
# run from the scripts directory
./run_scripts.sh
```

### Input and output

* scripts/MR_heatmap.py : Takes the MR_data .tsv files and returns Figures 2 and 5.
* scripts/MR_metabolites.R : Takes the MR_data .tsv/.csv files and returns Figure 4.
* scripts/MR_forestplots.R : Takes the MR_data .tsv files and returns Figure 3.
* scripts/RCT_forestplots.R : Takes the TRIAL_data .xlsx files and returns Figure 1.

## Contact

Please contact Floriaan Schmidt for any queries: [email protected]

Funding: BHF grant PG/18/5033837 and the UCL BHF Research Accelerator AA/18/6/34223
__label__pos
0.811361
---
title: Bitwise and
date: 2015-10-14
tags: normal
---

```javascript
var ENUM = { A: 0, B: 1, C: 2, D: 4, E: 8, F: 16, G: 32 };

var c = ENUM.B + ENUM.D; // 5

// `===` binds tighter than `&` in JavaScript, so the mask must be parenthesized:
(c & ENUM.C) === 0;      // true: the C flag is not set
(c & ENUM.B) === ENUM.B; // true: the B flag is set
(c & ENUM.D) === ENUM.D; // true: the D flag is set

function b (n) {
  return n.toString(2);
}

b(1)  === '1';
b(2)  === '10';
b(4)  === '100';
b(8)  === '1000';
b(16) === '10000';
// ...
```
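A short sketch of why flag sets like this are usually built with bitwise OR rather than the `+` used above: OR is idempotent, so repeating a flag cannot corrupt the set, while addition silently produces a wrong value.

```javascript
// OR is idempotent: repeating a flag leaves the set unchanged.
var ENUM = { B: 1, D: 4 };

var viaOr  = ENUM.B | ENUM.D | ENUM.D; // 5 -- D counted once
var viaAdd = ENUM.B + ENUM.D + ENUM.D; // 9 -- no longer a valid flag set

console.log(viaOr, viaAdd);
```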
__label__pos
0.552259
LATEST VERSION: 9.5.2 - RELEASE NOTES
Pivotal GemFire® v9.5

SELECT Statement

The SELECT statement allows you to filter data from the collection of object(s) returned by a WHERE search operation. The projection list is either specified as * or as a comma-delimited list of expressions. For *, the interim results of the WHERE clause are returned from the query.

Examples:

Query all objects from the region using *. Returns the Collection of portfolios (the exampleRegion contains Portfolio as values):

SELECT * FROM /exampleRegion

Query secIds from positions. Returns the Collection of secIds from the positions of active portfolios:

SELECT secId FROM /exampleRegion, positions.values TYPE Position WHERE status = 'active'

Returns a Collection of struct<type: String, positions: map> for the active portfolios. The second field of the struct is a Map (java.util.Map) object, which contains the positions map as the value:

SELECT "type", positions FROM /exampleRegion WHERE status = 'active'

Returns a Collection of struct<portfolios: Portfolio, values: Position> for the active portfolios:

SELECT * FROM /exampleRegion, positions.values TYPE Position WHERE status = 'active'

Returns a Collection of struct<pflo: Portfolio, posn: Position> for the active portfolios:

SELECT * FROM /exampleRegion pflo, pflo.positions.values posn TYPE Position WHERE pflo.status = 'active'

SELECT Statement Results

The result of a SELECT statement is either UNDEFINED or a Collection that implements the SelectResults interface. The SelectResults returned from the SELECT statement is either:

1. A collection of objects, returned for these two cases:
   - When only one expression is specified by the projection list and that expression is not explicitly specified using the fieldname:expression syntax
   - When the SELECT list is * and a single collection is specified in the FROM clause
2. A collection of Structs that contains the objects

When a struct is returned, the name of each field in the struct is determined following this order of preference:

1. If a field is specified explicitly using the fieldname:expression syntax, the fieldname is used.
2. If the SELECT projection list is * and an explicit iterator expression is used in the FROM clause, the iterator variable name is used as the field name.
3. If the field is associated with a region or attribute path, the last attribute name in the path is used.
4. If names cannot be decided based on these rules, arbitrary unique names are generated by the query processor.

DISTINCT

Use the DISTINCT keyword if you want to limit the result set to unique rows. Note that in the current version of GemFire you are no longer required to use the DISTINCT keyword in your SELECT statement.

SELECT DISTINCT * FROM /exampleRegion

Note: If you are using DISTINCT queries, you must implement the equals and hashCode methods for the objects that you query.

LIMIT

You can use the LIMIT keyword at the end of the query string to limit the number of values returned. For example, this query returns at most 10 values:

SELECT * FROM /exampleRegion LIMIT 10

ORDER BY

You can order your query results in ascending or descending order by using the ORDER BY clause. You must use DISTINCT when you write ORDER BY queries.

SELECT DISTINCT * FROM /exampleRegion WHERE ID < 101 ORDER BY ID

The following query sorts the results in ascending order:

SELECT DISTINCT * FROM /exampleRegion WHERE ID < 101 ORDER BY ID asc

The following query sorts the results in descending order:

SELECT DISTINCT * FROM /exampleRegion WHERE ID < 101 ORDER BY ID desc

Note: If you are using ORDER BY queries, you must implement the equals and hashCode methods for the objects that you query.

Preset Query Functions

GemFire provides several built-in functions for evaluating or filtering data returned from a query. They include the following:

ELEMENT(expr)
Extracts a single element from a collection or array. This function throws a FunctionDomainException if the argument is not a collection or array with exactly one element.
Example: ELEMENT(SELECT DISTINCT * FROM /exampleRegion WHERE id = 'XYZ-1').status = 'active'

IS_DEFINED(expr)
Returns TRUE if the expression does not evaluate to UNDEFINED. Inequality queries include undefined values in their query results. With the IS_DEFINED function, you can limit results to only those elements with defined values.
Example: IS_DEFINED(SELECT DISTINCT * FROM /exampleRegion p WHERE p.status = 'active')

IS_UNDEFINED(expr)
Returns TRUE if the expression evaluates to UNDEFINED. With the exception of inequality queries, most queries do not include undefined values in their query results. The IS_UNDEFINED function allows undefined values to be included, so you can identify elements with undefined values.
Example: SELECT DISTINCT * FROM /exampleRegion p WHERE IS_UNDEFINED(p.status)

NVL(expr1, expr2)
Returns expr2 if expr1 is null. The expressions can be query parameters (bind arguments), path expressions, or literals.

TO_DATE(date_str, format_str)
Returns a Java Date class object. The arguments must be Strings, with date_str representing the date and format_str representing the format used by date_str. The format_str you provide is parsed using java.text.SimpleDateFormat.

COUNT

The COUNT keyword returns the number of results that match the query selection conditions specified in the WHERE clause. Using COUNT allows you to determine the size of a result set. The COUNT statement always returns an integer as its result.
The following queries are example COUNT queries that return region entries:

SELECT COUNT(*) FROM /exampleRegion
SELECT COUNT(*) FROM /exampleRegion WHERE ID > 0
SELECT COUNT(*) FROM /exampleRegion WHERE ID > 0 LIMIT 50
SELECT COUNT(*) FROM /exampleRegion WHERE ID > 0 AND status LIKE 'act%'
SELECT COUNT(*) FROM /exampleRegion WHERE ID IN SET(1,2,3,4,5)

The following COUNT query returns the total number of StructTypes that match the query's selection criteria:

SELECT COUNT(*) FROM /exampleRegion p, p.positions.values pos WHERE p.ID > 0 AND pos.secId = 'IBM'

The following COUNT query uses the DISTINCT keyword and eliminates duplicates from the number of results:

SELECT DISTINCT COUNT(*) FROM /exampleRegion p, p.positions.values pos WHERE p.ID > 0 OR p.status = 'active' OR pos.secId = 'IBM'
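The equals/hashCode requirement noted above for DISTINCT and ORDER BY can be seen with plain Java collections, which rely on the same hashing contract GemFire uses to collapse duplicates. The Portfolio class here is a hypothetical stand-in for a queried value class, not GemFire's own type:

```java
import java.util.HashSet;
import java.util.Set;

public class DistinctDemo {
    // Hypothetical value class. Without the equals()/hashCode() overrides,
    // two Portfolios with the same id would NOT be treated as duplicates.
    static final class Portfolio {
        final String id;
        Portfolio(String id) { this.id = id; }
        @Override public boolean equals(Object o) {
            return o instanceof Portfolio && ((Portfolio) o).id.equals(this.id);
        }
        @Override public int hashCode() { return id.hashCode(); }
    }

    public static void main(String[] args) {
        Set<Portfolio> distinct = new HashSet<>();
        distinct.add(new Portfolio("XYZ-1"));
        distinct.add(new Portfolio("XYZ-1")); // duplicate, collapsed by hashing
        System.out.println(distinct.size()); // 1
    }
}
```

Delete the two overrides and the set reports 2, which is exactly how a DISTINCT query would silently return duplicates.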
__label__pos
0.997698
"Standard Deviation" Essays and Research Papers: 11 - 20 of 500

Standard Deviation and Minimum Order
...mean + 0.67*SD (standard deviation). According to the z table, z equals 0.67 when the probability is 0.75. Therefore, we can calculate the quantity for each style, including the risk of stock-out, using the formula Q* = mean + z*SD. This gives the maximum order units for each style needed to avoid a stock-out. Figure 1 (flattened table header): Style | Price | Average forecast | Standard deviation | 2*standard deviation | P=1-8%/(24%+8… | Z | Q*=Average+z*SD ...
Premium: People's Republic of China, Risk, Economy of the People's Republic of China. 1732 Words | 7 Pages

Probability: Standard Deviation
...9551/SQRT(300),1) = 0.0783 (c) A circuit contains three resistors wired in series. Each is rated at 6 ohms. Suppose, however, that the true resistance of each one is a normally distributed random variable with a mean of 6 ohms and a standard deviation of 0.3 ohm. What is the probability that the combined resistance will exceed 19 ohms? How "precise" would the manufacturing process have to be to make the probability less than 0.005 that the combined resistance of the circuit would exceed 19...
558 Words | 3 Pages

Random Variable and Standard Deviation
...Using the mean from part b, find the standard deviation of the probability distribution. 8. A computer password consists of two letters followed by a five-digit number, none of which can be repeated. After 3 tries the computer locks down and notifies security. a) What is the probability of guessing the correct password on the first try? b) What is the probability of guessing the correct password within three tries? 9. Find the mean and standard deviation of the binomial distribution...
560 Words | 3 Pages

Standard Deviation and Probability
...order is long and uncertain. This time gap is called "lead time." From past experience, the materials manager notes that the company's demand for glue during the uncertain lead time is normally distributed with a mean of 187.6 gallons and a standard deviation of 12.4 gallons. The company follows a policy of placing an order when the glue stock falls to a predetermined value called the "reorder point." Note that if the reorder point is x gallons and the demand during lead time exceeds x gallons...
514 Words | 3 Pages

Standard Deviation and Gulf View Condominiums
Summary statistics for the three series (List Price; Sales Price; Days to Sell):
Mean: 474007.5 | 454222.5 | 106
Standard Error: 31194.293 | 30439.73 | 8.256
Median: 437000 | 417500 | 96
Mode: 975000 | 305000 | 85
Standard Deviation: 197290.03 | 192517.75 | 52.216
Sample Variance: 3.892E+10 | 37063085378 | 2726.513
Kurtosis...
913 Words | 4 Pages

Standard Deviation and Double Degree
...in analysing the data is determining whether outliers exist within the data. The presence of outliers must be evaluated because their existence could distort the data and make it inaccurate. To determine whether outliers exist, the average and standard deviation must be calculated in order to compute the Z score. In this instance no outliers were found in the data set, as all of the data fell within the +3/-3 range; the largest positive outlier...
1218 Words | 5 Pages

Quiz: Standard Deviation and Confidence Interval Estimate
...corresponds to a 94% level of confidence. A. 1.88 B. 1.66 C. 1.96 D. 2.33
2. In a sample of 10 randomly selected women, it was found that their mean height was 63.4 inches. From previous studies, it is assumed that the standard deviation, σ, is 2.4. Construct the 95% confidence interval for the population mean. A. (61.9, 64.9) B. (58.1, 67.3) C. (59.7, 66.5) D. (60.8, 65.4)
3. Suppose a 95% confidence interval for µ turns out to be (120, 310). To make...
973 Words | 4 Pages

Standard Deviation Abstract
Standard Deviations Are Not Perverse. Purpose: The purpose of this article is to illustrate how using statistical data, such as standard deviation, can help a cattleman choose the best lot of calves at auction. The statistical data used in these decision-making processes can also help the cattleman with future analysis of the lots purchased and existing stock. Research Question: How can understanding the standard deviation...
1465 Words | 5 Pages

Biology Homework: Standard Deviation
...14, 14, 15, 15, 16. The mean is 14.0 mm. What is the best estimate of the standard deviation? — 1 mm
5. 1000 bananas were collected from a single plantation and weighed. Their masses formed a normal distribution. How many bananas would be expected to be within 2 standard deviations of the mean? — 950
6. In a normal distribution, what percentage of values fall within ±1 standard deviation of the mean and ±2 standard deviations of the mean? — ±1: 68%; ±2: 95%
7. The lengths of the leaves of dandelion plants...
669 Words | 4 Pages

Standard Deviation
Standard deviation can be difficult to interpret as a single number on its own. Basically, a small standard deviation means that the values in a statistical data set are close to the mean of the data set, on average, and a large standard deviation means that the values in the data set are farther away from the mean, on average. The standard deviation measures how concentrated the data are around the mean; the more concentrated, the smaller the standard deviation. A small standard deviation can...
507 Words | 2 Pages
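Two of the calculations that recur in these excerpts can be checked with Python's standard library (`statistics.NormalDist`, available since Python 3.8):

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal: mean 0, sd 1

# 1) The order-quantity rule Q* = mean + z*SD uses the normal quantile of the
#    target in-stock probability; P = 0.75 gives z ≈ 0.67, as quoted above.
z = nd.inv_cdf(0.75)
print(round(z, 2))  # 0.67

# 2) The empirical rule behind the homework answers: the share of a normal
#    population within ±1 and ±2 standard deviations of the mean.
within1 = nd.cdf(1) - nd.cdf(-1)
within2 = nd.cdf(2) - nd.cdf(-2)
print(round(within1 * 100), round(within2 * 100))  # 68 95
```

The second result also explains the banana question: roughly 95% of 1000, i.e. about 950, fall within 2 standard deviations.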
__label__pos
0.902556
Safety Control Technology of Deepwater Perforated Gas Well Testing

Abstract
Deepwater oil and gas well testing involves high difficulty, high investment, and high risk, so major safety problems can occur easily. A key to preventing accidents is to conduct safety assessment and control of deepwater testing and to improve the testing technology. The deepwater area of the South China Sea has special environmental features: a long distance from shore, frequent typhoons in summer and constant monsoons in winter, and the presence of sandy slopes, sandy ridges, and internal waves, coupled with complex oil and gas reservoir properties, all of which add to the challenges of deepwater well testing. Drawing on deepwater well testing practice in the South China Sea, this paper analyzes the main potential risks in deepwater well testing and concludes that there are risks of failure of the testing string, tools, and surface process, as well as gas hydrate blockage, reservoir sanding, and typhoon impacts. Specific precautions are proposed in response to these risks.

Share and Cite: Liang, H. and Wu, M. (2019) Safety Control Technology of Deepwater Perforated Gas Well Testing. Engineering, 11, 131-136. doi: 10.4236/eng.2019.113011.

1. Introduction
The deepwater zone of the South China Sea, called "the second Persian Gulf", is vast and rich in petroleum and natural gas hydrate reserves [1] [2]. A number of significant natural gas discoveries have been made there recently [3] [4]. As an irreplaceable tool in deepwater oil and gas exploration and production, well testing not only provides valuable information for structure and trap evaluation, but also paves the way for effective oil and gas production. However, deepwater well testing technology in China is currently in its infancy [5] [6] [7] [8].
In order to meet the strategic needs of deepwater oil and gas exploitation in the South China Sea, safety assessment and control techniques for deepwater well testing are investigated here.

2. Challenges of Deepwater Well Testing

1) Deepwater well testing must be carried out from a floating drilling platform. Affected by wind, waves, and currents, the platform undergoes constant complex motions such as heaving, sinking, and drifting. In addition, the deepwater test string is constrained by the riser. As a result, the forces on the test string, especially above the mud line, are extraordinarily complicated, which makes its design and safety control significantly more difficult. This challenge grows as water depth increases.

2) The combination of low temperature at the mud line and the rapid pressure drop after well shut-in is the major trigger for natural gas hydrate formation, which can cause not only test failure but also dramatic well-control risks, or even catastrophic accidents.

3) Subsea facilities must withstand the harsh environment imposed by the great water depth to keep the entire well testing process on a reliable footing. Moreover, other factors, including the limited space on the platform, the narrow formation pressure window, and high production rates and formation pressures, add to the challenges of well control and surface safety control.

4) As the test is conducted on a floating platform, unpredictable incidents such as a breakdown of the positioning system, undercurrents, or bad weather can make the platform drift away from the well head. In that situation, the test string above the mud line must be disconnected from the rest to avoid a catastrophe. Consequently, rapid disconnection of the string in an emergency, and re-connection afterwards, are further challenges in well testing.

3. Potential Risks and Their Preventive Measures in Deepwater Well Testing

3.1. Failure and Its Prevention of the Testing String and Facilities

The test string, through which the downhole fluid flows to the surface, comprises three major components: bottom-hole test facilities, testing tubing, and subsea facilities. For safety, suitable bottom-hole test facilities should be selected. The test tubing should be optimized through specific mechanical analysis targeting safety and high quality: it must not merely satisfy the demands of the test process in the toughest environment, but also be convenient to use, suitable in material, and economical. Following these principles, the optimization workflow for tubing design in deepwater testing is shown in Figure 1.

3.2. Failure and Its Prevention of the Surface Process

A typical surface process for deepwater well testing includes: the flow head, safety valves, emergency shutdown valves, a desander, a choke manifold, a steam boiler and heater, a three-phase separator, a gauge tank, a conveying pump, a temporary supply tank, a burner boom, and connecting manifold.

Figure 1. Optimization workflow of the string design in deepwater well testing.

This process is short in flow distance (only dozens of meters from the burner wall to the well head), high in pressure, short in flow time (generally about one day), and complex in its phase-change processes. Ice blockage can easily occur during the test, bringing great danger to personnel and the test itself, so a series of emergency shutdown facilities should be fitted in the manifold to ensure that any point can be shut in manually in an emergency. A desander is needed to remove sand from potentially sand-producing formations. A large-discharge chemical pump is installed at the well head to inject methanol to prevent gas hydrate. All pressure vessels used in deepwater well testing should hold a valid third-party qualification such as DNV, ABS, or Lloyd's.
Once the facilities have been selected, the workflow should be numerically simulated to check whether the temperature and pressure remain in the proper range and whether the capacity of the facilities is adequate. Further optimization of the facilities and pipelines can then be carried out.

3.3. Prediction and Control of Natural Gas Hydrate

Natural gas hydrate is a cage-like crystal formed when gaseous natural gas (such as methane and ethane) reacts with water in a low-temperature (0˚C - 10˚C), high-pressure (over 10 MPa) environment. Resembling ice crumbs or compacted snow in appearance, it is often called "flammable ice". Its density is 880 - 900 kg/m3. Natural gas containing free water is prone to transform into gas hydrate in low-temperature, high-pressure surroundings. There are currently a number of approaches for determining the pressure and temperature at which natural gas hydrate forms. Roughly, they can be classified as: the graph approach, the empirical-equation approach, the balance-constant approach, and the statistical-thermodynamics approach, which is the most accurate yet most complicated method. On the basis of system theory, the statistical-thermodynamics approach connects the macroscopic phase behavior of gas hydrate with inter-molecular interactions. By employing functions to describe the formation conditions of gas hydrate, this method benefits from a solid theoretical foundation and has a wide applicable range. With the help of a computer, it can be applied to continuously calculate the temperature and pressure at which gas hydrate forms over a relatively wide range. According to the classical adsorption theory proposed by van der Waals and Platteeuw, the phase-equilibrium conditions of gas hydrate can be used to determine whether solid hydrate appears at a given pressure, temperature, and other conditions. In the well testing process, thermal or chemical methods are commonly used to prevent hydrate.
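As a rough first-pass sizing of the chemical option, the classical Hammerschmidt correlation is often used to estimate how far an inhibitor such as methanol depresses the hydrate-formation temperature. This empirical rule is an illustration added here, not the statistical-thermodynamics method the paper recommends for accurate prediction:

```python
# Hammerschmidt's empirical correlation: dT = K * W / (M * (100 - W)),
# where W is the inhibitor weight percent in the water phase, M its molar
# mass (g/mol), and K ≈ 1297 is the classical constant.
def hydrate_depression_C(weight_pct, molar_mass_g_mol, K=1297.0):
    return K * weight_pct / (molar_mass_g_mol * (100.0 - weight_pct))

# Methanol (M ≈ 32.04 g/mol) at 20 wt% depresses the hydrate point by ~10 °C.
print(round(hydrate_depression_C(20, 32.04), 1))
```

Such a hand estimate only brackets the dosage; the actual injection rate should come from the rigorous phase-equilibrium calculation described above.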
The thermal method heats the natural gas upstream of the choke. If the pressure drop across the choke is constant, raising the gas temperature before the choke raises the temperature after it; once the temperature downstream of the choke exceeds the critical temperature at which hydrate appears, hydrate prevention is achieved. Certain chemicals lower the equilibrium temperature of gas hydrate: at a given pressure, the equilibrium temperature decreases as the chemical concentration rises. Many investigations have been conducted worldwide on chemicals that inhibit gas hydrate. Methanol and ethanediol are the most commonly used, and methanol is the recognized most effective gas hydrate inhibitor in deepwater well testing. It has the following advantages: 1) low viscosity, so it is easy to inject and meter; 2) high solubility and volatility, so it readily contacts other fluids in the wellbore; 3) it burns easily with the produced gas. But as methanol is toxic and highly flammable, particular attention must be paid to safety: the procedure for methanol injection should be clearly written into the Operation Procedure and into the procedures of the testing company and the contractor. Flow rate is another factor, as hydrate forms more readily at lower flow rates, and the temperature change is drastic at the mud line because the seabed temperature is quite low. As shown in Figure 2, no hydrate appears when the flow rate is 25 × 10^4 m3/d.

3.4. Sanding Risks and Control

Reservoirs in deepwater zones are generally shallowly buried. Furthermore, compaction is reduced because a large interval of overburden is replaced by sea water, which is less dense. As a result, sand production is common.
For sand control, on the one hand, suitable well-completion approaches should be adopted according to the specific properties of the pay zone; on the other, sand control techniques should be applied whenever necessary.

Figure 2. Hydrate formation region under different flow capacities.

Mechanical sand control techniques are commonly used as the first barrier against sand production in deepwater evaluation wells. Some sand control measures from the Liwan Block, South China Sea, are listed here. 1) All the wells are cemented with casing, which helps control sand inside the casing. 2) Qty-2 Meshrite screen pipe is attached to the DST-TCP string and extended below the packer. Coiled tubing and the related lifting frame for sand washing are prepared on site. 3) The pressure drawdown in the test is strictly controlled, and a proper perforation fluid is selected. The pressure difference in the time-lapse underbalanced perforation is limited to 2.76 MPa. 4) Test valves are designed to work in the presence of sand. 5) A sand monitor and desander are set upstream of the choke manifold, and an oil pool is prepared to contain the separated sand. 6) The production pressure difference is closely monitored while the production rate is increased gradually from a small initial value; the flowing pressure at the well head is kept within the sand-production limits. 7) The sand content of the produced fluid is monitored by taking a sample from the separator every 15 min.

3.5. Typhoon Prediction

Typhoons cause excursion of the platform, which in turn affects the stress state of the pipe string above the mud line. Once the excursion exceeds its limit, the testing tree must be disconnected immediately to avoid breaking the string.
According to the water depth and the requirements of the testing tree, the safe operating window of the floating platform can be calculated, considering factors such as the response time of the testing tree, the response time of the BOP, and drift analysis of the platform. When the platform drifts beyond a given extent, the bottom-hole valves should be shut and the testing tree disconnected. The safe operating windows are marked with different colors. Green means the dynamic positioning system is working well and normal operations can proceed. Blue-green means the disconnection limit is close: operations should be paused, the valves in the testing tree shut, drift of the platform monitored more closely, and disconnection of the string above the mud line prepared. Yellow means the string should be disconnected immediately. Red means the dynamic positioning system has broken down completely; the LMRP should be detached from the BOP instantly to keep the well head, BOP, and risers from being damaged.

4. Conclusion

Because of the limited space, dense facilities and personnel, harsh natural environment, and remoteness from land-based support, any accident in deepwater testing may result in significant loss. During the test, underground oil and gas are produced to the surface through the testing string; if the integrity of the string is lost, catastrophe can follow. A key to preventing accidents is to conduct safety assessment and control throughout the deepwater testing process, and test data should be gathered within safe limits.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Xie, Y.H. (2015) Status and Prospect of Proprietary Oil and Gas Field Exploration and Development in Deepwater West Area of South China Sea. Oil Drilling & Production Technology, 37, 11-13.
[2] Xie, Y.H. (2014) A Major Breakthrough in Deepwater Natural Gas Exploration in a Self-Run Oil/Gas Field in the Northern South China Sea and Its Enlightenment. Natural Gas Industry, 34, 1-8.
[3] Wu, M.W., Yang, H.J., Liang, H., et al. (2015) Key Techniques and Practices of Critical Flow Based Tests for Deepwater Exploration Wells: A Case Study of Deep Water Area in the Qiongdongnan Basin. Natural Gas Industry, 35, 65-70.
[4] Wu, M.W., Liang, H. and Jiang, H.F. (2015) Key Technology of Testing Design for High-Permeability Gas Well in Deep Water Area of the Qiongdongnan Basin. China Offshore Oil and Gas, 6, 31-36.
[5] Yang, S.K., Dai, Y.D., Lv, Y. and Guan, L.J. (2009) Key Techniques of Gas Well Testing in South China Sea Deep Water. China Offshore Oil and Gas, 4, 237-241.
[6] Dai, Z., Luo, D.H., Liang, W., et al. (2012) A DST Design and Practice in Deep-Water Gasfields, South China Sea. China Offshore Oil and Gas, 1, 25-28.
[7] Zhang, X.T. (2010) The Structure Design of Well Completion Test String in Deep Water. China University of Petroleum, Beijing.
[8] Xie, X., Fu, J.H., Zhang, Z. and He, Y.F. (2011) Mechanical Analysis of Deep Water Well-Testing Strings. Natural Gas Industry, 31, 77-79.

Copyright © 2023 by authors and Scientific Research Publishing Inc. This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.
Objective

Play audio with VLC in Python.

Distributions

This will work on any Linux distribution.

Requirements

A working Linux install with Python and VLC.

Difficulty

Easy

Conventions

• # - requires given command to be executed with root privileges either directly as a root user or by use of sudo command
• $ - given command to be executed as a regular non-privileged user

Introduction

There are plenty of ways to play audio files with Python. It really depends on your application, but the easiest way, by far, is to use the bindings for VLC to control VLC with Python and play your files. With VLC, you don't need to worry about codecs and file support. It also doesn't require many complicated methods or objects. So, for simple audio playback, VLC is best.

Get The VLC Bindings

The VLC bindings are actually developed and maintained by the VLC project. That said, the easiest way to get them is still to use pip.

# pip install python-vlc

Of course, if this is for a single project, use virtualenv instead.

Set Up Your File

Creating your file is very simple. You only need to import the vlc module.

import vlc

That's really all. You can use the module to create MediaPlayer instances, and that's what's necessary to play audio.

Create A Media Player Object

Again, the vlc module is super easy to use. You only need to instantiate a MediaPlayer object and pass it the audio file that you want to play. VLC can handle virtually any file type, so you don't need to worry about compatibility.

player = vlc.MediaPlayer("/path/to/file.flac")

Play A Song

Playing a file from an existing object is even easier. You only need to call the play method on the object, and Python will begin playing it. When the playback finishes, it will stop; there's no implicit looping or any nonsense like that.

player.play()

Stopping And Pausing

The VLC bindings make it easy to stop or pause a file once you've started playing it too. There is a pause method that will pause playback if the file is playing.
player.pause()

If the player is already paused, calling the method again will resume playback. To stop a file altogether, call the stop method.

player.stop()

Looping And "Playlists"

You can actually create pseudo-playlists with this and loop through the songs that you've added. It only takes a basic for loop, plus a short wait so that each song finishes before the next one starts (play returns immediately, so without the wait the loop would skip straight to the last song):

import time

playlist = ['/path/to/song1.flac', '/path/to/song2.flac', '/path/to/song3.flac']
for song in playlist:
    player = vlc.MediaPlayer(song)
    player.play()
    time.sleep(0.5)  # give VLC a moment to start playing
    while player.is_playing():
        time.sleep(1)

That's obviously very rudimentary, but you can see how Python can script VLC.

Closing Thoughts

VLC isn't the only solution for playing audio with Python, and it certainly isn't the best in every situation, but it is very good for a lot of basic use cases. The greatest bonus of using VLC is the unbeatable simplicity.

Exercises

1. Install the Python VLC bindings with pip in a virtual environment.
2. Create a Python file and import the VLC bindings.
3. Instantiate a player object to play a file.
4. Play that file.
5. Play the file again. Pause and resume playback.
6. Create a loop to play multiple files in order.
7. Challenge: Generate a list of files using Python modules to interact with directories on your system. Play them as a playlist.
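The challenge exercise can be approached with the standard library alone. The sketch below uses pathlib to collect audio files under a directory; the extension set and the collect_audio_files helper are our own inventions for illustration, not part of python-vlc:

```python
from pathlib import Path

# extensions we treat as audio; extend as needed
AUDIO_EXTENSIONS = {".flac", ".mp3", ".ogg", ".wav"}

def collect_audio_files(root):
    """Return a sorted list of audio file paths found under root, recursively."""
    root = Path(root)
    return sorted(
        str(p)
        for p in root.rglob("*")
        if p.is_file() and p.suffix.lower() in AUDIO_EXTENSIONS
    )
```

Feed the resulting list into the playlist loop from the previous section, e.g. `for song in collect_audio_files("/home/user/Music"): ...`.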
Do Antibiotics Cause Constipation

Introduction

Are you taking antibiotics and wondering "Do antibiotics cause constipation?" You are in the right place; we will discuss this in detail.

Can Antibiotics Cause Constipation?

Although antibiotics can induce gastrointestinal problems, constipation is not a common one. Antibiotics can, however, disturb the gut microbiota, causing gastrointestinal disorders such as diarrhoea or, less often, constipation. The risk of constipation depends on the antibiotic used and other personal risk factors. If a person experiences severe symptoms, such as a new or worsening fever, or if their side effects continue to worsen, they should seek medical treatment.

How To Treat Constipation Caused By Antibiotics?

If a person develops constipation while taking antibiotics, they should not blame the medication outright but rather consider boosting their hydration and fibre intake as well as exercising daily to keep their bowels moving. Several treatments are available, including:

Stool softeners, such as docusate sodium, can help soften stools and make them easier to pass.
Laxatives, such as polyethylene glycol (PEG), can help stimulate bowel movements and relieve constipation.
Probiotics can help restore the balance of bacteria in the gut and reduce the risk of constipation caused by antibiotics.
Fibre supplements, such as psyllium, can help bulk up stools and encourage regular bowel movements.
Drinking enough fluids, such as water and herbal tea, can help keep stools soft and prevent constipation.
Daily exercise helps bowel movements and prevents constipation.
Before combining any two medications or beginning any new treatment for antibiotic-induced constipation, consult your doctor or pharmacist.

What Is The Time Frame To Recover From Antibiotic-Caused Constipation?

There is no specific time frame for recovering from antibiotic-induced constipation. Even after the antibiotics have been cleared from the body, the changes to the gut that induce constipation may persist. Antibiotics can contribute to constipation in two ways: first, by disrupting the gut bacteria, and second, by depleting the body of nutrients that aid digestion. Antibiotics seldom induce constipation, but they can cause diarrhoea, cramps, and nausea. If constipation is severe, painful, or occurs alongside other gastrointestinal symptoms, a person should consult a doctor.

What Can Increase The Risk Of Constipation?

Some risk factors that may increase the chance of antibiotic-induced constipation include:

Age: Older people are more likely than younger adults to develop constipation as a result of antibiotics.
Duration of treatment: The longer a person takes antibiotics, the greater the risk of constipation.
Type of antibiotic: Some antibiotics are more likely than others to produce constipation.
Pre-existing conditions: People with gastrointestinal issues, such as irritable bowel syndrome (IBS), are more likely to experience antibiotic-induced constipation.
Poor diet: A diet low in fibre and high in processed foods can increase the risk of antibiotic-induced constipation.
Lack of exercise: A sedentary lifestyle can raise the risk of antibiotic-induced constipation.

How Do Antibiotics Affect Gut Bacteria?

Antibiotics can have a substantial impact on the gut microbiome, the diverse community of bacteria that lives in the gut and aids digestion.
Antibiotics can have a number of negative effects on the gut microbiota, including decreased species diversity, changes in metabolic activity, and the selection of antibiotic-resistant organisms, with downstream effects such as antibiotic-associated diarrhoea and recurrent C. difficile infections. Most antibiotics work by killing bacteria or stopping them from growing, but because they cannot tell the difference between good and bad bacteria, they can wreak havoc on the gut's healthy bacteria. Changes in the gut microbiome can induce a variety of gastrointestinal problems, including infections and diarrhoea.

Are There Any Natural Therapies That Can Treat Constipation Caused By Antibiotics?

While antibiotics rarely cause constipation, some natural therapies can be used to prevent or alleviate constipation while taking them:

Increase your intake of fluids and fibre. Drinking plenty of water and eating high-fibre meals can help keep your intestines moving and prevent constipation.
Probiotic supplements can help restore the balance of healthy bacteria in the gut and avoid gastrointestinal problems.
Herbal medicines, such as senna, psyllium, and aloe vera, can be used to ease constipation. However, consult a doctor or pharmacist before using any herbal medicines, especially if you are taking antibiotics.
To avoid unwanted alteration of the gut flora, antibiotics should only be used when absolutely essential.

Can Drinking Water Reduce The Risk Of Constipation?

Drinking more water while taking antibiotics can help reduce constipation. Constipation is frequently worsened by dehydration, which makes stools difficult to pass. Drinking enough water and staying hydrated can therefore help keep the intestines moving and prevent constipation. Increasing fibre intake and exercising regularly, in addition to drinking more water, can also help reduce constipation when taking antibiotics.
If constipation is severe, painful, or happens in conjunction with other gastrointestinal symptoms, a person should consult a doctor.

What High-Fibre Foods Can Help Prevent Constipation?

Eating high-fibre foods while taking antibiotics can help prevent constipation. Here are some examples of high-fibre foods:

Legumes: Beans, lentils, and chickpeas are high in fibre and can aid digestion.
Nuts and seeds: Almonds, chia seeds, and flaxseeds are high in fibre and can help prevent constipation.
Fruits: Apples, oranges, berries, pears, and figs are high in fibre and can aid bowel movements.
Vegetables: Broccoli, carrots, peas, and leafy greens like spinach and kale are all high in fibre.
Whole grains: Fibre-rich foods such as brown rice, whole wheat bread, and whole grain pasta can help reduce constipation.

Conclusion

After going through this blog, you should now be able to answer the question "Do antibiotics cause constipation?" If you have any questions, let us know in the comments below.

By Caitlyn
Article

Evaluation of the Karst Collapse Susceptibility of Subgrade Based on the AHP Method of ArcGIS and Prevention Measures: A Case Study of the Quannan Expressway, Section K1379+300-K1471+920

1 Department of Energy Engineering and Building Environment, Guilin University of Aerospace Technology, Guilin 541004, China
2 College of Civil Engineering and Architecture, Guilin University of Technology, Guilin 541004, China
3 Guangxi Hualan Geotechnical Engineering Co., Ltd., Nanning 530001, China
4 Institute of Karst Geology, Chinese Academy of Geological Sciences, Guilin 541004, China
* Author to whom correspondence should be addressed.

Water 2022, 14(9), 1432; https://doi.org/10.3390/w14091432

Received: 31 March 2022 / Revised: 24 April 2022 / Accepted: 27 April 2022 / Published: 29 April 2022

(This article belongs to the Special Issue Water–Rock/Soil Interaction)

Abstract: In order to solve the problem of geological disasters caused by karst collapse in the K1379+300-K1471+920 section of the Quannan Expressway reconstruction and expansion, an evaluation of the karst collapse susceptibility in the study area was carried out, and corresponding prevention measures are put forward.
Firstly, by identifying and determining the susceptible factors of karst collapse in the study area, three criterion layers (basic geological conditions, karst collapse impact, and human activities) were selected, with a total of seven susceptible factors. The analytic hierarchy process (AHP) was used to assign values to each factor, and the evaluation model of karst collapse susceptibility in the study area was established. Then, using the spatial analysis function of ArcGIS, the seven susceptible factor partition maps were superimposed according to the evaluation model, and the evaluation map of the karst collapse susceptibility was obtained. The study area was divided into five levels of susceptibility: extremely susceptible areas (2.64–2.81), susceptible areas (2.43–2.64), somewhat susceptible areas (1.88–2.43), non-susceptible areas (1.04–1.88), and non-karst areas (0.51–1.04). The length of the extremely susceptible area is 11.90 km, 12.85% of the total length of the route, and the susceptible area, somewhat susceptible area, non-susceptible area, and non-karst area account for 25.05%, 39.54%, 11.01%, and 11.55% of the total length, respectively. The research results on the karst collapse susceptibility in the area are consistent with the actual situation. Finally, combined with the research results, prevention measures for karst collapse are put forward, which provide a reference for the prevention and mitigation of disasters in engineering construction.

1. Introduction

Karst collapse is a dynamic geological phenomenon in which the surface rock and soil bodies sink downward under the action of natural or human factors and form collapse pits (holes) in the ground; it is one of the main types of geological disasters in karst areas [1,2,3,4,5]. According to the statistics, 17 countries have been plagued by karst collapse problems.
China is one of the countries with the most extensive karst collapse development in the world, covering 23 provinces and cities in China, among which karst collapse is especially serious in Guangxi, Guizhou, and Hubei, greatly affecting the economic construction and livable environment. Therefore, it is extremely necessary to solve the problem of karst collapse, which must be theoretically analyzed and mastered first. It is extremely important to select an effective evaluation method of karst collapse susceptibility, and then select the corresponding prevention measures on this basis, which will often achieve better results. However, due to the influence of many factors, the formation of karst collapse has a large degree of uncertainty, both in time and space [6,7,8]. The selection of suitable evaluation methods has always puzzled researchers [9,10,11], for which they have made a lot of efforts. From the 1960s to the present, the studies on karst collapse evaluation have been fruitful. Wang Fei [12], Yang Yang [13], Miao Shixian [14], Mu Chunmei [15], Wan Zhibo [16], Gao Xuechi [17], etc. have analyzed the evolution mechanism of karst collapse through field monitoring, experiments, and the analysis of triggering factors. Since the factors affecting karst collapse are multi-faceted, multi-layered, interrelated, and mutually restrictive, their degrees of influence are different, meaning many methods cannot be directly applied to karst collapse evaluation. Therefore, Hengheng [6], Zhong Yu [18], Wu Liqing [19], Duan Xianqian [20], Ouyang Hui [21], Cui Yuliang [22], etc. carried out quantitative predictive evaluations of karst collapse in time and space through different evaluation index systems and methods and achieved certain results. 
In order to seek a reliable evaluation method of karst collapse, through continuous exploration and application research, many experts and scholars, such as Pan Zongyuan [3], Zhang Jie [5], Zeng Bin [8], Li Xi [23], Chen Juyan [24], etc., have gradually confirmed that AHP methods and GIS technologies have better applicability and good reliability in karst collapse evaluation. The advantages of AHP methods and GIS technologies are also obvious [25,26,27,28,29]. For the karst collapse problem, Xiao Jianqiu [30], Peng Yuhuan [31], Zhang Baozhu [32], Luo Xiaojie [33], etc., based on the effective evaluation of karst collapse susceptibility and combined with karst collapse-inducing factors and the karst geological structure, proposed a management plan and prevention measures for karst collapse and achieved better results. In order to adapt to the economic development of the ASEAN region and ensure smooth and safe economic transportation, it is necessary to renovate and expand section K1379+300-K1471+920 of the Quannan Expressway. The total length of the route is 92.62 km, 72% of which is located in the karst area. The karst collapse is the main risk factor in the construction and operation of the expressway. It is of great significance to carry out the prediction and evaluation of karst collapse and propose prevention and control methods for the whole route to ensure the safe construction and operation of the expressway after completion, and to promote the steady economic development of the ASEAN region. In this paper, based on the previous research results, the evaluation index system and evaluation model of the karst collapse susceptibility are established by the AHP method, the evaluation of the karst collapse susceptibility of the K1379+300-K1471+920 section of the Quannan Expressway is carried out by ArcGIS analysis technology, and the prevention measures for karst collapse are proposed to provide a reference for disaster prevention and mitigation work. 
The research of this paper plays a guiding role in the safe construction and operation of the expressway after completion, which is of great practical significance. At the same time, it is of certain academic research value, as it promotes and draws reference from the research on the karst collapse of several route projects.

2. Overview of the Research Area

2.1. Natural Geography

The range of the study area is the K1379+300–K1471+920 section of the Quannan Expressway Expansion Project, which belongs to Binyang County, Heng County, and the Yongning District of Guangxi and passes through the karst area. The range of the research area is shown in Figure 1, Figure 2 and Figure 3. The research area is located in the south of central Guangxi, China, south of a latitude of 23.5° N, has a subtropical monsoon climate, is rich in light and heat, and has abundant rainfall. The annual average temperature is 21.8 °C, and the annual average rainfall is 1300 mm. The rainy season is concentrated from April to September, accounting for more than 76% of the annual rainfall. The rainfall is the least from November to February, which is the annual dry season. The northern part of the research area is located in the Guizhong Basin and its edge, while the southern part is mainly located in the Yong (Yu) River valley. The landform types are divided into mountainous and plain landforms, mainly karst landforms.

2.2. Geological Structure

The northern part of the research area is located in the Guizhong Basin and its edge, while the southern part is mainly located in the Yong (Yu) River valley. The geological structure is relatively complex, and the folds are generally not developed. Faults dominate the geological and tectonic background of the study area.
The faults in the study area are mainly concentrated in the area from K1389 to K1431; they are mainly compressive or compressive–torsional faults and lie mainly in the Litang Fracture Zone, the Luxu–Liantang–Hengxian Fault System, and the Tianma–Lucun Regional Fault. There are 12 faults intersecting along the line, five of which are distributed in the karst area. The folds are mainly the Liujing–Shangzhou gentle monoclinic structure and the Gantang short-axis oblique fold. According to the combination relationship and genesis of the structures in the study area, they can be divided into three structural systems: the Guangxi mountain-shaped structural system, the north-west structural belt, and the east–west structural belt. The east–west structure formed earliest and is mainly manifested as wide and gentle folds; the north-west-trending structural belt formed later than the east–west structure; and the latest is the Guangxi mountain-shaped structure, which is dominated by compressive–torsional faults. The three types of structures are all the result of compression. Because the Guangxi mountain-shaped structure formed later, and the multiple tectonic movements after its formation superimposed on and transformed the EW-trending and NW-trending structures, these two groups of structures experienced the alternating action of left and right twisting. Compression failure and tectonic traction along the fault zones are common, thus controlling the development direction of karst in the study area. Therefore, the east–west structure has the greatest impact on the line, followed by the north-west structure, with the mountain-shaped structure having the least. The relationships between the main faults and folds and the line in the research area, and their influence, are shown in Table 1 and Figure 3.

2.3. Landform

The types of landforms in the study area can be divided into karst landforms and non-karst landforms according to the lithology.
Non-karst landforms are composed of erosion and accumulation landforms. The erosion landforms are mainly formed by tectonic erosion. The terrain is characterized by gentle slopes, low mountains, and hills, and the terrain is undulating. The typical types of depositional landforms include alluvial–proluvial fans and river terraces, which are relatively flat. Karst landforms are formed by the combined action of dissolution and erosion. When a karst area is dominated by carbonate rocks, dissolution is the main action and erosion is the supplement. When a karst area is dominated by clastic rocks, erosion is the main action and dissolution is the supplement. Dissolution–erosion and erosion–dissolution landforms are mainly developed in the interbedded carbonate and clastic rocks, marl and argillaceous limestone, or non-carbonate rocks intercalated with carbonate rocks. Due to the low purity of the carbonate rock or the influence of non-carbonate rock, the dissolution effect of the carbonate rock is reduced, the karst development is relatively weak, the erosion effect of water flow is strong, and the weathering and denudation effects are also significant. Therefore, erosion plays an important role in the shaping of these landforms. The landform formed when dissolution is dominant and erosion is secondary is called a dissolution–erosion landform; otherwise, it is an erosion–dissolution landform, with relatively gentle terrain and large fluctuations. Dissolution landforms are the key landform types in the study area. They are developed in the relatively pure carbonate rock distribution areas. The typical dissolution landform types are dissolving residual hills and ridge plains. The terrain is generally flat and slightly undulating locally. The karst collapse in the study area is influenced by karst landforms, and the types of landforms in the study area are shown in Figure 3 and Table 2. The typical dissolution landforms of the study area are shown in Figure 4.

2.4. Overburden

As shown in Figure 3, more than 80% of the surface of the research area is covered by the Quaternary strata. The Quaternary overburden is of the Holocene and Pleistocene ages. The Holocene layers are mainly distributed in the first terrace of the river, and the Pleistocene layers are mainly distributed in the second and third terraces. According to its genesis, the overburden is mainly divided into residual slope sediments, residual sediments, and alluvial sediments, in the form of clay, silty clay, silt and sand, pebbles, and heterogeneous soil. According to the results of the field survey, field investigation, and geophysical exploration and drilling, the thickness of the Quaternary soil layer is small in the foothills and on slopes, generally less than 8 m. In the karst plains and valleys, the thickness of the overlying soil layer varies greatly, generally from 1 to 10 m, with the maximum thickness mostly less than 20 m. The engineering geological properties of the Quaternary soil layers in the karst area of the research area vary greatly, and there is a tendency for gradual deterioration from top to bottom. Especially in the deeper solution ditches and solution troughs, there is thick soft-plastic and flow-plastic soil, which easily produces karst collapse under the influence of groundwater level fluctuation. The karst collapse that has occurred in the study area is mainly the collapse of soil layers with a thickness of around or within 10 m. It is closely related to and has a great impact on the construction of the expressway project.

2.5. Hydrogeology

The stratigraphy of the study area is divided into three major types of groundwater-bearing rock groups: carbonate rock, clastic rock, and loose rock, and the corresponding groundwater types are karst water, fracture water of clastic rock, and pore water of the loose Quaternary accumulations.
Karst water is divided into three subcategories: karst fissure water, fissure–cave water, and cave–fissure water. It is mainly developed in the Devonian and Carboniferous carbonate strata and is rich in groundwater, mainly as pipeline flow, springs, and underground rivers developed along the tectonic lines and fracture zones. The fracture water of clastic rocks is mainly controlled by tectonics and weathering; it is distributed in the clastic rock formations of the Devonian, Cambrian, Cretaceous, and Tertiary systems, and the groundwater is relatively poor. The pore water of loose accumulations includes the pore phreatic water of the Holocene and Pleistocene series of the Quaternary, which is mainly distributed in the riverbeds, river mudflats, terraces, plains, and depressions of intermountain streams and gullies in the river valleys. Except for the riverbeds, river mudflats, and terraces, which are richer, the rest of the water is poor. The groundwater in the clastic area is mainly recharged by the infiltration of atmospheric rainfall, while the groundwater in the karst area is recharged by the infiltration of atmospheric rainfall collected by negative karst topography, infiltration through waterfall holes, underground river skylights, vertical wells, and karst fissures, and by the infiltration of Quaternary groundwater; there is also lateral recharge of fissure water from the neighboring non-soluble rocks in the research area. The fluctuation of the groundwater level in the karst area, especially at the rock–soil interface, is one of the main factors leading to the formation of karst collapse.

2.6. Karst Development

The bedrock strata in the research area range from Cambrian to Tertiary and are mainly sedimentary rocks, of which the Devonian strata are the most widely distributed. The length of the karst development section in the research area is 79.8 km, accounting for 86.18% of the total length.
The bedrock strata with the greatest influence on the line are the pure carbonate rocks with strong karst development, mainly including the Upper Devonian Liujiang Group (D3l), Middle Devonian Donggangling Group (D2d), Middle Carboniferous Tai Po Group (C2d), and Lower Carboniferous Datang Group (C1d). Seven underground rivers have developed in these rock groups. According to the field survey, there have been 45 natural collapses and 151 collapse pits. More than 90% of them are concentrated in the karst-developed section K1380–K1410. They are all soil collapses; no bedrock collapse has occurred. The thickness of the collapsed soil layer is generally about 10 m or less; where the soil layer is more than 20 m thick, the collapse pits are larger in scale. The collapses seriously affect the engineering construction in the research area. The typical karst collapse of the study area is shown in Figure 5.

2.7. Human Activities

The domestic water source in the study area is mostly groundwater (mainly karst groundwater). In most cases, there is one well per village. The amount of groundwater exploitation is generally not high, but a few villages and townships need a centralized water supply, and groundwater exploitation is high in these areas, which has caused subsidence and the cracking of many houses near the pumping wells. Local farms in the area have been massively converted to vegetable cultivation, drilling wells to extract groundwater for irrigation, which has led to karst collapse in many places. High-frequency vibration during expressway construction and operation, ground piling, and blasting vibration can also trigger collapse. For example, in 2012, during the construction of the Liunan Intercity High-Speed Railway, ground collapse was triggered in Ma'an Village by punching pile construction. According to the on-site investigation, 18 artificially triggered (groundwater pumping- or construction-induced) collapses were found.
Most of the collapses caused by groundwater pumping and draining occurred within 400 m of the pumping wells. Therefore, groundwater pumping and construction have had a great impact on the formation and occurrence of karst collapses in the study area. A typical groundwater mining well in the study area is shown in Figure 6.

3. Evaluation Index System Construction

3.1. Evaluation Methodology

Karst collapse has the characteristics of concealment and suddenness, and it is difficult to accurately predict its location and occurrence time before it happens. It is extremely harmful to the engineering construction and operation in the research area. Karst collapse is the most serious risk factor facing engineering construction in the research area; therefore, it is extremely important to carry out the prediction and evaluation of karst collapse susceptibility and screen out the potential karst collapse-susceptible areas for engineering construction. In the evaluation of karst collapse susceptibility, qualitative and quantitative evaluation methods are mainly used at present, but qualitative evaluation often cannot fully reflect the joint effect of multiple factors on karst collapse. Therefore, the evaluation of karst collapse susceptibility mostly adopts quantitative evaluation methods, such as the analytic hierarchy process, comprehensive fuzzy analysis, artificial neural networks, and the logistic regression method. The analytic hierarchy process involves decomposing the problem into its component factors according to the nature of the problem and the overall goal to be achieved, combining the factors at different levels according to their mutual influence and affiliation to form a multi-level analysis structure model, and finally reducing the system analysis to the determination of the relative importance weights of the bottom level relative to the top level (the overall goal).
The advantage is that when calculating the ranking weights of all elements of the same level for the highest level, the consistency ratio (CR) can be checked and corrected. If it is not satisfied, the judgment matrix can be readjusted until it is satisfied, which reduces the blindness and arbitrariness of relying entirely on experts’ scores and avoids the bias caused by other evaluation methods in which experts only assign values based on their experience. It is a combination of qualitative and quantitative decision analysis methods [34,35,36]. Therefore, in this study, first, the mature analytic hierarchy process was used to decompose the complex evaluation target of karst collapse susceptibility into the criterion layer with the main karst collapse-inducing factors, and then to decompose the criterion layer into the index layer. On this basis, the single-level ranking (weight) and total ranking were calculated by the method of qualitative index quantification, and the evaluation model of karst collapse susceptibility was established. Finally, ArcGIS technology was used to superimpose the influence zoning map of each index according to the evaluation model, and the prediction evaluation map reflecting the susceptibility of karst collapse was obtained. 3.2. Evaluation System Construction Due to the characteristics of sudden and unpredictable karst collapse, the evaluation of its susceptibility has been an important technical tool in the current comprehensive prevention and control of karst collapse. Although karst collapse is influenced by many factors, the occurrence of karst collapse cannot be separated from the three factors of rock, soil, and water [37]. 
Based on the results of the geological survey of karst collapse in the research area, combined with the hydrological engineering geological conditions and the previous research results on karst collapse-inducing factors [3,4,5,6,7,8,9,10,11,12,13], a hierarchical evaluation system of three levels, one objective, three criteria, and seven indicators was constructed by selecting susceptible factors in three categories covering a total of seven aspects [38,39,40], as shown in Figure 7. The evaluation of karst collapse susceptibility, as the objective layer, contains three criterion layers (basic geological conditions, karst risk influence, and human activities) and seven indicator layers (degree of karst development (Hkarst), karst landform (Hlandform), fault (Hfault), soil thickness (Hsoil), karst collapse (Hcollapse), underground river (Hgroundriver), and mining well (Hwell)).

3.3. Evaluation Model Construction

The evaluation model of karst collapse susceptibility was constructed according to the principle of mathematical multi-factor fitting or prediction using a one-dimensional polynomial, based on the established karst collapse susceptibility evaluation index system. A quantitative database of the seven major indicators was established, the analytic hierarchy process was used to determine the weights of each indicator, and each indicator was classified according to its influence on karst collapse, giving normalized indicator values. Using the ArcGIS spatial analysis tools, the indicator value of each evaluation factor was superimposed according to the weights. The evaluation model is as follows:

H = X1 × H1 + X2 × H2 + X3 × H3 + … (1)

where H is the susceptibility evaluation result; Xi is the weight of the influence factor of this layer determined by the analytic hierarchy process (AHP); and Hi is the value of the impact factor of this layer.
Each layer of impact factors can include multiple sub-level impact factors, and the upper-level impact factors are derived from the sub-level factors using a similar model. 4. Evaluation Model of the Karst Collapse Susceptibility 4.1. Quantification of Evaluation Index Assignment According to the characteristics of the geological environment of the study area and the degree of influence of each index on karst collapse based on a questionnaire survey of experts, as well as referring to the method of assigning influence factors to karst collapse in similar projects, and combined with the previous research results on the inducing factors of karst collapse [3,4,5,6,7,8,9,10,11,12,13] and the expert group’s review recommendations for this research, the quantitative indicators were finally determined, and the impact level of each impact factor was divided into five levels, namely extremely high impact, high impact, medium impact, low impact, and extremely low impact. The larger the score, the higher the degree of influence, and vice versa. On this basis, the zoning map of the degree of influence of each influence factor on karst collapse was derived, as detailed in Table 3 and Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14. 4.2. Constructing the Judgment Matrix and Assigning Values The method of constructing the judgment matrix is that each element with downward affiliation (called criterion) is the first element of the judgment matrix (located in the upper left corner), and each element affiliated to it is arranged in the first row and the first column in turn. In analyzing the relationship between the factors of the evaluation target, a judgment matrix can be constructed according to the importance of the two factors in the hierarchical structure evaluation system. 
In order to ensure the reliability and accuracy of the criterion of importance between two factors, in this study, the evaluation of the importance of each factor was based on the previous research results on karst collapse-inducing factors [3,4,5,6,7,8,9,10,11,12,13] and the expert group's review recommendations for this research. The final judgment matrix was then obtained by synthesizing the judgment matrices independently constructed by each expert. The judgment matrices were constructed according to the nine-level scale in Table 4. In order to eliminate the influence of bias introduced by the experts participating in the determination of the weighting factors, the judgment results of each expert were checked through the consistency test of the corresponding judgment matrix, and unreasonable judgment results were eliminated, which reduced the blindness and randomness of relying solely on expert scores. This avoids the deviation caused by experts assigning values based only on experience, reduces the influence of human factors, and ensures the reliability of the AHP method. According to the importance criterion between two factors (Table 4), the susceptible factors were assigned values and combined with the karst collapse hierarchy in Table 3. The judgment matrices KA-Bi and KBi-Ci were established for the associations between objective layer A and criterion layer Bi, and between criterion layer Bi and indicator layer Ci. For example, in the matrix for criterion layer B1, if the ratio of the importance of indicator layer C1 to indicator layer C3 is 3, then the ratio of the importance of C3 to C1 is 1/3; if the ratio of the importance of C1 to C2 is 2, then the ratio of C2 to C1 is 1/2.
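As a minimal illustration (not from the paper), the reciprocal rule a_ji = 1/a_ij described above can be used to build a full judgment matrix from only its upper-triangular importance ratios; the `judgment_matrix` helper name and the `upper` encoding below are assumptions for this sketch:

```python
# Sketch: build a reciprocal AHP judgment matrix from upper-triangular
# importance ratios (nine-level scale, Table 4).
def judgment_matrix(upper):
    """upper[i] lists the ratios a_ij for j > i, row by row."""
    n = len(upper) + 1
    k = [[1.0] * n for _ in range(n)]          # diagonal a_ii = 1
    for i in range(n - 1):
        for idx, j in enumerate(range(i + 1, n)):
            k[i][j] = float(upper[i][idx])     # stated ratio a_ij
            k[j][i] = 1.0 / k[i][j]            # reciprocal a_ji = 1/a_ij
    return k

# Criterion layer B1 -> indicator layers C1..C4, as in Equation (3).
KB1 = judgment_matrix([[2, 3, 6], [2, 3], [2]])
# KB1[2][0] is 1/3, matching the C3-versus-C1 example in the text.
```

Only the upper triangle needs to be elicited from the experts; the rest of the matrix follows mechanically, which is also how inconsistent double entry is avoided.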
Based on this approach, the matrices were constructed. Equations (2)–(4) are the judgment matrices KA-Bi, KB1-Ci, and KB2-Ci for objective layer A–criterion layer Bi, criterion layer B1–indicator layer Ci, and criterion layer B2–indicator layer Ci, respectively:

KA-Bi =
| 1    4    5  |
| 1/4  1    3  |   (2)
| 1/5  1/3  1  |

KB1-Ci =
| 1    2    3    6  |
| 1/2  1    2    3  |   (3)
| 1/3  1/2  1    2  |
| 1/6  1/3  1/2  1  |

KB2-Ci =
| 1    2  |   (4)
| 1/2  1  |

4.3. Hierarchical Single Ranking and Validation

Taking the criterion layer B1–indicator layer Ci judgment matrix (3) as an example, the weights of the indicator layers Ci within criterion layer B1 were calculated. The square root method of hierarchical analysis was used, as follows:

(1) Calculate the product of the elements of each row of the judgment matrix: M1 = 1 × 2 × 3 × 6 = 36; similarly, M2 = 3, M3 = 0.3333, and M4 = 0.0278.

(2) Calculate the nth root of each Mi (n = 4): W̄1 = 36^(1/4) = 2.4495; similarly, W̄2 = 1.3161, W̄3 = 0.7598, and W̄4 = 0.4083.

(3) Normalize the W̄i: W1 = 2.4495/(2.4495 + 1.3161 + 0.7598 + 0.4083) = 0.4965; similarly, W2 = 0.2668, W3 = 0.1540, and W4 = 0.0827. Thus, W = (0.4965, 0.2668, 0.1540, 0.0827) is the desired eigenvector.

(4) Calculate the maximum characteristic root λmax of the judgment matrix:

λmax = (1/n) Σ (KW)i/Wi = 1.9885/(4 × 0.4965) + 1.0713/(4 × 0.2668) + 0.6184/(4 × 0.1540) + 0.3314/(4 × 0.0827) = 4.0104

where (KW)1 = 1 × 0.4965 + 2 × 0.2668 + 3 × 0.1540 + 6 × 0.0827 = 1.9885, and (KW)2, (KW)3, and (KW)4 are calculated as 1.0713, 0.6184, and 0.3314, respectively.

(5) In order to test whether the qualitative judgments of the constructed judgment matrix logically meet the requirement of transitivity, a consistency test must be conducted; the consistency ratio CR is used as the criterion to measure the consistency of the judgment matrix.
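The single-ranking steps above can be sketched as follows. This is an illustrative re-implementation (the `ahp_weights` helper and the truncated RI table are assumptions, not from the paper) that reproduces the B1–Ci worked example:

```python
import math

# Average random consistency index RI (Table 5, excerpt up to n = 5).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

def ahp_weights(k):
    """Square-root method: weights, lambda_max, and consistency ratio CR."""
    n = len(k)
    # Steps (1)-(3): n-th root of each row product, then normalize.
    roots = [math.prod(row) ** (1.0 / n) for row in k]
    total = sum(roots)
    w = [r / total for r in roots]
    # Step (4): lambda_max = (1/n) * sum((KW)_i / W_i).
    kw = [sum(k[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(kw[i] / (n * w[i]) for i in range(n))
    # Step (5): CI = (lambda_max - n) / (n - 1), CR = CI / RI.
    ci = (lam - n) / (n - 1)
    cr = ci / RI[n] if RI[n] else 0.0
    return w, lam, cr

# Criterion layer B1 -> indicator layer Ci judgment matrix, Equation (3).
KB1 = [[1, 2, 3, 6],
       [1/2, 1, 2, 3],
       [1/3, 1/2, 1, 2],
       [1/6, 1/3, 1/2, 1]]
w, lam, cr = ahp_weights(KB1)
# w is approximately (0.4965, 0.2668, 0.1540, 0.0827), lambda_max is
# approximately 4.010, and CR is well below 0.10.
```

Running the same routine on each judgment matrix yields all of the weights reported in Table 6.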
The judgment matrix can be considered to have satisfactory consistency when CR < 0.10; otherwise, the judgment matrix needs to be adjusted. Here, CI = (λmax − n)/(n − 1) = (4.0104 − 4)/(4 − 1) = 0.00347 and CR = CI/RI = 0.00347/0.9 = 0.003856, where RI is the average random consistency index, whose values are shown in Table 5. When n = 4, RI = 0.9; since CR < 0.1, the consistency satisfies the requirement. Similarly, the weights of all evaluation factors can be calculated, as shown in Table 6.

4.4. Karst Collapse Susceptibility Evaluation Model

From the above calculations, A = [0.3285, 0.1770, 0.1023, 0.0548, 0.1494, 0.0830, 0.1050], and the karst collapse susceptibility prediction and evaluation model can be established as follows:

H = (0.3285 × Hkarst + 0.1770 × Hlandform + 0.1023 × Hfault + 0.0548 × Hsoil) + (0.1494 × Hcollapse + 0.0830 × Hgroundriver) + 0.1050 × Hwell

5. Analysis of Evaluation Results

By using the spatial analysis function of ArcGIS, the zoning maps of the influence of the seven assigned factors on the degree of karst collapse were superimposed according to the AHP prediction and evaluation model to obtain the prediction and evaluation map reflecting the karst collapse susceptibility, as shown in Figure 15. According to the size of the H value, the study area was divided into five levels: four levels of karst collapse susceptibility, namely extremely susceptible areas (2.64–2.81), susceptible areas (2.43–2.64), somewhat susceptible areas (1.88–2.43), and non-susceptible areas (1.04–1.88), and one non-karst level (0.51–1.04), as shown in Figure 15 and Table 7. In Table 7, within the karst collapse-susceptible areas, the length of the extremely susceptible area is 11.9 km, accounting for about 12.85% of the total length of the line, and the remaining three susceptibility levels cover 23.2 km (25.05%), 36.62 km (39.54%), and 10.2 km (11.01%), respectively.
The length of the non-karst area is 10.7 km, accounting for about 11.55% of the total length of the line. The sections of the expressway passing through the extremely susceptible area and susceptible area account for about 37.90% of the total length of the line. According to the analysis results, the susceptibility of karst collapse in the study area is mainly affected by factors such as the degree of karst development, karst landform, and soil thickness, and locally by faults, karst collapse, underground rivers, and mining well. The extremely susceptible and susceptible areas of karst collapse coincide with the existing karst collapse area, and the research results are consistent with the actual situation. The total length of the extremely susceptible and susceptible areas of karst collapse equals 34.3 km, mainly distributed in the dissolution plain landform (88.05%). Only 4.1 km (11.95%) of the susceptible area is distributed in the erosion–dissolution landform, 99.10% of the 11.1 km of the extremely susceptible area is distributed in the strong developed karst area, 63.36% of the 23.2 km of the susceptible area is distributed in the strong developed karst area, and the rest are distributed in the moderate developed karst area. The soil thickness in the extremely susceptible area is less than 10 m or 5–10 m, and it is affected by faults, underground channels, karst collapse, and mining wells. Areas with soil thickness of 5–10 m have increased susceptibility. The somewhat susceptible and non-susceptible areas of karst collapse are mainly controlled by the degree of karst development, and they are located in the areas with moderate developed karst or weak developed karst. The non-karst area is not affected by any karst collapse-susceptible factors. 
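As an illustrative sketch combining the evaluation model of Section 4.4 with the H-value break points above (the helper names and the sample cell scores are hypothetical; real scores come from the rasterised zoning maps):

```python
# Weights from Table 6 (indicator layers relative to the objective layer).
WEIGHTS = {
    "karst": 0.3285, "landform": 0.1770, "fault": 0.1023, "soil": 0.0548,
    "collapse": 0.1494, "groundriver": 0.0830, "well": 0.1050,
}

# Lower H bounds of the five levels (Figure 15 / Table 7), highest first.
LEVELS = [
    (2.64, "extremely susceptible"),
    (2.43, "susceptible"),
    (1.88, "somewhat susceptible"),
    (1.04, "non-susceptible"),
    (0.51, "non-karst"),
]

def susceptibility(scores):
    """Compute H = sum(X_i * H_i) and map it to a susceptibility level."""
    h = sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)
    for lower, label in LEVELS:
        if h >= lower:
            return h, label
    return h, "below mapped range"

# Hypothetical cell: strong karst (5), plain (5), thin soil (4), other
# indices at their lowest score; not a real cell from the study area.
cell = {"karst": 5, "landform": 5, "fault": 1, "soil": 4,
        "collapse": 1, "groundriver": 1, "well": 1}
h, label = susceptibility(cell)
```

Because the seven weights sum to 1, H stays on the same 1–5 scale as the indicator scores, and the superposition in ArcGIS is exactly this weighted sum evaluated cell by cell.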
Because the extremely susceptible and susceptible areas of karst collapse are very dangerous to the project, corresponding control measures must be taken; in particular, the sections of the study area with strongly developed karst, dissolution plain landforms, and soil thicknesses of less than 10 m should be treated seriously. In the somewhat susceptible and non-susceptible areas of karst collapse, attention should also be paid to the moderately or weakly developed karst areas with a soil thickness of less than 10 m, such as the K1410+300-K1410+600 and K1440+000-K1471+920 sections. Combined with the hydrological engineering geological conditions, karst development, karst landform, and other influencing factors in the study area, the evaluation results coincide with the locations of karst collapse in the K1379+300-K1471+920 section of the Quannan Expressway in recent years, indicating that it is feasible to use the AHP method combined with ArcGIS to evaluate the susceptibility to karst collapse. The results can provide a scientific basis and technical support for the prevention and control of geological disasters, the planning of key areas, and the development and utilization of land. The basic factors of karst collapse in the study area, such as soil thickness and underground rivers, were mainly obtained through borehole data, which reflect the geological situation of the study area well but cannot reveal it completely due to the limits of the number and spacing of boreholes. Moreover, karst development and its degree are dynamic, so the field investigation data are time-sensitive, and karst collapse is sudden and unpredictable. Therefore, the results of this study have certain limitations.

6.
Suggestions for Prevention Measures

According to the evaluation conclusions of the AHP method combined with ArcGIS, as shown in Table 7 and Figure 15, the extremely susceptible and susceptible areas of karst collapse have a great impact on the construction of the project and on its safety after completion, and the possibility of roadbed instability and damage is high. It is therefore necessary to take prevention measures in these sections. Specific prevention measures are proposed based on actual engineering experience and the evaluation conclusions, as shown in Table 8. At the same time, it is recommended to carry out a key exploration of the hidden karst soil caves and karst caves in the K1379+300-K1471+920 section to further identify the hidden karst situation. In addition, there is also a danger of karst collapse in the somewhat susceptible areas, although the degree of susceptibility is lower than that of the susceptible areas. It is recommended to detect hidden karst soil caves and karst caves there according to the actual situation, to pay attention to possible karst collapse, and to refer to the prevention measures for the susceptible areas for corresponding treatment. For the road sections in the non-susceptible areas, the degree of karst collapse susceptibility is low; during construction, attention should be paid to possible karst collapse in local areas, and corresponding treatment can also be made with reference to the prevention measures for the karst collapse-susceptible areas.

7. Conclusions

The evaluation of karst collapse susceptibility is a complex and comprehensive research topic. In the evaluation process, it is very important to use scientific evaluation methods and to establish a practical and complete comprehensive evaluation system for karst collapse susceptibility evaluation.
In this study, based on the AHP method combined with ArcGIS, the prediction and evaluation of the karst collapse susceptibility of section K1379+300-K1471+920 of the Quannan Expressway were carried out, and the conclusions are as follows:

(1) With the full integration of karst collapse-inducing factors, through the AHP hierarchical analysis method, it is reasonable to build a hierarchical structure evaluation system of three levels, one objective, three criteria, and seven indicators to derive the karst collapse susceptibility evaluation model.

(2) Through the spatial analysis function of ArcGIS, the prediction and evaluation map of karst collapse susceptibility was obtained. According to the size of the H value, the study area was divided into five levels: four levels of karst collapse susceptibility, including extremely susceptible areas (2.64–2.81), susceptible areas (2.43–2.64), somewhat susceptible areas (1.88–2.43), and non-susceptible areas (1.04–1.88), and one non-karst level (0.51–1.04). The length of the extremely susceptible area is 11.9 km, accounting for about 12.85% of the total length of the line, and the remaining three susceptibility levels cover 23.2 km (25.05%), 36.62 km (39.54%), and 10.2 km (11.01%), respectively. The research conclusions are consistent with the geographical locations of karst collapse and the susceptibility to karst collapse observed in recent years, and the research results are consistent with the actual situation.

(3) According to the analysis results, the total length of the extremely susceptible and susceptible areas of karst collapse is 34.3 km, mainly distributed in the dissolution plain landform (88.05%).
Only 4.1 km (11.95%) of the susceptible area is distributed in the erosion–dissolution landform; 99.10% of the 11.1 km of the extremely susceptible area is distributed in the strong developed karst area, 63.36% of the 23.2 km of the susceptible area is distributed in the strong developed karst area, and the rest are distributed in the moderate developed karst area. The soil thickness in the extremely susceptible area is less than 10 m or 5–10 m, and it is affected by faults, underground water, karst collapse, and mining well. Areas with soil thickness of 5–10 m have increased susceptibility. The somewhat susceptible and non-susceptible areas of karst collapse are mainly controlled by the degree of karst development, and they are located in the areas with moderate developed karst or weak developed karst. The non-karst areas are not affected by any karst collapse susceptible factors. (4) In view of the prediction and evaluation conclusions and with reference to similar engineering experience, effective karst collapse prevention measures are put forward, which can provide a reference for disaster prevention and mitigation in engineering construction. (5) The research results have played a guiding role in the safe construction and safe operation of the project after completion, which is of great practical significance and has certain academic research value, as it promotes and draws reference from the development of karst collapse research for several route projects. At the same time, the research method provides a reference for similar projects to evaluate the susceptibility of karst collapse and also provides a scientific basis for the planning and layout of route engineering and its geological disaster prevention. (6) Although the research results can provide guidance for prevention in the study area, the research results have certain limitations due to the difficulty of collecting basic research data comprehensively. 
Author Contributions Writing—review and editing, Y.-H.X.; Supervision, B.-H.Z.; Writing—original draft, Y.-X.L.; Data curation, B.-C.L.; Software, C.-F.Z.; Investigation, Y.-S.L. All authors have read and agreed to the published version of the manuscript. Funding This research was funded by the Project on Improving the Basic Research Ability of Young and Middle-aged Teachers in Guangxi Universities (Grant No. 2022KY0786) and the Natural Science Foundation of Guangxi Province, China (Grant No. 2020GXNSFAA297078). Institutional Review Board Statement Not applicable. Informed Consent Statement Not applicable. Data Availability Statement The data presented in this study are available in Figure 2, Figure 3, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14 and Figure 15. Conflicts of Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. References 1. Zheng, Z.J.; Ao, W.L.; Zeng, J.; Gan, F.P.; Zhang, W. Application of integrated geophysical methods to karst collapse investigation in the Sijiao village near Liuzhou. Hydrogeol. Eng. Geol. 2017, 44, 143–149. [Google Scholar] 2. Yi, S.M.; Lu, W.; Zhou, X.J. The Formation Investigation and Remediation of Sinkhole in the Xiamao Village, Guangzhou. Trop. Geogr. 2021, 41, 801–811. [Google Scholar] 3. Pan, Z.Y.; Jia, L.; Liu, B.C. Risk evaluation of karst collapse based on technology of AHP and ArcGIS—A case of Yongle Town in Zunyi City. J. Guilin Univ. Technol. 2016, 36, 464–470. [Google Scholar] 4. Wei, Y.Y.; Sun, S.L.; Huang, J.J. Spatial-temporal distribution and causes of karst collapse in the Xuzhou area. Carsologica Sin. 2015, 34, 52–57. [Google Scholar] 5. Zhang, J.; Bi, P.; Wei, A.; Tao, Z.B.; Zhu, H.C. Assessment of susceptibility to karst collapse in the Qixia Zhongqiao district of Yantai based on fuzzy comprehensive method. Carsologica Sin. 2021, 40, 215–220. 
[Google Scholar] 6. Wang, H.H.; Zhang, F.W.; Guo, C.Q.; Sun, C.T. Urban karst collapse hazard assessment based on analytic hierarchy process:An example of southern Wuhan City. Carsologica Sin. 2016, 35, 667–673. [Google Scholar] 7. Perrin, J.; Cartannaz, C.; Noury, G.; Vanoudheusden, E. A Multi-criteria Approach to Karst Subsidence Hazard Mapping Supported by Weights-of-evidence Analysis. Eng. Geol. 2015, 197, 296–305. [Google Scholar] [CrossRef] 8. Zeng, B.; Yang, M.Y.; Shao, C.J.; Chen, Z.H.; Peng, D.M.; Zheng, S.N. Susceptibility Assessment of Karst Collapse of Hangchang Expressway Projects Based on Analytic Hierarchy Process. Saf. Environ. Eng. 2018, 25, 29–38. [Google Scholar] 9. Wu, Y.N. Process and influencing factors of karst ground collapse in the water source area of Taian-Jiuxian. Carsologica Sin. 2020, 39, 225–231. [Google Scholar] 10. Tu, J.; Li, H.J.; Peng, H.; Wei, X.; Jia, L. Analysis on collapse model of the karst area covered by clay in Wuhan City Jiangxia district Hongqi village. Carsologica Sin. 2018, 37, 112–119. [Google Scholar] 11. Wang, G.L.; Qiang, Z.; Cao, C.; Chen, Y.; Hao, J.Y. Assessment of susceptibility to karst collapse based on geodetector and analytichierarchy process: An example of Zhongliangshan area in Chongqing. Carsologica Sin. 2021. Available online: https://kns.cnki.net/kcms/detail/45.1157.P.20210310.1716.004.html (accessed on 11 March 2021). 12. Wang, F.; Chai, B.; Xu, G.L.; Chen, L.; Xiong, Z.T. Evolution Mechanism of Karst Sinkholes in Wuhan City. J. Eng. Geol. 2017, 25, 824–832. [Google Scholar] 13. Yang, Y.; Cao, X.M.; Feng, F.; Ding, J.P. Mechanism analysis of karst collapse at polie of Yanyu, Guizhou. J. Liaoning Tech. Univ. (Nat. Sci.) 2016, 35, 1081–1084. [Google Scholar] 14. Miao, S.X.; Huang, J.J.; Wu, J.Q.; Li, S.M. Mechanism Analysis of Karst Collapse and Ground Fissures Disasters in Dayanglin, Zhenjiang. J. Disaster Prev. Mitig. Eng. 2013, 33, 679–685. [Google Scholar] 15. Mu, C.M.; He, Y.C.; Li, C.J. 
The Cause of Formation Analysis and Preventive Treatment to the Karstic Collapse of a Stadium in Guilin. Ind. Constr. 2013, 43, 459–463. [Google Scholar] 16. Wan, Z.B.; Wu, X.; Xu, S.; Li, Y.H.; Yang, R.Y.; Chen, H.H.; Gao, M.X.; Zhang, S.F. Mechanism of karst collapse in Shiliquan area in Zaozhuang City. Hydrogeol. Eng. Geol. 2006, 4, 109–111. [Google Scholar] 17. Gao, X.C. Mechanism Analysis of Karst Subgrade Subsidence of Laixin Expressway. J. Highw. Transp. Res. Dev. 2004, 4, 42–44. [Google Scholar] 18. Zhong, Y.; Zhang, M.K.; Pan, L.; Zhao, S.K.; Hao, Y.H. Risk assessment for urban karst collapse in Wuchang District of Wuhan based on GIS. J. Tianjin Norm. Univ. (Nat. Sci. Ed.) 2015, 35, 48–53. [Google Scholar] 19. Wu, L.Q.; Liao, J.J.; Wang, W.; Pi, W.; Zhou, L.L. Risk Assessment of Karst Surface Collapse in Wuhan Region Based on AHP-Information Method. J. Yangtze River Sci. Res. Inst. 2017, 34, 43–47. [Google Scholar] 20. Duan, X.Q.; Chu, X.W.; Li, B. Risk prediction and evaluation of the karst collapse based on the set pair mechanism analysis. J. Saf. Environ. 2016, 16, 72–76. [Google Scholar] 21. Ou, Y.F.; Xu, G.L.; Zhang, X.J.; Li, Y.F.; Dong, J.X. Static Analysis and Hazard Assessment of Karst Ground Collapse in Vital Project. J. Yangtze River Sci. Res. Inst. 2016, 33, 88–93. [Google Scholar] 22. Cui, Y.L.; Wang, G.H.; Li, Z.Y. Risk assessment of karst collapse areas based on the improved fish bone model:An example of the Liuzhou area in Guangxi Province. Carsologica Sin. 2015, 34, 64–71. [Google Scholar] 23. Li, X.; Yin, K.L.; Chen, B.D.; Li, Y.; Jiang, C.; Yi, J. Evaluation of karst collapse susceptibility on both sides of Yangtze River in Baishazhou, Wuhan and countermeasures for prevention and control during subway construction. Geol. Sci. Technol. Bull. 2020, 39, 121–130. [Google Scholar] 24. Chen, J.Y.; Zhu, B.; Peng, S.X.; Shan, H.M. 
AHP and GIS-based assessment of karst collapse susceptibility in mining areas—A case study of karst mining area in Lingyu, Guizhou. J. Nat. Hazards 2021, 30, 226–236. [Google Scholar] 25. Abedini, M.; Tulabi, S. Assessing LNRF, FR, and AHP models in landslide susceptibility mapping index: A comparative study of Nojian watershed in Lorestan province, Iran. Environ. Earth Sci. 2018, 77, 405. [Google Scholar] [CrossRef] 26. Hammami, S.; Zouhri, L.; Souissi, D.; Souei, A.; Zghibi, A.; Marzougui, A.; Dlala, M. Application of the GIS based multi-criteria decision analysis and analytical hierarchy process (AHP) in the flood susceptibility mapping (Tunisia). Arab. J. Geosci. 2019, 12, 653. [Google Scholar] [CrossRef] 27. Azarafza, M.; Akgün, H.; Atkinson, P.M.; Derakhshani, R. Deep learning-based landslide susceptibility mapping. Sci. Rep. 2021, 11, 24112. [Google Scholar] [CrossRef] [PubMed] 28. Subedi, P.; Subedi, K.; Thapa, B.; Subedi, P. Sinkhole susceptibility mapping in Marion County, Florida: Evaluation and comparison between analytical hierarchy process and logistic regression based approaches. Sci. Rep. 2019, 9, 7140. [Google Scholar] [CrossRef][Green Version] 29. Di Napoli, M.; Di Martire, D.; Bausilio, G.; Calcaterra, D.; Confuorto, P.; Firpo, M.; Pepe, G.; Cevasco, A. Rainfall-induced shallow landslide detachment, transit and runout susceptibility mapping by integrating machine learning techniques and GIS-based approaches. Water 2021, 13, 488. [Google Scholar] [CrossRef] 30. Xiao, J.Q.; Qiao, S.F. Interaction between karst-subsidence foundation and subgrade in Lou-Xing freeway and its treatment methods. J. Railw. Sci. Eng. 2009, 6, 33–38. [Google Scholar] 31. Peng, Y.H. Analysis of ground collapse mechanism and engineering management in karst areas. China Rural. Water Hydropower 2004, 4, 40–42. [Google Scholar] 32. Zhang, B.Z.; Chen, Z.D. Mechanism and comprehensive management of karst collapse in mines. Geol. China 1997, 4, 27–29. [Google Scholar] 33. 
Luo, X.J. Prevention, control and emergency disposal of covered karst ground collapse. Yangtze River 2016, 47, 38–44. [Google Scholar] 34. Nanehkaran, Y.A.; Mao, Y.; Azarafza, M.; Kockar, M.K.; Zhu, H.H. Fuzzy-based multiple decision method for landslide susceptibility and hazard assessment: A case study of Tabriz, Iran. Geomech. Eng. 2021, 24, 407–418. [Google Scholar] 35. Das, S. Flood susceptibility mapping of the Western Ghat coastal belt using multi-source geospatial data and analytical hierarchy process (AHP). Remote Sens. Appl. Soc. Environ. 2020, 20, 100379. [Google Scholar] [CrossRef] 36. Ghorbanzadeh, O.; Feizizadeh, B.; Blaschke, T. An interval matrix method used to optimize the decision matrix in AHP technique for land subsidence susceptibility mapping. Environ. Earth Sci. 2018, 77, 584. [Google Scholar] [CrossRef] 37. Wu, Y.B.; Liu, Z.K.; Yin, R.Z.; Lei, M.T.; Dai, J.L.; Luo, W.Q.; Pan, Z.Y. Evaluation and application of karst collapse susceptibility in Huaihua, Hunan based on AHP and GIS techniques. Carsologica Sin. 2021. Available online: https://kns.cnki.net/kcms/detail/45.1157.P.20211221.1205.002.html (accessed on 12 December 2021). 38. Arabameri, A.; Rezaei, K.; Pourghasemi, H.R.; Lee, S.; Yamani, M. GIS-based gully erosion susceptibility mapping: A comparison among three data-driven models and AHP knowledge-based technique. Environ. Earth Sci. 2018, 77, 628. [Google Scholar] [CrossRef] 39. Azarafza, M.; Ghazifard, A.; Akgün, H.; Asghari-Kaljahi, E. Landslide susceptibility assessment of South Pars Special Zone, southwest Iran. Environ. Earth Sci. 2018, 77, 805. [Google Scholar] [CrossRef] 40. Souissi, D.; Zouhri, L.; Hammami, S.; Msaddek, M.H.; Zghibi, A.; Dlala, M. GIS-based MCDM–AHP modeling for flood susceptibility mapping of arid areas, southeastern Tunisia. Geocarto Int. 2020, 35, 991–1017. [Google Scholar] [CrossRef]

Figure 1. The location map of the study area.
Figure 2. The range of the study area.
Figure 3. Engineering geological map of the study area.
Figure 4. Typical dissolution landforms.
Figure 5. Typical karst collapse.
Figure 6. A typical groundwater mining well.
Figure 7. Evaluation system.
Figure 8. Zoning map of the influence degree of karst development on karst collapse.
Figure 9. Zoning map of the influence degree of karst landform on karst collapse.
Figure 10. Zoning map of the influence degree of faults on karst collapse.
Figure 11. Zoning map of the influence degree of soil thickness on karst collapse.
Figure 12. Zoning map of the influence degree of existing karst collapse in the research area.
Figure 13. Zoning map of the influence degree of underground rivers on karst collapse.
Figure 14. Zoning map of the influence degree of mining wells on karst collapse.
Figure 15.
Zoning map of karst collapse susceptibility prediction and evaluation in the study area.

Table 1. List of main structures in the research area.

Number | Name | Characteristic | Intersection Area | Impact Degree
F1 | Bazha fault | Fracture of unknown nature | K1389+750 | High
F2 | Yangshan fault | Normal fault | K1393+020 | High
F3 | Bianshan fault | Normal fault | K1397+100 | High
F4 | Yao Village fault | Fracture of unknown nature | K1398+570 | High
F5 | Xingu Ling-Hengxian fault | Compressive fracture | K1404+340 | High
F6 | Gaoshan fault | Normal fault | K1412+020 | Medium
F7 | Fault of unknown nature | Fracture of unknown nature | K1415+050 | Low
F8 | Fault of unknown nature | Fracture of unknown nature | K1416+200 | Low
F9 | Li Village fault | Normal fault | K1417+480 | Low
F10 | Lijianpo fault | Normal fault | K1421+020 | Low
F11 | Liantang fault | Retrograde fault | K1423+530 | Low
F12 | Wangbuna fault | Retrograde fault | K1431+000 | Medium
1 | Liujing-Shangzhou gentle monoclinal fault | Monoclinic structure | Liujing, Lingli, Wuhe to Shangzhou area | High
2 | Gantang short-axis syncline | Syncline | Gantang area | Medium

Table 2. List of landform types in the study area.

Lithological Classification | Genetic Classification | Distribution (Sections)
Non-karst landforms | Erosion landforms | K1410+500~K1420+400, K1431+000~K1434+100
Non-karst landforms | Accumulation landforms | K1420+400~K1423+600, K1425+100~K1435+900
Karst landforms | Dissolution landforms | K1379+300~K1410+500, K1449+500~K1456+800, K1468+300~K1471+920
Karst landforms | Dissolution–erosion or erosion–dissolution landforms | K1423+600~K1431+000, K1435+900~K1449+500, K1456+800~K1468+300

Table 3. Evaluation factors and assignment table of structural karst collapse susceptibility level.
Objective Layer ACriteria Layer BIndicator Layer CImpact Degree/Assignment Extremely High Impact/5 High Impact/4Middle Impact/3 Low Impact/2Extremely Low Impact/1 Evaluation of karst collapse susceptibilityBasic geological conditions B1Degree of karst development C1 Hkarst StrongModerateWeak None Karst landform C2 Hlandform PlainErosion–karst hills valley (depression)Dissolution–erosion low hillsSolitary and residual peak Peak clump or peak forest Non-karst landforms FaultC3Hfault0~250 m250~500 m500~750 m750~1000 m>1000 m Soil thickness C4 Hsoil <5 m5~10 m10~20 m20~30 m>30 m Karst risk influence B2Karst collapse C5 Hcollapse >4/km22~4/km21~2/km21/km20 Underground river C6 Hgroundriver <1.5 m1.5~3 m3~6 m6~10 m>10 m Human activities B3Mining well C7 Hwel 0~250 m250~500 m500~750 m750~1000 m>1000 m Table 4. Importance scales. Table 4. Importance scales. Importance ScalesMeaning 1When two elements are compared, they are of equal importance 3When comparing two elements, the former is slightly more important than the latter 5When comparing two elements, the former is more important than the latter 7When comparing two elements, the former is significantly more important compared to the latter 9When comparing two elements, the former is extremely more important compared to the latter 2, 4, 6, 8The intermediate values of the above judgments ReciprocalIf the ratio of the importance of element I to element j is aij, then the ratio of the importance of element j to element I is aji = 1/aij Table 5. Average random consistency index allocation table. Table 5. Average random consistency index allocation table. Number of Steps n12345678910 RI000.580.901.121.241.321.411.451.49 Table 6. Evaluation factor weight allocation table. Table 6. Evaluation factor weight allocation table. 
Objective Layer AEvaluation of Karst Collapse Susceptibility Criterion layer BBasic geological conditions B1Karst risk influence B2Human activities B3 Criterion layer weights relative to objective layer0.66260.23240.1050 Indicator layer CDegree of karst development C1Karst landform C2Fault C3Soil thickness C4Karst collapse C5Underground river C6Mining well C7 Criterion layer weights relative to indicator layer weights0.49650.26680.15400.08270.64290.35711.0000 Indicator layer weights relative to objective layer weights0.32850.17700.10230.05480.14940.08300.1050 Table 7. Table of evaluation conclusions of karst collapse susceptibility. Table 7. Table of evaluation conclusions of karst collapse susceptibility. MileageSusceptible LevelLength/kmMileageSusceptible LevelLength/km k1379+300-k1381+800Extremely susceptible area2.5k1409+300-k1410+300Susceptible area1.0 k1381+800-k1388+000Susceptible area6.2k1410+300-k1410+600Somewhat susceptible area0.3 k1388+000-k1389+000Extremely susceptible area1.0k1410+600-k1414+500Non-susceptible area3.9 k1389+000-k1390+000Susceptible area1.0k1414+500-k1415+400Somewhat susceptible area0.9 k1390+000-k1391+000Extremely susceptible area1.0k1415+400-k1416+900Non-susceptible area1.5 k1391+000-k1394+000Susceptible area3.0k1416+900-k1417+200Somewhat susceptible area0.3 k1394+000-k1395+600Extremely susceptible area1.6k1417+200-k1418+100Non-karst area0.9 k1395+600-k1397+500Susceptible area1.9k1418+100-k1420+000Non-susceptible area1.9 k1397+500-k1399+700Extremely susceptible area2.2k1420+000-k1423+200Somewhat susceptible area3.2 k1399+700-k1400+500Susceptible area0.8k1423+200-k1425+100Non-susceptible area1.9 k1400+500-k1401+600Extremely susceptible area1.1k1425+100-k1433+000Non-karst area7.9 k1401+600-k1403+800Susceptible area2.2k1433+000-k1434+000Non-susceptible area1.0 k1403+800-k1405+000Extremely susceptible area1.2k1434+000-k1435+900Non-karst area1.9 k1405+000-k1408+000Susceptible area3.0k1435+900-k1440+000Susceptible area4.1 
k1408+000-k1409+300Extremely susceptible area1.3k1440+000-k1475+000Somewhat susceptible area35.0 Table 8. Table of prevention measures. Table 8. Table of prevention measures. Road SectionSusceptible LevelPrevention Measures k1379+300-k1381+800Extremely susceptible areaIf the karst is developed in a large area and the bedrock surface is violently undulating, a large excavation program will be adopted to cut the height, fill the low level, and reinforce the substrate; on the contrary, if the solution trench and solution trough are locally developed, a local excavation and backfill or structure span program will be adopted. If the burial depth is shallow, excavation and backfill will be used to reinforce the hidden soil cave and karst cave, and if the burial depth is deep, grouting or structure can be used to span according to the specific situation. k1381+800-k1387+700Extremely susceptible area, susceptible area K1387+700-K1410+500Extremely susceptible area K1418+400-K1425+100Susceptible area K1436+700-K1439+700Extremely susceptible area K1439+700-K1471+920Susceptible area Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. Share and Cite MDPI and ACS Style Xie, Y.-H.; Zhang, B.-H.; Liu, Y.-X.; Liu, B.-C.; Zhang, C.-F.; Lin, Y.-S. Evaluation of the Karst Collapse Susceptibility of Subgrade Based on the AHP Method of ArcGIS and Prevention Measures: A Case Study of the Quannan Expressway, Section K1379+300-K1471+920. Water 2022, 14, 1432. https://doi.org/10.3390/w14091432 AMA Style Xie Y-H, Zhang B-H, Liu Y-X, Liu B-C, Zhang C-F, Lin Y-S. Evaluation of the Karst Collapse Susceptibility of Subgrade Based on the AHP Method of ArcGIS and Prevention Measures: A Case Study of the Quannan Expressway, Section K1379+300-K1471+920. Water. 2022; 14(9):1432. https://doi.org/10.3390/w14091432 Chicago/Turabian Style Xie, Yan-Hua, Bing-Hui Zhang, Yu-Xin Liu, Bao-Chen Liu, Chen-Fu Zhang, and Yu-Shan Lin. 2022. 
"Evaluation of the Karst Collapse Susceptibility of Subgrade Based on the AHP Method of ArcGIS and Prevention Measures: A Case Study of the Quannan Expressway, Section K1379+300-K1471+920" Water 14, no. 9: 1432. https://doi.org/10.3390/w14091432 Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here. Article Metrics Back to TopTop
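The AHP calculations summarised in Tables 4-6 can be sketched numerically. The pairwise judgment matrix below is illustrative only (it is not the authors' actual matrix); the RI values are those of Table 5, and the consistency ratio test CR < 0.1 is the standard AHP acceptance criterion:

```python
# AHP weight derivation and consistency check (illustrative sketch).

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}  # Table 5 values

def ahp_weights(matrix):
    """Column-normalized row averages, the common AHP weight approximation."""
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    return [sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

def consistency_ratio(matrix):
    """Return (weights, CR) where CR = CI / RI and CI = (lambda_max - n)/(n - 1)."""
    n = len(matrix)
    w = ahp_weights(matrix)
    aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lambda_max = sum(aw[i] / w[i] for i in range(n)) / n
    ci = (lambda_max - n) / (n - 1)
    return w, ci / RI[n]

# Hypothetical judgments for B1 (geology), B2 (karst risk), B3 (human activity)
A = [[1, 3, 5],
     [1 / 3, 1, 3],
     [1 / 5, 1 / 3, 1]]
w, cr = consistency_ratio(A)
print(w, cr)  # CR below 0.1 means the judgments are acceptably consistent
```

With this hypothetical matrix the weights come out close to, but not identical with, the B1/B2/B3 weights of Table 6, which is expected for invented judgments.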
PLoS Biol. Feb 2005; 3(2): e65. Published online Feb 15, 2005. doi: 10.1371/journal.pbio.0030065. PMCID: PMC548955

Facts from Text—Is Text Mining Ready to Deliver?

Biological databases offer access to formalized facts about many aspects of biology—genes and gene products, protein structure, metabolic pathways, diseases, organisms, and so on. These databases are becoming increasingly important to researchers. The information that populates databases is generated by research teams and is usually published in peer-reviewed journals. As part of the publication process, some authors deposit data into a database but, more often, it is extracted from the published literature and deposited into the databases by human curators, a painstaking process.

Research literature and scientific databases fulfil different needs. Literature provides ideas and new hypotheses, but is not constrained to provide facts in formats suitable for use in databases. By contrast, databases efficiently provide large quantities of data and information in a standardised schema representing a predefined interpretation of the data. While the acceptance of a paper can enforce the submission of data to a central data repository, such as EMBL (www.ebi.ac.uk/embl/) or ArrayExpress (www.ebi.ac.uk/arrayexpress/), nobody receives credit for the submission of a fact to a database without an associated publication. As long as this practice continues, curation will be necessary to add the (re)formalised facts to biological databases. Given that publications are not about to be replaced with routine deposition of data into databases, is it possible to develop software tools to support the work of the curator?
Could we automatically analyse new scientific publications routinely to extract facts, which could then be inserted into scientific databases? Could we tag gene and protein names, as well as other terms in the document, so that they are easier to recognise? How can we use controlled vocabularies and ontologies to identify biological concepts and phenomena? Fortunately, there are many groups that are now seeking to answer these questions, precisely with a view to extracting facts from text.

Part of the motivation for this effort in text mining technology is the inexorable rise in the amount of published literature (Figure 1). This massive growth, coupled with the current inefficiencies in transferring facts into other data resources, leads to the unfortunate state that biological databases tend to be incomplete (for example, DNA sequences without known function in genetic databases), and there are inconsistencies between databases and literature.

Figure 1. Medline Article Deluge

In theory, text mining is the perfect solution to transforming factual knowledge from publications into database entries. But computational linguists have not yet developed tools that can analyse more than 30% of English sentences correctly and transform them into a structured formal representation [1,2]. We can analyse part of a sentence, such as a subphrase describing a protein–protein interaction or part of a sentence containing a gene and a protein name, but we always run into Zipf's law whenever we write down the rules for how the extraction is done (Figure 2) [3]. A small number of patterns describe a reasonable portion of protein–protein interactions, gene names, or mutations, but many of those entities are described by a pattern of words that's only ever used once. Even if we could collect them all—which is impossible—we can't stop new phrases from being used.
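Zipf's effect is easy to demonstrate on any corpus: rank words by frequency, and a handful of words dominate while a long tail of words occurs only once. A toy Python illustration (the sentence is a stand-in for real Medline text):

```python
from collections import Counter

text = (
    "the protein binds the receptor and the receptor activates "
    "the kinase while the kinase phosphorylates a novel substrate"
)
counts = Counter(text.split())
ranked = counts.most_common()               # [(word, freq), ...] by falling frequency
hapaxes = [w for w, c in ranked if c == 1]  # words that occur exactly once

for rank, (word, freq) in enumerate(ranked[:3], start=1):
    print(rank, word, freq)
print(f"{len(hapaxes)} of {len(ranked)} distinct words occur only once")
```

Even in this twelve-word vocabulary, most distinct words are used a single time, which is exactly the long tail that defeats hand-written extraction patterns.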
Figure 2. Zipf's Law

Curators—The Gold Standard

Hand-curated data is precise, because the curator is trained to inspect literature and databases, select only high-quality data, and reformat the facts according to the schema of the database. In addition, curators select citations from the text as evidence for the identified fact, and those citations are also added to the database. Curators read and interpret the text at the same time, and if they don't understand the meaning of a sentence, they can go back and pick a new strategy to analyse it—they can even call the authors to iron out any ambiguities. Curators can also cope with the high variability of language described by Zipf's law. At present, no computer-based system comes close to matching these capabilities. In particular, it is difficult to convert all the curators' domain knowledge into a structured training set for the purposes of machine learning approaches.

Curators fulfil a second important task: they know how to define standards for data consistency, in particular, the most relevant terminology, which has led to the design of standardised ontologies and controlled vocabularies (see Box 1 for an explanation of these and related terms). Examples of these include Gene Ontology (GO; www.geneontology.org/), Unified Medical Language System (www.nlm.nih.gov/research/umls/), and MedDRA (www.meddramsso.com/NewWeb2003/index.htm) [4]. These terminological resources help to relate entries in bioinformatics databases to concepts mentioned in scientific publications and to link related information in databases using different schemas. Text miners would love such standards to be used in text, but there is an understandable reluctance to impose and use standards that might limit the expressiveness of natural language.

Box 1. Glossary

Controlled vocabulary: A set of terms, to standardise input to a database.

F-measure: A statistic that is used to score the success of NE recognition by text mining tools.
The F-measure is an average parameter based on precision (how many of the entities found by the tool are correct identifications of an entity) and recall (how many of the entities existing in the text did the tool find).

Machine learning: The technology and study of algorithms through which machines (computers) can "learn", or automatically improve their systems through data gathered in the past (experience).

Ontology: A set of terms with clear semantics (language), clear motivations for distinction between the terms, and strict rules for how the terms relate to each other.

Curation and Text Mining—In Partnership

The problem with curation of data is that it is time consuming and costly, and therefore has to focus on the most relevant facts. This compromises the completeness of the curated data, and curation teams are doomed to stay behind the latest publications. So, is it possible for curation and text mining to work together for rapid retrieval and analysis of facts with precise postprocessing and standardisation of the extracted information?

There are several software tools that perform well in the identification of standardised terms from the literature. Examples include Textpresso and Whatizit [5,6,7,8]. Extensive term lists come from the Human Genome Organization (www.gene.ucl.ac.uk/hugo; 20,000 gene and protein names), GO (almost 20,000 terms), Uniprot/Swiss-Prot (www.ebi.uniprot.org/index.shtml; about 200,000 terms), and other databases. In addition, terms describing diseases, syndromes, and drugs are available from the Unified Medical Language System. Altogether, about 500,000 terms constitute the basis of domain knowledge in life sciences. To gain some perspective of this figure: an average individual handles 2,000 to 20,000 terms in his or her daily language, and Merriam-Webster's Collegiate Dictionary provides definitions for 225,000 terms (www.merriam-webstercollegiate.com/).
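At its simplest, matching such term lists against text is a dictionary lookup over token n-grams. The sketch below is illustrative only: the vocabulary is invented, and real systems such as Textpresso or Whatizit handle term variants, synonyms, and disambiguation far beyond this:

```python
# Minimal dictionary-based term tagger (illustrative, not any cited system).
vocabulary = {
    "thyroid hormone receptor": "GO-style concept",
    "protein kinase": "enzyme",
    "cell line": "experimental material",
}

def tag_terms(text, vocab, max_len=4):
    tokens = text.lower().split()
    hits = []
    i = 0
    while i < len(tokens):
        # Greedily prefer the longest vocabulary term starting at position i.
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            candidate = " ".join(tokens[i:i + n])
            if candidate in vocab:
                hits.append((candidate, vocab[candidate]))
                i += n
                break
        else:
            i += 1
    return hits

sentence = "The thyroid hormone receptor interacts with a protein kinase"
print(tag_terms(sentence, vocabulary))  # two vocabulary hits
```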
The identification of all terms by a text mining system still sets challenging demands. All variants of a term have to be taken into account, including syntactical variants and synonyms. In the case of ambiguities, relevant findings have to be distinguished from other findings—a process referred to as disambiguation. Depending on the curation task, it might therefore be advantageous to select only part of the terminological resources and thus restrict the domain of the terminology to the curators' needs (Figure 3).

Figure 3. GOAnnotator

Available text mining solutions are concerned with named entity (NE) recognition (entities are, for example, proteins, species, and cell lines), with identification of relationships between NEs (such as protein interactions), and with the classification of text subphrases according to annotation schemata in general (thyroid receptor is a thyroid hormone receptor) [9,10,11,12,13,14,15]. Whilst the identification of a curation team's terminology in the scientific text under scrutiny is immensely valuable, there is still a long way to go before this becomes routine.

Some Immediate Challenges

Not all terms used in the literature (NEs) can actually be found in some kind of database (perhaps because of an author error, or an alternative name for an entity adopted by the community). Text mining methods therefore have to detect new terms and map the term to known terminology [16]. If several mappings are possible, the correct version has to be selected (disambiguation). Over the past several years text mining research teams have presented various approaches that train a software tool to locate representations of gene or protein names (for example, BioCreative, www.pdg.cnb.uam.es/BioLINK/BioCreative.eval.html, and JNLPBA, www.genisis.ch/~natlang/JNLPBA04/) [17,18]. These tools are scored with a statistic known as the F-measure, with the best methods scoring about 0.85. At the level of 0.85, curators still tend to be unhappy.
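For concreteness, the F-measure behind these scores is the harmonic mean of precision and recall; the counts in this sketch are invented purely to show the arithmetic:

```python
def f_measure(true_positives, false_positives, false_negatives):
    """Harmonic mean of precision and recall (the balanced F1 score)."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# e.g. a tagger that finds 85 of 100 real gene names and adds 15 spurious ones
score = f_measure(true_positives=85, false_positives=15, false_negatives=15)
print(round(score, 2))  # 0.85
```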
However, analyses have shown that this score is in the range of curator–curator variation (unpublished data, measured as part of the project work for [19]), which suggests that such methods produce useful results. Additional information-extraction methods have been proposed, for example, for the documentation of mutations in specific genes and for the extraction of the subcellular location of proteins [11,13]. An even larger number of tools focus on the identification of appropriate terminology for the annotation of genes (GO terms) [7]. The evaluation of their usefulness depends on the demands of the user groups.

Finally, another way to support curation teams would be to provide information-retrieval methods to guide the team members towards documents containing relevant information. For example, in 2002, the participants in the Knowledge Discovery and Data-Mining Challenge Cup (www.cs.cornell.edu/projects/kddcup/) had to select documents from a given corpus that contained relevant experimental results about Drosophila [20].

How Can Publishers Contribute?

For all automated information-extraction methods, it is obvious that access to literature is crucial. Electronic access has, of course, already had a huge impact, but the structure and organisation of manuscripts could also be improved. For example, semantic tags could be integrated into the text. The markup would not appear on web pages or when the document is printed, but it would help software to deal with semantic aspects of the document. Inserting tags, for example, to mark protein names would allow retrieval software to find documents about proteins even if they look like common English words, such as "you" or "and". Retrieval engines currently often ignore such terms. In addition, explicit tags would enable text mining methods, for example, when looking for protein–protein interactions, to use the correct semantic interpretation.
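Such markup could be as lightweight as wrapping known names in XML-style tags. The tag name and the protein list below are invented for illustration; no actual publisher schema is implied:

```python
import re

# Hypothetical name list; a real pipeline would draw on curated gene/protein lexica.
protein_names = ["thyroid hormone receptor", "BRCA1", "p53"]

def tag_proteins(text, names):
    # Tag longest names first so a short name never splits a longer match.
    for name in sorted(names, key=len, reverse=True):
        pattern = re.compile(r"\b" + re.escape(name) + r"\b")
        text = pattern.sub(r"<protein>\g<0></protein>", text)
    return text

print(tag_proteins("BRCA1 modulates p53 activity", protein_names))
# <protein>BRCA1</protein> modulates <protein>p53</protein> activity
```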
Text mining systems already available today, such as Whatizit, can integrate semantic tags during submission, which have to be verified by the author. Text mining is ready to deliver tools whereby information is passed back to the authors about the proper use of terminology within their documents. If the use of a term raises conflicts or ambiguities or if the use of a term is wrong, the author is asked to provide feedback. The curation effort is resolved at the earliest possible time-point. Author, publisher, reviewer, and reader profit from consistent information representation, which leads to better dissemination of documents and journals and easily offsets the additional cost in the generation of an article. Publishers and authors have to agree on standards though.

Is Text Mining Ready to Deliver?

Text mining solutions have found their way into daily work, wherever fast and precise extraction of details from a large volume of text is needed. We have to keep in mind, however, that any text mining tool, just like other bioinformatics resources, will only be suitable for a limited number of tasks. For example, the same text may serve curators from different communities who extract different types of facts, depending on their domain knowledge. Furthermore, different communities have different expectations for accuracy. For example, curators dealing with a small set of proteins prefer tools with high recall, whereas curators dealing with a large number of proteins prefer tools with high precision. Although text mining cannot dissect English sentences completely, and cannot extract the meaning and put the facts into a database, text mining tools are becoming increasingly used and valued. Text mining is ready to deliver handling of complex terminology and nomenclature as a mature service. It is only a matter of time and effort before we are able to extract facts automatically. The consequences are likely to be profound.
Not only will we have a more effective approach for the mining of knowledge from the literature, our approach to the publication process itself might change. If a fact is clear enough for automatic extraction, it could be reported in a fact database instead of a publication. As methods improve, authors will see more and more of their text being analysed and formalised in a database. If appropriate quality control is provided, and if authors receive due credit for their deposition of facts into databases, we might well see a shift towards original papers describing new creative ideas and visions rather than just listing facts.

Abbreviations

GO, Gene Ontology; NE, named entity

Footnotes

Citation: Rebholz-Schuhmann D, Kirsch H, Couto F (2005) Facts from text—Is text mining ready to deliver? PLoS Biol 3(2): e65.

Dietrich Rebholz-Schuhmann and Harald Kirsch are at the European Bioinformatics Institute, Cambridge, United Kingdom. Francisco Couto is in the Departamento de Informática, Faculdade de Ciências, Universidade de Lisboa, Portugal.

References

1. Briscoe T, Carroll J. Robust accurate statistical annotation of general text. Proceedings of the Third International Conference on Language Resources and Evaluation; 2002 May; Canary Islands, Spain: European Language Resources Association; 2002. pp. 1499–1504.
2. Pyysalo S, Ginter F, Pahikkala T, Koivula J, Boberg J, et al. Analysis of link grammar on biomedical dependency corpus targeted at protein–protein interactions. In: Collier N, Ruch P, Nazarenko A, editors. Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and Its Applications; 2004 August 28–29; Geneva, Switzerland. 2004. pp. 15–21.
3. Zipf GK. Selective studies and the principle of relative frequency in language. Cambridge (Massachusetts): MIT Press; 1932.
4. Gene Ontology Consortium. Creating the Gene Ontology resource: Design and implementation. Genome Res. 2001;11:1425–1433.
5. Müller HM, Kenny EE, Sternberg PW. Textpresso: An ontology-based information retrieval and extraction system for biological literature. PLoS Biol. 2004;2:e309.
6. Nenadic G, Mima H, Spasic I, Ananiadou S, Tsujii JI. Terminology-driven literature mining and knowledge acquisition in biomedicine. Int J Med Inf. 2002;67:33–48.
7. Perez AJ, Perez-Iratxeta C, Bork P, Thode G, Andrade MA. Gene annotation from scientific literature using mappings between keyword systems. Bioinformatics. 2004;20:2084–2091.
8. Rebholz-Schuhmann D, Kirsch H. Extraction of biomedical facts—A modular Web server at the EBI (Whatizit) [presentation]. Healthcare Digital Libraries Workshop; 2003 September 16; Bath, United Kingdom. 2004.
9. Marcotte EM, Xenarios I, Eisenberg D. Mining literature for protein–protein interactions. Bioinformatics. 2001;17:359–363.
10. Ono T, Hishigaki H, Tanigami A, Takagi T. Automated extraction of information on protein–protein interactions from the biological literature. Bioinformatics. 2001;17:155–161.
11. Rebholz-Schuhmann D, Marcel S, Albert S, Tolle R, Casari G, et al. Automatic extraction of mutations from Medline and cross-validation with OMIM. Nucleic Acids Res. 2004;32:135–142.
12. Rzhetsky A, Iossifov I, Koike T, Krauthammer M, Kra P, et al. GeneWays: A system for extracting, analyzing, visualizing, and integrating molecular pathway data. J Biomed Inf. 2004;37:43–53.
13. Stapley BJ, Kelley LA, Sternberg MJ. Predicting the sub-cellular location of proteins from text using support vector machines. Pac Symp Biocomput. 2002;2002:374–385.
14. Temkin J, Gilder M. Extraction of protein interaction information from unstructured text using a context-free grammar. Bioinformatics. 2003;19:2046–2053.
15. Yu H, Agichtein E. Extracting synonymous gene and protein terms from biological literature. Bioinformatics. 2003;19(Suppl 1):I340–I349.
16. Hanisch D, Fluck J, Mevissen HT, Zimmer R. Playing biology's name game: Identifying protein names in scientific text. Pac Symp Biocomput. 2003;2003:403–414.
17. Blaschke C, Hirschman L, Yeh A, Colosimo M, Morgan A, et al. Report on the BioCreAtIvE Workshop, Granada 2004 [abstract]. 12th International Conference on Intelligent Systems for Molecular Biology; 2004; Glasgow, United Kingdom. 2004.
18. GuoDong Z, Dan S, Jie Z, Jian S, Heng TS, et al. Recognition of protein/gene names from text using an ensemble of classifiers and effective abbreviation resolution. In: Blaschke C, editor. Proceedings of the BioCreative Workshop; 2004 March 28–31; Granada, Spain: BMC Bioinformatics; 2004.
19. Albert S, Gaudan S, Knigge H, Raetsch A, Delgado A, et al. Computer-assisted generation of a protein-interaction database for nuclear receptors. Mol Endocrinol. 2003;17:1555–1567.
20. Yeh A, Hirschman L, Morgan A. Evaluation of text data mining for database curation: Lessons learned from the KDD Challenge Cup. Bioinformatics. 2003;19:I331–I339.
Switching it up: IGBTs

Posted by Mackenzie Inman 15/08/2014 9 Comment(s) Variable Frequency Drives

An Insulated Gate Bipolar Transistor (IGBT) is a key component of a VFD (Variable Frequency Drive). One easy way to analyze a VFD is to break it down into three main parts: the bridge converter, the DC link, and, what we will talk about today, the inverter. An IGBT is the inverter element in a VFD, pulsing voltage faster than we can even blink. IGBTs have come a long way since they were first developed in the 1980s. The IGBTs of today are much more advanced than their predecessors, which were slow at switching current on and off and often had problems overheating when passing a high current. With each new generation, IGBTs have continued to improve. No longer plagued by slow speeds, IGBTs have become highly reliable devices that can handle high voltages and are able to switch in less than a nanosecond (that's a billionth of a second)!

IGBTs are the "Gatekeepers" of Current

To understand an IGBT's role in a VFD, it is important to identify how an IGBT works on a smaller scale. As a transistor, an IGBT is a semiconductor device with three terminals that works as a switch for electrical current. Just as the word "gate" suggests, when voltage is applied to the gate, it opens, or "turns on", and creates a path for current to flow between the layers. If no voltage is applied to the gate, or if the voltage is not high enough, the gate remains closed and there will be no flow of electricity. In this way, an IGBT behaves like a switch: on when the gate is open and conducting current, and off when it is closed. The IGBT thus acts as the switch used to create Pulse-Width Modulation (PWM). An IGBT will switch the current on and off so rapidly that less voltage is delivered to the motor, helping to create the PWM wave.
For example, although the input voltage may in reality be 650V, the motor perceives it as more like 480V through PWM (shown in the diagrams below, at 480V/60Hz and 80V/10Hz). This PWM wave is key to a VFD's operation because it is the variable voltage and frequency created by the PWM wave that allow a VFD to control the speed of the motor. Therefore, without the IGBT switching the current on and off so rapidly, a PWM wave, and the speed control that comes with it, could not be created.

The number of pulses per second from the IGBTs is known as the carrier frequency. Since the carrier frequency is an adjustable parameter on most VFDs, you can essentially set it as high or as low as you want, although adjusting it comes with a few tradeoffs. Setting the carrier frequency high will reduce the acoustic noise level produced by the VFD, but it will also shorten the expected VFD life due to heat. A higher carrier frequency can also contribute to increased motor heating and affect the overall efficiency of the motor. On the other hand, if you are in a sound-sensitive environment, or if you just don't want a headache, be aware that setting the carrier frequency too low can create a lot of motor noise or whining from the VFD. We have found that setting your carrier frequency at about 2 kilohertz achieves a nice balance between acceptable acoustics and keeping your VFD running efficiently.

In a typical six-pulse drive there are six IGBTs pulsing voltage up to 15,000 times per second. Since their introduction in the 1980s, IGBTs have literally switched up the market and now play a large role in many modern power electronics applications where speed and process control are needed. It is clear that IGBTs will continue to play a large role in power electronics as the technology becomes more and more advanced.
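If you like numbers, the averaging trick behind PWM is easy to sketch. This is a simplified duty-cycle model with illustrative values (real sinusoidal PWM varies the duty cycle continuously over each output cycle):

```python
def pwm_average(bus_voltage, duty_cycle):
    """Mean output of an ideal PWM train: on at bus_voltage, off at 0 V."""
    return bus_voltage * duty_cycle

bus = 650.0  # DC link voltage, in volts
# A duty cycle of 480/650 (about 74%) makes a 650 V bus look like roughly
# 480 V to the motor, on average.
print(round(pwm_average(bus, 480 / 650), 1))  # 480.0

# The carrier frequency sets how many on/off pulses occur per second;
# at the ~2 kHz compromise suggested above, each 60 Hz output cycle is
# built from roughly 33 pulses.
carrier_hz = 2_000
pulses_per_cycle = carrier_hz / 60
print(pulses_per_cycle)
```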
Hopefully taking this in-depth look at the small part an IGBT plays has helped you to understand the overall functionality of a VFD as well. Check out our other featured articles for everything you need to know about VFDs and motors at www.vfds.com/blog, or click on the banner below to find the VFD you are looking for out of our 2,000+ inventory!

Leave a Comment

9 Comment(s)

Mitchell Friedman: 27/07/2017, 08:50:46 AM Reply
We require a small (1 amp @ 120v 60 Hz input) power supply to provide 120v 60 Hz adjustable to 75 Hz output for a production application.

rahul gupta: 06/01/2017, 06:18:52 AM Reply
Commen good block for learning

Amruth: 25/08/2016, 03:18:19 AM Reply
what is the max temperature that IGBT can withstand in AC drives?? What is the max temp AC drives can withstand??

Ayoob: 04/06/2016, 07:44:14 AM Reply
Is it possible to run the motor of 275 kW with 250kw drive?

admin: 06/06/2016, 08:04:43 AM
Hello Ayoob, If the VFD has a maximum current rating higher than the motor's full load amperage (FLA), and there is no reason the VFD would need to be de-rated (high elevation, high temperature, single-phase input), then the VFD will be able to drive the motor without a problem.

naresh kumar: 27/11/2015, 11:38:12 PM Reply
Principal

nikan: 15/08/2015, 07:58:17 AM Reply
Hi, thanks for nice site. Does the output of the inverter 200 V to 12 V, 12 V, 50 Hz with reduced and motor launches. Thank you.

Jeff Vollin: 29/07/2015, 02:43:57 PM Reply
Where do you get IGBT's that switch in under a nanosecond? I am not familiar with any device that fast, and I would be very interested if it existed.

vikas: 10/09/2016, 11:05:48 PM
take any faulty VFD break it u will find a three terminal device in it which will be in rectangular in shape. if u want more knowledge about it then refer to its manual.

Marius Hauki: 17/05/2015, 05:50:55 AM Reply
The rotor loss in a squirrel caged asynchronous AC motor is proportional to Pmech_axle * s/(1-s) where s = (ns - n) / ns.
ns = synchronous speed; n = axle speed; s = slip.
If a voltage speed regulation method is used, the motor moves closer to Mmax and s increases. The rotor losses increase significantly since the ratio s/(1-s) increases. Furthermore Mmax is proportional to Uin squared, so the M (mechanical moment) lost may be significant. If a slip ring rotor resistance regulation method is used, s still increases and loss increases, but the moment Mmax is constant, just moved lower in rpm. If a frequency based speed regulation method (VFD) is used, the ns is altered so s (the slip) can be held much lower. Therefore the rotor loss should be lower. The Mmax is also more or less constant. Therefore a VFD method gives less loss when we look at the fundamental frequency of the drive current. Harmonics may give loss components, but the VFD should have proper filtering and design to prevent harmonic current loss in the motor.

al archambault: 30/12/2014, 11:32:43 AM Reply
Your comment about higher carrier frequencies increasing losses in the motor is not correct. A higher carrier frequency actually reduces the motor losses but as you said higher carrier frequency increases the losses in the inverter section of the VFD.

admin: 09/01/2015, 04:23:00 PM
Al, Thank you for your comment. We always try to give as accurate and reliable information as possible. We spoke with several of our engineers to learn their experiences with how a higher or lower carrier frequency affects a motor. They really felt that the majority of the time, losses either way (with a higher or lower carrier frequency) were negligible. They have, however, seen that with a high carrier frequency there were occasions where the motor would run hotter. We'll make sure to update our post to make that more clear, so thank you for bringing that to our attention.
Mackenzie
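The slip relations quoted in the comment above are easy to check numerically; the motor values below are illustrative:

```python
def slip(ns_rpm, n_rpm):
    """Slip s = (ns - n) / ns for an induction motor."""
    return (ns_rpm - n_rpm) / ns_rpm

def rotor_loss(p_mech_w, s):
    """Rotor loss proportional to Pmech * s / (1 - s), per the comment above."""
    return p_mech_w * s / (1 - s)

ns = 1500.0                   # 4-pole motor on a 50 Hz supply
for n in (1455.0, 1350.0):    # near-rated slip vs. a high-slip operating point
    s = slip(ns, n)
    print(f"n = {n} rpm, s = {s:.3f}, rotor loss = {rotor_loss(10_000, s):.0f} W")
```

The loss grows much faster than the slip itself, which is the commenter's point: a VFD that lowers ns to match the load keeps slip, and therefore rotor loss, small.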
Leave all your printed circuit board work to us :)

4 posts filed under the topic "Foreign Articles"

Practical PCB Layout Tips

Engineers tend to pay the most attention to circuits, the latest components, and code as important parts of an electronics project, but sometimes a critical component of electronics, the PCB layout, is neglected. Poor PCB layout can cause function and reliability problems. This article contains practical PCB layout tips that can help your PCB projects work correctly and reliably.

0 comments | Agree: 0 | Disagree: 0 | Ali Padash

What's a PCB?

Overview

One of the key concepts in electronics is the printed circuit board, or PCB. It's so fundamental that people often forget to explain what a PCB is. This tutorial will break down what makes up a PCB and some of the common terms used in the PCB world.

Blank PCB from the ClockIt Kit

Over the next few pages, we'll discuss the composition of a printed circuit board, cover some terminology, look at methods of assembly, and discuss briefly the design process behind creating a new PCB.

What's a PCB?

Printed circuit board is the most common name, but it may also be called a "printed wiring board" or "printed wiring card". Before the advent of the PCB, circuits were constructed through a laborious process of point-to-point wiring. This led to frequent failures at wire junctions and short circuits when wire insulation began to age and crack.

Mass of wire wrap (courtesy Wikipedia user Wikinaut)

A significant advance was the development of wire wrapping, where a small-gauge wire is literally wrapped around a post at each connection point, creating a gas-tight connection that is highly durable and easily changeable. As electronics moved from vacuum tubes and relays to silicon and integrated circuits, the size and cost of electronic components began to decrease.
Electronics became more prevalent in consumer goods, and the pressure to reduce the size and manufacturing costs of electronic products drove manufacturers to look for better solutions. Thus was born the PCB.

LilyPad PCB

PCB is an acronym for printed circuit board. It is a board that has lines and pads connecting various points together. In the picture above, there are traces that electrically connect the various connectors and components to each other. A PCB allows signals and power to be routed between physical devices. Solder is the metal that makes the electrical connections between the surface of the PCB and the electronic components. Being metal, the solder also serves as a strong mechanical adhesive.

Composition

A PCB is sort of like a layer cake or lasagna: there are alternating layers of Continue ...

0 comments | Agree: 0 | Disagree: 0 | Ali Padash

Six Things to Consider When Designing Your PCB

Designing a PCB for one of today's products can be very complex, but this aspect of things is often overlooked. Instead, the focus falls upon the more "interesting" aspects of the product, like the FPGAs or MCUs. The fact remains, however, that unless the board is designed correctly in the first place, you are going to run into issues sooner or later.

0 comments | Agree: 0 | Disagree: 0 | Ali Padash

The Importance Of IPC Standards For PCB Manufacturing

Technological advances have ensured that printed circuit boards can not only perform complex functions but can also be produced inexpensively. This is the exact reason why PCBs are an integral part of so many devices. However, the quality of the device is directly proportional to the quality of the PCB used. PCB failure can, therefore, have debilitating consequences wherein entire systems can fail. It is therefore extremely important to stick to quality measures in the PCB design and manufacturing process.
0 comments | Agree: 0 | Disagree: 0 | Ali Padash
Wednesday 29 November 2023

Get Fresh & Clean Water through an RO Water Purifier

Natural or untreated water contains contaminants that are classified into physical, chemical and biological categories. Physical contaminants include visible matter such as mud, dirt, sediment and suspended materials. Chemical contaminants include naturally occurring organic chemicals, salts, metals, pesticides and so on. Biological contaminants cover microbes such as bacteria, viruses and parasites (cysts).

Basic pre-filtration techniques are used to remove physical contaminants and are commonly used in many water purifiers. Various technologies such as UV, ozonation, ultrafiltration and biocidal resins are used to remove biological contaminants. Removing dissolved chemicals/solids, however, is a challenge because of their size and complex nature. Activated carbon is able to remove some organic chemicals and chlorine, but it cannot remove heavy metals or pesticides effectively. This is where reverse osmosis (RO) is required.

RO is a membrane separation process in which water is passed under high pressure through a semi-permeable membrane. The process removes excess TDS (Total Dissolved Solids) and chemical contaminants such as nitrate, fluoride, arsenic, other heavy metals and pesticides from water, so it becomes safe for drinking. Only 20-30% of the input water is recovered as purified water; the remaining 70-80%, which carries the concentrated contaminants, is drained.

RO, however, is not applicable to all types of water because of key technology limitations. It works with very tiny pore-sized membranes that molecularly separate all the dissolved chemical contaminants. Its design, however, cannot discriminate between the "good" and the "bad" chemicals. It is therefore important that RO technology be used only for waters where the resulting benefits exceed its limitations.
Because of the deterioration of freshwater sources and the colossal increase in population, use of groundwater that is high in dissolved solids has become more common. Many people complain of heaviness or salinity in water. This happens when the TDS exceeds 500 mg/L, or when hardness (calcium and magnesium) exceeds 200 mg/L. To make this water potable according to the BIS norms, as well as to achieve acceptable taste, the use of RO is very important. If such high-TDS water is consumed over the long term, the excess minerals accumulate and put stress on the kidneys, which might lead to kidney stones, an acute health issue. Because of anthropogenic activities, water can also contain high levels of heavy metals such as lead and arsenic, which should be removed by RO in a household application in order to render drinking water safe. Other available technologies (such as UV, boiling or resin-based purifiers) are not able to remove these heavy metals to acceptable global standards.

However, it is very important to use RO technology only in the correct water conditions; do not use it indiscriminately. The customer must be made aware of both the positives and the limitations of RO technology. Educating customers is very important, as the selection of a water purifier for the home must be made according to their household requirements, preferences and the quality of the input water. Water quality in India keeps changing due to both human and natural factors. Therefore, it is necessary to test the water source before purchasing a water purifier. After good research, you are ready to buy a water purifier or an RO water purifier.
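The recovery figures above can be illustrated with a back-of-the-envelope mass balance. A minimal sketch, assuming a 25% recovery rate and 95% salt rejection (both illustrative values within the ranges quoted above):

```python
# Simple RO mass balance: the feed splits into permeate (purified water)
# and reject (drain). Recovery and salt-rejection figures are illustrative
# assumptions, not manufacturer data.

def ro_balance(feed_litres, feed_tds_mg_l, recovery=0.25, rejection=0.95):
    """Return (permeate_litres, permeate_tds, reject_litres, reject_tds)."""
    permeate = feed_litres * recovery
    reject = feed_litres - permeate
    permeate_tds = feed_tds_mg_l * (1.0 - rejection)
    # Mass balance: salt that does not pass into the permeate
    # is concentrated in the reject stream.
    reject_tds = (feed_litres * feed_tds_mg_l - permeate * permeate_tds) / reject
    return permeate, permeate_tds, reject, reject_tds

# Feed: 100 L of water at 800 mg/L TDS (above the 500 mg/L BIS guideline).
p, p_tds, r, r_tds = ro_balance(100, 800)
# -> 25 L of permeate at 40 mg/L; 75 L drained at ~1053 mg/L
```

The sketch shows both sides of the trade-off the article describes: the permeate drops well below the 500 mg/L threshold, but three quarters of the feed water is drained at an even higher concentration.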
Prediction: South-Facing Ivy Growth Smaller than North-Facing Ivy

I predict that the ivy plant that grows on the south side of the wall will be smaller than the north-facing ivy. This is because the south-facing plants have fewer limiting factors affecting them, such as the availability of light: the Earth is tilted on its axis towards the sun, so the south side receives more sunlight. The purpose of the leaves is to photosynthesise, so I think that the north leaves will be bigger, as they need a larger surface area in order to photosynthesise at the same rate as the smaller south ivy.

Conclusion:

For the ivy growing on the north side of the wall, the results are generally very varied. The graphs show skewed results, as there is an uneven distribution of growth by the plant, and there is no pattern in the data collected. Between 50-80mm of north petiole length there are more results, with the highest count being 6 petioles at 75-80mm in length.

The north leaf length has the most results in the 30-35mm group, with 11 leaves in this category. Again there is an uneven distribution, but the numbers seem to decline as the length gets bigger. The most common width is between 40-45mm, with 9 leaves, and most of the ivy plants have a leaf width of 35-70mm before a significant decline at 70-75mm, with only one plant. The ivy growing on the south side has results which aren't as varied as those of the north-side ivy.
There is a much more even distribution amongst the plants. This suggests to me that they have fewer limiting factors acting against them. These limiting factors can affect the rate of photosynthesis in a plant; they include light intensity, carbon dioxide levels and temperature. The equation for photosynthesis is:

6CO2 + 6H2O --(light)--> C6H12O6 + 6O2

This equation shows that you need the input variables, which in this case are carbon dioxide and water, to produce the output variables, which are oxygen and glucose. Both light intensity and carbon dioxide levels feature in the equation, but temperature doesn't. However, photosynthesis is driven by enzymes that work better in warmer conditions; if the temperature is too hot, they become denatured and therefore cannot carry out their function.

The south petiole length groups of 25-30mm and 30-35mm have the same number of ivy plants, which is 11, with the 20-25mm and 35-40mm groups containing 9 and 8. This instantly shows a different picture to the north graphs, as those results didn't increase and decrease steadily but grew statically and erratically. For the south leaf width, there were 14 plants between 30-35mm, the highest number in any group, and the south leaf length also had 14 plants, this time in the 35-40mm group.

If I compare the most common result for petiole length, leaf width and leaf length for the north and south sides, I can see a considerable difference in the sizes of the leaves:

            North            South
Petiole     75-80mm (6/50)   25-30mm and 30-35mm (11/50)
Width       40-45mm (9/50)   30-35mm (14/50)
Length      30-35mm (11/50)  35-40mm (14/50)

Using this table I can see that the south-side ivy has grown to similar sizes, ranging from about 25-40mm, whilst the north-side ivy ranges from 30-80mm; that is a 50mm spread on the north side and a 15mm spread on the south side.
This tells me that there are more limiting factors affecting the ivy plants on the north side of the wall. Factors affecting the growth of the ivy on the north side can be temperature, water and carbon dioxide. All these factors are needed in photosynthesis, as shown by the equation:

6CO2 + 6H2O --(light)--> C6H12O6 + 6O2

Plants need to photosynthesise; they use the energy for carbohydrates, proteins and fats. If there is an increase or decrease in temperature, the enzymes that catalyse this process are denatured, which means that photosynthesis is affected. Plants also need sunlight to photosynthesise, and there is more sunlight on the south side of the wall, because the Earth is tilted on its axis towards the sun. This can show why the petiole lengths are longer: they need to grow longer so that their leaves can reach the sunlight to photosynthesise. This agrees with my prediction, as I said that the south-side plants would be smaller than the north-side plants. This is also proven by the averages of each category, shown below in a table.

Averages (mm):

                 North ivy   South ivy
Petiole length   75.1        31.8
Leaf length      46.9        34.66
Leaf width       61.36       31.96

In each category the averages show that the north ivy has a larger petiole length, leaf length and leaf width, as it has had to adapt to its surroundings due to the factors affecting it. So this table of results shows that my prediction is correct, as the ivy on the south side of the wall is smaller than the north-facing ivy.

Transpiration can also be a limiting factor in this process. Transpiration is the loss of water from a plant. It is caused by evaporation of water from inside the leaves via the stomata. The greatest rate of transpiration occurs in hot, dry and windy conditions. To prevent this from occurring, plants have a waxy layer (cuticle) on their leaves, which stops them losing too much water. You will find that plants in hot climates have had to adapt by having a thicker layer of wax.
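The comparison in the averages table can also be expressed as a short calculation. The values are copied from the table; the script itself is only illustrative:

```python
# Mean leaf dimensions (mm) copied from the averages table above.
north = {"petiole_length": 75.1, "leaf_length": 46.9, "leaf_width": 61.36}
south = {"petiole_length": 31.8, "leaf_length": 34.66, "leaf_width": 31.96}

# For every measurement the north-facing ivy is larger, consistent with the
# prediction that shaded leaves grow bigger to capture more light.
for key in north:
    diff = north[key] - south[key]
    ratio = north[key] / south[key]
    print(f"{key}: north exceeds south by {diff:.2f} mm ({ratio:.2f}x)")
```

The petiole difference is the most striking: the north petioles are well over twice as long on average, which fits the idea that they must grow further to reach the light.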
This can affect the ivy leaves because there will be more water vapour on the south side, as the temperature is higher, so the air is more saturated, causing less transpiration to occur. The north leaves have a large surface area that can aid transpiration, but they have long petioles that restrict surface area and make transpiration more difficult; this is an example of a plant adapting to its environment. So the north ivy leaves are more varied than the south, as shown by the results, supporting my prediction.

The results confirm that my prediction is correct. This is due to the Earth's tilt on its axis causing the availability of sunlight to be more limited on the north side. This caused the north ivy to grow larger leaves and petioles to deal with the situation, as they need a bigger surface area to trap the sunlight for photosynthesis and longer petioles to reach the sunlight on the south side of the wall. This is shown by the results and portrayed by the graphs. In conclusion, the petiole lengths, leaf widths and leaf lengths are larger on the north-facing ivy than the south-facing ivy, due to the north side being in shadow because of the Earth's axis, which in turn causes the leaves and petioles to grow longer and bigger in order to carry out photosynthesis with the sunlight available.

Evaluation

The results show that generally the south ivy is smaller than the north ivy. This is due to the position of the leaves on the wall and the factors that have affected their growth. The most important factor that I think caused a difference between these plants is the availability of sunlight due to their position, north or south. This is based on the fact that my results only show the sizes that the leaves and petioles grew to. If the experiment were done again, then temperature and availability of sunlight could be measured; I would measure sunlight and temperature levels with a solar meter.
If the levels recorded were different (for example, if the south received more sunlight and had a higher temperature), this would support my conclusion, since I argued that there are more limiting factors affecting the north ivy: sunlight is needed for photosynthesis, and temperature is needed to activate the enzymes that catalyse it. This is shown by the results, as the north petiole lengths are longer because they need to grow further to reach the sunlight.

As I did not carry out the experiment myself, I have to assume that it was done as a fair test, with the same variables used each time, for example the same ivy plant used to measure leaf length, leaf width and petiole length. From my graphs I can see that there are some anomalous results, for example in the north petiole lengths: the results increase to a peak of 6 between 75-80mm, but in the next group, between 80-85mm, there are no results. This could be due to inaccurate measurement of the plant or an error in the data collected. Another reason for anomalous results is genetic difference, which could be due to the limiting factors that have affected the north ivy plants. Since the leaves generally have to grow longer and larger to obtain sunlight for photosynthesis, some of the leaves may grow to excess, and likewise some may not even reach the average size. Also, if the leaves were picked randomly from the top or bottom of the plant, this too would make a difference, as the top leaves would have more sunlight available, meaning they would have a smaller surface area. Finally, the results were given as whole numbers, so there could be a degree of inaccuracy if decimal places were not used. However, as my prediction agreed with the results obtained, I would say that the experiment was successful, since my hypothesis that the south-side ivy plant would be smaller was correct.
This enabled me to write a conclusion with the scientific evidence needed to support my prediction. There was enough data for me to produce some good graphs with many different groups and sizes. This too helped me to conclude that my hypothesis is correct, as I could determine ratios, averages and percentages, and also see whether the south plants were smaller than the north plants or vice versa. To ensure that the measurements recorded were accurate, if I were to do the experiment again I would increase the sample size from fifty to a hundred to get a wider, more reliable range of results. Again, the averages, ratios and percentages would be recorded to see if they coincided with the prediction. I could also test the pH of the soil where the ivy plants grow, as this too can be a factor that limits or aids growth, for example if the soil is too acidic or alkaline. I would collect soil samples from each side of the wall and filter them through filter paper into a beaker of water. I would then use universal indicator, observe what colour the solution changes to, and compare the colours against a pH chart. If the two sides differed, this result would support the conclusion, as soil pH could affect the process of photosynthesis. The colour of the leaves could be recorded against a colour chart, and the total height could be measured; this can also indicate the amount of chlorophyll in the plant, which is needed in photosynthesis. This too can support the conclusion, as I know from my results that the north ivy leaves were bigger in size and thus had a larger surface area. The larger surface area could mean that there is more chlorophyll present, or the same amount as in the smaller south ivy leaf; if that is the case, then genetic variation has occurred and the plant has had to adapt to its surroundings. The total heights of the north and south ivy plants can be subtracted to note the difference.
Also, the location where each plant is growing could be recorded, for example under a tree, or at the top or bottom of a hill. All these factors can help further the investigation to determine why the dimensions of the north and south ivy plants differ.

Updated: Apr 29, 2023

Cite this page: Prediction: South-Facing Ivy Growth Smaller than North-Facing Ivy. (2020, Jun 02). Retrieved from https://studymoose.com/ivy-plants-new-essay
© ebm-papst

The formula for the overall efficiency of turbocompressors

The higher the overall efficiency of a turbo compressor, the less electrical energy it needs to do the same work, but the efficiency of each individual component is crucial.

Ahmet Çokşen, Group Leader Product Management (Photo | ebm-papst)

If you want to save money and energy with your fans, motors, and turbo compressors, efficiency – or η – is probably the most important indicator. After all, it shows how efficiently machines can convert power input into useful power output. If efficiency is high, a large proportion of power input gets to where it is supposed to go. If, on the other hand, it is low, power is mainly lost as waste heat.

The efficiency referred to in this formula is the overall efficiency ηoverall of an oil-free turbo compressor. It is the mathematical product of many other efficiencies, namely of each individual component that delivers power when compressing refrigerants and other gases. This includes the upstream power electronics (ηPowerElectronics), which control and transmit the electric current to the motor, as well as the motor (ηMotor), which converts the electrical energy into mechanical energy. The efficiency of the oil-free gas bearings (ηGasBearing) is also important, because it indicates how smoothly the rotor with the compressor impeller runs in the bearing, in terms of power loss and wear, while rotating at up to 300,000 revolutions per minute to compress gases efficiently.

The last piece of the puzzle for calculating the overall efficiency of a compressor is the efficiency of the compressor stage with the compressor impeller (ηAerodynamics). Compared with the other components, this formula breaks down exactly how this efficiency is made up. First, the ideal power consumption of the compressor stage (PAero,isen) – i.e.
the power that the stage could provide if there were no losses.

This compressor map for the refrigerant propane (R290) shows the overall efficiency of a compressor, with red representing high efficiency and blue low efficiency. The diagram shows the speeds (black lines), pressure ratios (Y-axis) and refrigerant flow rates per second (X-axis) at which the compressor works most efficiently. (Photo | ebm-papst)

Second, the actual power consumption (PAero) and the power loss due to leakage (PLeakage). The ratio of the ideal power to the power actually used, including leakage, then describes the efficiency of the compressor stage. Finally, the efficiencies of all the components can be multiplied together. The result is the overall efficiency of the turbo compressor: a value between 0 and 1. A result of 0.7 means that the turbo compressor puts 70 percent of its electrical input power into compressing gases.
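The chain of efficiencies described above can be sketched in a few lines of code. All component values below are illustrative assumptions, not ebm-papst figures:

```python
# Overall efficiency of an oil-free turbo compressor as the product of its
# component efficiencies. All numbers below are illustrative assumptions.

def stage_efficiency(p_isentropic_w, p_actual_w, p_leakage_w):
    """Compressor-stage efficiency: ideal (isentropic) power over the power
    actually used, including leakage losses."""
    return p_isentropic_w / (p_actual_w + p_leakage_w)

def overall_efficiency(eta_power_electronics, eta_motor, eta_gas_bearing, eta_aero):
    """Overall efficiency as the product of the component efficiencies."""
    return eta_power_electronics * eta_motor * eta_gas_bearing * eta_aero

eta_aero = stage_efficiency(8_000, 9_500, 300)        # ~0.816
eta = overall_efficiency(0.97, 0.94, 0.99, eta_aero)  # ~0.737
# A result of ~0.74 means ~74% of the electrical input goes into compressing gas.
```

The structure mirrors the article's formula: any single weak component drags the product down, which is why each link in the chain, from power electronics to impeller, matters.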
SciCombinator Discover the most talked about and latest scientific content & concepts. Concept: Symptom 174 While age and the APOE ε4 allele are major risk factors for Alzheimer’s disease (AD), a small percentage of individuals with these risk factors exhibit AD resilience by living well beyond 75 years of age without any clinical symptoms of cognitive decline. Concepts: Alzheimer's disease, DNA, Medicine, Gene, Genetics, Epidemiology, Symptom, Apolipoprotein E 172 Objective To investigate whether symptomatic treatment with non-steroidal anti-inflammatory drugs (NSAIDs) is non-inferior to antibiotics in the treatment of uncomplicated lower urinary tract infection (UTI) in women, thus offering an opportunity to reduce antibiotic use in ambulatory care.Design Randomised, double blind, non-inferiority trial.Setting 17 general practices in Switzerland.Participants 253 women with uncomplicated lower UTI were randomly assigned 1:1 to symptomatic treatment with the NSAID diclofenac (n=133) or antibiotic treatment with norfloxacin (n=120). The randomisation sequence was computer generated, stratified by practice, blocked, and concealed using sealed, sequentially numbered drug containers.Main outcome measures The primary outcome was resolution of symptoms at day 3 (72 hours after randomisation and 12 hours after intake of the last study drug). The prespecified principal secondary outcome was the use of any antibiotic (including norfloxacin and fosfomycin as trial drugs) up to day 30. Analysis was by intention to treat.Results 72/133 (54%) women assigned to diclofenac and 96/120 (80%) assigned to norfloxacin experienced symptom resolution at day 3 (risk difference 27%, 95% confidence interval 15% to 38%, P=0.98 for non-inferiority, P<0.001 for superiority). The median time until resolution of symptoms was four days in the diclofenac group and two days in the norfloxacin group. 
A total of 82 (62%) women in the diclofenac group and 118 (98%) in the norfloxacin group used antibiotics up to day 30 (risk difference 37%, 28% to 46%, P<0.001 for superiority). Six women in the diclofenac group (5%) but none in the norfloxacin group received a clinical diagnosis of pyelonephritis (P=0.03).Conclusion Diclofenac is inferior to norfloxacin for symptom relief of UTI and is likely to be associated with an increased risk of pyelonephritis, even though it reduces antibiotic use in women with uncomplicated lower UTI.Trial registration ClinicalTrials.gov NCT01039545. Concepts: Kidney, Urinary tract infection, Symptom, Symptomatic treatment, Non-steroidal anti-inflammatory drug, Diclofenac, Antibiotic, Ciprofloxacin 168 Solitary rectal ulcer syndrome (SRUS) is a benign and chronic disorder well known in young adults and less in children. It is often related to prolonged excessive straining or abnormal defecation and clinically presents as rectal bleeding, copious mucus discharge, feeling of incomplete defecation, and rarely rectal prolapse. SRUS is diagnosed based on clinical symptoms and endoscopic and histological findings. The current treatments are suboptimal, and despite correct diagnosis, outcomes can be unsatisfactory. Some treatment protocols for SRUS include conservative management such as family reassurance, regulation of toilet habits, avoidance of straining, encouragement of a high-fiber diet, topical treatments with salicylate, sulfasalazine, steroids and sucralfate, and surgery. In children, SRUS is relatively uncommon but troublesome and easily misdiagnosed with other common diseases, however, it is being reported more than in the past. This condition in children is benign; however, morbidity is an important problem as reflected by persistence of symptoms, especially rectal bleeding. In this review, we discuss current diagnosis and treatment for SRUS. 
Concepts: Medicine, Disease, Asthma, Medical terms, Surgery, Symptom, Rectum, Rectal prolapse 168 Hair-pulling disorder (trichotillomania, HPD) is a disabling condition that is characterized by repetitive hair-pulling resulting in hair loss. Although there is evidence of structural grey matter abnormalities in HPD, there is a paucity of data on white matter integrity. The aim of this study was to explore white matter integrity using diffusion tensor imaging (DTI) in subjects with HPD and healthy controls. Sixteen adult female subjects with HPD and 13 healthy female controls underwent DTI. Hair-pulling symptom severity, anxiety and depressive symptoms were also assessed. Tract-based spatial statistics were used to analyze data on fractional anisotropy (FA), mean diffusivity (MD), axial diffusivity (AD) and radial diffusivity (RD). There were no differences in DTI measures between HPD subjects and healthy controls. However, there were significant associations of increased MD in white matter tracts of the fronto-striatal-thalamic pathway with longer HPD duration and increased HPD severity. Our findings suggest that white matter integrity in fronto-striatal-thalamic pathways in HPD is related to symptom duration and severity. The molecular basis of measures of white matter integrity in HPD deserves further exploration. Concepts: Anxiety, Statistics, Symptoms, Symptom, White matter, Diffusion MRI, Imaging, Tensors 167 Chronic day-to-day symptoms of orthostatic intolerance are the most notable features of postural orthostatic tachycardia syndrome (POTS). However, we have encountered patients with such symptoms and excessive tachycardia but with no symptoms during the tilt-table test (TTT). We aimed to investigate whether POTS patients with chronic orthostatic intolerance always present orthostatic symptoms during the TTT and analyze the factors underlying symptom manifestation during this test. 
Concepts: Medical terms, Cardiology, Symptoms, Symptom, Orthostatic hypotension, Post-concussion syndrome, Postural orthostatic tachycardia syndrome, Orthostatic intolerance 165 BACKGROUND: To assess the clinical and laboratory parameters, response to therapy and development of antituberculosis (TB) drug resistance in pulmonary TB (PTB) patients with diabetes mellitus (DM) and without DM. METHODS: Using a prospective design, 227 of 310 new cases of culture-positive PTB diagnosed at the Queen Savang Vadhana Memorial Hospital and the Chonburi Hospital between April 2010 and July 2012 that met the study criteria were selected. Data regarding clinical and laboratory parameters, drug susceptibility and treatment outcomes were compared between PTB patients with DM and those without DM. To control for age, the patients were stratified into two age groups (< 50 and ≥ 50 years) and their data were analysed. RESULTS: Of the 227 patients, 37 (16.3%) had DM, of which 26 (70.3%) had been diagnosed with DM prior to PTB diagnosis and 11 (29.7%) had developed DM at PTB diagnosis. After controlling for age, no significant differences were found between the two groups regarding mycobacterium burden, sputum-culture conversion rate, evidence of multidrug-resistant tuberculosis, frequency of adverse drug events from anti-TB medications, treatment outcomes and relapse rate. The presenting symptoms of anorexia (p = 0.050) and haemoptysis (p = 0.036) were observed significantly more frequently in PTB patients with DM, while the presenting symptom of cough was observed significantly more frequently in PTB patients without DM (p = 0.047). CONCLUSIONS: Plasma glucose levels should be monitored in all newly diagnosed PTB patients and a similar treatment regimen should be prescribed to PTB patients with DM and those without DM in high TB-burden countries. 
Concepts: Pharmacology, Medical terms, Diabetes mellitus, The Canon of Medicine, Blood sugar, Symptoms, Symptom, Tuberculosis 163 Japanese encephalitis virus (JEV) causes acute central nervous system (CNS) disease in humans, in whom the clinical symptoms vary from febrile illness to meningitis and encephalitis. However, the mechanism of severe encephalitis has not been fully elucidated. In this study, using a mouse model, we investigated the pathogenetic mechanisms that correlate with fatal JEV infection. Following extraneural infection with the JaOArS982 strain of JEV, infected mice exhibited clinical signs ranging from mild to fatal outcome. Comparison of the pathogenetic response between severe and mild cases of JaOArS982-infected mice revealed increased levels of TNF-α in the brains of severe cases. However, unexpectedly, the mortality rate of TNF-α KO mice was significantly increased compared with that of WT mice, indicating that TNF-α plays a protective role against fatal infection. Interestingly, there were no significant differences of viral load in the CNS between WT and TNF-α KO mice. However, exaggerated inflammatory responses were observed in the CNS of TNF-α KO mice. Although these observations were also obtained in IL-10 KO mice, the mortality and enhanced inflammatory responses were more pronounced in TNF-α KO mice. Our findings therefore provide the first evidence that TNF-α has an immunoregulatory effect on pro-inflammatory cytokines in the CNS during JEV infection and consequently protects the animals from fatal disease. Thus, we propose that the increased level of TNF-α in severe cases was the result of severe disease, and secondly that immunopathological effects contribute to severe neuronal degeneration resulting in fatal disease. In future, further elucidation of the immunoregulatory mechanism of TNF-α will be an important priority to enable the development of effective treatment strategies for Japanese encephalitis. 
Concepts: Inflammation, Central nervous system, Nervous system, Brain, Infection, Symptom, Mouse, Encephalitis

Mast cell activation disease (MCAD) is a term referring to a heterogeneous group of disorders characterized by aberrant release of variable subsets of mast cell (MC) mediators together with accumulation of either morphologically altered and immunohistochemically identifiable mutated MCs due to MC proliferation (systemic mastocytosis [SM] and MC leukemia [MCL]) or morphologically ordinary MCs due to decreased apoptosis (MC activation syndrome [MCAS] and well-differentiated SM). Clinical signs and symptoms in MCAD vary depending on disease subtype and result from excessive mediator release by MCs and, in aggressive forms, from organ failure related to MC infiltration. In most cases, treatment of MCAD is directed primarily at controlling the symptoms associated with MC mediator release. In advanced forms, such as aggressive SM and MCL, agents targeting MC proliferation such as kinase inhibitors may be provided. Targeted therapies aimed at blocking mutant protein variants and/or downstream signaling pathways are currently being developed. Other targets, such as specific surface antigens expressed on neoplastic MCs, might be considered for the development of future therapies. Since clinicians are often underprepared to evaluate, diagnose, and effectively treat this clinically heterogeneous disease, we seek to familiarize clinicians with MCAD and review current and future treatment approaches.

Concepts: Immune system, DNA, Protein, Cancer, Mast cell, Symptom, Medical sign, Mastocytosis

Clinical signs and symptoms of different airway pathogens are generally indistinguishable, making laboratory tests essential for clinical decisions regarding isolation and antiviral therapy.
Immunochromatographic tests (ICT) and direct immunofluorescence assays (DFA) have lower sensitivities and specificities than molecular assays, but have the advantage of quick turnaround times and ease-of-use.

Concepts: Virus, Chemistry, Symptom, Antiviral drug, Influenza, Assay, Medical sign, Human respiratory syncytial virus

Chronic fatigue syndrome (CFS) is a complex, multisystem disorder that can be disabling. CFS symptoms can be provoked by increased physical or cognitive activity, and by orthostatic stress. In preliminary work, we noted that CFS symptoms also could be provoked by application of longitudinal neural and soft tissue strain to the limbs and spine of affected individuals. In this study we measured the responses to a straight leg raise neuromuscular strain maneuver in individuals with CFS and healthy controls. We randomly assigned 60 individuals with CFS and 20 healthy controls to either a 15 minute period of passive supine straight leg raise (true neuromuscular strain) or a sham straight leg raise. The primary outcome measure was the symptom intensity difference between the scores during and 24 hours after the study maneuver compared to baseline. Fatigue, body pain, lightheadedness, concentration difficulties, and headache scores were measured individually on a 0-10 scale, and summed to create a composite symptom score. Compared to individuals with CFS in the sham strain group, those with CFS in the true strain group reported significantly increased body pain (P = 0.04) and concentration difficulties (P = 0.02) as well as increased composite symptom scores (all P = 0.03) during the maneuver. After 24 hours, the symptom intensity differences were significantly greater for the CFS true strain group for the individual symptom of lightheadedness (P = 0.001) and for the composite symptom score (P = 0.005).
During and 24 hours after the exposure to the true strain maneuver, those with CFS had significantly higher individual and composite symptom intensity changes compared to the healthy controls. We conclude that a longitudinal strain applied to the nerves and soft tissues of the lower limb is capable of increasing symptom intensity in individuals with CFS for up to 24 hours. These findings support our preliminary observations that increased mechanical sensitivity may be a contributor to the provocation of symptoms in this disorder.

Concepts: Symptoms, Symptom, Tissues, Soft tissue, Fatigue, Post-concussion syndrome, Chronic fatigue syndrome, Straight leg raise
Dumbo: America's First Forgotten NTR

Hello, and welcome back to Beyond NERVA! Today, in our first post in the Forgotten Reactors series, we're going back to the beginnings of astronuclear engineering, and returning to nuclear thermal propulsion as well, looking at one of the reactors that's had a cult following since the 1950s: the pachydermal rocket known as DUMBO.

In a nuclear thermal rocket, the path that the propellant takes has a strong impact on how hard it is to predict the way that the propellant will move through the reactor. Anyone who's dealt with a corroded steam central heating system that won't quit knocking, no matter what you try, has dealt with the root of the problem: fluid behavior in a set of tubes only makes sense, and doesn't cause problems, if you can be sure you know what's going on. That's not only counter-intuitively hard, it's one of the subjects that (on the fine scale, in boundary conditions, and in other extremes) tends to lead toward chaos theory more than traditional fluid dynamics of ANY sort, much less once you add in the complications of heat transport.

However, if you can have the gas flow through the reactor for longer, you can get greater efficiency, less mass, and many other advantages. This was first proposed in the Dumbo reactor at the beginning of Project Rover, alongside the far more familiar Kiwi reactors. Rather than have the gas flow from one end of the reactor to the other through straight pipes, like in Kiwi, the propellant in Dumbo would flow part of the way down the reactor core, then move radially (sideways) for a while, and then return to flowing along the main axis of the reactor before exiting the nozzle. Because of the longer flow path, and a unique fuel element and core geometry, Dumbo seemed to offer the promise of both less volume and less mass for the same amount of thrust.
Additionally, this change offered the ability to place thermally sensitive materials more evenly across the reactor, due to the distribution of the cold propellant through the fuel element structure.

Dumbo ended up being canceled, in part, because the equations needed to ensure that fatal flow irregularities wouldn't occur were beyond the tools of the day, and the promised advantages didn't materialize, either. None of this means that Dumbo was a bad idea, just an idea ahead of its time – an idea with inspiration to offer. Dumbo's progeny live on. In fact, we've covered both the fuel element form AND the modern incarnation of the fuel geometry in the blog before! Today's knowledge of materials, advanced flow modeling, cutting edge carbide fuels, and the beginnings of a renaissance in nuclear design are breathing new life into the program even today, and the fundamental concept remains an attractive (if complex) one.

The First Forgotten Reactor

Early Dumbo cutaway drawing with flow path

In the early days of astronuclear engineering, there was a lot of throwing pasta at the wall, and looking to see what stuck. Many more slide rules than are involved in the average family's spaghetti dinner preparations went into determining whether the pasta was done, but a large number of designs were proposed, and research eventually settled down into four potentially near-term useful categories: radioisotope power supplies, nuclear thermal propulsion, nuclear electric propulsion, and nuclear explosive propulsion (which we usually call nuclear pulse propulsion). Each of these ended up being explored extensively, and a number of other novel concepts have been proposed over the years as well. In the beginning, however, research tended toward either the NTR or NPP designs, with two major programs: ROVER and ORION. Orion was fairly narrowly focused from the beginning, owing to the problems of making an efficient, low-mass, easily deployable, reliable, and cheap shaped nuclear charge – the physics drove the design.
Rover, on the other hand, had many more options available to it, and some competition as to what the best design was. In the earliest days of the atomic era, uncertainty about which way to go, and the lack of knowledge in both nuclear and materials science, often limited Rover as much as the lack of fuel for their reactors did! This led to some promising designs being discarded. Some were resurrected, some weren't, but the longest lived of the less-initially-preferred designs is our subject for today.

Dumbo was initially proposed in the literature in 1955. Two years later, a far more developed report was issued to address many of the challenges with the concept. The idea would continue to bounce around in the background of astronuclear engineering design until 1991, when it was resurrected… but more on that later.

The concept was very different from the eventual NERVA concept (based on the Phoebus test reactor during Rover) in a number of ways, but two stand out:

1. Fuel element type and arrangement: The eventual Rover elements used uranium oxide or carbide suspended within graphite flour, which was then solidified; in Dumbo the fissile fuel was "metal." However, the designers used this term differently than how it would be used today: rather than have the entire fuel element be metal, as we've seen in Kilopower, the fuel was uranium oxide pellets suspended in some type of high temperature metal. Today, we call this CERMET, for ceramic metal composite, and it is the current favorite for NASA's nuclear thermal propulsion program.

2. Flow pattern: While both the initial Rover concepts (the Kiwi reactors) and the eventual NERVA engines used straight-through, axial propellant flow, which is simple to model mathematically, Dumbo's flow path started the same (going from the nozzle end to the spacecraft end, cooling the reflectors and control components), but once it reached the top of the reactor and started flowing toward the nozzle, things changed.
The flow would start going toward the nozzle through a central column, but be diverted through sets of corrugated fuel "washers" and spacers, making two 90 degree turns as it did so. This was called a "folded flow" system. A host of other differences were also present throughout the reactor and control systems, but these two differences were the biggest when comparing the two nearly-simultaneously developed systems.

The biggest advantages offered by the basic concept were the ability to go to higher temperatures in the core, and the ability to have a more compact and less massive reactor for the same thrust level. Additionally, at the time it seemed like the testing would be far simpler to do, because it appeared that the number of tests needed, and the requirements of those tests, would make the testing program both simpler and cheaper compared to the competing Kiwi design concept.

Sadly, these advantages weren't sufficient to keep the project alive, and Kiwi ended up winning in the down-selection process. In 1959, the Dumbo portion of Rover was canceled. There were two stated main reasons: first, there were no weight savings seen between the two systems upon in depth analysis; second, the manufacture of the components for the reactor required high precision out of at-the-time exotic materials. Another concern, which was not mentioned at the time of cancellation but applies to certain variations on this reactor, is the complex flow pattern in the reactor, something we'll touch on briefly later.

Contrary to popular belief, Dumbo's design isn't dead. The fuel type has changed, and many of the nuclear design considerations for the reactor have also changed, but the core concept of a stacked set of fuel discs and a folded flow pattern through the core of the reactor remains.
The concept was originally revived as the Advanced Dumbo, proposed by Bill Kirk at LANL in 1990, which advocated for the use of carbide fuels to increase the reactor core temperature, as well as a move to a solid disc with grooves cut in it. This was proposed at the same time as many other concepts for nuclear thermal rockets in the bout of optimism in the early 1990s, but funding was given instead to the pebble bed NTR, another concept that we'll cover. The washer concept itself evolved into the Tricarbide Grooved Ring NTR currently under investigation at the Marshall Space Flight Center, under the direction of Brian Taylor and William Emrich, a concept we covered already in the carbide fuel post, but will briefly review again at the end of this post.

Is Dumbo Really a Metal Reactor?

At the time, this was called a metal reactor, but there's metal and there's metal. Metal fuels aren't uncommon in nuclear reactor design. CANDU reactors are one of the most common reactor types in operation today, and use metal fuel. New designs, such as Kilopower in space and the Westinghouse eVinci reactor on Earth, also use metal fuels, alloying the uranium with another metal to improve either the chemical, thermal, or nuclear properties of the fuel itself. However, there are a few general problems (and exceptions to those problems) with metal fuels. In general, metal fuels have a low melting point, which is exactly what is undesirable in a nuclear thermal rocket, where core temperature is the main driving factor for efficiency, even ahead of propellant mass. Additionally, there can be neutronic complications, in that many metals which are useful as fuel matrix materials are also neutron poisons, reducing the available power of the reactions in the core. On the flip side, metals generally offer the best thermal conductivity of any class of material.
CERMET fuel micrograph, image NASA

Rather than a metal alloy fuel such as CANDU or Kilopower reactors use, Dumbo used uranium oxide embedded in a refractory metal matrix. For those that have been following the blog for a while, this isn't metal, it's CERMET (ceramic-metal composite), the same type of fuel that NASA is currently exploring with the LEU NTP program. However, the current challenges involved in developing this fuel type are a wonderful illustration as to why it was considered a stretch in the 1950s. For a more in-depth discussion on CERMET fuels, check out our blog post on CERMET fuels in their modern incarnation here: https://beyondnerva.com/2018/01/19/leu-ntp-part-two-cermet-fuel-nasas-path-to-nuclear-thermal-propulsion/

The metal matrix of these fuel elements was meant to be molybdenum initially, with the eventual stretch goal of using tungsten. Tungsten was still brand new, and remains a challenge to manufacture in certain cases. Metallurgists and fabricators are still working on improving our ability to use tungsten, and isotopically enriching it (in order to reduce the number of neutrons lost to the metal) is still beyond the technical capabilities of American metallurgical firms. The Dumbo fuel elements were to be stamped in order to account for the complex geometries involved, although there was a new set of challenges involved with this forming process, including ensuring even distribution of the fissile fuel through the stamped material.

Folded Flow Reactors: Why, and How Hard?

Perhaps the biggest challenge in Dumbo wasn't the material the fuel elements were made of, but the means of transferring the heat into the propellant. This was due to a couple of potential issues: first, the propellant passed through a more convoluted than typical path through the reactor, and second, the reactor was meant to be a laminar flow heat exchanger, the first time that this would have been done.
Dumbo fuel stack flow pattern, original image DOE

Each Dumbo core had a number of sets of fuel washers, moderator spacers, and securing components stacked into cylinders. The propellant would flow through the Be reflector, into the central opening of the fuel elements, and then flow out of the fuel elements, exiting around the perimeter of the cylinder. This would then be directed out the nozzle to provide thrust. By going through so many twists and turns, and having so much surface area available for heat transfer, the propellant could be more thoroughly heated than in a more typical prismatic fuel element, such as we see with the later Kiwi and Phoebus reactors. As with folded optics in telescopes, folded flow paths allow for more linear distance traveled in the same volume. A final advantage is that, because of the shape and arrangement of the washers, only a small amount of material would need to be tested, at a relatively minor 1.2 kW power level, to verify the material requirements of the reactor.

Timber Wind NTR, image DOE

This sort of flow path isn't unique to Dumbo. Reactors fueled with TRISO fuel, which uses beads of fissile fuel coated in pyrolytic carbon, fission product containment materials, and other layers, have a very complex flow path through the reactor, increasing the linear distance traveled from one end of the core to the other well beyond the linear dimensions of the reactor. The differences mainly arise in the fuel geometry, not the concept of a non-axial flow.

The challenge is modeling the flow of the coolant through the bends in the reactor. It's relatively easy for hot spots to develop if the fluid has to change directions in the wrong way, and conversely cold spots can develop as well. Ensuring that neither of these happens is a major challenge in heat exchanger design, a subject that I'm far from qualified to comment on.
The unique concept at the time was that this was meant to be a laminar flow heat exchanger (the fuel elements themselves form the heat exchanger). Laminar fluid flow, in broad terms, means that all of the molecules in the fluid are moving together. The opposite of laminar flow is turbulent flow, where eddies form in the fluid that move in directions other than the main direction of fluid flow. While the term may bring up images of white water rapids (and that's not a bad place to start), the level of turbulence varies depending on the system, and indeed the level of turbulence in a heat exchanger modifies how much heat is transferred from the hot surface to the coolant fluid. Since the molecules are moving together in the same direction during laminar flow, the eddies that are a major component of heat transfer in some designs are no longer present, reducing the efficiency of heat transport through the working fluid. However, in some designs (those with a low Reynolds number, the dimensionless ratio of inertial to viscous forces that characterizes the flow regime), laminar flow can be more efficient than turbulent flow. For more discussion on the efficiency of laminar vs turbulent flow in heat exchangers, check out this paper by Patil et al: http://www.ijirset.com/upload/2015/april/76_Comparative-1.pdf

For a rocket engine, the presence of laminar flow makes the rocket itself more efficient, since all of the molecules are moving in the same direction: straight out of the nozzle. The better collimated, or directional, the propellant flow is, the more thrust efficient the engine will be. Therefore, despite the fact that laminar flow is less efficient at transferring heat, the heat that is transferred can be more efficiently imparted as kinetic energy to the spacecraft. In the case of Dumbo, the use of a large number of small orifices in the fuel elements allows for the complete transfer of the heat of the nuclear reaction into the propellant, allowing for the efficient use of laminar flow heat exchange.
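To make the laminar/turbulent distinction concrete, here's a minimal Python sketch of the standard pipe-flow Reynolds number test. The fluid property values below are illustrative assumptions, not Dumbo design figures:

```python
def reynolds_number(density, velocity, diameter, viscosity):
    """Re = rho * v * D / mu for flow in a circular channel."""
    return density * velocity * diameter / viscosity

def is_laminar(re, critical=2300.0):
    """Pipe flow is conventionally laminar below Re ~ 2300."""
    return re < critical

# A narrow Dumbo-style channel: low-density hot hydrogen, ~1 mm passage.
# (Values chosen only to illustrate the trend.)
re_small = reynolds_number(density=0.09, velocity=10.0,
                           diameter=0.001, viscosity=8.8e-6)

# A wider prismatic-style channel with faster flow.
re_large = reynolds_number(density=0.09, velocity=100.0,
                           diameter=0.05, viscosity=8.8e-6)

print(re_small, is_laminar(re_small))   # small channel stays laminar
print(re_large, is_laminar(re_large))   # larger channel goes turbulent
```

The point of the sketch is that shrinking the channel diameter (and slowing the flow) drives Re down sharply, which is why a heat exchanger built from many tiny orifices can plausibly operate in the laminar regime while a large open channel cannot.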
This also greatly simplifies the basic design calculations of the fluid dynamics of the reactor, since laminar flow is easy to calculate, while fully modeling turbulence requires computational tools that didn't exist at the time. However, establishing and maintaining laminar flow in the reactor was rightly seen as a major challenge at the time, and even over three decades later the challenges involved in this part of the design remained a point of contention about the feasibility of the laminar heat exchanger concept in this particular application.

Another distinct advantage of this layout is that the central annulus of each fuel element stack was filled with propellant that, while it had cooled the radial reflector, remained quite cool compared to the rest of the reactor. This meant that materials containing high hydrogen content, in this case a special form of plastic foam, could be distributed throughout the reactor. As a result, the neutron spectrum of the reactor could be more consistent, ensuring more uniform fissioning of the fuel across the active region of the reactor, and a material could be chosen that allows for greater neutron moderation than the graphite fuel element matrix of a Kiwi-type reactor. A variation of this concept can be seen as well in the Russian RD-0410 and -0411, which have all of their fuel around the outer circumference of the reactor's pressure vessel and a large moderator column running down the center of the core. This allows the center of the core to be far cooler, and to contain far more thermally sensitive materials as a result.

The Death of Dumbo

Sadly, the advantages of this reactor geometry weren't sufficient to keep the program alive. In 1959, Dumbo gained the dubious distinction of being the first NTR concept that underwent study and development to be canceled in the US (perhaps even worldwide).
Kiwi, the father and grandfather of all other Rover flight designs, was the winner, and the prismatic fuel element geometry remains the preferred design even today. According to the Quarterly Status Report of LASL ROVER Program for Period Ending September 20, 1959, two factors caused the cancellation of the reactor: the first was that, despite early hopes, the reactor's mass offered no advantages over an equivalent Kiwi reactor; the second was that the challenges involved in the fabrication and testing of many of the novel components required, and especially the requirements of manufacturing and working the UO2/Mo CERMET fuel elements to a sufficiently precise degree, promised a long and difficult development process for the reactor to come to fruition.

Dumbo remained an interesting and attractive design to students of astronuclear engineering from that point on. Mentions of the concept occur in most summaries of NTR design history, but sadly, it never attracted the funding to be developed. Even members of the public who are familiar with NTRs have often heard of Dumbo, even if they aren't familiar with any of the details. Just last month, there was a thread started on the NASASpaceFlight forum about Dumbo, reviving the concept in the public eye once again.

The Rebirth of Dumbo: the Advanced Dumbo Rocket

Advanced Dumbo fuel element stack. Notice the change in fuel shape due to the different material properties. Image NASA

In 1991, there was a major conference attended by NASA, DOE, and DOD personnel on the subject of NTRs, and the development of the next generation NTR system for American use to go to Mars. At this conference, Bill Kirk of Los Alamos National Laboratory presented a paper on Dumbo, which he had been involved in during its first iteration, and called for a revival of what he called a "folded flow washer type" NTR. This proposal, though, discarded the UO2/Mo CERMET fuel type in favor of a UC-ZrC carbide fuel element, to increase the fuel element maximum temperature.
For a more in-depth look at carbide fuel elements, and their use in NTRs, check out the carbide fuel element post here. As we discussed in the carbide post, there are problems with thermal stress cracking and complex erosive behaviors in carbide fuel elements, but the unique form factor of the grooved disc allows for better distribution of the stresses and less continuous structural components in the fuel elements themselves, allowing for better thermal behavior and less erosion.

Another large change from the classic Dumbo to the Advanced Dumbo was that the fluid flow through the reactor wasn't meant to be laminar; turbulent behavior was considered acceptable in this iteration. Other changes, including reflector geometry, were also incorporated to modernize the concept's support structures and ancillary equipment.

Timber Wind reactor, image Winchell Chung Atomic Rockets

Once again, though, the Dumbo concept, as well as the other concepts presented that had a folded flow pattern, were not selected. Instead, this conference led to the birth of the Timber Wind program, a pebble bed reactor design that we'll cover in the future. Again, though, the concept of increasing the surface area compared to the axial length of the reactor was an inherent part of this design, and a TRISO pebble bed reactor shares some of the same advantages as a washer-type reactor would.

The Second Rebirth: the Tricarbide Grooved Ring NTR

Tricarbide Grooved Ring NTR fuel element stack. Notice the return of more complex geometry as materials design and fabrication of carbides has improved. Image NASA

Washer type reactors live on today, and in many ways the current iteration of the design is remarkably similar to the Advanced Dumbo concept. Today, the research is centered at the Marshall Space Flight Center, with both Oak Ridge National Laboratory and the University of Tennessee as partners in the program.
The Tricarbide Grooved Ring NTR (TCGR) was originally proposed in 2017, by Brian Taylor and Bill Emrich. While Bill Kirk is not mentioned in any of the papers on this new iteration of this reactor geometry, the carbide grooved washer architecture is almost identical to the Advanced Dumbo, so it's reasonable to assume that the TCGR design is at least inspired by the Advanced Dumbo concept of 27 years before (Bill Emrich is a very old hand in NTR design and development, and was active at the time of the conference mentioned above).

The latest iteration, the TCGR, is a design that we covered in the carbide fuel element post, and because of this, as well as the gross similarities between the Advanced Dumbo and TCGR, we won't go into many details here. If you want to learn more, please check out the TCGR page.

The biggest differences between the Advanced Dumbo and the TCGR are the flow pattern and the fuel element composition. The flow pattern is a simple change in one way, but in another way there's a big difference: rather than the cold end of the reactor being the central annular portion of the fuel element stack, the cold end became the exterior of the stack, with the hot propellant/coolant running down the center of the core. This difference is a fairly significant one from a fluid dynamics point of view, where the gas flow from the "hot end" of the reactor itself to the nozzle turns from a more diffuse set of annular gas flows into a series of columns coming out of each fuel element cluster. Whether this is easier to work with or not, and what the relative advantages are, is beyond my understanding, but [take this with a grain of salt, this is speculation] it seems like the more collimated gas flows would be able to integrate more easily into a single gaseous flow through the nozzle.

Similar to the simple but potentially profound change in the propellant flow path, the fuel element composition change is significant as well.
Rather than just using the UC-ZrC fuel composition, the TCGR uses a mix of uranium, zirconium, and tantalum carbides, in order to improve thermal properties as well as reduce stress fractures. For more information on this particular carbide type, check out the carbides post!

Funding is continuing for this concept, and while the focus is primarily on the CERMET LEU NTP engine under development by BWXT, the TCGR is still a viable and attractive concept, and one that balances the advantages and disadvantages of the washer-type, folded flow reactor. As more information on this fascinating reactor becomes available, I'll post updates on the reactor's page!

More Coming Soon!

This was the first of a new series, the Forgotten Reactors. Next week will be another post in the series, looking at the SP-100 reactor. We won't look at the reactor in too much depth, because it shares a lot of similarities with the SNAP-50 reactor's final iteration; instead we'll look at the most unique thing about this reactor: it was designed to be both launched and recovered by the Space Shuttle, leading to some unique challenges. While the STS is no longer flying, this doesn't mean that the lessons learned with this design process are useless, because they will apply to a greater or lesser extent to every reactor recovery operation that will be performed in the future, as well as to the challenges of having a previously-turned-on reactor in close proximity to the crew of a spacecraft with minimal shielding between the payload compartment and the crew cabin.
Sources

Dumbo — A Pachydermal Rocket Motor, DOE ID LAMS-1887, McInteer et al, Los Alamos Scientific Laboratory, 1955

A Metal Dumbo Rocket Reactor, DOE ID LA-2091, Knight et al, Los Alamos Scientific Laboratory, 1957 https://inis.iaea.org/collection/NCLCollectionStore/_Public/07/265/7265972.pdf?r=1&r=1

Quarterly Status Report of LASL Rover Program for Period Ending Sept 20, 1959, LAMS-2363

Dumbo, a Pachydermal Rocket Motor [Advanced Dumbo], Kirk, Los Alamos National Laboratory, 1992 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19920001882.pdf

Investigation of a Tricarbide Grooved Ring Fuel Element for a Nuclear Thermal Rocket, NASA ID 20170008951, Taylor et al, NASA MSFC, 2017
Conference paper: https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20170008951.pdf
Presentation slides: https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20170008940.pdf

4 Responses

1. Fascinating article. There's been so much done, and so long ago, that people can claim newness on these essentially old concepts! I did have a question. Where did you find the information that the eVinci reactor uses metal fuel? Early versions of Megapower which they licensed used UO2, but it's intriguing they made the switch.

2. One of the things that killed Dumbo was that the bean-counters required that it be tested with the nozzle that was designed for Nerva. I've also read that Dumbo sent more fuel out the exhaust. Dumbo needed those longer paths because it had smaller channels, smaller than the boundary layers in Nerva's turbulent flow, thereby reducing thermal resistance. Reducing thermal resistance meant a lower surface temperature for a given average fluid temperature.
Example #1

public void testLinenoFunctionCall() {
  AstNode root = parse("\nfoo.\n" + "bar.\n" + "baz(1);");
  ExpressionStatement stmt = (ExpressionStatement) root.getFirstChild();
  FunctionCall fc = (FunctionCall) stmt.getExpression();
  // Line number should get closest to the actual paren.
  assertEquals(3, fc.getLineno());
}

Example #2

@Override
public AstNode functionCall(AstNode target, Iterable<AstNode> arguments) {
  FunctionCall fc = new FunctionCall();
  fc.setTarget(target);
  if (!Iterables.isEmpty(arguments)) {
    fc.setArguments(list(arguments));
  }
  return fc;
}

Example #3

public void testJSDocAttachment4() {
  AstRoot root = parse("(function() {/** should not be attached */})()");
  assertNotNull(root.getComments());
  assertEquals(1, root.getComments().size());
  ExpressionStatement st = (ExpressionStatement) root.getFirstChild();
  FunctionCall fc = (FunctionCall) st.getExpression();
  ParenthesizedExpression pe = (ParenthesizedExpression) fc.getTarget();
  assertNull(pe.getJsDoc());
}

Example #4

public void testRegexpLocation() {
  AstNode root = parse("\nvar path =\n" + " replace(\n" + "/a/g," + "'/');\n");
  VariableDeclaration firstVarDecl = (VariableDeclaration) root.getFirstChild();
  List<VariableInitializer> vars1 = firstVarDecl.getVariables();
  VariableInitializer firstInitializer = vars1.get(0);
  Name firstVarName = (Name) firstInitializer.getTarget();
  FunctionCall callNode = (FunctionCall) firstInitializer.getInitializer();
  AstNode fnName = callNode.getTarget();
  List<AstNode> args = callNode.getArguments();
  RegExpLiteral regexObject = (RegExpLiteral) args.get(0);
  AstNode aString = args.get(1);
  assertEquals(1, firstVarDecl.getLineno());
  assertEquals(1, firstVarName.getLineno());
  assertEquals(2, callNode.getLineno());
  assertEquals(2, fnName.getLineno());
  assertEquals(3, regexObject.getLineno());
  assertEquals(3, aString.getLineno());
}
__label__pos
0.978626
re: Immutable Paper

I was digging into the nice ProseMirror module, which implements immutability at the core of an HTML WYSIWYG editor. That module uses two core immutability concepts:
• states (immutable; two states cannot be merged)
• transactions (propagate changes to other states, can accumulate)
When using it and reading the docs, I was struck by how these two concepts are perfectly mirrored by:
• matter (fermions; two particles cannot be in the same state)
• interactions (bosons; propagate changes to fermions, and can "accumulate" too)
The analogy seems to be so deep I haven't yet found the bottom of it. Even the way we describe the universe uses the concept of immutability :)

I'm not sure I know enough about the physics you're talking about. I do remember that Haskell's source control system used quantum entanglement math to model the interaction of patches.

Precisely: it follows (well, almost) directly from the non-commutativity of the interaction particles (which propagate state change). It's crazy shit!
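The state/transaction split described in the comment above can be sketched in a few lines. This is an illustrative pattern, not ProseMirror's actual API — the names `State`, `Transaction`, and `apply` are assumptions made for the sketch:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)  # frozen=True makes instances immutable
class State:
    doc: Tuple[str, ...]  # the "matter": an immutable document snapshot

@dataclass(frozen=True)
class Transaction:
    steps: Tuple[str, ...]  # the "interaction": changes that accumulate

    def add(self, step: str) -> "Transaction":
        # Adding a step returns a new transaction; the old one is untouched.
        return Transaction(self.steps + (step,))

def apply(state: State, tr: Transaction) -> State:
    # Applying a transaction never mutates the old state;
    # it produces a brand-new state carrying the changes.
    return State(doc=state.doc + tr.steps)

s0 = State(doc=("hello",))
tr = Transaction(steps=()).add("world").add("!")
s1 = apply(s0, tr)
print(s0.doc)  # the original state is untouched
print(s1.doc)  # the new state carries the accumulated steps
```

Two states are never merged in place; change only ever flows through a transaction, which is what makes undo/redo and collaborative editing tractable in this style.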
__label__pos
0.650631
WordPress.org Plugin Directory

Custom Header Extended
Allows users to create a custom header on a per-post basis.

1. Upload the custom-header-extended folder to your /wp-content/plugins/ directory.
2. Activate the "Custom Header Extended" plugin through the "Plugins" menu in WordPress.
3. Edit a post to add a custom header.

Requires: 3.6 or higher
Compatible up to: 3.9.6
Last Updated: 2014-5-17
Active Installs: 4,000+
Ratings: 5 out of 5 stars
Support: 0 of 2 support threads in the last two months have been resolved.
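Step 1 of the install amounts to copying the unpacked plugin folder into WordPress's wp-content/plugins/ directory. A minimal local sketch of that copy — the directory layout and file names below are illustrative, not part of the plugin:

```python
import shutil
import tempfile
from pathlib import Path

def install_plugin(plugin_src: Path, wp_root: Path) -> Path:
    """Copy an unpacked plugin folder into wp-content/plugins/."""
    dest = wp_root / "wp-content" / "plugins" / plugin_src.name
    shutil.copytree(plugin_src, dest)  # raises if dest already exists
    return dest

# Demo against a throwaway directory layout standing in for a WordPress site.
tmp = Path(tempfile.mkdtemp())
src = tmp / "custom-header-extended"
src.mkdir()
(src / "custom-header-extended.php").write_text("<?php // plugin main file")
wp = tmp / "site"
(wp / "wp-content" / "plugins").mkdir(parents=True)
installed = install_plugin(src, wp)
print(installed)
```

Activation (step 2) still happens in the WordPress admin UI; the copy only makes the plugin visible on the Plugins page.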
__label__pos
0.765087
Expert Reviewed

wikiHow to Recognize Spinal Meningitis Symptoms

Three Parts: Recognizing the Symptoms in Adults and Children | Watching for Signs of Meningitis in Infants | Understanding the Different Types | Community Q&A

Meningitis, sometimes referred to as spinal meningitis, is an inflammation of the membranes surrounding the brain and spinal cord. Meningitis is usually caused by a viral infection, but it can also be caused by a bacterial or fungal infection. Depending on the type of infection, meningitis can be easily curable or potentially life-threatening.

Part 1 Recognizing the Symptoms in Adults and Children

1. 1 Watch for a severe headache. Headaches caused by inflammation of the meninges, the membranes surrounding the brain and spinal cord, feel different from other types of headaches. They're much more severe than a headache you'd get from dehydration or even a migraine. A persistent, severe headache is commonly felt by people with meningitis. • A meningitis headache won't ease up after taking over-the-counter pain pills. • If a severe headache is felt without the presence of other common meningitis symptoms, the cause of the headache may be another illness. If the headache persists for more than a day or so, see a doctor.[1]
2. 2 Look for vomiting and nausea associated with the headache. Migraines often lead to vomiting and nausea, so these symptoms don't automatically point to meningitis. However, it's important to pay close attention to other symptoms if you or the person you're concerned about is feeling sick enough to vomit.[2]
3. 3 Check for a fever. A high fever, along with these other symptoms, could indicate that the problem is meningitis, rather than the flu or strep throat. Take the temperature of the person who is sick to determine whether a high fever is on the list of symptoms. • The fever related to meningitis is generally around 101 degrees, and any fever over 103 degrees Fahrenheit is cause for concern.[3]
4.
4 Determine whether the neck is stiff and sore. This is a very common symptom among those who have meningitis. The stiffness and soreness are caused by pressure from the inflamed meninges. If you or someone you know has a sore neck that doesn't seem to be related to other common causes of soreness and stiffness, like pulling a muscle or getting whiplash, meningitis might be the culprit. • If this symptom arises, have the person lie flat on his back and ask him to bend or flex his hips. When he does this, it should cause pain in the neck. This is a sign of meningitis.[4][5]
5. 5 Watch for concentration difficulties. Since the membranes around the brain become inflamed with meningitis, cognitive difficulties commonly occur among meningitis patients. The inability to finish reading an article, focus on a conversation, or complete a task, paired with a severe headache, could be a warning sign.[6] • He may not act like himself and may be more drowsy and lethargic than usual overall. • In rare cases, this can make the person anywhere from barely rousable to comatose.[7]
6. 6 Notice photophobia. Photophobia is intense pain caused by light. Eye pain and eye sensitivity are associated with meningitis in adults. If you or someone you know has trouble going outside or being in a room with bright lights, see your doctor. • This may manifest at first as a general sensitivity to, or fear of, bright lights. Watch for this behavior if other symptoms occur as well.[8]
7. 7 Look for seizures. Seizures are uncontrollable muscle movements, often violent in nature, which usually cause loss of bladder control and general disorientation. The person who underwent a seizure will likely not know what year it is, where they are, or how old they are right after the seizure is over. • If the person has epilepsy or a history of seizures, the seizures may not be a symptom of meningitis. • If you encounter someone having a seizure, call 911.
Roll the person on their side and move away any objects they might hit. Most seizures stop on their own within one to two minutes.[9]
8. 8 Look for the tell-tale rash. Certain types of meningitis, such as meningococcal meningitis, cause a rash to occur. The rash is reddish or purple and blotchy, and may be a sign of blood poisoning. If you see a rash, you can determine whether it was caused by meningitis by conducting the glass test:[10] • Press a glass against the rash. Use a clear glass so you can see the skin through it. • If the skin under the glass does not turn white, this indicates that blood poisoning may have occurred. Go to the hospital immediately. • Not all types of meningitis have a rash. The absence of a rash should not be taken as a sign that a person does not have meningitis.

Part 2 Watching for Signs of Meningitis in Infants

1. 1 Be aware of the challenges. Diagnosing meningitis in children, especially infants, is a challenge even for experienced pediatricians. Since so many benign and self-limited viral syndromes present similarly, with fever and a crying child, it can be hard to distinguish meningitis symptoms in small children and infants. This leads many hospital protocols and individual clinicians to have a very high suspicion for meningitis, especially for those children 3 months and younger who have only received one set in their series of vaccines.[11] • With good vaccination compliance, the number of cases of bacterial meningitis has decreased. Viral meningitis still occurs, but the presentation is mild and self-limited, with minimal care needed.
2. 2 Check for a high fever. Infants, like adults and children, develop a high fever with meningitis. Check your baby's temperature to determine if a fever is present. Whether or not meningitis is the cause, you should take your baby to the doctor if he or she has a fever.[12]
3. 3 Watch for constant crying.
This can be caused by many illnesses and other issues, but if your baby seems especially upset and won't be calmed by changing, feeding, and other measures you usually take, you should call the doctor. In combination with other symptoms, constant crying may be a sign of meningitis.[13] • Crying caused by meningitis usually can't be comforted. Look for differences in the baby's normal crying patterns. • Some parents report that babies become even more upset when they are picked up if meningitis is the issue. • Meningitis may cause babies to produce a cry that is higher-pitched than normal.[14]
4. 4 Look for sleepiness and inactivity. A sluggish, sleepy, irritable baby who is usually active may have meningitis. Look for noticeable behavioral differences that point to lower consciousness and an inability to fully wake up.[15]
5. 5 Pay attention to weak sucking during feedings. Babies with meningitis have a reduced ability to make the sucking motion during feeding. If your baby is having trouble sucking, call the doctor immediately.[16]
6. 6 Watch for changes in the baby's neck and body. If the baby seems to have trouble moving his or her head, and his or her body looks unusually rigid and stiff, this could be a sign of meningitis. • The child may also feel pain around her neck and back. It may be simple stiffness at first, but if the child seems in pain when moved, it may be more severe. Watch to see if she automatically brings her feet up to her chest when you bend her neck forward, or if she has pain when her legs are bent. • She may also be unable to straighten her lower legs if her hips are at a 90-degree angle. This presents in infants most often when their diapers are changed and you cannot pull their legs out.[17]

Part 3 Understanding the Different Types

1. 1 Learn about viral meningitis. Viral meningitis is usually self-limited and goes away on its own.
There are a few specific viruses such as the herpes simplex virus (HSV) and HIV that require specific goal directed therapy with antiviral drugs. Viral meningitis is spread person to person contact. A groups of viruses called enterovirus is the primary source and occur most typically in the late summer to early fall. • Despite it being possible to be spread by person to person contact, outbreaks of viral meningitis are rare.[18] 2. 2 Know about Streptococcus pneumoniae. There are three kinds of bacteria that cause bacterial meningitis, which is the most worrisome and lethal. Streptococcus pneumoniae is the most common form to strike infants, young children, and adults in the US. There is a vaccine for this bacteria, however, so it is curable. It is spread most commonly from a sinus or ear infection and should be suspected when a person with a prior sinus or ear infection develops symptoms of meningitis. • Certain people are at higher risk, such as those who do not have spleens and those who are older. Vaccination for these individuals is protocol. [19] 3. 3 Understand Neisseria meningitidis. Another bacteria that causes bacterial meningitis is Neisseria meningitidis. This is a highly contagious form that afflicts otherwise healthy adolescents and young adults. It is spread person to person and outbreaks occur in schools or dorms. It is particularly lethal, leading to multi-organ failure, brain damage, and death if not rapidly identified and started on intravenous antibiotics. • It also has the distinction of causing a “petechial” rash, meaning a rash that looks like lots of tiny bruises, and this is an important distinction to note. • Vaccination is recommended for all adolescents 11 to 12 years of age, with a booster at age 16. If no prior vaccine was given and the patient is 16, only one vaccination is required.[20] 4. 4 Learn about Haemophilus influenza (Hib). The third bacteria that causes bacterial meningitis is Haemophilus influenza. 
This used to be a very common cause of bacterial meningitis in infants and children. However, since a Hib vaccination protocol was introduced, rates have dropped dramatically. Between immigrants from countries that do not follow routine vaccination and parents who do not believe in vaccination, not everyone is protected against this form. • Obtaining an accurate vaccination history, preferably from the actual medical record or yellow vaccine card, is critical when this, or any, form of meningitis is considered.[21][22]
5. 5 Know about fungal meningitis. Fungal meningitis is rare and seen almost exclusively in those with AIDS or others with weakened immune systems. It is one of the AIDS-defining diagnoses, occurring when the person has very little immunity, is exceedingly fragile, and is at risk for almost any infection. The typical culprit is Cryptococcus. • The optimal prevention in an HIV-infected individual is compliance with antiretroviral therapy to keep viral loads low and T cells high to protect from this type of infection.[23]
6. 6 Take advantage of meningitis vaccines if necessary. It is recommended that the following groups with high risk of contracting meningitis have routine vaccinations: • All children ages 11-18 • U.S. military recruits • Anyone who has a damaged spleen or whose spleen has been removed • College freshmen living in dormitories • Microbiologists exposed to meningococcal bacteria • Anyone who has terminal complement component deficiency (an immune system disorder) • Anyone traveling to countries which have an outbreak of meningococcal disease • Those who might have been exposed to meningitis during an outbreak[24]

Sources and Citations
1. http://www.mayoclinic.com/health/meningitis/DS00118/DSECTION=symptoms
2.
http://www.emedicinehealth.com/meningitis_in_adults/page3_em.htm#adult_meningitis_symptoms_and_signs
3. http://www.emedicinehealth.com/meningitis_in_adults/page3_em.htm#adult_meningitis_symptoms_and_signs
Show more... (21)

Categories: Meningitis | Spine Disorders
Thanks to all authors for creating a page that has been read 306,116 times.
__label__pos
0.62883
# Generated from ltmain.m4sh.

# ltmain.sh (GNU libtool) 2.2.6b
# Written by Gordon Matzigkeit <[email protected]>, 1996

# Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005, 2006, 2007, 2008 Free Software Foundation, Inc.
# This is free software; see the source for copying conditions. There is NO
# warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

# GNU Libtool is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# As a special exception to the GNU General Public License,
# if you distribute this file as part of a program or library that
# is built using GNU Libtool, you may include this file under the
# same distribution terms that you use for the rest of that program.
#
# GNU Libtool is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with GNU Libtool; see the file COPYING. If not, a copy
# can be downloaded from http://www.gnu.org/licenses/gpl.html,
# or obtained by writing to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.

# Usage: $progname [OPTION]... [MODE-ARG]...
#
# Provide generalized library-building support services.
# # --config show all configuration variables # --debug enable verbose shell tracing # -n, --dry-run display commands without modifying any files # --features display basic configuration information and exit # --mode=MODE use operation mode MODE # --preserve-dup-deps don't remove duplicate dependency libraries # --quiet, --silent don't print informational messages # --tag=TAG use configuration variables from tag TAG # -v, --verbose print informational messages (default) # --version print version information # -h, --help print short or long help message # # MODE must be one of the following: # # clean remove files from the build directory # compile compile a source file into a libtool object # execute automatically set library path, then run a program # finish complete the installation of libtool libraries # install install libraries or executables # link create a library or an executable # uninstall remove libraries from an installed directory # # MODE-ARGS vary depending on the MODE. # Try `$progname --help --mode=MODE' for a more detailed description of MODE. # # When reporting a bug, please describe a test case to reproduce it and # include the following information: # # host-triplet: $host # shell: $SHELL # compiler: $LTCC # compiler flags: $LTCFLAGS # linker: $LD (gnu? $with_gnu_ld) # $progname: (GNU libtool) 2.2.6b Debian-2.2.6b-2ubuntu1 # automake: $automake_version # autoconf: $autoconf_version # # Report bugs to <[email protected]>. PROGRAM=ltmain.sh PACKAGE=libtool VERSION="2.2.6b Debian-2.2.6b-2ubuntu1" TIMESTAMP="" package_revision=1.3017 # Be Bourne compatible if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then emulate sh NULLCMD=: # Zsh 3.x and 4.x performs word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature. 
alias -g '${1+"$@"}'='"$@"' setopt NO_GLOB_SUBST else case `(set -o) 2>/dev/null` in *posix*) set -o posix;; esac fi BIN_SH=xpg4; export BIN_SH # for Tru64 DUALCASE=1; export DUALCASE # for MKS sh # NLS nuisances: We save the old values to restore during execute mode. # Only set LANG and LC_ALL to C if already set. # These must not be set unconditionally because not all systems understand # e.g. LANG=C (notably SCO). lt_user_locale= lt_safe_locale= for lt_var in LANG LANGUAGE LC_ALL LC_CTYPE LC_COLLATE LC_MESSAGES do eval "if test \"\${$lt_var+set}\" = set; then save_$lt_var=\$$lt_var $lt_var=C export $lt_var lt_user_locale=\"$lt_var=\\\$save_\$lt_var; \$lt_user_locale\" lt_safe_locale=\"$lt_var=C; \$lt_safe_locale\" fi" done $lt_unset CDPATH : ${CP="cp -f"} : ${ECHO="echo"} : ${EGREP="/bin/grep -E"} : ${FGREP="/bin/grep -F"} : ${GREP="/bin/grep"} : ${LN_S="ln -s"} : ${MAKE="make"} : ${MKDIR="mkdir"} : ${MV="mv -f"} : ${RM="rm -f"} : ${SED="/bin/sed"} : ${SHELL="${CONFIG_SHELL-/bin/sh}"} : ${Xsed="$SED -e 1s/^X//"} # Global variables: EXIT_SUCCESS=0 EXIT_FAILURE=1 EXIT_MISMATCH=63 # $? = 63 is used to indicate version mismatch to missing. EXIT_SKIP=77 # $? = 77 is used to indicate a skipped test to automake. exit_status=$EXIT_SUCCESS # Make sure IFS has a sensible default lt_nl=' ' IFS=" $lt_nl" dirname="s,/[^/]*$,," basename="s,^.*/,," # func_dirname_and_basename file append nondir_replacement # perform func_basename and func_dirname in a single function # call: # dirname: Compute the dirname of FILE. If nonempty, # add APPEND to the result, otherwise set result # to NONDIR_REPLACEMENT. # value returned in "$func_dirname_result" # basename: Compute filename of FILE. # value retuned in "$func_basename_result" # Implementation must be kept synchronized with func_dirname # and func_basename. For efficiency, we do not delegate to # those functions but instead duplicate the functionality here. func_dirname_and_basename () { # Extract subdirectory from the argument. 
func_dirname_result=`$ECHO "X${1}" | $Xsed -e "$dirname"` if test "X$func_dirname_result" = "X${1}"; then func_dirname_result="${3}" else func_dirname_result="$func_dirname_result${2}" fi func_basename_result=`$ECHO "X${1}" | $Xsed -e "$basename"` } # Generated shell functions inserted here. # Work around backward compatibility issue on IRIX 6.5. On IRIX 6.4+, sh # is ksh but when the shell is invoked as "sh" and the current value of # the _XPG environment variable is not equal to 1 (one), the special # positional parameter $0, within a function call, is the name of the # function. progpath="$0" # The name of this program: # In the unlikely event $progname began with a '-', it would play havoc with # func_echo (imagine progname=-n), so we prepend ./ in that case: func_dirname_and_basename "$progpath" progname=$func_basename_result case $progname in -*) progname=./$progname ;; esac # Make sure we have an absolute path for reexecution: case $progpath in [\\/]*|[A-Za-z]:\\*) ;; *[\\/]*) progdir=$func_dirname_result progdir=`cd "$progdir" && pwd` progpath="$progdir/$progname" ;; *) save_IFS="$IFS" IFS=: for progdir in $PATH; do IFS="$save_IFS" test -x "$progdir/$progname" && break done IFS="$save_IFS" test -n "$progdir" || progdir=`pwd` progpath="$progdir/$progname" ;; esac # Sed substitution that helps us do robust quoting. It backslashifies # metacharacters that are still active within double-quoted strings. Xsed="${SED}"' -e 1s/^X//' sed_quote_subst='s/\([`"$\\]\)/\\\1/g' # Same as above, but do not quote variable references. double_quote_subst='s/\(["`\\]\)/\\\1/g' # Re-`\' parameter expansions in output of double_quote_subst that were # `\'-ed in input to the same. If an odd number of `\' preceded a '$' # in input to double_quote_subst, that '$' was protected from expansion. # Since each input `\' is now two `\'s, look for any number of runs of # four `\'s followed by two `\'s and then a '$'. `\' that '$'. 
bs='\\' bs2='\\\\' bs4='\\\\\\\\' dollar='\$' sed_double_backslash="\ s/$bs4/&\\ /g s/^$bs2$dollar/$bs&/ s/\\([^$bs]\\)$bs2$dollar/\\1$bs2$bs$dollar/g s/\n//g" # Standard options: opt_dry_run=false opt_help=false opt_quiet=false opt_verbose=false opt_warning=: # func_echo arg... # Echo program name prefixed message, along with the current mode # name if it has been set yet. func_echo () { $ECHO "$progname${mode+: }$mode: $*" } # func_verbose arg... # Echo program name prefixed message in verbose mode only. func_verbose () { $opt_verbose && func_echo ${1+"$@"} # A bug in bash halts the script if the last line of a function # fails when set -e is in force, so we need another command to # work around that: : } # func_error arg... # Echo program name prefixed message to standard error. func_error () { $ECHO "$progname${mode+: }$mode: "${1+"$@"} 1>&2 } # func_warning arg... # Echo program name prefixed warning message to standard error. func_warning () { $opt_warning && $ECHO "$progname${mode+: }$mode: warning: "${1+"$@"} 1>&2 # bash bug again: : } # func_fatal_error arg... # Echo program name prefixed message to standard error, and exit. func_fatal_error () { func_error ${1+"$@"} exit $EXIT_FAILURE } # func_fatal_help arg... # Echo program name prefixed message to standard error, followed by # a help hint, and exit. func_fatal_help () { func_error ${1+"$@"} func_fatal_error "$help" } help="Try \`$progname --help' for more information." ## default # func_grep expression filename # Check whether EXPRESSION matches any line of FILENAME, without output. func_grep () { $GREP "$1" "$2" >/dev/null 2>&1 } # func_mkdir_p directory-path # Make sure the entire path to DIRECTORY-PATH is available. 
func_mkdir_p () { my_directory_path="$1" my_dir_list= if test -n "$my_directory_path" && test "$opt_dry_run" != ":"; then # Protect directory names starting with `-' case $my_directory_path in -*) my_directory_path="./$my_directory_path" ;; esac # While some portion of DIR does not yet exist... while test ! -d "$my_directory_path"; do # ...make a list in topmost first order. Use a colon delimited # list incase some portion of path contains whitespace. my_dir_list="$my_directory_path:$my_dir_list" # If the last portion added has no slash in it, the list is done case $my_directory_path in */*) ;; *) break ;; esac # ...otherwise throw away the child directory and loop my_directory_path=`$ECHO "X$my_directory_path" | $Xsed -e "$dirname"` done my_dir_list=`$ECHO "X$my_dir_list" | $Xsed -e 's,:*$,,'` save_mkdir_p_IFS="$IFS"; IFS=':' for my_dir in $my_dir_list; do IFS="$save_mkdir_p_IFS" # mkdir can fail with a `File exist' error if two processes # try to create one of the directories concurrently. Don't # stop in that case! $MKDIR "$my_dir" 2>/dev/null || : done IFS="$save_mkdir_p_IFS" # Bail out if we (or some other process) failed to create a directory. test -d "$my_directory_path" || \ func_fatal_error "Failed to create \`$1'" fi } # func_mktempdir [string] # Make a temporary directory that won't clash with other running # libtool processes, and avoids race conditions if possible. If # given, STRING is the basename for that directory. func_mktempdir () { my_template="${TMPDIR-/tmp}/${1-$progname}" if test "$opt_dry_run" = ":"; then # Return a directory name, but don't create it in dry-run mode my_tmpdir="${my_template}-$$" else # If mktemp works, use that first and foremost my_tmpdir=`mktemp -d "${my_template}-XXXXXXXX" 2>/dev/null` if test ! 
-d "$my_tmpdir"; then # Failing that, at least try and use $RANDOM to avoid a race my_tmpdir="${my_template}-${RANDOM-0}$$" save_mktempdir_umask=`umask` umask 0077 $MKDIR "$my_tmpdir" umask $save_mktempdir_umask fi # If we're not in dry-run mode, bomb out on failure test -d "$my_tmpdir" || \ func_fatal_error "cannot create temporary directory \`$my_tmpdir'" fi $ECHO "X$my_tmpdir" | $Xsed } # func_quote_for_eval arg # Aesthetically quote ARG to be evaled later. # This function returns two values: FUNC_QUOTE_FOR_EVAL_RESULT # is double-quoted, suitable for a subsequent eval, whereas # FUNC_QUOTE_FOR_EVAL_UNQUOTED_RESULT has merely all characters # which are still active within double quotes backslashified. func_quote_for_eval () { case $1 in *[\\\`\"\$]*) func_quote_for_eval_unquoted_result=`$ECHO "X$1" | $Xsed -e "$sed_quote_subst"` ;; *) func_quote_for_eval_unquoted_result="$1" ;; esac case $func_quote_for_eval_unquoted_result in # Double-quote args containing shell metacharacters to delay # word splitting, command substitution and and variable # expansion for a subsequent eval. # Many Bourne shells cannot handle close brackets correctly # in scan sets, so we specify it separately. *[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*|"") func_quote_for_eval_result="\"$func_quote_for_eval_unquoted_result\"" ;; *) func_quote_for_eval_result="$func_quote_for_eval_unquoted_result" esac } # func_quote_for_expand arg # Aesthetically quote ARG to be evaled later; same as above, # but do not quote variable references. func_quote_for_expand () { case $1 in *[\\\`\"]*) my_arg=`$ECHO "X$1" | $Xsed \ -e "$double_quote_subst" -e "$sed_double_backslash"` ;; *) my_arg="$1" ;; esac case $my_arg in # Double-quote args containing shell metacharacters to delay # word splitting and command substitution for a subsequent eval. # Many Bourne shells cannot handle close brackets correctly # in scan sets, so we specify it separately. 
*[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*|"") my_arg="\"$my_arg\"" ;; esac func_quote_for_expand_result="$my_arg" } # func_show_eval cmd [fail_exp] # Unless opt_silent is true, then output CMD. Then, if opt_dryrun is # not true, evaluate CMD. If the evaluation of CMD fails, and FAIL_EXP # is given, then evaluate it. func_show_eval () { my_cmd="$1" my_fail_exp="${2-:}" ${opt_silent-false} || { func_quote_for_expand "$my_cmd" eval "func_echo $func_quote_for_expand_result" } if ${opt_dry_run-false}; then :; else eval "$my_cmd" my_status=$? if test "$my_status" -eq 0; then :; else eval "(exit $my_status); $my_fail_exp" fi fi } # func_show_eval_locale cmd [fail_exp] # Unless opt_silent is true, then output CMD. Then, if opt_dryrun is # not true, evaluate CMD. If the evaluation of CMD fails, and FAIL_EXP # is given, then evaluate it. Use the saved locale for evaluation. func_show_eval_locale () { my_cmd="$1" my_fail_exp="${2-:}" ${opt_silent-false} || { func_quote_for_expand "$my_cmd" eval "func_echo $func_quote_for_expand_result" } if ${opt_dry_run-false}; then :; else eval "$lt_user_locale $my_cmd" my_status=$? eval "$lt_safe_locale" if test "$my_status" -eq 0; then :; else eval "(exit $my_status); $my_fail_exp" fi fi } # func_version # Echo version message to standard output and exit. func_version () { $SED -n '/^# '$PROGRAM' (GNU /,/# warranty; / { s/^# // s/^# *$// s/\((C)\)[ 0-9,-]*\( [1-9][0-9]*\)/\1\2/ p }' < "$progpath" exit $? } # func_usage # Echo short help message to standard output and exit. func_usage () { $SED -n '/^# Usage:/,/# -h/ { s/^# // s/^# *$// s/\$progname/'$progname'/ p }' < "$progpath" $ECHO $ECHO "run \`$progname --help | more' for full usage" exit $? } # func_help # Echo long help message to standard output and exit. 
func_help () { $SED -n '/^# Usage:/,/# Report bugs to/ { s/^# // s/^# *$// s*\$progname*'$progname'* s*\$host*'"$host"'* s*\$SHELL*'"$SHELL"'* s*\$LTCC*'"$LTCC"'* s*\$LTCFLAGS*'"$LTCFLAGS"'* s*\$LD*'"$LD"'* s/\$with_gnu_ld/'"$with_gnu_ld"'/ s/\$automake_version/'"`(automake --version) 2>/dev/null |$SED 1q`"'/ s/\$autoconf_version/'"`(autoconf --version) 2>/dev/null |$SED 1q`"'/ p }' < "$progpath" exit $? } # func_missing_arg argname # Echo program name prefixed message to standard error and set global # exit_cmd. func_missing_arg () { func_error "missing argument for $1" exit_cmd=exit } exit_cmd=: # Check that we have a working $ECHO. if test "X$1" = X--no-reexec; then # Discard the --no-reexec flag, and continue. shift elif test "X$1" = X--fallback-echo; then # Avoid inline document here, it may be left over : elif test "X`{ $ECHO '\t'; } 2>/dev/null`" = 'X\t'; then # Yippee, $ECHO works! : else # Restart under the correct shell, and then maybe $ECHO will work. exec $SHELL "$progpath" --no-reexec ${1+"$@"} fi if test "X$1" = X--fallback-echo; then # used as fallback echo shift cat <<EOF $* EOF exit $EXIT_SUCCESS fi magic="%%%MAGIC variable%%%" magic_exe="%%%MAGIC EXE variable%%%" # Global variables. # $mode is unset nonopt= execute_dlfiles= preserve_args= lo2o="s/\\.lo\$/.${objext}/" o2lo="s/\\.${objext}\$/.lo/" extracted_archives= extracted_serial=0 opt_dry_run=false opt_duplicate_deps=false opt_silent=false opt_debug=: # If this variable is set in any of the actions, the command in it # will be execed at the end. This prevents here-documents from being # left over by shells. exec_cmd= # func_fatal_configuration arg... # Echo program name prefixed message to standard error, followed by # a configuration failure hint, and exit. func_fatal_configuration () { func_error ${1+"$@"} func_error "See the $PACKAGE documentation for more information." func_fatal_error "Fatal configuration error." } # func_config # Display the configuration for all the tags in this script. 
func_config () { re_begincf='^# ### BEGIN LIBTOOL' re_endcf='^# ### END LIBTOOL' # Default configuration. $SED "1,/$re_begincf CONFIG/d;/$re_endcf CONFIG/,\$d" < "$progpath" # Now print the configurations for the tags. for tagname in $taglist; do $SED -n "/$re_begincf TAG CONFIG: $tagname\$/,/$re_endcf TAG CONFIG: $tagname\$/p" < "$progpath" done exit $? } # func_features # Display the features supported by this script. func_features () { $ECHO "host: $host" if test "$build_libtool_libs" = yes; then $ECHO "enable shared libraries" else $ECHO "disable shared libraries" fi if test "$build_old_libs" = yes; then $ECHO "enable static libraries" else $ECHO "disable static libraries" fi exit $? } # func_enable_tag tagname # Verify that TAGNAME is valid, and either flag an error and exit, or # enable the TAGNAME tag. We also add TAGNAME to the global $taglist # variable here. func_enable_tag () { # Global variable: tagname="$1" re_begincf="^# ### BEGIN LIBTOOL TAG CONFIG: $tagname\$" re_endcf="^# ### END LIBTOOL TAG CONFIG: $tagname\$" sed_extractcf="/$re_begincf/,/$re_endcf/p" # Validate tagname. case $tagname in *[!-_A-Za-z0-9,/]*) func_fatal_error "invalid tag name: $tagname" ;; esac # Don't test for the "default" C tag, as we know it's # there but not specially marked. case $tagname in CC) ;; *) if $GREP "$re_begincf" "$progpath" >/dev/null 2>&1; then taglist="$taglist $tagname" # Evaluate the configuration. Be careful to quote the path # and the sed script, to avoid splitting on whitespace, but # also don't use non-portable quotes within backquotes within # quotes we have to do it in 2 steps: extractedcf=`$SED -n -e "$sed_extractcf" < "$progpath"` eval "$extractedcf" else func_error "ignoring unknown tag $tagname" fi ;; esac } # Parse options once, thoroughly. This comes as soon as possible in # the script to make things like `libtool --version' happen quickly. 
{ # Shorthand for --mode=foo, only valid as the first argument case $1 in clean|clea|cle|cl) shift; set dummy --mode clean ${1+"$@"}; shift ;; compile|compil|compi|comp|com|co|c) shift; set dummy --mode compile ${1+"$@"}; shift ;; execute|execut|execu|exec|exe|ex|e) shift; set dummy --mode execute ${1+"$@"}; shift ;; finish|finis|fini|fin|fi|f) shift; set dummy --mode finish ${1+"$@"}; shift ;; install|instal|insta|inst|ins|in|i) shift; set dummy --mode install ${1+"$@"}; shift ;; link|lin|li|l) shift; set dummy --mode link ${1+"$@"}; shift ;; uninstall|uninstal|uninsta|uninst|unins|unin|uni|un|u) shift; set dummy --mode uninstall ${1+"$@"}; shift ;; esac # Parse non-mode specific arguments: while test "$#" -gt 0; do opt="$1" shift case $opt in --config) func_config ;; --debug) preserve_args="$preserve_args $opt" func_echo "enabling shell trace mode" opt_debug='set -x' $opt_debug ;; -dlopen) test "$#" -eq 0 && func_missing_arg "$opt" && break execute_dlfiles="$execute_dlfiles $1" shift ;; --dry-run | -n) opt_dry_run=: ;; --features) func_features ;; --finish) mode="finish" ;; --mode) test "$#" -eq 0 && func_missing_arg "$opt" && break case $1 in # Valid mode arguments: clean) ;; compile) ;; execute) ;; finish) ;; install) ;; link) ;; relink) ;; uninstall) ;; # Catch anything else as an error *) func_error "invalid argument for $opt" exit_cmd=exit break ;; esac mode="$1" shift ;; --preserve-dup-deps) opt_duplicate_deps=: ;; --quiet|--silent) preserve_args="$preserve_args $opt" opt_silent=: ;; --verbose| -v) preserve_args="$preserve_args $opt" opt_silent=false ;; --tag) test "$#" -eq 0 && func_missing_arg "$opt" && break preserve_args="$preserve_args $opt $1" func_enable_tag "$1" # tagname is set here shift ;; # Separate optargs to long options: -dlopen=*|--mode=*|--tag=*) func_opt_split "$opt" set dummy "$func_opt_split_opt" "$func_opt_split_arg" ${1+"$@"} shift ;; -\?|-h) func_usage ;; --help) opt_help=: ;; --version) func_version ;; -*) func_fatal_help 
"unrecognized option \`$opt'" ;; *) nonopt="$opt" break ;; esac done case $host in *cygwin* | *mingw* | *pw32* | *cegcc*) # don't eliminate duplications in $postdeps and $predeps opt_duplicate_compiler_generated_deps=: ;; *) opt_duplicate_compiler_generated_deps=$opt_duplicate_deps ;; esac # Having warned about all mis-specified options, bail out if # anything was wrong. $exit_cmd $EXIT_FAILURE } # func_check_version_match # Ensure that we are using m4 macros, and libtool script from the same # release of libtool. func_check_version_match () { if test "$package_revision" != "$macro_revision"; then if test "$VERSION" != "$macro_version"; then if test -z "$macro_version"; then cat >&2 <<_LT_EOF $progname: Version mismatch error. This is $PACKAGE $VERSION, but the $progname: definition of this LT_INIT comes from an older release. $progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION $progname: and run autoconf again. _LT_EOF else cat >&2 <<_LT_EOF $progname: Version mismatch error. This is $PACKAGE $VERSION, but the $progname: definition of this LT_INIT comes from $PACKAGE $macro_version. $progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION $progname: and run autoconf again. _LT_EOF fi else cat >&2 <<_LT_EOF $progname: Version mismatch error. This is $PACKAGE $VERSION, revision $package_revision, $progname: but the definition of this LT_INIT comes from revision $macro_revision. $progname: You should recreate aclocal.m4 with macros from revision $package_revision $progname: of $PACKAGE $VERSION and run autoconf again. _LT_EOF fi exit $EXIT_MISMATCH fi } ## ----------- ## ## Main. ## ## ----------- ## $opt_help || { # Sanity checks first: func_check_version_match if test "$build_libtool_libs" != yes && test "$build_old_libs" != yes; then func_fatal_configuration "not configured to build any kind of library" fi test -z "$mode" && func_fatal_error "error: you must specify a MODE." 
# Darwin sucks eval std_shrext=\"$shrext_cmds\" # Only execute mode is allowed to have -dlopen flags. if test -n "$execute_dlfiles" && test "$mode" != execute; then func_error "unrecognized option \`-dlopen'" $ECHO "$help" 1>&2 exit $EXIT_FAILURE fi # Change the help message to a mode-specific one. generic_help="$help" help="Try \`$progname --help --mode=$mode' for more information." } # func_lalib_p file # True iff FILE is a libtool `.la' library or `.lo' object file. # This function is only a basic sanity check; it will hardly flush out # determined imposters. func_lalib_p () { test -f "$1" && $SED -e 4q "$1" 2>/dev/null \ | $GREP "^# Generated by .*$PACKAGE" > /dev/null 2>&1 } # func_lalib_unsafe_p file # True iff FILE is a libtool `.la' library or `.lo' object file. # This function implements the same check as func_lalib_p without # resorting to external programs. To this end, it redirects stdin and # closes it afterwards, without saving the original file descriptor. # As a safety measure, use it only where a negative result would be # fatal anyway. Works if `file' does not exist. func_lalib_unsafe_p () { lalib_p=no if test -f "$1" && test -r "$1" && exec 5<&0 <"$1"; then for lalib_p_l in 1 2 3 4 do read lalib_p_line case "$lalib_p_line" in \#\ Generated\ by\ *$PACKAGE* ) lalib_p=yes; break;; esac done exec 0<&5 5<&- fi test "$lalib_p" = yes } # func_ltwrapper_script_p file # True iff FILE is a libtool wrapper script # This function is only a basic sanity check; it will hardly flush out # determined imposters. func_ltwrapper_script_p () { func_lalib_p "$1" } # func_ltwrapper_executable_p file # True iff FILE is a libtool wrapper executable # This function is only a basic sanity check; it will hardly flush out # determined imposters. 
func_ltwrapper_executable_p () { func_ltwrapper_exec_suffix= case $1 in *.exe) ;; *) func_ltwrapper_exec_suffix=.exe ;; esac $GREP "$magic_exe" "$1$func_ltwrapper_exec_suffix" >/dev/null 2>&1 } # func_ltwrapper_scriptname file # Assumes file is an ltwrapper_executable # uses $file to determine the appropriate filename for a # temporary ltwrapper_script. func_ltwrapper_scriptname () { func_ltwrapper_scriptname_result="" if func_ltwrapper_executable_p "$1"; then func_dirname_and_basename "$1" "" "." func_stripname '' '.exe' "$func_basename_result" func_ltwrapper_scriptname_result="$func_dirname_result/$objdir/${func_stripname_result}_ltshwrapper" fi } # func_ltwrapper_p file # True iff FILE is a libtool wrapper script or wrapper executable # This function is only a basic sanity check; it will hardly flush out # determined imposters. func_ltwrapper_p () { func_ltwrapper_script_p "$1" || func_ltwrapper_executable_p "$1" } # func_execute_cmds commands fail_cmd # Execute tilde-delimited COMMANDS. # If FAIL_CMD is given, eval that upon failure. # FAIL_CMD may read-access the current command in variable CMD! func_execute_cmds () { $opt_debug save_ifs=$IFS; IFS='~' for cmd in $1; do IFS=$save_ifs eval cmd=\"$cmd\" func_show_eval "$cmd" "${2-:}" done IFS=$save_ifs } # func_source file # Source FILE, adding directory component if necessary. # Note that it is not necessary on cygwin/mingw to append a dot to # FILE even if both FILE and FILE.exe exist: automatic-append-.exe # behavior happens only for exec(3), not for open(2)! Also, sourcing # `FILE.' does not work on cygwin managed mounts. func_source () { $opt_debug case $1 in */* | *\\*) . "$1" ;; *) . "./$1" ;; esac } # func_infer_tag arg # Infer tagged configuration to use if any are available and # if one wasn't chosen via the "--tag" command line option. # Only attempt this if the compiler in the base compile # command doesn't match the default compiler. # arg is usually of the form 'gcc ...' 
func_infer_tag () { $opt_debug if test -n "$available_tags" && test -z "$tagname"; then CC_quoted= for arg in $CC; do func_quote_for_eval "$arg" CC_quoted="$CC_quoted $func_quote_for_eval_result" done case $@ in # Blanks in the command may have been stripped by the calling shell, # but not from the CC environment variable when configure was run. " $CC "* | "$CC "* | " `$ECHO $CC` "* | "`$ECHO $CC` "* | " $CC_quoted"* | "$CC_quoted "* | " `$ECHO $CC_quoted` "* | "`$ECHO $CC_quoted` "*) ;; # Blanks at the start of $base_compile will cause this to fail # if we don't check for them as well. *) for z in $available_tags; do if $GREP "^# ### BEGIN LIBTOOL TAG CONFIG: $z$" < "$progpath" > /dev/null; then # Evaluate the configuration. eval "`${SED} -n -e '/^# ### BEGIN LIBTOOL TAG CONFIG: '$z'$/,/^# ### END LIBTOOL TAG CONFIG: '$z'$/p' < $progpath`" CC_quoted= for arg in $CC; do # Double-quote args containing other shell metacharacters. func_quote_for_eval "$arg" CC_quoted="$CC_quoted $func_quote_for_eval_result" done case "$@ " in " $CC "* | "$CC "* | " `$ECHO $CC` "* | "`$ECHO $CC` "* | " $CC_quoted"* | "$CC_quoted "* | " `$ECHO $CC_quoted` "* | "`$ECHO $CC_quoted` "*) # The compiler in the base compile command matches # the one in the tagged configuration. # Assume this is the tagged configuration we want. tagname=$z break ;; esac fi done # If $tagname still isn't set, then no tagged configuration # was found and let the user know that the "--tag" command # line option must be used. if test -z "$tagname"; then func_echo "unable to infer tagged configuration" func_fatal_error "specify a tag with \`--tag'" # else # func_verbose "using $tagname tagged configuration" fi ;; esac fi } # func_write_libtool_object output_name pic_name nonpic_name # Create a libtool object file (analogous to a ".la" file), # but don't create it if we're doing a dry run. 
func_write_libtool_object () { write_libobj=${1} if test "$build_libtool_libs" = yes; then write_lobj=\'${2}\' else write_lobj=none fi if test "$build_old_libs" = yes; then write_oldobj=\'${3}\' else write_oldobj=none fi $opt_dry_run || { cat >${write_libobj}T <<EOF # $write_libobj - a libtool object file # Generated by $PROGRAM (GNU $PACKAGE$TIMESTAMP) $VERSION # # Please DO NOT delete this file! # It is necessary for linking the library. # Name of the PIC object. pic_object=$write_lobj # Name of the non-PIC object non_pic_object=$write_oldobj EOF $MV "${write_libobj}T" "${write_libobj}" } } # func_mode_compile arg... func_mode_compile () { $opt_debug # Get the compilation command and the source file. base_compile= srcfile="$nonopt" # always keep a non-empty value in "srcfile" suppress_opt=yes suppress_output= arg_mode=normal libobj= later= pie_flag= for arg do case $arg_mode in arg ) # do not "continue". Instead, add this to base_compile lastarg="$arg" arg_mode=normal ;; target ) libobj="$arg" arg_mode=normal continue ;; normal ) # Accept any command-line options. case $arg in -o) test -n "$libobj" && \ func_fatal_error "you cannot specify \`-o' more than once" arg_mode=target continue ;; -pie | -fpie | -fPIE) pie_flag="$pie_flag $arg" continue ;; -shared | -static | -prefer-pic | -prefer-non-pic) later="$later $arg" continue ;; -no-suppress) suppress_opt=no continue ;; -Xcompiler) arg_mode=arg # the next one goes into the "base_compile" arg list continue # The current "srcfile" will either be retained or ;; # replaced later. I would guess that would be a bug. -Wc,*) func_stripname '-Wc,' '' "$arg" args=$func_stripname_result lastarg= save_ifs="$IFS"; IFS=',' for arg in $args; do IFS="$save_ifs" func_quote_for_eval "$arg" lastarg="$lastarg $func_quote_for_eval_result" done IFS="$save_ifs" func_stripname ' ' '' "$lastarg" lastarg=$func_stripname_result # Add the arguments to base_compile. 
base_compile="$base_compile $lastarg" continue ;; *) # Accept the current argument as the source file. # The previous "srcfile" becomes the current argument. # lastarg="$srcfile" srcfile="$arg" ;; esac # case $arg ;; esac # case $arg_mode # Aesthetically quote the previous argument. func_quote_for_eval "$lastarg" base_compile="$base_compile $func_quote_for_eval_result" done # for arg case $arg_mode in arg) func_fatal_error "you must specify an argument for -Xcompile" ;; target) func_fatal_error "you must specify a target with \`-o'" ;; *) # Get the name of the library object. test -z "$libobj" && { func_basename "$srcfile" libobj="$func_basename_result" } ;; esac # Recognize several different file suffixes. # If the user specifies -o file.o, it is replaced with file.lo case $libobj in *.[cCFSifmso] | \ *.ada | *.adb | *.ads | *.asm | \ *.c++ | *.cc | *.ii | *.class | *.cpp | *.cxx | \ *.[fF][09]? | *.for | *.java | *.obj | *.sx) func_xform "$libobj" libobj=$func_xform_result ;; esac case $libobj in *.lo) func_lo2o "$libobj"; obj=$func_lo2o_result ;; *) func_fatal_error "cannot determine name of library object from \`$libobj'" ;; esac func_infer_tag $base_compile for arg in $later; do case $arg in -shared) test "$build_libtool_libs" != yes && \ func_fatal_configuration "can not build a shared library" build_old_libs=no continue ;; -static) build_libtool_libs=no build_old_libs=yes continue ;; -prefer-pic) pic_mode=yes continue ;; -prefer-non-pic) pic_mode=no continue ;; esac done func_quote_for_eval "$libobj" test "X$libobj" != "X$func_quote_for_eval_result" \ && $ECHO "X$libobj" | $GREP '[]~#^*{};<>?"'"'"' &()|`$[]' \ && func_warning "libobj name \`$libobj' may not contain shell special characters." func_dirname_and_basename "$obj" "/" "" objname="$func_basename_result" xdir="$func_dirname_result" lobj=${xdir}$objdir/$objname test -z "$base_compile" && \ func_fatal_help "you must specify a compilation command" # Delete any leftover library objects. 
if test "$build_old_libs" = yes; then removelist="$obj $lobj $libobj ${libobj}T" else removelist="$lobj $libobj ${libobj}T" fi # On Cygwin there's no "real" PIC flag so we must build both object types case $host_os in cygwin* | mingw* | pw32* | os2* | cegcc*) pic_mode=default ;; esac if test "$pic_mode" = no && test "$deplibs_check_method" != pass_all; then # non-PIC code in shared libraries is not supported pic_mode=default fi # Calculate the filename of the output object if compiler does # not support -o with -c if test "$compiler_c_o" = no; then output_obj=`$ECHO "X$srcfile" | $Xsed -e 's%^.*/%%' -e 's%\.[^.]*$%%'`.${objext} lockfile="$output_obj.lock" else output_obj= need_locks=no lockfile= fi # Lock this critical section if it is needed # We use this script file to make the link, it avoids creating a new file if test "$need_locks" = yes; then until $opt_dry_run || ln "$progpath" "$lockfile" 2>/dev/null; do func_echo "Waiting for $lockfile to be removed" sleep 2 done elif test "$need_locks" = warn; then if test -f "$lockfile"; then $ECHO "\ *** ERROR, $lockfile exists and contains: `cat $lockfile 2>/dev/null` This indicates that another process is trying to use the same temporary object file, and libtool could not work around it because your compiler does not support \`-c' and \`-o' together. If you repeat this compilation, it may succeed, by chance, but you had better avoid parallel builds (make -j) in this platform, or get a better compiler." $opt_dry_run || $RM $removelist exit $EXIT_FAILURE fi removelist="$removelist $output_obj" $ECHO "$srcfile" > "$lockfile" fi $opt_dry_run || $RM $removelist removelist="$removelist $lockfile" trap '$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE' 1 2 15 if test -n "$fix_srcfile_path"; then eval srcfile=\"$fix_srcfile_path\" fi func_quote_for_eval "$srcfile" qsrcfile=$func_quote_for_eval_result # Only build a PIC object if we are building libtool libraries. 
if test "$build_libtool_libs" = yes; then # Without this assignment, base_compile gets emptied. fbsd_hideous_sh_bug=$base_compile if test "$pic_mode" != no; then command="$base_compile $qsrcfile $pic_flag" else # Don't build PIC code command="$base_compile $qsrcfile" fi func_mkdir_p "$xdir$objdir" if test -z "$output_obj"; then # Place PIC objects in $objdir command="$command -o $lobj" fi func_show_eval_locale "$command" \ 'test -n "$output_obj" && $RM $removelist; exit $EXIT_FAILURE' if test "$need_locks" = warn && test "X`cat $lockfile 2>/dev/null`" != "X$srcfile"; then $ECHO "\ *** ERROR, $lockfile contains: `cat $lockfile 2>/dev/null` but it should contain: $srcfile This indicates that another process is trying to use the same temporary object file, and libtool could not work around it because your compiler does not support \`-c' and \`-o' together. If you repeat this compilation, it may succeed, by chance, but you had better avoid parallel builds (make -j) in this platform, or get a better compiler." $opt_dry_run || $RM $removelist exit $EXIT_FAILURE fi # Just move the object if needed, then go on to compile the next one if test -n "$output_obj" && test "X$output_obj" != "X$lobj"; then func_show_eval '$MV "$output_obj" "$lobj"' \ 'error=$?; $opt_dry_run || $RM $removelist; exit $error' fi # Allow error messages only from the first compilation. if test "$suppress_opt" = yes; then suppress_output=' >/dev/null 2>&1' fi fi # Only build a position-dependent object if we build old libraries. if test "$build_old_libs" = yes; then if test "$pic_mode" != yes; then # Don't build PIC code command="$base_compile $qsrcfile$pie_flag" else command="$base_compile $qsrcfile $pic_flag" fi if test "$compiler_c_o" = yes; then command="$command -o $obj" fi # Suppress compiler output if we already did a PIC compilation. 
command="$command$suppress_output" func_show_eval_locale "$command" \ '$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE' if test "$need_locks" = warn && test "X`cat $lockfile 2>/dev/null`" != "X$srcfile"; then $ECHO "\ *** ERROR, $lockfile contains: `cat $lockfile 2>/dev/null` but it should contain: $srcfile This indicates that another process is trying to use the same temporary object file, and libtool could not work around it because your compiler does not support \`-c' and \`-o' together. If you repeat this compilation, it may succeed, by chance, but you had better avoid parallel builds (make -j) in this platform, or get a better compiler." $opt_dry_run || $RM $removelist exit $EXIT_FAILURE fi # Just move the object if needed if test -n "$output_obj" && test "X$output_obj" != "X$obj"; then func_show_eval '$MV "$output_obj" "$obj"' \ 'error=$?; $opt_dry_run || $RM $removelist; exit $error' fi fi $opt_dry_run || { func_write_libtool_object "$libobj" "$objdir/$objname" "$objname" # Unlock the critical section if it was locked if test "$need_locks" != no; then removelist=$lockfile $RM "$lockfile" fi } exit $EXIT_SUCCESS } $opt_help || { test "$mode" = compile && func_mode_compile ${1+"$@"} } func_mode_help () { # We need to display help for each of the modes. case $mode in "") # Generic help is extracted from the usage comments # at the start of this file. func_help ;; clean) $ECHO \ "Usage: $progname [OPTION]... --mode=clean RM [RM-OPTION]... FILE... Remove files from the build directory. RM is the name of the program to use to delete files associated with each FILE (typically \`/bin/rm'). RM-OPTIONS are options (such as \`-f') to be passed to RM. If FILE is a libtool library, object or program, all the files associated with it are deleted. Otherwise, only FILE itself is deleted using RM." ;; compile) $ECHO \ "Usage: $progname [OPTION]... --mode=compile COMPILE-COMMAND... SOURCEFILE Compile a source file into a libtool library object. 
This mode accepts the following additional options: -o OUTPUT-FILE set the output file name to OUTPUT-FILE -no-suppress do not suppress compiler output for multiple passes -prefer-pic try to building PIC objects only -prefer-non-pic try to building non-PIC objects only -shared do not build a \`.o' file suitable for static linking -static only build a \`.o' file suitable for static linking COMPILE-COMMAND is a command to be used in creating a \`standard' object file from the given SOURCEFILE. The output file name is determined by removing the directory component from SOURCEFILE, then substituting the C source code suffix \`.c' with the library object suffix, \`.lo'." ;; execute) $ECHO \ "Usage: $progname [OPTION]... --mode=execute COMMAND [ARGS]... Automatically set library path, then run a program. This mode accepts the following additional options: -dlopen FILE add the directory containing FILE to the library path This mode sets the library path environment variable according to \`-dlopen' flags. If any of the ARGS are libtool executable wrappers, then they are translated into their corresponding uninstalled binary, and any of their required library directories are added to the library path. Then, COMMAND is executed, with ARGS as arguments." ;; finish) $ECHO \ "Usage: $progname [OPTION]... --mode=finish [LIBDIR]... Complete the installation of libtool libraries. Each LIBDIR is a directory that contains libtool libraries. The commands that this mode executes may require superuser privileges. Use the \`--dry-run' option if you just want to see what would be executed." ;; install) $ECHO \ "Usage: $progname [OPTION]... --mode=install INSTALL-COMMAND... Install executables or libraries. INSTALL-COMMAND is the installation command. The first component should be either the \`install' or \`cp' program. 
The following components of INSTALL-COMMAND are treated specially: -inst-prefix PREFIX-DIR Use PREFIX-DIR as a staging area for installation The rest of the components are interpreted as arguments to that command (only BSD-compatible install options are recognized)." ;; link) $ECHO \ "Usage: $progname [OPTION]... --mode=link LINK-COMMAND... Link object files or libraries together to form another library, or to create an executable program. LINK-COMMAND is a command using the C compiler that you would use to create a program from several object files. The following components of LINK-COMMAND are treated specially: -all-static do not do any dynamic linking at all -avoid-version do not add a version suffix if possible -dlopen FILE \`-dlpreopen' FILE if it cannot be dlopened at runtime -dlpreopen FILE link in FILE and add its symbols to lt_preloaded_symbols -export-dynamic allow symbols from OUTPUT-FILE to be resolved with dlsym(3) -export-symbols SYMFILE try to export only the symbols listed in SYMFILE -export-symbols-regex REGEX try to export only the symbols matching REGEX -LLIBDIR search LIBDIR for required installed libraries -lNAME OUTPUT-FILE requires the installed library libNAME -module build a library that can dlopened -no-fast-install disable the fast-install mode -no-install link a not-installable executable -no-undefined declare that a library does not refer to external symbols -o OUTPUT-FILE create OUTPUT-FILE from the specified objects -objectlist FILE Use a list of object files found in FILE to specify objects -precious-files-regex REGEX don't remove output files matching REGEX -release RELEASE specify package release information -rpath LIBDIR the created library will eventually be installed in LIBDIR -R[ ]LIBDIR add LIBDIR to the runtime path of programs and libraries -shared only do dynamic linking of libtool libraries -shrext SUFFIX override the standard shared library file extension -static do not do any dynamic linking of uninstalled libtool 
libraries -static-libtool-libs do not do any dynamic linking of libtool libraries -version-info CURRENT[:REVISION[:AGE]] specify library version info [each variable defaults to 0] -weak LIBNAME declare that the target provides the LIBNAME interface All other options (arguments beginning with \`-') are ignored. Every other argument is treated as a filename. Files ending in \`.la' are treated as uninstalled libtool libraries, other files are standard or library object files. If the OUTPUT-FILE ends in \`.la', then a libtool library is created, only library objects (\`.lo' files) may be specified, and \`-rpath' is required, except when creating a convenience library. If OUTPUT-FILE ends in \`.a' or \`.lib', then a standard library is created using \`ar' and \`ranlib', or on Windows using \`lib'. If OUTPUT-FILE ends in \`.lo' or \`.${objext}', then a reloadable object file is created, otherwise an executable program is created." ;; uninstall) $ECHO \ "Usage: $progname [OPTION]... --mode=uninstall RM [RM-OPTION]... FILE... Remove libraries from an installation directory. RM is the name of the program to use to delete files associated with each FILE (typically \`/bin/rm'). RM-OPTIONS are options (such as \`-f') to be passed to RM. If FILE is a libtool library, all the files associated with it are deleted. Otherwise, only FILE itself is deleted using RM." ;; *) func_fatal_help "invalid operation mode \`$mode'" ;; esac $ECHO $ECHO "Try \`$progname --help' for more information about other modes." exit $? } # Now that we've collected a possible --mode arg, show help if necessary $opt_help && func_mode_help # func_mode_execute arg... func_mode_execute () { $opt_debug # The first argument is the command name. cmd="$nonopt" test -z "$cmd" && \ func_fatal_help "you must specify a COMMAND" # Handle -dlopen flags immediately. 
for file in $execute_dlfiles; do test -f "$file" \ || func_fatal_help "\`$file' is not a file" dir= case $file in *.la) # Check to see that this really is a libtool archive. func_lalib_unsafe_p "$file" \ || func_fatal_help "\`$lib' is not a valid libtool archive" # Read the libtool library. dlname= library_names= func_source "$file" # Skip this library if it cannot be dlopened. if test -z "$dlname"; then # Warn if it was a shared library. test -n "$library_names" && \ func_warning "\`$file' was not linked with \`-export-dynamic'" continue fi func_dirname "$file" "" "." dir="$func_dirname_result" if test -f "$dir/$objdir/$dlname"; then dir="$dir/$objdir" else if test ! -f "$dir/$dlname"; then func_fatal_error "cannot find \`$dlname' in \`$dir' or \`$dir/$objdir'" fi fi ;; *.lo) # Just add the directory containing the .lo file. func_dirname "$file" "" "." dir="$func_dirname_result" ;; *) func_warning "\`-dlopen' is ignored for non-libtool libraries and objects" continue ;; esac # Get the absolute pathname. absdir=`cd "$dir" && pwd` test -n "$absdir" && dir="$absdir" # Now add the directory to shlibpath_var. if eval "test -z \"\$$shlibpath_var\""; then eval "$shlibpath_var=\"\$dir\"" else eval "$shlibpath_var=\"\$dir:\$$shlibpath_var\"" fi done # This variable tells wrapper scripts just to set shlibpath_var # rather than running their programs. libtool_execute_magic="$magic" # Check if any of the arguments is a wrapper script. args= for file do case $file in -*) ;; *) # Do a test to see if this is really a libtool program. if func_ltwrapper_script_p "$file"; then func_source "$file" # Transform arg to wrapped name. file="$progdir/$program" elif func_ltwrapper_executable_p "$file"; then func_ltwrapper_scriptname "$file" func_source "$func_ltwrapper_scriptname_result" # Transform arg to wrapped name. file="$progdir/$program" fi ;; esac # Quote arguments (to preserve shell metacharacters). 
func_quote_for_eval "$file" args="$args $func_quote_for_eval_result" done if test "X$opt_dry_run" = Xfalse; then if test -n "$shlibpath_var"; then # Export the shlibpath_var. eval "export $shlibpath_var" fi # Restore saved environment variables for lt_var in LANG LANGUAGE LC_ALL LC_CTYPE LC_COLLATE LC_MESSAGES do eval "if test \"\${save_$lt_var+set}\" = set; then $lt_var=\$save_$lt_var; export $lt_var else $lt_unset $lt_var fi" done # Now prepare to actually exec the command. exec_cmd="\$cmd$args" else # Display what would be done. if test -n "$shlibpath_var"; then eval "\$ECHO \"\$shlibpath_var=\$$shlibpath_var\"" $ECHO "export $shlibpath_var" fi $ECHO "$cmd$args" exit $EXIT_SUCCESS fi } test "$mode" = execute && func_mode_execute ${1+"$@"} # func_mode_finish arg... func_mode_finish () { $opt_debug libdirs="$nonopt" admincmds= if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then for dir do libdirs="$libdirs $dir" done for libdir in $libdirs; do if test -n "$finish_cmds"; then # Do each command in the finish commands. func_execute_cmds "$finish_cmds" 'admincmds="$admincmds '"$cmd"'"' fi if test -n "$finish_eval"; then # Do the single finish_eval. eval cmds=\"$finish_eval\" $opt_dry_run || eval "$cmds" || admincmds="$admincmds $cmds" fi done fi # Exit here if they wanted silent mode. 
$opt_silent && exit $EXIT_SUCCESS $ECHO "X----------------------------------------------------------------------" | $Xsed $ECHO "Libraries have been installed in:" for libdir in $libdirs; do $ECHO " $libdir" done $ECHO $ECHO "If you ever happen to want to link against installed libraries" $ECHO "in a given directory, LIBDIR, you must either use libtool, and" $ECHO "specify the full pathname of the library, or use the \`-LLIBDIR'" $ECHO "flag during linking and do at least one of the following:" if test -n "$shlibpath_var"; then $ECHO " - add LIBDIR to the \`$shlibpath_var' environment variable" $ECHO " during execution" fi if test -n "$runpath_var"; then $ECHO " - add LIBDIR to the \`$runpath_var' environment variable" $ECHO " during linking" fi if test -n "$hardcode_libdir_flag_spec"; then libdir=LIBDIR eval flag=\"$hardcode_libdir_flag_spec\" $ECHO " - use the \`$flag' linker flag" fi if test -n "$admincmds"; then $ECHO " - have your system administrator run these commands:$admincmds" fi if test -f /etc/ld.so.conf; then $ECHO " - have your system administrator add LIBDIR to \`/etc/ld.so.conf'" fi $ECHO $ECHO "See any operating system documentation about shared libraries for" case $host in solaris2.[6789]|solaris2.1[0-9]) $ECHO "more information, such as the ld(1), crle(1) and ld.so(8) manual" $ECHO "pages." ;; *) $ECHO "more information, such as the ld(1) and ld.so(8) manual pages." ;; esac $ECHO "X----------------------------------------------------------------------" | $Xsed exit $EXIT_SUCCESS } test "$mode" = finish && func_mode_finish ${1+"$@"} # func_mode_install arg... func_mode_install () { $opt_debug # There may be an optional sh(1) argument at the beginning of # install_prog (especially on Windows NT). if test "$nonopt" = "$SHELL" || test "$nonopt" = /bin/sh || # Allow the use of GNU shtool's install command. $ECHO "X$nonopt" | $GREP shtool >/dev/null; then # Aesthetically quote it. 
func_quote_for_eval "$nonopt" install_prog="$func_quote_for_eval_result " arg=$1 shift else install_prog= arg=$nonopt fi # The real first argument should be the name of the installation program. # Aesthetically quote it. func_quote_for_eval "$arg" install_prog="$install_prog$func_quote_for_eval_result" # We need to accept at least all the BSD install flags. dest= files= opts= prev= install_type= isdir=no stripme= for arg do if test -n "$dest"; then files="$files $dest" dest=$arg continue fi case $arg in -d) isdir=yes ;; -f) case " $install_prog " in *[\\\ /]cp\ *) ;; *) prev=$arg ;; esac ;; -g | -m | -o) prev=$arg ;; -s) stripme=" -s" continue ;; -*) ;; *) # If the previous option needed an argument, then skip it. if test -n "$prev"; then prev= else dest=$arg continue fi ;; esac # Aesthetically quote the argument. func_quote_for_eval "$arg" install_prog="$install_prog $func_quote_for_eval_result" done test -z "$install_prog" && \ func_fatal_help "you must specify an install program" test -n "$prev" && \ func_fatal_help "the \`$prev' option requires an argument" if test -z "$files"; then if test -z "$dest"; then func_fatal_help "no file or destination specified" else func_fatal_help "you must specify a destination" fi fi # Strip any trailing slash from the destination. func_stripname '' '/' "$dest" dest=$func_stripname_result # Check to see that the destination is a directory. test -d "$dest" && isdir=yes if test "$isdir" = yes; then destdir="$dest" destname= else func_dirname_and_basename "$dest" "" "." destdir="$func_dirname_result" destname="$func_basename_result" # Not a directory, so check to see that there is only one file specified. 
set dummy $files; shift test "$#" -gt 1 && \ func_fatal_help "\`$dest' is not a directory" fi case $destdir in [\\/]* | [A-Za-z]:[\\/]*) ;; *) for file in $files; do case $file in *.lo) ;; *) func_fatal_help "\`$destdir' must be an absolute directory name" ;; esac done ;; esac # This variable tells wrapper scripts just to set variables rather # than running their programs. libtool_install_magic="$magic" staticlibs= future_libdirs= current_libdirs= for file in $files; do # Do each installation. case $file in *.$libext) # Do the static libraries later. staticlibs="$staticlibs $file" ;; *.la) # Check to see that this really is a libtool archive. func_lalib_unsafe_p "$file" \ || func_fatal_help "\`$file' is not a valid libtool archive" library_names= old_library= relink_command= func_source "$file" # Add the libdir to current_libdirs if it is the destination. if test "X$destdir" = "X$libdir"; then case "$current_libdirs " in *" $libdir "*) ;; *) current_libdirs="$current_libdirs $libdir" ;; esac else # Note the libdir as a future libdir. case "$future_libdirs " in *" $libdir "*) ;; *) future_libdirs="$future_libdirs $libdir" ;; esac fi func_dirname "$file" "/" "" dir="$func_dirname_result" dir="$dir$objdir" if test -n "$relink_command"; then # Determine the prefix the user has applied to our future dir. inst_prefix_dir=`$ECHO "X$destdir" | $Xsed -e "s%$libdir\$%%"` # Don't allow the user to place us outside of our expected # location b/c this prevents finding dependent libraries that # are installed to the same prefix. # At present, this check doesn't affect windows .dll's that # are installed into $libdir/../bin (currently, that works fine) # but it's something to keep an eye on. test "$inst_prefix_dir" = "$destdir" && \ func_fatal_error "error: cannot install \`$file' to a directory not ending in $libdir" if test -n "$inst_prefix_dir"; then # Stick the inst_prefix_dir data into the link command. 
relink_command=`$ECHO "X$relink_command" | $Xsed -e "s%@inst_prefix_dir@%-inst-prefix-dir $inst_prefix_dir%"` else relink_command=`$ECHO "X$relink_command" | $Xsed -e "s%@inst_prefix_dir@%%"` fi func_warning "relinking \`$file'" func_show_eval "$relink_command" \ 'func_fatal_error "error: relink \`$file'\'' with the above command before installing it"' fi # See the names of the shared library. set dummy $library_names; shift if test -n "$1"; then realname="$1" shift srcname="$realname" test -n "$relink_command" && srcname="$realname"T # Install the shared library and build the symlinks. func_show_eval "$install_prog $dir/$srcname $destdir/$realname" \ 'exit $?' tstripme="$stripme" case $host_os in cygwin* | mingw* | pw32* | cegcc*) case $realname in *.dll.a) tstripme="" ;; esac ;; esac if test -n "$tstripme" && test -n "$striplib"; then func_show_eval "$striplib $destdir/$realname" 'exit $?' fi if test "$#" -gt 0; then # Delete the old symlinks, and create new ones. # Try `ln -sf' first, because the `ln' binary might depend on # the symlink we replace! Solaris /bin/ln does not understand -f, # so we also need to try rm && ln -s. for linkname do test "$linkname" != "$realname" \ && func_show_eval "(cd $destdir && { $LN_S -f $realname $linkname || { $RM $linkname && $LN_S $realname $linkname; }; })" done fi # Do each command in the postinstall commands. lib="$destdir/$realname" func_execute_cmds "$postinstall_cmds" 'exit $?' fi # Install the pseudo-library for information purposes. func_basename "$file" name="$func_basename_result" instname="$dir/$name"i func_show_eval "$install_prog $instname $destdir/$name" 'exit $?' # Maybe install the static library, too. test -n "$old_library" && staticlibs="$staticlibs $dir/$old_library" ;; *.lo) # Install (i.e. copy) a libtool object. # Figure out destination file name, if it wasn't already specified. 
if test -n "$destname"; then destfile="$destdir/$destname" else func_basename "$file" destfile="$func_basename_result" destfile="$destdir/$destfile" fi # Deduce the name of the destination old-style object file. case $destfile in *.lo) func_lo2o "$destfile" staticdest=$func_lo2o_result ;; *.$objext) staticdest="$destfile" destfile= ;; *) func_fatal_help "cannot copy a libtool object to \`$destfile'" ;; esac # Install the libtool object if requested. test -n "$destfile" && \ func_show_eval "$install_prog $file $destfile" 'exit $?' # Install the old object if enabled. if test "$build_old_libs" = yes; then # Deduce the name of the old-style object file. func_lo2o "$file" staticobj=$func_lo2o_result func_show_eval "$install_prog \$staticobj \$staticdest" 'exit $?' fi exit $EXIT_SUCCESS ;; *) # Figure out destination file name, if it wasn't already specified. if test -n "$destname"; then destfile="$destdir/$destname" else func_basename "$file" destfile="$func_basename_result" destfile="$destdir/$destfile" fi # If the file is missing, and there is a .exe on the end, strip it # because it is most likely a libtool script we actually want to # install stripped_ext="" case $file in *.exe) if test ! -f "$file"; then func_stripname '' '.exe' "$file" file=$func_stripname_result stripped_ext=".exe" fi ;; esac # Do a test to see if this is really a libtool program. case $host in *cygwin* | *mingw*) if func_ltwrapper_executable_p "$file"; then func_ltwrapper_scriptname "$file" wrapper=$func_ltwrapper_scriptname_result else func_stripname '' '.exe' "$file" wrapper=$func_stripname_result fi ;; *) wrapper=$file ;; esac if func_ltwrapper_script_p "$wrapper"; then notinst_deplibs= relink_command= func_source "$wrapper" # Check the variables that should have been set. test -z "$generated_by_libtool_version" && \ func_fatal_error "invalid libtool wrapper script \`$wrapper'" finalize=yes for lib in $notinst_deplibs; do # Check to see that each library is installed. 
libdir= if test -f "$lib"; then func_source "$lib" fi libfile="$libdir/"`$ECHO "X$lib" | $Xsed -e 's%^.*/%%g'` ### testsuite: skip nested quoting test if test -n "$libdir" && test ! -f "$libfile"; then func_warning "\`$lib' has not been installed in \`$libdir'" finalize=no fi done relink_command= func_source "$wrapper" outputname= if test "$fast_install" = no && test -n "$relink_command"; then $opt_dry_run || { if test "$finalize" = yes; then tmpdir=`func_mktempdir` func_basename "$file$stripped_ext" file="$func_basename_result" outputname="$tmpdir/$file" # Replace the output file specification. relink_command=`$ECHO "X$relink_command" | $Xsed -e 's%@OUTPUT@%'"$outputname"'%g'` $opt_silent || { func_quote_for_expand "$relink_command" eval "func_echo $func_quote_for_expand_result" } if eval "$relink_command"; then : else func_error "error: relink \`$file' with the above command before installing it" $opt_dry_run || ${RM}r "$tmpdir" continue fi file="$outputname" else func_warning "cannot relink \`$file'" fi } else # Install the binary that we compiled earlier. file=`$ECHO "X$file$stripped_ext" | $Xsed -e "s%\([^/]*\)$%$objdir/\1%"` fi fi # remove .exe since cygwin /usr/bin/install will append another # one anyway case $install_prog,$host in */usr/bin/install*,*cygwin*) case $file:$destfile in *.exe:*.exe) # this is ok ;; *.exe:*) destfile=$destfile.exe ;; *:*.exe) func_stripname '' '.exe' "$destfile" destfile=$func_stripname_result ;; esac ;; esac func_show_eval "$install_prog\$stripme \$file \$destfile" 'exit $?' $opt_dry_run || if test -n "$outputname"; then ${RM}r "$tmpdir" fi ;; esac done for file in $staticlibs; do func_basename "$file" name="$func_basename_result" # Set up the ranlib parameters. oldlib="$destdir/$name" func_show_eval "$install_prog \$file \$oldlib" 'exit $?' if test -n "$stripme" && test -n "$old_striplib"; then func_show_eval "$old_striplib $oldlib" 'exit $?' fi # Do each command in the postinstall commands. 
func_execute_cmds "$old_postinstall_cmds" 'exit $?' done test -n "$future_libdirs" && \ func_warning "remember to run \`$progname --finish$future_libdirs'" if test -n "$current_libdirs"; then # Maybe just do a dry run. $opt_dry_run && current_libdirs=" -n$current_libdirs" exec_cmd='$SHELL $progpath $preserve_args --finish$current_libdirs' else exit $EXIT_SUCCESS fi } test "$mode" = install && func_mode_install ${1+"$@"} # func_generate_dlsyms outputname originator pic_p # Extract symbols from dlprefiles and create ${outputname}S.o with # a dlpreopen symbol table. func_generate_dlsyms () { $opt_debug my_outputname="$1" my_originator="$2" my_pic_p="${3-no}" my_prefix=`$ECHO "$my_originator" | sed 's%[^a-zA-Z0-9]%_%g'` my_dlsyms= if test -n "$dlfiles$dlprefiles" || test "$dlself" != no; then if test -n "$NM" && test -n "$global_symbol_pipe"; then my_dlsyms="${my_outputname}S.c" else func_error "not configured to extract global symbols from dlpreopened files" fi fi if test -n "$my_dlsyms"; then case $my_dlsyms in "") ;; *.c) # Discover the nlist of each of the dlfiles. nlist="$output_objdir/${my_outputname}.nm" func_show_eval "$RM $nlist ${nlist}S ${nlist}T" # Parse the name list into a source file. func_verbose "creating $output_objdir/$my_dlsyms" $opt_dry_run || $ECHO > "$output_objdir/$my_dlsyms" "\ /* $my_dlsyms - symbol resolution table for \`$my_outputname' dlsym emulation. */ /* Generated by $PROGRAM (GNU $PACKAGE$TIMESTAMP) $VERSION */ #ifdef __cplusplus extern \"C\" { #endif /* External symbol declarations for the compiler. */\ " if test "$dlself" = yes; then func_verbose "generating symbol list for \`$output'" $opt_dry_run || echo ': @PROGRAM@ ' > "$nlist" # Add our own program objects to the symbol list. 
	  progfiles=`$ECHO "X$objs$old_deplibs" | $SP2NL | $Xsed -e "$lo2o" | $NL2SP`
	  for progfile in $progfiles; do
	    func_verbose "extracting global C symbols from \`$progfile'"
	    $opt_dry_run || eval "$NM $progfile | $global_symbol_pipe >> '$nlist'"
	  done

	  if test -n "$exclude_expsyms"; then
	    $opt_dry_run || {
	      eval '$EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T'
	      eval '$MV "$nlist"T "$nlist"'
	    }
	  fi

	  if test -n "$export_symbols_regex"; then
	    $opt_dry_run || {
	      eval '$EGREP -e "$export_symbols_regex" "$nlist" > "$nlist"T'
	      eval '$MV "$nlist"T "$nlist"'
	    }
	  fi

	  # Prepare the list of exported symbols
	  if test -z "$export_symbols"; then
	    export_symbols="$output_objdir/$outputname.exp"
	    $opt_dry_run || {
	      $RM $export_symbols
	      eval "${SED} -n -e '/^: @PROGRAM@ $/d' -e 's/^.* \(.*\)$/\1/p' "'< "$nlist" > "$export_symbols"'
	      case $host in
	      *cygwin* | *mingw* | *cegcc* )
	        eval "echo EXPORTS "'> "$output_objdir/$outputname.def"'
	        eval 'cat "$export_symbols" >> "$output_objdir/$outputname.def"'
	        ;;
	      esac
	    }
	  else
	    $opt_dry_run || {
	      eval "${SED} -e 's/\([].[*^$]\)/\\\\\1/g' -e 's/^/ /' -e 's/$/$/'"' < "$export_symbols" > "$output_objdir/$outputname.exp"'
	      eval '$GREP -f "$output_objdir/$outputname.exp" < "$nlist" > "$nlist"T'
	      eval '$MV "$nlist"T "$nlist"'
	      case $host in
	      *cygwin* | *mingw* | *cegcc* )
	        eval "echo EXPORTS "'> "$output_objdir/$outputname.def"'
	        eval 'cat "$nlist" >> "$output_objdir/$outputname.def"'
	        ;;
	      esac
	    }
	  fi
	fi

	for dlprefile in $dlprefiles; do
	  func_verbose "extracting global C symbols from \`$dlprefile'"
	  func_basename "$dlprefile"
	  name="$func_basename_result"
	  $opt_dry_run || {
	    eval '$ECHO ": $name " >> "$nlist"'
	    eval "$NM $dlprefile 2>/dev/null | $global_symbol_pipe >> '$nlist'"
	  }
	done

	$opt_dry_run || {
	  # Make sure we have at least an empty file.
	  test -f "$nlist" || : > "$nlist"

	  if test -n "$exclude_expsyms"; then
	    $EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T
	    $MV "$nlist"T "$nlist"
	  fi

	  # Try sorting and uniquifying the output.
	  if $GREP -v "^: " < "$nlist" |
	     if sort -k 3 </dev/null >/dev/null 2>&1; then
	       sort -k 3
	     else
	       sort +2
	     fi |
	     uniq > "$nlist"S; then
	    :
	  else
	    $GREP -v "^: " < "$nlist" > "$nlist"S
	  fi

	  if test -f "$nlist"S; then
	    eval "$global_symbol_to_cdecl"' < "$nlist"S >> "$output_objdir/$my_dlsyms"'
	  else
	    $ECHO '/* NONE */' >> "$output_objdir/$my_dlsyms"
	  fi

	  $ECHO >> "$output_objdir/$my_dlsyms" "\

/* The mapping between symbol names and symbols.  */
typedef struct {
  const char *name;
  void *address;
} lt_dlsymlist;
"
	  case $host in
	  *cygwin* | *mingw* | *cegcc* )
	    $ECHO >> "$output_objdir/$my_dlsyms" "\
/* DATA imports from DLLs on WIN32 can't be const, because runtime
   relocations are performed -- see ld's documentation on pseudo-relocs.  */"
	    lt_dlsym_const= ;;
	  *osf5*)
	    echo >> "$output_objdir/$my_dlsyms" "\
/* This system does not cope well with relocations in const data */"
	    lt_dlsym_const= ;;
	  *)
	    lt_dlsym_const=const ;;
	  esac

	  $ECHO >> "$output_objdir/$my_dlsyms" "\
extern $lt_dlsym_const lt_dlsymlist
lt_${my_prefix}_LTX_preloaded_symbols[];
$lt_dlsym_const lt_dlsymlist
lt_${my_prefix}_LTX_preloaded_symbols[] =
{\
  { \"$my_originator\", (void *) 0 },"

	  case $need_lib_prefix in
	  no)
	    eval "$global_symbol_to_c_name_address" < "$nlist" >> "$output_objdir/$my_dlsyms"
	    ;;
	  *)
	    eval "$global_symbol_to_c_name_address_lib_prefix" < "$nlist" >> "$output_objdir/$my_dlsyms"
	    ;;
	  esac
	  $ECHO >> "$output_objdir/$my_dlsyms" "\
  {0, (void *) 0}
};

/* This works around a problem in the FreeBSD linker */
#ifdef FREEBSD_WORKAROUND
static const void *lt_preloaded_setup() {
  return lt_${my_prefix}_LTX_preloaded_symbols;
}
#endif

#ifdef __cplusplus
}
#endif\
"
	} # !$opt_dry_run

	pic_flag_for_symtable=
	case "$compile_command " in
	*" -static "*) ;;
	*)
	  case $host in
	  # compiling the symbol table file with pic_flag works around
	  # a FreeBSD bug that causes programs to crash when -lm is
	  # linked before any other PIC object.  But we must not use
	  # pic_flag when linking with -static.  The problem exists in
	  # FreeBSD 2.2.6 and is fixed in FreeBSD 3.1.
	  *-*-freebsd2*|*-*-freebsd3.0*|*-*-freebsdelf3.0*)
	    pic_flag_for_symtable=" $pic_flag -DFREEBSD_WORKAROUND" ;;
	  *-*-hpux*)
	    pic_flag_for_symtable=" $pic_flag"  ;;
	  *)
	    if test "X$my_pic_p" != Xno; then
	      pic_flag_for_symtable=" $pic_flag"
	    fi
	    ;;
	  esac
	  ;;
	esac
	symtab_cflags=
	for arg in $LTCFLAGS; do
	  case $arg in
	  -pie | -fpie | -fPIE) ;;
	  *) symtab_cflags="$symtab_cflags $arg" ;;
	  esac
	done

	# Now compile the dynamic symbol file.
	func_show_eval '(cd $output_objdir && $LTCC$symtab_cflags -c$no_builtin_flag$pic_flag_for_symtable "$my_dlsyms")' 'exit $?'

	# Clean up the generated files.
	func_show_eval '$RM "$output_objdir/$my_dlsyms" "$nlist" "${nlist}S" "${nlist}T"'

	# Transform the symbol file into the correct name.
	symfileobj="$output_objdir/${my_outputname}S.$objext"
	case $host in
	*cygwin* | *mingw* | *cegcc* )
	  if test -f "$output_objdir/$my_outputname.def"; then
	    compile_command=`$ECHO "X$compile_command" | $Xsed -e "s%@SYMFILE@%$output_objdir/$my_outputname.def $symfileobj%"`
	    finalize_command=`$ECHO "X$finalize_command" | $Xsed -e "s%@SYMFILE@%$output_objdir/$my_outputname.def $symfileobj%"`
	  else
	    compile_command=`$ECHO "X$compile_command" | $Xsed -e "s%@SYMFILE@%$symfileobj%"`
	    finalize_command=`$ECHO "X$finalize_command" | $Xsed -e "s%@SYMFILE@%$symfileobj%"`
	  fi
	  ;;
	*)
	  compile_command=`$ECHO "X$compile_command" | $Xsed -e "s%@SYMFILE@%$symfileobj%"`
	  finalize_command=`$ECHO "X$finalize_command" | $Xsed -e "s%@SYMFILE@%$symfileobj%"`
	  ;;
	esac
	;;
      *)
	func_fatal_error "unknown suffix for \`$my_dlsyms'"
	;;
      esac
    else
      # We keep going just in case the user didn't refer to
      # lt_preloaded_symbols.  The linker will fail if global_symbol_pipe
      # really was required.

      # Nullify the symbol file.
compile_command=`$ECHO "X$compile_command" | $Xsed -e "s% @SYMFILE@%%"` finalize_command=`$ECHO "X$finalize_command" | $Xsed -e "s% @SYMFILE@%%"` fi } # func_win32_libid arg # return the library type of file 'arg' # # Need a lot of goo to handle *both* DLLs and import libs # Has to be a shell function in order to 'eat' the argument # that is supplied when $file_magic_command is called. func_win32_libid () { $opt_debug win32_libid_type="unknown" win32_fileres=`file -L $1 2>/dev/null` case $win32_fileres in *ar\ archive\ import\ library*) # definitely import win32_libid_type="x86 archive import" ;; *ar\ archive*) # could be an import, or static if eval $OBJDUMP -f $1 | $SED -e '10q' 2>/dev/null | $EGREP 'file format pe-i386(.*architecture: i386)?' >/dev/null ; then win32_nmres=`eval $NM -f posix -A $1 | $SED -n -e ' 1,100{ / I /{ s,.*,import, p q } }'` case $win32_nmres in import*) win32_libid_type="x86 archive import";; *) win32_libid_type="x86 archive static";; esac fi ;; *DLL*) win32_libid_type="x86 DLL" ;; *executable*) # but shell scripts are "executable" too... case $win32_fileres in *MS\ Windows\ PE\ Intel*) win32_libid_type="x86 DLL" ;; esac ;; esac $ECHO "$win32_libid_type" } # func_extract_an_archive dir oldlib func_extract_an_archive () { $opt_debug f_ex_an_ar_dir="$1"; shift f_ex_an_ar_oldlib="$1" func_show_eval "(cd \$f_ex_an_ar_dir && $AR x \"\$f_ex_an_ar_oldlib\")" 'exit $?' if ($AR t "$f_ex_an_ar_oldlib" | sort | sort -uc >/dev/null 2>&1); then : else func_fatal_error "object name conflicts in archive: $f_ex_an_ar_dir/$f_ex_an_ar_oldlib" fi } # func_extract_archives gentop oldlib ... func_extract_archives () { $opt_debug my_gentop="$1"; shift my_oldlibs=${1+"$@"} my_oldobjs="" my_xlib="" my_xabs="" my_xdir="" for my_xlib in $my_oldlibs; do # Extract the objects. 
case $my_xlib in [\\/]* | [A-Za-z]:[\\/]*) my_xabs="$my_xlib" ;; *) my_xabs=`pwd`"/$my_xlib" ;; esac func_basename "$my_xlib" my_xlib="$func_basename_result" my_xlib_u=$my_xlib while :; do case " $extracted_archives " in *" $my_xlib_u "*) func_arith $extracted_serial + 1 extracted_serial=$func_arith_result my_xlib_u=lt$extracted_serial-$my_xlib ;; *) break ;; esac done extracted_archives="$extracted_archives $my_xlib_u" my_xdir="$my_gentop/$my_xlib_u" func_mkdir_p "$my_xdir" case $host in *-darwin*) func_verbose "Extracting $my_xabs" # Do not bother doing anything if just a dry run $opt_dry_run || { darwin_orig_dir=`pwd` cd $my_xdir || exit $? darwin_archive=$my_xabs darwin_curdir=`pwd` darwin_base_archive=`basename "$darwin_archive"` darwin_arches=`$LIPO -info "$darwin_archive" 2>/dev/null | $GREP Architectures 2>/dev/null || true` if test -n "$darwin_arches"; then darwin_arches=`$ECHO "$darwin_arches" | $SED -e 's/.*are://'` darwin_arch= func_verbose "$darwin_base_archive has multiple architectures $darwin_arches" for darwin_arch in $darwin_arches ; do func_mkdir_p "unfat-$$/${darwin_base_archive}-${darwin_arch}" $LIPO -thin $darwin_arch -output "unfat-$$/${darwin_base_archive}-${darwin_arch}/${darwin_base_archive}" "${darwin_archive}" cd "unfat-$$/${darwin_base_archive}-${darwin_arch}" func_extract_an_archive "`pwd`" "${darwin_base_archive}" cd "$darwin_curdir" $RM "unfat-$$/${darwin_base_archive}-${darwin_arch}/${darwin_base_archive}" done # $darwin_arches ## Okay now we've a bunch of thin objects, gotta fatten them up :) darwin_filelist=`find unfat-$$ -type f -name \*.o -print -o -name \*.lo -print | $SED -e "$basename" | sort -u` darwin_file= darwin_files= for darwin_file in $darwin_filelist; do darwin_files=`find unfat-$$ -name $darwin_file -print | $NL2SP` $LIPO -create -output "$darwin_file" $darwin_files done # $darwin_filelist $RM -rf unfat-$$ cd "$darwin_orig_dir" else cd $darwin_orig_dir func_extract_an_archive "$my_xdir" "$my_xabs" fi # $darwin_arches 
} # !$opt_dry_run ;; *) func_extract_an_archive "$my_xdir" "$my_xabs" ;; esac my_oldobjs="$my_oldobjs "`find $my_xdir -name \*.$objext -print -o -name \*.lo -print | $NL2SP` done func_extract_archives_result="$my_oldobjs" } # func_emit_wrapper_part1 [arg=no] # # Emit the first part of a libtool wrapper script on stdout. # For more information, see the description associated with # func_emit_wrapper(), below. func_emit_wrapper_part1 () { func_emit_wrapper_part1_arg1=no if test -n "$1" ; then func_emit_wrapper_part1_arg1=$1 fi $ECHO "\ #! $SHELL # $output - temporary wrapper script for $objdir/$outputname # Generated by $PROGRAM (GNU $PACKAGE$TIMESTAMP) $VERSION # # The $output program cannot be directly executed until all the libtool # libraries that it depends on are installed. # # This wrapper script should never be moved out of the build directory. # If it is, it will not operate correctly. # Sed substitution that helps us do robust quoting. It backslashifies # metacharacters that are still active within double-quoted strings. Xsed='${SED} -e 1s/^X//' sed_quote_subst='$sed_quote_subst' # Be Bourne compatible if test -n \"\${ZSH_VERSION+set}\" && (emulate sh) >/dev/null 2>&1; then emulate sh NULLCMD=: # Zsh 3.x and 4.x performs word splitting on \${1+\"\$@\"}, which # is contrary to our usage. Disable this feature. alias -g '\${1+\"\$@\"}'='\"\$@\"' setopt NO_GLOB_SUBST else case \`(set -o) 2>/dev/null\` in *posix*) set -o posix;; esac fi BIN_SH=xpg4; export BIN_SH # for Tru64 DUALCASE=1; export DUALCASE # for MKS sh # The HP-UX ksh and POSIX shell print the target directory to stdout # if CDPATH is set. (unset CDPATH) >/dev/null 2>&1 && unset CDPATH relink_command=\"$relink_command\" # This environment variable determines our operation mode. 
if test \"\$libtool_install_magic\" = \"$magic\"; then # install mode needs the following variables: generated_by_libtool_version='$macro_version' notinst_deplibs='$notinst_deplibs' else # When we are sourced in execute mode, \$file and \$ECHO are already set. if test \"\$libtool_execute_magic\" != \"$magic\"; then ECHO=\"$qecho\" file=\"\$0\" # Make sure echo works. if test \"X\$1\" = X--no-reexec; then # Discard the --no-reexec flag, and continue. shift elif test \"X\`{ \$ECHO '\t'; } 2>/dev/null\`\" = 'X\t'; then # Yippee, \$ECHO works! : else # Restart under the correct shell, and then maybe \$ECHO will work. exec $SHELL \"\$0\" --no-reexec \${1+\"\$@\"} fi fi\ " $ECHO "\ # Find the directory that this script lives in. thisdir=\`\$ECHO \"X\$file\" | \$Xsed -e 's%/[^/]*$%%'\` test \"x\$thisdir\" = \"x\$file\" && thisdir=. # Follow symbolic links until we get to the real thisdir. file=\`ls -ld \"\$file\" | ${SED} -n 's/.*-> //p'\` while test -n \"\$file\"; do destdir=\`\$ECHO \"X\$file\" | \$Xsed -e 's%/[^/]*\$%%'\` # If there was a directory component, then change thisdir. if test \"x\$destdir\" != \"x\$file\"; then case \"\$destdir\" in [\\\\/]* | [A-Za-z]:[\\\\/]*) thisdir=\"\$destdir\" ;; *) thisdir=\"\$thisdir/\$destdir\" ;; esac fi file=\`\$ECHO \"X\$file\" | \$Xsed -e 's%^.*/%%'\` file=\`ls -ld \"\$thisdir/\$file\" | ${SED} -n 's/.*-> //p'\` done " } # end: func_emit_wrapper_part1 # func_emit_wrapper_part2 [arg=no] # # Emit the second part of a libtool wrapper script on stdout. # For more information, see the description associated with # func_emit_wrapper(), below. func_emit_wrapper_part2 () { func_emit_wrapper_part2_arg1=no if test -n "$1" ; then func_emit_wrapper_part2_arg1=$1 fi $ECHO "\ # Usually 'no', except on cygwin/mingw when embedded into # the cwrapper. WRAPPER_SCRIPT_BELONGS_IN_OBJDIR=$func_emit_wrapper_part2_arg1 if test \"\$WRAPPER_SCRIPT_BELONGS_IN_OBJDIR\" = \"yes\"; then # special case for '.' 
if test \"\$thisdir\" = \".\"; then thisdir=\`pwd\` fi # remove .libs from thisdir case \"\$thisdir\" in *[\\\\/]$objdir ) thisdir=\`\$ECHO \"X\$thisdir\" | \$Xsed -e 's%[\\\\/][^\\\\/]*$%%'\` ;; $objdir ) thisdir=. ;; esac fi # Try to get the absolute directory name. absdir=\`cd \"\$thisdir\" && pwd\` test -n \"\$absdir\" && thisdir=\"\$absdir\" " if test "$fast_install" = yes; then $ECHO "\ program=lt-'$outputname'$exeext progdir=\"\$thisdir/$objdir\" if test ! -f \"\$progdir/\$program\" || { file=\`ls -1dt \"\$progdir/\$program\" \"\$progdir/../\$program\" 2>/dev/null | ${SED} 1q\`; \\ test \"X\$file\" != \"X\$progdir/\$program\"; }; then file=\"\$\$-\$program\" if test ! -d \"\$progdir\"; then $MKDIR \"\$progdir\" else $RM \"\$progdir/\$file\" fi" $ECHO "\ # relink executable if necessary if test -n \"\$relink_command\"; then if relink_command_output=\`eval \$relink_command 2>&1\`; then : else $ECHO \"\$relink_command_output\" >&2 $RM \"\$progdir/\$file\" exit 1 fi fi $MV \"\$progdir/\$file\" \"\$progdir/\$program\" 2>/dev/null || { $RM \"\$progdir/\$program\"; $MV \"\$progdir/\$file\" \"\$progdir/\$program\"; } $RM \"\$progdir/\$file\" fi" else $ECHO "\ program='$outputname' progdir=\"\$thisdir/$objdir\" " fi $ECHO "\ if test -f \"\$progdir/\$program\"; then" # Export our shlibpath_var if we have one. if test "$shlibpath_overrides_runpath" = yes && test -n "$shlibpath_var" && test -n "$temp_rpath"; then $ECHO "\ # Add our own library path to $shlibpath_var $shlibpath_var=\"$temp_rpath\$$shlibpath_var\" # Some systems cannot cope with colon-terminated $shlibpath_var # The second colon is a workaround for a bug in BeOS R4 sed $shlibpath_var=\`\$ECHO \"X\$$shlibpath_var\" | \$Xsed -e 's/::*\$//'\` export $shlibpath_var " fi # fixup the dll searchpath if we need to. 
if test -n "$dllsearchpath"; then $ECHO "\ # Add the dll search path components to the executable PATH PATH=$dllsearchpath:\$PATH " fi $ECHO "\ if test \"\$libtool_execute_magic\" != \"$magic\"; then # Run the actual program with our arguments. " case $host in # Backslashes separate directories on plain windows *-*-mingw | *-*-os2* | *-cegcc*) $ECHO "\ exec \"\$progdir\\\\\$program\" \${1+\"\$@\"} " ;; *) $ECHO "\ exec \"\$progdir/\$program\" \${1+\"\$@\"} " ;; esac $ECHO "\ \$ECHO \"\$0: cannot exec \$program \$*\" 1>&2 exit 1 fi else # The program doesn't exist. \$ECHO \"\$0: error: \\\`\$progdir/\$program' does not exist\" 1>&2 \$ECHO \"This script is just a wrapper for \$program.\" 1>&2 $ECHO \"See the $PACKAGE documentation for more information.\" 1>&2 exit 1 fi fi\ " } # end: func_emit_wrapper_part2 # func_emit_wrapper [arg=no] # # Emit a libtool wrapper script on stdout. # Don't directly open a file because we may want to # incorporate the script contents within a cygwin/mingw # wrapper executable. Must ONLY be called from within # func_mode_link because it depends on a number of variables # set therein. # # ARG is the value that the WRAPPER_SCRIPT_BELONGS_IN_OBJDIR # variable will take. If 'yes', then the emitted script # will assume that the directory in which it is stored is # the $objdir directory. This is a cygwin/mingw-specific # behavior. func_emit_wrapper () { func_emit_wrapper_arg1=no if test -n "$1" ; then func_emit_wrapper_arg1=$1 fi # split this up so that func_emit_cwrapperexe_src # can call each part independently. func_emit_wrapper_part1 "${func_emit_wrapper_arg1}" func_emit_wrapper_part2 "${func_emit_wrapper_arg1}" } # func_to_host_path arg # # Convert paths to host format when used with build tools. # Intended for use with "native" mingw (where libtool itself # is running under the msys shell), or in the following cross- # build environments: # $build $host # mingw (msys) mingw [e.g. 
native] # cygwin mingw # *nix + wine mingw # where wine is equipped with the `winepath' executable. # In the native mingw case, the (msys) shell automatically # converts paths for any non-msys applications it launches, # but that facility isn't available from inside the cwrapper. # Similar accommodations are necessary for $host mingw and # $build cygwin. Calling this function does no harm for other # $host/$build combinations not listed above. # # ARG is the path (on $build) that should be converted to # the proper representation for $host. The result is stored # in $func_to_host_path_result. func_to_host_path () { func_to_host_path_result="$1" if test -n "$1" ; then case $host in *mingw* ) lt_sed_naive_backslashify='s|\\\\*|\\|g;s|/|\\|g;s|\\|\\\\|g' case $build in *mingw* ) # actually, msys # awkward: cmd appends spaces to result lt_sed_strip_trailing_spaces="s/[ ]*\$//" func_to_host_path_tmp1=`( cmd //c echo "$1" |\ $SED -e "$lt_sed_strip_trailing_spaces" ) 2>/dev/null || echo ""` func_to_host_path_result=`echo "$func_to_host_path_tmp1" |\ $SED -e "$lt_sed_naive_backslashify"` ;; *cygwin* ) func_to_host_path_tmp1=`cygpath -w "$1"` func_to_host_path_result=`echo "$func_to_host_path_tmp1" |\ $SED -e "$lt_sed_naive_backslashify"` ;; * ) # Unfortunately, winepath does not exit with a non-zero # error code, so we are forced to check the contents of # stdout. On the other hand, if the command is not # found, the shell will set an exit code of 127 and print # *an error message* to stdout. So we must check for both # error code of zero AND non-empty stdout, which explains # the odd construction: func_to_host_path_tmp1=`winepath -w "$1" 2>/dev/null` if test "$?" -eq 0 && test -n "${func_to_host_path_tmp1}"; then func_to_host_path_result=`echo "$func_to_host_path_tmp1" |\ $SED -e "$lt_sed_naive_backslashify"` else # Allow warning below. 
                func_to_host_path_result=""
              fi
              ;;
          esac
          if test -z "$func_to_host_path_result" ; then
            func_error "Could not determine host path corresponding to"
            func_error "  '$1'"
            func_error "Continuing, but uninstalled executables may not work."
            # Fallback:
            func_to_host_path_result="$1"
          fi
          ;;
      esac
    fi
}
# end: func_to_host_path

# func_to_host_pathlist arg
#
# Convert pathlists to host format when used with build tools.
# See func_to_host_path(), above. This function supports the
# following $build/$host combinations (but does no harm for
# combinations not listed here):
#    $build          $host
#    mingw (msys)    mingw  [e.g. native]
#    cygwin          mingw
#    *nix + wine     mingw
#
# Path separators are also converted from $build format to
# $host format. If ARG begins or ends with a path separator
# character, it is preserved (but converted to $host format)
# on output.
#
# ARG is a pathlist (on $build) that should be converted to
# the proper representation on $host. The result is stored
# in $func_to_host_pathlist_result.
func_to_host_pathlist ()
{
  func_to_host_pathlist_result="$1"
  if test -n "$1" ; then
    case $host in
      *mingw* )
        lt_sed_naive_backslashify='s|\\\\*|\\|g;s|/|\\|g;s|\\|\\\\|g'
        # Remove leading and trailing path separator characters from
        # ARG. msys behavior is inconsistent here, cygpath turns them
        # into '.;' and ';.', and winepath ignores them completely.
        func_to_host_pathlist_tmp2="$1"
        # Once set for this call, this variable should not be
        # reassigned. It is used in the fallback case.
        func_to_host_pathlist_tmp1=`echo "$func_to_host_pathlist_tmp2" |\
          $SED -e 's|^:*||' -e 's|:*$||'`
        case $build in
          *mingw* ) # Actually, msys.
            # Awkward: cmd appends spaces to result.
lt_sed_strip_trailing_spaces="s/[ ]*\$//" func_to_host_pathlist_tmp2=`( cmd //c echo "$func_to_host_pathlist_tmp1" |\ $SED -e "$lt_sed_strip_trailing_spaces" ) 2>/dev/null || echo ""` func_to_host_pathlist_result=`echo "$func_to_host_pathlist_tmp2" |\ $SED -e "$lt_sed_naive_backslashify"` ;; *cygwin* ) func_to_host_pathlist_tmp2=`cygpath -w -p "$func_to_host_pathlist_tmp1"` func_to_host_pathlist_result=`echo "$func_to_host_pathlist_tmp2" |\ $SED -e "$lt_sed_naive_backslashify"` ;; * ) # unfortunately, winepath doesn't convert pathlists func_to_host_pathlist_result="" func_to_host_pathlist_oldIFS=$IFS IFS=: for func_to_host_pathlist_f in $func_to_host_pathlist_tmp1 ; do IFS=$func_to_host_pathlist_oldIFS if test -n "$func_to_host_pathlist_f" ; then func_to_host_path "$func_to_host_pathlist_f" if test -n "$func_to_host_path_result" ; then if test -z "$func_to_host_pathlist_result" ; then func_to_host_pathlist_result="$func_to_host_path_result" else func_to_host_pathlist_result="$func_to_host_pathlist_result;$func_to_host_path_result" fi fi fi IFS=: done IFS=$func_to_host_pathlist_oldIFS ;; esac if test -z "$func_to_host_pathlist_result" ; then func_error "Could not determine the host path(s) corresponding to" func_error " '$1'" func_error "Continuing, but uninstalled executables may not work." # Fallback. This may break if $1 contains DOS-style drive # specifications. The fix is not to complicate the expression # below, but for the user to provide a working wine installation # with winepath so that path translation in the cross-to-mingw # case works properly. 
          lt_replace_pathsep_nix_to_dos="s|:|;|g"
          func_to_host_pathlist_result=`echo "$func_to_host_pathlist_tmp1" |\
            $SED -e "$lt_replace_pathsep_nix_to_dos"`
        fi
        # Now, add the leading and trailing path separators back
        case "$1" in
          :* ) func_to_host_pathlist_result=";$func_to_host_pathlist_result"
            ;;
        esac
        case "$1" in
          *: ) func_to_host_pathlist_result="$func_to_host_pathlist_result;"
            ;;
        esac
        ;;
    esac
  fi
}
# end: func_to_host_pathlist

# func_emit_cwrapperexe_src
# emit the source code for a wrapper executable on stdout
# Must ONLY be called from within func_mode_link because
# it depends on a number of variables set therein.
func_emit_cwrapperexe_src ()
{
	cat <<EOF
/* $cwrappersource - temporary wrapper executable for $objdir/$outputname
   Generated by $PROGRAM (GNU $PACKAGE$TIMESTAMP) $VERSION

   The $output program cannot be directly executed until all the libtool
   libraries that it depends on are installed.

   This wrapper executable should never be moved out of the build directory.
   If it is, it will not operate correctly.

   Currently, it simply execs the wrapper *script* "$SHELL $output",
   but could eventually absorb all of the script's functionality and
   exec $objdir/$outputname directly.
*/ EOF cat <<"EOF" #include <stdio.h> #include <stdlib.h> #ifdef _MSC_VER # include <direct.h> # include <process.h> # include <io.h> # define setmode _setmode #else # include <unistd.h> # include <stdint.h> # ifdef __CYGWIN__ # include <io.h> # define HAVE_SETENV # ifdef __STRICT_ANSI__ char *realpath (const char *, char *); int putenv (char *); int setenv (const char *, const char *, int); # endif # endif #endif #include <malloc.h> #include <stdarg.h> #include <assert.h> #include <string.h> #include <ctype.h> #include <errno.h> #include <fcntl.h> #include <sys/stat.h> #if defined(PATH_MAX) # define LT_PATHMAX PATH_MAX #elif defined(MAXPATHLEN) # define LT_PATHMAX MAXPATHLEN #else # define LT_PATHMAX 1024 #endif #ifndef S_IXOTH # define S_IXOTH 0 #endif #ifndef S_IXGRP # define S_IXGRP 0 #endif #ifdef _MSC_VER # define S_IXUSR _S_IEXEC # define stat _stat # ifndef _INTPTR_T_DEFINED # define intptr_t int # endif #endif #ifndef DIR_SEPARATOR # define DIR_SEPARATOR '/' # define PATH_SEPARATOR ':' #endif #if defined (_WIN32) || defined (__MSDOS__) || defined (__DJGPP__) || \ defined (__OS2__) # define HAVE_DOS_BASED_FILE_SYSTEM # define FOPEN_WB "wb" # ifndef DIR_SEPARATOR_2 # define DIR_SEPARATOR_2 '\\' # endif # ifndef PATH_SEPARATOR_2 # define PATH_SEPARATOR_2 ';' # endif #endif #ifndef DIR_SEPARATOR_2 # define IS_DIR_SEPARATOR(ch) ((ch) == DIR_SEPARATOR) #else /* DIR_SEPARATOR_2 */ # define IS_DIR_SEPARATOR(ch) \ (((ch) == DIR_SEPARATOR) || ((ch) == DIR_SEPARATOR_2)) #endif /* DIR_SEPARATOR_2 */ #ifndef PATH_SEPARATOR_2 # define IS_PATH_SEPARATOR(ch) ((ch) == PATH_SEPARATOR) #else /* PATH_SEPARATOR_2 */ # define IS_PATH_SEPARATOR(ch) ((ch) == PATH_SEPARATOR_2) #endif /* PATH_SEPARATOR_2 */ #ifdef __CYGWIN__ # define FOPEN_WB "wb" #endif #ifndef FOPEN_WB # define FOPEN_WB "w" #endif #ifndef _O_BINARY # define _O_BINARY 0 #endif #define XMALLOC(type, num) ((type *) xmalloc ((num) * sizeof(type))) #define XFREE(stale) do { \ if (stale) { free ((void *) stale); stale 
= 0; } \ } while (0) #undef LTWRAPPER_DEBUGPRINTF #if defined DEBUGWRAPPER # define LTWRAPPER_DEBUGPRINTF(args) ltwrapper_debugprintf args static void ltwrapper_debugprintf (const char *fmt, ...) { va_list args; va_start (args, fmt); (void) vfprintf (stderr, fmt, args); va_end (args); } #else # define LTWRAPPER_DEBUGPRINTF(args) #endif const char *program_name = NULL; void *xmalloc (size_t num); char *xstrdup (const char *string); const char *base_name (const char *name); char *find_executable (const char *wrapper); char *chase_symlinks (const char *pathspec); int make_executable (const char *path); int check_executable (const char *path); char *strendzap (char *str, const char *pat); void lt_fatal (const char *message, ...); void lt_setenv (const char *name, const char *value); char *lt_extend_str (const char *orig_value, const char *add, int to_end); void lt_opt_process_env_set (const char *arg); void lt_opt_process_env_prepend (const char *arg); void lt_opt_process_env_append (const char *arg); int lt_split_name_value (const char *arg, char** name, char** value); void lt_update_exe_path (const char *name, const char *value); void lt_update_lib_path (const char *name, const char *value); static const char *script_text_part1 = EOF func_emit_wrapper_part1 yes | $SED -e 's/\([\\"]\)/\\\1/g' \ -e 's/^/ "/' -e 's/$/\\n"/' echo ";" cat <<EOF static const char *script_text_part2 = EOF func_emit_wrapper_part2 yes | $SED -e 's/\([\\"]\)/\\\1/g' \ -e 's/^/ "/' -e 's/$/\\n"/' echo ";" cat <<EOF const char * MAGIC_EXE = "$magic_exe"; const char * LIB_PATH_VARNAME = "$shlibpath_var"; EOF if test "$shlibpath_overrides_runpath" = yes && test -n "$shlibpath_var" && test -n "$temp_rpath"; then func_to_host_pathlist "$temp_rpath" cat <<EOF const char * LIB_PATH_VALUE = "$func_to_host_pathlist_result"; EOF else cat <<"EOF" const char * LIB_PATH_VALUE = ""; EOF fi if test -n "$dllsearchpath"; then func_to_host_pathlist "$dllsearchpath:" cat <<EOF const char * EXE_PATH_VARNAME = 
"PATH"; const char * EXE_PATH_VALUE = "$func_to_host_pathlist_result"; EOF else cat <<"EOF" const char * EXE_PATH_VARNAME = ""; const char * EXE_PATH_VALUE = ""; EOF fi if test "$fast_install" = yes; then cat <<EOF const char * TARGET_PROGRAM_NAME = "lt-$outputname"; /* hopefully, no .exe */ EOF else cat <<EOF const char * TARGET_PROGRAM_NAME = "$outputname"; /* hopefully, no .exe */ EOF fi cat <<"EOF" #define LTWRAPPER_OPTION_PREFIX "--lt-" #define LTWRAPPER_OPTION_PREFIX_LENGTH 5 static const size_t opt_prefix_len = LTWRAPPER_OPTION_PREFIX_LENGTH; static const char *ltwrapper_option_prefix = LTWRAPPER_OPTION_PREFIX; static const char *dumpscript_opt = LTWRAPPER_OPTION_PREFIX "dump-script"; static const size_t env_set_opt_len = LTWRAPPER_OPTION_PREFIX_LENGTH + 7; static const char *env_set_opt = LTWRAPPER_OPTION_PREFIX "env-set"; /* argument is putenv-style "foo=bar", value of foo is set to bar */ static const size_t env_prepend_opt_len = LTWRAPPER_OPTION_PREFIX_LENGTH + 11; static const char *env_prepend_opt = LTWRAPPER_OPTION_PREFIX "env-prepend"; /* argument is putenv-style "foo=bar", new value of foo is bar${foo} */ static const size_t env_append_opt_len = LTWRAPPER_OPTION_PREFIX_LENGTH + 10; static const char *env_append_opt = LTWRAPPER_OPTION_PREFIX "env-append"; /* argument is putenv-style "foo=bar", new value of foo is ${foo}bar */ int main (int argc, char *argv[]) { char **newargz; int newargc; char *tmp_pathspec; char *actual_cwrapper_path; char *actual_cwrapper_name; char *target_name; char *lt_argv_zero; intptr_t rval = 127; int i; program_name = (char *) xstrdup (base_name (argv[0])); LTWRAPPER_DEBUGPRINTF (("(main) argv[0] : %s\n", argv[0])); LTWRAPPER_DEBUGPRINTF (("(main) program_name : %s\n", program_name)); /* very simple arg parsing; don't want to rely on getopt */ for (i = 1; i < argc; i++) { if (strcmp (argv[i], dumpscript_opt) == 0) { EOF case "$host" in *mingw* | *cygwin* ) # make stdout use "unix" line endings echo " setmode(1,_O_BINARY);" 
;; esac cat <<"EOF" printf ("%s", script_text_part1); printf ("%s", script_text_part2); return 0; } } newargz = XMALLOC (char *, argc + 1); tmp_pathspec = find_executable (argv[0]); if (tmp_pathspec == NULL) lt_fatal ("Couldn't find %s", argv[0]); LTWRAPPER_DEBUGPRINTF (("(main) found exe (before symlink chase) at : %s\n", tmp_pathspec)); actual_cwrapper_path = chase_symlinks (tmp_pathspec); LTWRAPPER_DEBUGPRINTF (("(main) found exe (after symlink chase) at : %s\n", actual_cwrapper_path)); XFREE (tmp_pathspec); actual_cwrapper_name = xstrdup( base_name (actual_cwrapper_path)); strendzap (actual_cwrapper_path, actual_cwrapper_name); /* wrapper name transforms */ strendzap (actual_cwrapper_name, ".exe"); tmp_pathspec = lt_extend_str (actual_cwrapper_name, ".exe", 1); XFREE (actual_cwrapper_name); actual_cwrapper_name = tmp_pathspec; tmp_pathspec = 0; /* target_name transforms -- use actual target program name; might have lt- prefix */ target_name = xstrdup (base_name (TARGET_PROGRAM_NAME)); strendzap (target_name, ".exe"); tmp_pathspec = lt_extend_str (target_name, ".exe", 1); XFREE (target_name); target_name = tmp_pathspec; tmp_pathspec = 0; LTWRAPPER_DEBUGPRINTF (("(main) libtool target name: %s\n", target_name)); EOF cat <<EOF newargz[0] = XMALLOC (char, (strlen (actual_cwrapper_path) + strlen ("$objdir") + 1 + strlen (actual_cwrapper_name) + 1)); strcpy (newargz[0], actual_cwrapper_path); strcat (newargz[0], "$objdir"); strcat (newargz[0], "/"); EOF cat <<"EOF" /* stop here, and copy so we don't have to do this twice */ tmp_pathspec = xstrdup (newargz[0]); /* do NOT want the lt- prefix here, so use actual_cwrapper_name */ strcat (newargz[0], actual_cwrapper_name); /* DO want the lt- prefix here if it exists, so use target_name */ lt_argv_zero = lt_extend_str (tmp_pathspec, target_name, 1); XFREE (tmp_pathspec); tmp_pathspec = NULL; EOF case $host_os in mingw*) cat <<"EOF" { char* p; while ((p = strchr (newargz[0], '\\')) != NULL) { *p = '/'; } while ((p = strchr 
(lt_argv_zero, '\\')) != NULL) { *p = '/'; } } EOF ;; esac cat <<"EOF" XFREE (target_name); XFREE (actual_cwrapper_path); XFREE (actual_cwrapper_name); lt_setenv ("BIN_SH", "xpg4"); /* for Tru64 */ lt_setenv ("DUALCASE", "1"); /* for MSK sh */ lt_update_lib_path (LIB_PATH_VARNAME, LIB_PATH_VALUE); lt_update_exe_path (EXE_PATH_VARNAME, EXE_PATH_VALUE); newargc=0; for (i = 1; i < argc; i++) { if (strncmp (argv[i], env_set_opt, env_set_opt_len) == 0) { if (argv[i][env_set_opt_len] == '=') { const char *p = argv[i] + env_set_opt_len + 1; lt_opt_process_env_set (p); } else if (argv[i][env_set_opt_len] == '\0' && i + 1 < argc) { lt_opt_process_env_set (argv[++i]); /* don't copy */ } else lt_fatal ("%s missing required argument", env_set_opt); continue; } if (strncmp (argv[i], env_prepend_opt, env_prepend_opt_len) == 0) { if (argv[i][env_prepend_opt_len] == '=') { const char *p = argv[i] + env_prepend_opt_len + 1; lt_opt_process_env_prepend (p); } else if (argv[i][env_prepend_opt_len] == '\0' && i + 1 < argc) { lt_opt_process_env_prepend (argv[++i]); /* don't copy */ } else lt_fatal ("%s missing required argument", env_prepend_opt); continue; } if (strncmp (argv[i], env_append_opt, env_append_opt_len) == 0) { if (argv[i][env_append_opt_len] == '=') { const char *p = argv[i] + env_append_opt_len + 1; lt_opt_process_env_append (p); } else if (argv[i][env_append_opt_len] == '\0' && i + 1 < argc) { lt_opt_process_env_append (argv[++i]); /* don't copy */ } else lt_fatal ("%s missing required argument", env_append_opt); continue; } if (strncmp (argv[i], ltwrapper_option_prefix, opt_prefix_len) == 0) { /* however, if there is an option in the LTWRAPPER_OPTION_PREFIX namespace, but it is not one of the ones we know about and have already dealt with, above (inluding dump-script), then report an error. Otherwise, targets might begin to believe they are allowed to use options in the LTWRAPPER_OPTION_PREFIX namespace. 
The first time any user complains about this, we'll need to make LTWRAPPER_OPTION_PREFIX a configure-time option or a configure.ac-settable value. */ lt_fatal ("Unrecognized option in %s namespace: '%s'", ltwrapper_option_prefix, argv[i]); } /* otherwise ... */ newargz[++newargc] = xstrdup (argv[i]); } newargz[++newargc] = NULL; LTWRAPPER_DEBUGPRINTF (("(main) lt_argv_zero : %s\n", (lt_argv_zero ? lt_argv_zero : "<NULL>"))); for (i = 0; i < newargc; i++) { LTWRAPPER_DEBUGPRINTF (("(main) newargz[%d] : %s\n", i, (newargz[i] ? newargz[i] : "<NULL>"))); } EOF case $host_os in mingw*) cat <<"EOF" /* execv doesn't actually work on mingw as expected on unix */ rval = _spawnv (_P_WAIT, lt_argv_zero, (const char * const *) newargz); if (rval == -1) { /* failed to start process */ LTWRAPPER_DEBUGPRINTF (("(main) failed to launch target \"%s\": errno = %d\n", lt_argv_zero, errno)); return 127; } return rval; EOF ;; *) cat <<"EOF" execv (lt_argv_zero, newargz); return rval; /* =127, but avoids unused variable warning */ EOF ;; esac cat <<"EOF" } void * xmalloc (size_t num) { void *p = (void *) malloc (num); if (!p) lt_fatal ("Memory exhausted"); return p; } char * xstrdup (const char *string) { return string ? strcpy ((char *) xmalloc (strlen (string) + 1), string) : NULL; } const char * base_name (const char *name) { const char *base; #if defined (HAVE_DOS_BASED_FILE_SYSTEM) /* Skip over the disk name in MSDOS pathnames. */ if (isalpha ((unsigned char) name[0]) && name[1] == ':') name += 2; #endif for (base = name; *name; name++) if (IS_DIR_SEPARATOR (*name)) base = name + 1; return base; } int check_executable (const char *path) { struct stat st; LTWRAPPER_DEBUGPRINTF (("(check_executable) : %s\n", path ? (*path ? 
path : "EMPTY!") : "NULL!")); if ((!path) || (!*path)) return 0; if ((stat (path, &st) >= 0) && (st.st_mode & (S_IXUSR | S_IXGRP | S_IXOTH))) return 1; else return 0; } int make_executable (const char *path) { int rval = 0; struct stat st; LTWRAPPER_DEBUGPRINTF (("(make_executable) : %s\n", path ? (*path ? path : "EMPTY!") : "NULL!")); if ((!path) || (!*path)) return 0; if (stat (path, &st) >= 0) { rval = chmod (path, st.st_mode | S_IXOTH | S_IXGRP | S_IXUSR); } return rval; } /* Searches for the full path of the wrapper. Returns newly allocated full path name if found, NULL otherwise Does not chase symlinks, even on platforms that support them. */ char * find_executable (const char *wrapper) { int has_slash = 0; const char *p; const char *p_next; /* static buffer for getcwd */ char tmp[LT_PATHMAX + 1]; int tmp_len; char *concat_name; LTWRAPPER_DEBUGPRINTF (("(find_executable) : %s\n", wrapper ? (*wrapper ? wrapper : "EMPTY!") : "NULL!")); if ((wrapper == NULL) || (*wrapper == '\0')) return NULL; /* Absolute path? */ #if defined (HAVE_DOS_BASED_FILE_SYSTEM) if (isalpha ((unsigned char) wrapper[0]) && wrapper[1] == ':') { concat_name = xstrdup (wrapper); if (check_executable (concat_name)) return concat_name; XFREE (concat_name); } else { #endif if (IS_DIR_SEPARATOR (wrapper[0])) { concat_name = xstrdup (wrapper); if (check_executable (concat_name)) return concat_name; XFREE (concat_name); } #if defined (HAVE_DOS_BASED_FILE_SYSTEM) } #endif for (p = wrapper; *p; p++) if (*p == '/') { has_slash = 1; break; } if (!has_slash) { /* no slashes; search PATH */ const char *path = getenv ("PATH"); if (path != NULL) { for (p = path; *p; p = p_next) { const char *q; size_t p_len; for (q = p; *q; q++) if (IS_PATH_SEPARATOR (*q)) break; p_len = q - p; p_next = (*q == '\0' ? 
q : q + 1); if (p_len == 0) { /* empty path: current directory */ if (getcwd (tmp, LT_PATHMAX) == NULL) lt_fatal ("getcwd failed"); tmp_len = strlen (tmp); concat_name = XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1); memcpy (concat_name, tmp, tmp_len); concat_name[tmp_len] = '/'; strcpy (concat_name + tmp_len + 1, wrapper); } else { concat_name = XMALLOC (char, p_len + 1 + strlen (wrapper) + 1); memcpy (concat_name, p, p_len); concat_name[p_len] = '/'; strcpy (concat_name + p_len + 1, wrapper); } if (check_executable (concat_name)) return concat_name; XFREE (concat_name); } } /* not found in PATH; assume curdir */ } /* Relative path | not found in path: prepend cwd */ if (getcwd (tmp, LT_PATHMAX) == NULL) lt_fatal ("getcwd failed"); tmp_len = strlen (tmp); concat_name = XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1); memcpy (concat_name, tmp, tmp_len); concat_name[tmp_len] = '/'; strcpy (concat_name + tmp_len + 1, wrapper); if (check_executable (concat_name)) return concat_name; XFREE (concat_name); return NULL; } char * chase_symlinks (const char *pathspec) { #ifndef S_ISLNK return xstrdup (pathspec); #else char buf[LT_PATHMAX]; struct stat s; char *tmp_pathspec = xstrdup (pathspec); char *p; int has_symlinks = 0; while (strlen (tmp_pathspec) && !has_symlinks) { LTWRAPPER_DEBUGPRINTF (("checking path component for symlinks: %s\n", tmp_pathspec)); if (lstat (tmp_pathspec, &s) == 0) { if (S_ISLNK (s.st_mode) != 0) { has_symlinks = 1; break; } /* search backwards for last DIR_SEPARATOR */ p = tmp_pathspec + strlen (tmp_pathspec) - 1; while ((p > tmp_pathspec) && (!IS_DIR_SEPARATOR (*p))) p--; if ((p == tmp_pathspec) && (!IS_DIR_SEPARATOR (*p))) { /* no more DIR_SEPARATORS left */ break; } *p = '\0'; } else { char *errstr = strerror (errno); lt_fatal ("Error accessing file %s (%s)", tmp_pathspec, errstr); } } XFREE (tmp_pathspec); if (!has_symlinks) { return xstrdup (pathspec); } tmp_pathspec = realpath (pathspec, buf); if (tmp_pathspec == 0) { lt_fatal 
("Could not follow symlinks for %s", pathspec); } return xstrdup (tmp_pathspec); #endif } char * strendzap (char *str, const char *pat) { size_t len, patlen; assert (str != NULL); assert (pat != NULL); len = strlen (str); patlen = strlen (pat); if (patlen <= len) { str += len - patlen; if (strcmp (str, pat) == 0) *str = '\0'; } return str; } static void lt_error_core (int exit_status, const char *mode, const char *message, va_list ap) { fprintf (stderr, "%s: %s: ", program_name, mode); vfprintf (stderr, message, ap); fprintf (stderr, ".\n"); if (exit_status >= 0) exit (exit_status); } void lt_fatal (const char *message, ...) { va_list ap; va_start (ap, message); lt_error_core (EXIT_FAILURE, "FATAL", message, ap); va_end (ap); } void lt_setenv (const char *name, const char *value) { LTWRAPPER_DEBUGPRINTF (("(lt_setenv) setting '%s' to '%s'\n", (name ? name : "<NULL>"), (value ? value : "<NULL>"))); { #ifdef HAVE_SETENV /* always make a copy, for consistency with !HAVE_SETENV */ char *str = xstrdup (value); setenv (name, str, 1); #else int len = strlen (name) + 1 + strlen (value) + 1; char *str = XMALLOC (char, len); sprintf (str, "%s=%s", name, value); if (putenv (str) != EXIT_SUCCESS) { XFREE (str); } #endif } } char * lt_extend_str (const char *orig_value, const char *add, int to_end) { char *new_value; if (orig_value && *orig_value) { int orig_value_len = strlen (orig_value); int add_len = strlen (add); new_value = XMALLOC (char, add_len + orig_value_len + 1); if (to_end) { strcpy (new_value, orig_value); strcpy (new_value + orig_value_len, add); } else { strcpy (new_value, add); strcpy (new_value + add_len, orig_value); } } else { new_value = xstrdup (add); } return new_value; } int lt_split_name_value (const char *arg, char** name, char** value) { const char *p; int len; if (!arg || !*arg) return 1; p = strchr (arg, (int)'='); if (!p) return 1; *value = xstrdup (++p); len = strlen (arg) - strlen (*value); *name = XMALLOC (char, len); strncpy (*name, arg, 
len-1); (*name)[len - 1] = '\0'; return 0; } void lt_opt_process_env_set (const char *arg) { char *name = NULL; char *value = NULL; if (lt_split_name_value (arg, &name, &value) != 0) { XFREE (name); XFREE (value); lt_fatal ("bad argument for %s: '%s'", env_set_opt, arg); } lt_setenv (name, value); XFREE (name); XFREE (value); } void lt_opt_process_env_prepend (const char *arg) { char *name = NULL; char *value = NULL; char *new_value = NULL; if (lt_split_name_value (arg, &name, &value) != 0) { XFREE (name); XFREE (value); lt_fatal ("bad argument for %s: '%s'", env_prepend_opt, arg); } new_value = lt_extend_str (getenv (name), value, 0); lt_setenv (name, new_value); XFREE (new_value); XFREE (name); XFREE (value); } void lt_opt_process_env_append (const char *arg) { char *name = NULL; char *value = NULL; char *new_value = NULL; if (lt_split_name_value (arg, &name, &value) != 0) { XFREE (name); XFREE (value); lt_fatal ("bad argument for %s: '%s'", env_append_opt, arg); } new_value = lt_extend_str (getenv (name), value, 1); lt_setenv (name, new_value); XFREE (new_value); XFREE (name); XFREE (value); } void lt_update_exe_path (const char *name, const char *value) { LTWRAPPER_DEBUGPRINTF (("(lt_update_exe_path) modifying '%s' by prepending '%s'\n", (name ? name : "<NULL>"), (value ? value : "<NULL>"))); if (name && *name && value && *value) { char *new_value = lt_extend_str (getenv (name), value, 0); /* some systems can't cope with a ':'-terminated path #' */ int len = strlen (new_value); while (((len = strlen (new_value)) > 0) && IS_PATH_SEPARATOR (new_value[len-1])) { new_value[len-1] = '\0'; } lt_setenv (name, new_value); XFREE (new_value); } } void lt_update_lib_path (const char *name, const char *value) { LTWRAPPER_DEBUGPRINTF (("(lt_update_lib_path) modifying '%s' by prepending '%s'\n", (name ? name : "<NULL>"), (value ? 
value : "<NULL>"))); if (name && *name && value && *value) { char *new_value = lt_extend_str (getenv (name), value, 0); lt_setenv (name, new_value); XFREE (new_value); } } EOF } # end: func_emit_cwrapperexe_src # func_mode_link arg... func_mode_link () { $opt_debug case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-cegcc*) # It is impossible to link a dll without this setting, and # we shouldn't force the makefile maintainer to figure out # which system we are compiling for in order to pass an extra # flag for every libtool invocation. # allow_undefined=no # FIXME: Unfortunately, there are problems with the above when trying # to make a dll which has undefined symbols, in which case not # even a static library is built. For now, we need to specify # -no-undefined on the libtool link line when we can be certain # that all symbols are satisfied, otherwise we get a static library. allow_undefined=yes ;; *) allow_undefined=yes ;; esac libtool_args=$nonopt base_compile="$nonopt $@" compile_command=$nonopt finalize_command=$nonopt compile_rpath= finalize_rpath= compile_shlibpath= finalize_shlibpath= convenience= old_convenience= deplibs= old_deplibs= compiler_flags= linker_flags= dllsearchpath= lib_search_path=`pwd` inst_prefix_dir= new_inherited_linker_flags= avoid_version=no dlfiles= dlprefiles= dlself=no export_dynamic=no export_symbols= export_symbols_regex= generated= libobjs= ltlibs= module=no no_install=no objs= non_pic_objects= precious_files_regex= prefer_static_libs=no preload=no prev= prevarg= release= rpath= xrpath= perm_rpath= temp_rpath= thread_safe=no vinfo= vinfo_number=no weak_libs= single_module="${wl}-single_module" func_infer_tag $base_compile # We need to know -static, to get the right output filenames. 
for arg do case $arg in -shared) test "$build_libtool_libs" != yes && \ func_fatal_configuration "can not build a shared library" build_old_libs=no break ;; -all-static | -static | -static-libtool-libs) case $arg in -all-static) if test "$build_libtool_libs" = yes && test -z "$link_static_flag"; then func_warning "complete static linking is impossible in this configuration" fi if test -n "$link_static_flag"; then dlopen_self=$dlopen_self_static fi prefer_static_libs=yes ;; -static) if test -z "$pic_flag" && test -n "$link_static_flag"; then dlopen_self=$dlopen_self_static fi prefer_static_libs=built ;; -static-libtool-libs) if test -z "$pic_flag" && test -n "$link_static_flag"; then dlopen_self=$dlopen_self_static fi prefer_static_libs=yes ;; esac build_libtool_libs=no build_old_libs=yes break ;; esac done # See if our shared archives depend on static archives. test -n "$old_archive_from_new_cmds" && build_old_libs=yes # Go through the arguments, transforming them on the way. while test "$#" -gt 0; do arg="$1" shift func_quote_for_eval "$arg" qarg=$func_quote_for_eval_unquoted_result func_append libtool_args " $func_quote_for_eval_result" # If the previous option needs an argument, assign it. if test -n "$prev"; then case $prev in output) func_append compile_command " @OUTPUT@" func_append finalize_command " @OUTPUT@" ;; esac case $prev in dlfiles|dlprefiles) if test "$preload" = no; then # Add the symbol object into the linking commands. func_append compile_command " @SYMFILE@" func_append finalize_command " @SYMFILE@" preload=yes fi case $arg in *.la | *.lo) ;; # We handle these cases below. 
force) if test "$dlself" = no; then dlself=needless export_dynamic=yes fi prev= continue ;; self) if test "$prev" = dlprefiles; then dlself=yes elif test "$prev" = dlfiles && test "$dlopen_self" != yes; then dlself=yes else dlself=needless export_dynamic=yes fi prev= continue ;; *) if test "$prev" = dlfiles; then dlfiles="$dlfiles $arg" else dlprefiles="$dlprefiles $arg" fi prev= continue ;; esac ;; expsyms) export_symbols="$arg" test -f "$arg" \ || func_fatal_error "symbol file \`$arg' does not exist" prev= continue ;; expsyms_regex) export_symbols_regex="$arg" prev= continue ;; framework) case $host in *-*-darwin*) case "$deplibs " in *" $qarg.ltframework "*) ;; *) deplibs="$deplibs $qarg.ltframework" # this is fixed later ;; esac ;; esac prev= continue ;; inst_prefix) inst_prefix_dir="$arg" prev= continue ;; objectlist) if test -f "$arg"; then save_arg=$arg moreargs= for fil in `cat "$save_arg"` do # moreargs="$moreargs $fil" arg=$fil # A libtool-controlled object. # Check to see that this really is a libtool object. if func_lalib_unsafe_p "$arg"; then pic_object= non_pic_object= # Read the .lo file func_source "$arg" if test -z "$pic_object" || test -z "$non_pic_object" || test "$pic_object" = none && test "$non_pic_object" = none; then func_fatal_error "cannot find name of object for \`$arg'" fi # Extract subdirectory from the argument. func_dirname "$arg" "/" "" xdir="$func_dirname_result" if test "$pic_object" != none; then # Prepend the subdirectory the object is found in. pic_object="$xdir$pic_object" if test "$prev" = dlfiles; then if test "$build_libtool_libs" = yes && test "$dlopen_support" = yes; then dlfiles="$dlfiles $pic_object" prev= continue else # If libtool objects are unsupported, then we need to preload. prev=dlprefiles fi fi # CHECK ME: I think I busted this. -Ossama if test "$prev" = dlprefiles; then # Preload the old-style object. dlprefiles="$dlprefiles $pic_object" prev= fi # A PIC object. 
func_append libobjs " $pic_object" arg="$pic_object" fi # Non-PIC object. if test "$non_pic_object" != none; then # Prepend the subdirectory the object is found in. non_pic_object="$xdir$non_pic_object" # A standard non-PIC object func_append non_pic_objects " $non_pic_object" if test -z "$pic_object" || test "$pic_object" = none ; then arg="$non_pic_object" fi else # If the PIC object exists, use it instead. # $xdir was prepended to $pic_object above. non_pic_object="$pic_object" func_append non_pic_objects " $non_pic_object" fi else # Only an error if not doing a dry-run. if $opt_dry_run; then # Extract subdirectory from the argument. func_dirname "$arg" "/" "" xdir="$func_dirname_result" func_lo2o "$arg" pic_object=$xdir$objdir/$func_lo2o_result non_pic_object=$xdir$func_lo2o_result func_append libobjs " $pic_object" func_append non_pic_objects " $non_pic_object" else func_fatal_error "\`$arg' is not a valid libtool object" fi fi done else func_fatal_error "link input file \`$arg' does not exist" fi arg=$save_arg prev= continue ;; precious_regex) precious_files_regex="$arg" prev= continue ;; release) release="-$arg"
Processing archives offline?

Hi there, I host on a cloud provider, which gives me some flexibility regarding testing database changes and disruptive Piwik changes. I recently looked into upgrading our instance of Piwik and found that the DB upgrade process takes about 6.5 hours to do, which is acceptable for me in that I can do this overnight, but it is quite lengthy and it does fail halfway through, requiring me to manually delete a table before it will continue. I then looked into whether we could turn off ‘trigger from the browser’ and set up auto archiving, as we have somewhere just under 1000 tracked websites; I can’t load All Websites in Piwik 1.1.1 as it hits the PHP script timeout limit, and it is suggested to turn this off and run the auto archiver from cron. This script takes about a week to run the first time, and it does fail too, so I am wondering if I can get the processing done and then import the tables into the old DB once I have updated it? Thanks in advance, Rick

Edit: DB size is 19GB

[quote=Rick]it does fail halfway through, requiring me to manually delete a table before it will continue.[/quote] How does it fail? Piwik 1.1.1: upgrade ASAP to 2.0.3.

[quote=matt] how does it fail?[/quote] It says that the piwik_report table already exists. Prior to starting the upgrade of the database, this table doesn’t actually exist in the database. If I rename the table, or delete it, and rerun the upgrader, it completes after about another two hours.

*** Update ***

Database Upgrade Required

Your Piwik database is out-of-date, and must be upgraded before you can continue. Piwik database will be upgraded from version 1.1.1 to the new version 2.0.3. The database upgrade process may take a while, so please be patient.
[X] Critical Error during the update process:

* /home/user/www/piwik/core/Updates/1.8.3-b1.php: Error trying to execute the query 'CREATE TABLE `piwik_report` ( `idreport` INT(11) NOT NULL AUTO_INCREMENT, `idsite` INTEGER(11) NOT NULL, `login` VARCHAR(100) NOT NULL, `description` VARCHAR(255) NOT NULL, `period` VARCHAR(10) NOT NULL, `type` VARCHAR(10) NOT NULL, `format` VARCHAR(10) NOT NULL, `reports` TEXT NOT NULL, `parameters` TEXT NULL, `ts_created` TIMESTAMP NULL, `ts_last_sent` TIMESTAMP NULL, `deleted` tinyint(4) NOT NULL default 0, PRIMARY KEY (`idreport`) ) DEFAULT CHARSET=utf8'. The error was: SQLSTATE[42S01]: Base table or view already exists: 1050 Table 'piwik_report' already exists

The above is the core error message. It should help explain the cause, but if you require further help please:

* Check the [ Piwik FAQ ] which explains most common errors during update.
* Ask your system administrator - they may be able to help you with the error which is most likely related to your server or MySQL setup.

If you are an advanced user and encounter an error in the database upgrade:

* identify and correct the source of the problem (e.g., memory_limit or max_execution_time)
* execute the remaining queries in the update that failed
* manually update the `option` table in your Piwik database, setting the value of version_core to the version of the failed update
* re-run the updater (through the browser or command-line) to continue with the remaining updates
* report the problem (and solution) so that Piwik can be improved

I'm trying, however, I'm also trying to keep the amount of disruption to a minimum. I will be upgrading directly to 2.0.3 or whatever version is in the latest.zip at the time.

Edit the file 1.8.3-b1.php and comment out the query from the file, and try again?

I performed the upgrade this weekend just gone.
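If the updater halts on this 1050 error again, the workaround described in the thread (rename or drop the stale table, then re-run the updater) can also be done directly in MySQL. A hedged sketch, assuming the default `piwik_` table prefix; back up the database first, and treat the `version_core` update as an illustration of the advice in the error message above, not an official Piwik procedure:

```sql
-- Back up the database before touching anything.
-- The updater fails because a stale piwik_report table is already present;
-- move it aside so the CREATE TABLE step can succeed, then re-run the updater.
RENAME TABLE piwik_report TO piwik_report_backup;

-- If the updater still stops, record the version of the failed update step
-- in the option table so it resumes from the next step. The value shown is
-- the update file name from the error above (illustrative).
UPDATE piwik_option
   SET option_value = '1.8.3-b1'
 WHERE option_name = 'version_core';
```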
Unfortunately, after getting it to fail at the same point repeatedly, I decided to optimise the DB tables before I ran the upgrade to try to speed things up. Big mistake :frowning: It broke the upgrade at a different point and I ended up running each of the SQL statements manually. Not fun when you’ve got 13m rows in some of the tables and some of the queries take 45 mins :frowning: This will work fine from now on. Just stay up to date every few months :slight_smile:
Just how can Sleep-Associated Health problems Affect Practical Reputation Based on Sex? Just how can Sleep-Associated Health problems Affect Practical Reputation Based on Sex? Methods: An excellent retrospective clinical audit away from 744 Australian customers across seven private standard methods between is used. Patients completed an electronic digital survey as an element of its regime visit, which included the fresh new Epworth Sleepiness Scale (ESS), the functional Outcomes of Sleep Questionnaire 10 (FOSQ-10), and other questions regarding the end result of the sleep situation. Brand new ratio of men and you will females with ESS and you may FOSQ-ten scores with the issues away from daytime sleepiness and load from periods because of drowsiness, correspondingly, was basically opposed, as well as said differences between the latest men and women during the memories, amount, difficulties with relationship, impact disheartened, and you can sleep disorders. Results: On presentation, females were more likely to have sleeping disorders associated with daytime sleepiness (median ESS score of 9 for females versus 8 for males, P = .038; proportion ESS > 9 was 49.0% for females versus 36.9% for males, P = .003). Women were also more likely to report an increased burden of symptoms due to sleepiness compared to men, as shown by lower FOSQ-10 scores (P < .001). Secondary outcome measures showed that females were more likely to feel excessively tired and depressed, have difficulties with memory and concentration, and have trouble sleeping at night. Snoring kept partners awake in roughly the same proportion of males and females, and a larger proportion of the partners of males were forced out of the room. Conclusions: Sleep-relevant illnesses both reveal inside and you will impact the lifetime out of men and women in different ways. 
Sleep medical researchers is to recognize such variations with the all of the amounts of condition avoidance and you can fitness promotion regarding patient education, so you can medical diagnosis and you may government to evolve quality of life of these having sleep-relevant health issues. Citation: Boccabella A great, Malouf J. Just how can sleep-related illnesses apply at practical standing predicated on sex? J Clin Bed Med. 2017;13(5):685–692. Inclusion Trouble sleeping notably apply at an effective person’s health insurance and really-are. Sleep-associated issues and you can sleep disorders can lead to too-much day drowsiness, connect with state of mind and you may concentration, improve likelihood of automobile crashes, and you will protect against your ability to work effectively and you can safely. step 1 –step three Sleep disorders may also lead to a selection of neurological, aerobic, and mental health trouble. step 1 Obstructive snore (OSA), the most common insomnia, is with the blood you can try here pressure, cardiovascular illnesses, and you can stroke. step one –cuatro Yet not, trouble sleeping do not entirely impact the diligent in addition to their fitness. Those who anti snoring often interrupt the partner’s bed, resulting in dating issues and closeness dilemmas. As well as extreme private and social weight, these things sign up to a boost in medical care capital utilization. 5 A big body off facts implies that trouble sleeping, including OSA, manifest in a different way into the males and females. step three,4,six These variations occur most notably regarding the incidence, pathophysiology, cues, attacks, and you will severity of the state. 3,6 Snoring incidence expands for ladies inside later on existence, such just after menopause. 
eight The reasons having like distinctions are argued, but are related to hormonal impacts, anatomical and you can emotional variations in the top airway, some other breathing auto mechanics, and the entire body pounds shipments. 4,six,8 Temporary Conclusion Most recent Training/Investigation Rationale: Men and women feel bed-associated illnesses in a different way in terms of symptomatology, incidence, and you may pathophysiology. The main function of this research were to see the change in useful status anywhere between genders once they show standard practitioners. Investigation Impact: Our studies have shown that people do have additional practical condition toward speech to standard practitioners. A bigger ratio of females claimed difficulties with anxiety, trouble sleeping, focus, memories, and you will affect dating than the boys. Differences also are noticed in the way in which OSA try handled. The newest ratio of men to girls likely to sleep laboratories could have been reported to be ranging from 8:1 and you can 10:step 1, in spite of the ratio of times are estimated on between 2:1 and you will step 3:step one. 3,seven Traditionally, bed studies have predominantly become conducted when you look at the male communities, step 3 and consequent evaluative, symptomatic, and you may government recommendations had been conceived predicated on including browse. It’s postulated you to lady expose which have nonspecific periods one to disagree from vintage symptomatology. 4,nine Therefore, girls shall be misdiagnosed with other disorders eg depression. cuatro That it sex prejudice can get account fully for a number of the underdiagnosis and mismanagement out of OSA in women. step 3 Other causes is you to people can get introduce reduced seem to once the of the personal stigma of this snoring, whilst defies the typical female label, or you to definitely snoring is far more severe for the guys. cuatro Leave a Comment Your email address will not be published. 
LARGE-CELL MODEL FOR RADIATION HEAT TRANSFER IN MULTIPHASE SYSTEMS DOI: 10.1615/thermopedia.000118 Leonid A. Dombrovsky Following from: Computational models for radiative transfer in disperse systems Leading to: Thermal radiation modeling in melt-coolant interaction The radiation model discussed in this section has been recently developed by Dombrovsky (2007a) for multiphase flows typical of the so-called fuel-coolant interaction (FCI), when a high-temperature core melt (80%UO2 + 20%ZrO2) falls into a water pool. Various aspects of FCI have been widely investigated during the last two decades because of the possibility of severe accidents in light-water nuclear reactors. The complexity of the different stages of high-temperature core-melt interaction with water is one of the reasons for the present-day state of the art; some important physical processes still have not been considered in detail. The efforts of many researchers have focused on hydrodynamic simulation of melt jet breakup (Dinh et al., 1999; Bürger, 2006; Pohlner et al., 2006) and on the specific problems of steam explosions (Fletcher and Anderson, 1990; Theofanous, 1995; Fletcher, 1995; Berthoud, 2000). At the same time, radiation heat transfer in a multiphase medium containing polydisperse corium particles at temperatures of about 2500-3000 K has not been a subject of detailed analysis. The papers by Dinh et al. (1999), Fletcher (1999), and Dombrovsky (1999a, 2000a) were probably the first publications in which the important role of radiation heat transfer was discussed. It was noted that a part of the thermal radiation emitted by particles can be absorbed far from the radiation sources because of the semitransparency of water in the short-wave range.
The general problem of radiation heat transfer between corium particles and ambient water can be divided into the following problems of different scales: thermal radiation from a single particle through a steam blanket to ambient water and radiation heat transfer in a large-scale volume containing numerous corium particles, steam bubbles, and water droplets. One can show that solutions to these problems can be incorporated in a general physical and computational model, as was done for similar problems of radiation heat transfer in other disperse systems (Dombrovsky, 1996). The single-particle problem has been analyzed in some detail by Dombrovsky (1999a, 2000a). The main focus was given to the significant contribution of electromagnetic wave effects in the case of very thin steam layers. The effect of semitransparency of nonisothermal oxide particles on the thermal radiation has also been studied by Dombrovsky (1999b, 2000b, 2002). The resulting physical features of particle solidification have been reported recently by Dombrovsky (2007b) and Dombrovsky and Dinh (2008). To the best of our knowledge, the first attempt to calculate radiation heat transfer in water containing corium particles was reported by Yuen (2004). It was assumed that there is no radiation scattering in the medium. The spectral radiative properties of the melt particles of various temperatures and sizes were ignored in this paper, and all the particles were considered as the sources of black-body radiation. The calculations by Yuen (2004) were based on the formal zonal method, which seems to be a poor choice for the problem considered. A more sophisticated model for radiation heat-transfer calculation in water containing numerous polydisperse corium particles of different temperatures and polydisperse steam bubbles has been suggested by Dombrovsky (2007a). 
This model, called the large-cell radiation model (LCRM), is sufficiently simple to be easily implemented into computational fluid dynamics (CFD) codes for multiphase flow calculations (Dombrovsky et al., 2009). The computational results for realistic conditions are considered in the article Thermal radiation modeling in melt-coolant interaction. In the present article, we focus on the radiation model. Two-Band Model with Conventional Semitransparency and Opacity Regions To suggest an adequate model of radiation heat transfer in water containing hot corium particles and steam bubbles, one should take into account specific optical properties of water in the visible and near-infrared spectral ranges. It is well known that water is semitransparent in a short-wave range, and there is a strong absorption band at the wavelength λ = 3 μm (Hale and Querry, 1973). To estimate the role of nonlocal radiation effects, one can introduce the characteristic penetration depth of the collimated radiation in water: lλ = 1/αw. The spectral dependence of lλ in the most interesting intermediate range of 0.8 < λ < 1.4 μm is illustrated in Fig. 1. One can see that lλ decreases from about 0.5 m at the visible range boundary λ = 0.8 μm to lλ = 1 mm at the wavelength λ = 1.38 μm. Figure 1. The characteristic propagation depth of collimated near-infrared radiation in water. It is reasonable to separately consider the following conventional spectral regions: • The short-wave semitransparency range λ < λ* = 1.2 μm, where lλ > 10 mm. There is a considerable radiation heat transfer between corium particles in this spectral range because the distance between neighboring millimeter-sized particles is usually less than 10 mm. One can use the traditional radiation transfer theory to calculate the volume distribution of radiation power. Both absorption and scattering of radiation by particles should be taken into account. • The opacity range λ > λ*, where lλ < 10 mm. 
In this range one can neglect the radiation heat transfer between the particles. The radiative transfer problem degenerates because of strong absorption at distances comparable to both particle sizes and distances between the particles. One can assume that radiation emitted by the particle in this spectral range is totally absorbed in ambient water. Of course, the above two-band radiation model should be treated as a simple approach, and the effect of a conventional value of the boundary wavelength λ* may be a subject of further analysis. In a multiphase flow typical of the FCI problem, numerous steam bubbles and core melt particles have a considerable effect on the radiative properties of the medium in the range of water semitransparency. Nevertheless, the above division of the spectrum into two bands according to the absorption spectrum of water remains acceptable (Dombrovsky, 2007a). P1 Approximation and Large-Cell Radiation Model for Semitransparency Range The RTE for emitting, absorbing, refracting, and scattering medium containing N components of different temperatures can be written as follows (Dombrovsky, 1996; Siegel and Howell, 2002; Modest, 2003): (1) where nλ is the index of refraction of the host medium, and αλ,i is the absorption coefficient of the composite medium component with temperature Ti, (2) By writing the last term on the right-hand side of RTE (1), we have assumed that every component of the medium is characterized by a definite temperature. This is not the case for large corium particles with considerable temperature difference in the particle. Nevertheless, the problem formulation should not be revised for opaque particles. It is sufficient to treat the value of Ti as a surface temperature of the particles of ith fraction. An essentially more complex problem should be considered for semitransparent particles when thermal radiation comes from the particle volume. 
It is a realistic situation for particles of aluminum oxide or other light oxides used as simulant substances in experimental studies of the core melt-coolant interaction. The problem of thermal radiation from semitransparent nonisothermal particles is considered in the articles Thermal radiation from nonisothermal spherical particles and Thermal radiation from nonisothermal particles in combined heat transfer problems. The solution obtained can be combined with the large-scale problem under consideration. It is very difficult to use the complete description of the radiation heat transfer based on RTE (1) in the range of water semitransparency. Therefore, the simplified radiation models should be considered for engineering calculations. The integration of the RTE over all values of the solid angle yields the following equation of spectral energy balance: (3) where pλ is the spectral radiation power emitted in a unit volume of the medium. Note that Eq. (3) is a generalized form of Eq. (7) from the article The radiative transfer equation for the case of a multitemperature medium. The spectral balance equation (3) is considered as a starting point for simplified models for radiation heat transfer in multiphase disperse systems. In the case of somewhat cool particles, the main part of thermal radiation is emitted in the range of water opacity. Thus it is reasonable to ignore the specific feature of the process in the short-wave range and assume water to be totally opaque over the whole spectrum. This approach can be called the opaque medium model (OMM). According to the OMM, thermal radiation emitted by single hot particles is absorbed in water at very small distances from the particle. In this case, the total power absorbed by water in a unit volume is equal to the power emitted by particles in this volume: (4) where λ1 and λ2 are the boundaries of the spectral range of considerable thermal radiation. 
Obviously, this model overestimates the heat absorbed in water and cannot be employed to distinguish the radiation power absorbed at the steam/water interface near the particle and the power absorbed in the volume. The latter may be important for detailed analysis of heat transfer from corium particles to ambient water in calculations of water heating and evaporation. Simple estimates showed that contribution of short-wave radiation increases rapidly with the particle temperature, and one cannot ignore the spectral range of water semitransparency when corium particle temperature is greater than 2500 K. In other words, one can expect the OMM error to be considerable in this case. The large-cell radiation model (LCRM) is based on the assumption of negligible radiation heat transfer between the computational cells. Note that the present-day computer codes for multiphase flows use computational cells of about 5-10 cm or greater and all parameters of the multiphase flow are assumed to be constant in every cell. In the range of water semitransparency, the local radiative balance in a single cell yields the following relation for radiation energy density instead of Eq. (3): (5) As a result, the expressions for the integral radiation power absorbed in water can be written as (6) where αλ,w is the spectral absorption coefficient of water containing steam bubbles. The components Pw(1) and Pw(2) of the absorbed power correspond to the ranges of water semitransparency and opacity. One can assume that Pw(1) causes the volume heating of water, whereas Pw(2) causes the surface heating and evaporation of water near the hot particles. Obviously, the predicted contribution of the semitransparency range to the total absorbed power appears to be less than the corresponding value estimated by use of OMM. Note that LCRM does not include any characteristics of radiation scattering in the medium. 
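The algebraic character of the LCRM can be illustrated with a short numerical sketch of the per-cell balance (5)-(6). This is an illustrative sketch, not code from the cited works: the component absorption coefficients, temperatures, and wavelength grid are supplied by the caller, water emission is neglected in the short-wave range, and all function names are invented here.

```python
import math

# Physical constants (SI)
H  = 6.62607015e-34   # Planck constant, J s
C  = 2.99792458e8     # speed of light in vacuum, m/s
KB = 1.380649e-23     # Boltzmann constant, J/K

def planck(lam, T):
    """Blackbody spectral intensity B_lambda(T), W/(m^2 sr m)."""
    return 2.0 * H * C**2 / lam**5 / math.expm1(H * C / (lam * KB * T))

def lcrm_water_power(alpha_parts, temps, alpha_w, lam_grid):
    """
    Large-cell radiation model: in every cell the emitted power is
    assumed to be absorbed locally, so the equilibrium spectral
    intensity follows from the local balance,
        I_lam = sum_i alpha_i B_lam(T_i) / (sum_i alpha_i + alpha_w).
    Returns the power absorbed by water per unit volume, W/m^3,
    integrated over the wavelength grid lam_grid (in meters) with a
    midpoint rule.  Emission by water itself is neglected.
    """
    p_w = 0.0
    for j in range(len(lam_grid) - 1):
        lam  = 0.5 * (lam_grid[j] + lam_grid[j + 1])
        dlam = lam_grid[j + 1] - lam_grid[j]
        emit  = sum(a(lam) * planck(lam, t)
                    for a, t in zip(alpha_parts, temps))  # particle emission
        total = sum(a(lam) for a in alpha_parts) + alpha_w(lam)
        p_w += 4.0 * math.pi * alpha_w(lam) * (emit / total) * dlam
    return p_w
```

With spectrally constant coefficients, water simply absorbs the fraction αw/(αc + αw) of the power emitted by the particles; a spectrally varying αw shifts the absorption toward the long-wave end of the semitransparency range.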
The radiation balance equation (3) can also be employed without ignoring the radiation flux divergence. To realize such a possibility, one should find a relation between the spectral radiation flux and radiation energy density. In the P1 approximation, the known representation of the radiation flux is assumed to make the problem statement complete: (7) and the spectral radiation energy density can be determined by solving the following boundary-value problem: (8a) (8b) where n is the unit vector of the external normal to the boundary surface of the computational region. The boundary condition (8b) corresponds to the case of zero external radiation and no reflection from the boundary surface. The angular dependence of radiation intensity in the region of intensive FCI is expected to be smooth. Therefore, P1 can be used instead of the RTE to analyze the quality of LCRM. Note that boundary-value problem (8) is formulated for the complete computational region (not for single cells). After solving this problem for several wavelengths in the range of λ1 < λ < λ*, one can find the radiation power absorbed in water: (9) The total radiative heat loss from corium particles is (10) where αλ,c is the absorption coefficient of polydisperse corium particles. It is important that Pc(1) ≠ Pw(1) due to heat transfer by radiation in the semitransparent medium: (11) The P1 approximation takes into account the radiative transfer between all the computational cells. It is an important advantage of this model, especially in the case of semitransparent cells. Long experience in the use of P1 for solving various engineering problems has shown that the predicted field of radiation energy density is usually very close to the exact RTE solution. One can see that P1 also gives the radiation flux at the boundary of the computational region.
In contrast to the radiation energy density, the radiation flux error may be significant (see the article, An estimate of P1 approximation error for optically inhomogeneous media). Therefore, a more sophisticated approach should be employed to determine the radiation coming from the FCI region. The complete solution to the two-dimensional radiation heat-transfer problem in a multiphase flow typical of fuel-coolant interaction is too complicated even when the P1 approximation is employed. The main computational difficulty is related to the wide range of optical thickness of the medium at different wavelengths. One should consider not only the visible radiation when optical thickness of the medium is determined by numerous particles, but also a part of the near-infrared range characterized by the large absorption coefficient of water. As a result, the numerical solution of the boundary-value problem (8), generally speaking, cannot be obtained by using the same computational mesh at all wavelengths. There is no such difficulty in LCRM, which is simply an algebraic model and can be easily implemented into any multiphase CFD code. In the Lagrangian calculations of the transient temperature of corium particles, the value of integral (over the spectrum) radiation heat flux from the unit surface of a single particle is used. In OMM, this value is determined as follows: (12) In LCRM and P1, we have the following expressions for the radiation flux: (13) The complete formulation of the problem must include the relations for radiative characteristics of particles and steam bubbles. These relations have been derived in the papers of Dombrovsky et al. (2007a, 2009) (see also Radiative properties of gas bubbles in semi-transparent medium, Thermal radiation from spherical particle to absorbing medium through narrow concentric gap, and Thermal radiation modeling in melt-coolant interaction). 
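For orientation, the structure of boundary-value problem (8) can be shown with a gray, one-dimensional finite-difference sketch. This is only a schematic of the P1 formulation under assumed simplifications (constant absorption coefficient, diffusion coefficient D = 1/(3α), a Marshak-type condition for zero incoming radiation), not the spectral multidimensional solver used in the cited calculations:

```python
def thomas(a, b, c, d):
    """Solve the tridiagonal system a[i]x[i-1] + b[i]x[i] + c[i]x[i+1] = d[i]."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def solve_p1_slab(alpha, source, length, n):
    """
    Gray P1 approximation in a slab [0, length]:
        -(D G')' + alpha * G = source,   D = 1/(3 alpha),
    with Marshak boundary conditions for zero incoming radiation,
    D G'(0) = G(0)/2 and -D G'(L) = G(L)/2 (first-order one-sided).
    Returns the irradiation G at the n+1 grid nodes.
    """
    h = length / n
    D = 1.0 / (3.0 * alpha)
    size = n + 1
    a, b, c, d = ([0.0] * size for _ in range(4))
    b[0], c[0] = D / h + 0.5, -D / h                  # left boundary
    for i in range(1, n):                             # interior nodes
        a[i] = c[i] = -D / h**2
        b[i] = 2.0 * D / h**2 + alpha
        d[i] = source
    a[size - 1], b[size - 1] = -D / h, D / h + 0.5    # right boundary
    return thomas(a, b, c, d)
```

In the optically thick interior G approaches source/α, while near the boundaries it drops toward the Marshak value; the same tridiagonal structure must be re-solved wavelength by wavelength, which is exactly where the wide range of spectral optical thickness complicates the choice of computational mesh.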
In Lagrangian modeling of motion and cooling of an isothermal particle of radius ai, the following energy equation is usually employed: (14) For simplicity, it is assumed here that the particle is totally opaque and optically gray (εc = const). Generally speaking, ψi ≠ 1 and the values of ψi can be determined from the large-cell model. To clarify the physical sense of coefficient ψ, consider the case of monodisperse corium particles when (15) where α is the absorption coefficient of corium particles, and ζ0(T) is the part of blackbody radiation at temperature T in the range of water semitransparency: (16) Obviously, the coefficient ψ varies in the range between 1 - ζ0 and 1, where the lower limit corresponds to the high volume fraction of corium. Comparison of Diffusion and Large-Cell Models for Typical Problem Parameters Following the paper by Dombrovsky (2007a), consider a one-dimensional axisymmetric problem of radiation heat transfer in water containing polydisperse steam bubbles and steam-mantled corium particles. In our sample problem we use the following similar profiles of the volume fractions of corium and steam: (17) The following fixed values of parameters are considered: R = 0.5 m, fv0 = 0.5%. The function φ(r) and its “cell” approximation are shown in Fig. 2. The ordinates of the cell approximation for the number of cells N = 10 are calculated as follows: (18) Figure 2. Dimensionless profile of volume fraction of steam and corium considered in the model problem: 1 - smooth profile and 2 - stepwise approximation. The average radius of bubbles is assumed to equal 3 mm. The corium particles are treated as opaque ones. The emissivity of bulk corium was assumed to be independent of wavelength and temperature and equal to εc = 0.85. 
Because of the complexity of the general problem, two variants of the sample problem are considered below: one for monodisperse corium particles and one for polydisperse corium characterized by different temperatures of small and large particles. Monodisperse Particles Consider the case of monodisperse corium particles of radius a2 = 2.5 mm and temperature T = 3000 K. The results of calculations based on the P1 approximation are presented in Figs. 3 and 4. One can see in Fig. 3 that there is a considerable difference between the radiation power emitted by corium particles in the semitransparency range and the power absorbed in water. It is explained by the considerable radiation flux from the region in this spectral range (see Fig. 4). The difference between the calculations for a smooth profile of the particle volume fraction and a stepwise profile typical of cell approximation of the flow parameters is insignificant, especially for the radiation power absorbed in water and the spectral radiation flux at the boundary of the region. One can see in Fig. 4 that thermal radiation from the multiphase medium can be observed only in the visible range, and the corresponding radiative heat loss is negligible in the medium heat balance. Figure 3. Radiative heat loss from corium particles (a) and radiation power absorbed in water (b): 1 - in the range of water semitransparency, 2 - in the range of water opacity; I - smooth profile, II - stepwise approximation. Figure 4. Spectral radiative flux at the boundary of the computational region: calculations for smooth profile (I) and stepwise approximation (II) of the medium parameters. It follows from Eq. (3) that the large-cell model yields a single profile of radiation power. This profile is intermediate between the profiles obtained for corium and water in the P1 approximation. One can see in Fig.
5 that the relative error of the large-cell model in total radiation power is not large (about 5-10%) because of the decisive contribution of the opacity range. It is important that this error can be estimated by comparison of the large-cell solution with the upper limit of radiative heat loss from corium particles: (19) Figure 5. Total radiative heat loss from corium particles (a) and radiation power absorbed in water (b): 1 - P1 approximation, 2 - large-cell model, 3 - maximum estimate (19). The latter statement is illustrated by curve Pwmax(r) plotted in Fig. 5. Polydisperse Particles For simplicity, the following two-mode size distribution of particles is considered: (20) with a1 = 0.5 mm, a2 = 3 mm, T(a1) = T1 = 2000 K, T(a2) = T2 = 3000 K. Obviously, the integral characteristics of size distribution (20) are (21) Note that ξ is the relative number of small particles, whereas the more representative volume fraction of these particles is given by ν = ξa1³/a30³, where a30 is the mean cube radius of the distribution. The effect of polydisperse corium particles can be analyzed on the basis of the large-cell model. The local character of this model allows us to consider a single cell of the medium. One can write the following expressions for the radiative cooling rate coefficients: (22) Note that calculations showed the predominant role of visible radiation in direct heat transfer between the particles of different temperatures. For this reason, it is sufficient to use the "red" boundary of the visible spectral range λred = 0.8 μm instead of λ* in approximate calculations of the function ζ(T). The results of calculations presented in Fig. 6 show that thermal radiation from relatively hot large particles of corium in the visible spectral range can lead to a significant decrease in the radiative cooling rate of small particles. This effect should be taken into account in calculations of the fuel-coolant interaction.
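The band fraction ζ0(T) introduced in Eq. (16), the part of blackbody radiation at temperature T falling below the boundary wavelength, can be evaluated by direct numerical integration of the Planck function. A minimal sketch, assuming λ* = 1.2 μm and a short-wave cutoff of 0.2 μm below which the contribution is negligible at these temperatures:

```python
import math

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23   # SI constants
SIGMA = 5.670374419e-8                                  # Stefan-Boltzmann, W m^-2 K^-4
LAMBDA_STAR = 1.2e-6                                    # m, conventional boundary

def planck(lam, T):
    """Blackbody spectral intensity B_lambda(T), W/(m^2 sr m)."""
    return 2.0 * H * C**2 / lam**5 / math.expm1(H * C / (lam * KB * T))

def zeta0(T, n=2000):
    """Fraction of blackbody emission at T below LAMBDA_STAR (trapezoid rule)."""
    lo, hi = 0.2e-6, LAMBDA_STAR        # assumed short-wave cutoff at 0.2 um
    h = (hi - lo) / n
    s = 0.5 * (planck(lo, T) + planck(hi, T))
    s += sum(planck(lo + i * h, T) for i in range(1, n))
    # total hemispherical emission is sigma*T^4 = pi * integral of B_lambda
    return math.pi * s * h / (SIGMA * T**4)
```

For the two temperatures of the sample problem this gives roughly ζ0 ≈ 0.14 at 2000 K and ζ0 ≈ 0.40 at 3000 K, which is why the semitransparency range cannot be neglected for hot corium and why ψ can drop as low as 1 - ζ0.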
A more representative analysis of the LCRM error in realistic FCI problems can be found in the paper by Dombrovsky et al. (2009) and in the article Thermal radiation modeling in melt-coolant interaction. Figure 6. The coefficients of radiative cooling rate for corium particles of two fractions as functions of relative volume fraction of small particles: 1 - ψ1, for small particles; 2 - ψ2, for large particles. REFERENCES Berthoud, G., Vapor explosions, Annu. Rev. Fluid Mech., vol. 32, pp. 573-611, 2000. Bürger, M., Particulate debris formation by breakup of melt jets, Nucl. Eng. Des., vol. 236, no. 19-21, pp. 1991-1997, 2006. Dinh, T. N., Bui, V. A., Nourgaliev, R. R., Green, J. A., and Sehgal, B. R., Experimental and analytical studies of melt jet-coolant interaction: A synthesis, Nucl. Eng. Des., vol. 189, no. 1-3, pp. 299-327, 1999. Dinh, T. N., Dinh, A. T., Nourgaliev, R. R., and Sehgal, B. R., Investigation of film boiling thermal hydraulics under FCI conditions: Results of analyses and numerical study, Nucl. Eng. Des., vol. 189, no. 1-3, pp. 251-272, 1999. Dombrovsky, L. A., Radiation Heat Transfer in Disperse Systems, New York: Begell House, 1996. Dombrovsky, L. A., Radiation heat transfer from a spherical particle via vapor shell to the surrounding liquid, High Temp., vol. 37, no. 6, pp. 912-919, 1999a. Dombrovsky, L. A., Thermal radiation of a spherical particle of semitransparent material, High Temp., vol. 37, no. 2, pp. 260-269, 1999b. Dombrovsky, L. A., Radiation heat transfer from a hot particle to ambient water through the vapor layer, Int. J. Heat Mass Transfer, vol. 43, no. 13, pp. 2405-2414, 2000a. Dombrovsky, L. A., Thermal radiation from nonisothermal spherical particles of a semitransparent material, Int. J. Heat Mass Transfer, vol. 43, no. 9, pp. 1661-1672, 2000b. Dombrovsky, L. A., A modified differential approximation for thermal radiation of semitransparent nonisothermal particles: Application to optical diagnostics of plasma spraying, J. 
Quant. Spectrosc. Radiat. Transf., vol. 73, no. 2-5, pp. 433-441, 2002. Dombrovsky, L. A., Large-cell model of radiation heat transfer in multiphase flows typical for fuel-coolant interaction, Int. J. Heat Mass Transfer, vol. 50, no. 17-18, pp. 3401-3410, 2007a. Dombrovsky, L. A., Thermal radiation of nonisothermal particles in combined heat transfer problems, Proc. of the 5th Int'l. Symp. on Radiative Transfer, Bodrum, Turkey, June 17-22, 2007 (dedication lecture), 2007b. Dombrovsky, L. A. and Dinh, T. N., The effect of thermal radiation on the solidification dynamics of metal oxide melt droplets, Nucl. Eng. Des., vol. 238, no. 6, pp. 1421-1429, 2008. Dombrovsky, L. A., Davydov, M. V., and Kudinov, P., Thermal radiation modeling in numerical simulation of melt-coolant interaction, Proc. of the 5th Int'l. Symp. on Radiative Transfer, vol. 1, no. 1, pp. 1-35, 2009. Fletcher, D. F. and Anderson, R. P., A review of pressure-induced propagation models of the vapour explosion process, Prog. Nucl. Energy, vol. 23, no. 2, pp. 137-179, 1990. Fletcher, D. F., Steam explosion triggering: A review of theoretical and experimental investigations, Nucl. Eng. Des., vol. 155, no. 1-2, pp. 27-36, 1995. Fletcher, D. F., Radiation absorption during premixing, Nucl. Eng. Des., vol. 189, no. 1-3, pp. 435-440, 1999. Hale, G. M. and Querry, M. R., Optical constants of water in the 200 nm to 200 μm wavelength region, Appl. Opt., vol. 12, no. 3, pp. 555-563, 1973. Modest, M. F., Radiative Heat Transfer, 2nd ed., New York: Academic Press, 2003. Pohlner, G., Vujic, Z., Bürger, M., and Lohnert, G., Simulation of melt jet breakup and debris bed formation in water pools with IKEJET/IKEMIX, Nucl. Eng. Des., vol. 236, no. 19-21, pp. 2026-2048, 2006. Siegel, R. and Howell, J. R., Thermal Radiation Heat Transfer, 4th ed., New York: Taylor & Francis, 2002. Theofanous, T. G., The study of steam explosions in nuclear systems, Nucl. Eng. Des., vol. 155, no. 1-2, pp. 1-26, 1995. Yuen, W.
W., Development of a multiple absorption coefficient zonal method for application to radiative heat transfer in multi-dimensional inhomogeneous non-gray media, Proc. of the 2004 ASME Heat Transfer/Fluids Engineering Summer Conf., July 11-15, 2004, Charlotte, NC, USA, Paper HT-FED2004-56285. Article added: 7 September 2010. Article last modified: 25 April 2011.
Serverside Configuration
Lesson 7: Using the Oracle Net Assistant, part 2
Objective: Use the Oracle Net Assistant to choose naming methods.
Net Assistant: Choosing Naming Methods
Network - Profile, Service Names, Listeners, Oracle Names Servers
To use the Oracle Net Assistant to choose naming methods, you must first select Profile from the menu hierarchy on the left side of the Assistant interface. The first tab under Profile is the Naming tab, where you can choose the naming methods. By default, Oracle Net will attempt to resolve a service name to a network address using the following three naming methods, in the order in which they appear:
1. Local naming (specified in the Oracle Net Assistant as TNSNAMES)
2. Centralized naming using Oracle Names (specified in the Oracle Net Assistant as ONAMES)
3. Host naming (specified in the Oracle Net Assistant as HOSTNAME)
In the next lesson, you will have the opportunity to simulate the procedure for defining trace levels for the listener.
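The default resolution order described in this lesson corresponds to the NAMES.DIRECTORY_PATH parameter in the profile file sqlnet.ora, which the Oracle Net Assistant maintains. The fragment below is an illustrative example, not taken from the lesson:

```
# sqlnet.ora -- naming-method search order used by Oracle Net
NAMES.DIRECTORY_PATH = (TNSNAMES, ONAMES, HOSTNAME)
```

Reordering or removing entries in this list changes which methods Oracle Net tries, and in which order, when resolving a service name.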
fortraveladvicelovers.com
Coral reefs: where are they found, how are they formed and which are the most beautiful
By Martí Micolau. Sources consulted: wikipedia.org, lonelyplanet.com
Coral reefs are underwater forests rich in living species: at least 25% of all marine species in the world live in the waters of coral reefs. Also known by the English name reef, most of the world's coral reefs are actually made up of many smaller sections, connected in a single ecosystem. Here's everything you need to know about coral reefs: how they form, where they are found in the world, and which are the biggest and most beautiful!
Index
1. How the coral reef is formed
2. The most beautiful coral reefs in the world
3. Coral bleaching and death
4. Curiosities about the coral reef
How the coral reef is formed
The spectacular coral reefs are "built" by anthozoans, tiny polyp-shaped organisms that need clear, well-lit and oxygenated water in order to live. These tiny polyps gather in colonies called corals and live in symbiosis with unicellular algae called zooxanthellae. Through the photosynthesis of these algae, the small organisms build a sort of calcium-carbonate skeleton that serves a protective and supporting function. Over time, these skeletons fuse with one another, creating coral structures as hard as rock. The structures are called "barrier reefs" when they are separated from the coast by a shallow lagoon; when they grow close to the coast, they are called "fringing reefs". The most beautiful coral reefs in the world As already mentioned, corals need certain conditions to live, including good lighting, sea temperatures between 20 °C and 30 °C, and high salinity. These conditions are shared by the central Pacific and the Australian east coast; not surprisingly, almost all existing reefs are concentrated in these areas.
By contrast, the western coasts of the continents are not suitable for reef development because of cold currents. But where are the most beautiful and largest reefs in the world? Let's find out in the following ranking. 1 - Great Barrier Reef, Australia The Great Barrier Reef of Australia lies off the coast of Queensland and is known as the largest coral reef in the world. It is made up of some 3,000 individual reef systems. It is so big that it is visible from space. It has been a UNESCO World Heritage Site since 1981. 2 - Coral reef in the Red Sea, Egypt The Red Sea coral reef is found off the coasts of Egypt, Israel and Saudi Arabia. Ten percent of the 1,200 species found in this coral reef are unique to this area. This area includes the Blue Hole of Dahab, one of the most popular and dangerous dive sites in the world. 3 - Reef of New Caledonia, New Caledonia The New Caledonian barrier reef, in the South Pacific, is the third-longest reef in the world. More than 1,000 different species - many of which have not yet been classified - live within this coral reef. It encloses a 1,500 km circular lagoon with an average depth of 25 meters. In 2008, UNESCO included it among the World Heritage Sites under the name "Lagoons of New Caledonia". 4 - Mesoamerican Reef, Yucatán, Belize, Guatemala and the Bay Islands of Honduras The Mesoamerican Reef, located in the Caribbean basin, is the largest coral reef in the Atlantic Ocean. It extends almost 1,126 km, from the Yucatán Peninsula to the Bay Islands in Honduras. Over 500 species of fish and 65 types of coral live within this large reef system. It has been a UNESCO World Heritage Site since 1996. 5 - Coral reef of the Maldives Islands, Indian Ocean The Maldives are the largest coral reef system in the entire Indian Ocean. The islands that make up the atolls were formed by volcanic eruptions and contain more than 1,300 coral reefs.
6 - Apo Coral Reef, Philippines The Apo Reef is the largest reef in the Philippines. It is 800 km long, covers 67,877 acres off the coast of Mindoro Island, and is surrounded by a mangrove forest. Because of earlier damage, in 2007 the Philippine government enacted a ban on reef fishing to help restore and preserve its pristine nature. 7 - Belize barrier reef, Caribbean Sea The Belize Barrier Reef is part of the Mesoamerican Reef system. It stretches from Ambergris Caye in the north to the Sapodilla Cayes in the south. This coral reef is protected under UNESCO's World Heritage programme. 8 - Saya de Malha, Indian Ocean The Saya de Malha Bank in the Indian Ocean is the largest submerged reef in the world. This ridge connects the Seychelles and Mauritius to the Mascarene Plateau. Together with its coral reef, the marine habitat supports particular species such as sea turtles and blue whales. 9 - Andros Reef, Bahamas The coral reef of Andros, in the Bahamas, stretches for approximately 167 km. The island lies along the edge of an ocean trench known as the "Tongue of the Ocean", so the reef extends steeply downwards, reaching depths of almost 2 km. 10 - Florida Keys, United States The Florida Keys reef system is the only coral reef system in North America. It extends 160 km along the southeastern coast of Florida, from Key Biscayne to the Dry Tortugas. The reef is protected as an underwater state park. Coral bleaching and death In recent years, the symbiotic relationship between coral polyps and algae has been altered by rising water temperatures, caused in turn by global warming. For reasons not yet fully understood, this is leading to the bleaching of corals as well as their progressive death.
According to a study conducted by James Cook University in Australia, over 90% of the Great Barrier Reef has been affected by the bleaching phenomenon. Other causes of coral death are reckless fishing, tourism, ecological imbalances and pollution.

Curiosities about the coral reef
• What is the largest coral reef in the world? It is the Great Barrier Reef of Australia, which extends for about 2,300 km
• Is there a coral reef in Sardinia? There are no coral reefs in Italian waters. The closest is the Red Sea reef along the coast of Egypt. Among the main places where you can admire it is Marsa Alam.
• How long does it take for a coral reef to form? It takes thousands of years and several millions of colonies to form substantial coral structures
CHAPTER 21 hypertext, multimedia and the world-wide web

EXERCISE 21.1 Experiment with HyperCard or another hypertext system if you have access to one. As you work through the system, draw a map of the links and connections. Is it clear where you are and where you can get to at any point? If not, how could this be improved?

answer This is an experimental exercise which requires access to a hypertext system. It can be used as the basis for a practical class, in which students analyze the effectiveness of the system. Drawing the map has two purposes: one is to reinforce the overall structure of the hypertext; the other is to test the navigational support that is available. Whether it is sufficient will depend on the system under scrutiny, but possible improvements would be to provide an explicit map, escape buttons, explicit paths to core material. The system may of course incorporate such features.

EXERCISE 21.2 Do the same for this book's website and tell us what you think!

answer open-ended investigation

EXERCISE 21.3 What factors are likely to delay the widespread use of video in interfaces? What applications could benefit most from its use?

answer Some of the factors are the costs in terms of hardware and software for compression and decompression; the slow speed due to the high bandwidth; the overall cost of equipment (for example, camera, video, CD); the lack of design tools to exploit video; the lack of specialist skills amongst designers.
Many applications have been suggested as candidates for the integration of video. Educational systems, games and help systems are liable to benefit since information can be passed more clearly and memorably and new dimensions added. Other areas such as virtual reality can use video together with graphics in the creation of their artificial worlds. CSCW systems can use video to provide a face-to-face communication link between distributed workers (see Chapters 14 and 19). However, although these appear to be areas where video has a promising future, its use needs to be carefully considered and its consequences investigated. It may be that it will not fulfil its initial promise.   EXERCISE 21.4 Using a graphics package such as Adobe Photoshop or Macromedia Fireworks, save different types of image (photographs, line drawings, text) in different formats (GIF, JPEG, PNG). Compare the file sizes of the different formats, experimenting with different compression ratios (where applicable), numbers of colours, etc. answer open-ended investigation
Resolving type parameter in module functor

Hi [Yawar, sorry] et al. :wink: Back into module types and functors, trying to build essentially a composable reducer system. rescript-lang/try example

module type Partial = { type partial let reduce: (partial, 'action) => option<partial> } module Float = { type partial = float let reduce = (partial: partial, action: 'action): option<partial> => { switch action { | #Set(a) => ... } module Array = (E: Partial) => { type structure<'e> = array<'e> type partial = structure<E.partial> let reduce = (p: partial, action: [#Index(int, 'action)]): option<partial> => { switch action { | #Index(index, action) => <find element, send action to element, recompose array> }

and this fails with

Signature mismatch: ... Values do not match: let reduce: (partial, [> #Clear | #Set(partial)]) => option<partial> is not included in let reduce: (partial, 'action) => option<partial> File "playground.res", line 3, characters 3-52: Expected declaration File "playground.res", line 8, characters 7-13: Actual declaration

Any thoughts? Saying the poly variant is not in the type parameter doesn't compute for me. It seems like the type parameter is being resolved at the source module maybe, and not at the calling module? Thanks again Alex

Someone can probably explain the nuances better than me, but one solution is to explicitly declare the action type:

module type Partial = { type partial ++ type action -- let reduce: (partial, 'action) => option<partial> ++ let reduce: (partial, action) => option<partial> } module Float = { type partial = float ++ type action = [#Set(partial) | #Clear] let reduce = (partial: partial, action: action): option<partial> => { switch action { | #Set(a) => ... }

This should compile now.

Thanks John That solves the product case but then the poly variants [would not be] summable? sum type action composition

I'm not sure if that's feasible (anyone, feel free to correct me), but I'd also question if that's a desirable design in the first place.
How would the implementation of Either.reduce work? This makes more sense to me:

type action = [#Left(L.partial) | #Right(R.partial) | #LAction(L.action) | #RAction(R.action)]

Otherwise, if L.action and R.action share any of the same constructors, then it's ambiguous how the reduce function should behave (even if it compiles).

Poly variants do sum nicely, and a failed compile on conflicting constructors sounds good: poly variant sum conflicts

My immediate application is a Sum where both sides are products with some equivalent fields. So I'd rather not have to be aware at the client level which value the sum is taking when I send a new action. I could see both overlapping and non-overlapping sums being useful, and available with either different functors or some supplied policy module.

@yawaramin do you know anything about this? Other examples I've seen in OCaml seem to deal with concrete type name interference, but these are poly variants and type parameters mostly? [just realized I got your name wrong above, excuse me] Thanks Alex
Secondary Structure

The term secondary structure refers to the interaction of the hydrogen bond donor and acceptor residues of the repeating peptide unit. The two most important secondary structures of proteins, the alpha helix and the beta sheet, were predicted by the American chemist Linus Pauling in the early 1950s. Pauling and his associates recognized that folding of peptide chains, among other criteria, should preserve the bond angles and planar configuration of the peptide bond, as well as keep atoms from coming together so closely that they repelled each other through van der Waals interactions. Finally, Pauling predicted that hydrogen bonds must be able to stabilize the folding of the peptide backbone. Two secondary structures, the alpha helix and the beta pleated sheet, fulfill these criteria well (see Figure 1). Pauling was correct in his prediction. Most defined secondary structures found in proteins are one or the other type.

Figure 1 Alpha helix.

The alpha helix involves regularly spaced H‐bonds between residues along a chain. The amide hydrogen and the carbonyl oxygen of a peptide bond are H‐bond donors and acceptors, respectively. The alpha helix is right‐handed when the chain is followed from the amino to the carboxyl direction. (The helical nomenclature is easily visualized by pointing the thumb of the right hand upwards—this is the amino to carboxyl direction of the helix. The helix then turns in the same direction as the fingers of the right hand curve.) As the helix turns, the carbonyl oxygens of the peptide bond point upwards toward the downward‐facing amide protons, making the hydrogen bond. The R groups of the amino acids point outwards from the helix. Helices are characterized by the number of residues per turn. In the alpha helix, there is not an integral number of amino acid residues per turn of the helix.
There are 3.6 residues per turn in the alpha helix; in other words, the helix will repeat itself every 36 residues, with ten turns of the helix in that interval.

Beta sheet. The beta sheet involves H‐bonding between backbone residues in adjacent chains. In the beta sheet, a single chain forms H‐bonds with its neighboring chains, with the donor (amide) and acceptor (carbonyl) atoms pointing sideways rather than along the chain, as in the alpha helix. Beta sheets can be either parallel, where the chains point in the same direction when followed from the amino to the carboxyl terminus, or antiparallel, where the amino‐ to carboxyl‐ directions of the adjacent chains point in opposite directions. (See Figure 2.)

Figure 2

Different amino acids favor the formation of alpha helices, beta pleated sheets, or loops. The primary sequences and secondary structures are known for over 1,000 different proteins. Correlation of these sequences and structures revealed that some amino acids are found more often in alpha helices, beta sheets, or neither. Helix formers include alanine, cysteine, leucine, methionine, glutamic acid, glutamine, histidine, and lysine. Beta formers include valine, isoleucine, phenylalanine, tyrosine, tryptophan, and threonine. Serine, glycine, aspartic acid, asparagine, and proline are found most often in turns. No relationship is apparent between the chemical nature of the amino acid side chain and the occurrence of an amino acid in one structure or another. For example, Glu and Asp are closely related chemically (and can often be interchanged without affecting a protein's activity), yet the former is likely to be found in helices and the latter in turns. Rationalizing the fact that Gly and Pro are found in turns is somewhat easier. Glycine has only a single hydrogen atom for its side chain. Because of this, a glycine peptide bond is more flexible than those of the other amino acids.
This flexibility allows glycine to form turns between secondary structural elements. Conversely, proline, because it contains a secondary amino group, forms rigid peptide bonds that cannot be accommodated in either alpha or beta helices. Fibrous and globular proteins The large‐scale characteristics of proteins are consistent with their secondary structures. Proteins can be either fibrous (derived from fibers) or globular (meaning, like a globe). Fibrous proteins are usually important in forming biological structures. For example, collagen forms part of the matrix upon which cells are arranged in animal tissues. The fibrous protein keratin forms structures such as hair and fingernails. The structures of keratin illustrate the importance of secondary structure in giving proteins their overall properties. Alpha keratin is found in sheep wool. The springy nature of wool is based on its composition of alpha helices that are coiled around and cross‐linked to each other through cystine residues. Chemical reduction of the cystine in keratin to form cysteines breaks the cross‐links. Subsequent oxidation of the cysteines allows new cross‐links to form. This simple chemical reaction sequence is used in beauty shops and home permanent products to restructure the curl of human hair—the reducing agent accounts for the characteristic odor of these products. Beta keratin is found in bird feathers and human fingernails. The more brittle, flat structure of these body parts is determined by beta keratin being composed of beta sheets almost exclusively. Globular proteins, such as most enzymes, usually consist of a combination of the two secondary structures—with important exceptions. For example, hemoglobin is almost entirely alpha‐helical, and antibodies are composed almost entirely of beta structures. 
The secondary structures of proteins are often depicted in ribbon diagrams, where the helices and beta sheets of a protein are shown by corkscrews and arrows respectively, as shown in Figure 3.

Figure 3
Peekaboo Randomizer is a plugin that allows you to randomly display pieces of content on your WordPress-driven website. It's suitable for banner rotation, lucky visitor rewards, special offers… or just for making a webpage more exciting and fun. Upon installing the plugin, you will be in possession of 3 shortcodes with which you can choose which block of content you want to be randomly displayed to visitors, and with what incidence.

Installation
…is simple. As with pretty much any other WP plugin, you should:
• 1. unzip the package
• 2. upload the entire peekaboo_randomizer folder to your plugins directory
• 3. activate the plugin in the WordPress dashboard (Plugins page)
A new icon in WordPress' TinyMCE text editor toolbar is created:

SHORTCODES

[pbr_single] attribute: chance
The most basic usage. This shortcode is used when you have a block of content that should appear on the page randomly, unrelated to what happens with the rest of the content. Examples:
[pbr_single chance=”25%”]This sentence will appear on the page in 25% of the cases, or once in 4 loadings, on average.[/pbr_single]
[pbr_single chance=”3/8”]This sentence will appear on the page three times in eight calls.[/pbr_single]
[pbr_single chance=”0.01%”]The incidence of 0.01% means the content will appear on the page once in 10000 times.[/pbr_single]
[pbr_single chance=”1/10000”]Same as the above.[/pbr_single]
As shown in the previous examples, the attribute chance can be expressed either as a percentage or as a fraction. A percentage value can have up to 6 decimals, which also means the lowest frequency of appearance is 0.000001%, or 1/1000000.

A WORD ABOUT NESTING
WordPress doesn't support nesting of a shortcode inside itself.
[shortcode]Whatever [shortcode] this shortcode is supposed to do [/shortcode]- it will not do properly.[/shortcode]
But, understanding that you may have a need to randomly display some content inside a larger randomly displayed block, Peekaboo Randomizer allows you to do so. Just add a number suffix to the shortcode name:
[pbr_single chance=”1/3”]The larger block appears once in three loadings [pbr_single2 chance=”50%”], while this inner block appears in 50% of those cases.[/pbr_single2] Which means the inner block will be present on the page in 1/6 of cases in total.[/pbr_single]
You can nest these shortcodes up to 10 levels deep, from [pbr_single2] to [pbr_single10]. Just be careful to properly close each shortcode.

[pbr_wrapper] no attributes
[pbr] attributes: set and chance
You can have several alternative blocks of content, while only one of them gets to be displayed on the page. For these cases you can create Peekaboo Randomizer sets. In order for a [pbr] shortcode block to "know" whether another block from the same set is randomly chosen this time, not it, we need to feed all blocks to PB Randomizer, which will then do the wiring between them. For that purpose use the [pbr_wrapper] shortcode, and nest all [pbr] shortcodes inside. The easiest, and also the most reliable, way to do so is to start writing the content of a page by placing [pbr_wrapper] first, before everything else in the WP text editor, and [/pbr_wrapper] last. The entire page content can be safely nested inside the wrapper. Example:
[pbr_wrapper] Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
[pbr set=”owls” chance=”25%”]<img src="snowy.jpg" />[/pbr]
[pbr set=”owls” chance=”50%”]<img src="scops.jpg" />[/pbr]
[pbr set=”owls” chance=”1/10”]<img src="barn_owl.jpg" />[/pbr]
[pbr set=”owls” chance=”15%”]<img src="strix.jpg" />[/pbr]
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. [/pbr_wrapper]
There are four alternating blocks that belong to a set named “owls” in this example. Each block contains an image of an owl. The snowy owl will appear 25% of the time, the scops owl 50%, the strix owl 15%, while the image of the barn owl will load once in ten cases. Only one image loads each time.
IMPORTANT: The sum of all chances in one set has to be exactly 100% (i.e. 1/1) in order for the randomizer set to work properly.
You can have as many sets inside one pbr_wrapper as you wish. Mind you, instances of a set do not have to reside next to each other. Another example:
[pbr_wrapper]
Bulldog
[pbr set=”cats” chance=”50%”]Maine Coon[/pbr]
[pbr set=”dogs” chance=”40%”]German Shepherd[/pbr]
Ragdoll
Siberian Husky
Poodle
[pbr set=”dogs” chance=”25%”]Labrador Retriever[/pbr]
Border Collie
[pbr set=”cats” chance=”50%”]Russian Blue[/pbr]
Scottish Fold
[pbr set=”dogs” chance=”35%”]Dalmatian[/pbr]
[/pbr_wrapper]
As is the case with a single wrapper – the best and cleanest way is to open them both at the beginning, and close them at the very end of the content. Take a careful look at this example:
[pbr_wrapper][pbr_wrapper2]
[pbr set=”living creatures” chance=”5/13”]
[pbr2 set=”animals” chance=”20%”]Koala[/pbr2]
[pbr2 set=”animals” chance=”20%”]Wolf[/pbr2]
[pbr2 set=”animals” chance=”20%”]Fox[/pbr2]
[pbr2 set=”insects” chance=”50%”]Wasp[/pbr2]
[pbr2 set=”insects” chance=”50%”]Mosquito[/pbr2]
[/pbr]
[pbr2 set=”animals” chance=”40%”]Penguin[/pbr2]
[pbr set=”people” chance=”28%”]Harry[/pbr]
[pbr set=”living creatures” chance=”8/13”]
[pbr2 set=”trees” chance=”61%”]Cypress[/pbr2]
[pbr2 set=”trees” chance=”39%”]Chestnut[/pbr2]
[/pbr]
[pbr set=”people” chance=”36%”]Jane[/pbr]
[pbr set=”people” chance=”36%”]John[/pbr]
[/pbr_wrapper2][/pbr_wrapper]
Note that the “penguin” block of the “animals” set is also wrapped in [pbr2], even though it is not inside the “living creatures” set. Since at least one block of the “animals” set is nested inside “living creatures”, all other items from “animals” also have to use the sub-shortcode [pbr2]. No item of the set “people” is nested, so it's just a regular [pbr] set.

SOME FINAL WORDS
As mentioned at the beginning, there's a convenient click-through dialog available in the WordPress text editor, which should make inserting shortcodes a breeze.
Bear in mind that the laws of probability sometimes may seem counter-intuitive and illogical. If you have set some content to appear on a page with a frequency of, say, 50%, don't be alarmed if sometimes the content appears 5 times in a row… or if 7 times in a row it doesn't… It may happen more often than you'd maybe expect. So, if you suspect that something is wrong with the randomizing mechanism inside the plugin, please test it thoroughly by reloading the page multiple (really multiple) times and taking notes of the (dis)appearance pattern.
Probability is all about the Law of Large Numbers.
Finally, let me inform you about the existence of the Peekaboo Timer plugin, which you may find useful to combine with Randomizer. Peekaboo Timer hides or shows any content in accordance with various time-related criteria. For example, you can make some blocks of content appear randomly on a page during working hours, while other blocks appear (randomly or not) on weekends and off-hours, etc… Take a closer look: https://codecanyon.net/item/peekaboo-timer/7197988
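To make the plugin's probability behavior concrete, here is a small, language-agnostic sketch in Python (the plugin itself is PHP, and the function names below are invented for the demonstration). It parses a chance attribute in either of the plugin's two accepted forms, "25%" or "3/8", makes a per-load show/hide decision, and simulates many page loads to show that the observed frequency converges to the configured chance only over a large number of loads:

```python
import random
from fractions import Fraction

def parse_chance(chance):
    """Convert a chance string like '25%', '0.01%' or '3/8' to a Fraction."""
    chance = chance.strip()
    if chance.endswith("%"):
        # Fraction accepts decimal strings such as "0.01"
        return Fraction(chance[:-1]) / 100
    num, den = chance.split("/")
    return Fraction(int(num), int(den))

def shows_this_load(chance, rng=random):
    """Decide whether a block with the given chance appears on one page load."""
    return rng.random() < float(parse_chance(chance))

# Law of large numbers: over many loads the observed frequency approaches
# the configured chance, but short streaks either way are perfectly normal.
random.seed(42)
loads = 100_000
shown = sum(shows_this_load("3/8") for _ in range(loads))
print(shown / loads)  # close to 0.375
```

With only a handful of loads the observed ratio can deviate wildly (five appearances in a row at 50% is unremarkable), which is exactly the behavior the paragraph above warns about.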
Machine Learning Homework Help | Decision Tree Assignment | Classification

Introduction

Classification, which is the data mining task of assigning objects to predefined categories, is widely used in the process of intelligent decision making. Many classification techniques have been proposed by researchers in machine learning, statistics, and pattern recognition. Such techniques can be roughly divided according to their level of comprehensibility. For instance, techniques that produce interpretable classification models are known as white-box approaches, whereas those that do not are known as black-box approaches. There are several advantages in employing white-box techniques for classification, such as increasing the user confidence in the prediction, providing new insight about the classification problem, and allowing the detection of errors either in the model or in the data [12]. Examples of white-box classification techniques are classification rules and decision trees. The latter is the main focus of this book. A decision tree is a classifier represented by a flowchart-like tree structure that has been widely used to represent classification models, especially due to its comprehensible nature that resembles human reasoning.
In a recent poll from the kdnuggets website [13], decision trees figured as the most used data mining/analytic method by researchers and practitioners, reaffirming their importance in machine learning tasks. Decision-tree induction algorithms present several advantages over other learning algorithms, such as robustness to noise, low computational cost for generating the model, and ability to deal with redundant attributes [22]. Several attempts at optimising decision-tree algorithms have been made by researchers within the last decades, even though the most successful algorithms date back to the mid-80s [4] and early 90s [21]. Many strategies were employed for deriving accurate decision trees, such as bottom-up induction [1, 17], linear programming [3], hybrid induction [15], and ensembles of trees [5], just to name a few. Nevertheless, no strategy has been more successful in generating accurate and comprehensible decision trees with low computational effort than the greedy top-down induction strategy.

A greedy top-down decision-tree induction algorithm recursively analyses whether a sample of data should be partitioned into subsets according to a given rule, or whether no further partitioning is needed. This analysis takes into account a stopping criterion, for deciding when tree growth should halt, and a splitting criterion, which is responsible for choosing the "best" rule for partitioning a subset. Further improvements over this basic strategy include pruning tree nodes to enhance the tree's capability of dealing with noisy data, and strategies for dealing with missing values, imbalanced classes, oblique splits, among others. A very large number of approaches were proposed in the literature for each one of these design components of decision-tree induction algorithms.
For instance, new measures for node-splitting tailored to a vast number of application domains were proposed, as well as many different strategies for selecting multiple attributes for composing the node rule (multivariate split). There are even studies in the literature that survey the numerous approaches for pruning a decision tree [6, 9]. It is clear that by improving these design components, more effective decision-tree induction algorithms can be obtained.

Book Outline

This book is structured in 7 chapters, as follows.

Chapter 2 [Decision-Tree Induction]. This chapter presents the origins, basic concepts, detailed components of top-down induction, and also other decision-tree induction strategies.

Chapter 3 [Evolutionary Algorithms and Hyper-Heuristics]. This chapter covers the origins, basic concepts, and techniques for both Evolutionary Algorithms and Hyper-Heuristics.

Chapter 4 [HEAD-DT: Automatic Design of Decision-Tree Induction Algorithms]. This chapter introduces and discusses the hyper-heuristic evolutionary algorithm that is capable of automatically designing decision-tree algorithms. Details such as the evolutionary scheme, building blocks, fitness evaluation, selection, genetic operators, and search space are covered in depth.

Chapter 5 [HEAD-DT: Experimental Analysis]. This chapter presents a thorough empirical analysis of the distinct scenarios in which HEAD-DT may be applied. In addition, a discussion on the cost effectiveness of automatic design, as well as examples of automatically-designed algorithms and a baseline comparison between genetic and random search, is also presented.

Chapter 6 [HEAD-DT: Fitness Function Analysis]. This chapter conducts an investigation of 15 distinct versions of HEAD-DT by varying its fitness function, and a new set of experiments with the best-performing strategies in balanced and imbalanced data sets is described.

Chapter 7 [Conclusions].
We finish this book by presenting the current limitations of the automatic design, as well as our view of several exciting opportunities for future work.

Decision-tree induction algorithms

Abstract Decision-tree induction algorithms are highly used in a variety of domains for knowledge discovery and pattern recognition. They have the advantage of producing a comprehensible classification/regression model and satisfactory accuracy levels in several application domains, such as medical diagnosis and credit risk assessment. In this chapter, we present in detail the most common approach for decision-tree induction: top-down induction (Sect. 2.3). Furthermore, we briefly comment on some alternative strategies for induction of decision trees (Sect. 2.4). Our goal is to summarize the main design options one has to face when building decision-tree induction algorithms. These design choices will be especially interesting when designing an evolutionary algorithm for evolving decision-tree induction algorithms.

Keywords Decision trees – Hunt's algorithm – Top-down induction – Design components
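The greedy top-down procedure discussed in this excerpt (apply a splitting criterion to choose the best rule, partition the data, and recurse until a stopping criterion fires) can be sketched compactly. The following is a minimal, illustrative Python sketch, not the algorithms studied in the book: it uses Gini impurity as the splitting criterion and a depth limit plus node purity as the stopping criterion, and all names are invented for the example.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels (a common splitting criterion)."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(rows, labels):
    """Pick the (feature, threshold) pair minimising the weighted child impurity."""
    best = None  # (score, feature_index, threshold)
    for f in range(len(rows[0])):
        for t in sorted({r[f] for r in rows}):
            left = [y for r, y in zip(rows, labels) if r[f] <= t]
            right = [y for r, y in zip(rows, labels) if r[f] > t]
            if not left or not right:
                continue  # degenerate split, skip
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(rows)
            if best is None or score < best[0]:
                best = (score, f, t)
    return best

def build_tree(rows, labels, depth=0, max_depth=3):
    # Stopping criterion: pure node or depth limit reached -> majority-class leaf.
    if len(set(labels)) == 1 or depth == max_depth:
        return Counter(labels).most_common(1)[0][0]
    split = best_split(rows, labels)
    if split is None:
        return Counter(labels).most_common(1)[0][0]
    _, f, t = split
    li = [i for i, r in enumerate(rows) if r[f] <= t]
    ri = [i for i, r in enumerate(rows) if r[f] > t]
    return {"feature": f, "threshold": t,
            "left": build_tree([rows[i] for i in li], [labels[i] for i in li], depth + 1, max_depth),
            "right": build_tree([rows[i] for i in ri], [labels[i] for i in ri], depth + 1, max_depth)}

def predict(tree, row):
    while isinstance(tree, dict):
        tree = tree["left"] if row[tree["feature"]] <= tree["threshold"] else tree["right"]
    return tree

# Toy data with one informative feature.
X = [[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]]
y = ["a", "a", "a", "b", "b", "b"]
tree = build_tree(X, y)
print(predict(tree, [2.5]), predict(tree, [10.5]))  # -> a b
```

Swapping `gini` for another impurity measure, or changing the stopping rule, changes only one design component at a time, which is exactly the modular view of decision-tree induction that the chapter develops.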
__label__pos
0.566772
Upload Validation dalam Form Validation CodeIgniter By deptz, Fri. May 14, 2010 Categories: PHP related, tutorial Tags: , , 2,328 views Minggu ini saya mengerjakan website dari Elektronika dan Instrumentasi UGM. Saya membuat sebuah content management system(CMS) sederhana untuk memperbarui konten. Konsepnya adalah dalam sebuah posting, akan ada sebuah judul, ringkasan, teks panjang, dan gambar yang mewakili posting tersebut. Web tersebut disusun dengan menggunakan framework CodeIgniter. Awalnya saya berencana membuat sebuah librari untuk menangani web ini. Jadi saya tinggal fokus pada librarinya. Termasuk untuk bagian CMS tersebut. Saya pun menggunakan class form validation dari CodeIgniter untuk melakukan validasi input form CMS. Masalah pertama adalah, class form validation pada CodeIgniter hanya melakukan pemeriksaan pada variabel $_POST. Sedangkan saya akan mengunggah berkas gambar yang nantinya akan ada di variabel $_FILES.  Akhirnya setelah melakukan googling, ditemukan penyelesaian bahwa untuk mengatasi hal tersebut, kita dapat membuat sebuah fake post. Misalnya: $_POST['upload_form'] = 'pura-puranya ini konten'; Setelah itu, kita membuat sebuah rules seperti biasa dengan memanggil fungsi callback. Misalnya saja: $this->form_validation->set_rules('upload_form', 'Upload Form', 'required|callback_ngupload'); Dan tentunya fungsi callbacknya. Misalnya saja: function ngupload($str){ $ul_config['upload_path'] = '/path/to/upload/folder'; $ul_config['allowed_types'] = 'gif|jpg|png'; $this->load->library('upload',$ul_config); if(!$this->upload->do_upload('upload_form')){ if($_FILES['upload_form']['error']==4){ return TRUE; }else{ $this->form_validation->set_message('ngupload', $this->upload->display_errors()); return FALSE; } } else { $this->uploads = $this->upload->data(); return TRUE; } } Dapat dilihat diatas $_FILES[‘upload_form’][‘error’] = 4, ini adalah error ketika tidak ada berkas yang dipilih. 
Karena dalam konsep CMS yang saya buat tidak harus mengunggah gambar, maka saya mengabaikan error ini. Masalah yang kedua adalah, ternyata fungsi callback dari class form validation CodeIgniter tidak berjalan dengan semestinya kalau kita membuatnya dalam sebuah library. Harus di dalam sebuah controller. Mohon saya dikoreksi kalau salah. 🙂 7 Responses to “Upload Validation dalam Form Validation CodeIgniter” 1. cefer Says: wah bisa buat referensi untuk yang lagi belajar PHP kang.. 😀 2. d3ptzz Says: sip mas dab.. semoga bermanfaat..:D 3. fandronk Says: Yahudd lah si mas ini..:D 4. d3ptzz Says: @fandronk: yahud apanya fan?:) 5. fandronk Says: Sepak Terjangnya hahahahha… 6. Steve Says: wah bisa buat referensi untuk yang lagi belajar PHP kang.. 😀 7. awalone Says: ALHAMDULILLAH, syukron atas ilmunya … Comments
__label__pos
0.998019
From Many, One In Stars by Brian Koberlein4 Comments A single star is a wonder. A million stars is a story.  A star can burn for billions, even trillions of years. With human history spanning mere centuries, how can we possibly understand the lifespan of a star? If we only had the Sun to study, understanding it’s history would be difficult, but we can observe millions of stars, some ancient and some still forming. By looking at these stars as a whole we can piece together the history and evolution of a star. It is similar to taking pictures of a single day on Earth, and using it to piece together the story of how humans are born, live and die. One of the ways this is done is through a Hertzsprung-Russell (HR) diagram. The brightness of a star is plotted against its color. When we make such a plot, most stars lie along a diagonal line where the bluer the star the brighter it is. Given a large enough sample of stars, we can presume that the ages of stars are randomly distributed. Since most stars lie along this line (known as the main sequence) they must spend most of their lives there. So it’s clear that stars have a long stable period where they burn steadily. Other stars are red, but still quite bright. One would expect red stars to be dimmer than blue stars since they have a lower temperature. In order to be so bright, they must be quite large. These red giants are stars that have swollen up as their cores heat up in a last-ditch effort to continue fusing hydrogen. Some stars are large enough to start fusing helium in this stage. Since helium burns hotter, these stars brighten into blue giants. In the end, however, most stars collapse into white dwarfs when core fusion ends. They become hot but small stars, blue-white in color but quite dim. While an HR diagram gives us a snapshot of stellar lifetimes, they don’t tell the whole story. Another way to categorize stars is through their spectra. 
Different elements in a star’s atmosphere absorb particular wavelengths of light. By looking at the pattern of wavelengths absorbed we can determine which elements the star contains. On a basic level can categorize stars by their metallicity. While stars are mainly hydrogen and helium, they contain traces of other elements (which astronomers call metals). The metallicity of a star is by its ratio of iron to helium, known as [Fe/He]. This is expressed on logarithmic scale relative to the ratio of our Sun. So the [Fe/He] of our Sun is zero. Stars with lower metallicity will have negative [Fe/He] values, and ones with higher metallicity have positive values. Since “metals” are formed by fusion in the cores of stars, those stars with higher metallicity must have formed from the remnants of earlier stars. Our Sun is likely a third generation star. One of the things metallicity tells us is that stars toward the center of our galaxy formed earlier than stars in the outer regions. Through millions of stars we not only understand the history of stars but the history of galaxies. As we continue to gather more data on stars, they continue to tell us a rich collective story. Comments 1. Great post, as usual. How varied are the relative abundances of various “metals”, in stars of the same metallicity? For example, is there much variation when you compare elements produced in stars which do not go supernova (i.e. up to ~Fe), with those which do (up to Pb and Bi)? 1. Author Metallicity is more of a measure of the components making up the outer layers of a star. Heavier elements beyond iron are produced in the last moments of a star, so you wouldn’t really see them in the atmospheres of stars. 2. If a large enough star begins to fuse iron in the last moments before it goes supernova, does the presence of iron in a later-generation star affect it’s potential lifetime in any way? 
I understand that fusing iron is the death knell to a stellar core because it involves a net energy loss to produce it or any elements heavier than it. But, does the presence of iron or heavier elements in the protostellar cloud during the star’s formation limit the lifetime of the star to less than that of one containing a similar mass of hydrogen but with less heavier elements? Or is it only the process of actual Fe fusion that has any effect, and Fe doping doesn’t ‘poison’ the star to any degree? 1. I think it might make some difference, albeit only a small one. Even if all the Fe (and Co and Ni) a massive star ‘inherited’ when it formed were to end up in the core, well before the fusion stage before ‘iron fusion’, it’d be a pretty small total amount (even the most ‘metal-rich’ main sequence stars have merely percent levels, combined, of elements other than H and He). So the core collapse might happen somewhat sooner than in really metal-rich stars than really metal-poor ones. The really big difference is the presence of any metals (astronomer-speak; everything other than H, He, and perhaps Li is a “metal”); Population III stars – those with essentially zero metals – are thought to have quite different properties than the Pop I and Pop II main sequence (MS) stars we see today; H and He – both atoms and ions – have relatively few ‘electronic transition’ energy levels, so radiation transfer is quite different (how the fusion energy generated in the core gets out to the star’s surface; yes, there’s also convection), so stars with much greater masses than the most massive of today’s MS ones can be (relatively) stable … and die in different kinds of supernovae (I think; Brian?). Leave a Reply
__label__pos
0.999471
Use this URL to cite or link to this record in EThOS: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.567773 Title: Low dimensional structures of some mixed metal oxides containing antimony : synthesis and characterisation Author: de Laune, Benjamin Paul Awarding Body: University of Birmingham Current Institution: University of Birmingham Date of Award: 2013 Availability of Full Text: Access from EThOS: Full text unavailable from EThOS. Thesis embargoed until 01 Jul 2019 Access from Institution: Abstract: This thesis describes the synthesis and characterisation of phases related to schafarzikite (FeSb\(_2\)0\(_4\)). A range of Co\(_1\)\(_-\)\(_x\)Fe\(_x\)Sb\(_2\)\(_-\)\(_y\)Pb\(_y\)O\(_4\) (where x = 0, 0.25, 0.50 and 0.75; y = 0-0.75) compounds have been synthesised and characterised by a variety of techniques e.g. neutron powder diffraction (NPD), and thermogravimetric analysis. The refined lattice parameters for all compounds range between a = 8.4492(2) Å - 8.5728(2) Å and c = 5.9170(1) Å - 6.0546(2) Å (NPD, 300 K). The magnetic structures of Co\(_0\)\(_.\)\(_2\)\(_5\)Fe\(_0\)\(_.\)\(_7\)\(_5\)Sb\(_2\)O\(_4\) and Co\(_0\)\(_.\)\(_5\)\(_0\)Fe\(0\)\(_.\)\(_5\)\(_0\)Sb\(_2\)O\(_4\) have been shown to possess dominant A- type ordering as a result of overriding direct exchange interactions between intrachain transition metal cations, whilst all other phases show dominant C- type ordering consistent with 90˚ superexchange. Unusual negative susceptibility is seen and explained in several samples including CoSb\(_2\)O\(_4\). All phases are shown to display canted antiferromagnetic magnetic order. Oxidised intermediates are formed and characterised for the first time. This has been critically linked to the presence of Fe\(^2\)\(^+\) within these phases. There is evidence to suggest the excess oxygen is a peroxide and/or superoxide species. 
The synthesis of LiSbO\(_2\) is described and its structure determined: P2\(_1\)\(_/\)\(_c\) symmetry with a = 4.8550(3) Å, b = 17.857(1) Å, c = 5.5771(4) Å, β = 90.061(6)˚. Its electrical and thermal properties are described. Supervisor: Not available Sponsor: Not available Qualification Name: Thesis (Ph.D.) Qualification Level: Doctoral EThOS ID: uk.bl.ethos.567773  DOI: Not available Keywords: QD Chemistry Share:
__label__pos
0.960355
The Association between Prenatal Nicotine Exposure and Offspring's Hearing Impairment Academic Article Abstract • Objective  The objective of this study is to evaluate whether there is an association between in-utero exposure to nicotine and subsequent hearing dysfunction. Patients and Methods  Secondary analysis of a multicenter randomized trial to prevent congenital cytomegalovirus (CMV) infection among gravidas with primary CMV infection was conducted. Monthly intravenous immunoglobulin hyperimmune globulin therapy did not influence the rate of congenital CMV. Dyads with missing urine, fetal or neonatal demise, infants diagnosed with a major congenital anomaly, congenital CMV infection, or with evidence of middle ear dysfunction were excluded. The primary outcome was neonatal hearing impairment in one or more ears defined as abnormal distortion product otoacoustic emissions (DPOAEs; 1 to 8 kHz) that were measured within 42 days of birth. DPOAEs were interpreted using optimized frequency-specific level criteria. Cotinine was measured via enzyme-linked immunosorbent assay kits in maternal urine collected at enrollment and in the third trimester (mean gestational age 16.0 and 36.7 weeks, respectively). Blinded personnel ran samples in duplicates. Maternal urine cotinine >5 ng/mL at either time point was defined as in-utero exposure to nicotine. Multivariable logistic regression included variables associated with the primary outcome and with the exposure (p < 0.05) in univariate analysis. Results  Of 399 enrolled patients in the original trial, 150 were included in this analysis, of whom 46 (31%) were exposed to nicotine. The primary outcome occurred in 18 (12%) newborns and was higher in nicotine-exposed infants compared with those nonexposed (15.2 vs. 10.6%, odds ratio [OR] 1.52, 95% confidence interval [CI] 0.55-4.20), but the difference was not significantly different (adjusted odds ratio [aOR] = 1.0, 95% CI 0.30-3.31). 
This association was similar when exposure was stratified as heavy (>100 ng/mL, aOR 0.72, 95% CI 0.15-3.51) or mild (5-100 ng/mL, aOR 1.28, 95% CI 0.33-4.95). There was no association between nicotine exposure and frequency-specific DPOAE amplitude. Conclusion  In a cohort of parturients with primary CMV infection, nicotine exposure was not associated with offspring hearing dysfunction assessed with DPOAEs. Key Points Nicotine exposure was quantified from maternal urine. Nicotine exposure was identified in 30% of the cohort. Exposure was not associated with offspring hearing dysfunction. • Published In Digital Object Identifier (doi) Pubmed Id • 28550099 • Author List • Cleary EM; Kniss DA; Fette LM; Hughes BL; Saade GR; Dinsmoor MJ; Reddy UM; Gyamfi-Bannerman C; Varner MW; Goodnight WH
__label__pos
0.673806
Differences Between North- and South-Facing Slopes Differences Between North- and South-Facing Slopes ••• vovik_mar/iStock/GettyImages The face a slope presents to the sun – north or south – plays a role in the local climate created on it. This "microclimate" helps determine the types of plants that colonize the slope and influences which animals are drawn to the area seeking their preferred foods and suitable shelter. The basic difference between north- and south-facing slopes – the relative amount and intensity of sunlight they receive – leads to profound ecological differences, similar (but reversed) in the Northern and Southern Hemisphere. Amount of Sunlight In the Northern Hemisphere, north-facing slopes in latitudes from about 30 to 55 degrees receive less direct sunlight than south-facing slopes. The lack of direct sunlight throughout the day, whether in winter or summer, results in north-facing slopes being cooler than south-facing slopes. During winter months, portions of north-facing slopes may remain shaded throughout the day due to the low angle of the sun. This causes snow on north-facing slopes to melt slower than on south-facing ones. The scenario is just the opposite for slopes in the Southern Hemisphere, where north-facing slopes receive more sunlight and are consequently warmer. Near the equator, north- and south-facing slopes receive roughly the same amount of sunlight because the sun is almost directly overhead. At the poles, north and south slopes tend to be either shrouded in darkness all winter long, or bathed in sunlight all summer long, with only slight variation between the slopes in spring and fall. Depth of Soil Depth of soil on a slope, whether it faces north or south, depends on the steepness of the slope. The steeper the incline, the higher the rate of soil erosion from rain runoff. 
Soils on steep slopes are primarily made up of rock fragments because pieces of lightweight organic matter, such as leaves, wash away before they can decompose into soil. Slopes that have a gentle incline tend to accumulate a deeper layer of soil. In the Northern Hemisphere, soil on south-facing slopes dries out faster and is warmer than soil on north-facing slopes due to longer exposure to sunlight – the opposite applies in the Southern Hemisphere. Effect of Rainfall The amount of rain that falls on a slope and is taken up by existing vegetation is determined by how steep the slope is, rather than whether it faces north or south. Rain runs more quickly off steeper slopes and does not have time to be taken up by plants. Rain falling on less steep inclines stays in the soil longer and is utilized by plants and trees, generally resulting in larger plants and/or colonization of plants with higher hydration needs. Slope aspect can figure into this, however: Vegetation on south-facing slopes in the Northern Hemisphere, for example, has less time to take up water because of the drying effect of the sun. Effect on Plant Communities Given the effects of varying solar insolation, plant communities can vary widely between north- and south-facing slopes. In the Northern Hemisphere, warmer south-facing slopes green up sooner in spring, stay greener longer in the fall and tend to be drier than north-facing slopes. Plants that tolerate these hot, dry conditions – which, depending on the region, may be oaks, pines or drought-tolerant shrubs and grasses –grow well on southern slopes in their native range. A few feet away, a cooler, moister north-facing slope with a gradual incline may be dotted with closed mixed-hardwood or conifer forest and shade-tolerant wildflowers. Trees capture indirect sunlight better than low-growing grasses. Related Articles Does the Tundra Have Rain? Temperature and Precipitation in the Temperate Grasslands How Does Altitude Affect Vegetation? 
What Causes a Rain Shadow? How to Calculate Runway Slope Characteristics of the Grassland The Effects of Topography on the Climate Tundra Characteristics How Do Mountains Affect Precipitation? Characteristics of Grassland Biomes Temperate Woodland & Shrubland Flowers Types of Swamp Grass Names of Plants That Live in Grasslands Native Plants of the Texas Coastal Plains Texas Geography & Soil Types What Is the Wind in a Tundra? The Definition of Abiotic and Biotic Factors What Are Environmental Problems in Temperate Shrublands? What Are the Major Types of Terrestrial Ecosystems? Landforms of a Savanna Dont Go! We Have More Great Sciencing Articles!
__label__pos
0.923659
Is there any difference in calculation of Energy spectral density and Power spectral density using MATLAB? 4 views (last 30 days) Power Spectral Density PSD=PSD= real(Signal).^2 + imag(Signal).^2; Where Signal =FFT output How can I find Energy Spectral Density? Accepted Answer Image Analyst Image Analyst on 8 Feb 2022 Wouldn't it just be the sqrt(psd)? By the way, there is a function in the Signal Processing Toolbox to get the PSD: PSD = pwelch(timeDomainSignal); More Answers (0) Tags Community Treasure Hunt Find the treasures in MATLAB Central and discover how the community can help you! Start Hunting!
__label__pos
0.999963
ELECTRON BEAM RADIATION THERAPY
by Sharmaine Galon, 12 February 2015

Transcript of ELECTRON BEAM RADIATION THERAPY

Target Definition
As with photon beam treatments, the first step in the initiation of electron therapy is to determine accurately the target to be treated. All available diagnostic, operative, and medical information should be consulted to determine the extent of disease and the final planning target volume (PTV), with appropriate margins, before simulation and placement of the electron fields is initiated.

Three main types of radiation are used:
1. Gamma rays
2. X-rays
3. Electron beams

Why Electrons?
Electrons have been used in radiotherapy since the early 1950s. An electron beam delivers a reasonably uniform dose from the surface to a specific depth, after which the dose falls off rapidly, eventually to a near-zero value. Using electron beams allows disease within approximately 6 cm of the surface to be treated effectively while sparing deeper normal tissues. Electron beam therapy is therefore the treatment of choice for the skin and shallow targets in the body.
What is Electron Beam Radiation Therapy?
Electron therapy, or electron beam therapy (EBT), is a kind of external beam radiotherapy in which electrons are directed at a tumor site. It is commonly delivered with a medical linear accelerator, although other machines, such as tomotherapy units, betatrons, and microtrons, can also be used.

Clinical Scenarios

Intracavitary Irradiation
Intracavitary irradiation is performed for treatment of intraoral or intravaginal areas of the body. Additionally, intraoperative radiation therapy (IORT) can be considered an intracavitary electron technique. It is used in the treatment of oral lesions presenting in the floor of the mouth, tongue, soft palate, and retromolar trigone. For all intracavitary irradiation, specially designed treatment cones are required, along with an adapter to attach the cone to the linear accelerator.

Intraoperative Irradiation
Either a dedicated linear accelerator room that can meet the requirements of operating-room sterile conditions, or a new mobile electron linac that can be transported to a shielded operating room, needs to be used.

Total Limb Irradiation
Treatment of the entire periphery of a body extremity (e.g., for melanoma, lymphoma, or Kaposi's sarcoma) can be accomplished using electron fields spaced uniformly around the limb. Electrons offer a technique for delivering a uniform dose while sparing uninvolved deep tissues and structures.

TOTAL SKIN IRRADIATION
Total skin electron treatments are employed in the management of mycosis fungoides. The first requirement for total skin electron treatment is a uniform electron field large enough to cover the entire patient in a standing position from head to foot and in the right-to-left direction.
This is accomplished by treating the patient at an extended distance (410 cm), angling the beams superiorly and inferiorly, and using a large sheet of plastic (3/8-inch-thick acrylic placed 20 cm from the patient surface) to scatter the beam.

CRANIOSPINAL IRRADIATION
Replacing the posterior photon field with a high-energy electron field can greatly reduce the exit dose to the upper thorax region, especially the heart, and to the lower digestive tract. This is especially important for pediatric patients and reduces both acute and late complications. The lateral photon fields are rotated through an angle to match the divergence of the posterior electron field. The superior edge of the electron field is not moved during the treatment, but the inferior border of the photon fields is shifted 9 mm on either side of the junction location. One-third of the photon treatments are delivered with the inferior border of the two photon fields coincident with the electron field edge. The next one-third are delivered with the edge of one photon field 9 mm superior to the electron field edge and the edge of the second photon field moved 9 mm inferior to it. The final one-third are delivered with the edges of the photon fields reversed from their previous positions. The angles of the two electron fields are rotated to account for the divergence of each field and to produce a common field edge.
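The three-phase junction "feathering" scheme described above can be sketched as a simple schedule. This is only a schematic illustration of the scheme as stated (thirds of the course, ±9 mm shifts); the function name and the sign convention are hypothetical:

```python
def junction_offsets(fraction: int, total_fractions: int, shift_mm: float = 9.0):
    """Return (photon field 1, photon field 2) inferior-border offsets in mm,
    relative to the fixed electron field edge, for a given fraction number.

    The course is split into thirds:
      1st third: both photon borders coincide with the electron field edge.
      2nd third: one border shifted superior (-), the other inferior (+).
      3rd third: the two borders are reversed.
    """
    third = total_fractions / 3.0
    if fraction <= third:
        return (0.0, 0.0)
    elif fraction <= 2 * third:
        return (-shift_mm, +shift_mm)
    else:
        return (+shift_mm, -shift_mm)

# Example: a hypothetical 30-fraction course
for f in (1, 11, 21):
    print(f"fraction {f}: offsets {junction_offsets(f, 30)} mm")
```

Spreading the junction over three positions in this way smears out any hot or cold spot that a single fixed field match would create.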
Future Directions Electron therapy can be expected to become more sophisticated in the future as the enthusiasm for intensity-modulated radiation therapy will carry into electron therapy. Advances in electron dose calculations, methods for electron-beam optimization, and availability of electron multileaf collimators will enable the practice of intensity modulated and energy-modulated electron therapy. Intracavitary Irradiation (Note: Differentiate Spot treatment from TSEB Therapy) Spot treatment • Usually given 4 times a week over 3 to 4 weeks. • You should plan on being in the department for 60 to 90 minutes for each treatment. Total Skin Electron Beam Therapy • Given 2 times a week over 6 to 9 weeks. • You should plan on being in the department for 60 to 90 minutes for each treatment. After Treatment • The Radiation Oncologist may want you to have “boost” treatments to areas of your skin that need more radiation. • This includes the soles of the feet, perineum, scalp, skinfolds under the breasts, and on the stomach area. • Six to 10 boost treatments are usually given over 2 to 3 weeks. • Be in the department for 60-90 minutes for each treatment. Treatment What to expect: • You will check in at the reception desk and have a seat in the waiting room. • Your therapists will tell you to change into a hospital gown. They will bring you into the treatment room and help you get into position. • You will be in the treatment room for up to 30 minutes. Most of this time will be spent positioning you. The actual treatment only takes a few minutes. • During TSEB therapy, you will be standing on a platform that rotates so that the entire surface of your skin can be treated from different angles. • During spot treatment, you will be lying in the same position that you were in during your simulation. • You will be asked to disrobe and will be given a disposable yellow gown to wear during your treatments. 
• You may be given goggles to protect your eyes and put shields on your hands, feet, or both during some of your treatments. • Once you are in position, your therapist will leave the room and begin the treatment. • Breathe normally during your treatment, but do not move. • If you are very uncomfortable and need help, tell your therapists. They can turn off the machine and come in to see you at any time, if necessary. Simulation (Note: Explain simulation briefly) •If you are having TSEB therapy, you will not have a treatment planning procedure. This is because the entire surface of your skin will be treated. • If you are having spot treatment, you will first have a treatment planning procedure called a simulation. Megavoltage electron beams represent an important treatment modality in modern radiotherapy, often providing a unique option in the treatment of superficial tumors. Electron beam therapy is the choice for skin and shallow target in the body. Electron therapy or Electron Beam Therapy (EBT) is a kind of external beam radiotherapy where electrons are directed to a tumor site. Spot Treatment- if 1 or more spots Total Skin Electron Beam Therapy- if the entire surface of the skin is treated. Electron beam therapy is used in the treatment of: Superficial tumors like cancer of skin regions, or total skin (e.g. mycosis fungoides) Diseases of the limbs (e.g. melanoma and lymphoma) Nodal irradiation Cancer of the skin - eyelids, nose, ear, scalp, and limbs Cancer of the upper respiratory and digestive tract – floor of mouth, soft palate, retromolar trigone, salivary gland Cancer of the breast - chest wall irradiation following mastectomy Cancer in other sites - retina, orbital, spine (craniospinal irradiation) Pancreas and other abdominal structures (intraoperative therapy) Cervix (intracavitary irradiation) May also be used to boost the radiation dose to the surgical bed after mastectomy or lumpectomy. 
For deeper regions intraoperative electron radiation therapy might be applied. Generation of Electron Beams in a Linear Accelerator Medical linear accelerators (linacs) are cyclic accelerators which accelerate electrons to kinetic energies from 4 MeV to 25 MeV using non-conservative microwave RF fields. Components of Modern Linacs Five Major and Distinct Sections of the Machine: Gantry Gantry stand or support Modulator cabinet Patient support assembly (Treatment couch) Control console Source of electrons Also called as “electron gun” It contains a heated filament cathode and a perforated grounded anode Injection System The microwave radiation, used in the accelerating waveguide to accelerate electrons to the desired kinetic energy, is produced by the RF power generation system which consists of two major components: 1. RF power source - is either a magnetron or a klystron - both are devices using electron acceleration and deccelaration in vacuum for the production of high power RF fields. 2. Pulse modulator - produces the high voltage, high current, short duration pulses required by the RF power source (magnetron or klystron) and the injection system (electron gun) RF Power Generation System Waveguides are evacuated or gas-filled metallic structures of rectangular or circular cross-sections used in transition of microwaves. There are 2 types: RF power transmission waveguides Accelerating waveguides Accelerating Waveguide Electron Beam Transport System Bending magnets are used in linacs operating at energies above 6 MeV where the accelerating waveguides are too long for straight-through mounting. The accelerating waveguide is usually mounted parallel to the gantry rotation axis and the electron beam must be bent to be able to exit through the beam exit window. Three systems for electron bending have been developed: 90° bending, 270° bending, 112.5° bending. 
Linac Treatment Head The linac head contains several components, which influence the production, shaping, localizing, and monitoring of the clinical photon and electron beams. The important components found in a typical head of a fourth or fifth generation linac include: 1. Several retractable x-ray targets 2. Flattening filters and electron scattering foils (also called scattering filters) 3. Primary and adjustable secondary collimators 4. Dual transmission ionization chambers 5. Field defining light and range finder 6. Optional retractable wedges 7. Optional multileaf collimator (MLC) In a typical modern medical linac, the electron beam collimation is achieved with two or three collimator devices: 1. Primary collimator 2. Secondary movable beam-defining collimators 3. Multileaf collimator (MLC) Clinical electron beams also rely on electron beam applicators (cones) for beam collimation Beam Collimation Dose Monitoring System Most common dose monitors in linacs are transmission ionisation chambers permanently imbedded in the linac clinical photon and electron beams to monitor the beam output continuously during patient treatment. Most linacs use sealed ionisation chambers to make their response independent of ambient temperature and pressure. The customary position of the dose monitor chambers is between the flattening filter or scattering foil and the photon beam secondary collimator. The main requirements for the ionization chamber monitor are as follows: 1. Chambers must have a minimal effect on clinical photon and electron radiation beams. 2. Chamber response should be independent of ambient temperature and pressure (most linacs use sealed ionisation chambers to satisfy this condition). 3. Chambers should be operated under saturation conditions. PRODUCTION OF ELECTRON BEAM IN A LINAC Interaction of Electrons with Absorbing Material Electron entering a material interacts as a negatively charged particle with electric fields of specimen atoms. 
These interactions are classified in to two different types. 1. Elastic interactions: In this case no energy is transferred from electron to sample. As result electron leaving the sample still has the original energy. 2. Inelastic interactions: The energy of the incident electron is transferred to the sample atoms. Hence, after the interaction electron energy is reduced. Elastic Interaction Elastic interactions deflects the electron beam along new trajectory, causing them to spread laterally. A strong elastic scatter very near to the nucleus may result in beam electron leaving the specimen via back scattering, called Backscattered electrons (BSE). Probability of elastic scattering - Increases strongly with atomic number, as heavier atoms have much stronger positive charge at nucleus - Decreases as electron energy increases. Inelastic Interaction An inelastic collision, in contrast to an elastic collision, is a collision in which kinetic energy is not conserved. In collisions of macroscopic bodies, some kinetic energy is turned into vibrational energy of the atoms, causing a heating effect, and the bodies are deformed. With the inelastic scattering, beam electrons loose energy to specimen atoms in various ways. Can produce ionization, Bremsstahlung or a secondary electron. This is done to make sure that: • Your treatment site is mapped out correctly. • You get the right dose of radiation. • The dose of radiation to nearby tissue is as small as possible. Guidelines during simulation: • Do not apply ointments, creams, lotions, talcum powders, alcohol, deodorants, anti-perspirants, perfumes, make-up or after-shave lotions in the treatment area unless prescribed by your physician. These products may intensify a skin reaction. • Do not wear earrings or necklaces. • Eat and drink as you normally would. • Wear comfortable clothes. • You will be lying still for a long time. This is uncomfortable for some patients. 
If you think it will be for you, take acetaminophen or your usual pain medication 1 hour before your appointment. • If you think you may get anxious during your procedure, speak with your radiation oncologist about whether medication may be helpful. During your simulation: • Your therapists will take pictures of your skin and mark up the area(s) to be treated with a felt marker. • This will take about 2 to 4 hours. • The position that you are in during your simulation will be the same position you will be in for your spot treatments. Side Effects: • Patients who get spot treatment usually have minor side effects that involve the skin, hair, and nails in the area being treated. • Patients who get TSEB therapy usually have side effects that involve all of the skin, hair, and both fingernails and toenails. • Your skin will become: red, dark, dry, irritated (similar to sunburn), sore around your lips if your treatment is on your face. • The redness and irritation will get better after your treatment is done but your skin in the treated areas will be drier than the usual. • You will lose hair on your whole body (scalp, eyebrows, under your arms, and pubic hair) but will begin to grow back in 3 to 6 months after the treatment. • Your nails will fall off in the areas being treated. As your old nails fall out, new ones will be growing in underneath. Field Shaping and Collimation constructed : lead or low-melting-point lead alloy. (lipowitz) thickness: millimeters - to stop primary electrons is given by. Patient Shielding A variety of shielding can be placed close to, on or inside the patient. External shields: External shields can be placed over most body surfaces Internal shields these are thin shields that are places within a body cavity. Depth Dose major attraction of the electron beam irradiation is the shape of the depth dose curve. 
A region of more or less uniform dose followed by a rapid drop-off of the dose offers a distinct clinical advantage over the conventional x-ray modalities. The depth in centimeters at which electrons deliver a dose to the 80% to 90% isodose level is equal to approximately one-third to one-fourth of the electron energy in MeV. The most useful treatment depth, or therapeutic range, of electrons is given by the depth of the 90% depth dose.

Energy Dependence of Depth Dose

The percentage depth dose increases as the energy increases.

SSD

Depth dose variations with SSD are usually insignificant. Differences in the depth dose resulting from the inverse square effect are small because electrons do not penetrate that deep (6 cm or less in the therapeutic region) and because the significant growth of penumbra width with SSD restricts the SSD in clinical practice to typically 115 cm or less.

Dose Distribution in the Patient

The ideal irradiation condition is for the electron beam to be incident normally on a flat surface with underlying homogeneous soft tissues. The dose distribution for this condition is similar to that for a water phantom described previously.

What is radiation therapy?

It is the use of high energy rays, usually x-rays and similar rays (such as electrons), to treat disease. It works by destroying cancer cells in the area that's treated.

Main Beam-forming Components of a Modern Linac

- Injection system
- RF power generation system
- Accelerating waveguide
- Auxiliary system
- Beam transport system
- Beam collimation and beam monitoring system

Auxiliary System

The auxiliary system consists of several services which are not directly involved with electron acceleration, yet they make the acceleration possible and the linac viable for clinical operation. The linac auxiliary system comprises four systems:

1. Vacuum pumping system producing a vacuum pressure of ~10^-6 torr in the accelerating guide and the RF generator.
2. Water cooling system used for cooling the accelerating guide, target, circulator, and RF generator.
3. Optional air pressure system for pneumatic movement of the target and other beam shaping components.
4. Shielding against leakage radiation.

The majority of higher energy linacs, in addition to providing single or dual photon energies, also provide electron beams with several nominal electron beam energies in the range from 6 to 30 MeV. To activate an electron beam mode, both the target and flattening filter of the x-ray beam mode are removed from the beam path. Two techniques are available for producing clinical electron beams from electron pencil beams:

1. Pencil beam scattering
2. Pencil beam scanning

The rate of energy loss for radiative interactions (bremsstrahlung) is approximately proportional to the electron energy and to the square of the atomic number of the absorber. Radiative losses are more efficient for higher energy electrons and higher atomic number materials. When a beam of electrons passes through a medium, the electrons suffer multiple scattering due to Coulomb force interactions between the incident electrons and, predominantly, the nuclei of the medium. As the electron beam traverses the patient, its mean energy decreases and its angular spread increases. The scattering power of electrons varies approximately as the square of the atomic number and inversely as the square of the kinetic energy.
For this reason high atomic number materials are used in the construction of scattering foils used for the production of clinical electron beams in a linac. Electron Beam Radiation Therapy Galon, Sharmaine Alba, Alea Mae Albano, Darren Banate, Vanessa Lace Blancaflor, Lawrence Buensalida, Adriane Campañas, Leianngiezl Cruz, Mary Grace Sarmiento, Christine Paliza, Lou Grace Full transcript
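As a quick numeric illustration of the rules of thumb quoted earlier in the transcript (electrons deliver the 80% to 90% isodose level at roughly one-fourth to one-third of the beam energy in MeV, and the therapeutic range is taken at the 90% depth dose), here is a minimal sketch. The function names are invented for this example, and these are planning approximations only, not a substitute for measured dosimetry:

```python
def isodose_depth_window_cm(energy_mev):
    """Rule of thumb: the 90% and 80% isodose depths fall at roughly
    one-fourth and one-third of the electron energy (MeV), in cm."""
    return energy_mev / 4.0, energy_mev / 3.0

def therapeutic_range_cm(energy_mev):
    """The therapeutic range is conventionally the depth of the 90% depth dose."""
    return energy_mev / 4.0

# Example: a 12 MeV electron beam
d90, d80 = isodose_depth_window_cm(12)
print(f"90% dose depth ~ {d90:.1f} cm, 80% dose depth ~ {d80:.1f} cm")  # 3.0 cm, 4.0 cm
print(f"Therapeutic range ~ {therapeutic_range_cm(12):.1f} cm")         # 3.0 cm
```

Note how the transcript's remark that electrons "do not penetrate that deep (6 cm or less in the therapeutic region)" corresponds, under this rule, to a 90% dose depth of 6 cm at roughly 24 MeV.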
Vehicular Power Bus – Hell on Earth? Connecting a piece of gear to an automotive bus is a terrible thing to do… unless the gear manufacturer understands the perils that await, including spikes, surges, and dips. Fortunately, military specifications consolidate decades of observations to help us understand the vehicle power bus.
Bridging ligand

[Figure: an example of a μ2 bridging ligand]

A bridging ligand is a ligand that connects two or more atoms, usually metal ions.[1] The ligand may be atomic or polyatomic. Virtually all complex organic compounds can serve as bridging ligands, so the term is usually restricted to small ligands such as pseudohalides or to ligands that are specifically designed to link two metals. In naming a complex wherein a single atom bridges two metals, the bridging ligand is preceded by the Greek character 'mu', μ,[2] with a superscript number denoting the number of metals bound to the bridging ligand. μ2 is often denoted simply as μ. When describing coordination complexes, care should be taken not to confuse μ with η ('eta'), which relates to hapticity. Ligands that are not bridging are called terminal ligands (see figure).

List of bridging inorganic ligands

Virtually all ligands are known to bridge, with the exception of amines and ammonia.[3] Common inorganic bridging ligands include most of the common anions:

- OH (hydroxide): [Fe2(OH)2(H2O)8]4+, see olation
- O2− (oxide): [Cr2O7]2-, see polyoxometalate
- SH (hydrosulfido): Cp2Mo2(SH)2S2
- NH2 (amido): HgNH2Cl
- N3− (nitride): [Ir3N(SO4)6(H2O)3]4-, see metal nitrido complex
- CO (carbonyl): Fe2(CO)9, see metal carbonyl#Bridging carbonyls
- Cl- (chloride): Nb2Cl10, see metal halide#Halide ligands
- H- (hydride): B2H6
- CN- (cyanide): approx. Fe7(CN)18, see cyanometalate

Many simple organic ligands form strong bridges between metal centers. Common examples include organic derivatives of the above inorganic ligands (R = alkyl, aryl): OR, SR, NR2, NR2− (imido), PR2 (phosphido, note the ambiguity with the preceding entry), PR2− (phosphinidino), and many more.

Bonding

For doubly bridging (μ2-) ligands, two limiting representations are 4e and 2e bonding interactions. These cases are illustrated in main group chemistry by [Me2Al(μ2-Cl)]2 and [Me2Al(μ2-Me)]2. Complicating this analysis is the possibility of metal-metal bonding. Computational studies suggest that metal-metal bonding is absent in many compounds where the metals are separated by bridging ligands. For example, calculations suggest that Fe2(CO)9 lacks an Fe-Fe bond by virtue of a 3-center, 2-electron bond involving one of three bridging CO ligands.[4]

[Figure: representations of the two kinds of M-bridging ligand interactions, the 3-center, 4-electron bond and the 3-center, 2-electron bond.[4]]

Polyfunctional ligands

Polyfunctional ligands can attach to metals in many ways and thus can bridge metals in diverse ways, including sharing of one atom or using several atoms. Examples of such polyatomic ligands are the oxoanions CO32− and the related carboxylates, PO43−, and the polyoxometallates. Several organophosphorus ligands have been developed that bridge pairs of metals, a well-known example being Ph2PCH2PPh2.

References

1. IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version (2006–): "bridging ligand".
2. IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version (2006–): "µ- (mu)".
3. Werner, H. (2004). "The Way into the Bridge: A New Bonding Mode of Tertiary Phosphanes, Arsanes, and Stibanes". Angew. Chem. Int. Ed. 43 (8): 938–954. doi:10.1002/anie.200300627. PMID 14966876.
4. Jennifer C. Green, Malcolm L. H. Green, Gerard Parkin, "The occurrence and representation of three-centre two-electron bonds in covalent inorganic compounds", Chem. Commun. 2012, 11481–11503. doi:10.1039/c2cc35304k.
CBD Hemp Flower: Can it be Helpful in Alleviating Painful Symptoms?

CBD is a "magical herb." There is no doubt about it. It is a one-stop solution to most of your everyday issues. You can easily manage a variety of problems if you switch to CBD as a supplement. While there are so many options available in the market, the full-spectrum CBD hemp flower trend is growing. Why? These flowers have more than 400 compounds that offer benefits in many ways, including pain management. In this article, we will talk about this property of the CBD hemp flower to help you make an informed decision.

CBD Hemp Flower: How is it beneficial?

So, let's get started! CBD is one of the primary cannabinoids and is popular because of its anti-psychoactive effects. If you opt for hemp flowers, you can bring down the usual "weed high." This is quite a beneficial option if you wish to use CBD for alleviating painful symptoms. Moreover, even if you happen to consume hemp flowers for an extended period in higher doses, they won't bring any life-threatening effects.

And the benefits don't end here! Industrial hemp contains a higher amount of cannabidiol, and if you use a full-spectrum CBD hemp flower, it will enable you to access a plethora of effects, including anti-inflammatory effects. And let's not forget: this activity helps users mitigate issues like stress, anxiety, body pains, and many more. Ideally, this flower is beneficial for the following health conditions:

• Migraines
• Arthritis
• Headaches
• Low back pain
• Chronic pain
• Fibromyalgia
• Muscle pain and spasm
• Neuropathic pain
• IBS

But how does it work? Hemp flowers mitigate painful symptoms by interacting with the body's endocannabinoid system and its receptors. According to many studies, cannabinoids are much more effective than the known NSAIDs. The power usually lies in the compounds that make up the flower.
Yes, we are talking about the "entourage effect." This effect increases efficiency, since the compounds present in the flowers work synergistically when there is a need to alleviate pain.

How to get the maximum benefits?

Ideally, cannabis users advocate smoking or vaping hemp flowers. For instance, if you want to de-stress yourself and focus on your task, smoke the Harlequin strain for the desired outcome. Similarly, other CBD hemp flowers can help you with pain relief. This is because this form of consumption increases the bioavailability of cannabidiol in the bloodstream. So, it is one of the quickest ways to relish the benefits and reduce painful symptoms.

But why full-spectrum? Because it gives you access to terpenes, which aren't only responsible for the aroma or the taste. They also bring benefits like sedative, analgesic, antidepressant, and anti-anxiety effects, and so on.

Final Takeaways

CBD hemp flowers are a natural way to mitigate painful symptoms. They also help you manage a lot of other conditions in the most natural way possible. And if you opt for full-spectrum products, the results will be long-lasting.
2014 Latest 100% Pass Guaranteed Microsoft 70-412 Dumps (101-110) QUESTION 101 You have a server named Server1 that runs Windows Server 2012 R2. Windows Server 2012 R2 is installed on volume C. You need to ensure that Safe Mode with Command Prompt loads the next time Server1 restarts. Which tool should you use? A.    The Restart-Server cmdlet B.    The Bootcfg command C.    The Restart-Computer cmdlet D.    The Bcdedit command Answer: D Explanation: A. Restart-Server is not a CMDLET B. modifies the Boot.ini file C. Restarts computer D. Boot Configuration Data (BCD) files provide a store that is used to describe boot applications and boot application settings. http://support.microsoft.com/kb/317521 http://technet.microsoft.com/en-us/library/hh849837.aspx http://technet.microsoft.com/en-us/library/cc731662(v=ws.10).aspx image You can see with msconfig tool that boot options have changed as follows: NOTE: Alternate Shell may be used image After reboot you should remove the safeboot option using bcdedit: – bcdedit /deletevalue safeboot QUESTION 102 Your network contains an Active Directory domain named contoso.com. The domain contains a server named Server1 that runs Windows Server 2012 R2. Server1 has the Active Directory Certificate Services server role installed and is configured to support key archival and recovery. You create a new Active Directory group named Group1. You need to ensure that the members of Group1 can request a Key Recovery Agent certificate. The solution must minimize the permissions assigned to Group1. Which two permissions should you assign to Group1? (Each correct answer presents part of the solution. Choose two.) A.    Read B.    Auto enroll C.    Write D.    Enroll E.    Full control Answer: AD Explanation: * In Template, type a new template display name, and then modify any other optional properties as needed. 
On the Security tab, click Add, type the name of the users you want to issue the key recovery agent certificates to, and then click OK. Under Group or user names, select the user names that you just added. Under Permissions, select the Read and Enroll check boxes, and then click OK. QUESTION 103 Your network contains two Web servers named Server1 and Server2. Server1 and Server2 are nodes in a Network Load Balancing (NLB) cluster. You configure the nodes to use the port rule shown in the exhibit. (Click the Exhibit button.) image You need to configure the NLB cluster to meet the following requirements: – HTTPS connections must be directed to Server1 if Server1 is available. – HTTP connections must be load balanced between the two nodes. Which three actions should you perform? (Each correct answer presents part of the solution. Choose three.) A.    From the host properties of Server1, set the Handling priority of the existing port rule to 2. B.    From the host properties of Server1, set the Handling priority of the existing port rule to 1. C.    From the host properties of Server2, set the Priority (Unique host ID) value to 1. D.    Create a port rule for TCP port 80. Set the Filtering mode to Multiple host and set the Affinity to None. E.    From the host properties of Server2, set the Handling priority of the existing port rule to 2. F.    Create an additional port rule for TCP port 443. Set the Filtering mode to Multiple host and set the Affinity to Single. Answer: BDE Explanation: Handling priority: When Single host filtering mode is being used, this parameter specifies the local host’s priority for handling the networking traffic for the associated port rule. The host with the highest handling priority (lowest numerical value) for this rule among the current members of the cluster will handle all of the traffic for this rule. The allowed values range from 1, the highest priority, to the maximum number of hosts allowed (32). 
This value must be unique for all hosts in the cluster. E (not C): Lower priority (2) for Server 2. D: HTTP is port 80. Multiple hosts. This parameter specifies that multiple hosts in the cluster handle network traffic for the associated port rule. This filtering mode provides scaled performance in addition to fault tolerance by distributing the network load among multiple hosts. You can specify that the load be equally distributed among the hosts or that each host handle a specified load weight. Reference: Network Load Balancing parameters QUESTION 104 Your network contains two Active Directory forests named contoso.com and litwareinc.com. A two- way forest trusts exists between the forest. Selective authentication is enabled on the trust. The contoso.com forest contains a server named Server1. You need to ensure that users in litwareinc.com can access resources on Server1. What should you do? A.    Install Active Directory Rights Management Services on a domain controller in contoso.com. B.    Modify the permission on the Server1 computer account. C.    Install Active Directory Rights Management Services on a domain controller in litwareinc.com. D.    Configure SID filtering on the trust. Answer: B Explanation: http://technet.microsoft.com/en-us/library/cc772808(v=ws.10).aspx image QUESTION 105 Your network contains an Active Directory domain named contoso.com. The domain contains two member servers named Server1 and Server2. All servers run Windows Server 2012 R2. Server1 and Server2 have the Failover Clustering feature installed. The servers are configured as nodes in a failover cluster named Cluster1. You add two additional nodes to Cluster1. You have a folder named Folder1 on Server1 that contains application data. You plan to provide continuously available access to Folder1. You need to ensure that all of the nodes in Cluster1 can actively respond to the client requests for Folder1. What should you configure? A.    Affinity-None B.    Affinity-Single C.    
The cluster quorum settings D.    The failover settings E.    A file server for general use F.    The Handling priority G.    The host priority H.    Live migration I.    The possible owner J.    The preferred owner K.    Quick migration L.    the Scale-Out File Server Answer: L Explanation: http://technet.microsoft.com/en-us/library/hh831349.aspx Scale-Out File Server for application data (Scale-Out File Server) This clustered file server is introduced in Windows Server 2012 R2 and lets you store server application data, such as Hyper-V virtual machine files, on file shares, and obtain a similar level of reliability, availability, manageability, and high performance that you would expect from a storage area network. All file shares are online on all nodes simultaneously. File shares associated with this type of clustered file server are called scale-out file shares. This is sometimes referred to as active-active. image QUESTION 106 Information and details provided in a question apply only to that question. Your network contains an Active Directory domain named contoso.com. The domain contains two member servers named Server1 and Server2. All servers run Windows Server 2012 R2. Server1 and Server2 have the Network Load Balancing (NLB) feature installed. The servers are configured as nodes in an NLB cluster named Cluster1. Cluster1 hosts a secure web application named WebApp1. WebApp1 saves user state information locally on each node. You need to ensure that when users connect to WebApp1, their session state is maintained. What should you configure? A.    Affinity-None B.    Affinity-Single C.    The cluster quorum settings D.    The failover settings E.    A file server for general use F.    The Handling priority G.    The host priority H.    Live migration I.    The possible owner J.    The preferred owner K.    Quick migration L.    
the Scale-Out File Server Answer: B Explanation: http://technet.microsoft.com/en-us/library/bb687542.aspx image QUESTION 107 Hotspot Question Your network contains an Active Directory domain named contoso.com. You install the IP Address Management (IPAM) Server feature on a server named Server1 and select Manual as the provisioning method. The IPAM database is located on a server named SQL1. You need to configure IPAM to use Group Policy Based provisioning. What command should you run first? To answer, select the appropriate options in the answer area. image Answer: image QUESTION 108 You have an Active Directory Rights Management Services (AD RMS) cluster. You need to prevent users from encrypting new content. The solution must ensure that the users can continue to decrypt content that was encrypted already. Which two actions should you perform? (Each correct answer presents part of the solution. Choose two.) A.    From the Active Directory Rights Management Services console, enable decommissioning. B.    From the Active Directory Rights Management Services console, create a user exclusion policy. C.    Modify the NTFS permissions of %systemdrive%\inetpub\wwwroot\_wmcs\licensing. D.    Modify the NTFS permissions of %systemdrive%\inetpub\wwwroot\_wmcs\decommission. E.    From the Active Directory Rights Management Services console, modify the rights policy templates. Answer: BE QUESTION 109 Your network contains an Active Directory domain named contoso.com. All file servers in the domain run Windows Server 2012 R2. The computer accounts of the file servers are in an organizational unit (OU) named OU1. A Group Policy object (GPO) named GPO1 is linked to OU1. You plan to modify the NTFS permissions for many folders on the file servers by using central access policies. You need to identify any users who will be denied access to resources that they can currently access once the new permissions are implemented. In which order should you Perform the five actions? 
image Answer: image Explanation: I hate steps like this because you can create a rule first and then the policy, or you can create the policy and create the rule during the creation of the policy. Either way I'm going to go with creating the policy first, and then the rule. QUESTION 110 You have a file server named Server1 that runs Windows Server 2012 R2. Data Deduplication is enabled on drive D of Server1. You need to exclude D:\Folder1 from Data Deduplication. What should you configure? A.    Disk Management in Computer Management B.    File and Storage Services in Server Manager C.    the classification rules in File Server Resource Manager (FSRM) D.    the properties of D:\Folder1 Answer: B Explanation: B. Data Deduplication exclusions on a volume are set from File and Storage Services in Server Manager, or via PowerShell. http://technet.microsoft.com/en-us/library/hh831434.aspx image If you want to pass Microsoft 70-412 successfully, do not miss reading the latest Lead2pass Microsoft 70-412 exam questions. If you can master all Lead2pass questions, you will be able to pass, 100% guaranteed. http://www.lead2pass.com/70-412.html
Price comparison: Lead2pass $99.99; Testking $124.99; Pass4sure $125.99; Actualtests $189; Others $29.99-$49.99.
Features: up-to-date real questions, error correction, printable PDF, premium VCE, VCE simulator, one-time purchase, instant download, unlimited install, 100% pass guarantee, 100% money back.
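As a memory aid for the Single-host filtering rule discussed in Question 103 (the host with the lowest numerical Handling priority receives all of the traffic for the port rule), here is a tiny illustrative sketch; the host names and priority values are made up for the example:

```python
# Hypothetical NLB hosts mapped to their Handling priority for a
# Single-host port rule. Per the rule, the lowest numerical value
# (i.e. the highest priority) handles ALL traffic for that rule.
hosts = {"Server1": 1, "Server2": 2}

def handling_host(priorities):
    """Return the host that handles all traffic for a Single-host port rule."""
    return min(priorities, key=priorities.get)

print(handling_host(hosts))  # Server1
```

This mirrors the HTTPS requirement in Question 103: with Server1 at handling priority 1 and Server2 at 2, Server1 takes the HTTPS traffic while it is available.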
Site Sponsors: Cognetic Word Lists

The Problem

They annoy everyone - people who think they are smart by putting a few words together, then handing out a do-nothing web site to sell the combination. Doh!

Cyber Nuts

No matter if they are grabbing your ideal name before you think of it (cyber squatting), or after you release it (cyber poaching), the one thing these nut cases have in common is that they have absolutely no intention of actually doing anything with the URL. Indeed, many are merely squatting on the name, hoping to sell it back to us at some point in time. --Personally, I hope they hang onto those names forever ...!

Cognetic Words

Inasmuch as so many of my friends and I have lots of neat products and web sites we want to share, we thought it high time to release a little tool I wrote years ago. I used it to help put together many of the web and other names you might have seen used here. For the moment we decided to host the "Cognetic Words Project" at SourceForge.net. The official description reads: "This tool allows us to mix phrase lists together. The results can be used to create memorable web site names using 'Cognetic Word Lists' - Sniglets designed to defeat the name-hijacking of URL squatters, hackers, & other menaces to on-line innovation." Like many a NOJ these days, this tool uses Java. It will work anywhere... even in your browser. Enjoy! -Rn

[ view entry ] ( 2662 views )   |  permalink  |  related link

Java Threads in Android

Android Threads

Those looking to port threads to Google's Android Operating System are often frustrated: While the classic thread classes are there, using threads in your application can cause some pretty tough-to-debug problems. Indeed, while 'googling around the 'net, one is often told not to use threads. We are told that using a Handler is better. While using handlers is sage advice for supporting small lightweight duties, sometimes you just want to use a little thread, anyway.
Indeed, as processors, battery, and RAM become less and less of an issue in our pockets, many lean-and-mean anti-thread discussions sound much like the CGA -v- VGA display-support discussions we had at Informix while writing Wingz in the early 1990's. When all is said and done, it seems that - at the moment - most technical limitations simply fade away...

The best reason why most use an honest-to-goodness thread on Android is to do heavier number crunching. By doing computer-intensive work away from the main user experience, we all know that things simply work better. (Even in user interfaces it seems, nobody likes a jerk! ;-)

Thread Hangups

Typically, one is crunching data for a reason: To eventually display something interesting. For this reason, one of the first head-scratchers we encounter when using a thread in an Android App is how it can completely trash an application. In fact, the most certain way to lock your 'App occurs whenever another Android Thread touches the GUI API in any way, shape, or form. Consider the following:

import java.util.Date;
import java.util.Timer;
import java.util.TimerTask;

// Note: DateX, mainActivity, and iId are defined elsewhere in this article's project.
public class TimeTicker extends TimerTask {
    Date date = new Date();
    Date dateLast = new Date();
    private boolean bRunning = false;

    @Override
    public void run() {
        dateLast = date;
        date = DateX.GetToday();
        if (bRunning) {
            mainActivity.setDialogResult(DateX.FormatRfc822Date(date), iId);
        }
    }
}

In this example, our standard Java TimerTask implementation is running every second (not heavy-weight, I know - but it is the classical "hello world" example in the timer world.) Whenever we want it to send a timer event, it will do so using the setDialogResult() of mainActivity. All well and good. Once inside of mainActivity, we can easily update a few basic elements.
When we try to invalidate the drawable surface to show the update, however (a GUI function), at the time of this writing, things will go boom:

public void setDialogResult(String sResult, int iId) {
    switch (iId) {
        case BUTTON_MARK: {
            this.vtBanner.sText = sResult;
            this.invalidate(); // BOOM!
        }
        break;
        // ...
    }
}

Maddeningly, until we understand Android's thread model, the reason things suddenly go south is not readily apparent. Indeed, if we use an Android Dialog Activity, things work just fine! By way of example, the following View.OnClickListener will update the GUI via that same setDialogResult() member function, just as easily as expected:

public void onClick(View arg0) {
    dialog.bOkay = bSetValue;
    this.dialog.dismiss();
    if (bSetValue) {
        Editable eString = dialog.editTextName.getText();
        mainActivity.setDialogResult(eString.toString(), iId);
    }
}

It can even Toast!

public void setDialogResult(String sResult, int iId) {
    switch (iId) {
        case BUTTON_NEXT: {
            // Toast t = Toast.makeText(this.getContext(), sResult, 600);
            // t.show();
            this.vtBanner.sText = sResult;
            this.invalidate(); // Works Fine!
        }
        break;
    }
}

Why can one listener-laden Activity manage the Canvas, while even the lightest Thread cannot?

One Thread to Draw Them All

Believe it or not, the reason why one class is blessed and another is cursed is simple: In the above example, even though the code is written as part of the main Activity class, our TimerTask is still effectively executing the code. Unfortunately, the timer running the TimerTask is not part of the main activity thread. It is another thread. The Dialog, however, is a completely different case. Because Android Dialogs are Views, they will all share the same, common, execution thread. Insidious, no? ( Surely knowledge is power !-)

Fortunately, the Android Framework 'wizards foresaw our dilemma: By allowing us to enqueue certain whole-life methods on-the-stack, the problem of having a non-GUI thread manage a GUI thread's ... uh ... GUI ... is very well supported, indeed:

public void setDialogResult(String sResult, int iId) {
    switch (iId) {
        case BUTTON_MARK: {
            this.vtBanner.sText = sResult;
            this.postInvalidate(); // Another Thread = NOT thread safe!
        }
        break;
        case BUTTON_NEXT: {
            this.vtBanner.sText = sResult;
            this.invalidate(); // Another Activity = Same GUI Thread
        }
        break;
    }
}

-We simply need to be sure that non-GUI threads never use GUI access routines directly. -It is a thread safety thing. (i.e. Even relatively speaking, providing millions of synchronizing gatekeeper operations would slow everything down. The logic here is that it is better to make the developers work a little, than to make the computer work a lot. (yea, that logic sounds a lot like another ancient (1970!) pro-assembly-language argument to be sure... but we like Java, C++, and C# anyway. Computers got a whole lot faster, cheaper, and better!))

Want to discuss the slow death of java.util.Vector, anyone? (--me neither :)
One that I would like to share: [ Unit Testing Today ] While the observations in the above focuses a little more on .NET & Microsoft software developer tools than I would prefer, I believe that this paper has a little something for everyone. Enjoy! [ view entry ] ( 3237 views )   |  permalink <<First <Back | 83 | 84 | 85 | 86 | 87 | 88 | 89 | 90 | 91 | 92 | Next> Last>>
Dental Implants Lawton Ok

Dental Treatment For Dental Bacteria

At your dentist appointment, they will examine and cleanse your teeth to ensure they are healthy and scrape away any tartar or plaque buildup on them. Your dentist can also perform procedures like replacing a missing tooth using implants or bridges, and filling in cracks or fractures using crowns.

1. Vitamin C

Vitamin C isn't just vital for overall health; it's also a powerful treatment for dental bacteria, since its antioxidant properties fight infection and help build strong gums. According to Dr. Louie, patients suffering from bleeding gums should consider taking a Vitamin C supplement.

Vitamin C is a water-soluble vitamin present in vegetables and fruits. For optimal health benefits, it's recommended to consume at least 75 mg of Vitamin C each day. But it is important to be aware that the most effective method to boost your vitamin C intake is to consume fresh fruits and vegetables like carrots, spinach and oranges. Foods that do not contain added sugar are likely to have higher levels of this vital nutrient, so making sure to include these items in your meals and snacks can be beneficial.

A diet that is rich in Vitamin C is the best method to protect yourself from dental problems. In addition, this eating plan boosts your immune system, as it can help fight infections.

Vitamin C's antioxidant activity is necessary for the production of collagen, which helps maintain periodontal structures like the gingiva, periodontal ligament and cementum, in addition to alveolar bone. Furthermore, it is able to decrease inflammation in the mouth and aid in periodontal tissue healing.

Rajpal also highlights another benefit of Vitamin C: its speedy elimination of local anesthetics from the body. She warns that using Vitamin C supplements within 48 hours prior to or after a dental appointment where the patient will be given anesthetics must be avoided.
If you've already undergone a procedure and want to maximize the benefits, consult your dentist about IV-C treatment for an enhanced absorption rate. This procedure can be administered either prior to, during, or after your appointment in order to reduce bodily fatigue and speed healing time.

2. Bioflavonoids

Bioflavonoids are antioxidants that shield your body from the effects of oxidative stress, which can cause inflammation and pain. They are naturally found in fruits, vegetables, herbs and other natural products, as well as in supplements.

Most of the time, eating a healthy, balanced diet is the most effective way to ensure that you are getting your daily amount of antioxidants. Nutrient-rich foods like fresh fruit and vegetables contain large amounts of these beneficial phytochemicals. In addition, tea, chocolate and wine can all provide beneficial sources of these compounds.

The bioflavonoids in these products can protect against a range of health problems, including hypertension, heart disease, allergic reactions and stroke. In addition, they could lower the chance of developing cancer.

Bioflavonoids also reduce oxidative stress, which may contribute to allergies and asthma. They block and neutralize free radicals and boost your body's own antioxidant defenses.

Some of the most prevalent bioflavonoids are quercetin and rutin, both found in citrus fruit. Quercetin can be used as an effective antihistamine and anti-inflammatory that can alleviate allergy symptoms such as nasal congestion and itchy, irritable skin. Rutin is often used in some dental treatments to speed the healing of bleeding gums and promote the growth of tissue after tooth extraction. This flavonoid is frequently paired with vitamin C for greater effectiveness.

One study demonstrated that bioflavonoids can help remineralize teeth after dental treatment, decreasing sensitivity and preventing cavities from developing.
When testing three different mouth rinses containing bioflavonoids, the researchers discovered that combining naringin and hesperidin (NA and HE) with quercetin (QE) reduced the amount of oral bacteria and enhanced the process of remineralization.

3. Zinc

Zinc is an essential mineral for maintaining dental health. It is beneficial for many reasons, including the prevention of gum disease. It protects against inflammation and cell membrane disruption caused by the bacteria that cause gingivitis. Additionally, it blocks the formation of hydrogen peroxide, which irritates your mouth.

Zinc is found in abundance in the dental hard tissues, especially dental enamel. The enamel of your teeth is made up of calcium hydroxyapatite, a crystal structure that zinc helps create. This makes your tooth enamel stronger and more resistant to dental caries.

Zinc toothpaste has been scientifically proven to lower the amount of plaque and reduce calculus growth, safeguarding tooth enamel from damage and tooth loss. Additionally, it may reduce acid production as well as encourage remineralization, which strengthens enamel. It also reduces the odor caused by volatile sulfur compounds (VSCs) in your breath. This smell develops when bacteria digest leftover food in your mouth and release this gas.

Zinc has numerous health benefits, so it's crucial to make sure you are getting enough zinc in your diet. Insufficient zinc can cause serious health problems, such as brittle bones and the inability to absorb iron.

Zinc has also been found to be beneficial in the treatment of inflammatory bowel disease (IBD). It functions as an astringent and weak antiseptic, helping with symptoms such as decreased appetite or impaired taste. Taking 11 milligrams of zinc per day for men and 8 milligrams for women is suggested.
To ensure your body is getting all the essential nutrients it needs, it is best to combine zinc with other micronutrients.

4. Echinacea

Echinacea is an herb used to boost your immune system. It is particularly effective at decreasing inflammation, which can worsen many health conditions. Echinacea has also proven to be a powerful treatment for colds and flu.

Studies have demonstrated that taking echinacea reduces the symptoms associated with the common cold; however, more research is needed to determine whether echinacea prevents colds or simply improves their symptoms. One study found that children who took echinacea for four days after getting a cold were less likely to develop another cold the following year. They were also less likely to be sick for longer than two weeks.

Another study found that echinacea was more effective than chlorhexidine in stopping and treating persistent otitis media, suggesting it could be an alternative to antibiotics in dental treatment.

The most effective echinacea supplements are those that have been standardized to contain a high concentration of the active ingredient. Additionally, they should be manufactured by an established firm known for its high-quality products.

Echinacea supplements can be taken along with other supplements and herbs for additional support. Make sure you discuss this with your healthcare provider prior to beginning any supplementation regimen, to ensure they know what medicines and vitamins you're taking.

Echinacea plants are also known to improve immunity and reduce blood glucose levels, potentially benefiting those with type 2 diabetes, heart disease or other chronic diseases. Echinacea is a great plant to use regularly to boost your immune system generally, or for a short period during colds, flu, lower respiratory tract infections and bladder infections. For most people, 200 mg daily is sufficient.

5.
Garlic

Garlic is a beloved home remedy for various ailments, including dental disease. Its antibacterial and anti-inflammatory properties help combat toothaches and infection.

Garlic is also a great option for treating gum problems and other chronic illnesses. Its antioxidants are a major source of wellness, decreasing the chance of developing chronic diseases like heart disease. Garlic supplements can help lower total cholesterol and low-density lipoprotein (LDL) cholesterol, known as "bad" cholesterol, which has been linked to heart disease and stroke. However, it can take some time before you notice any changes from taking a garlic supplement consistently.

If you're suffering from a toothache, try these home remedies to alleviate the discomfort:

First, apply a cold compress to the affected area until the pain subsides. This can help alleviate the pain and decrease swelling.

Next, rub a freshly cut clove of garlic directly on the affected tooth. This will release allicin, an antibacterial and anti-inflammatory substance with antiviral properties, which may help alleviate some of the toothache.

Third, apply a mixture of salt and garlic to the infected tooth to reduce the inflammation and increase the treatment's effectiveness.

Fourth, clean your teeth well and floss regularly to eliminate any remaining bacteria from your mouth.

Home remedies for a toothache might work, but it's best to see a dentist if the pain persists for more than a few days. Doing so helps protect against long-term damage to your teeth. It also allows dentists to identify the root of your discomfort and provide an effective treatment plan.
The penis serves both excretory and reproductive functions in the human male. Sexual activity, also known as coitus or copulation, is the reproductive act in which the male reproductive organ (in humans as well as other higher animals) enters the female reproductive tract. If the reproductive act is complete, sperm cells are passed from the male body into the female, in the process fertilizing the female egg and forming a new organism.
Medical Weight Loss Program FAQ Answers

Is Doctor Aron a medical doctor or a nutritionist?

Doctor Aron is a licensed physician in Internal Medicine and Bariatrics (a Medical Weight Loss specialist) and a member of the Obesity Medicine Association (formerly named the American Society of Bariatric Physicians).

Why is your program different than others? Why choose WeightLossNYC™?

When it comes to weight loss, safety is the biggest concern. At our center, you are examined and monitored by our highly trained physician during every visit. Decisions regarding your care and progress are made with the expert knowledge of a medical professional. We are dedicated to your safety and long-term success.

WeightLossNYC™ is not your typical weight loss center: you will soon come to realize there are many attributes that differentiate us from other weight loss programs. We don't believe in a cookie-cutter approach. We care for patients, not customers. Each patient is examined by a bariatric (weight loss) physician. Each visit you will work directly with Dr Aron, a medical specialist extensively trained in obesity management. Providing individual treatment sets our program apart from advertised commercial weight loss group programs. Additionally, as a medical weight loss program, we do ongoing evaluations of your health status and decrease your medications for weight-related diseases as your health improves. We can also prescribe weight control medications, such as appetite suppressants and others, to assist with your weight loss efforts.

What is the program?

There is no exact answer for this question, since every individual's circumstances are unique and therefore so will be their program. Everyone has different reasons why they seek our assistance, the factors that contributed to their weight gain, the effects the excess weight has had on their medical and emotional health, and the type of assistance they are seeking to help lose weight.
We also realize that the easiest program to follow is the one that has been created especially for you. Therefore there is no single weight loss structure that defines our treatment protocol. Dr Aron uses evidence-based, up-to-date medical interventions to help you gain control of your weight. Dr. Aron will also discuss an exercise program with you.

There are a variety of dietary programs; some of the diets use grocery store foods, others use food supplements purchased at the center, and some use a combination of the two. All programs have been designed by leading experts in the field of obesity treatment, are clinically proven to be effective, and are modified to suit each patient's individual needs.

The only common element in all our weight loss programs is building a foundation of behavioral modification that will ultimately translate into lifestyle changes to ensure long-term weight control. If you intend on maintaining your weight loss, the first step is to understand that weight control is much more than going on a diet. All things considered, mastering weight control depends on embracing and implementing just a few basic principles for behavior change. From the very first day of treatment, we begin working on strategies to empower you and help you "own" these basic principles for behavior change in order to achieve lifelong success with weight management.

Will I need prescription medication?

There is a wide range of appetite-curbing medication designed to take the edge off your hunger. When medically indicated, and as part of a carefully monitored program, Dr Aron will select the right prescription for you. Dr. Aron uses appetite suppressants to decrease your desire for food and also boost your metabolism. We do everything we can to help you to lose weight without being hungry!

Is it safe?

Yes. Our program follows clear, clinically proven guidelines. Dr.
Aron's weight loss program is not magic; it's nutritional science that really works, providing your body with healthy nutrients and a customized program that accommodates your activity level and medical history. A blood test also alerts us to any pre-existing medical conditions, such as diabetes or high cholesterol. A physician-supervised medical weight loss program may be the safest and wisest way to lose weight and maintain the loss. Overweight and obesity are frequently accompanied by other medical conditions which might go undetected and untreated in a non-medical weight loss program.

What is the average weight I can expect to lose on your program?

The methods we use are clinically proven and recommended by the Obesity Medicine Association, whose guidelines we follow: "Women generally lose 3 to 3-1/2 pounds per week and men lose 4 to 5 pounds per week … These average losses are 2-3 times greater than those resulting from conventional calorie-reducing diets used for the same time period." [1,2,3]

1. American Society of Bariatric Physicians™, Use of VLCDs in the Treatment of Obesity, ASBP Approved Position Statement, Nov 2010
2. Very low-calorie diets. National Task Force on the Prevention and Treatment of Obesity, National Institutes of Health. JAMA. Aug 25 1993;270(8):967-974.
3. Wadden TA. Treatment of obesity by moderate and severe caloric restriction. Results of clinical research trials. Ann Intern Med. Oct 1 1993;119(7 Pt 2):688-693.

Very rapid weight loss can be achieved through a physician-supervised weight loss program, called a Very Low Calorie Diet (VLCD) or Low Calorie Diet (LCD). Our program offers such aggressive weight loss programs. These programs are easy to follow and safe when medically supervised. Each program can obtain different results and individual weight loss results vary.
For other, less aggressive weight loss programs offered through our clinic, each individual's weight loss progress is different. Since every person's needs and preferences are unique, programs are set up with the patient that best fit his/her lifestyle and dietary needs. There are a variety of dietary programs; some of the diets use grocery store foods, others use food supplements in combination with grocery store foods.

Why do I need to be under a doctor's care in order to lose weight? Can't I just go on one of the many diets I hear and read about that seem to produce such remarkable results?

It is true that many published diets, especially those of the "crash" variety, result in rapid, temporary weight loss. We emphasize the word temporary. Few people can tolerate for long the boredom of foods eaten on such programs and, if they could, serious damage to their health from nutritional deficiency might result. Even a return to previous eating habits, after being on such diets, will often bring the weight right back to the original point or higher. Our program addresses the structures of eating and habits as well as medical factors of obesity.

Why is your success rate so high in keeping weight off, once it has been reduced?

This is due to the basic thoroughness of the initial medical examination in diagnosing the underlying factors of the patient's weight problem. The guidelines in nutrition and exercise that the patient receives bring more conscious thought to bear on good health and proper eating habits. As the program progresses, you will feel so much better in every way that you will have less inclination to relapse into your old eating habits. Physiologically, no two human bodies are alike. What is a proper diet and exercise for one person is not necessarily so for another. The Bariatric Weight Loss Program concentrates on the specific needs of each individual patient.
The bariatric physician, unlike most other medical doctors, has received special training in nutrition as it relates to problems of the overweight. It is in these ways that the bariatric weight loss program differs from most weight control programs and ensures our high success rate.

Do I have to go to meetings?

Structured dieting is only one component of the program. We provide private, one-on-one medical weight loss guidance. Our team is your personal weight-loss and wellness team until you reach your goal weight. As a weight loss team we will offer encouragement and advice and monitor your progress. When you have reached your goal weight, Dr. Aron will direct you towards structures for maintaining your weight loss.

Is your program effective?

Not only is our program effective, it is clinically proven through research. You will lose weight on our program safely and consistently.

I've wanted to lose weight for a long time. How can you help motivate me to really do it?

Our exclusive Behavior Education Program and one-on-one counseling have helped thousands of people do what they thought they would never do: lose weight and keep it off. Each week you'll learn new ways of thinking and behaving while developing healthy weight loss attitudes and skills. We strive to make sure that you'll not need to join a weight loss program again.

Is your program right for me?

It is if you want to lose weight. Results of 10-20 lbs per month may be achieved, though they vary depending on your starting weight, which program you choose to follow, and other factors. After you lose weight by following our nutritional program, we stay on the job, with individual counseling and monitoring, to help you keep it off.

Do I have to be extremely overweight to consider your weight loss program? What if I only need to lose 15-20 pounds?

Our weight loss programs are not simply for people who are extremely overweight, but are for anyone who is unhappy or feels unhealthy due to their weight.
We will tell you if you shouldn't, or don't need to, lose weight, or if you are not the right candidate for one of our weight loss programs for any reason. Our goal is your overall health.

How long do I have to stay on this weight loss program?

The length of time you may stay on the program depends on how much weight you wish to lose. As you know, weight loss is not about a "quick fix!" We will work with you to help you to achieve ongoing success. You determine your participation in the program. Your target weight is your goal. What we provide is the guidance and tools to meet your needs, a medically proven structure and a means to achieve your goals.

What does Bariatric mean?

Bariatrics is a branch of medicine that deals specifically with problems of obesity, or being overweight. It is a very specialized field, and only about 1% of the medical doctors in this country are qualified members of the American Society of Bariatric Physicians. Since obesity is a disease, it makes sense to treat it as one. In 1985, the National Institutes of Health, at its Health Consensus Development Conference on the Health Implications of Obesity, stated that obesity is a specific disease entity that should be treated and monitored medically by a trained physician.

Bariatric physicians, or bariatricians, are medical doctors who specialize in the treatment of overweight and obese patients and related medical conditions. These licensed physicians have received special training in Bariatric Medicine: the art and science of medical weight management. Bariatricians treat overweight and obese patients with a comprehensive program of diet and nutrition, exercise, behavioral therapy and, when necessary, the prescription of appetite suppressants and other appropriate medications. (The word bariatric stems from the Greek root baro, meaning heavy or large.)

What is body composition analysis (BCA)?

Body composition is the amount of water, lean body mass and fat in the human body.
The true definition of obesity is based on the percentage of body fat. Standard height and weight charts are inadequate for assessing a patient's percent body fat. BCA can determine how much of your body weight is fat mass, and therefore what your percent body fat is. Our highly innovative scale figures this number using bioelectrical impedance, which measures the conductivity of different body compartments using very small electrical currents. Differences in water content within fat tissue compared to lean body tissue affect how the current is conducted. Our goal is to help you have good quality weight loss, which means loss of body fat mass only. Rapid weight loss can result in sacrificing too much lean body mass (which may ultimately slow down your metabolism, making the weight loss harder to sustain long-term). We want to help you lose weight rapidly, but not at the expense of losing lean body mass. This is why we measure your body composition at each and every visit, so we can monitor the quality of your weight loss.

If you were trying to lose weight on your own and were just using a conventional scale, you may start to lose motivation if you aren't seeing the scale move fast enough. Weight loss progress, as measured on a conventional scale, can sometimes be erratic and unpredictable due to water fluctuations (especially in women who are still menstruating). This phenomenon can be unsettling if it is unexpected, especially if you have been following a program religiously and don't see the anticipated loss of pounds on the scale. The benefit of having your body composition analyzed at each visit is that you will know when your scale isn't moving simply due to water retention. In actuality, you may still be losing fat mass, which means you are still improving your body composition and making progress with your weight loss. The method that we use to obtain this information is called electrical bioimpedance. It is a non-invasive and accurate test.
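The percent-body-fat figure described above is simple arithmetic on the two numbers the scale reports. As a hypothetical illustration (the class name and the weights below are invented, not patient data):

```java
public class BodyComposition {
    // Percent body fat = fat mass / total body weight * 100.
    static double percentFat(double totalWeight, double fatMass) {
        return 100.0 * fatMass / totalWeight;
    }

    public static void main(String[] args) {
        // Hypothetical reading: 180 lb total, 45 lb of fat mass -> 25% body fat.
        System.out.println(percentFat(180.0, 45.0)); // prints 25.0
    }
}
```

Tracking this ratio rather than raw scale weight is what lets water-retention noise be separated from real fat loss.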
It does require the following preparation: no alcohol consumption within 24 hours prior to your first visit; no exercise, caffeine or food within 4 hours prior to taking the test; and drink 2 to 4 glasses of water 2 hours prior to taking the test.

I have tried many different diet pills in the past, including carb blockers. What is the best diet pill on the market today?

There are many diet scams out there that claim "quick and easy weight loss"... most of them don't work, and in addition they can contain harmful chemicals and mixtures of substances that could put you at serious health risk. In addition to OTC diet pills there are countless other pills which claim to have a weight loss effect that are total hype. Don't be fooled by products claiming to be "All Natural": herbal formulas and medicines are not regulated by the FDA and may not even contain what the labels say they do. They can also have too much of a vitamin, mineral or herb, or a dangerous combination of all three (that can be toxic to your system). More often they claim to "work miracles" that are just flat-out untrue. There are no "magical" weight loss pills that will make you lose weight. Even if you lose weight initially, you will gain it back, and even more, because sustained weight loss requires a complex approach and expert knowledge of obesity and weight loss. You can read many more such warnings on our blog, via the FDA tag.

Will I have to change my lifestyle?

Research shows that lifestyle changes are necessary for both weight loss AND maintenance of weight loss. We at New York Medical Weight Loss Center want you to not only take the weight off, we want you to keep it off, so yes, some lifestyle changes will need to happen. Part of your initial visit with Dr. Aron will include a thorough lifestyle assessment. She will discuss with you ways to make small changes that fit in with your busy lifestyle.

I need to lose weight quickly in preparation for an operation. Can you help?

Yes.
For those who require rapid weight loss, we offer a Very Low Calorie Diet which is physician-supervised and closely monitored.

Is your weight loss program safe if I have diabetes or any other medical conditions?

Not only are our programs safe for people with diabetes, hypertension, and other medical problems - they are helpful in controlling any further complications that result from these diseases. More than this, our weight loss programs can help alleviate many of the other conditions that result from obesity, including arthritis, coronary artery disease, sleep apnea, persistent lymphedema, lower extremity edema, hypertension, and high cholesterol.

Do I need a referral?

We are a direct access facility. One of your rights in the state of NY is to be able to see medical providers without a referral. Our program is self-pay, though you may have additional options available to you.

What if I cannot exercise at this time?

You can still lose nearly as much weight. In the rapid weight loss phase, the diet and medication are the most important factors. After you lose your weight and you are trying to maintain it, then some type of exercise is very important.

Should I supplement your program with vitamins?

During your initial visit and all subsequent follow-up visits you will receive a 2-week supply of the vitamins necessary for healthy weight loss. Vitamins are included in the price of the visit. In some cases Dr. Aron may also suggest an additional vitamin supplement program to fit your individual needs. Swelling, water retention, moodiness, dry skin, anemia and hormonal imbalances can be traced back to a vitamin deficiency.

Will I gain it back once I've lost the weight?

Our maintenance program provides you with all the knowledge and guidelines to keep the weight off for life. It is designed especially for those who have reached a goal and want to maintain success. We'll remain your support system as long as you need us.

Who comes to Dr Aron's weight loss center?
People of all ages come when they need professional help with weight issues, particularly those requiring a change in lifestyle. Many come because they primarily want to lose weight, from just a few pounds to several hundred. Others come because they recognize the importance of normal weight, nutrition, activity and behavioral change in the management of medical problems such as coronary disease, hypertension, diabetes or the metabolic syndrome.

Call for Your First Appointment

Contact Information
Phone: 718-491-5525
Address: New York Medical Weight Loss Center, 7032 4th Avenue, Brooklyn NY 11209

Call Now to Schedule Appointment
Start Losing Weight, Today, at Weight Loss NYC
Picking
From JPCT
Revision as of 21:15, 11 August 2014 by Admin

Picking

Picking in jPCT can be implemented in two ways. One of these ways is usually a bit faster but doesn't work for compiled objects, while the other one works for all objects but might be a bit slower.

The recommended way

This works with all renderers and all objects, but depending on the scene it might be a bit slower. It's the only one that is available in jPCT-AE. Unlike the former approach, this is actually a kind of collision detection, which is why it triggers collision events too. Just like above, you have to make your objects pickable. However, because it's a collision detection, this works differently. Instead of using setSelectable(...), you have to use setCollisionMode(...). For example:

obj.setCollisionMode(Object3D.COLLISION_CHECK_OTHERS);

Like above, you need your 2D picking coordinates. With them, you need a direction vector in world space. This is simple:

SimpleVector dir=Interact2D.reproject2D3DWS(camera, frameBuffer, x, y).normalize();

Armed with this vector, you can now go to World and do

Object[] res=world.calcMinDistanceAndObject3D(camera.getPosition(), dir, 10000 /*or whatever*/);

The result is an Object[]-array with the Float-distance to the picked object in the first slot and the picked Object3D in the second. If nothing has been hit, the result will be [COLLISION_NONE, null].
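jPCT's Interact2D.reproject2D3DWS does the 2D-to-3D reprojection for you. Purely to illustrate the underlying math (assuming a simple pinhole camera with a given vertical field of view; this is not jPCT's actual implementation, and the class/method names are invented), a pixel can be turned into a normalized camera-space direction like this:

```java
public class PickRay {
    // Returns a normalized camera-space direction (x right, y up, z forward)
    // for the pixel (px, py), given screen size and vertical field of view.
    static double[] rayDir(int px, int py, int width, int height, double fovY) {
        double aspect = (double) width / height;
        double tanHalf = Math.tan(fovY / 2.0);
        // Map pixel center to [-1, 1] range, then scale by the frustum extents.
        double x = (2.0 * (px + 0.5) / width - 1.0) * tanHalf * aspect;
        double y = (1.0 - 2.0 * (py + 0.5) / height) * tanHalf;
        double len = Math.sqrt(x * x + y * y + 1.0);
        return new double[] { x / len, y / len, 1.0 / len };
    }

    public static void main(String[] args) {
        // The center of a 640x480 screen yields a ray pointing almost straight ahead.
        double[] d = rayDir(320, 240, 640, 480, Math.toRadians(60));
        System.out.printf("%.4f %.4f %.4f%n", d[0], d[1], d[2]);
    }
}
```

The resulting unit vector plays the same role as the dir vector in the snippet above: it is what you hand to the collision query together with the camera position.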
An example

Here is an example that uses the software renderer to make an object follow the mouse:

import java.awt.Color;
import java.awt.Graphics;
import java.awt.event.MouseEvent;
import java.awt.event.MouseListener;
import java.awt.event.MouseMotionListener;
import javax.swing.JFrame;
import com.threed.jpct.*;
import com.threed.jpct.util.Light;

public class MouseFollowDemo extends JFrame implements MouseMotionListener, MouseListener {

    private static final long serialVersionUID = 1L;
    private Graphics g = null;
    private FrameBuffer fb = null;
    private World world = null;
    private Object3D plane = null;
    private Object3D ramp = null;
    private Object3D player = null;
    private Object3D cube2 = null;
    private Object3D sphere = null;
    private int mouseX = 320;
    private int mouseY = 240;

    public MouseFollowDemo() {
        setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        pack();
        setSize(640, 480);
        setResizable(false);
        setLocationRelativeTo(null);
        setVisible(true);
        addMouseMotionListener(this);
        addMouseListener(this);
        g = getGraphics();
    }

    @Override
    public void mouseMoved(MouseEvent m) {
        mouseX = m.getX();
        mouseY = m.getY();
    }

    @Override
    public void mouseDragged(MouseEvent m) {
        //
    }

    @Override
    public void mouseClicked(MouseEvent e) {
        //
    }

    @Override
    public void mouseEntered(MouseEvent e) {
        //
    }

    @Override
    public void mouseExited(MouseEvent e) {
        //
    }

    @Override
    public void mousePressed(MouseEvent e) {
        //
    }

    @Override
    public void mouseReleased(MouseEvent e) {
        //
    }

    private void initStuff() {
        fb = new FrameBuffer(640, 480, FrameBuffer.SAMPLINGMODE_NORMAL);
        world = new World();
        fb.enableRenderer(IRenderer.RENDERER_SOFTWARE);

        ramp = Primitives.getCube(20);
        ramp.setAdditionalColor(Color.RED);
        plane = Primitives.getPlane(20, 10);
        plane.setAdditionalColor(Color.GREEN);
        sphere = Primitives.getSphere(30);
        sphere.setAdditionalColor(Color.CYAN);
        sphere.translate(-50, 10, 50);
        cube2 = Primitives.getCube(20);
        cube2.setAdditionalColor(Color.ORANGE);
        cube2.translate(60, -20, 60);

        plane.rotateX((float) Math.PI / 2f);
        ramp.rotateX((float) Math.PI / 2f);

        player = Primitives.getCone(3);
        player.rotateX((float) Math.PI / 2f);
        player.rotateMesh();
        player.clearRotation();

        plane.setCollisionMode(Object3D.COLLISION_CHECK_OTHERS);
        ramp.setCollisionMode(Object3D.COLLISION_CHECK_OTHERS);
        sphere.setCollisionMode(Object3D.COLLISION_CHECK_OTHERS);
        cube2.setCollisionMode(Object3D.COLLISION_CHECK_OTHERS);
        cube2.setBillboarding(true);

        world.addObject(plane);
        world.addObject(ramp);
        world.addObject(sphere);
        world.addObject(cube2);
        world.addObject(player);

        player.translate(-50, -10, -50);

        Light light = new Light(world);
        light.setPosition(new SimpleVector(0, -80, 0));
        light.setIntensity(40, 25, 22);
        world.setAmbientLight(10, 10, 10);

        world.buildAllObjects();
    }

    private void relocate() {
        SimpleVector pos = getWorldPosition();
        if (pos != null) {
            player.clearTranslation();
            player.translate(pos);
        }
    }

    private SimpleVector getWorldPosition() {
        SimpleVector pos = null;
        SimpleVector ray = Interact2D.reproject2D3DWS(world.getCamera(), fb, mouseX, mouseY);
        if (ray != null) {
            SimpleVector norm = ray.normalize(); // Just to be sure...
            float f = world.calcMinDistance(world.getCamera().getPosition(), norm, 1000);
            if (f != Object3D.COLLISION_NONE) {
                SimpleVector offset = new SimpleVector(norm);
                norm.scalarMul(f);
                norm = norm.calcSub(offset);
                pos = new SimpleVector(norm);
                pos.add(world.getCamera().getPosition());
            }
        }
        return pos;
    }

    private void doIt() throws Exception {
        Camera cam = world.getCamera();
        cam.moveCamera(Camera.CAMERA_MOVEOUT, 100);
        cam.moveCamera(Camera.CAMERA_MOVEUP, 160);
        cam.lookAt(plane.getTransformedCenter());

        while (true) {
            relocate();
            fb.clear();
            world.renderScene(fb);
            world.draw(fb);
            fb.update();
            fb.display(g);
            Thread.sleep(10);
        }
    }

    public static void main(String[] args) throws Exception {
        MouseFollowDemo cd = new MouseFollowDemo();
        cd.initStuff();
        cd.doIt();
    }
}

The old fashioned way

As said, this doesn't work on compiled objects.
If you are using the software renderer only, or the hardware renderer in hybrid mode (i.e. without compiled objects), it's safe to use, though.

To use this way, you have to make the objects in question "selectable". You can do this by calling

obj.setSelectable(Object3D.MOUSE_SELECTABLE);

on your objects. Then, you render the scene and get your picking coordinates from your input device in screen space. Most likely mouse coordinates or similar. Then you do this:

SimpleVector dir = Interact2D.reproject2D3D(camera, frameBuffer, x, y).normalize();
int[] res = Interact2D.pickPolygon(world.getVisibilityList(), dir);

In res, you'll find the object number and the polygon number (in that order), if the picking actually picked something. If not, res is null. Your picked Object3D is now

Object3D picked = world.getObject(res[0]);

Please note that there are two variants of the pickPolygon methods. The simple one (see above) makes unselectable objects act as a block to the picking ray, i.e. even if an object isn't selectable, it will still block the ray so that no object behind that object can be picked.
What is a Contact Breaker?

T. L. Childree

A contact breaker is an electrical device generally used in the ignition system of an internal combustion engine. Contact breakers are sometimes referred to as "points" and are typically used to temporarily interrupt the electrical current passing through an ignition coil. This device is often utilized in combination with an electrical capacitor and is usually located in the distributor component of an engine. Contact breakers generally require frequent readjustments to perform properly, and their use has declined in recent years. Most modern internal combustion engines utilize an electronic ignition system that does not contain a contact breaker.

Older internal combustion engines typically have an ignition system consisting of a battery, distributor, ignition coil, and spark plugs. The ignition coil consists of a shared magnetic core surrounded by two sets of copper transformer windings. The primary set of windings creates a magnetic field in the shared core. The secondary windings create a step-up transformer which produces the high voltage electrical current needed for the spark plugs to ignite the engine's fuel.

A contact breaker is used both to conduct and interrupt the flow of electricity to the ignition coil. During the ignition procedure, electrical current from the battery passes through the contact breaker en route to the ignition coil, distributor, and spark plugs. Inside the distributor is a rotating cam that opens and closes the contact breaker. When the breaker is closed, a short burst of electricity is sent to the ignition coil. When the breaker is opened, the electrical current is suddenly stopped, and a large amount of electricity is built up in the secondary winding of the ignition coil and sent to the spark plug. This process is repeated sequentially for each combustion cylinder of the engine.
As the breaker's contact points begin to separate, the small gap between them allows an electrical arc to occur. This arc can cause the breaker's contact points to become damaged over a short period of time. To reduce the damage to the breaker, an electrical capacitor is often connected across the breaker's contact points to suppress the arcing and increase the output of the ignition coil. Contact breakers have a tendency to become misaligned during use and often require readjustment between regular service intervals.

The use of contact breakers in ignition systems has been greatly reduced in recent years. Electronic ignition systems utilizing magnetic or optical sensing devices are now commonplace in most engines. These sensing devices have proven to be more precise and offer better high speed engine operation. Contact breaker ignition systems continue to be used in aircraft engines, however, because they are not as prone to sudden catastrophic failures as electronic sensors.

Discussion Comments

Soulfox

@Melonlity -- swapping out a "points" starter for electronic ignition is expensive and may not be worth the money. Sure, they have to be maintained regularly, but a yearly tuneup is about all one needs to do to keep them functioning as they should. Electronic ignition is an improvement, but points worked reliably for decades and aren't as bad as some people claim.

Melonlity

You'll not find points as standard equipment in many internal combustion engines that have been made in the past 40 years. Electronic ignition systems became common in the 1970s as points were, as the article points out, just too unreliable. Here's something else -- one of the more popular upgrades for classic vehicles is swapping out the old "points" starters for electronic ignition systems.
Some purists may have a problem with such modifications, but folks who want to start up their cars and drive them every day without having to fuss with troublesome points regularly tend to like them.
[SciPy-dev] roots, poly, comparison operators, etc.

Pearu Peterson pearu at cens.ioc.ee
Sat Feb 16 07:20:58 CST 2002

Hi!

On Sat, 16 Feb 2002, eric wrote:

> Other things:
>
> * Array printing is far superior in Matlab and Octave -- they
> generally always look nice. We should clean up how arrays are output.
> Also, the "format long" and "format short", etc options for specifying
> how arrays are printed are pretty nice.

I agree. Maybe ipython could be exploited here? In fact, rounding of tiny numbers to zeros could be done only when printing (though, personally I wouldn't prefer that either, but I am just looking for a compromise), not inside calculation routines. In this way, no valuable information is lost when using these routines from other calculation routines, and computation will be even more efficient.

> * On Matlab/Octave, sort() sorts by magnitude of complex and then by
> angle. On the other hand ==, >, <, etc. seem to only compare the real
> part of complex numbers.
> These seem fine to me. I know they aren't mathematically correct, but
> they seem to be pragmatically correct. I'd like comments on these
> conventions and what others think.

There is no mathematically correct way to compare complex numbers; they just cannot be ordered in a unique and sensible way. However, in different applications, different conventions may be useful or reasonable for ordering complex numbers. Whatever the convention is, its mathematical correctness is irrelevant, and this cannot be used as an argument for preferring one convention to another. I would propose providing a number of efficient comparison methods for complex (or any) numbers that users may use in sort functions as an optional argument.
For example,

  scipy.sort([2,1+2j],cmpmth='abs')  -> [1+2j,2]   # sorts by abs value
  scipy.sort([2,1+2j],cmpmth='real') -> [2,1+2j]   # sorts by real part
  scipy.sort([2,1+2j],cmpmth='realimag')           # sorts by real, then by imag
  scipy.sort([2,1+2j],cmpmth='imagreal')           # sorts by imag, then by real
  scipy.sort([2,1+2j],cmpmth='absangle')           # sorts by abs, then by angle

etc., and

  scipy.sort([2,1+2j],cmpfunc=<user defined comparison function>)

Note that

  scipy.sort([-1,1],cmpmth='absangle') -> [1,-1]

which also demonstrates the arbitrariness of sorting complex numbers.

Btw, why do you want to sort the output of roots()? As far as I know, there is no order defined for the roots of polynomials. Maybe an optional flag could be provided here?

> * Comparison of IEEE floats is hairy. It looks to me like Matlab and
> Octave have chosen to limit precision to 15 digits. This seems like a
> reasonable thing to do, for SciPy also, but we'd currently have to
> limit to 14 digits to deal with problems of imprecise LAPACK routines.
> Pearu and Fernando are arguing against this, but maybe they wouldn't
> mind a 15 digit limit for double and 6 for float. We'd have to modify
> Numeric to do this internally on comparisons. There could be a flag
> that enables exact comparisons.

I hope this issue is solved by fixing roots(), and the accuracy of the LAPACK routines can also be rehabilitated now. (I don't claim that their output is always accurate; floating point numbers just cannot be represented accurately in a computer's memory, in principle. It has nothing to do with the programs that manipulate these numbers; one can only improve the algorithms to minimize the computational errors.)
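Coming back to the proposal above, such named comparison methods can be sketched in a few lines of plain Python (csort and its keyword names are hypothetical here, not an existing scipy function):

```python
import cmath

# Hypothetical named comparison conventions -- none of them is
# "mathematically correct"; each is just one possible ordering.
CMP_KEYS = {
    "abs": lambda z: abs(z),
    "real": lambda z: (z.real,),
    "realimag": lambda z: (z.real, z.imag),
    "imagreal": lambda z: (z.imag, z.real),
    "absangle": lambda z: (abs(z), cmath.phase(z)),
}

def csort(seq, cmpmth="absangle", cmpfunc=None):
    """Sort (complex) numbers ascending by a named convention,
    or by a user supplied key function."""
    key = cmpfunc if cmpfunc is not None else CMP_KEYS[cmpmth]
    return sorted(seq, key=key)
```

With the ascending 'absangle' convention, csort([-1, 1]) gives [1, -1]: both numbers have the same magnitude, so the angle decides.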
The deep lesson to learn here is: fix the computation algorithm; do not fix the output of the computation.

Matlab and Octave are programs that are oriented to certain applications, namely engineering applications where linear algebra is very important. SciPy need not choose the same orientation as Matlab; however, it can certainly cover Matlab's orientations. Python is a general purpose language, and SciPy could also be a general purpose package for scientific computing (whatever the applications).

Regards,
	Pearu

More information about the Scipy-dev mailing list
The definitive guide to comprehensively monitoring your AI

AI teams across verticals vehemently agree that their data and models must be monitored in production. Yet, many teams struggle to define exactly what to monitor: specifically, what data to collect "at inference time", what metrics to track, and how to analyze these metrics.

The sheer variety and complexity of AI systems dictate that the "one size fits all" approach to monitoring does not work. Nevertheless, we are here to provide some clarity and discuss universally applicable approaches.

Having worked with a multitude of teams across verticals (and both deep learning and machine learning), we have been hearing a few consistent motivations, including:

• A desire to resolve issues much faster
• A strong need to move from "reactive" to "proactive" -- that is, to detect data and model issues well before the business KPIs are negatively impacted or customers complain

So, how should you track and analyze your AI?

1. Define model performance metrics

Attaining objective measures of success for production AI requires labels or "ground truth" for your inference data. A few cases in which this would be possible include:

• A human-in-the-loop mechanism, with annotators, customers, or 3rd parties labeling at least a sample of inference data. For example, a fraud detection system that receives lists of actual fraudulent transactions (after the fact).
• Business KPIs, which could provide a sort of "labeling". For example, for a search or recommendation model, you could track the clicks or conversions (tied back to each inference).

The latter, by the way, could lead to the holy grail of monitoring -- being able to assess precisely the impact (positive or negative) of the models on the business outcomes.

The availability of labels enables calculating and analyzing common model validation metrics, such as false positive/negative rates, error/loss functions, AUC/ROC, precision/recall, and so on.
It is important to note that the labels mentioned above may not generally be available at inference time. It could be seconds after the models run (e.g., a user clicking on a recommended ad), but also weeks after the models run (e.g., the merchant notifies the fraud system about real fraudulent transactions), before the "ground truth" feedback is available. Consequently, an AI monitoring system should enable updating labels (and other types of data) asynchronously.

A note about monitoring annotators

Needless to say, labeled data is only as good as the labeling process and the individual annotators labeling it. Forward-thinking AI teams leverage monitoring capabilities to assess their annotation process and annotators. How would you do that? One example would be to track the average delta between what your model is saying and what your annotator is saying. If this metric gets above a certain threshold, one can assume that either the model is grossly underperforming or the annotator is getting it wrong.

2. Establish granular behavioral metrics out of model outputs

Tracking model outputs is a must. From one angle, output behavior can indicate problems that are barely detectable by looking elsewhere (i.e., a model's high sensitivity might mean that a barely detectable change in inputs could really "throw off the model"). From another angle, there could be significant changes in input features that aren't impacting output behavior as much. Therefore, metrics based on outputs are priority number one within the monitoring scope.

Below are a few examples of metrics created from model outputs:

1. Basic statistical analysis of raw scores, e.g., weekly average and standard deviation of a fraud probability score
2. Confidence score/interval, e.g.:
   • The distance from a decision boundary (e.g., from the hyperplane in SVM models, or when using a simple threshold)
   • The delta between the chosen class and the second place in a multi-class classification model
3. In classification models, the distribution of the chosen classes
4. The rate of non-classifications (i.e., when none of your classes' scores passed your threshold)

Overall, anomalies in metrics created based on outputs tell the team that something is happening. To understand why, and whether and how to resolve what is happening, the team should include features and metadata in the monitoring scope. More on this below.

3. Track feature behavior individually and as a set

Tracking feature behavior serves two purposes:

1. To explain changes that were detected in output behavior
2. To detect issues in upstream stages (e.g., data ingestion and prep)

When issues are detected in output behavior, features might be called upon to explain why. The process of explaining the issues, in this case, may require a feature importance analysis, leveraging one of a host of prevalent methods, such as SHAP and LIME. Separately, tracking changes in feature behavior is another independent way to detect issues without looking at outputs.

So, which upstream events may manifest in anomalous feature behavior? There are too many to count. A few examples include:

• Changes in the business, such as the influx of new customers
• Changes in external data sources (e.g., new browsers, new devices)
• Changes introduced in preceding pipelines, e.g., a bug in the new release of the data ingestion code

For these reasons, collecting and analyzing feature behavior in production is a critical part of the monitoring scope.

4. Collect metadata to segment metric behavior

So far, we have covered categories of data to collect for the purpose of creating behavioral metrics. These metrics could be tracked and analyzed at the global level. However, to truly realize the value of monitoring, behavioral metrics have to be looked at for subpopulations and subsegments of model runs.
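As a minimal sketch of what segment-level analysis looks like in practice (the log, the segment names, and the click outcomes below are purely illustrative), consider computing the same metric both globally and per metadata segment:

```python
from collections import defaultdict

# Hypothetical inference log: (metadata segment, click outcome) per model run.
logs = [
    ("retiree", 0), ("retiree", 0),
    ("young_pro", 1), ("young_pro", 1), ("young_pro", 0),
]

def click_through_rate(outcomes):
    return sum(outcomes) / len(outcomes)

# The global metric can look fine on its own...
global_ctr = click_through_rate([clicked for _, clicked in logs])

# ...but slicing the same metric by metadata segment tells the real story.
by_segment = defaultdict(list)
for segment, clicked in logs:
    by_segment[segment].append(clicked)
segment_ctr = {seg: click_through_rate(o) for seg, o in by_segment.items()}
```

In this toy log, the global number alone would hide the fact that one segment gets no clicks at all -- which is exactly the kind of story the metadata surfaces.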
For example (somewhat trivially), an ad-serving model might perform consistently overall but provide gradually poorer recommendations for retirees (as measured by declining click-through rates), which are balanced by gradually better recommendations for young professionals (as proxied by increasing click-through rates). The AI team would want to understand the behavior of each subpopulation and take corrective actions as necessary.

The crucial enabler of segment-based analysis of the behavior is comprehensively collecting contextual metadata about the model runs. These contextual metadata often exist in the AI system, but don't contribute to features of the model.

Here are a couple of additional examples of the value in metadata-driven segmentation:

• Compliance assessment: A bank would like to ensure that its underwriting model is not biased towards (or against) specific genders or races. Gender and race are not model features, but they are nevertheless important dimensions along which to evaluate model metrics and ensure the model complies with lending regulations.
• Root cause analysis: A marketing team detects that there is a subpopulation of consumers for whom the recommendation model is less effective. Through metadata-driven segmentation, they're able to correlate these consumers with a specific device and browser, and upon further analysis they realize that there's a defect in the data ingestion process for this particular device and browser.

A note about model versions

Another prominent example of metadata that is helpful to track is the model version (and the versions of other components in the AI system). This enables correlating deteriorating behaviors with the actual changes made to the system.

5. Track data during training, test, and inference time

Monitoring comprehensively at inference time can yield immense benefits.
Nevertheless, for even deeper insights into the AI system, forward-thinking teams expand the monitoring scope to include training and test data. When models underperform at inference time, having the ability to compare the feature distribution for that segment of data with the corresponding distribution from when the model was trained can provide the best insight into the root cause of the change in behavior. If possible, we highly recommend tracking the same metadata fields discussed above when logging training runs as well. By doing so, teams can truly compare corresponding segments of the data and get to the source of issues faster and more accurately.

Summary

Evaluating the performance and behavior of complex AI systems in production is challenging. A comprehensive monitoring strategy could make a real difference.

In our experience, such a monitoring strategy includes defining model performance metrics (e.g., precision, AUC/ROC, and others) using data available at the inference stage or even later, establishing granular behavioral metrics of model outputs, tracking feature behavior individually and as a set, and collecting metadata which could assist in segmenting metric behavior.

It is advisable to expand the monitoring scope to the training and test stages to get the full picture of the state of the system and more quickly isolate the root causes of issues.

The best performing AI teams are already implementing similar monitoring strategies as an integral part of their AI lifecycle. These teams experience less anxiety about potential production issues, and better yet, are able to extend their research into production and dramatically improve their models over time.
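As a rough illustration of the training-vs-inference comparison described above, a first-pass drift check can flag features whose live mean has moved far from the training distribution (the feature names, values, and threshold here are illustrative, not from any specific tool):

```python
from statistics import mean, stdev

def drift_report(train, live, threshold=2.0):
    """Flag features whose live mean drifts more than `threshold`
    training standard deviations from the training mean."""
    report = {}
    for name in train:
        mu, sigma = mean(train[name]), stdev(train[name])
        shift = abs(mean(live[name]) - mu) / sigma if sigma else 0.0
        report[name] = {"z_shift": shift, "drifted": shift > threshold}
    return report

# Illustrative data: one stable feature, one clearly drifted feature.
train = {"age": [30, 35, 40, 45, 50], "amount": [10, 12, 11, 9, 13]}
live  = {"age": [32, 37, 41, 44, 49], "amount": [25, 27, 24, 26, 28]}
```

More rigorous approaches (e.g., two-sample statistical tests) exist, but even this simple z-style shift often catches gross upstream breakages.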
Change follow_operation schema to use type BooleanLike (#301)

Changes follow_operation schema to use BooleanLike instead of :boolean so that strings like "0" and "1" (used by mastodon.py) can be accepted. Rest of file uses the same.

For more info please see https://git.pleroma.social/pleroma/pleroma/-/issues/2999 (I'm also sending this here as I'm not hopeful about upstream not ignoring it)

Co-authored-by: ave <[email protected]>
Reviewed-on: #301
Co-committed-by: ave <[email protected]>

Authored by ave, committed by floatingghost (commit 1c4ca20ff7, parent 4a82f19ce6)

@@ -4,6 +4,11 @@
 All notable changes to this project will be documented in this file.

 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).

+## Unreleased
+
+## Changed
+
+- MastoAPI: Accept BooleanLike input on `/api/v1/accounts/:id/follow` (fixes follows with mastodon.py)
+
 ## 2022.11

 ## Added

@@ -12,7 +17,7 @@
 - Scraping of nodeinfo from remote instances to display instance info
 - `requested_by` in relationships when the user has requested to follow you

-## Changes
+## Changed

 - Follows no longer override domain blocks, a domain block is final
 - Deletes are now the lowest priority to publish and will be handled after creates
 - Domain blocks are now subdomain-matches by default

@@ -223,12 +223,12 @@ defmodule Pleroma.Web.ApiSpec.AccountOperation do
       type: :object,
       properties: %{
         reblogs: %Schema{
-          type: :boolean,
+          allOf: [BooleanLike],
           description: "Receive this account's reblogs in home timeline? Defaults to true.",
           default: true
         },
         notify: %Schema{
-          type: :boolean,
+          allOf: [BooleanLike],
           description:
             "Receive notifications for all statuses posted by the account? Defaults to false.",
           default: false

@@ -902,6 +902,12 @@ defmodule Pleroma.Web.MastodonAPI.AccountControllerTest do
        |> post("/api/v1/accounts/#{followed.id}/follow", %{reblogs: true})
        |> json_response_and_validate_schema(200)

+      assert %{"showing_reblogs" => true} =
+               conn
+               |> put_req_header("content-type", "application/json")
+               |> post("/api/v1/accounts/#{followed.id}/follow", %{reblogs: "1"})
+               |> json_response_and_validate_schema(200)
+
       assert [%{"id" => ^reblog_id}] =
                conn
                |> get("/api/v1/timelines/home")

@@ -931,6 +937,12 @@ defmodule Pleroma.Web.MastodonAPI.AccountControllerTest do
        |> post("/api/v1/accounts/#{followed.id}/follow", %{reblogs: false})
        |> json_response_and_validate_schema(200)

+      assert %{"showing_reblogs" => false} =
+               conn
+               |> put_req_header("content-type", "application/json")
+               |> post("/api/v1/accounts/#{followed.id}/follow", %{reblogs: "0"})
+               |> json_response_and_validate_schema(200)
+
       assert [] ==
                conn
                |> get("/api/v1/timelines/home")

@@ -941,21 +953,23 @@ defmodule Pleroma.Web.MastodonAPI.AccountControllerTest do
       %{conn: conn} = oauth_access(["follow"])
       followed = insert(:user)

-      ret_conn =
-        conn
-        |> put_req_header("content-type", "application/json")
-        |> post("/api/v1/accounts/#{followed.id}/follow", %{notify: true})
-
-      assert %{"id" => _id, "subscribing" => true} =
-               json_response_and_validate_schema(ret_conn, 200)
+      assert %{"subscribing" => true} =
+               conn
+               |> put_req_header("content-type", "application/json")
+               |> post("/api/v1/accounts/#{followed.id}/follow", %{notify: true})
+               |> json_response_and_validate_schema(200)

-      ret_conn =
-        conn
-        |> put_req_header("content-type", "application/json")
-        |> post("/api/v1/accounts/#{followed.id}/follow", %{notify: false})
+      assert %{"subscribing" => true} =
+               conn
+               |> put_req_header("content-type", "application/json")
+               |> post("/api/v1/accounts/#{followed.id}/follow", %{notify: "1"})
+               |> json_response_and_validate_schema(200)

-      assert %{"id" => _id, "subscribing" => false} =
-               json_response_and_validate_schema(ret_conn, 200)
+      assert %{"subscribing" => false} =
+               conn
+               |> put_req_header("content-type", "application/json")
+               |> post("/api/v1/accounts/#{followed.id}/follow", %{notify: false})
+               |> json_response_and_validate_schema(200)
     end

     test "following / unfollowing errors", %{user: user, conn: conn} do
Spring 2024 Benthic Cover Analysis of Fish Pond

SURVEY DATES: 05/19/2024 - 05/21/2024
SURVEY LOCATION: Paea Lagoon Aua i’a Fish Pond
AUTHORS: Jessie Segnitz, Uma Pant, Nicole Pianalto, and S. Tara Grover

INTRODUCTION:

In this survey, we observed the substrate inside, on top of, and outside the Paea Lagoon Aua i’a Fish Pond. The fish pond is a traditional Polynesian practice of small-scale fish collection that fell into disuse over time due to the effects of European colonization, commercial fishing, and globalization. We are building upon the previous studies done by Wildlands students in 2022 and 2023 in order to meet the needs of our client, a private individual interested in species and land conservation who owns a marine observatory on Tahiti. The objective of our study is to determine the sea floor substrate and algae cover on the inside and outside of the fishpond, and on the top of the rock wall itself. Further objectives of the research team were, first, to establish measurement definitions for the average particle sizes of both "fine" and "coarse" sand to set a standard for future use, and second, to evaluate the presence of patterns of fine and coarse sand areas on the seafloor. There is a current along the shore running south to north through the fishpond, and our client is specifically interested in the influence of this current and how the rock wall may act as a sieve or filter, changing the concentration of the sand types on the different sides of the wall. Earlier studies concluded that there was no statistically significant difference in substrate coverages outside and inside the fishpond; however, they did not account for the difference between fine and coarse sand, which is the knowledge gap we addressed with our survey. We hypothesized that there would be a significant difference in the coverage of the different sand types on the south and north sides of the wall.
We further hypothesized that there would be a difference in overall substrate coverage inside and outside of the fish pond walls because of the barrier effects of the wall on the current's ability to move and transport substrate types through the area.

METHODS:

FIRST SURVEY — INSIDE FISH POND

We used a 50 x 50 cm quadrat to survey percent coverage of different substrate and algae types. We used a random number generator to generate 10 coordinates within the size of the fish pond, which is roughly 15 x 15 meters. Using the bottom right corner of the fish pond (northern corner) as our (0,0) origin point, we used a transect to measure out the predetermined coordinate points to the south along the shore (x-axis) and out into the water (y-axis), and placed the bottom right corner of the quadrat at each point. Our randomly-generated coordinates were (8, 12), (5, 1), (2, 7), (10, 9), (14, 3), (9, 11), (6, 12), (7, 9), (12, 9), (13, 2). For this survey as well as Survey #2, we evaluated percent coverage of the following categories: fine sand (with the majority under 1 mm length of grain on average), coarse sand (over 1 mm length of grain on average), bare rubble (chunks of substrate between 2.5-10 cm, including stone, dead coral, shells), bare rock (over 10 cm with no algae cover), and five types of algae. These types were turf algae (under 1 cm) and the macroalgae genera Halimeda, Padina, Turbinaria, and Dictyota.
SECOND SURVEY —OUTSIDE FISH POND Starting from the north fish pond edge at the point closest to the shore, we measured out 5 meters parallel to the shore. That would be our starting point of our survey line of 15 meters to the end of the fish pond walls. We did systematic sampling, so every 5 meters starting from 0 meters on the transect we would sample using a 50 x50 cm quadrat, putting the quadrat on the left side of the transect with the bottom right corner at the starting point. We did this for the north, west and south walls of the fish pond. We evaluated percent coverage of the same categories and methods as for Survey #1 (above). THIRD, FOURTH, AND FIFTH SURVEY — SEDIMENT TRANSECT In this survey we used line intercept sampling by using a transect adjacent to the interior and exterior wall, which we identified as the area on the seafloor closest to the rock wall that did not include any large rocks that made up the foundation of the wall. We evaluated percent sediment coverage of the following categories: fine sand (under 1mm length of grain on average), coarse sand (over 1mm length of grain on average), rubble (2.5-10 cm), rock (greater than 10 cm), and alive coral, all regardless of any algae cover on top. By looking at what sediment lay directly underneath the transect line we classified what sediment was present and the length of the section it created, for the entire 14.5 meters. We conducted this survey for the north, west and south side walls of the fishpond on both the inside and outer side of walls. SIXTH SURVEY—- ON TOP OF WALL For this survey we used systematic sampling using the 50 x50 cm quadrat and a transect laid out from the start of the rock wall for all three walls. We placed the bottom of the quadrant at 0m, 5, and 10 meters for each wall. The quadrat was placed in the very center of the wall, and at all survey points, the wall was thicker in width than 50 cm so the quadrat consisted completely of substrate from the wall itself. 
We looked down from an aerial view and measured the same substrate and algae types as previously mentioned in the other surveys. We repeated this method for all three walls for a total of 9 survey points.

SEVENTH SURVEY — SAND MEASUREMENT

For this survey a team of two took samples of fine and coarse sand from inside the fishpond, scooping only the surface layer of sediment. We collected approximately 20 mm of sand and water for each. The samples were chosen based on visual differentiation, where the fine sand was scooped from the left wall delta closest to shore inside the fishpond, and the coarse sand from the center of the fish pond. The sand was laid out on a paper towel and a randomization method was used to choose 50 grains of sand to measure from each sample. Calipers were used to measure the various particles in millimeters.

IMPROVEMENTS / CHANGES FROM PREVIOUS SURVEYS

We used a 50 x 50 centimeter quadrat instead of the previous group's 1 x 1 m. This allowed us to take more accurate and detailed data on the coverage within our survey areas while still surveying a large enough area to be generalizable to the entire pond. We know that algae species and concentration can change due to seasonal and climate patterns, so we did our own preliminary analysis of what types of macroalgae genera were present when we got in the water, and created our own list of them to measure instead of using the previous groups' list. This was the same as the previous groups' list except for one exclusion, Sargassum, which was not present at this time, and one new inclusion, Dictyota, which was present.
We continued the original method of strategic sampling for the outside fish pond sampling universe, and for the walltop itself, because we felt that the narrow range of these areas would be better represented by consistent sampling. A visual analysis of the general substrate cover of the entire survey area confirmed that we were not overlooking any patterns due to this sampling method. Surveying the wall top itself was also a new addition to this survey project that was not present in previous years. This allows us to set a baseline time zero (t=0) standard for wall composition which gives insight into the structural integrity of the stones based on how close together they are, and what substrate types are present among the cracks.. Another new addition was the specific line-intercept survey along the outer and inner walls that focused on determining patterns of sand dispersal through the walls. RESULTS: Please copy and paste the link below into your browser to view data sheets with graphical analysis https://docs.google.com/document/d/1296DmEP9zlzIvRDUheGtx5jx7ANnlSfT8JGAyNMvuSI/edit DISCUSSION: Survey one data illustrates that the substrate cover inside the fish pond is mostly coarse sand and a scattering of rubble across the area. While the dominant algae inside the fish pond was Halimeda. In the survey two data, it portrays that the sediment composition outside the fishpond was mostly made up of fine sand and some coarse sand, while the algae was mostly turf. The differential sediment composition inside and outside the pond demonstrates that the fish pond creates a different sediment environment, which is reflected by the dominant algae that is growing in the area. The survey three, four, and five data show sand patterns that are representative of the current flowing through the pond. The currents are coming from the south going north, parallel to the shore. 
Outside the southern wall there was a larger section of fine sand, while the interior of the fishpond had three distinct sections of fine sand. This illustrates that the southern wall is filtering fine sand into the pond at a relatively slow rate, with most of it concentrated in three sections that correlate with thinner and lower wall sections. The north wall, on the other hand, showed the opposite pattern, with sand filtering out of the pond. The substrate on the exterior of the wall consisted of more fine sand than on the interior, reflecting that fine sand is leaving the fishpond at a high rate. With the influx of fine sand coming in at a slower rate and the outflux leaving at a greater rate, the inventory of fine sand in the fishpond would be low. This is also reflected in the data from survey one, the sediment coverage inside the fishpond, which shows that there was very little fine sand within the sampling sites that represent the entire pond. For the west wall, the sediment makeup of the interior and exterior was fairly similar, with the percentage of fine sand being around 30 percent on both sides. This similarity illustrates that there is little, if any, sand movement across this wall. For Survey 6, looking at substrate composition on top of the wall itself, we found that turf (algae less than 1 mm long on top of rock/rubble) dominated the bulk of the substrate available in this location, at 83%, as expected. The wall is composed of large rocks, and can resist wave action more so than coarse or fine sand, which can be washed away. Since the stones here are the original foundation of the fishpond from which our client rebuilt it many years ago, it makes sense that turf covers almost all of the present stone content. The coverage of bare rock with no turf is only on the parts of stones that remain above water even at high tide, so turf had no opportunity to settle and grow there.
The areas that were not turf or stone are what is visible between the rocks when looking from an aerial view, and thus they represent the cracks between the rocks, which allow a view of the algae content below and occasionally all the way down to the sand on the floor. During Survey 7, we revealed key information for classifying the size difference between coarse and fine sand. We found that the coarse sand had a median particle size of more than 1 mm, and the fine sand had a median particle size of less than 1 mm. We used this understanding to clarify our data for both types of sand during the other survey collections, where we used visual markers to identify each type. This classification can also be used in future studies as a standardization of the monitoring project. It is important to note that there was an ocean swell a couple of weeks ago that washed away substrate that may have previously been present in the area. For example, certain algae species and fine sand may have been washed away, influencing the current composition of the inside and outside of the fishpond wall barriers. This may contribute to significant differences between this survey and the previous ones, although other factors such as seasonal differences and simply the regular wave and wind action of many months will also produce these differences. Overall, there are noticeable trends in the differences in sand cover on the inside versus the outside of the fishpond. The higher concentration of fine sand outside could be due to current and wave action pushing coarser, heavier sand particles inside the fishpond, where they have nowhere to go, whereas fine sand can be lifted away by wave action. The fact that turf on rock substrate covered most of the fishpond wall shows that the stones have been present long enough to accumulate turf cover.
Another explanation for the lack of other types of substrate and algae coverage may be the harsh conditions of the intertidal zone, which support only the hardiest species: those that can survive extremes of temperature, water level, and salinity, as well as increased mechanical wave action that makes it difficult to establish a presence on the rocks and moves sediment quickly, not allowing for settlement. We also noted an interesting observation from this year's data versus last year's projects. Halimeda is the most common algae found in and around the pond, which is different from the rest of the coral reef areas that the research team has seen across Tahiti and Moorea. PROPOSALS FOR FUTURE METHODS: We propose that future projects continue to differentiate between fine and coarse sand, so that a standard can be maintained for data collection over time in consideration of factors such as current speed and wave action. A proposal for future groups is to analyze the structure of the wall to determine what qualities lead specific sections to allow more fine sand through the stones. The line-intercept survey of sand cover along the sides of the walls should be repeated to track change over time. We also recommend separating the data collection between algae biodiversity (different genera) and mineral substrates (coral, sand types, rock, rubble) as separate surveys, where the mineral substrate survey does not regard algae coverage. The algae biodiversity survey should also distinguish between turf on rock and turf on rubble, to make clear which substrate turf preferentially colonizes. We also recommend continuing data collection on specific algae types over time, noting if there is a decrease in a specific population during that year, to maintain standard measurements and keep a clear comparison of the different populations in the fishpond over time.
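The median-based cut-off from Survey 7 lends itself to a simple standardized check that future monitoring groups could reuse. The sketch below (function names are illustrative, not part of the survey protocol) classifies a sample of caliper measurements the way described above: a sample whose median grain size exceeds 1 mm is recorded as coarse sand, otherwise as fine sand.

```javascript
// Compute the median of a list of caliper readings (in millimeters).
function medianGrainSize(measurementsMm) {
  const sorted = [...measurementsMm].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}

// Classify a sample (e.g., 50 measured grains) against the 1 mm cut-off
// established in Survey 7.
function classifySandSample(measurementsMm, cutoffMm = 1.0) {
  return medianGrainSize(measurementsMm) > cutoffMm ? "coarse" : "fine";
}
```

Because the classification uses the median rather than the mean, a few unusually large or small grains in a sample will not flip the result, which suits a visual, field-based sampling method.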
Published on May 22, 2024 by langzi.
TWGL: A Tiny WebGL helper Library [rhymes with wiggle] This library's sole purpose is to make using the WebGL API less verbose. Note: Minor API Changes in 2.x See Changelist TL;DR If you want to get stuff done use three.js. If you want to do stuff low-level with WebGL consider using TWGL. The tiniest example Not including the shaders (which is a simple quad shader) here's the entire code <canvas id="c"></canvas> <script src="../dist/4.x/twgl-full.min.js"></script> <script> const gl = document.getElementById("c").getContext("webgl"); const programInfo = twgl.createProgramInfo(gl, ["vs", "fs"]); const arrays = { position: [-1, -1, 0, 1, -1, 0, -1, 1, 0, -1, 1, 0, 1, -1, 0, 1, 1, 0], }; const bufferInfo = twgl.createBufferInfoFromArrays(gl, arrays); function render(time) { twgl.resizeCanvasToDisplaySize(gl.canvas); gl.viewport(0, 0, gl.canvas.width, gl.canvas.height); const uniforms = { time: time * 0.001, resolution: [gl.canvas.width, gl.canvas.height], }; gl.useProgram(programInfo.program); twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo); twgl.setUniforms(programInfo, uniforms); twgl.drawBufferInfo(gl, bufferInfo); requestAnimationFrame(render); } requestAnimationFrame(render); </script> And here it is live. Why? What? How? WebGL is a very verbose API. Setting up shaders, buffers, attributes and uniforms takes a lot of code. A simple lit cube in WebGL might easily take over 60 calls into WebGL.
At its core there's really only a few main functions • twgl.createProgramInfo compiles a shader and creates setters for attribs and uniforms • twgl.createBufferInfoFromArrays creates buffers and attribute settings • twgl.setBuffersAndAttributes binds buffers and sets attributes • twgl.setUniforms sets the uniforms • twgl.createTextures creates textures of various sorts • twgl.createFramebufferInfo creates a framebuffer and attachments. There's a few extra helpers and lower-level functions if you need them but those 6 functions are the core of TWGL. Compare the TWGL vs WebGL code for a point lit cube. Compiling a Shader and looking up locations TWGL const programInfo = twgl.createProgramInfo(gl, ["vs", "fs"]); WebGL // Note: I'm conceding that you'll likely already have the 30 lines of // code for compiling GLSL const program = twgl.createProgramFromScripts(gl, ["vs", "fs"]); const u_lightWorldPosLoc = gl.getUniformLocation(program, "u_lightWorldPos"); const u_lightColorLoc = gl.getUniformLocation(program, "u_lightColor"); const u_ambientLoc = gl.getUniformLocation(program, "u_ambient"); const u_specularLoc = gl.getUniformLocation(program, "u_specular"); const u_shininessLoc = gl.getUniformLocation(program, "u_shininess"); const u_specularFactorLoc = gl.getUniformLocation(program, "u_specularFactor"); const u_diffuseLoc = gl.getUniformLocation(program, "u_diffuse"); const u_worldLoc = gl.getUniformLocation(program, "u_world"); const u_worldInverseTransposeLoc = gl.getUniformLocation(program, "u_worldInverseTranspose"); const u_worldViewProjectionLoc = gl.getUniformLocation(program, "u_worldViewProjection"); const u_viewInverseLoc = gl.getUniformLocation(program, "u_viewInverse"); const positionLoc = gl.getAttribLocation(program, "a_position"); const normalLoc = gl.getAttribLocation(program, "a_normal"); const texcoordLoc = gl.getAttribLocation(program, "a_texcoord"); Creating Buffers for a Cube TWGL const arrays = { position: 
[1,1,-1,1,1,1,1,-1,1,1,-1,-1,-1,1,1,-1,1,-1,-1,-1,-1,-1,-1,1,-1,1,1,1,1,1,1,1,-1,-1,1,-1,-1,-1,-1,1,-1,-1,1,-1,1,-1,-1,1,1,1,1,-1,1,1,-1,-1,1,1,-1,1,-1,1,-1,1,1,-1,1,-1,-1,-1,-1,-1], normal: [1,0,0,1,0,0,1,0,0,1,0,0,-1,0,0,-1,0,0,-1,0,0,-1,0,0,0,1,0,0,1,0,0,1,0,0,1,0,0,-1,0,0,-1,0,0,-1,0,0,-1,0,0,0,1,0,0,1,0,0,1,0,0,1,0,0,-1,0,0,-1,0,0,-1,0,0,-1], texcoord: [1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1], indices: [0,1,2,0,2,3,4,5,6,4,6,7,8,9,10,8,10,11,12,13,14,12,14,15,16,17,18,16,18,19,20,21,22,20,22,23], }; const bufferInfo = twgl.createBufferInfoFromArrays(gl, arrays); WebGL const positions = [1,1,-1,1,1,1,1,-1,1,1,-1,-1,-1,1,1,-1,1,-1,-1,-1,-1,-1,-1,1,-1,1,1,1,1,1,1,1,-1,-1,1,-1,-1,-1,-1,1,-1,-1,1,-1,1,-1,-1,1,1,1,1,-1,1,1,-1,-1,1,1,-1,1,-1,1,-1,1,1,-1,1,-1,-1,-1,-1,-1]; const normals = [1,0,0,1,0,0,1,0,0,1,0,0,-1,0,0,-1,0,0,-1,0,0,-1,0,0,0,1,0,0,1,0,0,1,0,0,1,0,0,-1,0,0,-1,0,0,-1,0,0,-1,0,0,0,1,0,0,1,0,0,1,0,0,1,0,0,-1,0,0,-1,0,0,-1,0,0,-1]; const texcoords = [1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1]; const indices = [0,1,2,0,2,3,4,5,6,4,6,7,8,9,10,8,10,11,12,13,14,12,14,15,16,17,18,16,18,19,20,21,22,20,22,23]; const positionBuffer = gl.createBuffer(); gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer); gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW); const normalBuffer = gl.createBuffer(); gl.bindBuffer(gl.ARRAY_BUFFER, normalBuffer); gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(normals), gl.STATIC_DRAW); const texcoordBuffer = gl.createBuffer(); gl.bindBuffer(gl.ARRAY_BUFFER, texcoordBuffer); gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(texcoords), gl.STATIC_DRAW); const indicesBuffer = gl.createBuffer(); gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indicesBuffer); gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(indices), gl.STATIC_DRAW); Setting Attributes and Indices for a Cube TWGL twgl.setBuffersAndAttributes(gl, 
programInfo, bufferInfo); WebGL gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer); gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0); gl.enableVertexAttribArray(positionLoc); gl.bindBuffer(gl.ARRAY_BUFFER, normalBuffer); gl.vertexAttribPointer(normalLoc, 3, gl.FLOAT, false, 0, 0); gl.enableVertexAttribArray(normalLoc); gl.bindBuffer(gl.ARRAY_BUFFER, texcoordBuffer); gl.vertexAttribPointer(texcoordLoc, 2, gl.FLOAT, false, 0, 0); gl.enableVertexAttribArray(texcoordLoc); gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indicesBuffer); Setting Uniforms for a Lit Cube TWGL // At Init time const uniforms = { u_lightWorldPos: [1, 8, -10], u_lightColor: [1, 0.8, 0.8, 1], u_ambient: [0, 0, 0, 1], u_specular: [1, 1, 1, 1], u_shininess: 50, u_specularFactor: 1, u_diffuse: tex, }; // At render time uniforms.u_viewInverse = camera; uniforms.u_world = world; uniforms.u_worldInverseTranspose = m4.transpose(m4.inverse(world)); uniforms.u_worldViewProjection = m4.multiply(viewProjection, world); twgl.setUniforms(programInfo, uniforms); WebGL // At Init time const u_lightWorldPos = [1, 8, -10]; const u_lightColor = [1, 0.8, 0.8, 1]; const u_ambient = [0, 0, 0, 1]; const u_specular = [1, 1, 1, 1]; const u_shininess = 50; const u_specularFactor = 1; const u_diffuse = 0; // At render time gl.uniform3fv(u_lightWorldPosLoc, u_lightWorldPos); gl.uniform4fv(u_lightColorLoc, u_lightColor); gl.uniform4fv(u_ambientLoc, u_ambient); gl.uniform4fv(u_specularLoc, u_specular); gl.uniform1f(u_shininessLoc, u_shininess); gl.uniform1f(u_specularFactorLoc, u_specularFactor); gl.uniform1i(u_diffuseLoc, u_diffuse); gl.uniformMatrix4fv(u_viewInverseLoc, false, camera); gl.uniformMatrix4fv(u_worldLoc, false, world); gl.uniformMatrix4fv(u_worldInverseTransposeLoc, false, m4.transpose(m4.inverse(world))); gl.uniformMatrix4fv(u_worldViewProjectionLoc, false, m4.multiply(viewProjection, world)); Loading / Setting up textures TWGL const textures = twgl.createTextures(gl, { // a power of 2 image hftIcon: { src: 
"images/hft-icon-16.png", mag: gl.NEAREST }, // a non-power of 2 image clover: { src: "images/clover.jpg" }, // From a canvas fromCanvas: { src: ctx.canvas }, // A cubemap from 6 images yokohama: { target: gl.TEXTURE_CUBE_MAP, src: [ 'images/yokohama/posx.jpg', 'images/yokohama/negx.jpg', 'images/yokohama/posy.jpg', 'images/yokohama/negy.jpg', 'images/yokohama/posz.jpg', 'images/yokohama/negz.jpg', ], }, // A cubemap from 1 image (can be 1x6, 2x3, 3x2, 6x1) goldengate: { target: gl.TEXTURE_CUBE_MAP, src: 'images/goldengate.jpg', }, // A 2x2 pixel texture from a JavaScript array checker: { mag: gl.NEAREST, min: gl.LINEAR, src: [ 255,255,255,255, 192,192,192,255, 192,192,192,255, 255,255,255,255, ], }, // a 1x8 pixel texture from a typed array. stripe: { mag: gl.NEAREST, min: gl.LINEAR, format: gl.LUMINANCE, src: new Uint8Array([ 255, 128, 255, 128, 255, 128, 255, 128, ]), width: 1, }, }); WebGL // Let's assume I already loaded all the images // a power of 2 image const hftIconTex = gl.createTexture(); gl.bindTexture(gl.TEXTURE_2D, tex); gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, hftIconImg); gl.generateMipmaps(gl.TEXTURE_2D); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST); // a non-power of 2 image const cloverTex = gl.createTexture(); gl.bindTexture(gl.TEXTURE_2D, tex); gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, hftIconImg); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE); // From a canvas const cloverTex = gl.createTexture(); gl.bindTexture(gl.TEXTURE_2D, tex); gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, ctx.canvas); gl.generateMipmaps(gl.TEXTURE_2D); // A cubemap from 6 images const yokohamaTex = gl.createTexture(); gl.bindTexture(gl.TEXTURE_CUBE_MAP, tex); gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X, 0, gl.RGBA, 
gl.RGBA, gl.UNSIGNED_BYTE, posXImg); gl.texImage2D(gl.TEXTURE_CUBE_MAP_NEGATIVE_X, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, negXImg); gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_Y, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, posYImg); gl.texImage2D(gl.TEXTURE_CUBE_MAP_NEGATIVE_Y, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, negYImg); gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_Z, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, posZImg); gl.texImage2D(gl.TEXTURE_CUBE_MAP_NEGATIVE_Z, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, negZImg); gl.generateMipmaps(gl.TEXTURE_CUBE_MAP); // A cubemap from 1 image (can be 1x6, 2x3, 3x2, 6x1) const goldengateTex = gl.createTexture(); gl.bindTexture(gl.TEXTURE_CUBE_MAP, tex); const size = goldengate.width / 3; // assume it's a 3x2 texture const slices = [0, 0, 1, 0, 2, 0, 0, 1, 1, 1, 2, 1]; const tempCtx = document.createElement("canvas").getContext("2d"); tempCtx.canvas.width = size; tempCtx.canvas.height = size; for (let ii = 0; ii < 6; ++ii) { const xOffset = slices[ii * 2 + 0] * size; const yOffset = slices[ii * 2 + 1] * size; tempCtx.drawImage(element, xOffset, yOffset, size, size, 0, 0, size, size); gl.texImage2D(faces[ii], 0, format, format, type, tempCtx.canvas); } gl.generateMipmaps(gl.TEXTURE_CUBE_MAP); // A 2x2 pixel texture from a JavaScript array const checkerTex = gl.createTexture(); gl.bindTexture(gl.TEXTURE_2D, tex); gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, new Uint8Array([ 255,255,255,255, 192,192,192,255, 192,192,192,255, 255,255,255,255, ])); gl.generateMipmaps(gl.TEXTURE_2D); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR); // a 1x8 pixel texture from a typed array. 
const stripeTex = gl.createTexture(); gl.bindTexture(gl.TEXTURE_2D, tex); gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1); gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, gl.LUMINANCE, gl.UNSIGNED_BYTE, new Uint8Array([ 255, 128, 255, 128, 255, 128, 255, 128, ])); gl.generateMipmaps(gl.TEXTURE_2D); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR); Creating Framebuffers and attachments TWGL const attachments = [ { format: RGBA, type: UNSIGNED_BYTE, min: LINEAR, wrap: CLAMP_TO_EDGE }, { format: DEPTH_STENCIL, }, ]; const fbi = twgl.createFramebufferInfo(gl, attachments); WebGL const fb = gl.createFramebuffer(gl.FRAMEBUFFER); gl.bindFramebuffer(gl.FRAMEBUFFER, fb); const tex = gl.createTexture(); gl.bindTexture(gl.TEXTURE_2D, tex); gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.drawingBufferWidth, gl.drawingBufferHeight, 0, gl.RGBA, gl.UNSIGNED_BYTE, null); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR); gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0); const rb = gl.createRenderbuffer(); gl.bindRenderbuffer(gl.RENDERBUFFER, rb); gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_STENCIL, gl.drawingBufferWidth, gl.drawingBufferHeight); gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_STENCIL_ATTACHMENT, gl.RENDERBUFFER, rb); Compare TWGL example vs WebGL example Examples WebGL 2 Examples OffscreenCanvas Example ES6 module support AMD support CommonJS / Browserify support Other Features • Includes some optional 3d math functions (full version) You are welcome to use any math library as long as it stores matrices as flat Float32Array or JavaScript arrays. • Includes some optional primitive generators (full version) planes, cubes, spheres, ... Just to help get started Usage See the examples. 
Otherwise there's a few different versions • twgl-full.min.js the minified full version • twgl-full.js the concatenated full version • twgl.min.js the minimum version (no 3d math, no primitives) • twgl.js the concatenated minimum version (no 3d math, no primitives) API Docs API Docs are here. Download • from github http://github.com/greggman/twgl.js • from bower bower install twgl.js • from npm npm install twgl.js or npm install twgl-base.js • from git git clone https://github.com/greggman/twgl.js.git Rationale and other chit-chat TWGL is an attempt to make WebGL simpler by providing a few tiny helper functions that make it much less verbose and remove the tedium. TWGL is NOT trying to help with the complexity of managing shaders and writing GLSL. Nor is it a 3D library like three.js. It's just trying to make WebGL less verbose. TWGL can be considered a spiritual successor to TDL. Whereas TDL created several classes that wrapped WebGL, TWGL tries not to wrap anything. In fact you can manually create nearly all TWGL data structures. For example the function setAttributes takes an object of attributes. In WebGL you might write code like this gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer); gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0); gl.enableVertexAttribArray(positionLoc); gl.bindBuffer(gl.ARRAY_BUFFER, normalBuffer); gl.vertexAttribPointer(normalLoc, 3, gl.FLOAT, false, 0, 0); gl.enableVertexAttribArray(normalLoc); gl.bindBuffer(gl.ARRAY_BUFFER, texcoordBuffer); gl.vertexAttribPointer(texcoordLoc, 2, gl.FLOAT, false, 0, 0); gl.enableVertexAttribArray(texcoordLoc); gl.bindBuffer(gl.ARRAY_BUFFER, colorsBuffer); gl.vertexAttribPointer(colorLoc, 4, gl.UNSIGNED_BYTE, true, 0, 0); gl.enableVertexAttribArray(colorLoc); setAttributes is just the simplest code to do that for you.
// make attributes for TWGL manually const attribs = { a_position: { buffer: positionBuffer, size: 3, }, a_normal: { buffer: normalBuffer, size: 3, }, a_texcoord: { buffer: texcoordBuffer, size: 2, }, a_color: { buffer: colorBuffer, size: 4, type: gl.UNSIGNED_BYTE, normalize: true, }, }; twgl.setAttributes(attribSetters, attribs); The point of the example above is that TWGL is a thin wrapper. All it's doing is trying to make common WebGL operations easier and less verbose. Feel free to mix it with raw WebGL. Want to learn WebGL? Try webglfundamentals.org
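To illustrate the kind of thin wrapper described above — plain functions that look up uniform types once and then dispatch to the right gl call — here is a much-simplified, standalone sketch of a setUniforms-style helper. The mock gl object and the two supported uniform types are purely illustrative (TWGL's real implementation handles the full set of GLSL types and works against a live WebGL context; see the API docs).

```javascript
// GLSL uniform type enums (values match the WebGL constants).
const FLOAT = 0x1406;
const FLOAT_VEC3 = 0x8B51;

// Build one setter function per uniform, keyed by name. uniformInfos is
// shaped like the records gl.getActiveUniform would produce.
function createUniformSetters(gl, uniformInfos) {
  const setters = {};
  for (const { name, type, location } of uniformInfos) {
    if (type === FLOAT) {
      setters[name] = (v) => gl.uniform1f(location, v);
    } else if (type === FLOAT_VEC3) {
      setters[name] = (v) => gl.uniform3fv(location, v);
    }
  }
  return setters;
}

// Setting uniforms is then just a dictionary walk — no per-call type logic.
function setUniforms(setters, values) {
  for (const [name, value] of Object.entries(values)) {
    if (setters[name]) setters[name](value);
  }
}

// Mock gl that records calls, standing in for a real WebGL context.
const calls = [];
const mockGl = {
  uniform1f: (loc, v) => calls.push(["uniform1f", loc, v]),
  uniform3fv: (loc, v) => calls.push(["uniform3fv", loc, v]),
};

const setters = createUniformSetters(mockGl, [
  { name: "u_shininess", type: FLOAT, location: 7 },
  { name: "u_lightWorldPos", type: FLOAT_VEC3, location: 8 },
]);
setUniforms(setters, { u_shininess: 50, u_lightWorldPos: [1, 8, -10] });
```

The design point is that the expensive, type-dependent decisions happen once at program-creation time; at render time the helper only iterates over the values you pass in, which is why mixing it with raw WebGL calls is harmless.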
The Zoological Research Museum Alexander Koenig is a research museum of the Leibniz Association DNA metabarcoding reveals the complex and hidden responses of chironomids to multiple stressors Year of publication: 2018 Full title: DNA metabarcoding reveals the complex and hidden responses of chironomids to multiple stressors ZFMK authors: Published in: "leibniz" - Magazin der Leibniz-Gemeinschaft Publication type: Journal article DOI: http://doi.org/10.1186/s12302-018-0157-x Bibliographic details: Beermann, A. J., Zizka, V. M. A., Elbrecht, V., Baranov, V., & Leese, F. (2018). DNA metabarcoding reveals the complex and hidden responses of chironomids to multiple stressors. Environmental Sciences Europe, 1–15. Abstract: Chironomids, or non-biting midges, often dominate stream invertebrate communities in terms of biomass, abundance, and species richness and play an important role in riverine food webs. Despite these clear facts, the insect family Chironomidae is often treated as a single family in ecological studies or bioassessments given the difficulty to determine specimens further. We investigated stressor responses of single chironomid operational taxonomic units (OTUs) to three globally important stressors (increased salinity, fine sediment and reduced water flow velocity) in a highly replicated mesocosm experiment using a full-factorial design (eight treatment combinations with eight replicates each). In total, 183 chironomid OTUs (97% similarity) were obtained by applying a quantitative DNA metabarcoding approach. Whereas on the typically applied family level, chironomids responded positively to added fine sediment and reduced water velocity in the streambed and negatively to reduced velocity in the leaf litter, an OTU-level analysis revealed a total of 15 different response patterns among the 35 most common OTUs only.
The response patterns ranged from (a) insensitivity to any experimental manipulation through (b) highly specific sensitivities to only one stressor to (c) additive multiple-stressor effects and even (d) complex interactions. Even though most OTUs (> 85%) could not be assigned to a formally described species due to a lack of accurate reference databases at present, the results indicate increased explanatory power with higher taxonomic resolution. Thus, our results highlight the potential of DNA-based approaches when studying environmental impacts, especially for this ecologically important taxon and in the context of multiple stressors. Contact person Former research associate +49 228 9122-351 +49 228 9122-212 V.Elbrecht [at] leibniz-zfmk.de
Medical Term: anticipation Pronunciation: an-tis′i-pā′shŭn Definition: 1. Appearance before the appointed time of a periodic symptom or sign. 2. Progressively earlier age of manifestation of a hereditary disease in successive generations; may be factitious (because of heightened awareness of early signs of the disease or because these signs are more conspicuous in the young) or authentic (because of progressive loss of epistatic and modifier genes by recombination and segregation, or because of expansion of unstable alleles in successive generations). 3. An increase in the severity of a phenotype in successive generations of a family, often associated with an increase in the number of trinucleotide repeats in a causative gene (e.g., fragile X syndrome, myotonic dystrophy, Huntington disease).
Additional Logging and Tracing Code Samples These examples illustrate how to include code in your apps for uploading logs and traces for Store apps and .NET applications. These examples illustrate basic implementations, and as a result the upload process cannot be canceled. To cancel the upload, create a CancellationTokenSource and pass its CancellationToken as the last parameter of the SendAsync method of the net client instance. You can then show a button, for example "Cancel", and signal the CancellationToken when it is pressed. The cancellation of the upload process is then handled by the Networking and Supportability libraries. Windows Store app uploader example code class UploadResult : SAP.Supportability.IUploadResult { public int ResponseStatusCode { get; set; } public string Hint { get; set; } } class SupportabilityUploader : SAP.Supportability.IUploader { SAP.Net.Http.HttpClient clientRef = null; Uri serverUri = null; public SupportabilityUploader(SAP.Net.Http.HttpClient client, Uri host, bool uploadBtx = true) { if (client == null) throw new ArgumentNullException("client"); if (host == null) throw new ArgumentNullException("host"); this.serverUri = new UriBuilder(host.Scheme, host.Host, host.Port, (uploadBtx ?
"btx" : "clientlogs")).Uri; this.clientRef = client; } public IAsyncOperation<SAP.Supportability.IUploadResult> SendAsync(IReadOnlyDictionary<string, string> headers, Windows.Storage.Streams.IInputStream payload) { return Task.Run<SAP.Supportability.IUploadResult>(async () => { var result = await this.clientRef.SendAsync(() => { var request = new HttpRequestMessage(HttpMethod.Post, this.serverUri) { Content = new StreamContent(payload.AsStreamForRead()) }; foreach (var header in headers) { request.Content.Headers.TryAddWithoutValidation(header.Key, header.Value); } return request; }); return new UploadResult() { ResponseStatusCode = (int)result.StatusCode, Hint = await result.Content.ReadAsStringAsync() }; }).AsAsyncOperation(); } } Windows .NET application uploader example code (only the method which is different from the Store app) public async Task<SAP.Supportability.IUploadResult> SendAsync(IReadOnlyDictionary<string, string> headers, System.IO.Stream payload) { var result = await this.clientRef.SendAsync(() => { var request = new HttpRequestMessage(HttpMethod.Post, this.serverUri) { Content = new StreamContent(payload) }; foreach (var header in headers) { request.Content.Headers.TryAddWithoutValidation(header.Key, header.Value); } return request; }); return new UploadResult() { ResponseStatusCode = (int)result.StatusCode, Hint = await result.Content.ReadAsStringAsync() }; } Log creation upload example code for Store Apps and .Net applications var logManager = SAP.Supportability.SupportabilityFacade.Instance.ClientLogManager; logManager.SetLogLevel(SAP.Supportability.Logging.ClientLogLevel.Info); logManager.SetLogDestination(SAP.Supportability.Logging.ClientLogDestinations.FileSystem | SAP.Supportability.Logging.ClientLogDestinations.Console); var logger = logManager.GetLogger("testLogger"); logger.LogWarning("sample"); logger.LogError("sample error message"); logger.LogError("sample log 2"); string message = null; try { await 
SAP.Supportability.SupportabilityFacade.Instance.ClientLogManager.UploadClientLogsAsync(new SupportabilityUploader(httpClient, serverUri, false)); } catch (Exception ex) { var supportabilityException = ex as SAP.Supportability.ISupportabilityException; message = ex.Message + ((supportabilityException != null) ? ("(" + supportabilityException.UploadResult.ResponseStatusCode + ")") : ""); } BTX generation and upload example for Store apps and .Net applications var traceManager = (SAP.Supportability.Tracing.E2ETraceManager)SAP.Supportability.SupportabilityFacade.Instance.E2ETraceManager; traceManager.ClientHost = "WinDemo-Client"; traceManager.TraceLevel = SAP.Supportability.Tracing.E2ETraceLevel.Low; var transaction = await traceManager.StartTransactionAsync("NewTransactionWin"); var step = transaction.StartStep(); var request = step.StartRequest(); request.SetRequestLine("GET http://www.test.com HTTP/1.1"); request.SetRequestHeaders(new Dictionary<string, string> { {"SAP-PASSPORT",request.PassportHttpHeader} , {"X-CorrelationID","correlationID0101"} }); request.SetByteCountSent(100); request.EndRequest(); step.EndStep(); transaction.EndTransaction(); string message = null; try { await SAP.Supportability.SupportabilityFacade.Instance.E2ETraceManager.UploadBtxAsync(new SupportabilityUploader(httpClient, serverUri)); } catch (Exception ex) { var supportabilityException = ex as SAP.Supportability.ISupportabilityException; message = ex.Message + ((supportabilityException != null) ? ("(" + supportabilityException.UploadResult.ResponseStatusCode + ")") : ""); }
Renal Angiogram (Angiogram-Kidneys, Renal Angiography, Renal Arteriogram, Renal Arteriography) Procedure overview What is a renal angiogram? An angiogram, also called an arteriogram, is an X-ray image of the blood vessels. It is performed to evaluate various vascular conditions, such as an aneurysm (ballooning of a blood vessel), stenosis (narrowing of a blood vessel), or blockages. A renal angiogram is an angiogram of the blood vessels of the kidneys. A renal angiogram may be used to assess the blood flow to the kidneys. Fluoroscopy is often used during a renal arteriogram. Fluoroscopy is the study of moving body structures similar to an X-ray "movie." A continuous X-ray beam is passed through the body part being examined, and is transmitted to a TV-like monitor so that the body part and its motion can be seen in detail. How is an angiogram performed? In order to obtain an X-ray image of a blood vessel, an intravenous (IV) or intra-arterial (IA) access is necessary so that contrast, also known as X-ray dye, can be injected into the body's circulatory system. This contrast dye causes the blood vessels to appear opaque on the X-ray image, thus allowing the physician to better visualize the structure of the vessel(s) under examination. Many arteries can be examined by an angiogram, including the arterial systems of the legs, kidneys, brain, and heart. For a renal angiogram, arterial access may be obtained through a large artery such as the femoral artery in the groin. Once access is obtained, the catheter is advanced to the renal artery, contrast is injected, and a series of X-ray pictures is made. These X-ray images show the arterial, venous, and capillary blood vessel structures and blood flow in the kidneys. 
Other related procedures that may be used to diagnose kidney problems include kidney, ureters, and bladder (KUB) X-ray, computed tomography (CT scan) of the kidneys, intravenous pyelogram, kidney biopsy, kidney scan, kidney ultrasound, magnetic resonance imaging (MRI), and renal venogram. Please see these procedures for additional information. How do the kidneys work? The body takes nutrients from food and converts them to energy. After the body has taken the food that it needs, waste products are left behind in the bowel and in the blood. The kidneys and urinary system keep chemicals, such as potassium and sodium, and water in balance, and remove a type of waste, called urea, from the blood. Urea is produced when foods containing protein, such as meat, poultry, and certain vegetables, are broken down in the body. Urea is carried in the bloodstream to the kidneys. Two kidneys, a pair of purplish-brown organs, are located below the ribs toward the middle of the back. Their function is to: • Remove liquid waste from the blood in the form of urine • Keep a stable balance of salts and other substances in the blood • Produce erythropoietin, a hormone that aids the formation of red blood cells • Release calcitriol, the active form of vitamin D, which helps maintain calcium for bones and for normal chemical balance in the body • Regulate blood pressure The kidneys remove urea from the blood through tiny filtering units called nephrons. Each nephron consists of a ball formed of small blood capillaries, called a glomerulus, and a small tube called a renal tubule. Urea, together with water and other waste substances, forms the urine as it passes through the nephrons and down the renal tubules of the kidney. Reasons for the procedure A renal angiogram may be performed to detect abnormalities of the blood vessels of the kidneys.
Such abnormalities may include, but are not limited to, the following:

• Aneurysms
• Stenosis or vasospasm (spasm of the blood vessel)
• Arteriovenous malformation (an abnormal connection between the arteries and veins)
• Thrombosis (a blood clot within a blood vessel) or occlusion (blockage of a blood vessel)
• Renovascular hypertension (systemic high blood pressure caused when the renal artery is narrowed)

Other conditions that may be detected by a renal angiogram include tumors, hemorrhage (bleeding), complications of kidney transplantation, and the invasion of a tumor into the blood vessels. An angiogram may be used to deliver medications directly into the tissue or organ needing treatment, such as the administration of a clotting medication to a bleeding site or cancer medication into a tumor. Renal angiograms are now used less frequently, with CT and MRI scans more commonly used to diagnose these conditions. A renal angiogram may also be recommended after a previous procedure, such as a CT scan, indicates the need for further information.

There may be other reasons for your doctor to recommend a renal angiogram.

Risks of the procedure

You may want to ask your doctor about the amount of radiation used during the procedure and the risks related to your particular situation. It is a good idea to keep a record of your past history of radiation exposure, such as previous scans and other types of X-rays, so that you can inform your doctor. Risks associated with radiation exposure may be related to the cumulative number of X-ray examinations and/or treatments over a long period of time.

If you are pregnant or suspect that you may be pregnant, you should notify your health care provider. Radiation exposure during pregnancy may lead to birth defects.

There is a risk for allergic reaction to the contrast. Patients who are allergic to or sensitive to medications, contrast dye, or iodine should notify their doctor.
Also, patients with kidney failure or other kidney problems should notify their doctor, as contrast can worsen existing kidney disease.

Because the procedure involves the blood vessels and blood flow of the kidneys, there is a small risk for complications involving the kidneys. These complications may include, but are not limited to, the following:

• Hemorrhage due to puncture of a blood vessel
• Injury to nerves
• Thrombus. A clot in the blood vessel
• Hematoma. An area of swelling caused by a collection of blood
• Infection
• Transient kidney failure
• Damage to the artery or arterial wall, which can lead to blood clots

There may be other risks depending on your specific medical condition. Be sure to discuss any concerns with your doctor prior to the procedure.

Certain factors or conditions may interfere with the accuracy of a renal angiogram. These factors include, but are not limited to, the following:

• Remaining contrast substances from recent contrast studies, such as a barium enema
• Gas or stool in the intestines

Before the procedure

• Your doctor will explain the procedure to you and offer you the opportunity to ask any questions that you might have about the procedure.
• You will be asked to sign a consent form that gives permission to do the procedure. Read the form carefully and ask questions if something is not clear.
• Notify your doctor if you have ever had a reaction to any contrast, or if you are allergic to iodine.
• Notify your doctor if you have or have had any prior kidney problems or kidney disease.
• Notify your doctor if you are sensitive to or are allergic to any medications, latex, tape, or anesthetic agents (local and general).
• You will need to fast for a certain period of time prior to the procedure. Your doctor will notify you how long to fast, whether for a few hours or overnight.
• Notify your health care provider if you are pregnant or suspect you may be pregnant.
• Notify your doctor of all medications (prescribed and over-the-counter) and herbal supplements that you are taking.
• Notify your doctor if you have a history of bleeding disorders or if you are taking any anticoagulant (blood-thinning) medications, aspirin, or other medications that affect blood clotting. It may be necessary for you to stop these medications prior to the procedure.
• You may receive a sedative prior to the procedure if necessary. You may also receive an anticholinergic medication, which acts to slow down the production of saliva in the mouth, inhibit the production of acid in the stomach, and slow down the activities of the intestinal tract, among other effects. If you receive this medication, you may notice that your mouth feels dry.
• Depending on the site used for injection of the contrast, the recovery period may last up to 12 to 24 hours. You should be prepared to spend the night if necessary.
• Your doctor may request a blood test prior to the procedure to determine how long it takes your blood to clot. Other blood tests may be done as well.
• Based on your medical condition, your doctor may request other specific preparation.

During the procedure

A renal angiogram may be performed on an outpatient basis or as part of your stay in a hospital. Procedures may vary depending on your condition and your doctor's practices.

Generally, a renal angiogram follows this process:

1. You will be asked to remove any clothing, jewelry, or other objects that may interfere with the procedure.
2. You will be given a gown to wear.
3. You will be asked to empty your bladder prior to the start of the procedure.
4. You will be positioned on the X-ray table.
5. An intravenous (IV) line will be inserted in your arm or hand.
6. You may have a blood test done to test your kidney function.
7. You will be connected to an EKG monitor that records the electrical activity of the heart and monitors the heart during the procedure using small, adhesive electrodes.
Your vital signs (heart rate, blood pressure, and breathing rate) will be monitored during the procedure.
8. The radiologist will check your pulses below the puncture site and mark them with a marker so that the circulation to the limb below the site can be checked after the procedure.
9. A needle will be inserted into an artery in your groin after the skin is cleansed and a local anesthetic is injected. On occasion, an artery in the elbow area of the arm may be used. If the groin or arm site is used, the site will be shaved prior to insertion of the IV. If the arm site is used, a blood pressure cuff will be applied to the arm below the IV site and inflated to prevent flow of the contrast dye into the lower arm.
10. Once the needle has been placed, a catheter (a long, thin tube) will be inserted into the artery at the groin or arm site. The catheter will be advanced into the aorta near the renal arteries. Fluoroscopy will be used to verify the location of the catheter.
11. An injection of contrast will be given. You may feel some effects when the dye is injected into the line. These effects include a flushing sensation, a salty or metallic taste in the mouth, a brief headache, or nausea and/or vomiting. These effects usually last for a few moments.
12. You should notify the radiologist if you feel any breathing difficulties, sweating, numbness, or heart palpitations.
13. After the contrast dye is injected, a series of X-rays will be taken. The first series of X-rays shows the arteries, and the second series shows capillary and venous blood flow.
14. Depending on the specific study being performed, there may be one or more additional injections of contrast dye.
15. Once sufficient information has been obtained, the catheter will be removed and pressure will be applied over the area to keep the artery from bleeding.
16. After the bleeding stops, a dressing will be applied to the site.
A sandbag or other heavy item may be placed over the site for a period of time to prevent further bleeding or the formation of a hematoma at the site.

After the procedure

After the procedure, you will be taken to the recovery room for observation. The circulation and sensation of the leg where the injection catheter was inserted will be monitored. A nurse will monitor your vital signs and the injection site.

You will remain flat in bed in a recovery room for several hours after the procedure. If the groin or arm site was used, the leg or arm on the side of the injection site will be kept straight for up to 12 hours. You may be given pain medication for pain or discomfort related to the injection site or to having to lie flat and still for a prolonged period.

You will be encouraged to drink water and other fluids to help flush the contrast dye from your body. You may resume your usual diet and activities after the procedure, unless your doctor advises you differently.

When you have completed the recovery period, you may be returned to your hospital room or discharged to your home. If this procedure was performed as an outpatient, you should have another person drive you home.

Home instructions

Once at home, you should monitor the injection site for bleeding. A small bruise is normal, as is an occasional drop of blood at the site. If the groin or arm was used, you should monitor the leg or arm for changes in temperature or color, pain, numbness, tingling, or loss of function of the limb.

Drink plenty of fluids to prevent dehydration and to help pass the contrast. You may be advised not to do any strenuous activities or take a hot bath or shower for a period of time after the procedure.
Notify your doctor to report any of the following:

• Fever and/or chills
• Increased pain, redness, swelling, or bleeding or other drainage from the groin injection site
• Coolness, numbness and/or tingling, or other changes in the affected extremity

Your doctor may give you additional or alternate instructions after the procedure, depending on your particular situation.

Online resources

The content provided here is for informational purposes only, and was not designed to diagnose or treat a health problem or disease, or replace the professional medical advice you receive from your doctor. Please consult your health care provider with any questions or concerns you may have regarding your condition.

This page contains links to other websites with information about this procedure and related health conditions. We hope you find these sites helpful, but please remember we do not control or endorse the information presented on these websites, nor do these sites endorse the information contained here.

American Cancer Society
American Urological Association
National Institute of Diabetes and Digestive and Kidney Diseases
National Institutes of Health (NIH)
National Kidney and Urologic Diseases Information Clearinghouse
National Kidney Foundation
National Library of Medicine
Question:

I'm calling a WebMethod (ASP.NET) via AJAX (jQuery). If I create a version of the call with no params, it calls fine. When I pass my JSON into the real method, it doesn't get called (the breakpoint is not getting hit).

Here's a sample of the JSON I'm passing in (an array with 2 objects):

{ "bills":[ "{ 'Id': '1', 'Vote': 'true' },{ 'Id': '2', 'Vote': 'false' }" ] }

Here's the WebMethod signature:

[WebMethod]
public static void LinkBillsToCandidate(List<JsonBillForCandidate> bills)

Here's the .NET object:

public class JsonBillForCandidate
{
    public int Id { get; set; }
    public bool? Vote { get; set; }
}

Is there a problem with my JSON format? That's all I can think of that is preventing my call from going through.

Comments:
- Why are your array values in quotes? (Oded)
- I do think you have to make the bills parameter a string only and have to JSON-deserialize the string on the server. (Uwe Keim)

Answer 1:

It's because it's not finding the signature you're sending it (LinkBillsToCandidate(string)). As Uwe mentioned, you can send it a string and deserialize it in server-side code using the JSON deserializing method: http://msdn.microsoft.com/en-us/library/bb412179.aspx

Comment:
- In a recent project, I used the new dynamic keyword of .NET 4 to save me from creating a real class to deserialize into. For smaller objects, this should be sufficient. (Uwe Keim)

Answer 2 (accepted):

The problem was as I had suspected. Some slight tweaking of the JSON did the trick.
Here is the final JSON format that works:

{ 'bills':[ { 'Id':3, 'Vote':true }, { 'Id':4, 'Vote':false } ] }

This ASP.NET method handles the JSON just fine:

[WebMethod]
public static void LinkBillsToCandidate(List<JsonBillForCandidate> bills)
{
    foreach (JsonBillForCandidate bill in bills)
    {
        BillLogic.LinkBillToCandidate(bill.Id, SessionHelper.CandidateId, bill.Vote);
    }
    NavigationHelper.GoToCandidate();
}

Comment:
- @bridus: Please share your jquery code :) (Santosh)
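As a side note, the original broken payload (array elements wrapped in quotes, so they arrived as strings rather than objects) can be avoided entirely by building the data as a JavaScript object and serializing it with JSON.stringify instead of hand-writing the JSON. A minimal client-side sketch, using the same bill values as the accepted answer; the jQuery settings mentioned afterwards are the usual ones for ASP.NET page methods, not taken from this thread:

```javascript
// Sketch: let JSON.stringify produce the payload instead of writing it by hand.
// This avoids the original bug, where each array element was itself quoted
// and therefore deserialized as a string rather than an object.
var bills = [
  { Id: 3, Vote: true },
  { Id: 4, Vote: false }
];

// The whole request body, shaped for a WebMethod parameter named "bills".
var payload = JSON.stringify({ bills: bills });

console.log(payload);
// → {"bills":[{"Id":3,"Vote":true},{"Id":4,"Vote":false}]}
```

In a jQuery call, this string would typically be passed as the `data` option of `$.ajax` together with `contentType: "application/json; charset=utf-8"`, so ASP.NET can bind it to the `List<JsonBillForCandidate>` parameter.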
parental guidance

Essential care for asthmatic children

Fortunately, most children who suffer from asthma manage to control their symptoms, so attacks are rare. However, it is essential that both they and their parents know about this disease: the treatments it requires, the triggers to avoid, and the essential care asthmatic children need. The more they know about the disease, the better the quality of life they can offer the child and the better their ability to control it, so that the child leads a completely normal life. Below we offer a series of indications that will be useful in the care of children suffering from asthma.

Establish an action plan and follow it when necessary

The parents of a child who suffers from asthma must establish an action plan in the event of an attack, and the child must be aware of what actions to take in this emergency. In general, the plan contains the indications given by the treating physician, including medications, doses, and how to take them, as well as how to prevent triggers, how to act before an attack, and how to recognize one and control it if it appears. In this way it will be possible to control episodes or prevent them, and the family will only have to go to the pediatrician if they really need help.

Provide medications according to the specialist's prescription

Children with asthma need to take daily medications to keep their airways from becoming inflamed, and others that are used only in the event of an attack, to open the airways as quickly as possible. Most of these medications are administered through a nebulizer or inhaler to get them into the lungs; sometimes they can also be taken in pill or liquid form. In any case, follow the doctor's prescription regarding dosages and how often they should be taken.
Identify triggers and avoid them

Triggers are elements that can affect the airway and eventually cause an asthma attack. Among them are pollen during seasonal changes in the weather, infections such as the common cold, and others. Parents will need to determine which triggers affect their child in particular and avoid as many as possible, to ensure that attacks are kept to a minimum.

Get the flu shot every year

Given that one of the main triggers of asthma attacks in children is the flu, it is essential that they be protected through vaccination against this very common disease, and in this way reduce the chances of becoming infected.

Use devices to ensure the child's health

It is now possible to anticipate when a child is about to have an asthma attack by using specialized tools such as a peak flow meter, or even by keeping an asthma diary, which helps track symptoms and how often they occur, and allows medication to be adjusted to maximize its effectiveness. The peak flow meter is a handheld device that measures a child's ability to push air out of their lungs and helps determine when the airways are narrowing, a characteristic sign of an attack.

Identify signs of an attack

When children have suffered episodes like these before, the onset commonly manifests in certain signs that parents can note as a warning of a possible attack before the symptoms are evident, allowing them to act in a timely manner by providing the necessary medications. Usually these signs are changes in appearance, mood, or breathing. Some children even simply say they feel a little strange, so it is always appropriate to be attentive to children, especially when they are young.
Canadian Mathematical Society (CMS/SMC), www.cms.math.ca
Publications: Search results
Search: MSC category 14 (Algebraic geometry)
Results 101 - 125 of 137

101. CMB 2003 (vol 46 pp. 575)
Marshall, M.
Optimization of Polynomial Functions
This paper develops a refinement of Lasserre's algorithm for optimizing a polynomial on a basic closed semialgebraic set via semidefinite programming and addresses an open question concerning the duality gap. It is shown that, under certain natural stability assumptions, the problem of optimization on a basic closed set reduces to the compact case.
Categories: 14P10, 46L05, 90C22

102. CMB 2003 (vol 46 pp. 495)
Baragar, Arthur
Canonical Vector Heights on Algebraic K3 Surfaces with Picard Number Two
Let $V$ be an algebraic K3 surface defined over a number field $K$. Suppose $V$ has Picard number two and an infinite group of automorphisms $\mathcal{A} = \Aut(V/K)$. In this paper, we introduce the notion of a vector height $\mathbf{h} \colon V \to \Pic(V) \otimes \mathbb{R}$ and show the existence of a canonical vector height $\widehat{\mathbf{h}}$ with the following properties: \begin{gather*} \widehat{\mathbf{h}} (\sigma P) = \sigma_* \widehat{\mathbf{h}} (P) \\ h_D (P) = \widehat{\mathbf{h}} (P) \cdot D + O(1), \end{gather*} where $\sigma \in \mathcal{A}$, $\sigma_*$ is the pushforward of $\sigma$ (the pullback of $\sigma^{-1}$), and $h_D$ is a Weil height associated to the divisor $D$. The bounded function implied by the $O(1)$ does not depend on $P$. This allows us to attack some arithmetic problems. For example, we study the number of rational points with bounded logarithmic height in an $\mathcal{A}$-orbit, $$ N_{\mathcal{A}(P)} (t,D) = \# \{Q \in \mathcal{A}(P) : h_D (Q) \le t\}. $$
Categories: 11G50, 14J28, 14G40, 14J50, 14G05

103. CMB 2003 (vol 46 pp. 429)
Sastry, Pramathanath; Tong, Yue Lin L.
The Grothendieck Trace and the de Rham Integral
On a smooth $n$-dimensional complete variety $X$ over ${\mathbb C}$ we show that the trace map ${\tilde\theta}_X \colon\break H^n (X,\Omega_X^n) \to {\mathbb C}$ arising from Lipman's version of Grothendieck duality in \cite{ast-117} agrees with $$ (2\pi i)^{-n} (-1)^{n(n-1)/2} \int_X \colon H^{2n}_{DR} (X,{\mathbb C}) \to {\mathbb C} $$ under the Dolbeault isomorphism.
Categories: 14F10, 32A25, 14A15, 14F05, 18E30

104. CMB 2003 (vol 46 pp. 400)
Marshall, M.
Approximating Positive Polynomials Using Sums of Squares
The paper considers the relationship between positive polynomials, sums of squares and the multi-dimensional moment problem in the general context of basic closed semi-algebraic sets in real $n$-space. The emphasis is on the non-compact case and on quadratic module representations as opposed to quadratic preordering presentations. The paper clarifies the relationship between known results on the algebraic side and on the functional-analytic side and extends these results in a variety of ways.
Categories: 14P10, 44A60

105. CMB 2003 (vol 46 pp. 323)
Chamberland, Marc
Characterizing Two-Dimensional Maps Whose Jacobians Have Constant Eigenvalues
Recent papers have shown that $C^1$ maps $F\colon \mathbb{R}^2 \rightarrow \mathbb{R}^2$ whose Jacobians have constant eigenvalues can be completely characterized if either the eigenvalues are equal or $F$ is a polynomial. Specifically, $F=(u,v)$ must take the form \begin{gather*} u = ax + by + \beta \phi(\alpha x + \beta y) + e \\ v = cx + dy - \alpha \phi(\alpha x + \beta y) + f \end{gather*} for some constants $a$, $b$, $c$, $d$, $e$, $f$, $\alpha$, $\beta$ and a $C^1$ function $\phi$ in one variable. If, in addition, the function $\phi$ is not affine, then \begin{equation} \alpha\beta (d-a) + b\alpha^2 - c\beta^2 = 0.
\end{equation} This paper shows how these theorems cannot be extended by constructing a real-analytic map whose Jacobian eigenvalues are $\pm 1/2$ and does not fit the previous form. This example is also used to construct non-obvious solutions to nonlinear PDEs, including the Monge--Amp\`ere equation.
Keywords: Jacobian Conjecture, injectivity, Monge--Ampère equation
Categories: 26B10, 14R15, 35L70

106. CMB 2003 (vol 46 pp. 321)
Ballico, E.
Discreteness For the Set of Complex Structures On a Real Variety
Let $X$, $Y$ be reduced and irreducible compact complex spaces and $S$ the set of all isomorphism classes of reduced and irreducible compact complex spaces $W$ such that $X\times Y \cong X\times W$. Here we prove that $S$ is at most countable. We apply this result to show that for every reduced and irreducible compact complex space $X$ the set $S(X)$ of all complex reduced compact complex spaces $W$ with $X\times X^\sigma \cong W\times W^\sigma$ (where $A^\sigma$ denotes the complex conjugate of any variety $A$) is at most countable.
Categories: 32J18, 14J99, 14P99

107. CMB 2003 (vol 46 pp. 204)
Levy, Jason
Rationality and Orbit Closures
Suppose we are given a finite-dimensional vector space $V$ equipped with an $F$-rational action of a linearly algebraic group $G$, with $F$ a characteristic zero field. We conjecture the following: to each vector $v\in V(F)$ there corresponds a canonical $G(F)$-orbit of semisimple vectors of $V$. In the case of the adjoint action, this orbit is the $G(F)$-orbit of the semisimple part of $v$, so this conjecture can be considered a generalization of the Jordan decomposition. We prove some cases of the conjecture.
Categories: 14L24, 20G15

108. CMB 2003 (vol 46 pp. 140)
Renner, Lex E.
An Explicit Cell Decomposition of the Wonderful Compactification of a Semisimple Algebraic Group
We determine an explicit cell decomposition of the wonderful compactification of a semisimple algebraic group.
To do this we first identify the $B\times B$-orbits using the generalized Bruhat decomposition of a reductive monoid. From there we show how each cell is made up from $B\times B$-orbits.
Categories: 14L30, 14M17, 20M17

109. CMB 2002 (vol 45 pp. 686)
Rauschning, Jan; Slodowy, Peter
An Aspect of Icosahedral Symmetry
We embed the moduli space $Q$ of 5 points on the projective line $S_5$-equivariantly into $\mathbb{P} (V)$, where $V$ is the 6-dimensional irreducible module of the symmetric group $S_5$. This module splits with respect to the icosahedral group $A_5$ into the two standard 3-dimensional representations. The resulting linear projections of $Q$ relate the action of $A_5$ on $Q$ to those on the regular icosahedron.
Categories: 14L24, 20B25

110. CMB 2002 (vol 45 pp. 349)
Coppens, Marc
Very Ample Linear Systems on Blowings-Up at General Points of Projective Spaces
Let $\mathbf{P}^n$ be the $n$-dimensional projective space over some algebraically closed field $k$ of characteristic $0$. For an integer $t\geq 3$ consider the invertible sheaf $O(t)$ on $\mathbf{P}^n$ (Serre twist of the structure sheaf). Let $N = \binom{t+n}{n}$, the dimension of the space of global sections of $O(t)$, and let $k$ be an integer satisfying $0\leq k\leq N - (2n+2)$. Let $P_1,\dots,P_k$ be general points on $\mathbf{P}^n$ and let $\pi \colon X \to \mathbf{P}^n$ be the blowing-up of $\mathbf{P}^n$ at those points. Let $E_i = \pi^{-1} (P_i)$ with $1\leq i\leq k$ be the exceptional divisor. Then $M = \pi^* \bigl( O(t) \bigr) \otimes O_X (-E_1 - \cdots -E_k)$ is a very ample invertible sheaf on $X$.
Keywords: blowing-up, projective space, very ample linear system, embeddings, Veronese map
Categories: 14E25, 14N05, 14N15

111. CMB 2002 (vol 45 pp. 417)
Kamiyama, Yasuhiko; Tsukuda, Shuichi
On Deformations of the Complex Structure on the Moduli Space of Spatial Polygons
For an integer $n \geq 3$, let $M_n$ be the moduli space of spatial polygons with $n$ edges. We consider the case of odd $n$.
Then $M_n$ is a Fano manifold of complex dimension $n-3$. Let $\Theta_{M_n}$ be the sheaf of germs of holomorphic sections of the tangent bundle $TM_n$. In this paper, we prove $H^q (M_n,\Theta_{M_n})=0$ for all $q \geq 0$ and all odd $n$. In particular, we see that the moduli space of deformations of the complex structure on $M_n$ consists of a point. Thus the complex structure on $M_n$ is locally rigid.
Keywords: polygon space, complex structure
Categories: 14D20, 32C35

112. CMB 2002 (vol 45 pp. 204)
Fakhruddin, Najmuddin
On the Chow Groups of Supersingular Varieties
We compute the rational Chow groups of supersingular abelian varieties and some other related varieties, such as supersingular Fermat varieties and supersingular $K3$ surfaces. These computations are concordant with the conjectural relationship, for a smooth projective variety, between the structure of Chow groups and the coniveau filtration on the cohomology.
Categories: 14C25, 14K99

113. CMB 2002 (vol 45 pp. 284)
Sancho de Salas, Fernando
Residue: A Geometric Construction
A new construction of the ordinary residue of differential forms is given. This construction is intrinsic, \ie, it is defined without local coordinates, and it is geometric: it is constructed out of the geometric structure of the local and global cohomology groups of the differentials. The Residue Theorem and the local calculation then follow from geometric reasons.
Category: 14A25

114. CMB 2002 (vol 45 pp. 213)
Gordon, B. Brent; Joshi, Kirti
Griffiths Groups of Supersingular Abelian Varieties
The Griffiths group $\Gr^r(X)$ of a smooth projective variety $X$ over an algebraically closed field is defined to be the group of homologically trivial algebraic cycles of codimension $r$ on $X$ modulo the subgroup of algebraically trivial algebraic cycles.
The main result of this paper is that the Griffiths group $\Gr^2 (A_{\bar{k}})$ of a supersingular abelian variety $A_{\bar{k}}$ over the algebraic closure of a finite field of characteristic $p$ is at most a $p$-primary torsion group. As a corollary the same conclusion holds for supersingular Fermat threefolds. In contrast, using methods of C.~Schoen it is also shown that if the Tate conjecture is valid for all smooth projective surfaces and all finite extensions of the finite ground field $k$ of characteristic $p>2$, then the Griffiths group of any ordinary abelian threefold $A_{\bar{k}}$ over the algebraic closure of $k$ is non-trivial; in fact, for all but a finite number of primes $\ell\ne p$ it is the case that $\Gr^2 (A_{\bar{k}}) \otimes \Z_\ell \neq 0$.
Keywords: Griffiths group, Beauville conjecture, supersingular Abelian variety, Chow group
Categories: 14J20, 14C25

115. CMB 2002 (vol 45 pp. 89)
Grant, David
On Gunning's Prime Form in Genus $2$
Using a classical generalization of Jacobi's derivative formula, we give an explicit expression for Gunning's prime form in genus 2 in terms of theta functions and their derivatives.
Categories: 14K25, 30F10

116. CMB 2001 (vol 44 pp. 491)
Wang, Weiqiang
Resolution of Singularities of Null Cones
We give canonical resolutions of singularities of several cone varieties arising from invariant theory. We establish a connection between our resolutions and resolutions of singularities of closures of conjugacy classes in classical Lie algebras.
Categories: 14L35, 22G

117. CMB 2001 (vol 44 pp. 452)
Ishihara, Hironobu
Some Adjunction Properties of Ample Vector Bundles
Let $\ce$ be an ample vector bundle of rank $r$ on a projective variety $X$ with only log-terminal singularities. We consider the nefness of adjoint divisors $K_X + (t-r) \det \ce$ when $t \ge \dim X$ and $t>r$. As an application, we classify pairs $(X,\ce)$ with $c_r$-sectional genus zero.
Keywords: ample vector bundle, adjunction, sectional genus
Categories: 14J60, 14C20, 14F05, 14J40

118. CMB 2001 (vol 44 pp. 257)
Abánades, Miguel A.
Algebraic Homology For Real Hyperelliptic and Real Projective Ruled Surfaces
Let $X$ be a reduced nonsingular quasiprojective scheme over ${\mathbb R}$ such that the set of real rational points $X({\mathbb R})$ is dense in $X$ and compact. Then $X({\mathbb R})$ is a real algebraic variety. Denote by $H_k^{\alg}(X({\mathbb R}), {\mathbb Z}/2)$ the group of homology classes represented by Zariski closed $k$-dimensional subvarieties of $X({\mathbb R})$. In this note we show that $H_1^{\alg} (X({\mathbb R}), {\mathbb Z}/2)$ is a proper subgroup of $H_1(X({\mathbb R}), {\mathbb Z}/2)$ for a nonorientable hyperelliptic surface $X$. We also determine all possible groups $H_1^{\alg}(X({\mathbb R}), {\mathbb Z}/2)$ for a real ruled surface $X$ in connection with the previously known description of all possible topological configurations of $X$.
Categories: 14P05, 14P25

119. CMB 2001 (vol 44 pp. 313)
Reverter, Amadeu; Vila, Núria
Images of mod $p$ Galois Representations Associated to Elliptic Curves
We give an explicit recipe for the determination of the images associated to the Galois action on $p$-torsion points of elliptic curves. We present a table listing the image for all the elliptic curves defined over $\QQ$ without complex multiplication with conductor less than 200 and for each prime number~$p$.
Keywords: Galois groups, elliptic curves, Galois representation, isogeny
Categories: 11R32, 11G05, 12F10, 14K02

120. CMB 2001 (vol 44 pp. 223)
Marshall, M.
Extending the Archimedean Positivstellensatz to the Non-Compact Case
A generalization of Schm\"udgen's Positivstellensatz is given which holds for any basic closed semialgebraic set in $\mathbb{R}^n$ (compact or not). The proof is an extension of W\"ormann's proof.
Categories: 12D15, 14P10, 44A60

121. CMB 2000 (vol 43 pp. 312)
Dobbs, David E.
On the Prime Ideals in a Commutative Ring
If $n$ and $m$ are positive integers, necessary and sufficient conditions are given for the existence of a finite commutative ring $R$ with exactly $n$ elements and exactly $m$ prime ideals. Next, assuming the Axiom of Choice, it is proved that if $R$ is a commutative ring and $T$ is a commutative $R$-algebra which is generated by a set $I$, then each chain of prime ideals of $T$ lying over the same prime ideal of $R$ has at most $2^{|I|}$ elements. A polynomial ring example shows that the preceding result is best-possible.
Categories: 13C15, 13B25, 04A10, 14A05, 13M05

122. CMB 2000 (vol 43 pp. 304)
Darmon, Henri; Mestre, Jean-François
Courbes hyperelliptiques à multiplications réelles et une construction de Shih
Let $r$ and $p$ be distinct prime numbers, let $K = \Q(\cos \frac{2\pi}{r})$, and let $\F$ be the residue field of $K$ at a place above $p$.
When the image of $(2 - 2\cos \frac{2\pi}{r})$ in $\F$ is not a square, we describe a geometric construction of a regular extension of $K(t)$ with Galois group $\PSL_2 (\F)$. This extension corresponds to a covering of $\PP^1_{/K}$ of ``signature $(r,p,p)$'' in the sense of [3, sec.~6.3], and its existence is predicted by the rigidity criterion of Belyi, Fried, Thompson and Matzat. Its construction is obtained by twisting the mod $p$ galois representation attached to a family of abelian varieties with real multiplications by $K$ discovered by Tautz, Top and Verberkmoes [6]. These abelian varieties are defined in general over a quadratic field, and are isogenous to their galois conjugate. Our construction generalises a method of Shih [4], [5], which one recovers when $r = 2$ and $r = 3$. Categories:11G30, 14H25 123. CMB 2000 (vol 43 pp. 162) Foth, Philip Moduli Spaces of Polygons and Punctured Riemann Spheres The purpose of this note is to give a simple combinatorial construction of the map from the canonically compactified moduli spaces of punctured complex projective lines to the moduli spaces $\P_r$ of polygons with fixed side lengths in the Euclidean space $\E^3$. The advantage of this construction is that one can obtain a complete set of linear relations among the cycles that generate homology of $\P_r$. We also classify moduli spaces of pentagons. Categories:14D20, 18G55, 14H10 124. CMB 2000 (vol 43 pp. 239) Yu, Gang On the Number of Divisors of the Quadratic Form $m^2+n^2$ For an integer $n$, let $d(n)$ denote the ordinary divisor function. This paper studies the asymptotic behavior of the sum $$ S(x) := \sum_{m\leq x, n\leq x} d(m^2 + n^2). $$ It is proved in the paper that, as $x \to \infty$, $$ S(x) := A_1 x^2 \log x + A_2 x^2 + O_\epsilon (x^{\frac32 + \epsilon}), $$ where $A_1$ and $A_2$ are certain constants and $\epsilon$ is any fixed positive real number. 
The result corrects a false formula given in a paper of Gafurov concerning the same problem, and improves the error $O \bigl( x^{\frac53} (\log x)^9 \bigr)$ claimed by Gafurov. Keywords:divisor, large sieve, exponential sums Categories:11G05, 14H52 125. CMB 2000 (vol 43 pp. 174) Gantz, Christian; Steer, Brian Stable Parabolic Bundles over Elliptic Surfaces and over Riemann Surfaces We show that the use of orbifold bundles enables some questions to be reduced to the case of flat bundles. The identification of moduli spaces of certain parabolic bundles over elliptic surfaces is achieved using this method. Categories:14J27, 32L07, 14H60, 14D20 © Canadian Mathematical Society, 2017 : https://cms.math.ca/
Getting Electricity From Solid Oxide Fuel Cell

Bloom Energy Corporation has announced the availability of its Bloom Energy Server. This patented solid oxide fuel cell (SOFC) technology is aimed at providing a cleaner, more reliable, and more affordable alternative to conventional power sources.

Abstract

Electricity is no longer a luxury; it has become a necessity of today's life. A growing share of global energy needs is expected to be met by renewables in the years ahead. Renewable sources have enormous potential to meet the growing energy requirements of the increasing population of the developing world. Fuel cells are one such source, providing a range of critical benefits that no other single power-generating technology can match. This technical article describes the main characteristics of fuel cells, focusing on the solid oxide fuel cell (SOFC).

Solid Oxide Fuel Cell

High-temperature solid oxide fuel cells (SOFCs) offer a clean, low-emission technology to electrochemically generate electricity at high efficiencies. SOFC technology is a promising power generation option that features high electrical efficiency and low emissions of polluting gases such as CO2, NOx and SOx. SOFCs are suitable for stationary applications as well as for auxiliary power units (APUs) used in vehicles to power electronics. Much development has focused on SOFCs because they can convert a wide variety of fuels with high efficiency.
Introduction

Engineers and environmentalists have long dreamed of obtaining the benefits of clean electric power without pollution-producing engines or heavy batteries. Solar panels and wind farms are familiar images of alternative energy technologies. While they are effective sources of electrical energy, there are problems with the stability of their energy source, for example on a cloudy or windless day. Their applications are also somewhat limited by lack of portability: a windmill is not much help to the power plant of a diesel truck, and a solar panel cannot provide power at night.

In 1962 a revolution in energy research occurred. Scientists at Westinghouse Electric Corporation (now Siemens Westinghouse) demonstrated for the first time the feasibility of extracting electricity from a device they called a "solid electrolyte fuel cell". Since then there has been an intense research and development effort around the alternative energy technology known as fuel cells. Now, with energy issues at the forefront of current events, fuel cell technology is maturing and is on the verge of being ready for large-scale commercial implementation.

Fuel Cell

A fuel cell is an electrochemical device that converts the chemical energy in fuels (such as hydrogen, methane, butane, or even gasoline and diesel) into electrical energy by exploiting the natural tendency of oxygen and hydrogen to react. By controlling how the reaction occurs and directing it through a device, it is possible to harvest the energy given off by the reaction. Highly efficient fuel cells are sought because existing ones are still expensive: greater efficiency means less money spent per unit of energy and a larger market share for hydrogen. SOFCs (solid oxide fuel cells) are a type of fuel cell that uses a solid (not liquid) electrolyte and is much more efficient.
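To make the efficiency claim concrete, the theoretical ceiling for a hydrogen fuel cell can be computed from standard thermodynamic data. The sketch below is illustrative only; the ΔG and ΔH values are approximate standard-state figures for liquid water at 25 °C, not numbers from this article:

```python
# Ideal efficiency and reversible voltage of a hydrogen fuel cell,
# using approximate standard-state values for H2 + 1/2 O2 -> H2O(l) at 25 degC.
DELTA_G = -237.1e3   # Gibbs free energy change, J per mol of H2
DELTA_H = -285.8e3   # enthalpy change (higher heating value), J per mol of H2
FARADAY = 96485.0    # Faraday constant, C per mol of electrons
N_ELECTRONS = 2      # electrons transferred per H2 molecule

# Maximum fraction of the fuel's heating value convertible to electricity.
ideal_efficiency = DELTA_G / DELTA_H                      # about 0.83
# Ideal (open-circuit) cell voltage.
reversible_voltage = -DELTA_G / (N_ELECTRONS * FARADAY)   # about 1.23 V
```

Because a fuel cell is not a heat engine, this roughly 83% ceiling is not Carnot-limited, which is the root of the efficiency advantage mentioned above.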
SOFC technology dominates competing fuel cell technologies because SOFCs can use currently available fossil fuels, thus reducing operating costs. Other fuel cell technologies (e.g. molten carbonate, polymer electrolyte, phosphoric acid and alkali) require hydrogen as their fuel.

Working Principle of SOFC

Figure 1 – Operating characteristic of SOFC

Figure 1 above shows schematically how a solid oxide fuel cell works. The cell is constructed with two porous electrodes which sandwich an electrolyte. Air flows along the cathode (which is therefore also called the "air electrode"). When an oxygen molecule contacts the cathode/electrolyte interface, it catalytically acquires four electrons from the cathode and splits into two oxygen ions. The oxygen ions diffuse into the electrolyte material and migrate to the other side of the cell, where they encounter the anode (also called the "fuel electrode"). At the anode/electrolyte interface the oxygen ions react catalytically with the fuel, giving off water, carbon dioxide, heat and, most importantly for the cycle, two electrons. The electrons transport through the anode to the external circuit and back to the cathode, providing a source of useful electrical energy in the external circuit.

Materials Selection and Processing

Although the operating concept of SOFCs is rather simple, the selection of materials for the individual components presents enormous challenges. Each material must have the electrical properties required to perform its function in the cell. There must be enough chemical and structural stability to endure fabrication and operation at high temperatures. The fuel cell needs to run at high temperatures in order to achieve sufficiently high current densities and power output; operation at up to 1000 °C is possible using the most common electrolyte material, yttria-stabilized zirconia (YSZ).
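For hydrogen fuel, the electrode processes just described can be written as half-reactions (standard electrochemistry, spelled out here for clarity; with hydrocarbon fuels the anode products also include CO2):

```latex
% SOFC half-reactions for hydrogen fuel
\begin{align*}
\text{Cathode (air electrode):} \quad & \mathrm{O_2 + 4\,e^- \longrightarrow 2\,O^{2-}}\\
\text{Anode (fuel electrode):}  \quad & \mathrm{2\,H_2 + 2\,O^{2-} \longrightarrow 2\,H_2O + 4\,e^-}\\
\text{Overall:}                 \quad & \mathrm{2\,H_2 + O_2 \longrightarrow 2\,H_2O}
\end{align*}
```

Note that the electrons released at the anode exactly balance those consumed at the cathode, which is why the external circuit carries a continuous current.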
Reactivity and interdiffusion between the components must be as low as possible. The thermal expansion coefficients of the components must be as close to one another as possible in order to minimize thermal stresses which could lead to cracking and mechanical failure. The air side of the cell must operate in an oxidizing atmosphere and the fuel side must operate in a reducing atmosphere. The temperature and atmosphere requirements drive the materials selection for all the other components. In order for SOFCs to reach their commercial potential, the materials and processing must also be cost-effective. The first successful SOFC used platinum as both the cathode and anode, but fortunately less expensive alternatives are available today. Fuel cells are simple devices, containing no moving parts and only four functional components: cathode, electrolyte, anode and interconnection.

Cathode

The cathode must meet all the above requirements and be porous in order to allow oxygen molecules to reach the electrode/electrolyte interface. In some designs (e.g. tubular) the cathode contributes over 90% of the cell's weight and therefore provides structural support for the cell.

Materials used for Cathode

Today the most commonly used cathode material is lanthanum manganite (LaMnO3), a p-type perovskite. Typically it is doped with alkaline earth or rare earth elements (e.g. Sr, Ce, Pr) to enhance its conductivity. Most often it is doped with strontium and referred to as LSM (La1-xSrxMnO3). The conductivity of these perovskites is all electronic (no ionic conductivity), a desirable feature: the electrons returning from the external circuit flow through the cathode to reduce the oxygen molecules, forcing the oxygen ions through the electrolyte.
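The doping notation La1-xSrxMnO3 can be made concrete with a short calculation. The snippet below is illustrative (standard atomic masses; the helper name is ours, not from the article):

```python
# Molar mass of Sr-doped lanthanum manganite, La(1-x)Sr(x)MnO3.
ATOMIC_MASS = {"La": 138.91, "Sr": 87.62, "Mn": 54.94, "O": 16.00}  # g/mol

def lsm_molar_mass(x):
    """Molar mass (g/mol) of La(1-x)Sr(x)MnO3 for Sr doping fraction x."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("doping fraction x must be between 0 and 1")
    return ((1 - x) * ATOMIC_MASS["La"] + x * ATOMIC_MASS["Sr"]
            + ATOMIC_MASS["Mn"] + 3 * ATOMIC_MASS["O"])

# A commonly used composition is x = 0.2, i.e. La0.8Sr0.2MnO3.
m_lsm = lsm_molar_mass(0.2)
```

Each Sr2+ substituting for La3+ changes the charge balance in the lattice, which is what boosts the electronic (hole) conductivity of the doped perovskite.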
In addition to being compatible with YSZ electrolytes, it has the further advantage of adequate functionality at intermediate fuel cell temperatures (about 700 °C), allowing it to be used with alternative electrolyte compositions. Any reduction in operating temperature reduces operating costs and expands the materials selection, creating an opportunity for additional cost savings. Fabrication of LSM depends on cell design. For example, the tubular cell is constructed by extruding a cathode tube and building the rest of the cell around it, whereas in several planar cell designs under investigation the cathode is the bottom supporting layer, fabricated with tape-casting techniques using nanoscale particles. In both cases, the challenge is to sinter the cathode adequately, often by co-sintering with the other components, while maintaining sufficient interconnected porosity.

Electrolyte

Once the molecular oxygen has been converted to oxygen ions it must migrate through the electrolyte to the fuel side of the cell. In order for such migration to occur, the electrolyte must possess high ionic conductivity and negligible electronic conductivity. It must be fully dense to prevent the reacting gases from short-circuiting through it, and it should also be as thin as possible to minimize resistive losses in the cell. As with the other materials, it must be chemically, thermally, and structurally stable across a wide temperature range. There are several candidate materials: YSZ, doped cerium oxide, and doped bismuth oxide. Of these, the first two are the most promising. Bismuth oxide-based materials have a high oxygen ion conductivity and a lower operating temperature (less than 800 °C), but do not offer enough crystalline stability at high temperature to be broadly useful. YSZ has emerged as the most suitable electrolyte material.
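The "as thin as possible" requirement follows directly from ohmic resistance: for a dense layer, area-specific resistance is thickness divided by ionic conductivity. A minimal sketch, assuming an illustrative YSZ conductivity of about 10 S/m near 1000 °C (the real value depends on temperature and composition):

```python
def area_specific_resistance(thickness_m, conductivity_s_per_m):
    """Area-specific resistance (ohm*m^2) of a dense electrolyte: R*A = L / sigma."""
    return thickness_m / conductivity_s_per_m

SIGMA_YSZ = 10.0  # assumed ionic conductivity of YSZ near 1000 degC, S/m

asr_thick = area_specific_resistance(150e-6, SIGMA_YSZ)  # 150 um self-supported layer
asr_thin = area_specific_resistance(10e-6, SIGMA_YSZ)    # 10 um supported thin film
# The thin film has about 15x lower ohmic loss for the same material.
```

This linear scaling is why thin-film deposition, discussed next, is attractive for lowering electrolyte losses at reduced operating temperatures.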
Yttria serves the dual purpose of stabilizing zirconia in the cubic structure at high temperatures and providing oxygen vacancies, at the rate of one vacancy per mole of dopant. A typical dopant level is 10 mol% yttria. If the conductivity for oxygen ions can remain high even at lower temperatures, the choice of materials for SOFCs will broaden and many existing problems can potentially be solved. Certain processing techniques, such as thin-film deposition, can help with existing materials by:

1. Reducing the distance traveled by oxygen ions, and hence the electrolyte resistance, since resistance is proportional to conductor length;
2. Producing grain structures that are less resistive, such as a columnar grain structure;
3. Controlling the microstructure (nano-crystalline fine grains) to "fine-tune" electrical properties;
4. Building composites with large interfacial areas, as interfaces have been shown to have extraordinary electrical properties.

Cerium oxide has also been considered as a possible electrolyte. Its advantage is that it has high ionic conductivity in air and can operate effectively at much lower temperatures (under 700 °C); this temperature range significantly broadens the choice of materials for the other components, which can be made of much less expensive and more readily available materials. The problem is that this electrolyte is susceptible to reduction on the anode (fuel) side. At low operating temperatures (500–700 °C) grain boundary resistance is a significant impediment to ionic conductivity. Efforts are underway to develop compositions which address these problems.

Anode

The anode (the fuel electrode) must meet most of the same requirements as the cathode for electrical conductivity, thermal expansion compatibility and porosity, and must function in a reducing atmosphere. The reducing conditions combined with the electrical conductivity requirement make metals attractive candidate materials.
Most development has focused on nickel owing to its abundance and affordability. The most common material is a cermet of nickel mixed with the ceramic used for the electrolyte in that particular cell, typically YSZ (yttria-stabilized zirconia); the YSZ component helps suppress grain growth of the nickel. The anode is commonly the thickest and strongest layer in each individual cell, because it has the smallest polarization losses, and is often the layer that provides the mechanical support. The oxidation reaction between the oxygen ions and the hydrogen produces heat as well as water and electricity. If the fuel is a light hydrocarbon, for example methane, another function of the anode is to act as a catalyst for steam reforming the fuel into hydrogen. This provides an additional operational benefit to the fuel cell stack because the reforming reaction is endothermic, which cools the stack internally. The YSZ provides structural support for the separated Ni particles, preventing them from sintering together while matching the thermal expansions. Adhesion of the anode to the electrolyte is also improved. Anodes are applied to the fuel cell through powder technology processes. Either a slurry of Ni is applied over the cell and YSZ is then deposited by electrochemical vapor deposition, or a Ni-YSZ slurry is applied and sintered. More recently NiO-YSZ slurries have been used, the NiO being reduced to particulate Ni in the firing process. Although Ni-YSZ is currently the anode material of choice and the freeze-drying process solves most of the associated problems, nickel still has a disadvantage: it catalyzes the formation of graphite from hydrocarbons. The deposition of graphite residues on the interior surfaces of the anode reduces its usefulness by destroying one of the main advantages of SOFCs, namely their ability to use unreformed fuel sources. Cu-cerium oxide anodes are being studied as a possible alternative.
Copper is an excellent electrical conductor but a poor catalyst for hydrocarbon reactions; cerium oxide is used as the matrix in part because of its high catalytic activity for hydrocarbon oxidation. A composite of the two thus has the advantage of being compatible with cerium oxide electrolyte fuel cells. Initial results using a wide range of hydrocarbon fuels are promising.

Interconnection

The interconnection serves as the electrical contact to the cathode while protecting it from the reducing atmosphere of the anode. It is exposed simultaneously to the reducing environment of the anode and the oxidizing atmosphere of the cathode, so its requirements are the most severe of all cell components and include the following:

1. Fully (100%) electronic electrical conductivity;
2. No porosity (to avoid mixing of fuel and oxygen);
3. Thermal expansion compatibility with the air electrode and the electrolyte;
4. Inertness with respect to the other fuel cell components.

To satisfy these requirements, doped lanthanum chromite is used as the interconnection material. Ca-doped yttrium chromite is also being considered because it has better thermal expansion compatibility, especially in reducing atmospheres. At operating temperatures in the 900–1000 °C range, interconnects made of nickel-base alloys such as Inconel 600 are possible. At or below 800 °C, ferritic steels can be used. At even lower temperatures (below 700 °C), it becomes possible to use stainless steels, which are comparatively inexpensive and readily available.

Types of SOFC

Two possible design configurations for SOFCs have emerged:

1. Planar design (Figure 2)
2. Tubular design (Figure 3)

Figure 2 – Configuration of planar design SOFC

Figure 3 – Configuration of tubular design SOFC

In the planar design, the components are assembled in flat stacks, with air and fuel flowing through channels built into the cathode and anode. In the tubular design, the cell components are constructed in layers around a tubular cathode; air flows through the inside of the tube and fuel flows around the exterior.

Merits

1. High efficiency
2. Fuel adaptability
3. SOFCs are attractive as energy sources because they are clean, reliable, and almost entirely nonpolluting.
4. If the hydrogen used comes from the electrolysis of water, then using fuel cells eliminates greenhouse gases.
5. Because there are no moving parts and the cells are therefore vibration-free, the noise pollution associated with power generation is also eliminated.
6. Using SOFCs in combined heat and power (CHP) systems reduces emissions, moving toward zero-emission power generation.

Demerits

1. The largest disadvantage is the high operating temperature, which results in longer start-up times and mechanical and chemical compatibility issues.
2. Fuelling fuel cells is still a problem, since the production, transportation, distribution and storage of hydrogen are difficult.

Applications

SOFCs are targeted for use in three energy applications: stationary energy sources, transportation, and military applications.

Stationary energy sources

Stationary installations would be the primary or auxiliary power sources for such facilities as homes, office buildings, industrial sites, ports, and military installations. They are well suited for mini-power-grid applications at places like universities and military bases. SOFCs can be positioned on-site, even in remote areas; on-site location makes it possible to match power generation to the electrical demands of the site.
Stationary SOFC power generation is no longer just a hope for the future.

Transportation

In the transportation sector, SOFCs are likely to find applications in both trucks and automobiles. In diesel trucks, they will probably be used as auxiliary power units to run electrical systems like air conditioning and on-board electronics, leading to savings in diesel fuel expenditures and a significant reduction in both diesel exhaust and truck noise.

Military applications

Finally, SOFCs are of high interest to the military because they can be established on-site in remote locations, are quiet, and are non-polluting. Moreover, the use of fuel cells could significantly reduce deployment costs: 70% by weight of the material that the military moves is nothing but fuel. Stationary fuel cells for military applications can provide backup or standby power for special operations and activities and can provide power in remote areas.

SOFC-GT

An SOFC-GT system comprises a solid oxide fuel cell combined with a gas turbine. Further combination of the SOFC-GT with a combined heat and power plant also has the potential to yield even higher thermal efficiencies in some cases. In these plants the SOFC is used as a replacement for the combustor of the gas turbine, and the system can generate electrical power at greater than 45% electrical efficiency. Within the SOFC module the desulfurized fuel is utilized electrochemically and oxidized below the temperature for NOx generation. Therefore NOx and SOx emissions from the SOFC power generation system are near negligible. The byproducts of power generation from hydrocarbon fuels that are released into the environment are CO2 and water vapor. Methods to capture and sequester this CO2 are under development, which would result in a zero-emission power generation system.

Conclusions

Forty years have passed since the first successful demonstration of a solid oxide fuel cell.
Through ingenuity, materials science, extensive research, and commitment to developing alternative energy sources, that seed of an idea has germinated and is about to bloom into a viable, robust energy alternative. Materials development will certainly continue to make SOFCs increasingly affordable, efficient, and reliable. Rapid advances in the technology will change how SOFCs are used in the power generation sector, ultimately helping to bring about zero-emission power generation.

vinod ramireddy: I completed my post graduation in power electronics and graduated in the Electrical and Electronics Engineering stream. Looking for an opportunity or internship in a core sector. My areas of interest are renewables, power plants, and especially traction drives.

6 Comments

1. RAMAKRISHNAN KARUPPIAH, Sep 21, 2019: Good Mr Vinod Reddy. SOFC is the future technology and time is the factor. Recently I attended a user meet at ARSI, Hyderabad and had very interesting discussions and updates. If possible/interested, reach me at 9392148281, and we can even meet if you are in Hyderabad. We are going to venture into SOFC. Please share your contacts. Good luck. K. Ramakrishnan, Vice President, Vijai Electricals Ltd., Hyderabad

2. susan krumdieck, Jun 03, 2019: At what point do we stop participating in creating false hope narratives? One photo of some shiny boxes and a lot of old stuff about "how fuel cells work" even though they don't… and you have created click-bait for the hopeful.
There is a movement in science and engineering to help us reform and be brutally honest about which technologies have potential and which don't. Please everybody evolve and become Transition Engineers.

• vinod ramireddy, Jun 13, 2019: Thanks for your valuable suggestions Susan. You have to know one thing: this was written by me when I was doing my undergraduate degree. Yes, technology will boom every day, every second. If you have any thoughts, you can write an article continuing this so that our engineers will have a clear idea. See, the idea may look like old stuff, but when it comes to reality, it requires a lot of effort and understanding. Thanks for all!

3. Ianj pearson, Jul 29, 2017: Hi, one thing that everyone overlooks is this: if the SOFC system has been on the boil for 40 years, why are we not seeing a usable device for everyday electricity production? Blue Gen is supposed to be the answer but is not even on the table for the plebs. Taken to the extreme, we are very comfortable with the petrol engine working at only 25% efficiency, and that has been so for more than 100 years. Steam engines in various forms produce the most power for electricity production and are the most user-friendly of all devices. When I read that 1,000 deg C is needed to make the SOFC more efficient, I have to realise that the plot has well and truly been lost. You might as well stick a pipe into all the active volcanoes and harness the free energy that has gone to waste since time began. That would be a technology that anyone can understand, and you wouldn't have to drill deep into the Earth's mantle to reach the hot depths that geothermal plants are tapping.

4. Edvard, Sep 03, 2012: Great overview Vinod, thank you.

• vinod ramireddy, Sep 03, 2012: Thanks for your appreciation!
I am thinking that fuel cells also play an important role in eco-friendly generation of power. If research scholars extend their work on fuel cells, we will find better outcomes.
@article {Revuelto-Rey3, author = {Revuelto-Rey, Jaume and Burns, Ted M. and Egea-Guerrero, Juan J. and Murillo-Cabezas, Francisco and Mauermann, Michelle L.}, title = {The evaluation of polyneuropathies: Authors respond}, volume = {1}, number = {1}, pages = {3--4}, year = {2011}, doi = {10.1212/01.CPJ.0000410052.07169.b7}, publisher = {Wolters Kluwer Health, Inc. on behalf of the American Academy of Neurology}, issn = {2163-0402}, URL = {https://cp.neurology.org/content/1/1/3}, eprint = {https://cp.neurology.org/content/1/1/3.full.pdf}, journal = {Neurology: Clinical Practice} }
Streptococcal Disease, Invasive, Group A

Group A streptococcal disease (GAS) is caused by a bacterium called Streptococcus pyogenes, group A. Most often, group A streptococcal infections are mild illnesses such as "strep throat" or impetigo. Sometimes, the bacteria invade the lungs, blood, or spread along the layers of tissue that surround muscle. These infections are called invasive group A streptococcal (iGAS) disease and are very serious, even life-threatening.

Epidemiology

In 2016 and 2017, BC experienced an increase in iGAS cases. A detailed report is available here. More details on iGAS in British Columbia are available in the Annual Summary of Reportable Diseases and the Reportable Disease Dashboard.

Information for Health Professionals

Group A streptococcal disease (GAS) is caused by a bacterium (germ) called Streptococcus pyogenes, group A. People may carry the germ on their skin or in their noses and throats and have no symptoms of illness. Most often, group A streptococcal infections are mild illnesses such as "strep throat" or impetigo. Sometimes, the bacteria invade the lungs (pneumonia), blood (septicemia), or spread along the layers of tissue that surround muscle (called the fascia). These infections are called invasive group A streptococcal (iGAS) disease and are very serious, even life-threatening. Two of the most severe, but least common, forms of iGAS are necrotizing fasciitis and streptococcal toxic shock syndrome. Necrotizing fasciitis, also known as "flesh-eating disease," is a rapidly progressing disease which destroys muscle, fat, and skin tissue. Streptococcal toxic shock syndrome results in a rapid drop in blood pressure and causes organs such as the kidneys, liver, or lungs to stop working. Symptoms of septicemia (blood poisoning) include fever, chills, headache, generally not feeling well, pale skin, lack of energy, rapid breathing, and increased heart rate.
Early symptoms of necrotizing fasciitis include severe pain and swelling, often rapidly getting worse; fever; and redness around a wound. Early symptoms of streptococcal toxic shock syndrome include fever; sudden severe pain, often in an arm or leg; dizziness; confusion; feelings of having "the flu"; and a flat red rash over large areas of the body.

The bacteria are spread from person to person through close personal contact with the nose and throat secretions of an infected person:

• Breathing in air contaminated with streptococcal bacteria when an infected person has coughed, sneezed, or talked
• Kissing, sharing drinking cups, forks, spoons, or cigarettes
• Touching the nose and throat secretions of an infected person
• Touching articles recently contaminated with the nose and throat secretions of an infected person

People with chronic illnesses such as cancer, diabetes, and chronic heart or lung disease, and those who use medications such as steroids, have a higher risk for iGAS. Persons with cuts to the skin, wounds, or chicken pox, the elderly, and adults with a history of alcohol abuse or injection drug use also have a higher risk for disease.

• 10 to 15 people out of 100 will die from their infection
• 25 people out of 100 with necrotizing fasciitis will die
• More than 35 people out of 100 with streptococcal toxic shock syndrome will die

Diagnosis is made by a test of blood, cerebrospinal fluid, or tissue from deep inside a wound. The case is usually hospitalized and is treated with antibiotics. For persons with necrotizing fasciitis, early and aggressive surgery is often needed to remove damaged tissue and stop the spread of the disease. Close contacts of severe cases are also offered antibiotics to prevent them from getting sick. There is no vaccine to prevent group A streptococcal infections. Antibiotics are recommended for certain close contacts of severe cases of iGAS (for example, persons living in the same household).
Wash hands well, especially after coughing and sneezing and before preparing foods or eating. Keep all cuts and wounds clean and watch for possible signs of infection such as redness, swelling, drainage, and pain at the wound site. If there are signs of an infected wound, especially with fever, see a doctor as soon as possible.

Copyright © BC Centre for Disease Control. All Rights Reserved. Copyright © 2019 Provincial Health Services Authority.
What is it about? When stressed, nurses may adopt behaviours which have a negative impact on their health and wellbeing. Learning about Making Every Contact Count may help improve nurses' self-care abilities and their health promotion skills. Why is it important? Health behaviours are shaped by a number of factors, some of which are beyond the control of the individual. These structural influences on health need to be addressed by nurse managers, Trust employers and the NHS. However, there are many things that nurses can do to influence and improve both their own health and the health of the people they serve. Perspectives: I hope this article will be a practical, useful addition to the discussion on how nurses can improve their health promotion communication skills. Anne Mills, Bournemouth University. This page is a summary of: Helping students to self-care and enhance their health-promotion skills, British Journal of Nursing, July 2019, Mark Allen Group, DOI: 10.12968/bjon.2019.28.13.864.
1. Article in Chinese | WPRIM | ID: wpr-928170 ABSTRACT Lonicerae Japonicae Flos, as a common Chinese medicine, has been used for thousands of years in the treatment of inflammation and infectious diseases with definite efficacy. The complex composition of Lonicerae Japonicae Flos results in its extensive pharmacological effects, so assessment of its quality by only a few index components is not comprehensive. Guided by the quality marker (Q-marker) concept, the present study comprehensively analyzed and predicted the quality connotation of Lonicerae Japonicae Flos based on chemical composition and component transfer, phylogenetic relationship, and the effectiveness, measurability, and specificity of the chemical components. Chlorogenic acid, isochlorogenic acids A, B, and C, luteoloside, rutin, sweroside, and secoxyloganin were predicted as candidate Q-markers of Lonicerae Japonicae Flos. Subject(s) Chromatography, High Pressure Liquid , Drugs, Chinese Herbal/chemistry , Flowers/chemistry , Lonicera/chemistry , Phylogeny , Quality Control 2. Chinese Journal of School Health ; (12): 618-621, 2022. Article in Chinese | WPRIM | ID: wpr-924119 ABSTRACT Objective: To learn about the construction and staffing of the school health system in Chinese institutions for disease prevention and control, and to provide basic information for school health system and team capacity building and work development. Methods: An electronic questionnaire was used to collect the setting and staffing of school health departments (including school health centers and departments/rooms) at the provincial, prefecture and county (district) levels in the centers for disease control and prevention.
Statistical analyses were made of the proportion of institutions with a school health department, the number of staff, and staff characteristics such as age, education, major and years of service at the provincial, prefecture and county (district) levels. Results: Among the 3 313 institutions, the proportion with an independent school health department was 10.8%; the proportions at the provincial, prefecture and county (district) levels were 74.2%, 15.0%, and 9.6%, respectively. Among the institutions with a separate department, the average number of staff members was 4.4, while the number of staff was 2.5. The average age of school health workers was 40.4 years, and the proportions of male and female employees were 45.2% and 54.8%. The proportion of personnel who had been engaged in school health work for less than 5 years was as high as 65.1%. The majors of the staff were mainly public health (40.4%); 54.0% of provincial-level staff had a master's degree or above, while 47.8% and 58.7% of staff at the prefecture and county (district) levels, respectively, had a junior college education or below. The proportion of provincial-level personnel with intermediate and senior professional titles was 69.6%, and the proportions of prefecture- and county-level personnel at the junior level and below were 52.2% and 56.2%, respectively. Conclusion: The proportion of independent school health departments within centers for disease control and prevention across China was low. There is a serious shortage of school health personnel, together with problems such as low levels of education and professional titles, especially in county (district) level institutions. It is urgent to strengthen the construction of the school health system of the centers for disease control and prevention in China. 4. Article in Chinese | WPRIM | ID: wpr-878910 ABSTRACT ATP-binding cassette (ABC) transporters are one of the largest protein families in organisms, with important effects in regulating plant growth and development, root morphology, transportation of secondary metabolites and stress resistance. Environmental stress promotes the biosynthesis and accumulation of secondary metabolites, which determine the quality of medicinal plants. Therefore, how to improve the accumulation of secondary metabolites has been a hotspot in the study of medicinal plants. Many studies have shown that ABC transporters are closely related to the transportation and accumulation of secondary metabolites in plants. Recently, with the great development of genomics and transcriptome sequencing technology, the regulatory mechanisms of ABC transporters on secondary metabolites have attracted great attention in medicinal plants. This paper reviews the mechanisms by which different groups of ABC transporters transport secondary metabolites through cell membranes.
This paper provides a key theoretical basis and technical support for studying the mechanisms of ABC transporters in medicinal plants and for promoting the accumulation of secondary metabolites, in order to improve the quality of medicinal plants. Subject(s) ATP-Binding Cassette Transporters/metabolism , Biological Transport , Plant Development , Plants, Medicinal/metabolism , Stress, Physiological 5. Journal of Experimental Hematology ; (6): 1231-1235, 2021. Article in Chinese | WPRIM | ID: wpr-888543 ABSTRACT OBJECTIVE: To evaluate the diagnostic value of peripheral blood cell parameters for early recognition of myelodysplastic syndrome (MDS). METHODS: The clinical and laboratory data of 86 patients with MDS and 72 patients with non-malignant clonal anemia, first diagnosed and treated at the Second Hospital of Hebei Medical University from January 1, 2015 to December 31, 2017, were retrospectively analyzed. The peripheral blood cell parameters of the two groups were analyzed; receiver operator characteristic (ROC) curves were generated from the statistically significant parameters, and a binary logistic model was built to calculate and compare the area under the ROC curve (AUC) of combined and individual indicators, together with sensitivity, specificity, positive and negative likelihood ratios, and diagnostic accuracy. RESULTS: Compared with the non-malignant clonal anemia group, the white blood cell count (WBC), neutrophil percentage (NE%), eosinophil percentage (E%), absolute eosinophil count (E#), platelet count (PLT) and plateletcrit (PCT%) were significantly reduced in the MDS patients, while the lymphocyte percentage (LY%), basophil percentage (B%), and platelet distribution width (PDW) were significantly increased.
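The ROC/AUC evaluation described in this record can be illustrated with a small, self-contained sketch. The data below are invented for demonstration, not taken from the study; the AUC is computed with the rank (Mann-Whitney) formulation.

```python
# Illustrative sketch only: the record above evaluated the diagnostic value
# of blood-cell parameters (PDW, B%, LY%) with ROC curves and AUC. The data
# below are invented for demonstration, not taken from the study.

def auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) formulation: the probability
    that a random positive case scores above a random negative case."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0      # positive ranked above negative
            elif p == n:
                wins += 0.5      # tie counts half
    return wins / (len(pos) * len(neg))

# hypothetical PDW-like values: 1 = MDS, 0 = non-malignant clonal anemia
labels = [1, 1, 1, 0, 0, 0]
scores = [17.2, 14.0, 15.1, 14.9, 13.5, 12.8]
print(round(auc(labels, scores), 3))  # -> 0.889
```

An AUC of 0.5 means no discrimination and 1.0 perfect separation, which is the scale on which indicators such as PDW would be compared.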
Several ROC curves were established for the above indicators. CONCLUSION: PDW, B% and LY% among the peripheral blood cell parameters have certain diagnostic value for early recognition of MDS. Subject(s) Humans , Leukocyte Count , Lymphocytes , Myelodysplastic Syndromes/diagnosis , Platelet Count , Retrospective Studies 6. Article in Chinese | WPRIM | ID: wpr-846387 ABSTRACT Objective: Puerarin nanoemulsion lyophilized powder (Pue-NE-LP) was prepared using the natural surfactant glycyrrhizic acid as a stabilizer and evaluated in vitro. Methods: Pue-NE was prepared by high-speed shearing and high-pressure homogenization, further combined with freeze-drying to prepare Pue-NE-LP. Taking the average particle size and polydispersity index (PDI) as evaluation indexes, the optimal prescription and process parameters were screened out through a single-factor test. The prepared Pue-NE-LP was characterized by its physicochemical properties and in vitro dissolution. Results: The average particle size and PDI of Pue-NE-LP prepared with 5% glyceryl caprylate as the oil phase, 2.0 mg/mL glycyrrhizic acid as the stabilizer, and 7% glucose as the lyoprotectant were (215.1 ± 0.7) nm and 0.133 ± 0.024, respectively. Scanning electron microscopy showed that Pue-NE-LP particles were small, irregular and uniform in size; X-ray diffraction showed that Pue-NE-LP existed in an amorphous state. In vitro release results showed that the dissolution rate of Pue-NE-LP was significantly higher than that of the physical mixture. Conclusion: Pue-NE-LP prepared with the natural surfactant glycyrrhizic acid as a stabilizer is not only simple to prepare but can also significantly improve the solubility and bioavailability of puerarin. It provides a reference for the further development of Pue-NE formulations. 7.
Article in Chinese | WPRIM | ID: wpr-846355 ABSTRACT Objective: Puerarin nanoemulsion (Pue-NE) was prepared with glycyrrhizic acid as a natural stabilizer, and its in vitro release characteristics were investigated. Methods: Data processing was performed using particle size and polydispersity index (PDI) as independent variables and the overall desirability (OD) as the evaluation index. The central composite design-response surface method was used to optimize the prescription, and the physicochemical properties and release characteristics of Pue-NE prepared with the optimal prescription were investigated. Results: The best prescription for Pue-NE is puerarin at a concentration of 5.0 mg/mL, glycyrrhizic acid at a concentration of 1.75 mg/mL, and caprylic glyceride in an amount of 3.5 mL. The average particle size of the nanoemulsion is (184.5 ± 0.8) nm, the PDI is 0.088 ± 0.002, the zeta potential is (10.56 ± 0.35) mV, the conductivity is (98.3 ± 0.4) μS/cm, the pH is 6.750 ± 0.005, the solubility is (4.970 ± 0.008) mg/mL, the drug loading is (99.4 ± 0.2)%, and the turbidity is (24.3 ± 1.0) cm-1 (n = 3). It was identified as an O/W emulsion by the dyeing method. TEM scanning showed that the droplets were spherical and uniform in size, and stability testing showed that Pue-NE has good storage stability at 25 ℃. In vitro release results showed that Pue-NE released most completely in pH 6.8 phosphate buffer within 24 hours. Conclusion: The preparation of Pue-NE with glycyrrhizic acid as a natural stabilizer is not only simple and convenient but can also effectively replace traditional synthetic chemical stabilizers and improve the solubility of puerarin. 8. Journal of Integrative Medicine ; (12): 229-241, 2020. Article in English | WPRIM | ID: wpr-829101 ABSTRACT OBJECTIVE: Lung-toxin Dispelling Formula No.
1, referred to as Respiratory Detox Shot (RDS), was developed based on a classical prescription of traditional Chinese medicine (TCM) and the theoretical understanding of herbal properties within TCM. Therapeutic benefits of using RDS for both disease control and prevention, in the effort to contain the coronavirus disease 2019 (COVID-19), have been shown. However, the biochemically active constituents of RDS and their mechanisms of action are still unclear. The goal of the present study is to clarify the material foundation and action mechanism of RDS. METHODS: To conduct an analysis of RDS, an integrative analytical platform was constructed, including target prediction, protein-protein interaction (PPI) network, and cluster analysis; further, the hub genes involved in the disease-related pathways were identified, and their corresponding compounds were used for in vitro validation of molecular docking predictions. The presence of these validated compounds was also measured in samples of the RDS formula to quantify the abundance of the biochemically active constituents. In our network pharmacological study, a total of 26 bioinformatic programs and databases were used, and six networks, covering the entire Zang-fu viscera, were constructed to comprehensively analyze the intricate connections among the compounds-targets-disease pathways-meridians of RDS. RESULTS: For all 1071 known chemical constituents of the nine ingredients in RDS, identified from established TCM databases, 157 passed drug-likeness screening and led to 339 predicted targets in the constituent-target network. Forty-two hub genes with core regulatory effects were extracted from the PPI network, and 134 compounds and 29 crucial disease pathways were implicated in the target-constituent-disease network. Twelve disease pathways were attributed to the Lung-Large Intestine meridians, with six and five attributed to the Kidney-Urinary Bladder and Stomach-Spleen meridians, respectively.
One-hundred and eighteen candidate constituents showed a high binding affinity with SARS-coronavirus-2 3-chymotrypsin-like protease (3CL), as indicated by molecular docking using computational pattern recognition. The in vitro activity of 22 chemical constituents of RDS was validated using the 3CL inhibition assay. Finally, using liquid chromatography mass spectrometry in data-independent analysis mode, the presence of seven out of these 22 constituents was confirmed and validated in an aqueous decoction of RDS, using reference standards in both non-targeted and targeted approaches. CONCLUSION: RDS acts primarily in the Lung-Large Intestine, Kidney-Urinary Bladder and Stomach-Spleen meridians, with other Zang-fu viscera strategically covered by all nine ingredients. In the context of TCM meridian theory, the multiple components and targets of RDS contribute to RDS's dual effects of health-strengthening and pathogen-eliminating. This results in general therapeutic effects for early COVID-19 control and prevention. Subject(s) Antiviral Agents , Chemistry , Therapeutic Uses , Betacoronavirus , Chemistry , Coronavirus Infections , Drug Therapy , Virology , Cysteine Endopeptidases , Chemistry , Drugs, Chinese Herbal , Chemistry , Therapeutic Uses , Humans , Mass Spectrometry , Medicine, Chinese Traditional , Molecular Docking Simulation , Pandemics , Pneumonia, Viral , Drug Therapy , Virology , Protein Interaction Maps , Viral Nonstructural Proteins , Chemistry 9. Article in Chinese | WPRIM | ID: wpr-824946 ABSTRACT Objective: By observing the effects of electroacupuncture (EA) on the apoptosis of conjunctival cells of rabbits with dry eye syndrome (DES) and the expressions of apoptosis-related proteins Caspase-3, Fas and Bcl-2, to discuss the mechanism of EA in the treatment of DES from the perspective of cell apoptosis. Methods: Male New Zealand rabbits were randomly divided into a normal group (NG), a model group (MG), an EA group (EAG) and a sham EA group (SEAG).
DES rabbit model was developed by eye drop of 0.1% benzalkonium chloride. The rabbit tear secretion and tear film break-up time (BUT) were measured; terminal deoxynucleotidyl transferase- mediated dUTP nick end labeling (TUNEL) assay was used to detect the apoptosis of conjunctival cells; the expressions of Caspase-3, Fas and Bcl-2 proteins in conjunctival cells were detected by immunohistochemistry. Results: Compared with the NG, the rabbit tear secretion decreased and the BUT was shortened in the MG (both P<0.01); compared with the MG and the SEAG, the rabbit tear secretion increased and the BUT was prolonged in the EAG (all P<0.05). Compared with the NG, the apoptosis of rabbit conjunctival cells increased (P<0.01), the expressions of Caspase-3 and Fas proteins increased (both P<0.05), and the expression of Bcl-2 protein decreased (P<0.01) in the MG; compared with the MG and the SEAG, the apoptosis of rabbit conjunctival cells decreased (both P<0.01), the expressions of Caspase-3 and Fas proteins decreased (all P<0.05), and the expression of Bcl-2 protein increased (both P<0.01) in the EAG. Conclusion: EA can inhibit the apoptosis of rabbit conjunctival cells, down-regulate the expressions of apoptosis-related proteins Caspase-3 and Fas, and up-regulate the expression of Bcl-2 protein, which may be one of the mechanisms of EA in treatment of DES. 10. Chinese Pharmaceutical Journal ; (24): 1425-1431, 2019. Article in Chinese | WPRIM | ID: wpr-857925 ABSTRACT OBJECTIVE: To establish a quality evaluation method of Ningxinbao capsules based on HPLC fingerprint, quantitative analysis of multi-components and chemometrics. METHODS: The fingerprint of Ningxinbao capsules was established by HPLC. Six common peaks were identified as uracil, hypoxanthine, uridine, adenine, guanosine, and adenosine by comparison with reference substances, and their contents in samples were simultaneously determined. 
The chemometrics methods such as hierarchical clustering heat map analysis and principal component analysis were used to evaluate the quality of Ningxinbao capsules from different manufacturers based on the results of fingerprint and content determination. RESULTS: The similarity of samples from 27 different manufacturers ranged from 0.656 to 0.997. Hierarchical clustering heat map analysis and principal component analysis showed that the samples from 27 different manufacturers were clearly divided into two categories. The main influencing factors were fingerprint similarity and the contents of uridine, guanosine and total nucleosides. Different sources of raw materials were the main reasons for the quality differences between samples from different manufacturers. The purity of strain in raw materials was the key factor affecting the quality of Ningxinbao capsules. CONCLUSION: The method is accurate and reliable, and it can be used to control and comprehensively evaluate the quality of Ningxinbao capsules. 11. 
Article in Chinese | WPRIM | ID: wpr-737248 ABSTRACT Increasing evidence has revealed that maternal cytomegalovirus (CMV) infection may be associated with neurodevelopmental disorders in offspring. A potential relationship between placental inflammation and CMV-related autism has been reported in clinical observation. Meanwhile, abnormal expression of Toll-like receptor 2 (TLR2) and TLR4 in the placenta of patients with chorioamnionitis has been observed in multiple studies. IL-6 and IL-10 are two important maternal inflammatory mediators involved in neurodevelopmental disorders. To investigate whether murine CMV (MCMV) infection causes alterations in placental IL-6/10 and TLR2/4 levels, we analyzed the dynamic changes in gene expression of TLR2/4 and IL-6/10 in placentas following acute MCMV infection. A mouse model of acute MCMV infection during pregnancy was created, and pre-pregnancy MCMV-infected, lipopolysaccharide (LPS)-treated and uninfected mice were used as controls. At E13.5, E14.5 and E18.5, placentas and fetal brains were harvested and the mRNA expression levels of placental TLR2/4 and IL-6/10 were analyzed. The results showed that after acute MCMV infection, the expression levels of placental TLR2/4 and IL-6 were elevated at E13.5, accompanied by obvious placental inflammation and reduction of placenta and fetal brain weights. However, LPS at 50 μg/kg could decrease IL-6 expression at E13.5 and E14.5. This suggests that acute MCMV infection during pregnancy could up-regulate the gene expression of TLR2/4 in placental trophoblasts and activate them to produce more of the proinflammatory cytokine IL-6. High-dose LPS stimulation (50 μg/kg) during pregnancy can lead to down-regulation of IL-6 levels in the late stage. Imbalance of IL-6 expression in the placenta might be associated with neurodevelopmental disorders in progeny. 13. Chinese Circulation Journal ; (12): 545-549, 2018.
Article in Chinese | WPRIM | ID: wpr-703893 ABSTRACT Objectives: To investigate the relationship between changes in blood lipids and the progression of non-target lesions after percutaneous coronary intervention (PCI). Methods: Consecutive patients hospitalized in Beijing Anzhen Hospital of Capital Medical University from January 2013 to December 2016 for acute coronary syndrome (ACS), with coronary angiography evidence of multivessel disease in which the single-vessel (target) lesion had stenosis >75% and was treated with PCI while the remaining non-target lesions had stenosis <50%, and who were re-hospitalized for chest pain within 6 to 24 months, were eligible for this study. A total of 3 071 patients met the inclusion criteria and were enrolled. According to quantitative analysis of 3-dimensional reconstruction coronary angiography (QCA), patients were divided into groups A and B: group A (n=1 541) comprised patients with progressive non-target lesions (stenosis from <50% to >75%), and group B (n=1 530) patients with progression-free non-target lesions (stenosis <75%). Blood lipid levels at the two hospitalizations, blood lipid changes and the lipid control rate, where the LDL-C control rate = (patients with LDL-C <1.8 mmol/L + patients with an LDL-C decline >50%) / total number of patients, were compared between the two groups. Results: The LDL-C level (group A: (2.68 ± 0.88) mmol/L vs group B: (2.72 ± 0.92) mmol/L, P=0.509) and the LDL-C control rate (group A: 14% vs group B: 13.1%, P=0.476) at the first hospitalization were similar between the two groups. At the second hospitalization, the level of LDL-C was significantly lower in group B than in group A ((1.91 ± 0.64) mmol/L vs (2.17 ± 0.76) mmol/L, P<0.001). The LDL-C control rate was significantly higher in group B than in group A (43.66% vs 35.37%, P<0.001).
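The LDL-C control-rate formula quoted in the Methods above can be sketched in a few lines. The patient values below are invented for illustration; the formula is read here as counting a patient once if either criterion is met, which is an assumption, since the record does not say how overlap between the two criteria is handled.

```python
# Sketch of the LDL-C control-rate formula quoted above:
# control rate = (patients with LDL-C < 1.8 mmol/L + patients whose LDL-C
# fell by more than 50%) / total patients. All values are invented, and a
# patient meeting both criteria is counted once (one possible reading).

def ldl_control_rate(baseline, follow_up):
    controlled = 0
    for b, f in zip(baseline, follow_up):
        reached_target = f < 1.8            # absolute LDL-C target reached
        halved = (b - f) / b > 0.5          # > 50% reduction from baseline
        if reached_target or halved:
            controlled += 1
    return controlled / len(baseline)

baseline  = [3.6, 2.9, 4.1, 2.2]   # mmol/L at first hospitalization
follow_up = [1.6, 2.6, 1.9, 2.0]   # mmol/L at second hospitalization
print(ldl_control_rate(baseline, follow_up))  # patients 1 and 3 qualify -> 0.5
```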
Moreover, the reductions in total cholesterol and triglycerides were more significant in group B ((0.85±0.81) mmol/L and (0.24±1.58) mmol/L) than in group A ((0.58±1.01) mmol/L and (0.17±1.37) mmol/L, both P<0.001) at the second hospitalization. Multivariate logistic regression analysis showed that age, diabetes, hypertension, smoking, family history of coronary heart disease, hyperlipidemia and non-target lesions were not associated with progression of non-target lesions; the LDL-C level at the second hospitalization (OR=1.686, 95%CI: 1.508-1.885; P<0.001) and regular statin use after PCI (OR=0.275, 95%CI: 0.230-0.328; P<0.001) were associated with progression of non-target lesions. Conclusions: Our results indicate that poor lipid control post PCI is one of the reasons for the progression of non-target lesions. 14. Article in Chinese | WPRIM | ID: wpr-313044 ABSTRACT OBJECTIVE: To observe the regulatory effects of psoralen, oleanolic acid, and stilbene glucoside, three active components of psoralea fruit, glossy privet fruit and tuber fleeceflower root respectively, on Aβ25-35-induced self-renewal and neuron-like differentiation of neural stem cells (NSCs). METHODS: Embryonic NSCs were isolated in vitro and cultured from Kunming mice at 14 days of pregnancy, and randomly divided into the control group, the Aβ25-35 group, the Aβ25-35 + psoralen group, the Aβ25-35 + oleanolic acid group, and the Aβ25-35 + stilbene glucoside group. The intervention concentration of Aβ25-35 was 25 µmol/L, and that of the three active components of Chinese medicine was 10(-7) mol/L. The effect of the three active components on the proliferation of NSCs was observed by the counting method. The protein expression of Tubulin was observed by Western blot and immunofluorescence, and the ratio of Tubulin+/DAPI was calculated.
RESULTS: Compared with the control group, the spherical morphology of NSCs was destroyed in the Aβ25-35 group, and the NSC count, the expression of Tubulin protein, and the ratio of Tubulin+/DAPI all decreased (P<0.01, P<0.05). Compared with the Aβ25-35 group, the NSC count, the expression of Tubulin protein, and the ratio of Tubulin+/DAPI all increased in the three Chinese-medicine-treated groups (P<0.01, P<0.05). CONCLUSIONS: 25 µmol/L Aβ25-35 could inhibit self-renewal and neuron-like differentiation of NSCs, but psoralen, oleanolic acid, and stilbene glucoside could promote self-renewal and neuron-like differentiation of NSCs. Subject(s) Amyloid beta-Peptides , Physiology , Animals , Cell Differentiation , Cell Proliferation , Cells, Cultured , Drugs, Chinese Herbal , Pharmacology , Embryo, Mammalian , Female , Mice , Neural Stem Cells , Neurogenesis , Neurons , Cell Biology , Peptide Fragments , Physiology , Pregnancy 15. Journal of Medical Biomechanics ; (6): E397-E402, 2013. Article in Chinese | WPRIM | ID: wpr-804277 ABSTRACT Objective To investigate the biomechanical properties of the contact interface between the residual limb and prosthetic socket of a transfemoral amputee during walking using the three-dimensional (3D) finite element analysis method, so as to provide references for establishing a complete system of measurement, design and evaluation for prosthetic sockets. Methods Based on CT images, two 3D geometric models of a transfemoral amputee, including the femur, soft tissues and transfemoral socket, were established, with soft tissues defined as non-linear hyper-elastic and linear elastic material, respectively. The behavior of the interface between the transfemoral residual limb and prosthetic socket was defined as nonlinear contact.
Dynamic loads on the knee joint were applied to the distal ends of both the hyper-elastic model and the linear elastic model to simulate loading on the residual limb-prosthetic socket system during the heel-strike, mid-stance and toe-off phases of a gait cycle, respectively. The stress distributions on the interface between the transfemoral residual limb and prosthetic socket were calculated to compare and analyze the effects of the different mechanical properties (i.e. hyper-elasticity and linear elasticity) of the femoral soft tissue on the biomechanical behavior of the interface. Results For both the hyper-elastic model and the linear elastic model, the peak contact pressures were located on the distal end of the residual femur during the different gait phases. The peak contact pressure on the interface of the hyper-elastic model during the heel-strike, mid-stance and toe-off phases was 55.80, 47.63 and 50.44 kPa, respectively, while that on the linear elastic model was more than twice as high, being 149.86, 118.55 and 139.68 kPa, respectively. Simulation of the longitudinal and circumferential shear stress distributions at the limb-socket interface showed that stress was higher at the distal end of the soft tissue during the different gait phases. From heel strike to toe off, some pressure was transferred from the rear edge to the front edge of the socket. Conclusions The pressure and shear stress distributions on the contact interface between the transfemoral residual limb and prosthetic socket differed across gait phases, so the relevant mechanical properties should be considered in socket design. 16. Chinese Journal of Surgery ; (12): 518-521, 2013. Article in Chinese | WPRIM | ID: wpr-301256 ABSTRACT OBJECTIVE: To study the relationships between serum ferritin and bone metabolism in patients with hip fragility fractures. METHODS: This cross-sectional study included 76 postmenopausal women with hip fracture from February 2011 to June 2012.
The mean age of the women was (73 ± 10) years (range, 55-93 years) and the mean duration of menstruation was (22 ± 10) years (range, 5-50 years). Serum concentrations of ferritin, transferrin, alkaline phosphatase (ALP), amino-terminal extension peptide of type I collagen (P1NP) and C-terminal telopeptides of type I collagen (β-CTX) were measured, and femoral and lumbar bone mineral density was measured by dual-energy X-ray absorptiometry. Bone metabolism was compared between the normal and elevated ferritin groups with the t-test; Pearson linear correlation, partial correlation and multiple regression analyses examined associations between iron- and bone-related markers. RESULTS: Serum ferritin concentration rose to (230 ± 146) µg/L and transferrin concentration fell to (1.89 ± 0.33) g/L. P1NP concentration rose to (61 ± 32) ng/L while serum ALP and β-CTX concentrations were in the normal range. T-scores for bone mineral density in the femoral neck (-2.0 ± 1.1) and lumbar spine (-2.1 ± 1.2) were below the normal range (-1.0-1.0). The subjects were divided into two groups according to serum ferritin concentration: a normal group (serum ferritin ≤ 150 µg/L, n=25) and an elevated group (serum ferritin > 150 µg/L, n=51). Patients in the elevated group had lower bone mineral density in the femoral neck and lumbar spine than the normal group (t=3.13, 2.89, P<0.01), and higher P1NP and β-CTX concentrations (t=-2.38, -3.59, P<0.05). In partial correlation analysis adjusted for confounders, serum ferritin concentration was correlated negatively with bone mineral density in both the femoral neck and lumbar spine (r=-0.335, -0.295, P<0.05), and positively with P1NP and β-CTX (r=0.467, 0.414, P<0.05), but not correlated with ALP (r=0.188, P>0.05).
Transferrin concentration tended to be correlated positively with bone mineral density in both femoral neck and lumbar spine (r = 0.444, 0.262, P < 0.05) and negatively with ALP, P1NP and β-CTX (r = -0.326, -0.285, -0.278, P < 0.05).</p><p><b>CONCLUSIONS</b>Iron overload has a high prevalence in postmenopausal women with fragility fracture. Increased iron stores, which might lead to bone loss and lower bone mineral density by enhancing the activity of bone turnover, could be an independent factor affecting bone metabolism in postmenopausal women.</p> Subject(s) Aged , Aged, 80 and over , Bone Density , Bone Remodeling , Collagen Type I , Blood , Cross-Sectional Studies , Female , Hip Fractures , Metabolism , Humans , Iron Overload , Iron-Binding Proteins , Metabolism , Middle Aged , Osteoporosis, Postmenopausal , Metabolism , Postmenopause , Retrospective Studies 17. Chinese Journal of Hematology ; (12): 16-20, 2013. Article in Chinese | WPRIM | ID: wpr-323458 ABSTRACT <p><b>OBJECTIVE</b>To screen potential protein biomarkers of minimal residual disease (MRD) in acute promyelocytic leukemia (APL) by comparing differentially expressed serum proteins between APL patients at diagnosis and after complete remission (CR) and healthy controls, and to establish and verify a diagnostic model.</p><p><b>METHODS</b>Serum proteins from 36 cases of primary APL, 29 cases of APL during complete remission and 32 healthy controls were purified by magnetic beads and then analyzed by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). The spectra were analyzed statistically using FlexAnalysis(TM) and ClinProt(TM) software.</p><p><b>RESULTS</b>Two prediction models, primary APL/healthy control and primary APL/APL CR, were developed. Thirty-four statistically significant peptide peaks were obtained with m/z values ranging from 1000 to 10 000 (P < 0.001) in the primary APL/healthy control model.
Seven statistically significant peptide peaks were obtained in the primary APL/APL CR model (P < 0.001). By comparing the protein profiles of the two models, three peptides with m/z 4642, 7764 and 9289 were identified as candidate protein biomarkers of APL MRD. A diagnostic pattern for APL CR using m/z 4642 and 9289 was established. Blind validation yielded correct classification of 6 out of 8 cases.</p><p><b>CONCLUSIONS</b>MALDI-TOF MS analysis of APL patients' serum proteins can be used as a promising dynamic method for MRD detection, and the two peptides with m/z 4642 and 9289 may be better biomarkers.</p> Subject(s) Adolescent , Adult , Aged , Case-Control Studies , Child , Humans , Leukemia, Promyelocytic, Acute , Classification , Diagnosis , Male , Middle Aged , Neoplasm, Residual , Classification , Diagnosis , Prognosis , Spectrometry, Mass, Matrix-Assisted Laser Desorption-Ionization , Methods , Young Adult 18. Article in Chinese | WPRIM | ID: wpr-359322 ABSTRACT <p><b>OBJECTIVE</b>To explore the rules of clinical application of Shenmai Injection (SI).</p><p><b>METHODS</b>Data sets on SI were retrieved from the CBM database, covering literature from Jan. 1980 to May 2012. Rules of Chinese medical patterns, diseases, symptoms, Chinese patent medicines (CPM), and Western medicine (WM) were mined out by a data slicing algorithm and presented in frequency tables and a two-dimensional network.</p><p><b>RESULTS</b>In total, 3,159 articles were included. Results showed that SI was most frequently correlated with stasis syndrome and deficiency syndrome. Heart failure, arrhythmia, myocarditis, myocardial infarction, and shock were core diseases treated by SI. Symptoms such as angina pectoris, fatigue, and chest tightness/pain were mainly relieved by SI. For CPM, SI was most commonly used with Compound Danshen Injection, Astragalus Injection, and so on.
As for WM, SI was most commonly used with nitroglycerin, fructose, captopril, and so on.</p><p><b>CONCLUSIONS</b>The syndrome types and mining results for SI were consistent with its package insert. Stasis syndrome was the potential Chinese medical pattern of SI. Heart failure, arrhythmia, and myocardial infarction were potential diseases treated by SI. For CPM, SI was most commonly used with Danshen Injection, Compound Danshen Injection, and so on. And for WM, SI was most commonly used with nitroglycerin, fructose, captopril, and so on.</p> Subject(s) Data Mining , Databases, Factual , Drug Combinations , Drugs, Chinese Herbal , Therapeutic Uses , Humans , Medicine, Chinese Traditional , Methods 19. Journal of Medical Biomechanics ; (6): E502-E507, 2011. Article in Chinese | WPRIM | ID: wpr-804120 ABSTRACT Objective To investigate the biomechanical characteristics of the human pelvis-femur complex under lateral pelvic impacts during sideways falls using the three-dimensional (3D) finite element (FE) method. Methods Based on the model database of China Mechanical Virtual Human, a 3D FE model of the pelvis-femur-soft tissue complex was created, including cortical bone, cancellous bone and soft tissue capsule. A rigid plane model was also constructed for ground simulation and constrained in all degrees of freedom. An average hip lateral impact velocity of 2 m/s was applied to the model and the time for simulation analysis was set at 20 ms. The stress and strain distributions on the pelvis-femur complex were obtained by 3D FE calculation and analysis. Results On the contact surface, the peak impact load reached 7 656 N at 13 ms, while the maximum Von Mises stress on the soft tissue was 2.64 MPa. Simultaneously, the peak Von Mises stress of 142.64 MPa on the cortical bone occurred in the region of the pubic symphysis, which was close to the yield stress of the cancellous bone. The Von Mises stress level was higher in the region of the femur neck and greater trochanter.
At 13 ms, the peak Von Mises stress on the cortical bone of the femur neck was 76.49 MPa and that on the cancellous bone was 8.44 MPa with the peak compressive principal strain being 0.94%. The peak Von Mises stress on the cancellous bone of greater trochanter was 8.50 MPa, while the peak compressive principal strain was 0.93%. Conclusions Bone fractures of the pelvis-femur complex tend to occur in the region of the femur neck, greater trochanter and pubic symphysis under deceleration impacts during sideways falls. 20. Chinese Journal of Burns ; (6): 207-211, 2010. Article in Chinese | WPRIM | ID: wpr-305602 ABSTRACT <p><b>OBJECTIVE</b>To study the effect of calcium on the activity and protein expression of integrin beta1 promoter in human immortal keratinocyte colony HaCaT cell and cell migration.</p><p><b>METHODS</b>(1) HaCaT cells were cultured in vitro (12-slot plate) and divided into 5 groups according to the random number table, with 18 slots in each group: reporter plasmid pGL3 promoter (positive control group, PC), pGL3 empty vector (negative control group, NC), pGL3-1756 bp (total length promoter group, TL), pGL3-1442 bp (distal promoter group, D), and pGL3-261 bp (proximal promoter group, P) was respectively used to transfect HaCaT cells in non-serum RPMI 1640 culture medium with 0.00, 0.03, 0.09, 0.30, 0.80, or 1.20 mmol/L calcium (3 slots in each group with each concentration). Luciferase activity was detected with dual-luciferase reporter assay system 24 hours after transfection. (2) HaCaT cells steadily transfected with small interfering RNA-integrin beta1 vector (steadily transfected in brief) constructed in our laboratory were normally cultured and divided into 6 parts according to the random number table. And then they were treated with former 6 different concentrations of calcium, with 3 samples for each concentration. Expression level of integrin beta1 protein was determined with Western blot. 
(3) Normal and steadily transfected HaCaT cells were cultured in 6-slot plate, 18 slots for each kind of cells. They were cultured with former 6 kinds of calcium culture media (divided according to the random number table, with 3 slots of cells for each concentration) for 12 hours after scratch test. Cell migration rate was observed and determined. (4) Data were processed with one-way analysis of variance and independent samples t test.</p><p><b>RESULTS</b>(1) The luciferase activity of cells in TL group increased from 0.16+/-0.09 to 0.39+/-0.09 and 0.35+/-0.05 (with t value respectively 3.143, 3.140, P values all below 0.05) as calcium concentration increasing from 0.00 mmol/L to 0.09 and 0.30 mmol/L, and it decreased as calcium concentration increased to 0.80 and 1.20 mmol/L. The change pattern of luciferase activity of cells along with calcium concentration in D group was similar to that in TL group, but its activity (0.56+/-0.32, 0.64+/-0.06) at the concentration of 0.09, 0.30 mmol/L was respectively higher than that in TL group (with t value respectively 0.887, 6.122, P values all below 0.05). There was no obvious influence of calcium in either concentration on the luciferase activity of cells in P group. (2) The expression amount of integrin beta1 of steadily transfected HaCaT cells cultured with 0.03, 0.09, 0.30, 0.80, 1.20 mmol/L calcium (0.58+/-0.09, 1.40+/-0.29, 1.41+/-0.09, 0.99+/-0.10, 1.16+/-0.15) were all increased as compared with that cultured with 0.00 mmol/L calcium (0.53+/-0.10, with t value respectively 0.687, 4.880, 11.210, 5.578, 6.199, P values all below 0.05). (3) Migration speed of normal HaCaT cells cultured with 0.09, 0.30 mmol/L calcium increased obviously as compared with that cultured with 0.00 mmol/L calcium, and it slowed down when cultured with 0.80, 1.20 mmol/L calcium. 
There was no obvious difference in migration rate among steadily transfected HaCaT cells treated with different concentrations of calcium.</p><p><b>CONCLUSIONS</b>The distal promoter region of integrin beta1 plays a vital role in regulating integrin beta1 transcription in human epidermal cells. Calcium regulates the activity and protein expression of the integrin beta1 promoter, as well as cell migration.</p> Subject(s) Calcium , Pharmacology , Cell Line , Cell Movement , Epidermis , Cell Biology , Metabolism , Humans , Integrin beta1 , Metabolism , Promoter Regions, Genetic , Transfection
```diff
--- old/src/share/classes/sun/applet/AppletPanel.java	2013-11-18 10:54:17.000000000 +0400
+++ new/src/share/classes/sun/applet/AppletPanel.java	2013-11-18 10:54:17.000000000 +0400
@@ -794,18 +794,13 @@
                 doInit = true;
             } else {
                 // serName is not null;
-                InputStream is = (InputStream)
-                    java.security.AccessController.doPrivileged(
-                        new java.security.PrivilegedAction() {
-                            public Object run() {
-                                return loader.getResourceAsStream(serName);
-                            }
-                        });
-                ObjectInputStream ois =
-                    new AppletObjectInputStream(is, loader);
-                Object serObject = ois.readObject();
-                applet = (Applet) serObject;
-                doInit = false; // skip over the first init
+                try (InputStream is = AccessController.doPrivileged(
+                        (PrivilegedAction)() -> loader.getResourceAsStream(serName));
+                     ObjectInputStream ois = new AppletObjectInputStream(is, loader)) {
+
+                    applet = (Applet) ois.readObject();
+                    doInit = false; // skip over the first init
+                }
             }

         // Determine the JDK level that the applet targets.
@@ -1239,20 +1234,13 @@
         // append .class
         final String resourceName = name + ".class";

-        InputStream is = null;
         byte[] classHeader = new byte[8];
-        try {
-            is = (InputStream) java.security.AccessController.doPrivileged(
-                    new java.security.PrivilegedAction() {
-                        public Object run() {
-                            return loader.getResourceAsStream(resourceName);
-                        }
-                    });
+        try (InputStream is = AccessController.doPrivileged(
+                (PrivilegedAction) () -> loader.getResourceAsStream(resourceName))) {

             // Read the first 8 bytes of the class file
             int byteRead = is.read(classHeader, 0, 8);
-            is.close();

             // return if the header is not read in entirely
             // for some reasons.
```
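Both hunks apply the same refactoring: resource cleanup moves from manual `close()` calls into a try-with-resources header. The pattern can be exercised in isolation; the `HeaderReader` class below is a hypothetical stand-in (not part of `AppletPanel` or the JDK sources) that mirrors the 8-byte class-header read from the second hunk.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class HeaderReader {

    // Mirrors the patched shape: the stream is declared in the
    // try-with-resources header, so it is closed on every exit path
    // and the explicit is.close() from the old code becomes unnecessary.
    public static byte[] readHeader(InputStream source) throws IOException {
        byte[] header = new byte[8];
        try (InputStream is = source) {
            int byteRead = is.read(header, 0, 8);
            if (byteRead < 8) {
                return null; // header was not read in entirely
            }
        }
        return header;
    }

    public static void main(String[] args) throws IOException {
        byte[] full = readHeader(new ByteArrayInputStream(new byte[]{1, 2, 3, 4, 5, 6, 7, 8}));
        byte[] partial = readHeader(new ByteArrayInputStream(new byte[]{1, 2, 3}));
        System.out.println(full.length);      // 8
        System.out.println(partial == null);  // true
    }
}
```

The caller never has to remember `close()`: even if `read` throws, the stream is closed before the exception propagates, which is exactly what the old `try`/manual-`close` version failed to guarantee.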
Carbon nanotube conductive composite and preparation method thereof

A technology of carbon nanotubes and composites, applied in the manufacture of cables/conductors, conductive materials dispersed in non-conductive inorganic materials, circuits, etc. It addresses the difficulty of preparing conductive paste with a high carbon nanotube content and poor storage stability, achieving good storage stability, good performance and a good dispersion effect.

Active Publication Date: 2020-07-17
Assignee: 内蒙古骏成新能源科技有限公司
10 Cites, 0 Cited by

Problems solved by technology

[0005] The present invention addresses the poor storage stability of carbon nanotube slurries obtained by chemically or physically modifying the carbon nanotube surface, and the difficulty of preparing conductive paste with a high carbon nanotube content.

Abstract

The invention discloses a carbon nanotube conductive composite and a preparation method thereof. The carbon nanotube conductive composite comprises, by mass percentage, at least 1% of carbon nanotubes, 0.1%-1.0% of a pi-pi action control agent, a dispersant and a dispersion medium, wherein the pi-pi action control agent is a nitrogen-containing heterocyclic organic matter. The preparation method adopts a kinetic approach exploiting the pi-pi interaction among carbon nanotubes: by controlling the pi-pi action among the carbon nanotubes in the system, the conductive composite can form a reversible gel system while maintaining a high carbon nanotube content, so that it has good dispersion, stable storage performance and good use performance as a conductive paste.
Application Domain: cell electrodes; non-conductive material with dispersed conductive material (+2)
Technology Topic: chemistry; electrically conductive (+5)
Examples: experimental program (4); comparison scheme (3)

Example Embodiment

[0036] Example 1
[0037] 1) Dissolve 1 part of polyvinylpyrrolidone and 0.5 part of pyrazine in 94.5 parts of N-methylpyrrolidone, then add 4 parts of carbon nanotubes (outer diameter 8 nm~12 nm, specific surface area 265 m²/g) and stir to obtain a pre-dispersion;
[0038] 2) Add the pre-dispersion to the sand mill for rough grinding for 1 hour; the linear speed of rough grinding is 7 m/s;
[0039] 3) After rough grinding, fine grinding is performed for 3 hours; the linear speed of fine grinding is 9 m/s, the fineness of the slurry composite is ≤10 μm and the viscosity is 1698 mPa·s.
[0040] Test the performance of the prepared slurry composite:
[0041] 1. Observe the appearance and viscosity changes of the prepared slurry composite within 360 days. The test results are shown in Table 1, Figure 1, and Figure 2.
[0042] 2. The volume resistivity of a lithium iron phosphate positive pole piece containing 1% carbon nanotubes: at the end of sanding, take the prepared slurry composite and mix it according to lithium iron phosphate : carbon nanotubes : PVDF = 95.5 : 1.0 : 3.5. After 1 h of ball milling, the positive electrode slurry was obtained; the positive electrode slurry was coated on a PET film with an automatic film coating machine, and the dried pole piece was made into a 15 mm diameter disc with a smooth and uniform surface using a punching machine.
The needle tester measures the volume resistivity of the pole piece; subsequently, the conductive composite prepared above is sampled at regular time intervals to measure the volume resistivity of the pole piece according to the above method. The test results are shown in Figure 3.

Example Embodiment

[0043] Example 2
[0044] 1) Dissolve 0.8 parts of polyvinylpyrrolidone and 0.12 parts of pyrazine in 98.08 parts of N-methylpyrrolidone, then add 1 part of carbon nanotubes (outer diameter 4 nm~6 nm, specific surface area 496 m²/g) and stir to obtain a pre-dispersion;
[0045] 2) Add the pre-dispersion to the sand mill for rough grinding for 1 hour; the linear speed of rough grinding is 6 m/s;
[0046] 3) After rough grinding, fine grinding is performed for 3 h; the linear speed of fine grinding is 9.5 m/s, the fineness of the slurry composite is ≤10 μm and the viscosity is 4294 mPa·s.
[0047] Refer to the method in Example 1 to test the properties of the prepared slurry composite. The test results are shown in Table 1, Figure 2, and Figure 3.

Example Embodiment

[0048] Example 3
[0049] 1) Dissolve 1 part of polyvinylpyrrolidone in 95.5 parts of N-methylpyrrolidone, then add 3 parts of carbon nanotubes (outer diameter 6 nm~10 nm, specific surface area 327 m²/g) and stir to obtain a pre-dispersion;
[0050] 2) Add the pre-dispersion and 0.5 parts of pyrimidine to the sand mill for coarse grinding for 1.5 hours; the linear speed of coarse grinding is 6 m/s;
[0051] 3) After rough grinding, fine grinding is carried out for 3.5 h; the linear speed of fine grinding is 9.5 m/s, the fineness of the slurry composite is ≤10 μm and the viscosity is 2672 mPa·s.
[0052] Refer to the method in Example 1 to test the properties of the prepared slurry composite. The test results are shown in Table 1, Figure 2, and Figure 3.
PUM
- Diameter: 1.0 ~ 20.0 nm
- Specific surface area: ≥ 150.0 m²/g
- Outer diameter: 8.0 ~ 12.0 nm
DEV Community

Unit testing Kafka producer using MockProducer

Dejan Maric • Originally published at codingharbour.com

Sometimes your Kafka producer code is doing things that need to be properly validated, and of course, we developers resort to writing a test. If the functionality we want to test is nicely encapsulated, we can do that using a unit test. Kafka helps us with that by providing a mock implementation of the Producer<> interface called, you guessed it, MockProducer.

Preparation for the test

The TransactionProcessor class below is our class under test. It has a process(Transaction) method that receives a Transaction object, which in our example only contains userId and amount properties. Depending on the amount, the processor decides which topic to write the object to. If the amount is 100.000 or above, it will use the transactions_high_prio topic. Otherwise, it will write the transaction to the transactions_regular_prio topic.
```java
public class TransactionProcessor {

    public static final double HIGH_PRIORITY_THRESHOLD = 100.000;

    private final Producer<String, String> kafkaProducer;
    private final String highPrioTopic;
    private final String regularPrioTopic;
    private final Gson gson = new Gson();

    public TransactionProcessor(Producer<String, String> kafkaProducer,
                                String highPrioTopic, String regularPrioTopic) {
        this.kafkaProducer = kafkaProducer;
        this.highPrioTopic = highPrioTopic;
        this.regularPrioTopic = regularPrioTopic;
    }

    public void process(Transaction transaction) {
        String selectedTopic = regularPrioTopic;
        if (transaction.getAmount() >= HIGH_PRIORITY_THRESHOLD) {
            selectedTopic = highPrioTopic;
        }

        String transactionJson = gson.toJson(transaction);
        ProducerRecord<String, String> record =
                new ProducerRecord<>(selectedTopic, transaction.getUserId(), transactionJson);
        kafkaProducer.send(record);
    }
}
```

And the Transaction class looks like this:

```java
public class Transaction {
    private String userId;
    private double amount;
    //removed for brevity
}
```

An important thing to notice here is that TransactionProcessor uses the Producer interface, not the implementation (which is the KafkaProducer class). This will make it possible to unit test our adapter using the MockProducer.

MockProducer in action

Ok, now onto the test class. TransactionProcessorTest creates an instance of the MockProducer that we will provide to the TransactionProcessor.

```java
class TransactionProcessorTest {

    private static final String HIGH_PRIO_TOPIC = "transactions_high_prio";
    private static final String REGULAR_PRIO_TOPIC = "transactions_regular_prio";

    MockProducer<String, String> mockProducer =
            new MockProducer<>(true, new StringSerializer(), new StringSerializer());
```

The MockProducer constructor takes a couple of parameters, among them the key and value serializers, in our case StringSerializer(s).
The first parameter, autocomplete, is a boolean that tells MockProducer to automatically complete all requests immediately. In regular testing, you want to set this parameter to true so that messages are immediately 'sent'. If you set it to false, you will need to explicitly call the completeNext() and errorNext(RuntimeException) methods after calling the send() method. You would want to do this to e.g. test the error handling in your producer (by providing the exception you want to handle as the parameter to the errorNext method).

After we've created the MockProducer, we create the instance of the class we wish to test.

```java
TransactionProcessor processor =
        new TransactionProcessor(mockProducer, HIGH_PRIO_TOPIC, REGULAR_PRIO_TOPIC);
```

Now it's time to test whether the selection of topics based on amount is correct. We will create two Transaction objects, the first one with a low amount and the second one with an amount higher than our threshold (which is 100.000).
```java
@Test
public void testPrioritySelection() {
    Double lowAmount = 50.2d;
    Double highAmount = 250000d;

    Transaction regularPrioTransaction = new Transaction("user1", lowAmount);
    processor.process(regularPrioTransaction);

    Transaction highPrioTransaction = new Transaction("user2", highAmount);
    processor.process(highPrioTransaction);

    assertThat(mockProducer.history()).hasSize(2);

    ProducerRecord<String, String> regTransactionRecord = mockProducer.history().get(0);
    assertThat(regTransactionRecord.value()).contains(lowAmount.toString());
    assertThat(regTransactionRecord.topic()).isEqualTo(REGULAR_PRIO_TOPIC);

    ProducerRecord<String, String> highTransactionRecord = mockProducer.history().get(1);
    assertThat(highTransactionRecord.value()).contains(highAmount.toString());
    assertThat(highTransactionRecord.topic()).isEqualTo(HIGH_PRIO_TOPIC);
}
```

After calling the processor.process(…) method twice, we want to validate that there are two records sent to Kafka. For that, we use the MockProducer#history() method, which returns the list of records that the producer received to send to Kafka. We fetch each record from the history to ensure it is 'sent' to the proper topic.

Code on Github

All code examples from this blog post are available on Coding Harbour's GitHub.

Would you like to learn more about Kafka? I have created a Kafka mini-course that you can get absolutely free. Sign up for it over at Coding Harbour.

Photo credit: @paulschnuerle
Brown's hot air engine patents

Felix Brown and his brother Adolphus were practical inventors of machinery who brought many improvements to existing technologies; for example, they succeeded in building an oscillating steam engine. In 1877, Felix Brown patented a much improved caloric engine that pushed the practical size limit from 6 HP to 14 HP. It was his sole known air engine, but it was quite famous at the time.

Brown's air engine patents
1877 — Brown patent #186,535, Caloric Engine
Working for People with Sight Loss

Diabetic Retinopathy

Diabetic retinopathy is a common complication of diabetes which affects the small blood vessels in the lining at the back of the eye. This lining is called the retina. The retina helps to change what you see into messages that travel along the sight nerve to the brain. A healthy retina is necessary for good eyesight. Diabetic retinopathy can cause the blood vessels in the retina to leak or become blocked and damage your sight. In the early stages, diabetic retinopathy will not affect the sight, but if the changes get worse, eventually the sight will be affected.

(Figure: diagram of the eye showing the lens, pupil, retina, macula and sight nerve)

The categories of retinopathy are:
• Background retinopathy — occurs in the early stages; damage is limited to tiny bulges (microaneurysms) in the blood vessel walls. Although these can leak blood and fluid, they do not usually affect vision.
• Pre-proliferative diabetic retinopathy — changes are detected in the retina that do not require treatment but need to be monitored closely, as there is a risk that they may progress and affect the eyesight. A referral will be made to an Ophthalmology Clinic. It is important that you attend this appointment.
• Proliferative diabetic retinopathy — fragile new blood vessels form on the surface of the retina over time. These abnormal vessels can bleed or develop scar tissue, causing severe loss of sight.
• Diabetic macular oedema — leaky blood vessels affect the part of the retina called the macula. If fluid leaks from these vessels and affects the centre of the macula, the sight will be affected. This is the more common eye change.

Both proliferative diabetic retinopathy and diabetic macular oedema can be treated and managed if they are detected early enough.
If they are left untreated, sight problems will develop.

What causes diabetic retinopathy?

When someone has diabetes, over time the blood vessels in the retina become thicker and the blood flowing in them slows down. In the early stages, diabetic retinopathy will not affect the sight, but if the changes get worse, eventually the sight will be affected.

Who is at risk of developing diabetic retinopathy?

Anybody with diabetes, either Type 1 or Type 2, is at risk of developing diabetic retinopathy. The longer you have had diabetes, the more likely you are to develop diabetic retinopathy.

Ways to minimise your risk of diabetic retinopathy
• Control your blood sugar and blood pressure
• Take your medication as prescribed
• Attend your free diabetic retinopathy eye screening appointments

Source: https://www.diabeticretinascreen.ie/diabetic-retinopathy.8.html

Listen to the podcast on Diabetic Retinopathy with Dr David Keegan
Assessing the nature of lipid raft membranes

P.S. Niemelä, S.T.T. Ollila, M.T. Hyvönen, M.E.J. Karttunen, I. Vattulainen

Research output: Contribution to journal › Article › Academic › peer-review. 250 Citations (Scopus), 137 Downloads (Pure)

Abstract

The paradigm of biological membranes has recently gone through a major update. Instead of being fluid and homogeneous, recent studies suggest that membranes are characterized by transient domains with varying fluidity. In particular, a number of experimental studies have revealed the existence of highly ordered lateral domains rich in sphingomyelin and cholesterol (CHOL). These domains, called functional lipid rafts, have been suggested to take part in a variety of dynamic cellular processes such as membrane trafficking, signal transduction, and regulation of the activity of membrane proteins. However, despite the proposed importance of these domains, their properties, and even the precise nature of the lipid phases, have remained open issues mainly because the associated short time and length scales have posed a major challenge to experiments. In this work, we employ extensive atom-scale simulations to elucidate the properties of ternary raft mixtures with CHOL, palmitoylsphingomyelin (PSM), and palmitoyloleoylphosphatidylcholine. We simulate two bilayers of 1,024 lipids for 100 ns in the liquid-ordered phase and one system of the same size in the liquid-disordered phase. The studies provide evidence that the presence of PSM and CHOL in raft-like membranes leads to strongly packed and rigid bilayers. We also find that the simulated raft bilayers are characterized by nanoscale lateral heterogeneity, though the slow lateral diffusion renders the interpretation of the observed lateral heterogeneity more difficult. The findings reveal aspects of the role of favored (specific) lipid–lipid interactions within rafts and clarify the prominent role of CHOL in altering the properties of the membrane locally in its neighborhood.
Also, we show that the presence of PSM and CHOL in rafts leads to intriguing lateral pressure profiles that are distinctly different from corresponding profiles in nonraft-like membranes. The results propose that the functioning of certain classes of membrane proteins is regulated by changes in the lateral pressure profile, which can be altered by a change in lipid content.

Original language: English. Pages (from-to): e34, 1/9. Journal: PLoS Computational Biology, Volume 3, Issue 2. Publication status: Published - 2007.
RubyGuides

Ruby Internals: Exploring the Memory Layout of Ruby Objects

Would you like a quick tour of Ruby internals? Then you're in for a treat, because we're going to explore together how a Ruby object is laid out in memory & how you can manipulate internal data structures to do some cool stuff. Fasten your seatbelts & get ready for a journey into the depths of the Ruby interpreter!

Memory Layout of Arrays

When you create an array, Ruby has to back that up with some system memory & a little bit of metadata (like the array size). Since the main Ruby interpreter (MRI) is written in C there are no objects. But there is something else: structs. A struct in C helps you store related data together, and this is used a lot in MRI's source code to represent things like Arrays, Strings & other kinds of objects. By looking at one of those structs we can infer the memory layout of an object. So let's look at the struct for Array, called RArray:

```c
struct RArray {
    struct RBasic basic;
    union {
        struct {
            long len;
            union {
                long capa;
                VALUE shared;
            } aux;
            const VALUE *ptr;
        } heap;
        const VALUE ary[RARRAY_EMBED_LEN_MAX];
    } as;
};
```

I know this can look a bit intimidating if you are not familiar with C, but don't worry! I will help you break this down into easy to digest bits 🙂

The first thing we have is this RBasic thing, which is also a struct:

```c
struct RBasic {
    VALUE flags;
    VALUE klass;
}
```

This is something that most Ruby objects have & it contains a few things like the class for this object & some binary flags that say if this object is frozen or not (and other things like the 'tainted' attribute). In other words: RBasic contains the generic metadata for the object. After that we have another struct, which contains the length of the array (len). The union expression is saying that aux can be either capa (for capacity) or shared. This is mostly an optimization thing, which is explained in more detail in this excellent post by Pat Shaughnessy.
In terms of memory allocation, the compiler will use the biggest type inside a union. Then we have ptr, which contains the memory address where the actual Array data is stored.

Here's a picture of what this looks like (every white/grey box is 4 bytes in a 32-bit system):

(figure: array memory layout)

You can see the memory size of an object using the ObjectSpace module:

require 'objspace'

ObjectSpace.memsize_of([]) # 20

Now we are ready to have some fun!

Fiddle: A Fun Experiment

RBasic is exactly 8 bytes in a 32-bit system & 16 bytes in a 64-bit system. Knowing this we can use the Fiddle module to access the raw memory bytes for an object & change them for some fun experiments. For example: We can change the frozen status by toggling a single bit. This is in essence what the freeze method does, but notice how there is no unfreeze method. Let's implement it just for fun!

First, let's require the Fiddle module (part of the Ruby Standard Library) & create a frozen string.

require 'fiddle'

str = 'water'.freeze
str.frozen? # true

Next: We need the memory address for our string, which can be obtained like this.

memory_address = str.object_id * 2

Finally: We flip the exact bit that Ruby checks to see if an object is frozen. We also check to see if this worked by calling the frozen? method.

Fiddle::Pointer.new(memory_address)[1] ^= 8

str.frozen? # false

Notice that the index [1] refers to the 2nd byte of the flags value (which is composed of 4 bytes in total). Then we use ^=, which is the "XOR" (Exclusive OR) operator, to flip that bit. We do this because different bits inside flags have different meanings & we don't want to change something unrelated. If you have read my ruby tricks post you may have seen this before, but now you know how it works 🙂

Another thing you can try is to change the length of the array & print the array. You will see how the array becomes shorter!
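Before trying the riskier Fiddle tricks, the same ObjectSpace module from earlier gives a safe way to watch these structures grow. The exact byte counts below are an assumption — they vary across Ruby versions and platforms — so only the relative sizes matter:

```ruby
require 'objspace'

# An empty array fits in the embedded (ary) representation; 1000 elements
# force the heap representation with a separately allocated buffer (ptr),
# so its shallow size is much larger.
empty_size = ObjectSpace.memsize_of([])
large_size = ObjectSpace.memsize_of(Array.new(1000, 0))

puts "empty array: #{empty_size} bytes"
puts "1000-element array: #{large_size} bytes"
```

On 64-bit MRI the empty array is typically around 40 bytes, while the large one also pays for its heap buffer.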
You can even change the class to make an Array think it's a String!

Conclusion

You have learned a bit about how Ruby works under the hood: how memory for Ruby objects is laid out & how you can use the Fiddle module to play around with that. You should probably not use Fiddle like this in a real app, but it's fun to experiment with.

Don't forget to share this post so more people can see it 🙂
Dipole spectrum structure of nonresonant nonpertubative driven two-level atoms

Title: Dipole spectrum structure of nonresonant nonpertubative driven two-level atoms
Publication Type: Journal Article
Year of Publication: 2010
Authors: Picón, A; Roso, L; Mompart, J; Varela, O; Ahufinger, V; Corbalán, R; Plaja, L
Journal: Phys. Rev. A
Volume: 81
Pagination: 033420
Date Published: Mar
Abstract: We analyze the dipole spectrum of a two-level atom excited by a nonresonant intense monochromatic field under the electric dipole approximation and beyond the rotating wave approximation. We show that the apparently complex spectral structure can be completely described by two families: harmonic frequencies of the driving field and field-induced nonlinear fluorescence. Our formulation of the problem provides quantitative laws for the most relevant spectral features: harmonic ratios and phases, nonperturbative Stark shift, and frequency limits of the harmonic plateau. In particular, we demonstrate the locking of the harmonic phases at the wings of the plateau opening the possibility of ultrashort pulse generation through harmonic filtering.
URL: https://link.aps.org/doi/10.1103/PhysRevA.81.033420
DOI: 10.1103/PhysRevA.81.033420
Toby Inkster avatar Toby Inkster committed 1a10b56 saner detection; fallback if PERL_MM_USE_DEFAULT Comments (0) Files changed (11) package Ask; our $AUTHORITY = 'cpan:TOBYINK'; - our $VERSION = '0.004'; + our $VERSION = '0.005'; use Carp qw(croak); - use File::Which qw(which); use Moo::Role qw(); use Module::Runtime qw(use_module use_package_optimistically); - use namespace::sweep 0.006; + + use namespace::clean; + + use Module::Pluggable ( + search_path => 'Ask', + except => [qw/ Ask::API Ask::Functions /], + inner => 0, + require => 0, + ); sub import { shift; my $class = shift; my %args = @_==1 ? %{$_[0]} : @_; - my $instance_class = $class->_detect_class_with_traits(\%args) - or croak "Could not establish an appropriate Ask backend"; + my @implementations = + reverse sort { $a->quality <=> $b->quality } + grep { use_package_optimistically($_)->DOES('Ask::API') } + $class->plugins; - return $instance_class->new(\%args); - } - - my %_classes; - sub _detect_class_with_traits { - my ($class, $args) = @_; - my @traits = @{ delete($args->{traits}) // [] }; - - my $instance_class = $class->_detect_class($args); - return unless defined $instance_class; - return $instance_class unless @traits; - - # Cache class - my $key = join q(|), $instance_class, sort @traits; - $_classes{$key} //= "Moo::Role"->create_class_with_roles( - $instance_class, - @traits, - ); - } - - sub _detect_class { - my ($class, $args) = @_; - - if (exists $ENV{PERL_ASK_BACKEND}) { - return use_package_optimistically($ENV{PERL_ASK_BACKEND}); + if ($ENV{AUTOMATED_TESTING} or $ENV{PERL_MM_USE_DEFAULT} or not @implementations) { + @implementations = use_module('Ask::Fallback'); + } + elsif (exists $ENV{PERL_ASK_BACKEND}) { + @implementations = use_module($ENV{PERL_ASK_BACKEND}); } - if (exists $args->{class}) { - return use_package_optimistically(delete $args->{class}); + my @traits = @{ delete($args{traits}) // [] }; + for my $i (@implementations) { + my $k = @traits ? 
"Moo::Role"->create_class_with_roles($i, @traits) : $i; + my $self = eval { $k->new(\%args) } or next; + return $self if $self->is_usable; } - if (-t STDIN and -t STDOUT) { - return use_module("Ask::STDIO"); - } - - if (eval { require Ask::Gtk }) { - return 'Ask::Gtk'; - } - - if (eval { require Ask::Tk }) { - return 'Ask::Tk'; - } - - if (eval { require Ask::Wx }) { - return 'Ask::Wx'; - } - - if (my $zenity = which('zenity')) { - $args->{zenity} //= $zenity; - return use_module("Ask::Zenity"); - } - - return; + croak "No usable backend for Ask"; } } them to enter. The C<hide_text> argument can be set to true to I<hint> that the text entered should not be displayed on screen (e.g. password input). +The C<default> argument can be used to supply a default return value if the +user cannot be asked for some reason (e.g. running on an unattended terminal). + =item C<< question(text => $text, %arguments) >> Ask the user to answer a affirmative/negative question (i.e. OK/cancel, can be used to set the label for the affirmative button; the C<cancel_label> argument for the negative button. +The C<default> argument can be used to supply a default return value if the +user cannot be asked for some reason (e.g. running on an unattended terminal). + =item C<< file_selection(%arguments) >> Ask the user for a file name. Returns the file name. No checks are made to selected (they are returned as a list); the C<directory> argument can be used to I<hint> that you want a directory. +The C<default> argument can be used to supply a default return value if the +user cannot be asked for some reason (e.g. running on an unattended terminal). +If C<multiple> is true, then this must be an arrayref. + =item C<< single_choice(text => $text, choices => \@choices) >> Asks the user to select a single option from many choices. ], ); +The C<default> argument can be used to supply a default return value if the +user cannot be asked for some reason (e.g. running on an unattended terminal). 
+ =item C<< multiple_choice(text => $text, choices => \@choices) >> Asks the user to select zero or more options from many choices. ], ); +The C<default> argument can be used to supply a default return value if the +user cannot be asked for some reason (e.g. running on an unattended terminal). +It must be an arrayref. + =back If you wish to create your own implementation of the Ask API, please outcome of C<< Ask->detect >>. Indeed, it trumps all other factors. If set, it should be a full class name. +If either of the C<AUTOMATED_TESTING> or C<PERL_MM_USE_DEFAULT> environment +variables are set to true, the C<< Ask::Fallback >> backend will automatically +be used. + =head1 BUGS Please report any bugs to package Ask::API; our $AUTHORITY = 'cpan:TOBYINK'; - our $VERSION = '0.004'; + our $VERSION = '0.005'; use Moo::Role; requires 'entry'; # get a string of text requires 'info'; # display a string of text + sub is_usable { + my ($self) = @_; + return 1; + } + + sub quality { + return 50; + } + sub warning { my ($self, %o) = @_; $o{text} = "WARNING: $o{text}"; methods, but they're not espcially good, so you probably want to implement most of those too. -There is not currently any mechanism to "register" your implementation -with L<Ask> so that C<< Ask->detect >> knows about it. +If you name your package C<< Ask::Something >> then C<< Ask->detect >> +will find it (via [mod://Module::Pluggable]). + +Methods used during detection are C<is_usable> which is called as an +object method, and should return a boolean indicating its usability (for +example, if STDIN is not connected to a terminal, Ask::STDIO returns +false), and C<quality> which is called as a class method and should return +a number between 0 and 100, 100 being a high-quality backend, 0 being +low-quality. + +C<< Ask->detect >> returns the highest quality module that it can load, +instantiate and claims to be usable. 
=head1 BUGS lib/Ask/Callback.pm package Ask::Callback; our $AUTHORITY = 'cpan:TOBYINK'; - our $VERSION = '0.004'; + our $VERSION = '0.005'; use Moo; use namespace::sweep; has input_callback => (is => 'ro', required => 1); has output_callback => (is => 'ro', required => 1); - + + sub is_usable { + my ($self) = @_; + ref $self->output_callback eq 'CODE' + and ref $self->input_callback eq 'CODE'; + } + + sub quality { + return 0; + } + sub entry { my ($self) = @_; return $self->input_callback->(); lib/Ask/Fallback.pm +use 5.010; +use strict; +use warnings; + +{ + package Ask::Fallback; + + our $AUTHORITY = 'cpan:TOBYINK'; + our $VERSION = '0.005'; + + use Moo; + use Carp qw(croak); + use namespace::sweep; + + with 'Ask::API'; + + sub quality { + return 1; + } + + sub info { + my ($self, %o) = @_; + say STDERR $o{text}; + } + + sub warning { + my ($self, %o) = @_; + say STDERR "WARNING: $o{text}"; + } + + sub error { + my ($self, %o) = @_; + say STDERR "ERROR: $o{text}"; + } + + sub question + { + my ($self, %o) = @_; + exists $o{default} and return $o{default}; + croak "question (Ask::Fallback) with no default"; + } + + sub entry + { + my ($self, %o) = @_; + exists $o{default} and return $o{default}; + croak "entry (Ask::Fallback) with no default"; + } + + sub file_selection + { + my ($self, %o) = @_; + $o{multiple} and exists $o{default} and return @{$o{default}}; + exists $o{default} and return $o{default}; + croak "file_selection (Ask::Fallback) with no default"; + } + + sub single_choice + { + my ($self, %o) = @_; + exists $o{default} and return $o{default}; + croak "single_choice (Ask::Fallback) with no default"; + } + + sub multiple_choice + { + my ($self, %o) = @_; + exists $o{default} and return @{$o{default}}; + croak "multiple_choice (Ask::Fallback) with no default"; + } +} + +1; + +__END__ + +=head1 NAME + +Ask::Fallback - backend for unattended scripts + +=head1 SYNOPSIS + + my $ask = Ask::Fallback->new; + + $ask->info(text => "I'm Charles Xavier"); + if 
($ask->question( + text => "Would you like some breakfast?", + default => !!1, + )) { + ... + } + +=head1 DESCRIPTION + +This backend prints all output to STDERR; returns defaults for +C<question>, C<file_selection>, etc, and croaks if no defaults are +available. + +=head1 BUGS + +Please report any bugs to +L<http://rt.cpan.org/Dist/Display.html?Queue=Ask>. + +=head1 SEE ALSO + +L<Ask>. + +=head1 AUTHOR + +Toby Inkster E<lt>[email protected]<gt>. + +=head1 COPYRIGHT AND LICENCE + +This software is copyright (c) 2012-2013 by Toby Inkster. + +This is free software; you can redistribute it and/or modify it under +the same terms as the Perl 5 programming language system itself. + +=head1 DISCLAIMER OF WARRANTIES + +THIS PACKAGE IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED +WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF +MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE. + lib/Ask/Functions.pm package Ask::Functions; our $AUTHORITY = 'cpan:TOBYINK'; - our $VERSION = '0.004'; + our $VERSION = '0.005'; our $ASK; package Ask::Gtk; our $AUTHORITY = 'cpan:TOBYINK'; - our $VERSION = '0.004'; + our $VERSION = '0.005'; use Moo; use Gtk2 -init; package Ask::STDIO; our $AUTHORITY = 'cpan:TOBYINK'; - our $VERSION = '0.004'; + our $VERSION = '0.005'; use Moo; use namespace::sweep; with 'Ask::API'; + sub is_usable { + my ($self) = @_; + -t STDIN and -t STDOUT; + } + + sub quality { + (-t STDIN and -t STDOUT) ? 
80 : 20; + } + sub entry { my ($self, %o) = @_; $self->info(text => $o{text}) if exists $o{text}; return $line; } - + sub info { my ($self, %o) = @_; say STDOUT $o{text}; } - + sub warning { my ($self, %o) = @_; say STDERR "WARNING: $o{text}"; } - + sub error { my ($self, %o) = @_; say STDERR "ERROR: $o{text}"; package Ask::Tk; our $AUTHORITY = 'cpan:TOBYINK'; - our $VERSION = '0.004'; + our $VERSION = '0.005'; use Moo; use Tk; with 'Ask::API'; + sub quality { + return 30; + } + sub info { my ($self, %o) = @_; package Ask::Wx; our $AUTHORITY = 'cpan:TOBYINK'; - our $VERSION = '0.004'; + our $VERSION = '0.005'; use Moo; use Wx; with 'Ask::API'; + sub quality { + return 10; # raise to 50 once multi file selection implemented + } + sub info { my ($self, %o) = @_; lib/Ask/Zenity.pm package Ask::Zenity; our $AUTHORITY = 'cpan:TOBYINK'; - our $VERSION = '0.004'; + our $VERSION = '0.005'; use Moo; + use File::Which qw(which); use System::Command; use namespace::sweep; has zenity_path => ( is => 'ro', isa => sub { die "$_[0] not executable" unless -x $_[0] }, - default => sub { '/usr/bin/zenity' }, + default => sub { which('zenity') || '/usr/bin/zenity' }, ); has system_wrapper => ( with 'Ask::API'; + sub quality { + return 40; + } + sub _optionize { my $opt = shift; $opt =~ s/_/-/g; meta/changes.pret item "Correctly destroy no longer used Tk::MainWindow objects created in info, warning, question and file_selection methods."^^Bugfix; ]. +`Ask 0.005 cpan:TOBYINK` + issued 2013-01-16; + changeset [ + item "New (internal) API method: quality"^^Addition; + item "New (internal) API method: is_usable"^^Addition; + item "Saner implementation of Ask->detect, using Module::Pluggable."^^Change; + item "Ask::Fallback backend, which kicks in if $ENV{AUTOMATED_TESTING} or $ENV{PERL_MM_USE_DEFAULT}."^^Addition; + ].
util-linux/term-utils/wall.c 450 lines 11 KiB /* * Copyright (c) 1988, 1990, 1993 * The Regents of the University of California. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by the University of * California, Berkeley and its contributors. * 4. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE.
* * Modified Sun Mar 12 10:34:34 1995, [email protected], for Linux */ /* * This program is not related to David Wall, whose Stanford Ph.D. thesis * is entitled "Mechanisms for Broadcast and Selective Broadcast". * * 1999-02-22 Arkadiusz Miśkiewicz <[email protected]> * - added Native Language Support * */ #include <sys/param.h> #include <sys/stat.h> #include <sys/time.h> #include <sys/uio.h> #include <errno.h> #include <paths.h> #include <ctype.h> #include <pwd.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <time.h> #include <unistd.h> #include <utmpx.h> #include <getopt.h> #include <sys/types.h> #include <grp.h> #include "nls.h" #include "xalloc.h" #include "strutils.h" #include "ttymsg.h" #include "pathnames.h" #include "carefulputc.h" #include "c.h" #include "cctype.h" #include "fileutils.h" #include "closestream.h" #include "timeutils.h" #define TERM_WIDTH 79 #define WRITE_TIME_OUT 300 /* in seconds */ /* Function prototypes */ static char *makemsg(char *fname, char **mvec, int mvecsz, size_t *mbufsize, int print_banner); static void __attribute__((__noreturn__)) usage(void) { FILE *out = stdout; fputs(USAGE_HEADER, out); fprintf(out, _(" %s [options] [<file> | <message>]\n"), program_invocation_short_name); fputs(USAGE_SEPARATOR, out); fputs(_("Write a message to all users.\n"), out); fputs(USAGE_OPTIONS, out); fputs(_(" -g, --group <group> only send message to group\n"), out); fputs(_(" -n, --nobanner do not print banner, works only for root\n"), out); fputs(_(" -t, --timeout <timeout> write timeout in seconds\n"), out); fputs(USAGE_SEPARATOR, out); printf(USAGE_HELP_OPTIONS(25)); printf(USAGE_MAN_TAIL("wall(1)")); exit(EXIT_SUCCESS); } struct group_workspace { gid_t requested_group; int ngroups; /* getgrouplist() on OSX takes int* not gid_t* */ #ifdef __APPLE__ int *groups; #else gid_t *groups; #endif }; static gid_t get_group_gid(const char *group) { struct group *gr; gid_t gid; if ((gr = getgrnam(group))) return gr->gr_gid; gid = 
strtou32_or_err(group, _("invalid group argument")); if (!getgrgid(gid)) errx(EXIT_FAILURE, _("%s: unknown gid"), group); return gid; } static struct group_workspace *init_group_workspace(const char *group) { struct group_workspace *buf = xmalloc(sizeof(struct group_workspace)); buf->requested_group = get_group_gid(group); buf->ngroups = sysconf(_SC_NGROUPS_MAX) + 1; /* room for the primary gid */ buf->groups = xcalloc(sizeof(*buf->groups), buf->ngroups); return buf; } static void free_group_workspace(struct group_workspace *buf) { if (!buf) return; free(buf->groups); free(buf); } static int is_gr_member(const char *login, const struct group_workspace *buf) { struct passwd *pw; int ngroups = buf->ngroups; int rc; pw = getpwnam(login); if (!pw) return 0; if (buf->requested_group == pw->pw_gid) return 1; rc = getgrouplist(login, pw->pw_gid, buf->groups, &ngroups); if (rc < 0) { /* buffer too small, not sure how this can happen, since we used sysconf to get the size... */ errx(EXIT_FAILURE, _("getgrouplist found more groups than sysconf allows")); } for (; ngroups >= 0; --ngroups) { if (buf->requested_group == (gid_t) buf->groups[ngroups]) return 1; } return 0; } int main(int argc, char **argv) { int ch; struct iovec iov; struct utmpx *utmpptr; char *p; char line[sizeof(utmpptr->ut_line) + 1]; int print_banner = TRUE; struct group_workspace *group_buf = NULL; char *mbuf, *fname = NULL; size_t mbufsize; unsigned timeout = WRITE_TIME_OUT; char **mvec = NULL; int mvecsz = 0; static const struct option longopts[] = { { "nobanner", no_argument, NULL, 'n' }, { "timeout", required_argument, NULL, 't' }, { "group", required_argument, NULL, 'g' }, { "version", no_argument, NULL, 'V' }, { "help", no_argument, NULL, 'h' }, { NULL, 0, NULL, 0 } }; setlocale(LC_ALL, ""); bindtextdomain(PACKAGE, LOCALEDIR); textdomain(PACKAGE); close_stdout_atexit(); while ((ch = getopt_long(argc, argv, "nt:g:Vh", longopts, NULL)) != -1) { switch (ch) { case 'n': if (geteuid() == 0) print_banner = 
FALSE; else warnx(_("--nobanner is available only for root")); break; case 't': timeout = strtou32_or_err(optarg, _("invalid timeout argument")); if (timeout < 1) errx(EXIT_FAILURE, _("invalid timeout argument: %s"), optarg); break; case 'g': group_buf = init_group_workspace(optarg); break; case 'V': print_version(EXIT_SUCCESS); case 'h': usage(); default: errtryhelp(EXIT_FAILURE); } } argc -= optind; argv += optind; if (argc == 1 && access(argv[0], F_OK) == 0) fname = argv[0]; else if (argc >= 1) { mvec = argv; mvecsz = argc; } mbuf = makemsg(fname, mvec, mvecsz, &mbufsize, print_banner); iov.iov_base = mbuf; iov.iov_len = mbufsize; while((utmpptr = getutxent())) { if (!utmpptr->ut_user[0]) continue; #ifdef USER_PROCESS if (utmpptr->ut_type != USER_PROCESS) continue; #endif /* Joey Hess reports that use-sessreg in /etc/X11/wdm/ produces * ut_line entries like :0, and a write to /dev/:0 fails. * * It also seems that some login manager may produce empty ut_line. */ if (!*utmpptr->ut_line || *utmpptr->ut_line == ':') continue; if (group_buf && !is_gr_member(utmpptr->ut_user, group_buf)) continue; mem2strcpy(line, utmpptr->ut_line, sizeof(utmpptr->ut_line), sizeof(line)); if ((p = ttymsg(&iov, 1, line, timeout)) != NULL) warnx("%s", p); } endutxent(); free(mbuf); free_group_workspace(group_buf); exit(EXIT_SUCCESS); } struct buffer { size_t sz; size_t used; char *data; }; static void buf_enlarge(struct buffer *bs, size_t len) { if (bs->sz == 0 || len > bs->sz - bs->used) { bs->sz += len < 128 ? 128 : len; bs->data = xrealloc(bs->data, bs->sz); } } static void buf_puts(struct buffer *bs, const char *s) { size_t len = strlen(s); buf_enlarge(bs, len + 1); memcpy(bs->data + bs->used, s, len + 1); bs->used += len; } static void buf_printf(struct buffer *bs, const char *fmt, ...) 
{ int rc; va_list ap; size_t limit; buf_enlarge(bs, 0); /* default size */ limit = bs->sz - bs->used; va_start(ap, fmt); rc = vsnprintf(bs->data + bs->used, limit, fmt, ap); va_end(ap); if (rc >= 0 && (size_t) rc >= limit) { /* not enough, enlarge */ buf_enlarge(bs, (size_t)rc + 1); limit = bs->sz - bs->used; va_start(ap, fmt); rc = vsnprintf(bs->data + bs->used, limit, fmt, ap); va_end(ap); } if (rc > 0) bs->used += rc; } static void buf_putc_careful(struct buffer *bs, int c) { if (isprint(c) || c == '\a' || c == '\t' || c == '\r' || c == '\n') { buf_enlarge(bs, 1); bs->data[bs->used++] = c; } else if (!c_isascii(c)) buf_printf(bs, "\\%3o", (unsigned char)c); else { char tmp[] = { '^', c ^ 0x40, '\0' }; buf_puts(bs, tmp); } } static char *makemsg(char *fname, char **mvec, int mvecsz, size_t *mbufsize, int print_banner) { struct buffer _bs = {.used = 0}, *bs = &_bs; register int ch, cnt; char *p, *lbuf; long line_max; line_max = sysconf(_SC_LINE_MAX); if (line_max <= 0) line_max = 512; lbuf = xmalloc(line_max); if (print_banner == TRUE) { char *hostname = xgethostname(); char *whom, *where, date[CTIME_BUFSIZ]; struct passwd *pw; time_t now; if (!(whom = getlogin()) || !*whom) whom = (pw = getpwuid(getuid())) ? pw->pw_name : "???"; if (!whom) { whom = "someone"; warn(_("cannot get passwd uid")); } where = ttyname(STDOUT_FILENO); if (!where) { where = "somewhere"; } else if (strncmp(where, "/dev/", 5) == 0) where += 5; time(&now); ctime_r(&now, date); date[strlen(date) - 1] = '\0'; /* * all this stuff is to blank out a square for the message; * we wrap message lines at column 79, not 80, because some * terminals wrap after 79, some do not, and we can't tell. * Which means that we may leave a non-blank character * in column 80, but that can't be helped. 
*/ /* snprintf is not always available, but the sprintf's here will not overflow as long as %d takes at most 100 chars */ buf_printf(bs, "\r%*s\r\n", TERM_WIDTH, " "); snprintf(lbuf, line_max, _("Broadcast message from %s@%s (%s) (%s):"), whom, hostname, where, date); buf_printf(bs, "%-*.*s\007\007\r\n", TERM_WIDTH, TERM_WIDTH, lbuf); free(hostname); } buf_printf(bs, "%*s\r\n", TERM_WIDTH, " "); if (mvec) { /* * Read message from argv[] */ int i; for (i = 0; i < mvecsz; i++) { buf_puts(bs, mvec[i]); if (i < mvecsz - 1) buf_puts(bs, " "); } buf_puts(bs, "\r\n"); } else { /* * read message from <file> */ if (fname) { /* * When we are not root, but suid or sgid, refuse to read files * (e.g. device files) that the user may not have access to. * After all, our invoker can easily do "wall < file" * instead of "wall file". */ uid_t uid = getuid(); if (uid && (uid != geteuid() || getgid() != getegid())) errx(EXIT_FAILURE, _("will not read %s - use stdin."), fname); if (!freopen(fname, "r", stdin)) err(EXIT_FAILURE, _("cannot open %s"), fname); } /* * Read message from stdin. */ while (fgets(lbuf, line_max, stdin)) { for (cnt = 0, p = lbuf; (ch = *p) != '\0'; ++p, ++cnt) { if (cnt == TERM_WIDTH || ch == '\n') { for (; cnt < TERM_WIDTH; ++cnt) buf_puts(bs, " "); buf_puts(bs, "\r\n"); cnt = 0; } if (ch == '\t') cnt += (7 - (cnt % 8)); if (ch != '\n') buf_putc_careful(bs, ch); } } } buf_printf(bs, "%*s\r\n", TERM_WIDTH, " "); free(lbuf); bs->data[bs->used] = '\0'; /* be paranoid */ *mbufsize = bs->used; return bs->data; }
/*
 * Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation. Oracle designates this
 * particular file as subject to the "Classpath" exception as provided
 * by Oracle in the LICENSE file that accompanied this code.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 */

package jdk.jfr.tool;

import java.nio.file.Path;
import java.time.OffsetDateTime;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

import jdk.jfr.Timespan;
import jdk.jfr.Timestamp;
import jdk.jfr.ValueDescriptor;
import jdk.jfr.consumer.RecordedEvent;
import jdk.jfr.consumer.RecordedObject;
import jdk.jfr.consumer.RecordingFile;
import jdk.nashorn.api.scripting.JSObject;
import jdk.test.lib.Asserts;
import jdk.test.lib.process.OutputAnalyzer;

/**
 * @test
 * @key jfr
 * @summary Tests print --json
 *
 * @library /lib /
 * @modules jdk.scripting.nashorn
 *          jdk.jfr
 *
 * @run main/othervm jdk.jfr.tool.TestPrintJSON
 */
public class TestPrintJSON {

    public static void main(String... args) throws Throwable {

        Path recordingFile = ExecuteHelper.createProfilingRecording().toAbsolutePath();

        OutputAnalyzer output = ExecuteHelper.jfr("print", "--json", "--stack-depth", "999", recordingFile.toString());
        String json = output.getStdout();

        // Parse JSON using Nashorn
        String statement = "var jsonObject = " + json;
        ScriptEngineManager factory = new ScriptEngineManager();
        ScriptEngine engine = factory.getEngineByName("nashorn");
        engine.eval(statement);
        JSObject o = (JSObject) engine.get("jsonObject");
        JSObject recording = (JSObject) o.getMember("recording");
        JSObject jsonEvents = (JSObject) recording.getMember("events");

        List<RecordedEvent> events = RecordingFile.readAllEvents(recordingFile);
        Collections.sort(events, (e1, e2) -> e1.getEndTime().compareTo(e2.getEndTime()));
        // Verify events are equal
        Iterator<RecordedEvent> it = events.iterator();

        for (Object jsonEvent : jsonEvents.values()) {
            RecordedEvent recordedEvent = it.next();
            String typeName = recordedEvent.getEventType().getName();
            Asserts.assertEquals(typeName, ((JSObject) jsonEvent).getMember("type").toString());
            assertEquals(jsonEvent, recordedEvent);
        }
        Asserts.assertFalse(events.size() != jsonEvents.values().size(), "Incorrect number of events");
    }

    private static void assertEquals(Object jsonObject, Object jfrObject) throws Exception {
        // Check object
        if (jfrObject instanceof RecordedObject) {
            JSObject values = (JSObject) ((JSObject) jsonObject).getMember("values");
            RecordedObject recObject = (RecordedObject) jfrObject;
            Asserts.assertEquals(values.values().size(), recObject.getFields().size());
            for (ValueDescriptor v : recObject.getFields()) {
                String name = v.getName();
                Object jsonValue = values.getMember(name);
                Object expectedValue = recObject.getValue(name);
                if (v.getAnnotation(Timestamp.class) != null) {
                    // Make instant of OffsetDateTime
                    jsonValue = OffsetDateTime.parse("" + jsonValue).toInstant().toString();
                    expectedValue = recObject.getInstant(name);
                }
                if (v.getAnnotation(Timespan.class) != null) {
                    expectedValue = recObject.getDuration(name);
                }
                assertEquals(jsonValue, expectedValue);
                return;
            }
        }
        // Check array
        if (jfrObject != null && jfrObject.getClass().isArray()) {
            Object[] jfrArray = (Object[]) jfrObject;
            JSObject jsArray = (JSObject) jsonObject;
            for (int i = 0; i < jfrArray.length; i++) {
                assertEquals(jsArray.getSlot(i), jfrArray[i]);
            }
            return;
        }
        String jsonText = String.valueOf(jsonObject);
        // Double.NaN / Double.Infinity is not supported by JSON format,
        // use null
        if (jfrObject instanceof Double) {
            double expected = ((Double) jfrObject);
            if (Double.isInfinite(expected) || Double.isNaN(expected)) {
                Asserts.assertEquals("null", jsonText);
                return;
            }
            double value = Double.parseDouble(jsonText);
            Asserts.assertEquals(expected, value);
            return;
        }
        // Float.NaN / Float.Infinity is not supported by JSON format,
        // use null
        if (jfrObject instanceof Float) {
            float expected = ((Float) jfrObject);
            if (Float.isInfinite(expected) || Float.isNaN(expected)) {
                Asserts.assertEquals("null", jsonText);
                return;
            }
            float value = Float.parseFloat(jsonText);
            Asserts.assertEquals(expected, value);
            return;
        }
        if (jfrObject instanceof Integer) {
            Integer expected = ((Integer) jfrObject);
            double value = Double.parseDouble(jsonText);
            Asserts.assertEquals(expected.doubleValue(), value);
            return;
        }
        if (jfrObject instanceof Long) {
            Long expected = ((Long) jfrObject);
            double value = Double.parseDouble(jsonText);
            Asserts.assertEquals(expected.doubleValue(), value);
            return;
        }

        String jfrText = String.valueOf(jfrObject);
        Asserts.assertEquals(jfrText, jsonText, "Primitive values don't match. JSON = " + jsonText);
    }
}
Euclides 3

If $(a,b)=1$, prove that $(a+b,a^{2}+b^{2})=1$ or $2$.

Let $d:=(a+b,a^{2}+b^{2})$. Since $d \mid a+b$, we have $d \mid (a+b)^{2}=a^{2}+b^{2}+2ab$. And since $d \mid a^{2}+b^{2}$, it follows that $d \mid 2ab$. (*) Now, $d=1$ satisfies this, but if $d>1$, then $d \nmid a$ and $d \nmid b$, since otherwise we would contradict $(a,b)=1$. Hence $d \mid 2$, which gives $d=2$. Therefore, $d=1$ or $2$. This completes the proof.

Based on what we discussed in class, it is not enough that $d\nmid a$ and $d\nmid b$ to conclude that $d$ must divide 2; we must use the gcd results to show that since $d\mid (a+b)$ and $(a,b)=1$, we have $(d, ab)=1$. This, together with $d\mid 2ab$, lets us conclude that $d\mid 2$, which completes the proof, since the only positive divisors of 2 are 1 and 2.
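A quick sanity check (an added example, not part of the original submission) showing that both values of the gcd actually occur:

```latex
\[
(1+2,\;1^{2}+2^{2}) = (3,5) = 1,
\qquad
(1+1,\;1^{2}+1^{2}) = (2,2) = 2.
\]
```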
How to Create a Virtual Machine on Avalanche

The Avalanche VM is an important component of the Avalanche platform, and it is a key reason why the platform is well-suited for building and deploying decentralized applications. One of the key features of Avalanche is its virtual machine (VM), which is responsible for executing the smart contracts that power dApps on the platform. The Avalanche VM is a highly efficient and optimized implementation of the Ethereum Virtual Machine (EVM), the runtime environment that executes smart contracts on the Ethereum blockchain.

One of the main benefits of the Avalanche VM is its high performance and scalability. It is designed to handle a large number of transactions per second (TPS), making it well-suited for applications that require fast and efficient execution of smart contracts. Additionally, the Avalanche VM is designed to be fully compatible with the EVM, which means that it can execute any smart contract written in Solidity, the programming language used to write smart contracts on Ethereum.

Another key feature of the Avalanche VM is its security. It is designed to be resistant to common vulnerabilities and exploits, and it includes a number of security measures to protect against potential attacks. For example, it has a built-in garbage collector to prevent memory leaks and several measures to prevent code injection and other code-level attacks.

This article will explain the basics of the Avalanche protocol, the importance of a virtual machine on Avalanche and how to create a virtual machine on Avalanche.

About the Avalanche blockchain

Avalanche is a decentralized protocol for building and deploying blockchain networks. It is designed to be fast, secure, and scalable and aims to solve many problems plaguing existing blockchains, such as slow transaction speeds and high fees.
Avalanche is a blockchain that promises rapid confirmation times and scaling capabilities through its Avalanche Consensus Protocol. It can process approximately 4,500 transactions per second (TPS). Avalanche's native token, AVAX, was the 10th largest by market capitalization in March 2022, at roughly $33 billion. Avalanche was launched in September 2020 and has since grown into one of the most popular blockchains, with more than $11 billion in total value locked, making it the fourth-largest DeFi-supporting blockchain, behind Ethereum, Terra and Binance Smart Chain. Avalanche's DeFi ecosystem is thriving and includes protocols originally built on Ethereum, such as the lending protocol Aave and the decentralized exchange protocol SushiSwap. However, Avalanche doesn't only support DeFi: Ava Labs also financially backs metaverse investments on this network, the idea being that a fast, cheap network can readily support blockchain-based games as well as virtual worlds.

The key features of Avalanche

• Fast transaction processing: The Avalanche consensus mechanism allows for faster transaction processing than traditional proof-of-work (PoW) or proof-of-stake (PoS) consensus mechanisms, resulting in faster confirmation times and lower fees.
• High scalability: The Avalanche protocol is designed to scale horizontally, meaning that it can easily handle an increase in the number of nodes in the network. This allows for faster transaction processing as the network grows.
• Customizable: The modular design of the Avalanche protocol enables developers to build and deploy customized blockchain networks for a variety of use cases.
• Secure: The Avalanche consensus mechanism is designed to be resistant to certain types of attacks, such as the "nothing at stake" attack, which can plague PoS systems.
• Interoperability: The Avalanche protocol is designed to be interoperable with other blockchain networks, allowing for easy integration and communication between different networks.
How does Avalanche work?

Avalanche's platform may be complex, but its three main aspects stand out from other blockchain projects. These are its consensus mechanism, its incorporation of subnetworks and its use of multiple built-in blockchains.

Avalanche consensus

A protocol allowing nodes to agree is necessary for a blockchain to verify transactions and keep them secure. This protocol is referred to as consensus. With regard to cryptocurrencies, the discussion has centered around proof-of-work (PoW) vs. proof-of-stake (PoS) as the most popular methods to reach this agreement. Avalanche utilizes a new consensus mechanism built on a PoS foundation. In this mechanism, when a user initiates a transaction, it is received by a validator, which samples a small number of other validators and checks for agreement. To reach a consensus, the validators "gossip" with one another repeatedly during this sampling process: one validator messages another, which in turn samples more validators. The process continues until all parties reach an agreement. A single transaction can turn into an avalanche, just as a single snowflake can become a snowball. Validator rewards scale with the amount of time a node has staked its tokens, which is called Proof of Uptime, and with whether the node acts according to the software's rules, which is known as Proof of Correctness.

Subnetworks

Avalanche users have the ability to launch their own chains, which can be operated using their own rules. This system is similar to other blockchain scaling solutions like Polkadot's parachains or Ethereum 2.0's shards. Subnetworks, or subnets, are groups of nodes that participate in validating a set of blockchains to reach a consensus on these chains. Subnet validators must also validate Avalanche's Primary Network.

Built-in blockchains

Avalanche uses three different blockchains to overcome the limitations of the blockchain trilemma.
Each chain can hold digital assets that can be used to perform different functions within the ecosystem.

• The Exchange Chain (X-Chain) is where assets can be created and traded. This includes Avalanche's native token, AVAX.
• The Contract Chain (C-Chain) allows you to create and execute smart contracts. Avalanche smart contracts, which run on the Ethereum Virtual Machine, are cross-chain compatible and can therefore benefit from cross-chain interoperability.
• The Platform Chain (P-Chain) coordinates validators and allows the creation and management of subnets.

Benefits that the Avalanche protocol offers

• Anyone can create dApps on Avalanche

Avalanche allows anyone to create custom-made applications. This is a result of the revolutionary Avalanche consensus. It's very similar to Cosmos and Polkadot but has higher transaction throughput and faster finality. Furthermore, it can scale to millions of validators without degrading the network. Avalanche is also the first smart contract platform to confirm transactions in less than one second.

• Uses a solid consensus method

Avalanche is distinguished by combining the strengths of Nakamoto consensus – scalability and decentralization – with the benefits of Classical consensus – speed, finality, and energy efficiency. These are the key features of Avalanche Consensus:

• High throughput of 4,500 TPS
• Resilient to attacks by up to 51% of the network
• Highly decentralized
• Scalable
• Robust

• Ethereum compatibility

Avalanche is a smart contract chain that is 100% compatible with the Ethereum Virtual Machine. This allows anyone to deploy smart contracts on Avalanche in Ethereum-related languages such as Solidity. You can develop dApps on Avalanche just as you would on Ethereum, but with much faster transactions and very low fees.
Other benefits include the following:

• Energy efficient: Because the Avalanche protocol does not rely on mining, it is much more energy efficient than PoW.
• Resilient to attack: The Avalanche protocol is resistant to many types of attacks, including double-spend attacks and Sybil attacks, because it requires a large number of nodes to reach a consensus on a new block.
• Low fees: Because the Avalanche protocol does not require miners to solve complex computational problems, fees for transactions on the blockchain are typically much lower than on other blockchain systems.
• Decentralized: The Avalanche protocol is highly decentralized, as it does not rely on a small number of miners or stakers to validate transactions. This makes it less vulnerable to centralization and censorship.

What is a virtual machine?

In the context of blockchain, a virtual machine is a software program that executes the instructions of a smart contract. Smart contracts are self-executing contracts with the terms of the agreement between buyer and seller being directly written into lines of code. The code and the agreements contained therein are stored and replicated on a blockchain network. As part of smart contract execution, the virtual machine is responsible for executing the code of smart contracts, which can include things like sending and receiving transactions, modifying data stored on the blockchain, and interacting with other smart contracts. The virtual machine ensures that the instructions of the smart contract are carried out as intended, providing a secure and reliable way to automate complex processes and agreements on the blockchain. There are several virtual machine implementations in various blockchain platforms, each with its own set of features and capabilities. Some examples include the Ethereum Virtual Machine (EVM), the Hyperledger Fabric Virtual Machine (HLF VM), and the Avalanche Virtual Machine (AVM).
Role of Avalanche virtual machine

Two components are essential to a blockchain: the consensus engine and the virtual machine (VM). The VM describes each application's behavior and how blocks are built and parsed to create the blockchain. VMs run on top of the Avalanche Consensus Engine, allowing all nodes to agree on the status of the blockchain. Here is a quick example to show how VMs interact with and support the consensus.

• A node wants to update the state of the blockchain.
• The node's VM will inform the consensus engine that the state needs to be updated.
• The consensus engine will request the block from the VM.
• The consensus engine will verify the returned block using the VM's verify() implementation.
• The consensus engine will allow the network to agree on the acceptance or rejection of the newly verified block.
• Every virtuous, well-behaved node in the network will show the same preference for a specific block.
• The consensus results will determine whether the engine accepts or rejects the block.
• The implementation of the VM will determine what happens when a block is accepted or rejected.

AvalancheGo provides the consensus engine for all blockchains on the Avalanche network. The VM interface is used to build, parse, store and verify blocks for the consensus engine. Developers can build their applications quickly using virtual machines. This allows them to avoid having to worry about Avalanche's consensus layer, which manages how nodes decide whether to accept or reject a block. VMs are supplied to an AvalancheGo node as binaries. These binaries should be named after the VMID assigned to the VM. A VMID is a 32-byte hash encoded in CB58 that identifies the VM.

How to create a virtual machine on Avalanche?

This article will show you how to create a simple Avalanche virtual machine called TimestampVM.
Each block of the TimestampVM's blockchain contains a 32-byte payload and the timestamp at block creation time. This chain can therefore serve as a timestamp server, proving that a piece of data existed at the time its block was created.

Step 1: Prerequisites

1. Interfaces that every VM must implement

• block.ChainVM – To reach a consensus on linear blockchains, Avalanche uses the Snowman consensus engine. To be compatible with Snowman, a VM must implement the block.ChainVM interface.

type ChainVM interface {
    common.VM
    Getter
    Parser

    BuildBlock() (snowman.Block, error)
    LastAccepted() (ids.ID, error)
}

// Getter defines the functionality for fetching a block by its ID.
type Getter interface {
    // Attempt to load a block.
    //
    // If the block does not exist, an error should be returned.
    GetBlock(ids.ID) (snowman.Block, error)
}

// Parser defines the functionality for fetching a block by its bytes.
type Parser interface {
    // Attempt to create a block from a stream of bytes.
    //
    // The block should be represented by the full byte array, without extra
    // bytes.
    ParseBlock([]byte) (snowman.Block, error)
}

• common.VM – This is a type that every VM must implement.

type VM interface {
    // Contains handlers for VM-to-VM specific messages
    AppHandler

    // Returns nil if the VM is healthy.
    // Periodically called and reported via the node's Health API.
    health.Checkable

    // Connector represents a handler that is called on connection connect/disconnect
    validators.Connector

    // Initialize this VM.
    // [ctx]: Metadata about this VM.
    //     [ctx.networkID]: The ID of the network this VM's chain is running on.
    //     [ctx.chainID]: The unique ID of the chain this VM is running on.
    //     [ctx.Log]: Used to log messages
    //     [ctx.NodeID]: The unique staker ID of this node.
    //     [ctx.Lock]: A Read/Write lock shared by this VM and the consensus
    //                 engine that manages this VM. The write lock is held
    //                 whenever code in the consensus engine calls the VM.
    // [dbManager]: The manager of the database this VM will persist data to.
    // [genesisBytes]: The byte-encoding of the genesis information of this
    //     VM. The VM uses it to initialize its state. For example, if this VM
    //     were an account-based payments system, `genesisBytes` would probably
    //     contain a genesis transaction that gives coins to some accounts, and
    //     this transaction would be in the genesis block.
    // [toEngine]: The channel used to send messages to the consensus engine.
    // [fxs]: Feature extensions that attach to this VM.
    Initialize(
        ctx *snow.Context,
        dbManager manager.Manager,
        genesisBytes []byte,
        upgradeBytes []byte,
        configBytes []byte,
        toEngine chan<- Message,
        fxs []*Fx,
        appSender AppSender,
    ) error

    // Bootstrapping is called when the node is starting to bootstrap this chain.
    Bootstrapping() error

    // Bootstrapped is called when the node is done bootstrapping this chain.
    Bootstrapped() error

    // Shutdown is called when the node is shutting down.
    Shutdown() error

    // Version returns the version of the VM this node is running.
    Version() (string, error)

    // Creates the HTTP handlers for custom VM network calls.
    //
    // This exposes handlers that the outside world can use to communicate with
    // a static reference to the VM. Each handler has the path:
    // [Address of node]/ext/VM/[VM ID]/[extension]
    //
    // Returns a mapping from [extension]s to HTTP handlers.
    //
    // Each extension can specify how locking is managed for convenience.
    //
    // For example, it might make sense to have an extension for creating
    // genesis bytes this VM can interpret.
    CreateStaticHandlers() (map[string]*HTTPHandler, error)

    // Creates the HTTP handlers for custom chain network calls.
    //
    // This exposes handlers that the outside world can use to communicate with
    // the chain. Each handler has the path:
    // [Address of node]/ext/bc/[chain ID]/[extension]
    //
    // Returns a mapping from [extension]s to HTTP handlers.
    //
    // Each extension can specify how locking is managed for convenience.
    //
    // For example, if this VM implements an account-based payments system,
    // it might have an extension called `accounts`, where clients could get
    // information about their accounts.
    CreateHandlers() (map[string]*HTTPHandler, error)
}

• snowman.Block – The snowman.Block interface defines the functionality a block must implement to be a block in a linear Snowman chain.

type Block interface {
    choices.Decidable

    // Parent returns the ID of this block's parent.
    Parent() ids.ID

    // Verify that the state transition this block would make if accepted is
    // valid. If the state transition is invalid, a non-nil error should be
    // returned.
    //
    // It is guaranteed that the Parent has been successfully verified.
    Verify() error

    // Bytes returns the binary representation of this block.
    //
    // This is used for sending blocks to peers. The bytes should be able to be
    // parsed into the same block on another node.
    Bytes() []byte

    // Height returns the height of this block in the chain.
    Height() uint64
}

• choices.Decidable – This interface is a superset of every decidable object, such as transactions, blocks, and vertices.

type Decidable interface {
    // ID returns a unique ID for this element.
    //
    // Typically, this is implemented by using a cryptographic hash of a
    // binary representation of this element. An element should return the same
    // IDs upon repeated calls.
    ID() ids.ID

    // Accept this element.
    //
    // This element will be accepted by every correct node in the network.
    Accept() error

    // Reject this element.
    //
    // This element will not be accepted by any correct node in the network.
    Reject() error

    // Status returns this element's current status.
    //
    // If Accept has been called on an element with this ID, Accepted should be
    // returned. Similarly, if Reject has been called on an element with this
    // ID, Rejected should be returned. If the contents of this element are
    // unknown, then Unknown should be returned. Otherwise, Processing should be
    // returned.
    Status() Status
}

2.
Download the timestampvm code from GitHub.

Step 2: Writing TimestampVM

The following classes are used to write the TimestampVM. The detailed code of the classes is available for download from GitHub, as mentioned in the previous step. We will describe the functionality of each of the classes.

codec.go – required to encode/decode the block into byte representation.

const (
    // CodecVersion is the current default codec version
    CodecVersion = 0
)

// Codecs do serialization and deserialization
var (
    Codec codec.Manager
)

func init() {
    // Create default codec and manager
    c := linearcodec.NewDefault()
    Codec = codec.NewDefaultManager()

    // Register codec to manager with CodecVersion
    if err := Codec.RegisterCodec(CodecVersion, c); err != nil {
        panic(err)
    }
}

state.go – The State interface defines the database layer and connections. Each VM should define its own database methods. State embeds the BlockState, which defines block-related state operations.

var (
    // These are prefixes for db keys.
    // It's important to set different prefixes for each separate database objects.
    singletonStatePrefix = []byte("singleton")
    blockStatePrefix     = []byte("block")

    _ State = &state{}
)

// State is a wrapper around avax.SingletonState and BlockState
// State also exposes a few methods needed for managing database commits and close.
type State interface {
    // SingletonState is defined in avalanchego,
    // it is used to understand if db is initialized already.
    avax.SingletonState
    BlockState

    Commit() error
    Close() error
}

type state struct {
    avax.SingletonState
    BlockState

    baseDB *versiondb.Database
}

func NewState(db database.Database, vm *VM) State {
    // create a new baseDB
    baseDB := versiondb.New(db)

    // create a prefixed "blockDB" from baseDB
    blockDB := prefixdb.New(blockStatePrefix, baseDB)
    // create a prefixed "singletonDB" from baseDB
    singletonDB := prefixdb.New(singletonStatePrefix, baseDB)

    // return state with created sub state components
    return &state{
        BlockState:     NewBlockState(blockDB, vm),
        SingletonState: avax.NewSingletonState(singletonDB),
        baseDB:         baseDB,
    }
}

// Commit commits pending operations to baseDB
func (s *state) Commit() error {
    return s.baseDB.Commit()
}

// Close closes the underlying base database
func (s *state) Close() error {
    return s.baseDB.Close()
}

block_state.go – This interface and its implementation provide storage functions to the VM to store and retrieve blocks.

const (
    lastAcceptedByte byte = iota
)

const (
    // maximum block capacity of the cache
    blockCacheSize = 8192
)

// persists lastAccepted block IDs with this key
var lastAcceptedKey = []byte{lastAcceptedByte}

var _ BlockState = &blockState{}

// BlockState defines methods to manage state with Blocks and LastAcceptedIDs.
type BlockState interface {
    GetBlock(blkID ids.ID) (*Block, error)
    PutBlock(blk *Block) error

    GetLastAccepted() (ids.ID, error)
    SetLastAccepted(ids.ID) error
}

// blockState implements BlocksState interface with database and cache.
type blockState struct {
    // cache to store blocks
    blkCache cache.Cacher
    // block database
    blockDB      database.Database
    lastAccepted ids.ID

    // vm reference
    vm *VM
}

// blkWrapper wraps the actual blk bytes and status to persist them together
type blkWrapper struct {
    Blk    []byte         `serialize:"true"`
    Status choices.Status `serialize:"true"`
}

// NewBlockState returns BlockState with a new cache and given db
func NewBlockState(db database.Database, vm *VM) BlockState {
    return &blockState{
        blkCache: &cache.LRU{Size: blockCacheSize},
        blockDB:  db,
        vm:       vm,
    }
}

// GetBlock gets Block from either cache or database
func (s *blockState) GetBlock(blkID ids.ID) (*Block, error) {
    // Check if cache has this blkID
    if blkIntf, cached := s.blkCache.Get(blkID); cached {
        // there is a key but value is nil, so return an error
        if blkIntf == nil {
            return nil, database.ErrNotFound
        }
        // We found it, return the block in cache
        return blkIntf.(*Block), nil
    }

    // get block bytes from db with the blkID key
    wrappedBytes, err := s.blockDB.Get(blkID[:])
    if err != nil {
        // we could not find it in the db, let's cache this blkID with nil value
        // so next time we try to fetch the same key we can return error
        // without hitting the database
        if err == database.ErrNotFound {
            s.blkCache.Put(blkID, nil)
        }
        // could not find the block, return error
        return nil, err
    }

    // first decode/unmarshal the block wrapper so we can have status and block bytes
    blkw := blkWrapper{}
    if _, err := Codec.Unmarshal(wrappedBytes, &blkw); err != nil {
        return nil, err
    }

    // now decode/unmarshal the actual block bytes to block
    blk := &Block{}
    if _, err := Codec.Unmarshal(blkw.Blk, blk); err != nil {
        return nil, err
    }

    // initialize block with block bytes, status and vm
    blk.Initialize(blkw.Blk, blkw.Status, s.vm)

    // put block into cache
    s.blkCache.Put(blkID, blk)

    return blk, nil
}

// PutBlock puts block into both database and cache
func (s *blockState) PutBlock(blk *Block) error {
    // create block wrapper with block bytes and status
    blkw := blkWrapper{
        Blk:    blk.Bytes(),
        Status: blk.Status(),
    }

    // encode block wrapper to its byte representation
    wrappedBytes, err := Codec.Marshal(CodecVersion, &blkw)
    if err != nil {
        return err
    }

    blkID := blk.ID()
    // put actual block to cache, so we can directly fetch it from cache
    s.blkCache.Put(blkID, blk)

    // put wrapped block bytes into database
    return s.blockDB.Put(blkID[:], wrappedBytes)
}

// DeleteBlock deletes block from both cache and database
func (s *blockState) DeleteBlock(blkID ids.ID) error {
    s.blkCache.Put(blkID, nil)
    return s.blockDB.Delete(blkID[:])
}

// GetLastAccepted returns last accepted block ID
func (s *blockState) GetLastAccepted() (ids.ID, error) {
    // check if we already have lastAccepted ID in state memory
    if s.lastAccepted != ids.Empty {
        return s.lastAccepted, nil
    }

    // get lastAccepted bytes from database with the fixed lastAcceptedKey
    lastAcceptedBytes, err := s.blockDB.Get(lastAcceptedKey)
    if err != nil {
        return ids.ID{}, err
    }
    // parse bytes to ID
    lastAccepted, err := ids.ToID(lastAcceptedBytes)
    if err != nil {
        return ids.ID{}, err
    }
    // put lastAccepted ID into memory
    s.lastAccepted = lastAccepted
    return lastAccepted, nil
}

// SetLastAccepted persists lastAccepted ID into both cache and database
func (s *blockState) SetLastAccepted(lastAccepted ids.ID) error {
    // if the ID in memory and the given ID are the same, don't do anything
    if s.lastAccepted == lastAccepted {
        return nil
    }
    // put lastAccepted ID to memory
    s.lastAccepted = lastAccepted
    // persist lastAccepted ID to database with fixed lastAcceptedKey
    return s.blockDB.Put(lastAcceptedKey, lastAccepted[:])
}

block.go – It is used for block implementation. There are three important methods here:

• Verify – This method verifies that a block is valid and stores it in memory. It is important to store verified blocks in memory and return them in the vm.GetBlock method, as shown above.
func (b *Block) Verify() error {
    // Get [b]'s parent
    parentID := b.Parent()
    parent, err := b.vm.getBlock(parentID)
    if err != nil {
        return errDatabaseGet
    }
}

• Accept – Accept is called by consensus to indicate this block is accepted.

func (b *Block) Accept() error {
    b.SetStatus(choices.Accepted) // Change state of this block
    blkID := b.ID()

    // Persist data
    if err := b.vm.state.PutBlock(b); err != nil {
        return err
    }

    // Set last accepted ID to this block ID
    if err := b.vm.state.SetLastAccepted(blkID); err != nil {
        return err
    }

    // Delete this block from verified blocks as it's accepted
    delete(b.vm.verifiedBlocks, b.ID())

    // Commit changes to database
    return b.vm.state.Commit()
}

• Reject – This is called by the consensus to indicate this block is rejected.

func (b *Block) Reject() error {
    b.SetStatus(choices.Rejected) // Change state of this block
    if err := b.vm.state.PutBlock(b); err != nil {
        return err
    }

    // Delete this block from verified blocks as it's rejected
    delete(b.vm.verifiedBlocks, b.ID())

    // Commit changes to database
    return b.vm.state.Commit()
}

The following methods are required by the snowman.Block interface:

// ID returns the ID of this block
func (b *Block) ID() ids.ID { return b.id }

// Parent returns [b]'s parent's ID
func (b *Block) Parent() ids.ID { return b.PrntID }

// Height returns this block's height. The genesis block has height 0.
func (b *Block) Height() uint64 { return b.Hght }

// Timestamp returns this block's time. The genesis block has time 0.
func (b *Block) Timestamp() time.Time { return time.Unix(b.Tmstmp, 0) }

// Status returns the status of this block
func (b *Block) Status() choices.Status { return b.status }

// Bytes returns the byte repr. of this block
func (b *Block) Bytes() []byte { return b.bytes }

Step 3: Implementation of TimestampVM

Let's now look at how TimestampVM implements the block.ChainVM interface. The complete implementation is in the vm.go class.
Here we have described the most important functions of the vm.go class. To initialize the VM, the class calls the Initialize function.

func (vm *VM) Initialize(
    ctx *snow.Context,
    dbManager manager.Manager,
    genesisData []byte,
    upgradeData []byte,
    configData []byte,
    toEngine chan<- common.Message,
    _ []*common.Fx,
    _ common.AppSender,
) error {
    version, err := vm.Version()
    if err != nil {
        log.Error("error initializing Timestamp VM: %v", err)
        return err
    }
    log.Info("Initializing Timestamp VM", "Version", version)

    vm.dbManager = dbManager
    vm.ctx = ctx
    vm.toEngine = toEngine
    vm.verifiedBlocks = make(map[ids.ID]*Block)

    // Create new state
    vm.state = NewState(vm.dbManager.Current().Database, vm)

    // Initialize genesis
    if err := vm.initGenesis(genesisData); err != nil {
        return err
    }

    // Get last accepted
    lastAccepted, err := vm.state.GetLastAccepted()
    if err != nil {
        return err
    }

    ctx.Log.Info("initializing last accepted block as %s", lastAccepted)

    // Build off the most recently accepted block
    return vm.SetPreference(lastAccepted)
}

This class is also responsible for initializing the genesis block through its initGenesis helper method.

func (vm *VM) initGenesis(genesisData []byte) error {
    stateInitialized, err := vm.state.IsInitialized()
    if err != nil {
        return err
    }

    // if state is already initialized, skip init genesis.
    if stateInitialized {
        return nil
    }

    if len(genesisData) > dataLen {
        return errBadGenesisBytes
    }

    // genesisData is a byte slice but each block contains an byte array
    // Take the first [dataLen] bytes from genesisData and put them in an array
    var genesisDataArr [dataLen]byte
    copy(genesisDataArr[:], genesisData)

    // Create the genesis block
    // Timestamp of genesis block is 0. It has no parent.
    genesisBlock, err := vm.NewBlock(ids.Empty, 0, genesisDataArr, time.Unix(0, 0))
    if err != nil {
        log.Error("error while creating genesis block: %v", err)
        return err
    }

    // Put genesis block to state
    if err := vm.state.PutBlock(genesisBlock); err != nil {
        log.Error("error while saving genesis block: %v", err)
        return err
    }

    // Accept the genesis block
    // Sets [vm.lastAccepted] and [vm.preferred]
    if err := genesisBlock.Accept(); err != nil {
        return fmt.Errorf("error accepting genesis block: %w", err)
    }

    // Mark this vm's state as initialized, so we can skip initGenesis in further restarts
    if err := vm.state.SetInitialized(); err != nil {
        return fmt.Errorf("error while setting db to initialized: %w", err)
    }

    // Flush VM's database to underlying db
    return vm.state.Commit()
}

The class builds a new block and returns it through its BuildBlock method, as requested by the consensus engine.

func (vm *VM) BuildBlock() (snowman.Block, error) {
    if len(vm.mempool) == 0 { // There is no block to be built
        return nil, errNoPendingBlocks
    }

    // Get the value to put in the new block
    value := vm.mempool[0]
    vm.mempool = vm.mempool[1:]

    // Notify consensus engine that there are more pending data for blocks
    // (if that is the case) when done building this block
    if len(vm.mempool) > 0 {
        defer vm.NotifyBlockReady()
    }

    // Gets Preferred Block
    preferredBlock, err := vm.getBlock(vm.preferred)
    if err != nil {
        return nil, fmt.Errorf("couldn't get preferred block: %w", err)
    }
    preferredHeight := preferredBlock.Height()

    // Build the block with preferred height
    newBlock, err := vm.NewBlock(vm.preferred, preferredHeight+1, value, time.Now())
    if err != nil {
        return nil, fmt.Errorf("couldn't build block: %w", err)
    }

    // Verifies block
    if err := newBlock.Verify(); err != nil {
        return nil, err
    }
    return newBlock, nil
}

To send messages to the consensus engine, the class uses one of its helper methods, called NotifyBlockReady.
func (vm *VM) NotifyBlockReady() {
    select {
    case vm.toEngine <- common.PendingTxs:
    default:
        vm.ctx.Log.Debug("dropping message to consensus engine")
    }
}

The block is retrieved by its ID with the GetBlock method.

func (vm *VM) GetBlock(blkID ids.ID) (snowman.Block, error) { return vm.getBlock(blkID) }

func (vm *VM) getBlock(blkID ids.ID) (*Block, error) {
    // If block is in memory, return it.
    if blk, exists := vm.verifiedBlocks[blkID]; exists {
        return blk, nil
    }
    return vm.state.GetBlock(blkID)
}

The proposeBlock method adds a piece of data to the mempool and notifies the consensus layer of the blockchain that a new block is ready to be built and voted on.

func (vm *VM) proposeBlock(data [dataLen]byte) {
    vm.mempool = append(vm.mempool, data)
    vm.NotifyBlockReady()
}

• The NewBlock method creates a new block.

func (vm *VM) NewBlock(parentID ids.ID, height uint64, data [dataLen]byte, timestamp time.Time) (*Block, error) {
    block := &Block{
        PrntID: parentID,
        Hght:   height,
        Tmstmp: timestamp.Unix(),
        Dt:     data,
    }

    // Get the byte representation of the block
    blockBytes, err := Codec.Marshal(CodecVersion, block)
    if err != nil {
        return nil, err
    }

    // Initialize the block by providing it with its byte representation
    // and a reference to this VM
    block.Initialize(blockBytes, choices.Processing, vm)
    return block, nil
}

Step 4: Factory creation

factory.go – VMs should implement the Factory interface. The New method in the interface returns a new VM instance.

var _ vms.Factory = &Factory{}

// Factory ...
type Factory struct{}

// New ...
func (f *Factory) New(*snow.Context) (interface{}, error) { return &VM{}, nil }

Step 5: Static API creation

static_service.go – Creates the static API.

A VM may have a static API, which allows clients to call methods that do not query or update the state of a particular blockchain but rather apply to the VM as a whole. This is analogous to static methods in computer programming. AvalancheGo uses Gorilla's RPC library to implement HTTP APIs.
For each API method, there is: • A struct that defines the method’s arguments • A struct that defines the method’s return values • A method that implements the API method and is parameterized on the above 2 structs This API method encodes a string to its byte representation using a given encoding scheme. It can be used to encode data that is then put in a block and proposed as the next block for this chain. For the detailed implementation of static_service.go refer to the static_service.go code. Step 6: API creation service.go – Creates non-static API A VM may also have a non-static HTTP API, which allows clients to query and update the blockchain’s state.This VM’s API has two methods. One allows a client to get a block by its ID. The other allows a client to propose the next block of this blockchain. The blockchain ID in the endpoint changes since every blockchain has a unique ID. Step 7: Defining the main package In order to make this VM compatible with go-plugin, we need to define a main package and method, which serves our VM over gRPC so that AvalancheGo can call its methods. func main() { log.Root().SetHandler(log.LvlFilterHandler(log.LvlDebug, log.StreamHandler(os.Stderr, log.TerminalFormat()))) plugin.Serve(&plugin.ServeConfig{ HandshakeConfig: rpcchainvm.Handshake, Plugins: map[string]plugin.Plugin{ "vm": rpcchainvm.New(&timestampvm.VM{}), }, // A non-nil value here enables gRPC serving for this plugin... GRPCServer: plugin.DefaultGRPCServer, }) } Now AvalancheGo’s rpcchainvm can connect to this plugin and calls its methods. Step 8: Binary execution This VM has a build script that builds an executable of this VM (when invoked, it runs the main method from above.) The path to the executable and its name can be provided to the build script via arguments. For example: ./scripts/build.sh ../avalanchego/build/plugins timestampvm Your VM is now ready. 
Endnote
VMs provide a way to isolate the execution of code from the underlying hardware and operating system, which can be useful for a number of reasons. One reason to use VMs on Avalanche is to enable the execution of untrusted code in a controlled environment. By running code in a VM, you can ensure that it cannot access sensitive resources or harm the system in any way, even if the code contains malicious intent. This can be particularly useful for running smart contracts or other code that is executed on the platform.
Another reason to use VMs on Avalanche is to enable the execution of code in different environments or configurations. Creating a VM allows you to specify the operating system, runtime environment, and other settings to provide the right code execution environment. This can be useful for testing and debugging purposes or running code requiring specific dependencies or configurations.
Overall, using VMs on Avalanche can help improve the platform's security, scalability, and flexibility and facilitate a wide range of applications and use cases.
JUC并发编程
作者:xlc520
一、线程基础
1、Java多线程相关概念
1、进程
进程是程序的一次执行,是系统进行资源分配和调度的独立单位,每一个进程都有它自己的内存空间和系统资源。
进程(Process)是计算机中的程序关于某数据集合上的一次运行活动,是系统进行资源分配和调度的基本单位,是操作系统结构的基础。程序是指令、数据及其组织形式的描述,进程是程序的实体。
进程具有的特征:
• 动态性:进程是程序的一次执行过程,是临时的,有生命期的,是动态产生,动态消亡的
• 并发性:任何进程都可以同其他进程一起并发执行
• 独立性:进程是系统进行资源分配和调度的一个独立单位
• 结构性:进程由程序,数据和进程控制块三部分组成
我们经常使用windows系统,经常会看见.exe后缀的文件,双击这个.exe文件的时候,这个文件中的指令就会被系统加载,那么我们就能得到一个关于这个.exe程序的进程。进程是“活”的,或者说是正在被执行的。
2、线程
在同一个进程内又可以执行多个任务,而这每一个任务我们就可以看做是一个线程。一个进程会有1个或多个线程。
线程是轻量级的进程,是程序执行的最小单元,使用多线程而不是多进程去进行并发程序的设计,是因为线程间的切换和调度的成本远远小于进程。
3、进程与线程的一个简单解释
进程(process)和线程(thread)是操作系统的基本概念,但是它们比较抽象,不容易掌握。
1.计算机的核心是CPU,它承担了所有的计算任务。它就像一座工厂,时刻在运行。
2.假定工厂的电力有限,一次只能供给一个车间使用。也就是说,一个车间开工的时候,其他车间都必须停工。背后的含义就是,单个CPU一次只能运行一个任务。
3.进程就好比工厂的车间,它代表CPU所能处理的单个任务。任一时刻,CPU总是运行一个进程,其他进程处于非运行状态。
4.一个车间里,可以有很多工人。他们协同完成一个任务。
5.线程就好比车间里的工人。一个进程可以包括多个线程。
6.车间的空间是工人们共享的,比如许多房间是每个工人都可以进出的。这象征一个进程的内存空间是共享的,每个线程都可以使用这些共享内存。
7.可是,每间房间的大小不同,有些房间最多只能容纳一个人,比如厕所。里面有人的时候,其他人就不能进去了。这代表一个线程使用某些共享内存时,其他线程必须等它结束,才能使用这一块内存。
8.一个防止他人进入的简单方法,就是门口加一把锁。先到的人锁上门,后到的人看到上锁,就在门口排队,等锁打开再进去。这就叫“互斥锁”(Mutual exclusion,缩写 Mutex),防止多个线程同时读写某一块内存区域。
9.还有些房间,可以同时容纳n个人,比如厨房。也就是说,如果人数大于n,多出来的人只能在外面等着。这好比某些内存区域,只能供给固定数目的线程使用。
10.这时的解决方法,就是在门口挂n把钥匙。进去的人就取一把钥匙,出来时再把钥匙挂回原处。后到的人发现钥匙架空了,就知道必须在门口排队等着了。这种做法叫做“信号量”(Semaphore),用来保证多个线程不会互相冲突。
11.操作系统的设计,因此可以归结为三点:
(1)以多进程形式,允许多个任务同时运行;
(2)以多线程形式,允许单个任务分成不同的部分运行;
(3)提供协调机制,一方面防止进程之间和线程之间产生冲突,另一方面允许进程之间和线程之间共享资源。
4、管程
Monitor(监视器),也就是我们平时所说的锁。
// Monitor其实是一种同步机制,它的义务是保证(同一时间)只有一个线程可以访问被保护的数据和代码。
// JVM中同步是基于进入和退出监视器对象(Monitor,管程对象)来实现的,每个对象实例都会有一个Monitor对象,
Object o = new Object();
new Thread(() -> {
    synchronized (o) {
    }
}, "t1").start();
// Monitor对象会和Java对象一同创建并销毁,它底层是由C++语言来实现的。
5、线程状态?
// Thread.State
public enum State {
    NEW,(新建)
    RUNNABLE,(准备就绪)
    BLOCKED,(阻塞)
    WAITING,(不见不散)
    TIMED_WAITING,(过时不候)
    TERMINATED;(终结)
}
线程几个状态的介绍:
• NEW:表示刚刚创建的线程,这种线程还没有开始执行
• RUNNABLE:运行状态,线程的start()方法调用后,线程会处于这种状态
• BLOCKED:阻塞状态。当线程在执行的过程中遇到了synchronized同步块,但这个同步块被其他线程已获取还未释放时,当前线程将进入阻塞状态,会暂停执行,直到获取到锁。当线程获取到锁之后,又会进入到运行状态(RUNNABLE)
• WAITING:等待状态。和TIMED_WAITING都表示等待状态,区别是WAITING会进入一个无时间限制的等待,而TIMED_WAITING会进入一个有限时间的等待,那么等待的线程究竟在等什么呢?一般来说,WAITING的线程正是在等待一些特殊的事件,比如,通过wait()方法等待的线程在等待notify()方法,而通过join()方法等待的线程则会等待目标线程的终止。一旦等到期望的事件,线程就会再次进入RUNNABLE运行状态。
• TERMINATED:表示结束状态,线程执行完毕之后进入结束状态。
注意:从NEW状态出发后,线程不能再回到NEW状态,同理,处于TERMINATED状态的线程也不能再回到RUNNABLE状态
6、wait/sleep的区别?
功能都是当前线程暂停,有什么区别?
wait放开手去睡,放开手里的锁;sleep握紧手去睡,醒了手里还有锁。
2、线程的基本操作
1、新建线程
新建线程很简单。只需要使用new关键字创建一个线程对象,然后调用它的start()启动线程即可。
Thread thread1 = new Thread1();
thread1.start();
那么线程start()之后,会干什么呢?线程有个run()方法,start()会创建一个新的线程并让这个线程执行run()方法。
这里需要注意,下面代码也能通过编译,也能正常执行。但是,却不能新建一个线程,而是在当前线程中调用run()方法,只是将run方法作为一个普通的方法调用。
Thread thread = new Thread1();
thread.run();
所以,希望大家注意,调用start方法和直接调用run方法的区别:start方法是启动一个线程,直接调用run方法只会在当前线程中串行地执行run方法中的代码。
默认情况下,线程的run方法什么都没有,启动一个线程之后马上就结束了,所以如果你需要线程做点什么,需要把您的代码写到run方法中,所以必须重写run方法。
Thread thread1 = new Thread() {
    @Override
    public void run() {
        System.out.println("hello,我是一个线程!");
    }
};
thread1.start();
上面是使用匿名内部类实现的,重写了Thread的run方法,并且打印了一条信息。我们可以通过继承Thread类,然后重写run方法,来自定义一个线程。但考虑java是单继承的,从扩展性上来说,我们实现一个接口来自定义一个线程更好一些,java中刚好提供了Runnable接口来自定义一个线程。
@FunctionalInterface
public interface Runnable {
    public abstract void run();
}
Thread类有一个非常重要的构造方法:
public Thread(Runnable target)
我们再看一下Thread的run方法:
public void run() {
    if (target != null) {
        target.run();
    }
}
当我们启动线程的start方法之后,线程会执行run方法,run方法中会调用Thread构造方法传入的target的run方法。
实现Runnable接口是比较常见的做法,也是推荐的做法。
2、终止线程
一般来说线程执行完毕就会结束,无需手动关闭。但是如果我们想关闭一个正在运行的线程,有什么方法呢?可以看一下Thread类中提供的stop()方法,调用这个方法,就可以立即将一个线程终止,非常方便。
import lombok.extern.slf4j.Slf4j;
import java.util.concurrent.TimeUnit;

@Slf4j
public class Demo01 {
    public static void main(String[] args) throws InterruptedException {
        Thread thread1 = new Thread() {
            @Override
            public void run() {
                log.info("start");
                boolean flag = true;
                while (flag) {
                    ;
                }
                log.info("end");
            }
        };
        thread1.setName("thread1");
        thread1.start();
        //当前线程休眠1秒
        TimeUnit.SECONDS.sleep(1);
        //关闭线程thread1
        thread1.stop();
        //输出线程thread1的状态
        log.info("{}", thread1.getState());
        //当前线程休眠1秒
        TimeUnit.SECONDS.sleep(1);
        //输出线程thread1的状态
        log.info("{}", thread1.getState());
    }
}
运行代码,输出:
18:02:15.312 [thread1] INFO com.itsoku.chat01.Demo01 - start
18:02:16.311 [main] INFO com.itsoku.chat01.Demo01 - RUNNABLE
18:02:17.313 [main] INFO com.itsoku.chat01.Demo01 - TERMINATED
代码中有个死循环,调用stop方法之后,线程thread1的状态变为TERMINATED(结束状态),线程停止了。
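回顾本节开头提到的 wait 与 sleep 的区别(“wait放开手里的锁,sleep握紧锁”),可以用下面的小例子直观验证:一个线程在同步块内 wait 时,主线程几乎可以立刻拿到同一把锁;如果把 wait 换成 sleep,主线程就要等它睡完才能进入同步块。(示意代码,类名 WaitVsSleepDemo 以及各处等待时长均为笔者自拟)

```java
public class WaitVsSleepDemo {
    static final Object lock = new Object();

    // 返回:子线程在 lock 上 wait 期间,主线程获取 lock 所花的毫秒数
    public static long acquireWhileWaiting() {
        Thread t = new Thread(() -> {
            synchronized (lock) {
                try {
                    lock.wait(2000); // wait 会释放 lock,当前线程进入等待队列
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        t.start();
        long cost = -1;
        try {
            Thread.sleep(200); // 给子线程一点时间进入 wait
            long begin = System.currentTimeMillis();
            synchronized (lock) { // 子线程正在 wait,锁已释放,这里几乎立刻拿到锁
                cost = System.currentTimeMillis() - begin;
                lock.notifyAll(); // 顺便提前唤醒子线程,避免空等
            }
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return cost;
    }

    public static void main(String[] args) {
        System.out.println("wait 期间拿锁耗时约 " + acquireWhileWaiting() + " ms");
    }
}
```

若把例子中的 lock.wait(2000) 换成 Thread.sleep(2000)(并去掉 notifyAll),主线程拿锁的耗时就会接近 2000 毫秒,这正是“sleep 不释放锁”的体现。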
我们使用idea或者eclipse的时候,会发现这个方法是一个废弃的方法,也就是说,在将来,jdk可能就会移除该方法。
stop方法为何会被废弃而不推荐使用?stop方法过于暴力,强制把正在执行的方法停止了。
大家是否遇到过这样的场景:电力系统需要维修,此时咱们正在写代码,维修人员直接将电源关闭了,代码还没保存,是不是很崩溃,这种方式就像直接调用线程的stop方法类似。线程正在运行过程中,被强制结束了,可能会导致一些意想不到的后果。更好的做法是给大家发送一个通知,告诉大家保存一下手头的工作,再将电脑关闭。
3、线程中断
在java中,线程中断是一种重要的线程协作机制,从表面上理解,中断就是让目标线程停止执行的意思,实际上并非完全如此。在上面,我们已经详细讨论了stop方法停止线程的坏处,jdk中提供了更好的中断线程的方法。严格地说,线程中断并不会使线程立即退出,而是给线程发送一个通知,告知目标线程,有人希望你退出了!至于目标线程接收到通知之后如何处理,则完全由目标线程自己决定,这点很重要,如果中断后,线程立即无条件退出,我们又会回到stop方法的老问题。
Thread提供了3个与线程中断有关的方法,这3个方法容易混淆,大家注意下:
public void interrupt() //中断线程
public boolean isInterrupted() //判断线程是否被中断
public static boolean interrupted() //判断线程是否被中断,并清除当前中断状态
interrupt()方法是一个实例方法,它通知目标线程中断,也就是设置中断标志位为true,中断标志位表示当前线程已经被中断了。isInterrupted()方法也是一个实例方法,它判断当前线程是否被中断(通过检查中断标志位)。最后一个方法interrupted()是一个静态方法,返回boolean类型,也是用来判断当前线程是否被中断,但是同时会清除当前线程的中断标志位的状态。
Thread thread1 = new Thread() {
    @Override
    public void run() {
        while (true) {
            if (this.isInterrupted()) {
                System.out.println("我要退出了!");
                break;
            }
        }
    }
};
thread1.setName("thread1");
thread1.start();
TimeUnit.SECONDS.sleep(1);
thread1.interrupt();
上面代码中有个死循环,interrupt()方法被调用之后,线程的中断标志将被置为true,循环体中通过检查线程的中断标志是否为true(this.isInterrupted())来判断线程是否需要退出了。
再看一种中断的方法:
static volatile boolean isStop = false;
public static void main(String[] args) throws InterruptedException {
    Thread thread1 = new Thread() {
        @Override
        public void run() {
            while (true) {
                if (isStop) {
                    System.out.println("我要退出了!");
                    break;
                }
            }
        }
    };
    thread1.setName("thread1");
    thread1.start();
    TimeUnit.SECONDS.sleep(1);
    isStop = true;
}
代码中通过一个变量isStop来控制线程是否停止。
通过变量控制和线程自带的interrupt方法来中断线程有什么区别呢?
如果一个线程调用了sleep方法,一直处于休眠状态,通过变量控制,还可以中断线程么?大家可以思考一下。
此时只能使用线程提供的interrupt方法来中断线程了。
public static void main(String[] args) throws InterruptedException {
    Thread thread1 = new Thread() {
        @Override
        public void run() {
            while (true) {
                //休眠100秒
                try {
                    TimeUnit.SECONDS.sleep(100);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println("我要退出了!");
                break;
            }
        }
    };
    thread1.setName("thread1");
    thread1.start();
    TimeUnit.SECONDS.sleep(1);
    thread1.interrupt();
}
调用interrupt()方法之后,线程的sleep方法将会抛出InterruptedException异常。
Thread thread1 = new Thread() {
    @Override
    public void run() {
        while (true) {
            //休眠100秒
            try {
                TimeUnit.SECONDS.sleep(100);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            if (this.isInterrupted()) {
                System.out.println("我要退出了!");
                break;
            }
        }
    }
};
运行上面的代码,发现程序无法终止。为什么?
代码需要改为:
Thread thread1 = new Thread() {
    @Override
    public void run() {
        while (true) {
            //休眠100秒
            try {
                TimeUnit.SECONDS.sleep(100);
            } catch (InterruptedException e) {
                this.interrupt();
                e.printStackTrace();
            }
            if (this.isInterrupted()) {
                System.out.println("我要退出了!");
                break;
            }
        }
    }
};
上面代码可以终止。
注意:sleep方法由于中断而抛出异常之后,线程的中断标志会被清除(置为false),所以在异常处理中需要执行this.interrupt()方法,将中断标志位置为true
4、等待(wait)和通知(notify)
为了支持多线程之间的协作,JDK提供了两个非常重要的方法:等待wait()方法和通知notify()方法。这2个方法并不是在Thread类中的,而是在Object类中定义的。这意味着所有的对象都可以调用这两个方法。
public final void wait() throws InterruptedException;
public final native void notify();
当在一个对象实例上调用wait()方法后,当前线程就会在这个对象上等待。这是什么意思?比如在线程A中,调用了obj.wait()方法,那么线程A就会停止继续执行,转为等待状态。等待到什么时候结束呢?线程A会一直等到其他线程调用obj.notify()方法为止,这时,obj对象成为了多个线程之间的有效通信手段。
那么wait()方法和notify()方法是如何工作的呢?图2.5展示了两者的工作过程。如果一个线程调用了object.wait()方法,那么它就会进入object对象的等待队列。这个队列中,可能会有多个线程,因为系统允许多个线程同时等待某一个对象。当object.notify()方法被调用时,它就会从这个队列中随机选择一个线程,并将其唤醒。这里希望大家注意一下,这个选择是不公平的,并不是先等待的线程就会优先被选择,这个选择完全是随机的。
除notify()方法外,Object对象还有一个notifyAll()方法,它和notify()方法的功能类似,不同的是,它会唤醒在这个等待队列中所有等待的线程,而不是随机选择一个。
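下面用一个小例子示意 notifyAll() 会一次性唤醒等待队列中的全部线程(示意代码,类名、线程数与等待超时均为笔者自拟;为避免个别线程未及进入 wait 而永久卡住,这里的 wait 带了超时兜底):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class NotifyAllDemo {
    static final Object obj = new Object();
    static final AtomicInteger wakened = new AtomicInteger();

    // 启动 n 个线程在 obj 上等待,然后 notifyAll 一次性全部唤醒,返回被唤醒的线程数
    public static int wakeAll(int n) {
        wakened.set(0);
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            Thread t = new Thread(() -> {
                synchronized (obj) {
                    try {
                        obj.wait(3000); // 进入 obj 的等待队列并释放锁(带超时兜底)
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    wakened.incrementAndGet(); // 被唤醒并重新拿到锁之后计数
                }
            });
            t.start();
            threads.add(t);
        }
        try {
            Thread.sleep(300); // 等所有线程都进入 wait
            synchronized (obj) {
                obj.notifyAll(); // 一次唤醒全部等待线程;换成 obj.notify() 则每次只随机唤醒一个
            }
            for (Thread t : threads) {
                t.join();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return wakened.get();
    }

    public static void main(String[] args) {
        System.out.println(wakeAll(3) + " 个线程被唤醒");
    }
}
```

注意被唤醒的线程并不会同时继续执行:notifyAll 之后,它们要重新竞争 obj 的锁,依次进入同步块、完成计数后退出。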
这里强调一点,Object.wait()方法并不能随便调用。它必须包含在对应的synchronized语句中,无论是wait()方法还是notify()方法都需要首先获取目标对象的监视器。图2.6显示了wait()方法和notify()方法的工作流程细节。其中T1和T2表示两个线程。T1在正确执行wait()方法前,必须获得object对象的监视器。而wait()方法在执行后,会释放这个监视器。这样做的目的是使其他等待在object对象上的线程不至于因为T1的休眠而全部无法正常执行。
线程T2在notify()方法调用前,也必须获得object对象的监视器。所幸,此时T1已经释放了这个监视器,因此,T2可以顺利获得object对象的监视器。接着,T2执行了notify()方法尝试唤醒一个等待线程,这里假设唤醒了T1。T1在被唤醒后,要做的第一件事并不是执行后续代码,而是要尝试重新获得object对象的监视器,而这个监视器也正是T1在wait()方法执行前所持有的那个。如果暂时无法获得,则T1还必须等待这个监视器。当监视器顺利获得后,T1才可以在真正意义上继续执行。
给大家上个例子:
public class Demo06 {
    static Object object = new Object();

    public static class T1 extends Thread {
        @Override
        public void run() {
            synchronized (object) {
                System.out.println(System.currentTimeMillis() + ":T1 start!");
                try {
                    System.out.println(System.currentTimeMillis() + ":T1 wait for object");
                    object.wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println(System.currentTimeMillis() + ":T1 end!");
            }
        }
    }

    public static class T2 extends Thread {
        @Override
        public void run() {
            synchronized (object) {
                System.out.println(System.currentTimeMillis() + ":T2 start,notify one thread! ");
                object.notify();
                System.out.println(System.currentTimeMillis() + ":T2 end!");
                try {
                    Thread.sleep(2000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        new T1().start();
        new T2().start();
    }
}
运行结果:
1562934497212:T1 start!
1562934497212:T1 wait for object
1562934497212:T2 start,notify one thread!
1562934497212:T2 end!
1562934499213:T1 end!
注意下打印结果,T2调用notify方法之后,T1并不能立即继续执行,而是要等待T2释放object对象的锁之后,T1重新成功获取锁后,才能继续执行。因此最后2行日志相差了2秒(因为T2调用notify方法后休眠了2秒)。
注意:Object.wait()方法和Thread.sleep()方法都可以让线程等待若干时间。除wait()方法可以被唤醒外,另外一个主要的区别就是wait()方法会释放目标对象的锁,而Thread.sleep()方法不会释放锁。
再给大家讲解一下wait(),notify(),notifyAll(),加深一下理解:
可以这么理解,obj对象上有2个队列,如图1,q1:等待队列,q2:准备获取锁的队列;两个队列都为空。
obj.wait()过程:
synchronized (obj) {
    obj.wait();
}
假如有3个线程t1、t2、t3同时执行上面代码,t1、t2、t3会进入q2队列,如图2,进入q2队列的这些线程才有资格去争抢obj的锁,假设t1争抢到了,那么t2、t3继续在q2中等待着获取锁,t1进入代码块执行wait()方法,此时t1会进入q1队列,然后系统会通知q2队列中的t2、t3去争抢obj的锁,抢到之后的过程和t1一样。最后t1、t2、t3都进入了q1队列,如图3。
上面过程之后,又来了线程t4执行了notify()方法,如下:
synchronized (obj) {
    obj.notify();
}
t4会获取到obj的锁,然后执行notify()方法,系统会从q1队列中随机取一个线程,将其加入到q2队列,假如t2运气比较好,被随机到了,然后t2进入了q2队列,如图4,进入q2队列的锁才有资格争抢obj的锁,t4线程执行完毕之后,会释放obj的锁,此时队列q2中的t2会获取到obj的锁,然后继续执行,执行完毕之后,q1中包含t1、t3,q2队列为空,如图5。
接着又来了个线程t5,执行了notifyAll()方法,如下:
synchronized (obj) {
    obj.notifyAll();
}
t5会获取到obj的锁,然后执行notifyAll()方法,系统会将队列q1中的线程都移到q2中,如图6,t5线程执行完毕之后,会释放obj的锁,此时队列q2中的t1、t3会争抢obj的锁,争抢到的继续执行,未争抢到的等锁释放之后,系统会通知q2中的线程继续争抢锁,然后继续执行,最后两个队列中都为空了。
5、挂起(suspend)和继续执行(resume)线程
Thread类中还有2个方法,即线程挂起(suspend)和继续执行(resume),这2个操作是一对相反的操作,被挂起的线程,必须要等到resume()方法操作后,才能继续执行。系统中已经标注这2个方法过时了,不推荐使用。
系统不推荐使用suspend()方法去挂起线程是因为suspend()方法导致线程暂停的同时,并不会释放任何锁资源。此时,其他任何线程想要访问被它占用的锁时,都会被牵连,导致无法正常运行(如图2.7所示)。直到在对应的线程上进行了resume()方法操作,被挂起的线程才能继续,从而其他所有阻塞在相关锁上的线程也可以继续执行。但是,如果resume()方法操作意外地在suspend()方法前就被执行了,那么被挂起的线程可能很难有机会被继续执行了。并且,更严重的是:它所占用的锁不会被释放,因此可能会导致整个系统工作不正常。而且,对于被挂起的线程,从它线程的状态上看,居然还是Runnable状态,这也会影响我们对系统当前状态的判断。
上个例子:
public
class Demo07 { static Object object = new Object(); public static class T1 extends Thread { public T1(String name) { super(name); } @Override public void run() { synchronized (object) { System.out.println("in " + this.getName()); Thread.currentThread().suspend(); } } } public static void main(String[] args) throws InterruptedException { T1 t1 = new T1("t1"); t1.start(); Thread.sleep(100); T1 t2 = new T1("t2"); t2.start(); t1.resume(); t2.resume(); t1.join(); t2.join(); } } 运行代码输出: in t1 in t2 我们会发现程序不会结束,线程t2被挂起了,导致程序无法结束,使用jstack命令查看线程堆栈信息可以看到: "t2" #13 prio=5 os_prio=0 tid=0x000000002796c000 nid=0xa3c runnable [0x000000002867f000] java.lang.Thread.State: RUNNABLE at java.lang.Thread.suspend0(Native Method) at java.lang.Thread.suspend(Thread.java:1029) at com.itsoku.chat01.Demo07$T1.run(Demo07.java:20) - locked <0x0000000717372fc0> (a java.lang.Object) 发现t2线程在suspend0处被挂起了,t2的状态竟然还是RUNNABLE状态,线程明明被挂起了,状态还是运行中容易导致我们队当前系统进行误判,代码中已经调用resume()方法了,但是由于时间先后顺序的缘故,resume并没有生效,这导致了t2永远滴被挂起了,并且永远占用了object的锁,这对于系统来说可能是致命的。 6、等待线程结束(join)和谦让(yeild) 很多时候,一个线程的输入可能非常依赖于另外一个或者多个线程的输出,此时,这个线程就需要等待依赖的线程执行完毕,才能继续执行。jdk提供了join()操作来实现这个功能。如下所示,显示了2个join()方法: public final void join() throws InterruptedException; public final synchronized void join(long millis) throws InterruptedException; 第1个方法表示无限等待,它会一直只是当前线程。知道目标线程执行完毕。 第2个方法有个参数,用于指定等待时间,如果超过了给定的时间目标线程还在执行,当前线程也会停止等待,而继续往下执行。 比如:线程T1需要等待T2、T3完成之后才能继续执行,那么在T1线程中需要分别调用T2和T3的join()方法。 上个示例: public class Demo08 { static int num = 0; public static class T1 extends Thread { public T1(String name) { super(name); } @Override public void run() { System.out.println(System.currentTimeMillis() + ",start " + this.getName()); for (int i = 0; i < 10; i++) { num++; try { Thread.sleep(200); } catch (InterruptedException e) { e.printStackTrace(); } } System.out.println(System.currentTimeMillis() + ",end " + this.getName()); } } public static void main(String[] args) throws InterruptedException { T1 t1 = new T1("t1"); t1.start(); t1.join(); 
        System.out.println(System.currentTimeMillis() + ",num = " + num);
    }
}
执行结果:
1562939889129,start t1
1562939891134,end t1
1562939891134,num = 10
num的结果为10,第1、3行的时间戳相差2秒左右,说明主线程等待t1完成之后才继续执行的。
看一下jdk1.8中Thread.join()方法的实现:
public final synchronized void join(long millis) throws InterruptedException {
    long base = System.currentTimeMillis();
    long now = 0;
    if (millis < 0) {
        throw new IllegalArgumentException("timeout value is negative");
    }
    if (millis == 0) {
        while (isAlive()) {
            wait(0);
        }
    } else {
        while (isAlive()) {
            long delay = millis - now;
            if (delay <= 0) {
                break;
            }
            wait(delay);
            now = System.currentTimeMillis() - base;
        }
    }
}
从join的代码中可以看出,在被等待的线程上使用了synchronized,调用了它的wait()方法,线程最后执行完毕之后,系统会自动调用它的notifyAll()方法,唤醒所有在此线程上等待的其他线程。
注意:被等待的线程执行完毕之后,系统自动会调用该线程的notifyAll()方法。所以一般情况下,我们不要在线程对象上使用wait()、notify()、notifyAll()方法。
另外一个方法是Thread.yield(),它的定义如下:
public static native void yield();
yield是谦让的意思,这是一个静态方法,一旦执行,它会让当前线程出让CPU,但需要注意的是,出让CPU并不是说不让当前线程执行了,当前线程在出让CPU后,还会进行CPU资源的争夺,但是能否再抢到CPU的执行权就不一定了。因此,对Thread.yield()方法的调用好像就是在说:我已经完成了一些主要的工作,我可以休息一下了,可以让CPU给其他线程一些工作机会了。
如果觉得一个线程不太重要,或者优先级比较低,而又担心此线程会过多地占用CPU资源,那么可以在适当的时候调用一下Thread.yield()方法,给予其他线程更多的机会。
7、总结
1. 创建线程的2种方式:继承Thread类;实现Runnable接口
2. 启动线程:调用线程的start()方法
3. 终止线程:调用线程的stop()方法,方法已过时,建议不要使用
4. 线程中断相关的方法:调用线程实例interrupt()方法将中断标志置为true;使用线程实例方法isInterrupted()获取中断标志;调用Thread的静态方法interrupted()获取线程是否被中断,此方法调用之后会清除中断标志(将中断标志置为false了)
5. wait、notify、notifyAll方法,这块比较难理解,可以回过头去再理理
6. 线程挂起使用线程实例方法suspend(),恢复线程使用线程实例方法resume(),这2个方法都过时了,不建议使用
7. 等待线程结束:调用线程实例方法join()
8. 出让cpu资源:调用线程静态方法yield()
2、为什么多线程极其重要???
1. 硬件方面 - 摩尔定律失效
摩尔定律:
它是由英特尔创始人之一Gordon Moore(戈登·摩尔)提出来的。其内容为:
当价格不变时,集成电路上可容纳的元器件的数目约每隔18-24个月便会增加一倍,性能也将提升一倍。
换言之,每一美元所能买到的电脑性能,将每隔18-24个月翻一倍以上。这一定律揭示了信息技术进步的速度。
可是从2003年开始CPU主频已经不再翻倍,而是采用多核而不是更快的主频。
摩尔定律失效。
在主频不再提高且核数在不断增加的情况下,要想让程序更快就要用到并行或并发编程。
2.
软件方面 高并发系统,异步+回调等生产需求 3、从start一个线程说起 // Java线程理解以及openjdk中的实现 private native void start0(); // Java语言本身底层就是C++语言 OpenJDK源码网址:http://openjdk.java.net/open in new window openjdk8\hotspot\src\share\vm\runtime 更加底层的C++源码解读 openjdk8\jdk\src\share\native\java\lang thread.c java线程是通过start的方法启动执行的,主要内容在native方法start0中,Openjdk的写JNI一般是一一对应的,Thread.java对应的就是Thread.c start0其实就是JVM_StartThread。此时查看源代码可以看到在jvm.h中找到了声明,jvm.cpp中有实现。 image-20210903235656449 image-20210903235656449 openjdk8\hotspot\src\share\vm\prims jvm.cpp image-20210903235812379 image-20210903235812379 image-20210903235817486 image-20210903235817486 openjdk8\hotspot\src\share\vm\runtime thread.cpp image-20210903235840971 image-20210903235840971 4、用户线程和守护线程 Java线程分为用户线程和守护线程,线程的daemon属性为true表示是守护线程,false表示是用户线程 守护线程 是一种特殊的线程,在后台默默地完成一些系统性的服务,比如垃圾回收线程 用户线程 是系统的工作线程,它会完成这个程序需要完成的业务操作 public class DaemonDemo { public static void main(String[] args) { Thread t1 = new Thread(() -> { System.out.println(Thread.currentThread().getName() + "\t 开始运行," + (Thread.currentThread().isDaemon() ? "守护线程" : "用户线程")); while (true) { } }, "t1"); //线程的daemon属性为true表示是守护线程,false表示是用户线程 t1.setDaemon(true); t1.start(); //3秒钟后主线程再运行 try { TimeUnit.SECONDS.sleep(3); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("----------main线程运行完毕"); } } 重点 当程序中所有用户线程执行完毕之后,不管守护线程是否结束,系统都会自动退出 如果用户线程全部结束了,意味着程序需要完成的业务操作已经结束了,系统可以退出了。所以当系统只剩下守护进程的时候,java虚拟机会自动退出 设置守护线程,需要在start()方法之前进行 5、获得多线程的方法几种? • 传统的是 • 继承thread类 • 实现runnable接口, • java5以后 • 实现callable接口 • java的线程池获得 6、Callable接口 1、与runnable对比 // 创建新类MyThread实现runnable接口 class MyThread implements Runnable{ @Override public void run() { } } // 新类MyThread2实现callable接口 class MyThread2 implements Callable<Integer>{ @Override public Integer call() throws Exception { return 200; } } // 面试题:callable接口与runnable接口的区别? // 答:(1)是否有返回值 // (2)是否抛异常 // (3)落地方法不一样,一个是run,一个是call 2、怎么用 直接替换runnable是否可行? 
image image 不可行,因为:thread类的构造方法根本没有Callable image image 认识不同的人找中间人 image image public static void main(String[] args) throws ExecutionException, InterruptedException { FutureTask futureTask = new FutureTask(new MyThread2()); new Thread(futureTask,"AA").start(); } 运行成功后如何获得返回值? image image public static void main(String[] args) throws ExecutionException, InterruptedException { FutureTask futureTask = new FutureTask(new MyThread2()); new Thread(futureTask,"AA").start(); System.out.println(futureTask.get()); } 二、线程池 1、什么是线程池 大家用jdbc操作过数据库应该知道,操作数据库需要和数据库建立连接,拿到连接之后才能操作数据库,用完之后销毁。数据库连接的创建和销毁其实是比较耗时的,真正和业务相关的操作耗时是比较短的。每个数据库操作之前都需要创建连接,为了提升系统性能,后来出现了数据库连接池,系统启动的时候,先创建很多连接放在池子里面,使用的时候,直接从连接池中获取一个,使用完毕之后返回到池子里面,继续给其他需要者使用,这其中就省去创建连接的时间,从而提升了系统整体的性能。 线程池和数据库连接池的原理也差不多,创建线程去处理业务,可能创建线程的时间比处理业务的时间还长一些,如果系统能够提前为我们创建好线程,我们需要的时候直接拿来使用,用完之后不是直接将其关闭,而是将其返回到线程中中,给其他需要这使用,这样直接节省了创建和销毁的时间,提升了系统的性能。 简单的说,在使用了线程池之后,创建线程变成了从线程池中获取一个空闲的线程,然后使用,关闭线程变成了将线程归还到线程池。 2、为什么用线程池 线程池的优势: ​ 线程池做的工作主要是控制运行的线程数量,处理过程中将任务放入队列,然后在线程创建后启动这些任务,如果线程数量超过了最大数量,超出数量的线程排队等候,等其他线程执行完毕,再从队列中取出任务来执行。 它的主要特点为:线程复用;控制最大并发数;管理线程。 第一:降低资源消耗。通过重复利用已创建的线程降低线程创建和销毁造成的销耗。 第二:提高响应速度。当任务到达时,任务可以不需要等待线程创建就能立即执行。 第三:提高线程的可管理性。线程是稀缺资源,如果无限制的创建,不仅会销耗系统资源,还会降低系统的稳定性,使用线程池可以进行统一的分配,调优和监控 3、线程池的使用 1、Executors.newFixedThreadPool(int) ​ newFixedThreadPool创建的线程池corePoolSize和maximumPoolSize值是相等的,它使用的是LinkedBlockingQueue执行长期任务性能好,创建一个线程池,一池有N个固定的线程,有固定线程数的线程 public static ExecutorService newFixedThreadPool(int nThreads) { return new ThreadPoolExecutor(nThreads, nThreads, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>()); } 2、Executors.newSingleThreadExecutor() ​ newSingleThreadExecutor 创建的线程池corePoolSize和maximumPoolSize值都是1,它使用的是LinkedBlockingQueue一个任务一个任务的执行,一池一线程 public static ExecutorService newSingleThreadExecutor() { return new FinalizableDelegatedExecutorService (new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>())); } 3、Executors.newCachedThreadPool() ​ 
newCachedThreadPool创建的线程池将corePoolSize设置为0,将maximumPoolSize设置为Integer.MAX_VALUE,它使用的是SynchronousQueue,也就是说来了任务就创建线程运行,当线程空闲超过60秒,就销毁线程。
适合执行很多短期异步任务,线程池根据需要创建新线程,但在先前构建的线程可用时将重用它们。可扩容,遇强则强。
public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * 线程池
 * Arrays
 * Collections
 * Executors
 */
public class MyThreadPoolDemo {
    public static void main(String[] args) {
        //List list = new ArrayList();
        //List list = Arrays.asList("a","b");
        //固定数的线程池,一池五线程
        // ExecutorService threadPool = Executors.newFixedThreadPool(5); //一个银行网点,5个受理业务的窗口
        // ExecutorService threadPool = Executors.newSingleThreadExecutor(); //一个银行网点,1个受理业务的窗口
        ExecutorService threadPool = Executors.newCachedThreadPool(); //一个银行网点,可扩展受理业务的窗口
        //10个顾客请求
        try {
            for (int i = 1; i <= 10; i++) {
                threadPool.execute(() -> {
                    System.out.println(Thread.currentThread().getName() + "\t 办理业务");
                });
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            threadPool.shutdown();
        }
    }
}
4、ThreadPoolExecutor底层原理
举个例子,加深理解:
咱们作为开发者,上面都有开发主管,主管下面带领几个小弟干活,CTO给主管授权说,你可以招聘5个小弟干活。新来任务时,如果小弟还不到五个,立即去招聘一个来干这个新来的任务;当5个小弟都招来了,再来任务之后,将任务记录到一个表格中,表格中最多记录100个,小弟们会主动去表格中获取任务执行;如果5个小弟都在干活,并且表格中也记录满了,那你可以将小弟扩充到20个;如果20个小弟都在干活,并且存放任务的表也满了,产品经理再来任务后,是直接拒绝,还是让产品自己干,这个由你自己决定。小弟们都尽心尽力在干活,任务都被处理完了,突然公司业绩下滑,几个员工没事干,打酱油,为了节约成本,CTO让主管把小弟控制到5人,其他15个人直接被干掉了。所以作为小弟们,别让自己闲着,多干活。
原理:先找几个人干活,大家都忙于干活,任务太多可以排期,排期的任务太多了,再招一些人来干活,最后干活的和排期都达到上层领导要求的上限了,那需要采取一些其他策略进行处理了。对于长时间不干活的人,考虑将其开掉,节约资源和成本。
public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler) {
    if (corePoolSize < 0 ||
        maximumPoolSize <= 0 ||
        maximumPoolSize < corePoolSize ||
        keepAliveTime < 0)
        throw new IllegalArgumentException();
    if (workQueue == null || threadFactory == null || handler == null)
        throw new NullPointerException();
    this.corePoolSize = corePoolSize;
    this.maximumPoolSize = maximumPoolSize;
    this.workQueue = workQueue;
    this.keepAliveTime = unit.toNanos(keepAliveTime);
    this.threadFactory = threadFactory;
    this.handler = handler;
}
1. corePoolSize:核心线程大小,当提交一个任务到线程池时,线程池会创建一个线程来执行任务,即使有其他空闲线程可以处理任务也会创建新线程,等到工作的线程数大于核心线程数时就不会再创建了。如果调用了线程池的prestartAllCoreThreads方法,线程池会提前把核心线程都创建好,并启动
2. maximumPoolSize:线程池允许创建的最大线程数,此值必须大于等于1。如果队列满了,并且已创建的线程数小于最大线程数,则线程池会再创建新的线程执行任务。如果我们使用了无界队列,那么所有的任务会加入队列,这个参数就没有什么效果了
3. keepAliveTime:多余的空闲线程的存活时间,当前池中线程数量超过corePoolSize时,当空闲时间达到keepAliveTime时,多余线程会被销毁直到只剩下corePoolSize个线程为止。如果任务很多,并且每个任务的执行时间比较短,为避免线程重复创建和回收,可以调大这个时间,提高线程的利用率
4. unit:keepAliveTime的时间单位,可以选择的单位有天、小时、分钟、秒、毫秒、微秒和纳秒。类型是一个枚举java.util.concurrent.TimeUnit,这个枚举也经常使用
5. workQueue:任务队列,存放被提交但尚未被执行的任务,用于缓存待处理任务的阻塞队列
6. threadFactory:表示生成线程池中工作线程的线程工厂,用于创建线程,一般默认的即可,可以通过线程工厂给每个创建出来的线程设置更有意义的名字
7. handler:拒绝策略,表示当队列满了,并且工作线程大于等于线程池的最大线程数(maximumPoolSize)时如何来拒绝请求执行的runnable的策略
调用线程池的execute方法处理任务,执行execute方法的过程:
1. 判断线程池中运行的线程数是否小于corePoolSize,是:则创建新的线程来处理任务,否:执行下一步
2. 试图将任务添加到workQueue指定的队列中,如果无法添加到队列,进入下一步
3.
判断线程池中运行的线程数是否小于maximumPoolSize,是:则新增线程处理当前传入的任务,否:将任务传递给handler对象rejectedExecution方法处理 1、在创建了线程池后,开始等待请求。 2、当调用execute()方法添加一个请求任务时,线程池会做出如下判断: 2.1如果正在运行的线程数量小于corePoolSize,那么马上创建线程运行这个任务; 2.2如果正在运行的线程数量大于或等于corePoolSize,那么将这个任务放入队列; 2.3如果这个时候队列满了且正在运行的线程数量还小于maximumPoolSize,那么还是要创建非核心线程立刻运行这个任务; 2.4如果队列满了且正在运行的线程数量大于或等于maximumPoolSize,那么线程池会启动饱和拒绝策略来执行。 3、当一个线程完成任务时,它会从队列中取下一个任务来执行。 4、当一个线程无事可做超过一定的时间(keepAliveTime)时,线程会判断: 如果当前运行的线程数大于corePoolSize,那么这个线程就被停掉。 所以线程池的所有任务完成后,它最终会收缩到corePoolSize的大小。 5、拒绝策略?生产中如设置合理参数 1、线程池的拒绝策略 ​ 等待队列已经排满了,再也塞不下新任务了,同时,线程池中的max线程也达到了,无法继续为新任务服务。这个是时候我们就需要拒绝策略机制合理的处理这个问题。 2、JDK内置的拒绝策略 AbortPolicy(默认):直接抛出RejectedExecutionException异常阻止系统正常运行 CallerRunsPolicy:“调用者运行”一种调节机制,该策略既不会抛弃任务,也不会抛出异常,而是将某些任务回退到调用者,从而降低新任务的流量。 DiscardOldestPolicy:抛弃队列中等待最久的任务,然后把当前任务加入队列中尝试再次提交当前任务。 DiscardPolicy:该策略默默地丢弃无法处理的任务,不予任何处理也不抛出异常。如果允许任务丢失,这是最好的一种策略。 以上内置拒绝策略均实现了RejectedExecutionHandle接口 import java.util.concurrent.ArrayBlockingQueue; import java.util.concurrent.Executors; import java.util.concurrent.ThreadPoolExecutor; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; public class Demo5 { static class Task implements Runnable { String name; public Task(String name) { this.name = name; } @Override public void run() { System.out.println(Thread.currentThread().getName() + "处理" + this.name); try { TimeUnit.SECONDS.sleep(5); } catch (InterruptedException e) { e.printStackTrace(); } } @Override public String toString() { return "Task{" + "name='" + name + '\'' + '}'; } } public static void main(String[] args) { ThreadPoolExecutor executor = new ThreadPoolExecutor(1, 1, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(1), Executors.defaultThreadFactory(), (r, executors) -> { //自定义饱和策略 //记录一下无法处理的任务 System.out.println("无法处理的任务:" + r.toString()); }); for (int i = 0; i < 5; i++) { executor.execute(new Task("任务-" + i)); } executor.shutdown(); } } 无法处理的任务:Task{name='任务-2'} 无法处理的任务:Task{name='任务-3'} 
pool-1-thread-1处理任务-0
无法处理的任务:Task{name='任务-4'}
pool-1-thread-1处理任务-1
输出结果中可以看到有3个任务进入了饱和策略中,记录了任务的日志,对于无法处理的任务,我们最好能够记录一下,让开发人员能够知道。任务进入了饱和策略,说明线程池的配置可能不是太合理,或者机器的性能有限,需要做一些优化调整。
3、生产中合理的设置参数
要想合理地配置线程池,需要先分析任务的特性,可以从以下几个角度分析:
• 任务的性质:CPU密集型任务、IO密集型任务和混合型任务
• 任务的优先级:高、中、低
• 任务的执行时间:长、中、短
• 任务的依赖性:是否依赖其他的系统资源,如数据库连接。
性质不同的任务可以用不同规模的线程池分开处理。CPU密集型任务应该配置尽可能小的线程数,如配置cpu数量+1个线程的线程池。由于IO密集型任务并不是一直在执行任务,不能让cpu闲着,则应配置尽可能多的线程,如:cpu数量*2。混合型的任务,如果可以拆分,将其拆分成一个CPU密集型任务和一个IO密集型任务,只要这2个任务执行的时间相差不是太大,那么分解后执行的吞吐量将高于串行执行的吞吐量。可以通过Runtime.getRuntime().availableProcessors()方法获取cpu数量。优先级不同的任务可以对线程池采用优先级队列来处理,让优先级高的先执行。
使用队列的时候建议使用有界队列,有界队列增加了系统的稳定性,如果采用无界队列,任务太多的时候可能导致系统OOM,直接让系统宕机。
线程池中线程数量对系统的性能有一定的影响,我们的目标是希望系统能够发挥最好的性能,过多或者过少的线程数量都无法有效地使用机器的性能。在《Java Concurrency in Practice》书中给出了估算线程池大小的公式:
Ncpu = CPU的数量
Ucpu = 目标CPU的使用率,0<=Ucpu<=1
W/C = 等待时间与计算时间的比例
为保证处理器达到期望的使用率,最优的线程池的大小等于:
Nthreads = Ncpu × Ucpu × (1+W/C)
1. CPU密集型
// 查看CPU核数
System.out.println(Runtime.getRuntime().availableProcessors());
2. IO密集型
由于IO密集型任务线程并不是一直在执行任务,则应配置尽可能多的线程,如CPU核数 * 2
看公司业务是CPU密集型还是IO密集型的,这两种不一样,来决定线程池线程数的最佳合理配置数。
6、超级大坑
在工作中单一的/固定数的/可变的三种创建线程池的方法哪个用的多?
答案是一个都不用,我们工作中只能使用自定义的 image image 7、自定义线程池 import java.util.Arrays; import java.util.List; import java.util.concurrent.*; /** * 线程池 * Arrays * Collections * Executors */ public class MyThreadPoolDemo { public static void main(String[] args) { ExecutorService threadPool = new ThreadPoolExecutor( 2, 5, 2L, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(3), Executors.defaultThreadFactory(), //new ThreadPoolExecutor.AbortPolicy() //new ThreadPoolExecutor.CallerRunsPolicy() //new ThreadPoolExecutor.DiscardOldestPolicy() new ThreadPoolExecutor.DiscardPolicy() ); //10个顾客请求 try { for (int i = 1; i <= 10; i++) { threadPool.execute(() -> { System.out.println(Thread.currentThread().getName() + "\t 办理业务"); }); } } catch (Exception e) { e.printStackTrace(); } finally { threadPool.shutdown(); } } private static void threadPool() { //List list = new ArrayList(); //List list = Arrays.asList("a","b"); //固定数的线程池,一池五线程 // ExecutorService threadPool = Executors.newFixedThreadPool(5); //一个银行网点,5个受理业务的窗口 // ExecutorService threadPool = Executors.newSingleThreadExecutor(); //一个银行网点,1个受理业务的窗口 ExecutorService threadPool = Executors.newCachedThreadPool(); //一个银行网点,可扩展受理业务的窗口 //10个顾客请求 try { for (int i = 1; i <= 10; i++) { threadPool.execute(() -> { System.out.println(Thread.currentThread().getName() + "\t 办理业务"); }); } } catch (Exception e) { e.printStackTrace(); } finally { threadPool.shutdown(); } } } 8、线程池中的2个关闭方法 线程池提供了2个关闭方法:shutdownshutdownNow,当调用者两个方法之后,线程池会遍历内部的工作线程,然后调用每个工作线程的interrrupt方法给线程发送中断信号,内部如果无法响应中断信号的可能永远无法终止,所以如果内部有无线循环的,最好在循环内部检测一下线程的中断信号,合理的退出。调用者两个方法中任意一个,线程池的isShutdown方法就会返回true,当所有的任务线程都关闭之后,才表示线程池关闭成功,这时调用isTerminaed方法会返回true。 调用shutdown方法之后,线程池将不再接口新任务,内部会将所有已提交的任务处理完毕,处理完毕之后,工作线程自动退出。 而调用shutdownNow方法后,线程池会将还未处理的(在队里等待处理的任务)任务移除,将正在处理中的处理完毕之后,工作线程自动退出。 至于调用哪个方法来关闭线程,应该由提交到线程池的任务特性决定,多数情况下调用shutdown方法来关闭线程池,如果任务不一定要执行完,则可以调用shutdownNow方法。 9、BlockingQueue阻塞队列 1、栈与队列 栈:先进后出,后进先出 队列:先进先出 2、阻塞队列 阻塞:必须要阻塞/不得不阻塞 image image 线程1往阻塞队列里添加元素,线程2从阻塞队列里移除元素 
当队列是空的,从队列中获取元素的操作将会被阻塞 当队列是满的,从队列中添加元素的操作将会被阻塞 试图从空的队列中获取元素的线程将会被阻塞,直到其他线程往空的队列插入新的元素 试图向已满的队列中添加新元素的线程将会被阻塞,直到其他线程从队列中移除一个或多个元素或者完全清空,使队列变得空闲起来并后续新增 image image 3、种类分析 ArrayBlockingQueue:是一个基于数组结构的有界阻塞队列,此队列按照先进先出原则对元素进行排序 LinkedBlockingQueue:由链表结构组成的有界(但大小默认值为integer.MAX_VALUE)阻塞队列,此队列按照先进先出排序元素,吞吐量通常要高于ArrayBlockingQueue。静态工厂方法Executors.newFixedThreadPool使用了这个队列。 PriorityBlockingQueue:支持优先级排序的无界阻塞队列。 DelayQueue:使用优先级队列实现的延迟无界阻塞队列。 SynchronousQueue:不存储元素的阻塞队列,也即单个元素的队列,每个插入操作必须等到另外一个线程调用移除操作,否则插入操作一直处理阻塞状态,吞吐量通常要高于LinkedBlockingQueue,静态工厂方法Executors.newCachedThreadPool使用这个队列 LinkedTransferQueue:由链表组成的无界阻塞队列。 LinkedBlockingDeque:由链表组成的双向阻塞队列。 import java.util.concurrent.*; public class Demo2 { public static void main(String[] args) { ExecutorService executor = Executors.newCachedThreadPool(); for (int i = 0; i < 50; i++) { int j = i; String taskName = "任务" + j; executor.execute(() -> { System.out.println(Thread.currentThread().getName() + "处理" + taskName); //模拟任务内部处理耗时 try { TimeUnit.SECONDS.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } }); } executor.shutdown(); } } 代码中使用Executors.newCachedThreadPool()创建线程池,看一下的源码: public static ExecutorService newCachedThreadPool() { return new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>()); } 从输出中可以看出,系统创建了50个线程处理任务,代码中使用了SynchronousQueue同步队列,这种队列比较特殊,放入元素必须要有另外一个线程去获取这个元素,否则放入元素会失败或者一直阻塞在那里直到有线程取走,示例中任务处理休眠了指定的时间,导致已创建的工作线程都忙于处理任务,所以新来任务之后,将任务丢入同步队列会失败,丢入队列失败之后,会尝试新建线程处理任务。使用上面的方式创建线程池需要注意,如果需要处理的任务比较耗时,会导致新来的任务都会创建新的线程进行处理,可能会导致创建非常多的线程,最终耗尽系统资源,触发OOM。 PriorityBlockingQueue优先级队列的线程池 import java.util.concurrent.*; public class Demo3 { static class Task implements Runnable, Comparable<Task> { private int i; private String name; public Task(int i, String name) { this.i = i; this.name = name; } @Override public void run() { System.out.println(Thread.currentThread().getName() + "处理" + this.name); } @Override public int compareTo(Task o) { return 
Integer.compare(o.i, this.i); } } public static void main(String[] args) { ExecutorService executor = new ThreadPoolExecutor(1, 1, 60L, TimeUnit.SECONDS, new PriorityBlockingQueue()); for (int i = 0; i < 10; i++) { String taskName = "任务" + i; executor.execute(new Task(i, taskName)); } for (int i = 100; i >= 90; i--) { String taskName = "任务" + i; executor.execute(new Task(i, taskName)); } executor.shutdown(); } } 输出中,除了第一个任务,其他任务按照优先级高低按顺序处理。原因在于:创建线程池的时候使用了优先级队列,进入队列中的任务会进行排序,任务的先后顺序由Task中的i变量决定。向PriorityBlockingQueue加入元素的时候,内部会调用代码中Task的compareTo方法决定元素的先后顺序。 4、BlockingQueue核心方法 image image image image import java.util.ArrayList; import java.util.List; import java.util.concurrent.ArrayBlockingQueue; import java.util.concurrent.BlockingQueue; import java.util.concurrent.TimeUnit; /** * 阻塞队列 */ public class BlockingQueueDemo { public static void main(String[] args) throws InterruptedException { // List list = new ArrayList(); BlockingQueue<String> blockingQueue = new ArrayBlockingQueue<>(3); //第一组 // System.out.println(blockingQueue.add("a")); // System.out.println(blockingQueue.add("b")); // System.out.println(blockingQueue.add("c")); // System.out.println(blockingQueue.element()); //System.out.println(blockingQueue.add("x")); // System.out.println(blockingQueue.remove()); // System.out.println(blockingQueue.remove()); // System.out.println(blockingQueue.remove()); // System.out.println(blockingQueue.remove()); // 第二组 // System.out.println(blockingQueue.offer("a")); // System.out.println(blockingQueue.offer("b")); // System.out.println(blockingQueue.offer("c")); // System.out.println(blockingQueue.offer("x")); // System.out.println(blockingQueue.poll()); // System.out.println(blockingQueue.poll()); // System.out.println(blockingQueue.poll()); // System.out.println(blockingQueue.poll()); // 第三组 // blockingQueue.put("a"); // blockingQueue.put("b"); // blockingQueue.put("c"); // //blockingQueue.put("x"); // System.out.println(blockingQueue.take()); // 
System.out.println(blockingQueue.take());
        // System.out.println(blockingQueue.take());
        // System.out.println(blockingQueue.take());

        // 第四组
        System.out.println(blockingQueue.offer("a"));
        System.out.println(blockingQueue.offer("b"));
        System.out.println(blockingQueue.offer("c"));
        System.out.println(blockingQueue.offer("a", 3L, TimeUnit.SECONDS));
    }
}

10、扩展线程池

虽然jdk提供了ThreadPoolExecutor这个高性能线程池,但是如果我们自己想在这个线程池上面做一些扩展,比如,监控每个任务执行的开始时间、结束时间,或者一些其他自定义的功能,我们应该怎么办?

这个jdk已经帮我们想到了,ThreadPoolExecutor内部提供了beforeExecute、afterExecute、terminated几个方法,可以由开发人员自己重写这些方法。看一下线程池内部的源码:

try {
    beforeExecute(wt, task);//任务执行之前调用的方法
    Throwable thrown = null;
    try {
        task.run();
    } catch (RuntimeException x) {
        thrown = x; throw x;
    } catch (Error x) {
        thrown = x; throw x;
    } catch (Throwable x) {
        thrown = x; throw new Error(x);
    } finally {
        afterExecute(task, thrown);//任务执行完毕之后调用的方法
    }
} finally {
    task = null;
    w.completedTasks++;
    w.unlock();
}

beforeExecute:任务执行之前调用的方法,有2个参数,第1个参数是执行任务的线程,第2个参数是任务

protected void beforeExecute(Thread t, Runnable r) { }

afterExecute:任务执行完成之后调用的方法,2个参数,第1个参数表示任务,第2个参数表示任务执行时的异常信息,如果无异常,第二个参数为null

protected void afterExecute(Runnable r, Throwable t) { }

terminated:线程池最终关闭之后调用的方法。所有的工作线程都退出了,最终线程池会退出,退出时调用该方法

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class Demo6 {
    static class Task implements Runnable {
        String name;

        public Task(String name) {
            this.name = name;
        }

        @Override
        public void run() {
            System.out.println(Thread.currentThread().getName() + "处理" + this.name);
            try {
                TimeUnit.SECONDS.sleep(2);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }

        @Override
        public String toString() {
            return "Task{" + "name='" + name + '\'' + '}';
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(10, 10, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(1),
Executors.defaultThreadFactory(), (r, executors) -> { //自定义饱和策略 //记录一下无法处理的任务 System.out.println("无法处理的任务:" + r.toString()); }) { @Override protected void beforeExecute(Thread t, Runnable r) { System.out.println(System.currentTimeMillis() + "," + t.getName() + ",开始执行任务:" + r.toString()); } @Override protected void afterExecute(Runnable r, Throwable t) { System.out.println(System.currentTimeMillis() + "," + Thread.currentThread().getName() + ",任务:" + r.toString() + ",执行完毕!"); } @Override protected void terminated() { System.out.println(System.currentTimeMillis() + "," + Thread.currentThread().getName() + ",关闭线程池!"); } }; for (int i = 0; i < 10; i++) { executor.execute(new Task("任务-" + i)); } TimeUnit.SECONDS.sleep(1); executor.shutdown(); } } 1564324574847,pool-1-thread-1,开始执行任务:Task{name='任务-0'} 1564324574850,pool-1-thread-3,开始执行任务:Task{name='任务-2'} pool-1-thread-3处理任务-2 1564324574849,pool-1-thread-2,开始执行任务:Task{name='任务-1'} pool-1-thread-2处理任务-1 1564324574848,pool-1-thread-5,开始执行任务:Task{name='任务-4'} pool-1-thread-5处理任务-4 1564324574848,pool-1-thread-4,开始执行任务:Task{name='任务-3'} pool-1-thread-4处理任务-3 1564324574850,pool-1-thread-7,开始执行任务:Task{name='任务-6'} pool-1-thread-7处理任务-6 1564324574850,pool-1-thread-6,开始执行任务:Task{name='任务-5'} 1564324574851,pool-1-thread-8,开始执行任务:Task{name='任务-7'} pool-1-thread-8处理任务-7 pool-1-thread-1处理任务-0 pool-1-thread-6处理任务-5 1564324574851,pool-1-thread-10,开始执行任务:Task{name='任务-9'} pool-1-thread-10处理任务-9 1564324574852,pool-1-thread-9,开始执行任务:Task{name='任务-8'} pool-1-thread-9处理任务-8 1564324576851,pool-1-thread-2,任务:Task{name='任务-1'},执行完毕! 1564324576851,pool-1-thread-3,任务:Task{name='任务-2'},执行完毕! 1564324576852,pool-1-thread-1,任务:Task{name='任务-0'},执行完毕! 1564324576852,pool-1-thread-4,任务:Task{name='任务-3'},执行完毕! 1564324576852,pool-1-thread-8,任务:Task{name='任务-7'},执行完毕! 1564324576852,pool-1-thread-7,任务:Task{name='任务-6'},执行完毕! 1564324576852,pool-1-thread-5,任务:Task{name='任务-4'},执行完毕! 1564324576853,pool-1-thread-6,任务:Task{name='任务-5'},执行完毕! 
1564324576853,pool-1-thread-10,任务:Task{name='任务-9'},执行完毕!
1564324576853,pool-1-thread-9,任务:Task{name='任务-8'},执行完毕!
1564324576853,pool-1-thread-9,关闭线程池!

从输出结果中可以看到,每个需要执行的任务打印了3行日志,执行前由线程池的beforeExecute打印,执行时会调用任务的run方法,任务执行完毕之后,会调用线程池的afterExecute方法,从每个任务的首尾2条日志中可以看到每个任务耗时2秒左右。线程池最终关闭之后调用了terminated方法。

三、CompletableFuture

1、Future和Callable接口

Future接口定义了操作异步任务执行的一些方法,如获取异步任务的执行结果、取消任务的执行、判断任务是否被取消、判断任务执行是否完毕等。

Callable接口中定义了有返回值的任务需要实现的方法。

比如主线程让一个子线程去执行任务,子线程可能比较耗时,启动子线程开始执行任务后,主线程就去做其他事情了,过了一会才去获取子任务的执行结果。

2、从之前的FutureTask开始

Future接口相关架构

code1

public class CompletableFutureDemo {
    public static void main(String[] args) throws ExecutionException, InterruptedException, TimeoutException {
        FutureTask<Integer> futureTask = new FutureTask<>(() -> {
            System.out.println("-----come in FutureTask");
            try {
                TimeUnit.SECONDS.sleep(3);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            return ThreadLocalRandom.current().nextInt(100);
        });
        Thread t1 = new Thread(futureTask, "t1");
        t1.start();
        //3秒钟后才出来结果,还没有计算你提前来拿(只要一调用get方法,对于结果就是不见不散,会导致阻塞)
        //System.out.println(Thread.currentThread().getName()+"\t"+futureTask.get());
        //3秒钟后才出来结果,我只想等待1秒钟,过时不候
        System.out.println(Thread.currentThread().getName()+"\t"+futureTask.get(1L,TimeUnit.SECONDS));
        System.out.println(Thread.currentThread().getName()+"\t"+" run...
here"); } } • get()阻塞 一旦调用get()方法,不管是否计算完成都会导致阻塞 code2 public class CompletableFutureDemo2 { public static void main(String[] args) throws ExecutionException, InterruptedException { FutureTask<String> futureTask = new FutureTask<>(() -> { System.out.println("-----come in FutureTask"); try { TimeUnit.SECONDS.sleep(3); } catch (InterruptedException e) { e.printStackTrace(); } return ""+ ThreadLocalRandom.current().nextInt(100); }); new Thread(futureTask,"t1").start(); System.out.println(Thread.currentThread().getName()+"\t"+"线程完成任务"); /** * 用于阻塞式获取结果,如果想要异步获取结果,通常都会以轮询的方式去获取结果 */ while (true){ if(futureTask.isDone()){ System.out.println(futureTask.get()); break; } } } } isDone()轮询 轮询的方式会耗费无谓的CPU资源,而且也不见得能及时地得到计算结果. 如果想要异步获取结果,通常都会以轮询的方式去获取结果 尽量不要阻塞 不见不散 -- 过时不候 -- 轮询 3、对Future的改进 1、类CompletableFuture image-20210904001054865 image-20210904001054865 image-20210904001102892 image-20210904001102892 image-20210904001143180 image-20210904001143180 2、接口CompletionStage image-20210904001214909 image-20210904001214909 代表异步计算过程中的某一个阶段,一个阶段完成以后可能会触发另外一个阶段,有些类似Linux系统的管道分隔符传参数。 4、核心的四个静态方法 1、runAsync 无 返回值 public static CompletableFuture<Void> runAsync(Runnable runnable) public static CompletableFuture<Void> runAsync(Runnable runnable,Executor executor) 2、supplyAsync 有 返回值 public static <U> CompletableFuture<U> supplyAsync(Supplier<U> supplier) public static <U> CompletableFuture<U> supplyAsync(Supplier<U> supplier,Executor executor) 上述Executor executor参数说明 没有指定Executor的方法,直接使用默认的ForkJoinPool.commonPool() 作为它的线程池执行异步代码。 如果指定线程池,则使用我们自定义的或者特别指定的线程池执行异步代码 3、Code 无 返回值 public class CompletableFutureDemo3{ public static void main(String[] args) throws ExecutionException, InterruptedException{ CompletableFuture<Void> future = CompletableFuture.runAsync(() -> { System.out.println(Thread.currentThread().getName()+"\t"+"-----come in"); //暂停几秒钟线程 try { TimeUnit.SECONDS.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("-----task is over"); }); 
System.out.println(future.get()); } } image-20210904001511947 image-20210904001511947 4、Code 有 返回值 public class CompletableFutureDemo3{ public static void main(String[] args) throws ExecutionException, InterruptedException{ CompletableFuture<Integer> completableFuture = CompletableFuture.supplyAsync(() -> { System.out.println(Thread.currentThread().getName() + "\t" + "-----come in"); //暂停几秒钟线程 try { TimeUnit.SECONDS.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } return ThreadLocalRandom.current().nextInt(100); }); System.out.println(completableFuture.get()); } } 5、Code 减少阻塞和轮询 从Java8开始引入了CompletableFuture,它是Future的功能增强版,可以传入回调对象,当异步任务完成或者发生异常时,自动调用回调对象的回调方法 public class CompletableFutureDemo3{ public static void main(String[] args) throws Exception{ CompletableFuture<Integer> completableFuture = CompletableFuture.supplyAsync(() -> { System.out.println(Thread.currentThread().getName() + "\t" + "-----come in"); int result = ThreadLocalRandom.current().nextInt(10); //暂停几秒钟线程 try { TimeUnit.SECONDS.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("-----计算结束耗时1秒钟,result: "+result); if(result > 6){ int age = 10/0; } return result; }).whenComplete((v,e) ->{ if(e == null){ System.out.println("-----result: "+v); } }).exceptionally(e -> { System.out.println("-----exception: "+e.getCause()+"\t"+e.getMessage()); return -44; }); //主线程不要立刻结束,否则CompletableFuture默认使用的线程池会立刻关闭:暂停3秒钟线程 try { TimeUnit.SECONDS.sleep(3); } catch (InterruptedException e) { e.printStackTrace(); } } } 6、CompletableFuture的优点 异步任务结束时,会自动回调某个对象的方法; 异步任务出错时,会自动回调某个对象的方法; 主线程设置好回调后,不再关心异步任务的执行,异步任务之间可以顺序执行 5、join和get对比 get会抛出异常,join不需要 6、案例精讲-从电商网站的比价需求说开去 切记,功能→性能 ​ 经常出现在等待某条 SQL 执行完成后,再继续执行下一条 SQL ,而这两条 SQL 本身是并无关系的,可以同时进行执行的。我们希望能够两条 SQL 同时进行处理,而不是等待其中的某一条 SQL 完成后,再继续下一条。同理, 对于分布式微服务的调用,按照实际业务,如果是无关联step by step的业务,可以尝试是否可以多箭齐发,同时调用。我们去比同一个商品在各个平台上的价格,要求获得一个清单列表, 1 step by step,查完京东查淘宝,查完淘宝查天猫...... 
2 all 一口气同时查询。。。。。 import lombok.Getter; import java.util.Arrays; import java.util.List; import java.util.concurrent.CompletableFuture; import java.util.concurrent.ThreadLocalRandom; import java.util.concurrent.TimeUnit; import java.util.stream.Collectors; public class T1{ static List<NetMall> list = Arrays.asList( new NetMall("jd"), new NetMall("tmall"), new NetMall("pdd"), new NetMall("mi") ); public static List<String> findPriceSync(List<NetMall> list,String productName){ return list.stream().map(mall -> String.format(productName+" %s price is %.2f",mall.getNetMallName(),mall.getPriceByName(productName))).collect(Collectors.toList()); } public static List<String> findPriceASync(List<NetMall> list,String productName){ return list.stream().map(mall -> CompletableFuture.supplyAsync(() -> String.format(productName + " %s price is %.2f", mall.getNetMallName(), mall.getPriceByName(productName)))).collect(Collectors.toList()).stream().map(CompletableFuture::join).collect(Collectors.toList()); } public static void main(String[] args){ long startTime = System.currentTimeMillis(); List<String> list1 = findPriceSync(list, "thinking in java"); for (String element : list1) { System.out.println(element); } long endTime = System.currentTimeMillis(); System.out.println("----costTime: "+(endTime - startTime) +" 毫秒"); long startTime2 = System.currentTimeMillis(); List<String> list2 = findPriceASync(list, "thinking in java"); for (String element : list2) { System.out.println(element); } long endTime2 = System.currentTimeMillis(); System.out.println("----costTime: "+(endTime2 - startTime2) +" 毫秒"); } } class NetMall{ @Getter private String netMallName; public NetMall(String netMallName){ this.netMallName = netMallName; } public double getPriceByName(String productName){ return calcPrice(productName); } private double calcPrice(String productName){ try { TimeUnit.SECONDS.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } return 
ThreadLocalRandom.current().nextDouble() + productName.charAt(0); } } 7、CompletableFuture常用方法 1、获得结果和触发计算 获取结果 // 不见不散 public T get() // 过时不候 public T get(long timeout, TimeUnit unit) // 没有计算完成的情况下,给我一个替代结果 // 立即获取结果不阻塞 计算完,返回计算完成后的结果 没算完,返回设定的valueIfAbsent值 public T getNow(T valueIfAbsent) public class CompletableFutureDemo2{ public static void main(String[] args) throws ExecutionException, InterruptedException{ CompletableFuture<Integer> completableFuture = CompletableFuture.supplyAsync(() -> { try { TimeUnit.SECONDS.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } return 533; }); //去掉注释上面计算没有完成,返回444 //开启注释上满计算完成,返回计算结果 try { TimeUnit.SECONDS.sleep(2); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println(completableFuture.getNow(444)); } } public T join() public class CompletableFutureDemo2{ public static void main(String[] args) throws ExecutionException, InterruptedException{ System.out.println(CompletableFuture.supplyAsync(() -> "abc").thenApply(r -> r + "123").join()); } } 主动触发计算 // 是否打断get方法立即返回括号值 public boolean complete(T value) public class CompletableFutureDemo4{ public static void main(String[] args) throws ExecutionException, InterruptedException{ CompletableFuture<Integer> completableFuture = CompletableFuture.supplyAsync(() -> { try { TimeUnit.SECONDS.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } return 533; }); //注释掉暂停线程,get还没有算完只能返回complete方法设置的444;暂停2秒钟线程,异步线程能够计算完成返回get try { TimeUnit.SECONDS.sleep(2); } catch (InterruptedException e) { e.printStackTrace(); } //当调用CompletableFuture.get()被阻塞的时候,complete方法就是结束阻塞并get()获取设置的complete里面的值. 
System.out.println(completableFuture.complete(444)+"\t"+completableFuture.get()); } } 2、对计算结果进行处理 thenApply // 计算结果存在依赖关系,这两个线程串行化 // 由于存在依赖关系(当前步错,不走下一步),当前步骤有异常的话就叫停。 public class CompletableFutureDemo4{ public static void main(String[] args) throws ExecutionException, InterruptedException{ //当一个线程依赖另一个线程时用 thenApply 方法来把这两个线程串行化, CompletableFuture.supplyAsync(() -> { //暂停几秒钟线程 try { TimeUnit.SECONDS.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("111"); return 1024; }).thenApply(f -> { System.out.println("222"); return f + 1; }).thenApply(f -> { //int age = 10/0; // 异常情况:那步出错就停在那步。 System.out.println("333"); return f + 1; }).whenCompleteAsync((v,e) -> { System.out.println("*****v: "+v); }).exceptionally(e -> { e.printStackTrace(); return null; }); System.out.println("-----主线程结束,END"); // 主线程不要立刻结束,否则CompletableFuture默认使用的线程池会立刻关闭: try { TimeUnit.SECONDS.sleep(2); } catch (InterruptedException e) { e.printStackTrace(); } } } handle // 有异常也可以往下一步走,根据带的异常参数可以进一步处理 public class CompletableFutureDemo4{ public static void main(String[] args) throws ExecutionException, InterruptedException{ //当一个线程依赖另一个线程时用 handle 方法来把这两个线程串行化, // 异常情况:有异常也可以往下一步走,根据带的异常参数可以进一步处理 CompletableFuture.supplyAsync(() -> { //暂停几秒钟线程 try { TimeUnit.SECONDS.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("111"); return 1024; }).handle((f,e) -> { int age = 10/0; System.out.println("222"); return f + 1; }).handle((f,e) -> { System.out.println("333"); return f + 1; }).whenCompleteAsync((v,e) -> { System.out.println("*****v: "+v); }).exceptionally(e -> { e.printStackTrace(); return null; }); System.out.println("-----主线程结束,END"); // 主线程不要立刻结束,否则CompletableFuture默认使用的线程池会立刻关闭: try { TimeUnit.SECONDS.sleep(2); } catch (InterruptedException e) { e.printStackTrace(); } } } image-20210904003033912 image-20210904003033912 image-20210904003036925 image-20210904003036925 3、对计算结果进行消费 接收任务的处理结果,并消费处理,无返回结果 //thenAccept public 
static void main(String[] args) throws ExecutionException, InterruptedException{ CompletableFuture.supplyAsync(() -> { return 1; }).thenApply(f -> { return f + 2; }).thenApply(f -> { return f + 3; }).thenApply(f -> { return f + 4; }).thenAccept(r -> System.out.println(r)); } Code之任务之间的顺序执行 thenRun thenRun(Runnable runnable) // 任务 A 执行完执行 B,并且 B 不需要 A 的结果 thenAccept thenAccept(Consumer action) // 任务 A 执行完执行 B,B 需要 A 的结果,但是任务 B 无返回值 thenApply thenApply(Function fn) // 任务 A 执行完执行 B,B 需要 A 的结果,同时任务 B 有返回值 System.out.println(CompletableFuture.supplyAsync(() -> "resultA").thenRun(() -> {}).join()); System.out.println(CompletableFuture.supplyAsync(() -> "resultA").thenAccept(resultA -> {}).join()); System.out.println(CompletableFuture.supplyAsync(() -> "resultA").thenApply(resultA -> resultA + " resultB").join()); 4、对计算速度进行选用 谁快用谁 applyToEither public class CompletableFutureDemo5{ public static void main(String[] args) throws ExecutionException, InterruptedException{ CompletableFuture<Integer> completableFuture1 = CompletableFuture.supplyAsync(() -> { System.out.println(Thread.currentThread().getName() + "\t" + "---come in "); //暂停几秒钟线程 try { TimeUnit.SECONDS.sleep(2); } catch (InterruptedException e) { e.printStackTrace(); } return 10; }); CompletableFuture<Integer> completableFuture2 = CompletableFuture.supplyAsync(() -> { System.out.println(Thread.currentThread().getName() + "\t" + "---come in "); try { TimeUnit.SECONDS.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } return 20; }); CompletableFuture<Integer> thenCombineResult = completableFuture1.applyToEither(completableFuture2,f -> { System.out.println(Thread.currentThread().getName() + "\t" + "---come in "); return f + 1; }); System.out.println(Thread.currentThread().getName() + "\t" + thenCombineResult.get()); } } 5、对计算结果进行合并 两个CompletionStage任务都完成后,最终能把两个任务的结果一起交给thenCombine 来处理 先完成的先等着,等待其它分支任务 thenCombine code标准版,好理解先拆分 public class CompletableFutureDemo2{ public static void main(String[] 
args) throws ExecutionException, InterruptedException{ CompletableFuture<Integer> completableFuture1 = CompletableFuture.supplyAsync(() -> { System.out.println(Thread.currentThread().getName() + "\t" + "---come in "); return 10; }); CompletableFuture<Integer> completableFuture2 = CompletableFuture.supplyAsync(() -> { System.out.println(Thread.currentThread().getName() + "\t" + "---come in "); return 20; }); CompletableFuture<Integer> thenCombineResult = completableFuture1.thenCombine(completableFuture2, (x, y) -> { System.out.println(Thread.currentThread().getName() + "\t" + "---come in "); return x + y; }); System.out.println(thenCombineResult.get()); } } code表达式 public class CompletableFutureDemo6{ public static void main(String[] args) throws ExecutionException, InterruptedException{ CompletableFuture<Integer> thenCombineResult = CompletableFuture.supplyAsync(() -> { System.out.println(Thread.currentThread().getName() + "\t" + "---come in 1"); return 10; }).thenCombine(CompletableFuture.supplyAsync(() -> { System.out.println(Thread.currentThread().getName() + "\t" + "---come in 2"); return 20; }), (x,y) -> { System.out.println(Thread.currentThread().getName() + "\t" + "---come in 3"); return x + y; }).thenCombine(CompletableFuture.supplyAsync(() -> { System.out.println(Thread.currentThread().getName() + "\t" + "---come in 4"); return 30; }),(a,b) -> { System.out.println(Thread.currentThread().getName() + "\t" + "---come in 5"); return a + b; }); System.out.println("-----主线程结束,END"); System.out.println(thenCombineResult.get()); // 主线程不要立刻结束,否则CompletableFuture默认使用的线程池会立刻关闭: try { TimeUnit.SECONDS.sleep(10); } catch (InterruptedException e) { e.printStackTrace(); } } } 8、分支合并框架 Fork:把一个复杂任务进行分拆,大事化小 Join:把分拆任务的结果进行合并 image image 1、相关类 1、ForkJoinPool image image 2、ForkJoinTask image image 3、RecursiveTask image image // 递归任务:继承后可以实现递归(自己调自己)调用的任务 class Fibonacci extends RecursiveTask<Integer> { final int n; Fibonacci(int n) { this.n = n; } Integer compute() { if (n 
<= 1) return n; Fibonacci f1 = new Fibonacci(n - 1); f1.fork(); Fibonacci f2 = new Fibonacci(n - 2); return f2.compute() + f1.join(); } } 2、示例 import java.util.concurrent.ExecutionException; import java.util.concurrent.ForkJoinPool; import java.util.concurrent.ForkJoinTask; import java.util.concurrent.RecursiveTask; class MyTask extends RecursiveTask<Integer>{ private static final Integer ADJUST_VALUE = 10; private int begin; private int end; private int result; public MyTask(int begin, int end) { this.begin = begin; this.end = end; } @Override protected Integer compute() { if((end - begin)<=ADJUST_VALUE){ for(int i =begin;i <= end;i++){ result = result + i; } }else{ int middle = (begin + end)/2; MyTask task01 = new MyTask(begin,middle); MyTask task02 = new MyTask(middle+1,end); task01.fork(); task02.fork(); result = task01.join() + task02.join(); } return result; } } /** * 分支合并例子 * ForkJoinPool * ForkJoinTask * RecursiveTask */ public class ForkJoinDemo { public static void main(String[] args) throws ExecutionException, InterruptedException { MyTask myTask = new MyTask(0,100); ForkJoinPool forkJoinPool = new ForkJoinPool(); ForkJoinTask<Integer> forkJoinTask = forkJoinPool.submit(myTask); System.out.println(forkJoinTask.get()); forkJoinPool.shutdown(); } } 四、Java“锁”事 1、Lock image image // Lock implementations provide more extensive locking operations than can be obtained using synchronized methods and statements. They allow more flexible structuring, may have quite different properties, and may support multiple associated Condition objects. // 锁实现提供了比使用同步方法和语句可以获得的更广泛的锁操作。它们允许更灵活的结构,可能具有非常不同的属性,并且可能支持多个关联的条件对象 2、synchronized与Lock的区别 1. 首先synchronized是java内置关键字,在jvm层面,Lock是个java类; 2. synchronized无法判断是否获取锁的状态,Lock可以判断是否获取到锁; 3. synchronized会自动释放锁(a 线程执行完同步代码会释放锁 ;b 线程执行过程中发生异常会释放锁),Lock需在finally中手工释放锁(unlock()方法释放锁),否则容易造成线程死锁; 4. 用synchronized关键字的两个线程1和线程2,如果当前线程1获得锁,线程2线程等待。如果线程1阻塞,线程2则会一直等待下去,而Lock锁就不一定会等待下去,如果尝试获取不到锁,线程可以不用一直等待就结束了; 5. 
synchronized的锁可重入、不可中断、非公平,而Lock锁可重入、可判断、可公平(两者皆可) 6. Lock锁适合大量同步的代码的同步问题,synchronized锁适合代码少量的同步问题。 3、synchronized 1. 修饰实例方法,作用于当前实例,进入同步代码前需要先获取实例的锁 2. 修饰静态方法,作用于类的Class对象,进入修饰的静态方法前需要先获取类的Class对象的锁 3. 修饰代码块,需要指定加锁对象(记做lockobj),在进入同步代码块前需要先获取lockobj的锁 1、synchronized作用于实例对象 所谓实例对象锁就是用synchronized修饰实例对象的实例方法,注意是实例方法,不是静态方法,如: public class Demo2 { int num = 0; public synchronized void add() { num++; } public static class T extends Thread { private Demo2 demo2; public T(Demo2 demo2) { this.demo2 = demo2; } @Override public void run() { for (int i = 0; i < 10000; i++) { this.demo2.add(); } } } public static void main(String[] args) throws InterruptedException { Demo2 demo2 = new Demo2(); T t1 = new T(demo2); T t2 = new T(demo2); t1.start(); t2.start(); t1.join(); t2.join(); System.out.println(demo2.num); } } main()方法中创建了一个对象demo2和2个线程t1、t2,t1、t2中调用demo2的add()方法10000次,add()方法中执行了num++,num++实际上是分3步,获取num,然后将num+1,然后将结果赋值给num,如果t2在t1读取num和num+1之间获取了num的值,那么t1和t2会读取到同样的值,然后执行num++,两次操作之后num是相同的值,最终和期望的结果不一致,造成了线程安全失败,因此我们对add方法加了synchronized来保证线程安全。 注意:m1()方法是实例方法,两个线程操作m1()时,需要先获取demo2的锁,没有获取到锁的,将等待,直到其他线程释放锁为止。 synchronize作用于实例方法需要注意: 1. 实例方法上加synchronized,线程安全的前提是,多个线程操作的是同一个实例,如果多个线程作用于不同的实例,那么线程安全是无法保证的 2. 
同一个实例的多个实例方法上有synchronized,这些方法都是互斥的,同一时间只允许一个线程操作同一个实例的其中的一个synchronized方法 2、synchronized作用于静态方法 当synchronized作用于静态方法时,锁的对象就是当前类的Class对象。如: public class Demo3 { static int num = 0; public static synchronized void m1() { for (int i = 0; i < 10000; i++) { num++; } } public static class T1 extends Thread { @Override public void run() { Demo3.m1(); } } public static void main(String[] args) throws InterruptedException { T1 t1 = new T1(); T1 t2 = new T1(); T1 t3 = new T1(); t1.start(); t2.start(); t3.start(); //等待3个线程结束打印num t1.join(); t2.join(); t3.join(); System.out.println(Demo3.num); /** * 打印结果: * 30000 */ } } 上面代码打印30000,和期望结果一致。m1()方法是静态方法,有synchronized修饰,锁用于与Demo3.class对象,和下面的写法类似: public static void m1() { synchronized (Demo4.class) { for (int i = 0; i < 10000; i++) { num++; } } } 3、synchronized同步代码块 除了使用关键字修饰实例方法和静态方法外,还可以使用同步代码块,在某些情况下,我们编写的方法体可能比较大,同时存在一些比较耗时的操作,而需要同步的代码又只有一小部分,如果直接对整个方法进行同步操作,可能会得不偿失,此时我们可以使用同步代码块的方式对需要同步的代码进行包裹,这样就无需对整个方法进行同步操作了,同步代码块的使用示例如下: public class Demo5 implements Runnable { static Demo5 instance = new Demo5(); static int i = 0; @Override public void run() { //省略其他耗时操作.... 
//使用同步代码块对变量i进行同步操作,锁对象为instance synchronized (instance) { for (int j = 0; j < 10000; j++) { i++; } } } public static void main(String[] args) throws InterruptedException { Thread t1 = new Thread(instance); Thread t2 = new Thread(instance); t1.start(); t2.start(); t1.join(); t2.join(); System.out.println(i); } } 从代码看出,将synchronized作用于一个给定的实例对象instance,即当前实例对象就是锁对象,每次当线程进入synchronized包裹的代码块时就会要求当前线程持有instance实例对象锁,如果当前有其他线程正持有该对象锁,那么新到的线程就必须等待,这样也就保证了每次只有一个线程执行i++;操作。当然除了instance作为对象外,我们还可以使用this对象(代表当前实例)或者当前类的class对象作为锁,如下代码: //this,当前实例对象锁 synchronized(this){ for(int j=0;j<1000000;j++){ i++; } } //class对象锁 synchronized(Demo5.class){ for(int j=0;j<1000000;j++){ i++; } } 分析代码是否互斥的方法,先找出synchronized作用的对象是谁,如果多个线程操作的方法中synchronized作用的锁对象一样,那么这些线程同时异步执行这些方法就是互斥的。如下代码: public class Demo6 { //作用于当前类的实例对象 public synchronized void m1() { } //作用于当前类的实例对象 public synchronized void m2() { } //作用于当前类的实例对象 public void m3() { synchronized (this) { } } //作用于当前类Class对象 public static synchronized void m4() { } //作用于当前类Class对象 public static void m5() { synchronized (Demo6.class) { } } public static class T extends Thread{ Demo6 demo6; public T(Demo6 demo6) { this.demo6 = demo6; } @Override public void run() { super.run(); } } public static void main(String[] args) { Demo6 d1 = new Demo6(); Thread t1 = new Thread(() -> { d1.m1(); }); t1.start(); Thread t2 = new Thread(() -> { d1.m2(); }); t2.start(); Thread t3 = new Thread(() -> { d1.m2(); }); t3.start(); Demo6 d2 = new Demo6(); Thread t4 = new Thread(() -> { d2.m2(); }); t4.start(); Thread t5 = new Thread(() -> { Demo6.m4(); }); t5.start(); Thread t6 = new Thread(() -> { Demo6.m5(); }); t6.start(); } } 分析上面代码: 1. 线程t1、t2、t3中调用的方法都需要获取d1的锁,所以他们是互斥的 2. t1/t2/t3这3个线程和t4不互斥,他们可以同时运行,因为前面三个线程依赖于d1的锁,t4依赖于d2的锁 3. t5、t6都作用于当前类的Class对象锁,所以这两个线程是互斥的,和其他几个线程不互斥 4、ReentrantLock ReentrantLock是Lock的默认实现,在聊ReentranLock之前,我们需要先弄清楚一些概念: 1. 可重入锁:可重入锁是指同一个线程可以多次获得同一把锁;ReentrantLock和关键字Synchronized都是可重入锁 2. 
可中断锁:可中断锁时子线程在获取锁的过程中,是否可以相应线程中断操作。synchronized是不可中断的,ReentrantLock是可中断的 3. 公平锁和非公平锁:公平锁是指多个线程尝试获取同一把锁的时候,获取锁的顺序按照线程到达的先后顺序获取,而不是随机插队的方式获取。synchronized是非公平锁,而ReentrantLock是两种都可以实现,不过默认是非公平锁 1、synchronized的局限性 synchronized是java内置的关键字,它提供了一种独占的加锁方式。synchronized的获取和释放锁由jvm实现,用户不需要显示的释放锁,非常方便,然而synchronized也有一定的局限性,例如: 1. 当线程尝试获取锁的时候,如果获取不到锁会一直阻塞,这个阻塞的过程,用户无法控制 2. 如果获取锁的线程进入休眠或者阻塞,除非当前线程异常,否则其他线程尝试获取锁必须一直等待 JDK1.5之后发布,加入了Doug Lea实现的java.util.concurrent包。包内提供了Lock类,用来提供更多扩展的加锁功能。Lock弥补了synchronized的局限,提供了更加细粒度的加锁功能。 2、ReentrantLock基本使用 我们使用3个线程来对一个共享变量++操作,先使用synchronized实现,然后使用ReentrantLock实现。 synchronized方式 public class Demo2 { private static int num = 0; private static synchronized void add() { num++; } public static class T extends Thread { @Override public void run() { for (int i = 0; i < 10000; i++) { Demo2.add(); } } } public static void main(String[] args) throws InterruptedException { T t1 = new T(); T t2 = new T(); T t3 = new T(); t1.start(); t2.start(); t3.start(); t1.join(); t2.join(); t3.join(); System.out.println(Demo2.num); } } ReentrantLock方式 import java.util.concurrent.locks.ReentrantLock; public class Demo3 { private static int num = 0; private static ReentrantLock lock = new ReentrantLock(); private static void add() { lock.lock(); try { num++; } finally { lock.unlock(); } } public static class T extends Thread { @Override public void run() { for (int i = 0; i < 10000; i++) { Demo3.add(); } } } public static void main(String[] args) throws InterruptedException { T t1 = new T(); T t2 = new T(); T t3 = new T(); t1.start(); t2.start(); t3.start(); t1.join(); t2.join(); t3.join(); System.out.println(Demo3.num); } } ReentrantLock的使用过程: 1. 创建锁:ReentrantLock lock = new ReentrantLock(); 2. 获取锁:lock.lock() 3. 
释放锁:lock.unlock();

对比上面的代码,与关键字synchronized相比,ReentrantLock锁有明显的操作过程,开发人员必须手动指定何时加锁,何时释放锁,正是因为这样的手动控制,ReentrantLock对逻辑控制的灵活度要远远胜于关键字synchronized。上面代码需要注意**lock.unlock()**一定要放在finally中,否则,若程序出现了异常,锁没有释放,那么其他线程就再也没有机会获取这个锁了。

3、ReentrantLock获取锁的过程是可中断的

对于synchronized关键字,如果一个线程在等待获取锁,最终只有2种结果:

1. 要么获取到锁然后继续后面的操作
2. 要么一直等待,直到其他线程释放锁为止

而ReentrantLock提供了另外一种可能,就是在等待获取锁的过程中(发起获取锁请求到还未获取到锁这段时间内)是可以被中断的,也就是说在等待锁的过程中,程序可以根据需要取消获取锁的请求。有些时候这个操作是非常有必要的。比如:你和好朋友约好一起去打球,如果你等了半小时朋友还没到,突然你接到一个电话,朋友由于突发状况,不能来了,那么你一定打道回府。中断操作正是提供了一套类似的机制,如果一个线程正在等待获取锁,那么它依然可以收到一个通知,被告知无需等待,可以停止工作了。

示例代码:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class Demo6 {
    private static ReentrantLock lock1 = new ReentrantLock(false);
    private static ReentrantLock lock2 = new ReentrantLock(false);

    public static class T extends Thread {
        int lock;

        public T(String name, int lock) {
            super(name);
            this.lock = lock;
        }

        @Override
        public void run() {
            try {
                if (this.lock == 1) {
                    lock1.lockInterruptibly();
                    TimeUnit.SECONDS.sleep(1);
                    lock2.lockInterruptibly();
                } else {
                    lock2.lockInterruptibly();
                    TimeUnit.SECONDS.sleep(1);
                    lock1.lockInterruptibly();
                }
            } catch (InterruptedException e) {
                System.out.println("中断标志:" + this.isInterrupted());
                e.printStackTrace();
            } finally {
                if (lock1.isHeldByCurrentThread()) {
                    lock1.unlock();
                }
                if (lock2.isHeldByCurrentThread()) {
                    lock2.unlock();
                }
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        T t1 = new T("t1", 1);
        T t2 = new T("t2", 2);
        t1.start();
        t2.start();
    }
}

先运行一下上面代码,发现程序无法结束,使用jstack查看线程堆栈信息,发现2个线程死锁了。

Found one Java-level deadlock:
=============================
"t2":
  waiting for ownable synchronizer 0x0000000717380c20, (a java.util.concurrent.locks.ReentrantLock$NonfairSync),
  which is held by "t1"
"t1":
  waiting for ownable synchronizer 0x0000000717380c50, (a java.util.concurrent.locks.ReentrantLock$NonfairSync),
  which is held by "t2"
lock1被线程t1占用,lock2被线程t2占用,线程t1在等待获取lock2,线程t2在等待获取lock1,都在相互等待获取对方持有的锁,最终产生了死锁。如果是在synchronized关键字情况下发生了死锁现象,程序是无法结束的。

我们对上面代码改造一下:线程t2一直无法获取到lock1,那么等待5秒之后,我们中断获取锁的操作。主要修改一下main方法,如下:

T t1 = new T("t1", 1);
T t2 = new T("t2", 2);
t1.start();
t2.start();
TimeUnit.SECONDS.sleep(5);
t2.interrupt();

新增了2行代码TimeUnit.SECONDS.sleep(5);和t2.interrupt();,程序可以结束了,运行结果:

java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireInterruptibly(AbstractQueuedSynchronizer.java:898)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1222)
    at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
    at com.itsoku.chat06.Demo6$T.run(Demo6.java:31)
中断标志:false

从上面信息中可以看出,代码的31行触发了异常,中断标志输出:false。t2在31行一直获取不到lock1的锁,主线程等待了5秒之后,调用了t2的interrupt()方法,将t2线程的中断标志置为true,此时31行会触发InterruptedException异常,然后线程t2可以继续向下执行,释放了lock2的锁,然后线程t1可以正常获取锁,程序得以继续进行。线程因中断信号触发InterruptedException异常之后,中断标志将被清空。

关于获取锁的过程中被中断,注意几点: 1. ReentrantLock中必须使用实例方法lockInterruptibly()获取锁,线程调用interrupt()方法之后,才会引发InterruptedException异常 2. 线程调用interrupt()之后,线程的中断标志会被置为true 3. 触发InterruptedException异常之后,线程的中断标志会被清空,即置为false 4.
所以当线程调用interrupt()引发InterruptedException异常,中断标志的变化是:false->true->false

4、ReentrantLock锁申请等待限时

申请锁等待限时是什么意思?一般情况下,获取锁的时间我们是不知道的,synchronized关键字获取锁的过程中,只能等待其他线程把锁释放之后才能够有机会获取到锁,所以获取锁的时间有长有短。如果获取锁能够设置超时时间,那就非常好了。

ReentrantLock刚好提供了这样的功能,给我们提供了获取锁限时等待的方法tryLock():可以选择传入时间参数,表示等待指定的时间;无参则表示立即返回锁申请的结果,true表示获取锁成功,false表示获取锁失败。

tryLock无参方法

看一下源码中tryLock方法:

public boolean tryLock()

返回boolean类型的值,此方法会立即返回,结果表示获取锁是否成功,示例:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class Demo8 {
    private static ReentrantLock lock1 = new ReentrantLock(false);

    public static class T extends Thread {
        public T(String name) {
            super(name);
        }

        @Override
        public void run() {
            try {
                System.out.println(System.currentTimeMillis() + ":" + this.getName() + "开始获取锁!");
                //tryLock()会立即返回获取锁的结果,不会阻塞等待
                if (lock1.tryLock()) {
                    System.out.println(System.currentTimeMillis() + ":" + this.getName() + "获取到了锁!");
                    //获取到锁之后,休眠5秒
                    TimeUnit.SECONDS.sleep(5);
                } else {
                    System.out.println(System.currentTimeMillis() + ":" + this.getName() + "未能获取到锁!");
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                if (lock1.isHeldByCurrentThread()) {
                    lock1.unlock();
                }
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        T t1 = new T("t1");
        T t2 = new T("t2");
        t1.start();
        t2.start();
    }
}

代码中获取锁成功之后,休眠5秒,会导致另外一个线程获取锁失败,运行代码,输出:

1563356291081:t2开始获取锁!
1563356291081:t2获取到了锁!
1563356291081:t1开始获取锁!
1563356291081:t1未能获取到锁!
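上面tryLock()立即返回的特性,正好可以用来规避前文Demo6那种两把锁相互等待的死锁:两把锁拿不全就先释放已持有的锁,稍后再试。下面是基于这个思路的一个简单示意(类名TryLockDeadlockAvoid、退让时间等均为笔者假设,并非上文原代码):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDeadlockAvoid {
    static final ReentrantLock lockA = new ReentrantLock();
    static final ReentrantLock lockB = new ReentrantLock();
    static final AtomicInteger done = new AtomicInteger();

    // 以"拿不全就退让"的方式获取两把锁:任何一把拿不到,就释放已持有的锁并重试
    static void work(ReentrantLock first, ReentrantLock second) throws InterruptedException {
        while (true) {
            if (first.tryLock()) {
                try {
                    if (second.tryLock()) {
                        try {
                            done.incrementAndGet(); // 同时持有两把锁,执行业务
                            return;
                        } finally {
                            second.unlock();
                        }
                    }
                } finally {
                    first.unlock(); // 没拿到第二把锁,先把第一把还回去
                }
            }
            TimeUnit.MILLISECONDS.sleep(1); // 退让一下再重试,降低活锁概率
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // 两个线程以相反顺序请求锁,这在lock()/lockInterruptibly()下会死锁
        Thread t1 = new Thread(() -> { try { work(lockA, lockB); } catch (InterruptedException ignored) { } });
        Thread t2 = new Thread(() -> { try { work(lockB, lockA); } catch (InterruptedException ignored) { } });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("done=" + done.get()); // 两个线程都能完成,不会死锁
    }
}
```

代价是获取不到锁时需要循环重试,极端情况下可能出现活锁,可以加入随机退让时间缓解。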
tryLock有参方法

可以明确设置获取锁的超时时间,该方法签名:

public boolean tryLock(long timeout, TimeUnit unit) throws InterruptedException

该方法在指定的时间内不管是否可以获取锁,都会返回结果,返回true表示获取锁成功,返回false表示获取失败。此方法有2个参数:第1个参数timeout表示等待的时长,第2个参数unit是时间单位,是一个枚举(TimeUnit),可以表示时、分、秒、毫秒等,使用比较方便。此方法在执行的过程中,如果调用了线程的中断interrupt()方法,会触发InterruptedException异常。

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class Demo7 {
    private static ReentrantLock lock1 = new ReentrantLock(false);

    public static class T extends Thread {
        public T(String name) {
            super(name);
        }

        @Override
        public void run() {
            try {
                System.out.println(System.currentTimeMillis() + ":" + this.getName() + "开始获取锁!");
                //获取锁超时时间设置为3秒,3秒内不管是否能获取到锁都会返回
                if (lock1.tryLock(3, TimeUnit.SECONDS)) {
                    System.out.println(System.currentTimeMillis() + ":" + this.getName() + "获取到了锁!");
                    //获取到锁之后,休眠5秒
                    TimeUnit.SECONDS.sleep(5);
                } else {
                    System.out.println(System.currentTimeMillis() + ":" + this.getName() + "未能获取到锁!");
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                if (lock1.isHeldByCurrentThread()) {
                    lock1.unlock();
                }
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        T t1 = new T("t1");
        T t2 = new T("t2");
        t1.start();
        t2.start();
    }
}

程序中调用了ReentrantLock的实例方法tryLock(3, TimeUnit.SECONDS),表示获取锁的超时时间是3秒,3秒后不管能否获取锁,该方法都会有返回值;获取到锁之后,内部休眠了5秒,会导致另外一个线程获取锁失败。

运行程序,输出:

1563355512901:t2开始获取锁!
1563355512901:t1开始获取锁!
1563355512902:t2获取到了锁!
1563355515904:t1未能获取到锁!

从输出结果分析,t2获取到锁了,然后休眠了5秒;t1获取锁失败,t1打印的2条信息时间相差3秒左右。

关于tryLock()方法和tryLock(long timeout, TimeUnit unit)方法,说明一下: 1. 都会返回boolean值,结果表示获取锁是否成功 2. tryLock()方法不管是否获取成功,都会立即返回;而有参的tryLock方法会尝试在指定的时间内去获取锁,中间会有阻塞现象,指定的时间到了之后,不管是否能够获取锁都会返回结果 3. tryLock()方法不会响应线程的中断方法;而有参的tryLock方法会响应线程的中断方法,而触发InterruptedException异常,这个从2个方法的声明上可以看出来

5、ReentrantLock其他常用的方法 1. isHeldByCurrentThread:实例方法,判断当前线程是否持有ReentrantLock的锁,上面代码中有使用过。

获取锁的4种方法对比

| 获取锁的方法 | 是否立即响应(不会阻塞) | 是否响应中断 |
| --- | --- | --- |
| lock() | × | × |
| lockInterruptibly() | × | √ |
| tryLock() | √ | × |
| tryLock(long timeout, TimeUnit unit) | × | √ |

6、总结 1.
ReentrantLock可以实现公平锁和非公平锁 2. ReentrantLock默认实现的是非公平锁 3. ReentrantLock的获取锁和释放锁必须成对出现,锁了几次,也要释放几次 4. 释放锁的操作必须放在finally中执行 5. lockInterruptibly()实例方法可以响应线程的中断方法,调用线程的interrupt()方法时,lockInterruptibly()方法会触发InterruptedException异常 6. 关于InterruptedException异常说一下,看到方法声明上带有 throws InterruptedException,表示该方法可以响应线程中断,调用线程的interrupt()方法时,这些方法会触发InterruptedException异常;触发InterruptedException时,线程的中断状态会被清除。所以如果程序由于调用interrupt()方法而触发InterruptedException异常,线程的中断标志由默认的false变为true,然后又变为false 7. 实例方法tryLock()会尝试获取锁,会立即返回,返回值表示是否获取成功 8. 实例方法tryLock(long timeout, TimeUnit unit)会在指定的时间内尝试获取锁,指定的时间内不管是否能够获取锁,都会返回,返回值表示是否获取锁成功,该方法会响应线程的中断

5、悲观锁

认为自己在使用数据的时候一定有别的线程来修改数据,因此在获取数据的时候会先加锁,确保数据不会被别的线程修改。

synchronized关键字和Lock的实现类都是悲观锁。

适合写操作多的场景,先加锁可以保证写操作时数据正确。

显式的锁定之后再操作同步资源:

//=============悲观锁的调用方式
public synchronized void m1() {
    //加锁后的业务逻辑......
}

// 保证多个线程使用的是同一个lock对象的前提下
ReentrantLock lock = new ReentrantLock();
public void m2() {
    lock.lock();
    try {
        // 操作同步资源
    } finally {
        lock.unlock();
    }
}

6、乐观锁

//=============乐观锁的调用方式
// 保证多个线程使用的是同一个AtomicInteger
private AtomicInteger atomicInteger = new AtomicInteger();
atomicInteger.incrementAndGet();

乐观锁认为自己在使用数据时不会有别的线程修改数据,所以不会添加锁,只是在更新数据的时候去判断之前有没有别的线程更新了这个数据。如果这个数据没有被更新,当前线程将自己修改的数据成功写入;如果数据已经被其他线程更新,则根据不同的实现方式执行不同的操作。

乐观锁在Java中是通过无锁编程来实现的,最常采用的是CAS算法,Java原子类中的递增操作就是通过CAS自旋实现的。

适合读操作多的场景,不加锁的特点能够使其读操作的性能大幅提升。

乐观锁则直接去操作同步资源,是一种无锁算法,得之我幸,不得我命,抢不到再抢。

乐观锁一般有两种实现方式: 1. 采用版本号机制 2. CAS(Compare-and-Swap,即比较并替换)算法实现

7、八锁案例

1、JDK源码(notify方法)

image-20210907200227293

2、8种锁的案例实际体现在3个地方 1. 作用于实例方法,当前实例加锁,进入同步代码前要获得当前实例的锁; 2. 作用于代码块,对括号里配置的对象加锁。 3.
作用于静态方法,当前类加锁,进去同步代码前要获得当前类对象的锁; 1、标准访问有ab两个线程,请问先打印邮件还是短信 class Phone //资源类 { public synchronized void sendEmail() { System.out.println("-------sendEmail"); } public synchronized void sendSMS() { System.out.println("-------sendSMS"); } } public class Lock8Demo { public static void main(String[] args)//一切程序的入口,主线程 { Phone phone = new Phone();//资源类1 new Thread(() -> { phone.sendEmail(); },"a").start(); //暂停毫秒 try { TimeUnit.MILLISECONDS.sleep(300); } catch (InterruptedException e) { e.printStackTrace(); } new Thread(() -> { phone.sendSMS(); },"b").start(); } } -------sendEmail -------sendSMS 2、sendEmail方法暂停3秒钟,请问先打印邮件还是短信 class Phone //资源类 { public synchronized void sendEmail() { //暂停几秒钟线程 try { TimeUnit.SECONDS.sleep(3); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("-------sendEmail"); } public synchronized void sendSMS() { System.out.println("-------sendSMS"); } } public class Lock8Demo { public static void main(String[] args)//一切程序的入口,主线程 { Phone phone = new Phone();//资源类1 new Thread(() -> { phone.sendEmail(); },"a").start(); //暂停毫秒 try { TimeUnit.MILLISECONDS.sleep(300); } catch (InterruptedException e) { e.printStackTrace(); } new Thread(() -> { phone.sendSMS(); },"b").start(); } } -------sendEmail -------sendSMS 1-2结论 一个对象里面如果有多个synchronized方法,某一个时刻内,只要一个线程去调用其中的一个synchronized方法了, 其它的线程都只能等待,换句话说,某一个时刻内,只能有唯一的一个线程去访问这些synchronized方法 锁的是当前对象this,被锁定后,其它的线程都不能进入到当前对象的其它的synchronized方法 3、新增一个普通的hello方法,请问先打印邮件还是hello class Phone //资源类 { public synchronized void sendEmail() { //暂停几秒钟线程 try { TimeUnit.SECONDS.sleep(3); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("-------sendEmail"); } public synchronized void sendSMS() { System.out.println("-------sendSMS"); } public void hello() { System.out.println("-------hello"); } } public class Lock8Demo { public static void main(String[] args)//一切程序的入口,主线程 { Phone phone = new Phone();//资源类1 new Thread(() -> { phone.sendEmail(); },"a").start(); //暂停毫秒 try { 
TimeUnit.MILLISECONDS.sleep(300); } catch (InterruptedException e) { e.printStackTrace(); } new Thread(() -> { phone.hello(); },"b").start(); } } -------hello -------sendEmail 4、有两部手机,请问先打印邮件还是短信 class Phone //资源类 { public synchronized void sendEmail() { //暂停几秒钟线程 try { TimeUnit.SECONDS.sleep(3); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("-------sendEmail"); } public synchronized void sendSMS() { System.out.println("-------sendSMS"); } public void hello() { System.out.println("-------hello"); } } public class Lock8Demo { public static void main(String[] args)//一切程序的入口,主线程 { Phone phone = new Phone();//资源类1 Phone phone2 = new Phone();//资源类2 new Thread(() -> { phone.sendEmail(); },"a").start(); //暂停毫秒 try { TimeUnit.MILLISECONDS.sleep(300); } catch (InterruptedException e) { e.printStackTrace(); } new Thread(() -> { phone2.sendSMS(); },"b").start(); } } -------sendSMS -------sendEmail 3-4结论 加个普通方法后发现和同步锁无关,hello 换成两个对象后,不是同一把锁了,情况立刻变化。 5、两个静态同步方法,同1部手机,请问先打印邮件还是短信 class Phone //资源类 { public static synchronized void sendEmail() { //暂停几秒钟线程 try { TimeUnit.SECONDS.sleep(3); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("-------sendEmail"); } public static synchronized void sendSMS() { System.out.println("-------sendSMS"); } public void hello() { System.out.println("-------hello"); } } public class Lock8Demo { public static void main(String[] args)//一切程序的入口,主线程 { Phone phone = new Phone();//资源类1 new Thread(() -> { phone.sendEmail(); },"a").start(); //暂停毫秒 try { TimeUnit.MILLISECONDS.sleep(300); } catch (InterruptedException e) { e.printStackTrace(); } new Thread(() -> { phone.sendSMS(); },"b").start(); } } -------sendEmail -------sendSMS 6、两个静态同步方法, 2部手机,请问先打印邮件还是短信 class Phone //资源类 { public static synchronized void sendEmail() { //暂停几秒钟线程 try { TimeUnit.SECONDS.sleep(3); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("-------sendEmail"); } public static synchronized void 
sendSMS() { System.out.println("-------sendSMS"); } public void hello() { System.out.println("-------hello"); } } public class Lock8Demo { public static void main(String[] args)//一切程序的入口,主线程 { Phone phone = new Phone();//资源类1 Phone phone2 = new Phone();//资源类2 new Thread(() -> { phone.sendEmail(); },"a").start(); //暂停毫秒 try { TimeUnit.MILLISECONDS.sleep(300); } catch (InterruptedException e) { e.printStackTrace(); } new Thread(() -> { phone2.sendSMS(); },"b").start(); } } -------sendEmail -------sendSMS 5-6结论 都换成静态同步方法后,情况又变化 三种 synchronized 锁的内容有一些差别: 对于普通同步方法,锁的是当前实例对象,通常指this,具体的一部部手机,所有的普通同步方法用的都是同一把锁——实例对象本身, 对于静态同步方法,锁的是当前类的Class对象,如Phone.class唯一的一个模板 对于同步方法块,锁的是 synchronized 括号内的对象 7、1个静态同步方法,1个普通同步方法,同1部手机,请问先打印邮件还是短信 class Phone //资源类 { public static synchronized void sendEmail() { //暂停几秒钟线程 try { TimeUnit.SECONDS.sleep(3); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("-------sendEmail"); } public synchronized void sendSMS() { System.out.println("-------sendSMS"); } public void hello() { System.out.println("-------hello"); } } public class Lock8Demo { public static void main(String[] args)//一切程序的入口,主线程 { Phone phone = new Phone();//资源类1 new Thread(() -> { phone.sendEmail(); },"a").start(); //暂停毫秒 try { TimeUnit.MILLISECONDS.sleep(300); } catch (InterruptedException e) { e.printStackTrace(); } new Thread(() -> { phone.sendSMS(); },"b").start(); } } -------sendSMS -------sendEmail 8、1个静态同步方法,1个普通同步方法,2部手机,请问先打印邮件还是短信 class Phone //资源类 { public static synchronized void sendEmail() { //暂停几秒钟线程 try { TimeUnit.SECONDS.sleep(3); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("-------sendEmail"); } public synchronized void sendSMS() { System.out.println("-------sendSMS"); } public void hello() { System.out.println("-------hello"); } } public class Lock8Demo { public static void main(String[] args)//一切程序的入口,主线程 { Phone phone = new Phone();//资源类1 Phone phone2 = new Phone();//资源类2 new Thread(() -> { 
phone.sendEmail(); },"a").start(); //暂停毫秒 try { TimeUnit.MILLISECONDS.sleep(300); } catch (InterruptedException e) { e.printStackTrace(); } new Thread(() -> { phone2.sendSMS(); },"b").start(); } } -------sendSMS -------sendEmail 7-8结论 当一个线程试图访问同步代码时它首先必须得到锁,退出或抛出异常时必须释放锁。 所有的普通同步方法用的都是同一把锁——实例对象本身,就是new出来的具体实例对象本身,本类this 也就是说如果一个实例对象的普通同步方法获取锁后,该实例对象的其他普通同步方法必须等待获取锁的方法释放锁后才能获取锁。 所有的静态同步方法用的也是同一把锁——类对象本身,就是我们说过的唯一模板Class 具体实例对象this和唯一模板Class,这两把锁是两个不同的对象,所以静态同步方法与普通同步方法之间是不会有竞态条件的 但是一旦一个静态同步方法获取锁后,其他的静态同步方法都必须等待该方法释放锁后才能获取锁。 8、公平锁和非公平锁 在大多数情况下,锁的申请都是非公平的,也就是说,线程1首先请求锁A,接着线程2也请求了锁A。那么当锁A可用时,是线程1可获得锁还是线程2可获得锁呢?这是不一定的,系统只是会从这个锁的等待队列中随机挑选一个,因此不能保证其公平性。这就好比买票不排队,大家都围在售票窗口前,售票员忙的焦头烂额,也顾及不上谁先谁后,随便找个人出票就完事了,最终导致的结果是,有些人可能一直买不到票。而公平锁,则不是这样,它会按照到达的先后顺序获得资源。公平锁的一大特点是:它不会产生饥饿现象,只要你排队,最终还是可以等到资源的;synchronized关键字默认是有jvm内部实现控制的,是非公平锁。而ReentrantLock运行开发者自己设置锁的公平性。 看一下jdk中ReentrantLock的源码,2个构造方法: public ReentrantLock() { sync = new NonfairSync(); } public ReentrantLock(boolean fair) { sync = fair ? new FairSync() : new NonfairSync(); } 默认构造方法创建的是非公平锁。 第2个构造方法,有个fair参数,当fair为true的时候创建的是公平锁,公平锁看起来很不错,不过要实现公平锁,系统内部肯定需要维护一个有序队列,因此公平锁的实现成本比较高,性能相对于非公平锁来说相对低一些。因此,在默认情况下,锁是非公平的,如果没有特别要求,则不建议使用公平锁。 公平锁和非公平锁在程序调度上是很不一样,来一个公平锁示例看一下: import java.util.concurrent.locks.ReentrantLock; public class Demo5 { private static int num = 0; private static ReentrantLock fairLock = new ReentrantLock(true); public static class T extends Thread { public T(String name) { super(name); } @Override public void run() { for (int i = 0; i < 5; i++) { fairLock.lock(); try { System.out.println(this.getName() + "获得锁!"); } finally { fairLock.unlock(); } } } } public static void main(String[] args) throws InterruptedException { T t1 = new T("t1"); T t2 = new T("t2"); T t3 = new T("t3"); t1.start(); t2.start(); t3.start(); t1.join(); t2.join(); t3.join(); } } 看一下输出的结果,锁是按照先后顺序获得的。 修改一下上面代码,改为非公平锁试试,如下: ReentrantLock fairLock = new ReentrantLock(false); 从ReentrantLock卖票编码演示公平和非公平现象 import 
java.util.concurrent.locks.ReentrantLock; class Ticket { private int number = 30; ReentrantLock lock = new ReentrantLock(); public void sale() { lock.lock(); try { if(number > 0) { System.out.println(Thread.currentThread().getName()+"卖出第:\t"+(number--)+"\t 还剩下:"+number); } }catch (Exception e){ e.printStackTrace(); }finally { lock.unlock(); } } } public class SaleTicketDemo { public static void main(String[] args) { Ticket ticket = new Ticket(); new Thread(() -> { for (int i = 0; i <35; i++) ticket.sale(); },"a").start(); new Thread(() -> { for (int i = 0; i <35; i++) ticket.sale(); },"b").start(); new Thread(() -> { for (int i = 0; i <35; i++) ticket.sale(); },"c").start(); } } 生活中,排队讲求先来后到视为公平。程序中的公平性也是符合请求锁的绝对时间的,其实就是 FIFO,否则视为不公平 1、源码解读 ​ 按序排队公平锁,就是判断同步队列是否还有先驱节点的存在(我前面还有人吗?),如果没有先驱节点才能获取锁;先占先得非公平锁,是不管这个事的,只要能抢获到同步状态就可以 image-20210916224629198 image-20210916224629198 2、为什么会有公平锁/非公平锁的设计为什么默认非公平? 1. 恢复挂起的线程到真正锁的获取还是有时间差的,从开发人员来看这个时间微乎其微,但是从CPU的角度来看,这个时间差存在的还是很明显的。所以非公平锁能更充分的利用CPU 的时间片,尽量减少 CPU 空闲状态时间。 2. 使用多线程很重要的考量点是线程切换的开销,当采用非公平锁时,当1个线程请求锁获取同步状态,然后释放同步状态,因为不需要考虑是否还有前驱节点,所以刚释放锁的线程在此刻再次获取同步状态的概率就变得非常大,所以就减少了线程的开销。 3、使⽤公平锁会有什么问题 公平锁保证了排队的公平性,非公平锁霸气的忽视这个规则,所以就有可能导致排队的长时间在排队,也没有机会获取到锁,这就是传说中的 “锁饥饿” 4、什么时候用公平?什么时候用非公平? 
如果为了更高的吞吐量,很显然非公平锁是比较合适的,因为节省很多线程切换时间,吞吐量自然就上去了;否则那就用公平锁,大家公平使用。 9、可重入锁(又名递归锁) 是指在同一个线程在外层方法获取锁的时候,再进入该线程的内层方法会自动获取锁(前提,锁对象得是同一个对象),不会因为之前已经获取过还没释放而阻塞。 如果是1个有 synchronized 修饰的递归调用方法,程序第2次进入被自己阻塞了岂不是天大的笑话,出现了作茧自缚。所以Java中ReentrantLock和synchronized都是可重入锁,可重入锁的一个优点是可一定程度避免死锁。 1、“可重入锁”这四个字分开来解释: 可:可以。 重:再次。 入:进入。 锁:同步锁。 进入什么:进入同步域(即同步代码块/方法或显式锁锁定的代码) 一句话:一个线程中的多个流程可以获取同一把锁,持有这把同步锁可以再次进入。 自己可以获取自己的内部锁 2、可重入锁种类 1、隐式锁(即synchronized关键字使用的锁)默认是可重入锁 指的是可重复可递归调用的锁,在外层使用锁之后,在内层仍然可以使用,并且不发生死锁,这样的锁就叫做可重入锁。 简单的来说就是:在一个synchronized修饰的方法或代码块的内部调用本类的其他synchronized修饰的方法或代码块时,是永远可以得到锁的 与可重入锁相反,不可重入锁不可递归调用,递归调用就发生死锁。 同步块 public class ReEntryLockDemo{ public static void main(String[] args){ final Object objectLockA = new Object(); new Thread(() -> { synchronized (objectLockA){ System.out.println("-----外层调用"); synchronized (objectLockA){ System.out.println("-----中层调用"); synchronized (objectLockA){ System.out.println("-----内层调用"); } } } },"a").start(); } } 同步方法 public class ReEntryLockDemo{ public synchronized void m1(){ System.out.println("-----m1"); m2(); } public synchronized void m2(){ System.out.println("-----m2"); m3(); } public synchronized void m3(){ System.out.println("-----m3"); } public static void main(String[] args){ ReEntryLockDemo reEntryLockDemo = new ReEntryLockDemo(); reEntryLockDemo.m1(); } } 2、显式锁(即Lock)也有ReentrantLock这样的可重入锁。 public class Demo4 { private static int num = 0; private static ReentrantLock lock = new ReentrantLock(); private static void add() { lock.lock(); lock.lock(); try { num++; } finally { lock.unlock(); lock.unlock(); } } public static class T extends Thread { @Override public void run() { for (int i = 0; i < 10000; i++) { Demo4.add(); } } } public static void main(String[] args) throws InterruptedException { T t1 = new T(); T t2 = new T(); T t3 = new T(); t1.start(); t2.start(); t3.start(); t1.join(); t2.join(); t3.join(); System.out.println(Demo4.num); } } 
上面代码的add()方法中,当一个线程进入的时候,会执行2次获取锁的操作,运行程序可以正常结束,并输出和期望值一样的30000。假如ReentrantLock是不可重入的锁,那么同一个线程第2次获取锁的时候由于前面的锁还未释放而导致死锁,程序是无法正常结束的。ReentrantLock命名也挺好的:Re entrant Lock,和其名字一样,可重入锁。

代码中还有几点需要注意: 1. lock()方法和unlock()方法需要成对出现,锁了几次,也要释放几次,否则后面的线程无法获取锁了;可以将add中的unlock删除一个试试,上面代码运行将无法结束 2. unlock()方法放在finally中执行,保证不管程序是否有异常,锁必定会释放

/**
 * @create 2020-05-14 11:59
 * 在一个Synchronized修饰的方法或代码块的内部调用本类的其他Synchronized修饰的方法或代码块时,是永远可以得到锁的
 */
public class ReEntryLockDemo {
    static Lock lock = new ReentrantLock();

    public static void main(String[] args) {
        new Thread(() -> {
            lock.lock();
            try {
                System.out.println("----外层调用lock");
                lock.lock();
                try {
                    System.out.println("----内层调用lock");
                } finally {
                    // 若把下面这行注释掉,加锁次数和释放次数不一样,
                    // 第二个线程始终无法获取到锁,导致一直在等待。
                    lock.unlock(); // 正常情况,加锁几次就要解锁几次
                }
            } finally {
                lock.unlock();
            }
        }, "a").start();

        new Thread(() -> {
            lock.lock();
            try {
                System.out.println("b thread----外层调用lock");
            } finally {
                lock.unlock();
            }
        }, "b").start();
    }
}

3、Synchronized的重入的实现机理

每个锁对象拥有一个锁计数器和一个指向持有该锁的线程的指针。

当执行monitorenter时,如果目标锁对象的计数器为零,那么说明它没有被其他线程所持有,Java虚拟机会将该锁对象的持有线程设置为当前线程,并且将其计数器加1。

在目标锁对象的计数器不为零的情况下,如果锁对象的持有线程是当前线程,那么 Java 虚拟机可以将其计数器加1,否则需要等待,直至持有线程释放该锁。

当执行monitorexit时,Java虚拟机则需将锁对象的计数器减1。计数器为零代表锁已被释放。

10、死锁

死锁是指两个或两个以上的线程在执行过程中,因争夺资源而造成的一种互相等待的现象,若无外力干涉那它们都将无法推进下去。如果系统资源充足,进程的资源请求都能够得到满足,死锁出现的可能性就很低,否则就会因争夺有限的资源而陷入死锁。

1、产生死锁主要原因 1. 系统资源不足 2. 进程运行推进的顺序不合适 3.
资源分配不当 public class DeadLockDemo{ public static void main(String[] args){ final Object objectLockA = new Object(); final Object objectLockB = new Object(); new Thread(() -> { synchronized (objectLockA){ System.out.println(Thread.currentThread().getName()+"\t"+"自己持有A,希望获得B"); //暂停几秒钟线程 try { TimeUnit.SECONDS.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } synchronized (objectLockB) { System.out.println(Thread.currentThread().getName()+"\t"+"A-------已经获得B"); } } },"A").start(); new Thread(() -> { synchronized (objectLockB){ System.out.println(Thread.currentThread().getName()+"\t"+"自己持有B,希望获得A"); //暂停几秒钟线程 try { TimeUnit.SECONDS.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } synchronized (objectLockA){ System.out.println(Thread.currentThread().getName()+"\t"+"B-------已经获得A"); } } },"B").start(); } } 2、如何排查死锁 1. 纯命令 jps -l jstack 进程编号 1. 图形化 jconsole 五、线程间通信 1、面试题:两个线程打印 两个线程,一个线程打印1-52,另一个打印字母A-Z打印顺序为12A34B...5152Z 1、synchronized实现 package com.xue.thread; import java.util.concurrent.locks.Condition; import java.util.concurrent.locks.Lock; import java.util.concurrent.locks.ReentrantLock; class ShareDataOne//资源类{ private int number = 0;//初始值为零的一个变量 public synchronized void increment() throws InterruptedException { //1判断 if(number !=0 ) { this.wait(); } //2干活 ++number; System.out.println(Thread.currentThread().getName()+"\t"+number); //3通知 this.notifyAll(); } public synchronized void decrement() throws InterruptedException { // 1判断 if (number == 0) { this.wait(); } // 2干活 --number; System.out.println(Thread.currentThread().getName() + "\t" + number); // 3通知 this.notifyAll(); } } /** * * @Description: *现在两个线程, * 可以操作初始值为零的一个变量, * 实现一个线程对该变量加1,一个线程对该变量减1, * 交替,来10轮。 * @author xialei * * * 笔记:Java里面如何进行工程级别的多线程编写 * 1 多线程变成模板(套路)-----上 * 1.1 线程 操作 资源类 * 1.2 高内聚 低耦合 * 2 多线程变成模板(套路)-----下 * 2.1 判断 * 2.2 干活 * 2.3 通知 */ public class NotifyWaitDemoOne{ public static void main(String[] args){ ShareDataOne sd = new ShareDataOne(); new 
Thread(() -> { for (int i = 1; i < 10; i++) { try { sd.increment(); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); } } }, "A").start(); new Thread(() -> { for (int i = 1; i < 10; i++) { try { sd.decrement(); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); } } }, "B").start(); } } /* * * * 2 多线程变成模板(套路)-----下 * 2.1 判断 * 2.2 干活 * 2.3 通知 * 3 防止虚假唤醒用while * * * */ 2、换成4个线程 ​ 换成4个线程会导致错误,虚假唤醒 ​ 原因:在java多线程判断时,不能用if,程序出事出在了判断上面, 突然有一添加的线程进到if了,突然中断了交出控制权, 没有进行验证,而是直接走下去了,加了两次,甚至多次 3、4个线程解决方案 解决虚假唤醒:查看API,java.lang.Object image image 中断和虚假唤醒是可能产生的,所以要用loop循环,if只判断一次,while是只要唤醒就要拉回来再判断一次。if换成while 4、java8新版实现 image image class BoundedBuffer { final Lock lock = new ReentrantLock(); final Condition notFull = lock.newCondition(); final Condition notEmpty = lock.newCondition(); final Object[] items = new Object[100]; int putptr, takeptr, count; public void put(Object x) throws InterruptedException { lock.lock(); try { while (count == items.length) notFull.await(); items[putptr] = x; if (++putptr == items.length) putptr = 0; ++count; notEmpty.signal(); } finally { lock.unlock(); } } package com.xue.thread; import java.util.concurrent.locks.Condition; import java.util.concurrent.locks.Lock; import java.util.concurrent.locks.ReentrantLock; import org.omg.IOP.Codec; class ShareData//资源类 { private int number = 0;//初始值为零的一个变量 private Lock lock = new ReentrantLock(); private Condition condition = lock.newCondition(); public void increment() throws InterruptedException { lock.lock(); try { //判断 while(number!=0) { condition.await(); } //干活 ++number; System.out.println(Thread.currentThread().getName()+" \t "+number); //通知 condition.signalAll(); } catch (Exception e) { e.printStackTrace(); } finally { lock.unlock(); } } public void decrement() throws InterruptedException { lock.lock(); try { //判断 while(number!=1) { condition.await(); } //干活 --number; 
System.out.println(Thread.currentThread().getName()+" \t "+number); //通知 condition.signalAll(); } catch (Exception e) { e.printStackTrace(); } finally { lock.unlock(); } } /*public synchronized void increment() throws InterruptedException { //判断 while(number!=0) { this.wait(); } //干活 ++number; System.out.println(Thread.currentThread().getName()+" \t "+number); //通知 this.notifyAll();; } public synchronized void decrement() throws InterruptedException { //判断 while(number!=1) { this.wait(); } //干活 --number; System.out.println(Thread.currentThread().getName()+" \t "+number); //通知 this.notifyAll(); }*/ } /** * * @Description: *现在两个线程, * 可以操作初始值为零的一个变量, * 实现一个线程对该变量加1,一个线程对该变量减1, * 交替,来10轮。 * * * 笔记:Java里面如何进行工程级别的多线程编写 * 1 多线程变成模板(套路)-----上 * 1.1 线程 操作 资源类 * 1.2 高内聚 低耦合 * 2 多线程变成模板(套路)-----下 * 2.1 判断 * 2.2 干活 * 2.3 通知 */ public class NotifyWaitDemo { public static void main(String[] args) { ShareData sd = new ShareData(); new Thread(() -> { for (int i = 1; i <= 10; i++) { try { sd.increment(); } catch (InterruptedException e) { e.printStackTrace(); } } }, "A").start(); new Thread(() -> { for (int i = 1; i <= 10; i++) { try { sd.decrement(); } catch (InterruptedException e) { e.printStackTrace(); } } }, "B").start(); new Thread(() -> { for (int i = 1; i <= 10; i++) { try { sd.increment(); } catch (InterruptedException e) { e.printStackTrace(); } } }, "C").start(); new Thread(() -> { for (int i = 1; i <= 10; i++) { try { sd.decrement(); } catch (InterruptedException e) { e.printStackTrace(); } } }, "D").start(); } } /* * * * 2 多线程变成模板(套路)-----下 * 2.1 判断 * 2.2 干活 * 2.3 通知 * 3 防止虚假唤醒用while * * * */ 2、线程间定制化调用通信 1、有顺序通知,需要有标识位 2、有一个锁Lock,3把钥匙Condition 3、判断标志位 4、输出线程名+第几次+第几轮 5、修改标志位,通知下一个 package com.xue.thread; import java.util.concurrent.locks.Condition; import java.util.concurrent.locks.Lock; import java.util.concurrent.locks.ReentrantLock; class ShareResource { private int number = 1;//1:A 2:B 3:C private Lock lock = new ReentrantLock(); private Condition c1 = 
lock.newCondition(); private Condition c2 = lock.newCondition(); private Condition c3 = lock.newCondition(); public void print5(int totalLoopNumber) { lock.lock(); try { //1 判断 while(number != 1) { //A 就要停止 c1.await(); } //2 干活 for (int i = 1; i <=5; i++) { System.out.println(Thread.currentThread().getName()+"\t"+i+"\t totalLoopNumber: "+totalLoopNumber); } //3 通知 number = 2; c2.signal(); } catch (Exception e) { e.printStackTrace(); } finally { lock.unlock(); } } public void print10(int totalLoopNumber) { lock.lock(); try { //1 判断 while(number != 2) { //A 就要停止 c2.await(); } //2 干活 for (int i = 1; i <=10; i++) { System.out.println(Thread.currentThread().getName()+"\t"+i+"\t totalLoopNumber: "+totalLoopNumber); } //3 通知 number = 3; c3.signal(); } catch (Exception e) { e.printStackTrace(); } finally { lock.unlock(); } } public void print15(int totalLoopNumber) { lock.lock(); try { //1 判断 while(number != 3) { //A 就要停止 c3.await(); } //2 干活 for (int i = 1; i <=15; i++) { System.out.println(Thread.currentThread().getName()+"\t"+i+"\t totalLoopNumber: "+totalLoopNumber); } //3 通知 number = 1; c1.signal(); } catch (Exception e) { e.printStackTrace(); } finally { lock.unlock(); } } } /** * * @Description: * 多线程之间按顺序调用,实现A->B->C * 三个线程启动,要求如下: * * AA打印5次,BB打印10次,CC打印15次 * 接着 * AA打印5次,BB打印10次,CC打印15次 * ......来10轮 * */ public class ThreadOrderAccess { public static void main(String[] args) { ShareResource sr = new ShareResource(); new Thread(() -> { for (int i = 1; i <=10; i++) { sr.print5(i); } }, "AA").start(); new Thread(() -> { for (int i = 1; i <=10; i++) { sr.print10(i); } }, "BB").start(); new Thread(() -> { for (int i = 1; i <=10; i++) { sr.print15(i); } }, "CC").start(); } } 六、LockSupport与线程中断 1、线程中断机制 1、如何停止、中断一个运行中的线程? 2、什么是中断?
首先 一个线程不应该由其他线程来强制中断或停止,而是应该由线程自己自行停止。所以,Thread.stop, Thread.suspend, Thread.resume 都已经被废弃了。 其次 在Java中没有办法立即停止一条线程,然而停止线程却显得尤为重要,如取消一个耗时操作。因此,Java提供了一种用于停止线程的机制——中断。 ​ 中断只是一种协作机制,Java没有给中断增加任何语法,中断的过程完全需要程序员自己实现。若要中断一个线程,你需要手动调用该线程的interrupt方法,该方法也仅仅是将线程对象的中断标识设成true;接着你需要自己写代码不断地检测当前线程的标识位,如果为true,表示别的线程要求这条线程中断, 此时究竟该做什么需要你自己写代码实现。 ​ 每个线程对象中都有一个标识,用于表示线程是否被中断;该标识位为true表示中断,为false表示未中断;通过调用线程对象的interrupt方法将该线程的标识位设为true;可以在别的线程中调用,也可以在自己的线程中调用 3、中断的相关API方法 public void interrupt()实例方法, 实例方法interrupt()仅仅是设置线程的中断状态为true,不会停止线程 public static boolean interrupted()静态方法,Thread.interrupted(); 判断线程是否被中断,并清除当前中断状态 这个方法做了两件事: 1 返回当前线程的中断状态 2 将当前线程的中断状态设为false 这个方法有点不好理解,因为连续调用两次的结果可能不一样。 public boolean isInterrupted()实例方法, 判断当前线程是否被中断(通过检查中断标志位) 2、如何使用中断标识停止线程? 在需要中断的线程中不断监听中断状态,一旦发生中断,就执行相应的中断处理业务逻辑。 1、通过一个volatile变量实现 public class InterruptDemo{ public volatile static boolean exit = false; public static class T extends Thread { @Override public void run() { while (true) { //循环处理业务 if (exit) { break; } } } } public static void setExit() { exit = true; } public static void main(String[] args) throws InterruptedException { T t = new T(); t.start(); TimeUnit.SECONDS.sleep(3); setExit(); } } 代码中启动了一个线程,线程的run方法中有个死循环,内部通过exit变量的值来控制是否退出。TimeUnit.SECONDS.sleep(3);让主线程休眠3秒,此处为什么使用TimeUnit?TimeUnit使用更方便一些,能够很清晰的控制休眠时间,底层还是转换为Thread.sleep实现的。程序有个重点:volatile关键字,exit变量必须通过这个修饰,如果把这个去掉,程序无法正常退出。volatile控制了变量在多线程中的可见性。 2、通过AtomicBoolean public class StopThreadDemo { private final static AtomicBoolean atomicBoolean = new AtomicBoolean(true); public static void main(String[] args) { Thread t1 = new Thread(() -> { while(atomicBoolean.get()) { try { TimeUnit.MILLISECONDS.sleep(500); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("-----hello"); } }, "t1"); t1.start(); try { TimeUnit.SECONDS.sleep(3); } catch (InterruptedException e) { e.printStackTrace(); } atomicBoolean.set(false); } } 3、通过Thread类自带的中断api方法实现 1. 
实例方法interrupt(),没有返回值

image-20210916231409508

public void interrupt()实例方法:调用interrupt()方法仅仅是在当前线程中打了一个停止的标记,并不是真正立刻停止线程。

image-20210916231506817

2. 实例方法isInterrupted,返回布尔值

image-20210916231603313

public boolean isInterrupted()实例方法:获取中断标志位的当前值,判断当前线程是否被中断(通过检查中断标志位),默认是false。

image-20210916231626044

public class InterruptDemo {
    public static void main(String[] args) {
        Thread t1 = new Thread(() -> {
            while (true) {
                if (Thread.currentThread().isInterrupted()) {
                    System.out.println("-----t1 线程被中断了,break,程序结束");
                    break;
                }
                System.out.println("-----hello");
            }
        }, "t1");
        t1.start();
        System.out.println("**************" + t1.isInterrupted());
        //暂停5毫秒
        try {
            TimeUnit.MILLISECONDS.sleep(5);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        t1.interrupt();
        System.out.println("**************" + t1.isInterrupted());
    }
}

运行上面的程序,程序可以正常结束。线程内部有个中断标志,当调用线程的interrupt()实例方法之后,线程的中断标志会被置为true,可以通过线程的实例方法isInterrupted()获取线程的中断标志。

4、当前线程的中断标识为true,是不是就立刻停止?
具体来说,当对一个线程,调用 interrupt() 时: ① 如果线程处于正常活动状态,那么会将该线程的中断标志设置为 true,仅此而已。 被设置中断标志的线程将继续正常运行,不受影响。所以, interrupt() 并不能真正的中断线程,需要被调用的线程自己进行配合才行。 ② 如果线程处于被阻塞状态(例如处于sleep, wait, join 等状态),在别的线程中调用当前线程对象的interrupt方法, 那么线程将立即退出被阻塞状态,并抛出一个InterruptedException异常。 public class InterruptDemo2 { public static void main(String[] args) throws InterruptedException { Thread t1 = new Thread(() -> { for (int i = 0; i < 300; i++) { System.out.println("-------" + i); } System.out.println("after t1.interrupt()--第2次---: " + Thread.currentThread().isInterrupted()); }, "t1"); t1.start(); System.out.println("before t1.interrupt()----: " + t1.isInterrupted()); //实例方法interrupt()仅仅是设置线程的中断状态位设置为true,不会停止线程 t1.interrupt(); //活动状态,t1线程还在执行中 try { TimeUnit.MILLISECONDS.sleep(3); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("after t1.interrupt()--第1次---: " + t1.isInterrupted()); //非活动状态,t1线程不在执行中,已经结束执行了。 try { TimeUnit.MILLISECONDS.sleep(3000); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("after t1.interrupt()--第3次---: " + t1.isInterrupted()); } } image-20210916231745805 image-20210916231745805 image-20210916231758523 image-20210916231758523 中断只是一种协同机制,修改中断标识位仅此而已,不是立刻stop打断 5、静态方法Thread.interrupted() /** * 作用是测试当前线程是否被中断(检查中断标志),返回一个boolean并清除中断状态, * 第二次再调用时中断状态已经被清除,将返回一个false。 */ public class InterruptDemo { public static void main(String[] args) throws InterruptedException { System.out.println(Thread.currentThread().getName()+"---"+Thread.interrupted()); System.out.println(Thread.currentThread().getName()+"---"+Thread.interrupted()); System.out.println("111111"); Thread.currentThread().interrupt(); System.out.println("222222"); System.out.println(Thread.currentThread().getName()+"---"+Thread.interrupted()); System.out.println(Thread.currentThread().getName()+"---"+Thread.interrupted()); } } public static boolean interrupted()静态方法,Thread.interrupted(); 判断线程是否被中断,并清除当前中断状态,类似i++ 这个方法做了两件事: 1 返回当前线程的中断状态 2 将当前线程的中断状态设为false 
`Thread.interrupted()` can be a little confusing, because two consecutive calls may return different results.

Both `isInterrupted()` and `Thread.interrupted()` return the interrupt status; the difference between the two is whether the status is cleared afterwards.

6. Summary

The interruption-related methods of Thread:

- `interrupt()` is an instance method. It asks the target thread to interrupt, i.e. it sets the target thread's interrupt flag to true; the flag marks the thread as interrupted.
- `isInterrupted()` is also an instance method. It tests whether the thread has been interrupted (by checking the interrupt flag) and returns the flag's value.
- The static method `Thread.interrupted()` returns the current thread's interrupt status (a boolean) and resets it to false. After the call, the thread's interrupt flag has been cleared (set back to false): the method returns the current value and zeroes it out.

3. What is LockSupport?

LockSupport lives in the `java.util.concurrent` package (juc for short). It is one of juc's foundational classes, used in many places throughout juc, so it is very important and well worth mastering.

Two thread wait/wake approaches have already been covered in earlier sections:

1. Approach 1: use `Object.wait()` to make a thread wait and `Object.notify()` to wake it.
2. Approach 2: use `Condition.await()` from the juc package to make a thread wait and `signal()` to wake it.

LockSupport provides the basic thread-blocking primitives used to build locks and other synchronization classes.

The following statement is explained in detail later: LockSupport's `park()` and `unpark()` block a thread and unblock a thread, respectively.

4. The thread wait/wake mechanism

1. Three ways to make a thread wait and wake it up

1. Use `Object.wait()` to make a thread wait and `Object.notify()` to wake it.
2. Use `Condition.await()` from the JUC package to make a thread wait and `signal()` to wake it.
3. The LockSupport class can block the current thread and wake a specified blocked thread.

2. Waiting and waking with Object's wait and notify methods

```java
/**
 * Requirement: thread t1 waits; after 3 seconds thread t2 wakes t1 so it can continue working.
 *
 * 1 Normal demonstration.
 *
 * Abnormal cases below:
 * 2 Remove the synchronized blocks around both wait() and notify() and observe the result:
 * 2.1 Exceptions:
 *     Exception in thread "t1" java.lang.IllegalMonitorStateException at java.lang.Object.wait(Native Method)
 *     Exception in thread "t2" java.lang.IllegalMonitorStateException at java.lang.Object.notify(Native Method)
 * 2.2 Conclusion:
 *     Object's wait / notify / notifyAll wait-and-wake methods must all be executed inside
 *     synchronized (the synchronized keyword is mandatory).
 *
 * 3 Put notify before wait:
 * 3.1 The program never terminates.
 * 3.2 Conclusion:
 *     Only wait-then-notify/notifyAll works; a thread must already be waiting to be woken up,
 *     otherwise it can never be woken.
 */
public class LockSupportDemo {
    public static void main(String[] args) { // main method, the entry point of the program
        Object objectLock = new Object(); // one shared lock, similar to a shared resource
        new Thread(() -> {
            synchronized (objectLock) {
                try {
                    objectLock.wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            System.out.println(Thread.currentThread().getName() + "\t" + "woken up");
        }, "t1").start();

        // pause for a few seconds
        try {
            TimeUnit.SECONDS.sleep(3L);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        new Thread(() -> {
            synchronized (objectLock) {
                objectLock.notify();
            }
            //objectLock.notify();
            /*synchronized (objectLock) {
                try {
                    objectLock.wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }*/
        }, "t2").start();
    }
}
```

1. Normal case

```java
public class LockSupportDemo {
    public static void main(String[] args) {
        Object objectLock = new Object();
        new Thread(() -> {
            synchronized (objectLock) {
                try {
                    objectLock.wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            System.out.println(Thread.currentThread().getName() + "\t" + "woken up");
        }, "t1").start();

        try {
            TimeUnit.SECONDS.sleep(3L);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        new Thread(() -> {
            synchronized (objectLock) {
                objectLock.notify();
            }
        }, "t2").start();
    }
}
```

2. Abnormal case 1

```java
/**
 * Requirement: t1 waits; after 3 seconds t2 wakes t1 so it can continue working.
 * Abnormal case:
 * 2 Remove the synchronized blocks around both wait() and notify():
 * 2.1 Exceptions:
 *     Exception in thread "t1" java.lang.IllegalMonitorStateException at java.lang.Object.wait(Native Method)
 *     Exception in thread "t2" java.lang.IllegalMonitorStateException at java.lang.Object.notify(Native Method)
 * 2.2 Conclusion:
 *     Object's wait / notify / notifyAll must all be executed inside synchronized.
 */
public class LockSupportDemo {
    public static void main(String[] args) {
        Object objectLock = new Object();
        new Thread(() -> {
            try {
                objectLock.wait();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println(Thread.currentThread().getName() + "\t" + "woken up");
        }, "t1").start();

        try {
            TimeUnit.SECONDS.sleep(3L);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        new Thread(() -> {
            objectLock.notify();
        }, "t2").start();
    }
}
```

With the synchronized blocks removed from both wait() and notify(), both calls throw IllegalMonitorStateException.

3. Abnormal case 2

```java
/**
 * Requirement: t1 waits; after 3 seconds t2 wakes t1 so it can continue working.
 *
 * 3 notify runs before wait: t1 notifies first, then t2 calls wait() 3 seconds later.
 * 3.1 The program never terminates.
 * 3.2 Conclusion:
 *     Only wait-then-notify/notifyAll works; a thread must already be waiting to be woken up.
 */
public class LockSupportDemo {
    public static void main(String[] args) {
        Object objectLock = new Object();
        new Thread(() -> {
            synchronized (objectLock) {
                objectLock.notify();
            }
            System.out.println(Thread.currentThread().getName() + "\t" + "notified");
        }, "t1").start();

        // t1 notifies first; t2 calls wait() 3 seconds later
        try {
            TimeUnit.SECONDS.sleep(3L);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        new Thread(() -> {
            synchronized (objectLock) {
                try {
                    objectLock.wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            System.out.println(Thread.currentThread().getName() + "\t" + "woken up");
        }, "t2").start();
    }
}
```

With notify placed before wait, the program cannot finish and the waiting thread is never woken.

4. Summary

- wait and notify must be used inside a synchronized block or method, and in pairs.
- wait must come first, then notify.

3. Waiting and waking with Condition's await and signal methods

1. Normal case

```java
public class LockSupportDemo2 {
    public static void main(String[] args) {
        Lock lock = new ReentrantLock();
        Condition condition = lock.newCondition();

        new Thread(() -> {
            lock.lock();
            try {
                System.out.println(Thread.currentThread().getName() + "\t" + "start");
                condition.await();
                System.out.println(Thread.currentThread().getName() + "\t" + "woken up");
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                lock.unlock();
            }
        }, "t1").start();

        // pause for a few seconds
        try {
            TimeUnit.SECONDS.sleep(3L);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        new Thread(() -> {
            lock.lock();
            try {
                condition.signal();
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                lock.unlock();
            }
            System.out.println(Thread.currentThread().getName() + "\t" + "notified");
        }, "t2").start();
    }
}
```

2. Abnormal case 1

```java
/**
 * Abnormal: condition.await() and condition.signal() both trigger IllegalMonitorStateException.
 *
 * Reason: Condition's wait/wake methods may only be called between lock and unlock;
 * you must hold the lock to call them.
 */
public class LockSupportDemo2 {
    public static void main(String[] args) {
        Lock lock = new ReentrantLock();
        Condition condition = lock.newCondition();

        new Thread(() -> {
            try {
                System.out.println(Thread.currentThread().getName() + "\t" + "start");
                condition.await();
                System.out.println(Thread.currentThread().getName() + "\t" + "woken up");
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }, "t1").start();

        // pause for a few seconds
        try {
            TimeUnit.SECONDS.sleep(3L);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        new Thread(() -> {
            try {
                condition.signal();
            } catch (Exception e) {
                e.printStackTrace();
            }
            System.out.println(Thread.currentThread().getName() + "\t" + "notified");
        }, "t2").start();
    }
}
```

With lock/unlock removed, condition.await() and condition.signal() both trigger IllegalMonitorStateException.

Conclusion: Condition's wait/wake methods can only be called correctly inside a lock/unlock pair.

3. Abnormal case 2

```java
/**
 * Abnormal: the program never finishes.
 *
 * Reason: await() must come before signal(), otherwise the thread can never be woken.
 */
public class LockSupportDemo2 {
    public static void main(String[] args) {
        Lock lock = new ReentrantLock();
        Condition condition = lock.newCondition();

        new Thread(() -> {
            lock.lock();
            try {
                condition.signal();
                System.out.println(Thread.currentThread().getName() + "\t" + "signal");
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                lock.unlock();
            }
        }, "t1").start();

        // pause for a few seconds
        try {
            TimeUnit.SECONDS.sleep(3L);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        new Thread(() -> {
            lock.lock();
            try {
                System.out.println(Thread.currentThread().getName() + "\t" + "waiting to be woken");
                condition.await();
                System.out.println(Thread.currentThread().getName() + "\t" + "woken up");
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                lock.unlock();
            }
        }, "t2").start();
    }
}
```

With signal placed before await, the program hangs.

4. Summary

- Before calling Condition's wait/wake methods, the thread must first acquire the lock.
- Always await first, then signal; never the other way round.

4. Restrictions on using Object and Condition

- The thread must first acquire and hold the lock; the calls must sit inside a lock block (synchronized or lock).
- The thread must wait first and be notified afterwards in order to be woken.

5. LockSupport's park (wait) and unpark (wake)

Blocking and waking threads is done through the park() and unpark(thread) methods.

LockSupport provides the basic thread-blocking primitives for building locks and other synchronization classes.

LockSupport implements blocking and waking through a concept called a permit. Each thread has one permit; the permit takes only the values 1 and 0, and defaults to 0. You can think of the permit as a (0, 1) semaphore, but unlike a Semaphore, the permit accumulates to at most 1.

1. Main methods

Blocking: `park()` / `park(Object blocker)` block the current thread. (Note: `park(Object blocker)` also blocks the *current* thread; the blocker argument is only recorded for monitoring and diagnostics, it is not a thread to be blocked.)

Waking: `unpark(Thread thread)` wakes the specified blocked thread.

2. Code

Normal usage, and note that no lock block is required:

```java
public class LockSupportDemo3 {
    public static void main(String[] args) {
        // normal usage + no lock block needed
        Thread t1 = new Thread(() -> {
            System.out.println(Thread.currentThread().getName() + " " + "1111111111111");
            LockSupport.park();
            System.out.println(Thread.currentThread().getName() + " " + "2222222222222------end, woken up");
        }, "t1");
        t1.start();

        // pause for a few seconds
        try {
            TimeUnit.SECONDS.sleep(3);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        LockSupport.unpark(t1);
        System.out.println(Thread.currentThread().getName() + " -----LockSupport.unpark() invoked over");
    }
}
```

The wake-before-wait order, which was an error with the previous mechanisms, is supported by LockSupport:

```java
public class T1 {
    public static void main(String[] args) {
        Thread t1 = new Thread(() -> {
            try {
                TimeUnit.SECONDS.sleep(3);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println(Thread.currentThread().getName() + "\t" + System.currentTimeMillis());
            LockSupport.park();
            System.out.println(Thread.currentThread().getName() + "\t" + System.currentTimeMillis() + "---woken up");
        }, "t1");
        t1.start();

        try {
            TimeUnit.SECONDS.sleep(1);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        LockSupport.unpark(t1);
        System.out.println(Thread.currentThread().getName() + "\t" + System.currentTimeMillis() + "---unpark over");
    }
}
```

VII. Collections are not thread-safe

1. The thread-safety error: java.util.ConcurrentModificationException

If an ArrayList is modified while it is being iterated, it throws java.util.ConcurrentModificationException, the concurrent modification exception.

2. List is not thread-safe

```java
List<String> list = new ArrayList<>();
for (int i = 0; i < 30; i++) {
    new Thread(() -> {
        list.add(UUID.randomUUID().toString().substring(0, 8));
        System.out.println(list);
    }, String.valueOf(i)).start();
}
```

```java
// ArrayList source:
public boolean add(E e) {
    ensureCapacityInternal(size + 1);  // Increments modCount!!
    elementData[size++] = e;
    return true;
}
// no synchronized, so not thread-safe
```

1. Solutions

1. Vector

```java
List list = new Vector<>();
```

```java
// Vector source:
public synchronized boolean add(E e) {
    modCount++;
    ensureCapacityHelper(elementCount + 1);
    elementData[elementCount++] = e;
    return true;
}
// synchronized, so thread-safe
```

2. Collections

```java
List list = Collections.synchronizedList(new ArrayList<>());
// Collections provides synchronizedList to make the list synchronized and thread-safe.
// Are HashMap and HashSet thread-safe? No, and there are matching synchronized wrappers for them too.
```

3. Copy-on-write (JUC)

```java
List<String> list = new CopyOnWriteArrayList<>();
```

4. The CopyOnWrite idea

```java
/**
 * Appends the specified element to the end of this list.
 *
 * @param e element to be appended to this list
 * @return {@code true} (as specified by {@link Collection#add})
 */
public boolean add(E e) {
    final ReentrantLock lock = this.lock;
    lock.lock();
    try {
        Object[] elements = getArray();
        int len = elements.length;
        Object[] newElements = Arrays.copyOf(elements, len + 1);
        newElements[len] = e;
        setArray(newElements);
        return true;
    } finally {
        lock.unlock();
    }
}
```

A CopyOnWrite container is a copy-on-write container. When an element is added, it is not added directly to the current array Object[]; instead the current Object[] is first copied into a new array, Object[] newElements, the element is added to the new array, and then the container's reference is pointed at the new array via setArray(newElements). The benefit is that the container can be read concurrently without locking, because no element is ever added to the array currently being read. A CopyOnWrite container is therefore also an instance of the read/write-separation idea: reads and writes use different arrays.

3. Set is not thread-safe

```java
Set<String> set = new HashSet<>();              // not thread-safe
Set<String> set = new CopyOnWriteArraySet<>();  // thread-safe
```

What is the underlying data structure of HashSet? A HashMap.

But HashSet's add takes a single value, while HashMap's put takes a key/value pair:

```java
public HashSet() {
    map = new HashMap<>();
}

private static final Object PRESENT = new Object();

public boolean add(E e) {
    return map.put(e, PRESENT) == null;
}
```

4. Map is not thread-safe

```java
Map<String, String> map = new HashMap<>();           // not thread-safe
Map<String, String> map = new ConcurrentHashMap<>(); // thread-safe
```

```java
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CopyOnWriteArraySet;

/**
 * Demonstrates that the basic collection classes are not thread-safe.
 */
public class NotSafeDemo {
    public static void main(String[] args) {
        Map<String, String> map = new ConcurrentHashMap<>();
        for (int i = 0; i < 30; i++) {
            new Thread(() -> {
                map.put(Thread.currentThread().getName(), UUID.randomUUID().toString().substring(0, 8));
                System.out.println(map);
            }, String.valueOf(i)).start();
        }
    }

    private static void setNoSafe() {
        Set<String> set = new CopyOnWriteArraySet<>();
        for (int i = 0; i < 30; i++) {
            new Thread(() -> {
                set.add(UUID.randomUUID().toString().substring(0, 8));
                System.out.println(set);
            }, String.valueOf(i)).start();
        }
    }

    private static void listNoSafe() {
        // List<String> list = Arrays.asList("a", "b", "c");
        // list.forEach(System.out::println);
        // copy-on-write
        List<String> list = new CopyOnWriteArrayList<>();
        // new CopyOnWriteArrayList<>();
        // Collections.synchronizedList(new ArrayList<>());
        // new Vector<>(); // new ArrayList<>();
        for (int i = 0; i < 30; i++) {
            new Thread(() -> {
                list.add(UUID.randomUUID().toString().substring(0, 8));
                System.out.println(list);
            }, String.valueOf(i)).start();
        }
    }
}
```

VIII. JUC's powerful helper classes

1. CountDownLatch: counting down

CountDownLatch is also called a closed latch. It lets one thread, or a batch of threads, wait at the latch until other threads have finished their corresponding operations; only then does the latch open and the waiting threads continue. More precisely, the latch maintains an internal countdown counter. The counter's value determines the latch's state, and therefore whether the waiting threads may proceed.

Common methods:

- `public CountDownLatch(int count)`: constructor; count is the counter's initial value and must not be negative, otherwise an exception is thrown.
- `public void await() throws InterruptedException`: makes the current thread wait; the method returns only once the counter reaches 0. It responds to thread interruption.
- `public boolean await(long timeout, TimeUnit unit) throws InterruptedException`: time-limited wait; returns true if the counter reaches 0 before the timeout, otherwise false once the timeout expires. It responds to thread interruption.
- `public void countDown()`: decrements the counter by 1.

Steps for using CountDownLatch:

1. Create the CountDownLatch object.
2. Call its instance method await() to make the current thread wait.
3. Call countDown() to decrement the counter.
4. When the counter reaches 0, await() returns.

```java
package com.xue.thread;

import java.util.concurrent.CountDownLatch;

/**
 * @Description:
 * Block some threads until other threads have completed a series of operations
```
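The example above breaks off mid-comment in these notes. As a stand-in, here is a minimal sketch that follows the four steps just listed. The scenario (six worker threads) and class name are invented for illustration, not taken from the original code.

```java
import java.util.concurrent.CountDownLatch;

public class CountDownLatchSketch {
    public static void main(String[] args) throws InterruptedException {
        // Step 1: create the CountDownLatch with an initial count of 6
        CountDownLatch countDownLatch = new CountDownLatch(6);

        for (int i = 1; i <= 6; i++) {
            new Thread(() -> {
                System.out.println(Thread.currentThread().getName() + "\t finished");
                // Step 3: each worker decrements the counter by 1 when it is done
                countDownLatch.countDown();
            }, String.valueOf(i)).start();
        }

        // Step 2: the main thread waits here ...
        // Step 4: ... and await() returns once the counter reaches 0
        countDownLatch.await();
        System.out.println(Thread.currentThread().getName() + "\t all workers done");
    }
}
```

The "all workers done" line is guaranteed to print only after all six workers have called countDown(), regardless of the order in which the workers are scheduled.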
Frequently Asked Questions

PEER

What do I need to use PEER?

PEER runs on any iPad or iPhone running iOS 10 or later. You will need an Apple ID in order to purchase the app. You will also need a MUSE headset from Interaxon (we recommend the 2016 model), and a computer (Mac or PC) to analyze your results.

On how many different devices can I use my copy of PEER?

When you purchase PEER, it will be associated with your Apple ID, rather than with any particular device. In general, you can use apps on more than one device as long as they share the same Apple ID, but we do not set the rules for multiple device usage. For more specific information on this topic, contact Apple.

In what format does PEER output its data?

PEER stores data in comma-separated values (CSV) files. It divides the data so that each sample has its own row, and the values for that sample are presented in the columns.

What's in the CSV file?

PEER gives you two options. The first is labelled "EEG & Markers," which is a minimal format that only includes the EEG data and the event markers. The second option is labelled "All." It includes all of the data recorded by the MUSE, such as gyro, accelerometer and artifact data.

How do I export my CSV file from PEER to my computer?

There are several options. You can email it, although you may experience difficulties if your email service can't handle large files. You can also transfer it via any file-sharing service that is installed on your iOS device, such as iCloud, Google Drive, or Dropbox. You can also transfer the file to a Mac via AirDrop. Here are the instructions for exporting:

1. View a recording in PEER by tapping on it.
2. Tap the "Share" button (upper right corner of the screen).
3. Pick your export format from the resulting menu (e.g. Share CSV (All)).
4. The "Share" panel will appear. Pick the option you wish to use to transfer the file (e.g. Air Drop, Email, Save to Dropbox etc.)

What should I do with my CSV-format output file?
Many different programs can handle PEER's CSV output. You should be able to choose whichever one works best for you. One prominent option is MATLAB. Many of our users like to use EEGLAB, which is a specialised toolbox for use with MATLAB.

I'm having trouble adapting my PEER data for use with EEGLAB. Can you help?

Yes! Our neuroscientists have written some MATLAB code that will convert PEER data into EEGLAB data. Click here to download a ZIP with the three necessary files. Note that you should apply this conversion only to the EEG channels and not to the full export.

How do I know what all of the event markers mean?

Here's a key that tells you:

How do I know if PEER can do my experiment?

PEER is optimally suited to handle any experiment that fits within either the "Oddball" or "Go-No-Go" research paradigms. However, it also features a raw EEG data recorder that could potentially be used for almost any experiment that involves the collection of EEG data.

What if I want to use subjects who are colour blind?

PEER can be configured to use any selection of the colours in our palette, so you can avoid any colours that are problematic for your subjects.

How does PEER affect the battery life of my iPad/iPhone?

We recommend that you charge your device fully before use, as PEER will increase battery usage. A fully-charged device has enough power to complete many rounds of testing.

What about the MUSE battery?

In general, the battery for the MUSE will last longer than the battery for your iPhone or iPad. According to Interaxon, it should last for about five hours. However, we recommend that you charge it fully before beginning testing as well.

How much free space do I need to have on my device to store recordings?

PEER needs a significant amount of free space for data storage. We recommend that you have at least 100MB of free space per test subject.

What are the technical characteristics of the MUSE?
You can find a detailed listing of the MUSE headband's technical characteristics at: http://developer.choosemuse.com/technical-specifications

Where can I buy a MUSE?

For your convenience, we are happy to provide you with this direct link to help you get started on your purchase quickly. You can also buy headsets through major retailers such as Best Buy or Amazon, or directly from Interaxon.

How do we reach you if we have questions or concerns?

If your question is not covered by this list of FAQs, please check our support forum to see if your question can be resolved there. If you still have questions, contact us through our support page and we will respond within 24 hours.
Bharat Bhise Examines Physical Therapy for Lower Back Pain

Introduction

According to Bharat Bhise, physical therapy is often recommended to people who suffer from chronic lower back pain, since it not only helps to decrease the pain but also increases function and helps to manage or prevent recurrences of lower back pain. Generally, four weeks of physical therapy can be a great non-surgical treatment option for dealing with lower back pain.

The Details

Here are a few important details that you should know if you are opting for physical therapy for lower back pain:

1. Active vs passive physical therapy – The goal of passive physical therapy (modalities) is to reduce the patient's pain to a manageable level. It involves the use of ice packs, heat application, and electrical stimulation. Active physical therapy for lower back pain involves the use of stretching and specific exercises to strengthen the muscles of the lower back so that they can support your body during physical activity without getting fatigued. Generally, it is recommended that you combine active physical therapy with passive physical therapy techniques to reap maximum results.

2. Stretching for relieving lower back pain – Following a stretching routine that is individually designed for you by your physical therapist or spine physician can be of immense help. Properly stretching the lower back muscles, legs, hips, and abdominal muscles can help to provide relief from muscle spasms caused by nerve irritation or improper posture. It can also prevent the shrinkage of muscles (atrophy) due to disuse and restore your natural range of motion. However, it is important to perform stretching exercises in a slow and gradual manner, since bouncing or fast movements can cause more harm.

3. Core strengthening exercises – Core strengthening exercises include specific abdominal strengthening exercises such as leg raises, crunches, sit-ups, and more, designed to strengthen the lower back muscles and the abdominal muscles. Additionally, you can practice dynamic stabilization exercises to strengthen the secondary muscles of the spine so that they can support your spine through various ranges of motion.

4. Passive physical therapy – Your physical therapist or chiropractor can utilize multiple modalities to treat intense and debilitating lower back pain. Generally, heat and cold therapy are the most popular, since they help to reduce inflammation and muscle spasms. For more severe cases of lower back pain, you might require steroidal treatment such as iontophoresis, or electrical stimulation such as a transcutaneous electrical nerve stimulator (TENS), in order to relieve lower back pain. Furthermore, you can also opt for an ultrasound treatment, since it also helps to enhance tissue healing.

Conclusion

Bharat Bhise suggests you hire a spine specialist or qualified physical therapist to guide you when performing active physical therapy exercises and stretching. It will ensure you have a proper understanding of the techniques involved and that you perform them correctly. It is also recommended that you stick to the program for the long term to prevent lower back pain in the future and to strengthen your back muscles and core further.
Rustfmt butchers snippet

I'm overall very satisfied with rustfmt. However, I have a piece of code which rustfmt butchers so badly that I fear I might have committed some kind of crime against humanity. (Not a bug: rustfmt seems to be doing the right thing technically, but not aesthetically.)

I have some vague memory of having seen some place to report exactly such code snippets (that don't look good even though they technically may be correct) to the rustfmt developers, but I can't find it. Did I have a very weird rustfmt-inspired dream, or is there such a form somewhere? I believe it even had a field for posting one's rustfmt.toml? ... or was this clang-format? :thinking:

---

The rustfmt repository has bug templates, e.g. the one for formatting issues.
Lung cancer screening: does pulmonary nodule detection affect a range of smoking behaviours?

Marcia E. Clark, Ben Young, Laura E. Bedford, Roshan das Nair, John F. R. Robertson, Kavita Vedhara, Francis Sullivan, Frances S. Mair, Stuart Schembri, Roberta C. Littleford, Denise Kendrick (Lead / Corresponding author)

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Background: Lung cancer screening can reduce lung cancer mortality by 20%. Screen-detected abnormalities may provide teachable moments for smoking cessation. This study assesses the impact of pulmonary nodule detection on smoking behaviours within the first UK trial of a novel auto-antibody test, followed by chest x-ray and serial CT scanning for early detection of lung cancer (Early Cancer Detection Test-Lung Cancer Scotland Study).

Methods: Test-positive participants completed questionnaires on smoking behaviours at baseline, 1, 3 and 6 months. Logistic regression compared outcomes between nodule (n = 95) and normal CT groups (n = 174) at 3 and 6 months follow-up.

Results: No significant differences were found between the nodule and normal CT groups for any smoking behaviours, and odds ratios comparing the nodule and normal CT groups did not vary significantly between 3 and 6 months. There was some evidence the nodule group were more likely to report significant others wanted them to stop smoking than the normal CT group (OR across 3- and 6-month time points: 3.04, 95% CI: 0.95, 9.73; P = 0.06).

Conclusion: Pulmonary nodule detection during lung cancer screening has little impact on smoking behaviours. Further work should explore whether lung cancer screening can impact on perceived social pressure and promote smoking cessation.
Original language: English
Pages (from-to): 600-608
Number of pages: 9
Journal: Journal of Public Health
Volume: 41
Issue number: 3
Early online date: 29 Sep 2018
Publication status: Published - Sep 2019

Keywords:

- lung cancer screening
- pulmonary nodules
- smoking behaviour
Urinary Tract Infections

What are Urinary Tract Infections?

UTIs. If you've had one, you know how disruptive they can be to life. If you haven't had one, you still could get one. Urinary tract infections (UTIs) are one of the most common reasons for patients to see their providers. UTIs are the cause of more than 8.1 million visits to health care providers each year. About 60 percent of women will get one UTI within their lifetime, with approximately 20 to 30 percent of women experiencing recurrent UTIs. Twelve percent of men will have at least one UTI during their lifetime.

Causes of UTIs

A UTI develops when bacteria get into the urine and travel up to your bladder. Large numbers of bacteria live in and around the genital area and can often get into the urine easily, allowing them to travel to the bladder or kidneys. Women are more likely to get UTIs due to the length of their urethras: a shorter urethra means less distance for bacteria to travel to reach the bladder. However, people of any age and sex can develop a UTI.

Risk Factors:

- Diabetes
- Frequent intercourse
- Bladder or bowel changes
- BPH
- Kidney stones
- Menopause
- Poor hydration
- Difficulty emptying the bladder
- Pregnancy
- Immunocompromised conditions
- Urinary catheters

Signs & Symptoms of UTIs

When you have a UTI, the lining of your bladder and urethra become red and irritated, just as your throat does when you have a throat infection. This irritation causes symptoms. If the infection travels up to the kidney, it is termed a kidney infection, or pyelonephritis. Regardless of how far the bacteria go, they can cause problems.

Symptoms

- Burning with urination
- Urinary frequency / urgency
- Lower back pain
- Blood in the urine
- Cloudy urine
- Change in the odor of the urine
- Fever
- Nausea
- Vomiting
- Severe back pain

How are UTIs Diagnosed?

UTIs are diagnosed by analyzing a sample of your urine. There are three methods to diagnose a UTI using a urine sample.
Traditionally, a urinalysis and urine culture have been the most common ways to detect a UTI. However, within the last two to three years, multiplex PCR-based urine assessment has increasingly been used.

Urinalysis

A quick, in-office test where a dipstick or microscope is used to detect any white blood cells, nitrates or blood in your urine sample. This is a quick look at the urine, but not a very sensitive or specific test for determining what type of bacteria (if any) is causing the infection and which antibiotic would best treat it.

Urine Culture

Your urine sample is sent off to a lab, and the results may take up to seven days to return. At the lab, your urine sample is analyzed in a different and more thorough way. The lab attempts to grow and specifically identify any bacteria from the urine sample and, if successful, determine exactly which antibiotics are most effective against that bacteria. This is considered the "gold standard" and is a very sensitive and specific way to diagnose and treat a UTI.

Urine PCR

A urine PCR test detects the presence of bacteria differently. It is a multiplex polymerase chain reaction (PCR) test which identifies more bacteria than a traditional urine culture in patients with symptoms of a UTI. It is also done using a sample of your urine.

Urine PCR Studies

Previously, a urine culture seemed to be the most effective method to detect a UTI. However, studies are beginning to reveal that PCR-based urine tests may be better at diagnosing UTIs, in both identifying and detecting the bacteria responsible for the infections, than the traditional "gold standard" urine culture. The PCR urine test exhibits greater accuracy for the detection of bacteria, identifying bacteria in the urine samples of 36% of patients who had a negative urine culture. Additionally, the PCR urine test has a fast turnaround time: typically results are available in a day, whereas a urine culture can take up to seven days for results to return.
A notable study demonstrated the superior detection of bacteria by the PCR urine test versus a urine culture. The urine samples of patients with symptoms of a UTI were tested using both a urine culture and a PCR test. In the 582 patients tested, bacteria were detected and identified in 56% of the patients using urine PCR, whereas using a urine culture, only 37% of patients had bacteria detected in their urine. In 175 patients whose UTI was caused by multiple bacteria, the PCR test detected 166 of those cases, whereas the urine culture detected only 39. Additionally, the PCR urine test detected 22 out of 24 of the bacteria/organisms that caused the UTIs, whereas the urine culture detected only 15. The explanation for the more accurate and precise results from the PCR test versus a urine culture may be that some bacteria/organisms that can cause a UTI are slow-growing or require specific growth conditions that may not be available in a lab when attempting to grow bacteria from a urine culture.

Urine PCR tests are quickly becoming the most accurate way to diagnose and treat UTIs, or to screen for UTIs if you have urinary symptoms and a UTI is suspected. The sensitivity, accuracy and quick turnaround time of urine PCR tests are changing the game in how quickly you can be diagnosed, treated and back to your normal routine, versus waiting for results and having your days disrupted by urinary symptoms.

If you have concerns that you have a UTI, have been treated for a UTI that has not resolved, have UTI-like symptoms or have frequent/chronic UTIs, schedule a visit with one of our providers, where the urine PCR test is available.
Sunday, August 4, 2024

How Can You Get Psoriasis

What Can I Do To Help My Feet

The most important action is to seek advice and help when you notice any changes in your foot, whatever they may be. You can talk to your GP or local pharmacist for advice. Some problems can be resolved simply. For issues that are more persistent, you may be referred to a specialist, such as a dermatologist, rheumatologist, physiotherapist, surgeon or chiropodist/podiatrist.

For general foot care, personal hygiene is important, particularly in avoiding fungal and viral infections. Change shoes and socks regularly, and avoid shoes which are ill-fitting or cause bad posture. If you are overweight, losing weight could relieve the pressure on your joints and improve your walking gait.

If you are diagnosed with psoriasis, develop a treatment regime that works for you; often, applying treatment after a bath or shower, along with the use of an emollient, can make the process easier.

If you have nail involvement, keep nails trimmed and clean. If they are thick, try trimming them after soaking them in a bath or shower, as this makes them softer and easier to cut. Alternatively, seek an appointment with a chiropodist, which is often available via the NHS.

If you have psoriatic arthritis, it is important to rest inflamed joints. Sourcing footwear that supports the foot and helps to reduce the pressure on the inflamed areas can help, as can inner soles and orthotic supports. Once again, a chiropodist is best placed to advise you.

This article is adapted from The psoriatic foot leaflet.

Symptoms Of Ear Psoriasis

Symptoms of ear psoriasis include:

- Dry patches of skin on or around the ear, appearing red in color on lighter skin and purple on darker skin
- The formation of crusty silvery or gray scales, called plaques
- Temporary hearing loss
- Otitis externa
- Tenderness, burning, or itching outside or within the ear
- A buildup of scaly skin in the ear canal

Temporary hearing loss is perhaps the most concerning complication associated with ear psoriasis. Hearing loss can occur as a result of the buildup of plaques and scales that block the inner ear canal. People with psoriasis are also more likely to experience a type of hearing loss known as sudden sensorineural hearing loss (SSNHL). SSNHL can affect individuals with psoriasis even if they don't have psoriasis in their ears. The cause of SSNHL is unknown, but scientists believe it's related to an autoimmune attack on a part of the inner ear called the cochlea.

Does This Mean I Will Have Psoriasis For Life?

In the absence of a cure you will always have psoriasis, but this does not mean that the signs will always be visible. Normally, the rash tends to wax and wane. There will be periods when your skin is good, with little or no sign of psoriasis. Equally, there will be times when it flares up. The length of time between clear skin and flare-ups differs for each individual and is unpredictable. It may be weeks, months or even years.

How Psoriasis Is Diagnosed

A GP can often diagnose psoriasis based on the appearance of your skin. In rare cases, a small sample of skin called a biopsy will be sent to the laboratory for examination under a microscope. This determines the exact type of psoriasis and rules out other skin disorders, such as seborrhoeic dermatitis, lichen planus, lichen simplex and pityriasis rosea.

You may be referred to a specialist in diagnosing and treating skin conditions if your doctor is uncertain about your diagnosis, or if your condition is severe. If your doctor suspects you have psoriatic arthritis, which is sometimes a complication of psoriasis, you may be referred to a doctor who specialises in arthritis (a rheumatologist). You may have blood tests to rule out other conditions, such as rheumatoid arthritis, and X-rays of the affected joints may be taken.

Sex, Fertility And Pregnancy

Sex can sometimes be painful for people with psoriatic arthritis, particularly for a woman whose hips are affected. Experimenting with different positions and communicating well with your partner will usually provide a solution.

Psoriatic arthritis won't affect your chances of having children. But if you're thinking of starting a family, it's important to discuss your drug treatment with a doctor well in advance. If you become pregnant unexpectedly, talk to your rheumatology department as soon as possible.

The following must be avoided when trying to start a family, during pregnancy and when breastfeeding:

Can Psoriatic Arthritis Affect Other Parts Of The Body?

Having psoriatic arthritis can put you at risk of developing other conditions and complications around the body. The chances of getting one of these are rare. But it's worth knowing about them and talking to your doctor if you have any concerns.

Eyes

Seek urgent medical attention if one or both of your eyes are red and painful, particularly if you have a change in your vision. You could go to your GP, an eye hospital, or your local A&E department. These symptoms could be caused by a condition called uveitis, which is also known as iritis. It involves inflammation at the front of the eye. This can permanently damage your eyesight if left untreated.

Other symptoms are:

- blurred or cloudy vision
- sensitivity to light
- not being able to see things at the side of your field of vision, known as a loss of peripheral vision
- small shapes moving across your field of vision

These symptoms can come on suddenly, or gradually over a few days. It can affect one or both eyes.
Symptoms Of Ear Psoriasis Symptoms of ear psoriasis include: • Dry patches of skin on or around the ear, appearing red in color on lighter skin and purple on darker skin • The formation of crusty silvery or gray scales, called plaques • Temporary hearing loss • Otitis externa • Tenderness, burning, or itching outside or within the ear • A buildup of scaly skin in the ear canal Temporary hearing loss is perhaps the most concerning complication associated with ear psoriasis. Hearing loss can occur as a result of the buildup of plaques and scales that block the inner ear canal. People with psoriasis are also more likely to experience a type of hearing loss known as sudden sensorineural hearing loss . SSNHL can affect individuals with psoriasis, even if they dont have psoriasis in their ears. The cause of SSNHL is unknown, but scientists believe its related to an autoimmune attack on a part of the inner ear called the cochlea. Does This Mean I Will Have Psoriasis For Life In the absence of a cure you will always have psoriasis, but this does not mean that the signs will always be visible. Normally, the rash tends to wax and wane . There will be periods when your skin is good, with little or no sign of psoriasis. Equally, there will be times when it flares up. The length of time between clear skin and flare-ups differs for each individual and is unpredictable. It may be weeks, months or even years. You May Like: Medical Treatment For Scalp Psoriasis How Psoriasis Is Diagnosed A GP can often diagnose psoriasis based on the appearance of your skin. In rare cases, a small sample of skin called a biopsy will be sent to the laboratory for examination under a microscope. This determines the exact type of psoriasis and rules out other skin disorders, such as seborrhoeic dermatitis, lichen planus, lichen simplex and pityriasis rosea. 
You may be referred to a specialist in diagnosing and treating skin conditions if your doctor is uncertain about your diagnosis, or if your condition is severe. If your doctor suspects you have psoriatic arthritis, which is sometimes a complication of psoriasis, you may be referred to a doctor who specialises in arthritis . You may have blood tests to rule out other conditions, such as rheumatoid arthritis, and X-rays of the affected joints may be taken. Sex Fertility And Pregnancy How Do You Get Psoriasis: Potential Causes and Risk Factors Sex can sometimes be painful for people with psoriatic arthritis, particularly a woman whose hips are affected. Experimenting with different positions and communicating well with your partner will usually provide a solution. Psoriatic arthritis wont affect your chances of having children. But if youre thinking of starting a family, its important to discuss your drug treatment with a doctor well in advance. If you become pregnant unexpectedly, talk to your rheumatology department as soon as possible. The following must be avoided when trying to start a family, during pregnancy and when breastfeeding: Also Check: Treatments For Plaque Psoriasis Scalp How To Keep Your Feet Healthy With Psoriatic Arthritis Psoriatic arthritis is a chronic condition that can get worse over time. A small percentage of people with PsA develop arthritis mutilans, which is a severe and painful form of the disease that can lead to deformity and disability. Though theres no cure for psoriatic arthritis, you can take steps to manage symptoms, control inflammation, and protect your joints. To help keep your feet healthy: 1. Stick to your PsA treatment plan. 
Your rheumatologist may prescribe nonsteroidal anti-inflammatory drugs to relieve pain and reduce inflammation, disease-modifying antirheumatic drugs to help slow the progression of psoriatic arthritis, or biologics, which are complex, targeted DMARDs that act on certain immune system pathways, to manage psoriatic arthritis symptoms and help prevent disease progression.

2. Lose weight if you need to.

Maintaining a healthy weight reduces the amount of stress on the joints in your feet, which can help relieve pain and improve your walking gait. Excess body weight can also increase inflammation, and potentially make arthritis symptoms worse. Check out these weight loss tips that are especially helpful when you have arthritis.

Stretching exercises, especially ones that are focused on the source of your foot pain, such as the plantar fascia or Achilles tendon, can help relieve pain. Talk to your doctor or podiatrist about exercises that are safe for you.

Can Psoriatic Arthritis Affect Other Parts Of The Body

Having psoriatic arthritis can put you at risk of developing other conditions and complications around the body. The chances of getting one of these are rare. But it's worth knowing about them and talking to your doctor if you have any concerns.

Eyes

Seek urgent medical attention if one or both of your eyes are red and painful, particularly if you have a change in your vision. You could go to your GP, an eye hospital, or your local A&E department. These symptoms could be caused by a condition called uveitis, which is also known as iritis. It involves inflammation at the front of the eye. This can permanently damage your eyesight if left untreated. Other symptoms are:

• blurred or cloudy vision
• sensitivity to light
• not being able to see things at the side of your field of vision, known as a loss of peripheral vision
• small shapes moving across your field of vision.

These symptoms can come on suddenly, or gradually over a few days. It can affect one or both eyes. It can be treated effectively with steroids.

Heart

Psoriatic arthritis can put you at a slightly higher risk of having a heart condition. You can reduce your risk by:

• not smoking
• staying at a healthy weight
• exercising regularly
• eating a healthy diet, that's low in fat, sugar and salt
• not drinking too much alcohol.

These positive lifestyle choices can help to improve your arthritis and skin symptoms. Talk to your doctor if you have any concerns about your heart health.

Crohn's disease

Non-alcoholic fatty liver disease

Also Check: Best Natural Shampoo For Psoriasis

Will My Ear Psoriasis Go Away

There is no cure for psoriasis. However, as experts learn more about the condition, inflammation, and the immune system, more effective treatments are being developed to make symptoms more manageable. Talk with your doctor about available treatment options and recommended lifestyle changes that can help manage your psoriasis.

How Will Psoriatic Arthritis Affect Me

Starting the right treatment as soon as possible will give you the best chance of keeping your arthritis under control and minimise damage to your body. Psoriatic arthritis can vary a great deal between different people. This makes it difficult to offer advice on what you should expect. It will usually have some effect on your ability to get around and your quality of life, but treatment will reduce the effect it has. Psoriatic arthritis can cause long-term damage to joints, bones and other tissues in the body, especially if it isn't treated.

Don't Miss: What Does Psoriasis Look Like On The Face

Foods That Contain Gluten

Research suggests that people with psoriasis tend to have higher rates of celiac disease. In people with celiac disease, gluten triggers an autoimmune response that causes the body to attack tissues in the small intestine. People with celiac disease need to avoid gluten completely, though some people without the disease have found that reducing gluten in their diet lessens psoriasis flare-ups.

You May Like: Is Silver Sulfadiazine Cream Good For Psoriasis

What Are Other Genital Parts Of Your Body Prone To Psoriasis

Genital psoriasis may flare up on:

• Pubis: Psoriasis in the pubis area can be treated in the same way as scalp psoriasis. However, you may need to use a milder treatment because of the sensitive nature of the skin.
• Upper thighs: Psoriasis in this region usually consists of multiple small, round patches of rash that are dark red and scaly. It can easily be irritated, especially when the thighs rub against each other when walking or running.
• Folds between the groin and the thigh: This psoriasis looks non-scaly and reddish white. The skin may also have cracks.
• Anus: Genital psoriasis symptoms on or near the anus usually look red, non-scaly and inclined to itchiness. Most times, it is confused with infections, yeast, ringworm infestation or haemorrhoid itching.
• Buttocks crease: Genital psoriasis between the crease in your buttocks may appear red with thick scales, or red and non-scaly.

Don't Miss: How To Cover Up Psoriasis

Don't Miss: Should You Remove Psoriasis Plaques

What Causes Genital Psoriasis Flare

The risk factors for genital psoriasis are the same as those for psoriasis anywhere on your body. If you have a family history of psoriasis, smoke, or have other health conditions, such as diabetes or high blood pressure, you may have a higher risk of developing psoriasis, according to a 2019 study published in the International Journal of Molecular Sciences. If you already have psoriasis, you might notice your symptoms sometimes get worse. This is called a psoriasis flare-up, and means the condition is actively causing visible symptoms. Everyone has different triggers, but there are a few things to note when it comes to genital psoriasis specifically, says Dr. Radusky. One thing that is certainly different in genital psoriasis is how fast a patient can go from symptom-free to flare-up, he says. Athletic activity, wearing tight underwear or tight-fitting clothing, and sexual activity can all trigger a genital psoriasis flare-up, he says.

Also Check: What Does Plaque Psoriasis Look Like When It Starts

Scratching Can Irritate Your Skin Which Can Lead To A Psoriasis Flare

It might not be just the red, scaly plaques from psoriasis that drive you nuts. The itch that goes along with psoriasis can bother you even in places that are lesion free. Up to 90 percent of people with psoriasis experience itching, according to the National Psoriasis Foundation, and it can impact your quality of life. It can interfere with your sleep, increase your stress, and even take a toll on your sex life. It's not always a pure itch, says Gil Yosipovitch, MD, a professor of dermatology at the University of Miami Miller School of Medicine in Florida. Instead, you might feel a burning or pinching sensation. And though the urge to scratch can be hard to resist, scratching can just make psoriasis symptoms worse. Scratching can damage your skin, leading to infection or skin injuries that can trigger a psoriasis flare. Following your psoriasis treatment plan is the best way to prevent bothersome itching. But there are other steps you can take to find relief when itching strikes.

Also Check: Can Scalp Psoriasis Make Your Hair Fall Out

Problems With The Immune System

Your immune system is your body's defence against disease and it helps fight infection. One of the main types of cell used by the immune system is called a T-cell. T-cells normally travel through the body to detect and fight invading germs, such as bacteria. But in people with psoriasis, they start to attack healthy skin cells by mistake.
This causes the deepest layer of skin to produce new skin cells more quickly than usual, triggering the immune system to produce more T-cells. It's not known what exactly causes this problem with the immune system, although certain genes and environmental triggers may play a role.

Know The Underlying Causes

Eczema and psoriasis have different causes. Psoriasis is an autoimmune disease, which occurs when your immune system becomes dysfunctional and your skin cells start to grow too fast. The cells that pile up on the top of the skin then lead to the formation of a white scale. Both genetic and environmental factors may cause eczema. It may be due to the mutation of the gene responsible for creating a protective layer on the top of the skin. Thus, the mutated gene leaves the skin prone to infection and flare. A dry climate can also play a role in triggering eczema.

Recommended Reading: I Have Psoriasis On My Face

Research And Statistics: Who Has Psoriasis

According to the National Psoriasis Foundation, about 7.5 million people in the United States have psoriasis. Most are white, but the skin disease also affects Black, Latino, and Asian Americans as well as Native Americans and Pacific Islanders. The disease occurs about equally among men and women. According to the National Institutes of Health, it is more common in adults, and you are at a greater risk if someone in your family has it. A study published in September 2016 in the journal PLoS One concluded that interactions between particular genes as well as genetic and environmental factors play an important role in the disease's development. People with psoriasis generally see their first symptoms between ages 15 and 30, although developing the disease between 50 and 60 years of age is also common. The biggest factor for determining prognosis is the amount of disease someone has, says Michael P. Heffernan, MD, a dermatologist at the San Luis Dermatology and Laser Clinic in San Luis Obispo, California.

Can I Still Have Sex If I Have Genital Psoriasis

The short answer is yes, if it feels good. It all depends on the severity of your flare-up and personal preference. Genital psoriasis doesn't spread by sexual contact, nor does it affect fertility. If you're having a genital psoriasis flare-up, friction from sexual contact can be painful and might worsen your symptoms. Ask your doctor if condoms or lubricants are advisable and which kinds are best. After having sex, gently clean and pat dry the area completely.

Read Also: What Is The Best Soap For Psoriasis

Causes Triggers And Risk Factors

Psoriasis develops when the body replaces skin cells too fast. Doctors do not fully understand what causes this skin condition, but they believe it to be an autoimmune disease. This means that the body's immune system attacks healthy tissue, such as skin cells, by mistake. A person's genes may play a role in the development of psoriasis, and it may run in families. People who have other autoimmune diseases are also more likely to develop psoriasis. Many people with psoriasis find that certain things trigger or worsen their symptoms. Potential triggers can vary from person to person, but may include:

• a recent injury to the skin, such as a cut, insect bite, or sunburn
• weather changes, especially when they cause skin dryness
• an illness or infection
• certain medications

Some people first notice psoriasis after they have experienced a trigger, so may mistake their foot symptoms for an allergic reaction or an infection, such as athlete's foot. Athlete's foot is a common fungal infection that occurs on the feet. Unlike psoriasis, it is contagious. A person can get athlete's foot from surfaces, towels, and clothes that have become infected with the fungus. In most cases, athlete's foot requires treatment. However, a person can usually treat the infection at home with over-the-counter antifungal medications. Some differences between athlete's foot and psoriasis include:

Causes Of Psoriasis In The Ears

Psoriasis is a skin disease caused by a person's immune system attacking their own skin. Excess inflammation and activity by the immune system cause skin cells to replicate out of control. Overactive skin-cell production leads to skin buildup, which often appears as the discolored lesions covered with gray or silvery scales characteristic of psoriasis. This response can occur on skin all over the body, including the ears. Factors that can make psoriasis worse or lead to a flare-up of ear psoriasis may include:

• Cold weather
• Infection

You May Like: How To Treat Severe Psoriasis

How Can I Get Started With A Psoriasis Diet

If you're going to change your diet to combat psoriasis, Wesdock recommends starting slowly. Jumping into a highly restrictive diet isn't usually sustainable and may deprive you of important nutrients. Instead, start by cutting out some highly processed foods. Substitute the pastries and cookies with fresh fruit. Opt for herbal tea or water flavored with fresh fruit, mint or cucumber. If you think there's a specific food or ingredient that's triggering psoriasis flare-ups, talk to your doctor or a registered dietitian. Being overweight or obese can also make psoriasis worse, so you may want to start a weight loss plan that includes fewer calories and smaller portion sizes. Any psoriasis treatment diet should be accompanied by healthy lifestyle choices. Get plenty of sleep and regular exercise, and try to reduce stress in your life. If you smoke, talk to your doctor about a plan to quit.
Tag: conversion van cost

If you're in the market for a versatile, high-functioning vehicle, you've probably stumbled upon conversion vans. While these specialized vehicles offer a myriad of amenities and customized features, they also come with a hefty price tag. But why do they cost so much? Let's delve into the details.

Are Conversion Vans Worth It?

The short answer is: it depends on your needs. For those who require a mobile office, personalized recreational vehicle, or even a customized living space, conversion vans offer unparalleled features that go beyond traditional automobiles. Features like state-of-the-art audio systems, custom interior designs, and advanced mobility options add a layer of luxury and functionality. Therefore, if your lifestyle or occupation requires these unique configurations, a conversion van provides a comprehensive solution, making the investment worthwhile.

How Much Should I Budget for a Van Conversion?

Budgeting for a conversion van involves several variables. A basic van chassis will cost you [...]

There's no "one-size-fits-all" conversion van. In fact, you can design a conversion van to fit practically any need, ranging from camping convenience to a mobile home to a remote workstation where you can make money from anywhere in the world. But those different conversion options also correlate to different prices. It can be tough to know how much a good conversion van costs, especially when considering features, amenities, living space, and more.

What Affects Conversion Van Price?

Lots of things affect the cost of a conversion van, including:

• The van itself. Conversion vans can range anywhere from $1000-$100,000 in total!
• Electricity and electrical systems. Conversion and electrical systems can be anywhere from several hundred dollars to several thousand dollars.
• Furnishings and other materials. The more extravagant your van, the more it will cost. Furnishings and materials may cost a few hundred dollars to upwards of $2000.
python – Keras VGG16 fine-tuning

There is a VGG16 fine-tuning example on the Keras blog, but I can't reproduce it. More precisely, here is the code used to instantiate VGG16 without the top layer and to freeze all blocks except the topmost one:

WEIGHTS_PATH_NO_TOP = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5'
weights_path = get_file('vgg16_weights.h5', WEIGHTS_PATH_NO_TOP)

model = Sequential()
model.add(InputLayer(input_shape=(150, 150, 3)))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3'))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block5_maxpool'))

model.load_weights(weights_path)

for layer in model.layers:
    layer.trainable = False
for layer in model.layers[-4:]:
    layer.trainable = True
    print("Layer '%s' is trainable" % layer.name)

Next, creating a top model with a single hidden layer:

top_model = Sequential()
top_model.add(Flatten(input_shape=model.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))
top_model.load_weights('top_model.h5')

Note that it was previously trained on bottleneck features, as described in the blog post. Next, add this top model to the base model and compile:

model.add(top_model)
model.compile(loss='binary_crossentropy',
              optimizer=SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])

And finally, fit on the cats/dogs data:

batch_size = 16

train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)

train_gen = train_datagen.flow_from_directory(
    TRAIN_DIR,
    target_size=(150, 150),
    batch_size=batch_size,
    class_mode='binary')
valid_gen = test_datagen.flow_from_directory(
    VALID_DIR,
    target_size=(150, 150),
    batch_size=batch_size,
    class_mode='binary')

model.fit_generator(
    train_gen,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=nb_epoch,
    validation_data=valid_gen,
    validation_steps=nb_valid_samples // batch_size)

But here is the error I get when trying to fit:

ValueError: Error when checking model target: expected block5_maxpool to have 4 dimensions, but got array with shape (16, 1)

So it seems something is wrong with the last pooling layer in the base model. Or perhaps I did something wrong when connecting the base model with the top one. Does anyone have a similar problem? Or maybe there is a better way to build such "concatenated" models? I am using keras == 2.0.0 with the theano backend.

Note: I was using examples from a gist and the applications.VGG16 utility, but had issues trying to concatenate the models; I am not too familiar with the Keras functional API. So the solution I provide here is the most "successful" one, i.e. it fails only at the fitting stage.

Update #1

OK, here is a short explanation of what I am trying to do. First of all, I generate VGG16 bottleneck features as follows:

def save_bottleneck_features():
    datagen = ImageDataGenerator(rescale=1./255)
    model = applications.VGG16(include_top=False, weights='imagenet')

    generator = datagen.flow_from_directory(
        TRAIN_DIR,
        target_size=(150, 150),
        batch_size=batch_size,
        class_mode=None,
        shuffle=False)
    print("Predicting train samples..")
    bottleneck_features_train = model.predict_generator(generator, nb_train_samples)
    np.save(open('bottleneck_features_train.npy', 'w'), bottleneck_features_train)

    generator = datagen.flow_from_directory(
        VALID_DIR,
        target_size=(150, 150),
        batch_size=batch_size,
        class_mode=None,
        shuffle=False)
    print("Predicting valid samples..")
    bottleneck_features_valid = model.predict_generator(generator, nb_valid_samples)
    np.save(open('bottleneck_features_valid.npy', 'w'), bottleneck_features_valid)

Then, I create a top model and train it on these features as follows:

def train_top_model():
    train_data = np.load(open('bottleneck_features_train.npy'))
    train_labels = np.array([0]*(nb_train_samples / 2) + [1]*(nb_train_samples / 2))
    valid_data = np.load(open('bottleneck_features_valid.npy'))
    valid_labels = np.array([0]*(nb_valid_samples / 2) + [1]*(nb_valid_samples / 2))

    model = Sequential()
    model.add(Flatten(input_shape=train_data.shape[1:]))
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='rmsprop',
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    model.fit(train_data, train_labels,
              nb_epoch=nb_epoch,
              batch_size=batch_size,
              validation_data=(valid_data, valid_labels),
              verbose=1)
    model.save_weights('top_model.h5')

So basically, there are two trained models: base_model with ImageNet weights, and top_model with weights generated from the bottleneck features. And I wonder how to concatenate them. Is it possible, or am I doing something wrong? Because as far as I can see, the answer by @thomas-pinetz assumes that the top model is not trained separately and is added to the model right away. Not sure if I am being clear; here is a quote from the blog:

In order to perform fine-tuning, all layers should start with properly trained weights: for instance you should not slap a randomly initialized fully-connected network on top of a pre-trained convolutional base. This is because the large gradient updates triggered by the randomly initialized weights would wreck the learned weights in the convolutional base. In our case this is why we first train the top-level classifier, and only then start fine-tuning convolutional weights alongside it.

Best answer

I think the weights described by the VGG net do not fit your model, and the error stems from this. In any case, there is a better way to do this, using the network itself as described at (https://keras.io/applications/#vgg16). You can just use:

base_model = keras.applications.vgg16.VGG16(include_top=False,
                                            weights='imagenet',
                                            input_tensor=None,
                                            input_shape=None)

to instantiate a VGG net that is pre-trained. Then you can freeze the layers and use the Model class to instantiate your own model, like this:

x = base_model.output
x = Flatten()(x)
x = Dense(your_classes, activation='softmax')(x)  # minor edit
new_model = Model(input=base_model.input, output=x)

To combine the bottom and the top network you can use the following code snippet. It uses these functions (the Input layer (https://keras.io/getting-started/functional-api-guide/), load_model (https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model), and the Keras functional API):

final_input = Input(shape=(3, 224, 224))
base_model = vgg...
top_model = load_model(weights_file)

x = base_model(final_input)
result = top_model(x)
final_model = Model(input=final_input, output=result)
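As a cross-check of the approach in the answer, here is a minimal sketch of the same freeze-then-stack pattern written against the Keras bundled with modern TensorFlow (an assumption; the original question used standalone keras 2.0.0 with the Theano backend). weights=None is used here purely to avoid downloading the ImageNet weights; the real workflow would pass weights='imagenet':

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# weights=None avoids the ImageNet download; use weights='imagenet' in practice
base_model = VGG16(include_top=False, weights=None, input_shape=(150, 150, 3))

# Freeze everything except the last convolutional block, as in the question
for layer in base_model.layers:
    layer.trainable = layer.name.startswith('block5')

# Stack the top classifier directly on the base model's output tensor,
# instead of model.add(top_model) on a Sequential, so the shapes line up.
x = layers.Flatten()(base_model.output)
x = layers.Dense(256, activation='relu')(x)
x = layers.Dropout(0.5)(x)
out = layers.Dense(1, activation='sigmoid')(x)

model = models.Model(inputs=base_model.input, outputs=out)
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
print(model.output_shape)  # the target is now a single sigmoid unit
```

Because the whole thing is one Model whose final output is the sigmoid unit, fitting against (batch, 1) labels no longer trips the 4-dimensional block5_maxpool target check; separately pre-trained top-model weights can still be copied onto the three top layers before fine-tuning.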
Click on each question below for information and resources on autism What is autism? Autism or Autism Spectrum Disorder is a neurological disorder that affects how someone communicates, behaves, and views the world. ASD is different for every person, but there are some common symptoms which may help diagnose a person with ASD.  This PDF from AMAZE has a simple and understandable explanation of ASD:   The CDC has great information about ASD and symptoms on their website:  https://www.cdc.gov/ncbddd/autism/facts.html What’s the difference between autism and autism spectrum disorder (ASD)?  Autism and autism spectrum disorder are the same diagnosis, just with different phrasing. ASD is wording used to describe the wide range of symptoms those with autism can have. Previously, the diagnosis of “Asperger’s” described someone with “high functioning autism.” Asperger’s became part of ASD in 2013.  What are some signs of autism? According to the CDC, here are some signs that your child might be on the autism spectrum:  Ages 9-12 months • Avoids or does not keep eye contact • Does not respond to name  • Does not show facial expressions like happy, sad, angry, and surprised  • Does not play simple interactive games like pat-a-cake by 12 months  • Uses few or no gestures by 12 months of age (like waving goodbye) Ages 1 year old to 3 years old • Does not show you an object that he or she likes by 15 months of age • Does not pretend by 2-3 years old like pretending to “feed” a doll • Shows little interest in other children • Has trouble understanding other people’s feelings or talking about own feelings at 3 years old  • Does not take turns by 3 years old Ages 4-11 • Delayed language and/or movement skills • Hyperactive, impulsive, and/or inattentive behavior • Anxiety, stress, or excessive worry Any Age (including adults) • Gets upset by minor changes • Has obsessive interests • Must follow certain routines • Flaps hands, rocks body, or spins self in circles • Has unusual 
reactions to the way things sound, smell, taste, look, or feel Where can I find more information about autism spectrum disorder? The National Institute for Neurological Disorders and Stroke has a great website page with detailed information about autism Spectrum Disorder: https://www.ninds.nih.gov/Disorders/Patient-Caregiver-Education/Fact-Sheets/Autism-Spectrum-Disorder-Fact-Sheet Autism Speaks also has various toolkits on their website to aid in understanding autism: https://www.autismspeaks.org/tool-kit You can also call a representative at Autism Speaks to learn about autism spectrum disorder directly: 1-888-288-4762 (En Español: 1-888-772-9050) My child is diagnosed with ASD, what do I tell his teachers? This PDF has some helpful information that you can give your child’s teacher(s), as well as a form that can be printed and filled out based on your child’s specific needs: https://behavioral-pediatrics.org/wp-content/uploads/2019/07/ASD-Teacher-Handout-instructions.pdf  What do I tell my child’s sibling about autism?  When telling children about their sibling’s diagnosis, help them understand why their sibling may act differently, need different things, or get “special treatment.” This website provides some helpful information about how to approach the topic with your children: https://raisingchildren.net.au/autism/communicating-relationships/family-relationships/siblings-asd#explaining-autism-to-siblings-nav-title  This webpage from Kids Health has a helpful Q&A in children’s language: https://kidshealth.org/en/kids/autism.html?ref=search For younger children to understand better, there are a few books which can explain ASD in easily understood language:  For Children 3-6 years old:  “Since We’re Friends” by Celeste Shally. Here is a video of the book being read: https://www.youtube.com/watch?v=m6Sy3FT82fg  For children 4-8 years old:   “My Brother Charlie” by Holly Robinson Peete & Ryan Elizabeth Peete. 
Here is a video of the book being read: https://www.youtube.com/watch?v=LKxelsOXD4Q For children 5-8 years old: “Uniquely Wired: A Story About Autism and Its Gifts” by Julia Cook. Here is a video of the book being read: https://www.youtube.com/watch?v=br-Y-ntf6Ss  How do I tell other people about my child’s autism diagnosis? When telling other people about your child’s autism diagnosis, they may respond with shock, disbelief, or express some untrue assumptions or prejudices towards autism. Try to be patient with those who have just learned about your child’s autism diagnosis, answer any questions they may have, and correct any misinformation they may believe.  Since an autism diagnosis takes a long time, you and your child have had to adjust to this new information. Give the people around you some time to adjust to the idea and learn about autism. After learning more about autism, the people around you can better understand how it affects your child. You also do not have to tell everyone right away, and can tell people gradually over time.  Your child may also have needs and traits which do not match up with a person’s understanding of autism. Take time to explain to them that autism looks different on every person.  Specifically for telling your child’s grandparents about their diagnosis, this PDF from Autism Speaks is a helpful toolkit for grandparents to learn about ASD: https://www.autismspeaks.org/tool-kit/grandparents-guide-autism
-5 This question already has an answer here: How can i group a series of integer numbers, eg., [4, 2, 3, 3, 2, 4, 1, 2, 4] to become [4, 4, 4, 2, 2, 2, 3, 3, 1] without using any sorting algorithm. Note that i don't need the result to be in any sorted order, but i do need the suggested algorithm to group a million of numbers faster than qsort. marked as duplicate by user2100815, πάντα ῥεῖ, cow, geza, Makyen May 3 at 23:58 This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question. • 1 It might be faster to sort it (say, with std::sort) than to do the kind of grouping you suggest. – Fred Larson May 1 at 21:16 • What is the range of your numbers? – geza May 1 at 21:20 • The ranges can be wide as the numbers can 8/16/32/64 bits. Actually i need the algorithm to be generalized for float/double or even strings. – cow May 1 at 21:22 • It might worth checking out a hash table based solution. But maybe it will be slower than quicksort because of bad cache utilization. – geza May 1 at 21:26 • 1 why bother optimizing with such a small dataset? just stick to std::sort – skeller May 1 at 21:42 2 This should work if you don't care too much about using extra space. It first stores the number of occurrences of each number in an unordered_map and then creates a vector that contains each value in the map, repeated the number of times it was seen in the original vector. See the documentation for insert for how this works. The [] operator for an unordered_map works in O(1) on average. So creating the unordered_map takes O(N) time. Iterating through the map and populating the return vector again takes O(N) time, so this whole thing should run in O(N). Note that this creates two extra copies of the data. In the worst case, the [] operator takes O(N) time, so the only way to really know if this is faster than qsort would be to measure it. 
#include <vector>
#include <unordered_map>
#include <iostream>

std::vector<int> groupNumbers(const std::vector<int> &input) {
    std::vector<int> grouped;
    std::unordered_map<int, int> counts;
    for (auto &x : input) {
        ++counts[x];
    }
    for (auto &x : counts) {
        grouped.insert(grouped.end(), x.second, x.first);
    }
    return grouped;
}

// example
int main() {
    std::vector<int> test{1,2,3,4,3,2,3,2,3,4,1,2,3,2,3,4,3,2};
    std::vector<int> result(groupNumbers(test));
    for (auto &x : result) {
        std::cout << x << std::endl;
    }
    return 0;
}

• Grouping can be done in basically O(n) using partitioning and using no extra space. – PaulMcKenzie May 1 at 21:35
• worth a try but I'd expect this to be slower because of the overhead of hashing, the missing memory alignment of the map and the copies – skeller May 1 at 21:37
• @PaulMcKenzie: How? Here you say O(n), under the question you say O(n*m). O(n*m) seems OK, but O(n) is not (Note: this solution is O(n) as well, but with a much larger constant factor than qsort likely has). – geza May 1 at 21:41
• It depends on the number of unique groups. If there are a million numbers and only a few unique groups, then the complexity is O(n*(m-2)). We don't really know what the OP's dataset looks like, but if there are a lot of numbers and only 3 or 4 groups, a grouping algorithm will beat a sorting algorithm. – PaulMcKenzie May 1 at 21:45
• did measurement, this takes about 1.5 times longer than std::sort for 1 mio and about 2.5 times longer for 10 mio numbers. – skeller May 1 at 22:19
__label__pos
0.858729
Convert SQL Query to IList using Dapper
Tags: c#-4.0, dapper, sql, sql-server, visual-studio-2012

Question

I use VS2012 and SQL Server 2012, and I added Dapper to my VS2012 project. I have one class like this:

public class DomainClass
{
    public SexEnum sex { get; set; }
    public int Id { get; set; }
    public string Name { get; set; }
}

enum SexEnum
{
    Men = 0,
    Women = 1
}

I have a table ATest and a query like this:

Select * From ATest

How can I execute this query and convert the result into an IList of DomainClass with Dapper?

Accepted Answer

public static IList<DomainClass> GetAllDomains()
{
    using (var con = new SqlConnection(Properties.Settings.Default.YourConnection))
    {
        const String sql = "Select sex, Id, Name From ATest ORDER BY Name ASC;";
        con.Open();
        IList<DomainClass> domains = con.Query<DomainClass>(sql).ToList();
        return domains;
    }
}

Popular Answer

I wrote this static class and this extension method:

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using Dapper;

/// <summary>
/// Put this class in your project
/// and use the ConvertSqlQueryToIList extension method on a SqlConnection object.
/// </summary>
public static class Convert
{
    public static IList<T> ConvertSqlQueryToIList<T>(
        this SqlConnection sqlConn,
        T domainClass,
        string sqlQueryCommand)
    {
        // Open the connection only if the caller has not done so already,
        // and close it again only if it was opened here.
        bool openedHere = false;
        try
        {
            if (sqlConn.State != ConnectionState.Open)
            {
                sqlConn.Open();
                openedHere = true;
            }
            IList<T> result = sqlConn.Query<T>(sqlQueryCommand).ToList();
            return result;
        }
        finally
        {
            if (openedHere)
                sqlConn.Close();
        }
    }
}

I also found that Dapper converts an int field into an enum property, like the sex property in my question.

Licensed under: CC-BY-SA with attribution. Not affiliated with Stack Overflow.
__label__pos
0.999852
ABSTRACT
Repeated subcube allocation and deallocation in hypercubes tend to cause fragmentation, which can be taken care of by task migration. Earlier task migration dealt with the establishment of a single path from each participating node for transmitting migrated information. The time required for migration with single paths is long if a large amount of information is moved in hypercubes. This paper considers speedy task migration in that two disjoint paths are created between every pair of corresponding nodes for delivering migrated information simultaneously, reducing the size of data transmitted over a path. All migration paths selected are pairwise disjoint and contain no link of active subcubes, so that task migration can be performed quickly and on-line without interrupting the execution of other jobs. Our approach could lead to considerable savings in migration time for contemporary hypercube systems, where circuit switching or wormhole routing is implemented.

INDEX TERMS
Disjoint paths, fragmentation, hypercubes, subcubes, task migration.

CITATION
Nian-Feng Tzeng, Hsing-Lung Chen, "On-Line Task Migration in Hypercubes Through Double Disjoint Paths", IEEE Transactions on Computers, vol. 46, no. , pp. 379-384, March 1997, doi:10.1109/12.580437
__label__pos
0.529606
Hydrogen End-Use Applications

Integrating hydrogen end-use applications in industries such as automotive, marine, industrial, and aviation requires the development and deployment of hydrogen technologies specific to each sector. Here’s an overview of how hydrogen can be integrated into these industries.

• Hydrogen End-Use Applications in the Aviation Sector

In aviation, hydrogen end use centers on hydrogen-powered aircraft and the necessary infrastructure development:

Hydrogen-Powered Aircraft:
- Combustion Engines: Hydrogen can be used in combustion engines to propel aircraft. In this method, hydrogen combusts with oxygen to produce water vapor and heat, generating the necessary thrust for propulsion.
- Fuel Cells: Another approach is using hydrogen fuel cells. Fuel cells electrochemically convert hydrogen into electricity, which then powers electric motors to drive the aircraft.

Environmental Benefits:
- Reduced Carbon Emissions: Hydrogen-powered aircraft offer a promising solution for reducing carbon emissions in the aviation sector.

Infrastructure Development:
- Hydrogen Storage Facilities: To facilitate the use of hydrogen in aviation, airports need to develop adequate storage facilities for hydrogen.
- Hydrogen Refueling Systems: Specialized hydrogen refueling systems are required at airports to efficiently and safely refuel hydrogen-powered aircraft.
- Aircraft Design Modifications: Existing aircraft designs may need modifications to accommodate the storage and distribution of hydrogen.

Technology Advancements:
- Research and Development: Ongoing research and development efforts are crucial for advancing hydrogen propulsion technology in aviation.
- Testing and Certification: Rigorous testing and certification processes are necessary to ensure the safety and reliability of hydrogen-powered aircraft.
Collaboration and Industry Support:
- Public-Private Partnerships: Collaboration between governments, aviation industry stakeholders, and research institutions is essential to drive the development and adoption of hydrogen-powered aviation.
- Incentives and Policy Support: Governments can incentivize the adoption of hydrogen in aviation through policies such as tax incentives, grants, and emissions reduction targets.

• Hydrogen End-Use Applications in the Industrial Sector

Hydrogen for Industrial Processes:
- Refineries: Hydrogen is a crucial element in the refining of crude oil. It is used in hydrocracking processes to remove impurities and produce high-quality fuels.
- Petrochemicals: In petrochemical production, hydrogen is a feedstock for various processes, including hydrocracking and desulfurization.
- Steel Production: Hydrogen is gaining attention as a cleaner alternative to coal in the production of steel. By replacing coke in blast furnaces with hydrogen, the steel industry can achieve a reduction in carbon emissions, moving towards a more sustainable and environmentally friendly steel manufacturing process.
- Cement Manufacturing: Hydrogen can be used in cement production to replace traditional fuels in kilns. This can help decarbonize the cement industry, which is a significant source of carbon dioxide emissions.

Transition to Low-Carbon or Renewable Hydrogen:
- Gray Hydrogen: Traditionally, hydrogen has been produced from fossil fuels, resulting in gray hydrogen. Transitioning from gray to low-carbon or renewable hydrogen is crucial for reducing the environmental impact of industrial processes.
- Blue Hydrogen: In some cases, carbon capture and storage (CCS) can be applied to gray hydrogen production, resulting in blue hydrogen. This is a transitional step towards achieving a low-carbon hydrogen economy.
- Green Hydrogen: Produced through the electrolysis of water using renewable energy sources, green hydrogen is considered the most environmentally friendly option. Its use in industrial processes aligns with broader sustainability goals.

On-site Hydrogen Production:
- Electrolysis: Industries with high hydrogen demand can install on-site electrolysis facilities. Electrolysis involves splitting water into hydrogen and oxygen using an electric current.
- Co-production: Some industries generate hydrogen as a byproduct of existing processes, such as chlor-alkali production or ammonia production.

Economic and Environmental Benefits:
- Cost Savings: On-site hydrogen production can offer economic advantages by reducing transportation costs associated with the delivery of hydrogen.
- Emissions Reduction: Shifting from fossil fuel-based hydrogen to low-carbon or renewable hydrogen helps industries meet emission reduction targets.

Investment and Policy Support:
- Industry Collaboration: Collaboration between industrial stakeholders, governments, and research institutions is essential for advancing the adoption of hydrogen in industrial processes.
- Government Incentives: Governments can provide financial incentives, grants, and supportive policies to encourage industries to invest in low-carbon and renewable hydrogen technologies.

Here are some examples of hydrogen integration in various industries:

Automotive Sector:

Examples:
- Toyota Mirai: The Toyota Mirai is a hydrogen fuel cell electric vehicle (FCEV) that utilizes hydrogen to generate electricity, powering an electric motor for propulsion. It offers a range of over 500 kilometers and refueling times comparable to conventional vehicles.
- Hyundai Nexo: The Hyundai Nexo is another hydrogen-powered FCEV that provides long-range capabilities and emits only water vapor. It has been deployed in several countries, including South Korea, the United States, and Europe.
Use Cases:
- Municipal Fleets: Municipalities can deploy hydrogen-powered vehicles in their fleets, such as buses and garbage trucks. These vehicles can operate on fixed routes and return to centralized refueling stations, making hydrogen a viable option for clean and efficient public transportation.
- Long-Haul Trucks: Hydrogen fuel cell technology can be employed in long-haul trucks, offering zero-emission transportation for heavy-duty freight.

Marine Sector:

Examples:
- Viking Energy: The Viking Energy is a hydrogen-powered offshore vessel being developed by Eidesvik Offshore, with hydrogen fuel cells providing propulsion.
- MS Hydroville: The MS Hydroville is the first certified passenger vessel powered by hydrogen fuel cells in Belgium. It operates as a shuttle for commuters and tourists, demonstrating the feasibility and environmental benefits of hydrogen in the maritime sector.

Use Cases:
- Passenger Ferries: Hydrogen can be utilized in passenger ferries operating in coastal areas and inland waterways. Hydrogen fuel cell systems enable zero-emission transportation for commuters and tourists, reducing the environmental impact of marine transport.
- Offshore Support Vessels: Hydrogen-powered vessels can be employed in the offshore sector, supporting operations in the oil and gas industry, offshore wind farms, and other offshore installations.

READ MORE: https://www.marketsandmarkets.com/industry-practice/hydrogen/hydrogen-end-use-applications
__label__pos
0.890708
Scielo RSS <![CDATA[Revista médica de Chile]]> https://scielo.conicyt.cl/rss.php?pid=0034-988720070001&lang=es vol. 135 num. 1 lang. es <![CDATA[SciELO Logo]]> https://scielo.conicyt.cl/img/en/fbpelogp.gif https://scielo.conicyt.cl <![CDATA[<b>El aniversario 135 de la Revista Médica de Chile</b>]]> https://scielo.conicyt.cl/scielo.php?script=sci_arttext&pid=S0034-98872007000100001&lng=es&nrm=iso&tlng=es Revista Médica de Chile was founded 135 years ago and it has been published monthly since then, being now the 23rd oldest biomedical journal in the world and the second oldest published in Spanish (Table 1). It is included in the major international data bases and it adheres since their first version to the Uniform Requirements for Manuscripts Submitted to Biomedical Journals (ICMJE) and to the recommendations established by the World Association of Medical Journal Editors (WAME) . The number of articles submitted for publication to the Revista has increased in the last decade, including manuscripts coming from other countries and these are published in English when the authors do not have Spanish as their original language. The rejection rate in 2006 raised to 35% and the time-lag for publication of accepted manuscripts did not differ importantly from other regional or international journals (Table 2). This 135th Anniversary pictures the Revista as a respected medical publication in Chile and in a relevant position among those biomedical journals whose main publication language is not English <![CDATA[<b>Estudio retrospectivo de la endocarditis infecciosa en diferentes grupos de riesgo</b>]]> https://scielo.conicyt.cl/scielo.php?script=sci_arttext&pid=S0034-98872007000100002&lng=es&nrm=iso&tlng=es Background: Due to the increasing number of intravenous drug users, subjects with immune deficiencies or with prosthetic valves, infective endocarditis (IE) continues to be prevalent and to have a high mortality. 
Aim: To review all cases of infective endocarditis diagnosed in an Internal Medicine Service. Material and methods: Retrospective review of medical records of all patients with infective endocarditis, hospitalized in an Internal Medicine ward, between 1989 and 2003. Dukes criteria were used to define definitive, possible and less probable cases of IE. Results: Eighty seven patients with definite IE were identified (66 males, age range 19-84 years), with a mean incidence of 5.3 per 1000 hospitalizations. IE in intravenous drugs users was usually caused by Staphylococcus aureus and presented high risk of embolism (RR: 3,21). Subjects aged over 70 years had a relative risk of mortality of 5.5. Hospital acquired IE was associated with advanced age and IV catheters appeared as the only predisposing factor. Patients with prosthetic valves were also older, their main complication was abscess formation and their mortality was higher. Conclusions: A closer approach to differential conditions of patients, according to age, intravenous drug use or the presence of prosthetic valves, is necessary <![CDATA[<b>Identificación de asociaciones clínico-patológicas e hipermetilación de genes supresores de tumores en cáncer gástrico difuso a través de análisis de <i>Hierarchical clustering</i></b>]]> https://scielo.conicyt.cl/scielo.php?script=sci_arttext&pid=S0034-98872007000100003&lng=es&nrm=iso&tlng=es Background:Methylation is an inactivation mechanism for tumor suppressor genes, that can have important clinical implications. Aim: To analyze the methylation status of 11 tumor suppressor genes in pathological samples of diffuse gastric cancer. Material and methods: Eighty three patients with diffuse gastric cancer with information about survival and infection with Epstein Barr virus, were studied. 
DNA was extracted from pathological slides and the methylation status of genes p14, p15, p16, APC, p73, FHIT, E-caderin, SEMA3B, BRCA-1, MINT-2 y MGMT, was studied using sodium bisulphite modification and polymerase chain reaction. Results were grouped according to the methylation index or Hierarchical clustering (TIGR MultiExperiment Viewer). Results: Three genes had a high frequency of methylation (FHIT, BRCA1, APC), four had an intermediate frequency (p15, MGMT, p14, MINT2) and four had a low frequency (p16, p73, E-cadherin, SEMA3B). The methylation index had no association with clinical or pathological features of tumors or patients survival. Hierarchical clustering generated two clusters. One grouped clinical and pathological features with FHIT, BRCA1, and APC and the other grouped the other eight genes and Epstein Barr virus infection. Two significant associations were found, between APC and survival and p16/p14 and Epstein Barr virus infection. Conclusions: Hierarchical clustering is a tool that identifies associations between clinical and pathological features of tumors and methylation of tumor suppressor genes <![CDATA[<b>Incidencia de hipocalcemia pos tiroidectomía total</b>]]> https://scielo.conicyt.cl/scielo.php?script=sci_arttext&pid=S0034-98872007000100004&lng=es&nrm=iso&tlng=es Background: Postoperative hypocalcemia is one of the most common complications of thyroid surgery. It is related to the type of disease (malignant or benign), the number of identified parathyroid glands during the surgical procedure, and the surgeon's experience. Total thyroidectomy is the procedure of choice in our hospital for benign and malignant thyroid disease, but it can increase the incidence of complications. Aim: To evaluate the incidence of postoperative hypocalcemia in patients subjected to a total thyroidectomy. Material and methods: Two studies were performed. 
A retrospective review of medical records of 448 patients subjected to total thyroidectomy, looking for serum calcium levels of less than 8 mg/dl and clinical signs of hypocalcemia. In a second study, 45 patients were followed with measurements of preoperative and postoperative serum calcium levels. Results: In the retrospective study, only 136 records had reliable information. Clinical signs of hypocalcemia were registered in 14% of patients and a low serum calcium level was detected in 50%. In the prospective study, 42% of patients had a postoperative low serum calcium level and seven patients (15%) had symptoms. Patients were handled with oral calcium and calcitriol in some cases. Ninety nine percent of patients had normal serum calcium levels two moths after surgery. Conclusions: In this series, the rate of postoperative hypocalcemia after total thyroidectomy is similar to internaitonal reports <![CDATA[<b>Fracturas vertebrales, osteoporosis y vitamina D en la posmenopausia</b>: <b>Estudio en 555 mujeres en Chile</b>]]> https://scielo.conicyt.cl/scielo.php?script=sci_arttext&pid=S0034-98872007000100005&lng=es&nrm=iso&tlng=es Background: Approximately one-third of vertebral fractures can be clinically diagnosed. Aim: To study the frequency of vertebral fractures in postmenopausal women. Patients and methods: We recruited 555 postmenopausal women from Santiago, Chile, aged 55-84 years, who manifested interest in their bone health. All were healthy by self-declaration and by general clinical and laboratory tests and had not taken any bone-active therapy. They all underwent a spine and femoral neck (FN) densitometry and a digital lateral spine X-ray from T4 to L4 was obtained. PTH, calcidiol, and other parameters of calcium metabolism were also measured. Results: Overall, 142 of 478 patients with a complete study (29.7%) had at least one vertebral fracture. The proportion of women with fractures increased with age. 
A T score below -2.5 in the spine and hip was found in 32% and 14% of women, respectively. The proportion of women with spinal opeoporosis doubled between ages 55-70 and remained constant afterwards. In contrast, at the femoral neck, this proportion increased progressively reaching 53.3% at age 80-85. However, 56% of patients with vertebral fractures did not have densitometric osteoporosis in any location. Calcidiol levels were 16.8±6.8 ng/mL. With a cutoff point of 17 ng/mL, 47.5% of the patients had hypovitaminosis D. There was no association between calcidiol levels and vertebral fractures or bone density at the spine or femoral neck. Patients with fractures differed from those without fractures in that they had significantly lower bone density at the spine and hip and were older (p <0.001). However they did not differ in weight, body mass index, or calcidiol levels. Conclusions: Thirty percent of postmenopausal women in this series had a vertebral fractures. Osteoporosis and vitamin D deficiency were also common. Most vertebral fractures were observed in women without osteoporosis by densitometric criteria <![CDATA[<b>Inestabilidad microsatelital en lesiones preneoplásicas y neoplásicas del cuello uterino</b>: <b>Correlación con el genotipo del virus papiloma humano</b>]]> https://scielo.conicyt.cl/scielo.php?script=sci_arttext&pid=S0034-98872007000100006&lng=es&nrm=iso&tlng=es Background: The association between some specific human papilloma virus (HPV) types and cervix cancer is well known. However, the genetic conditions that favor the development of cervical cancer are less well known. Aim: To determine the presence of satellite instability (MSI) in preneoplastic and neoplastic lesions of the cervix and correlate these findings with HPV genotypes. Material and methods: Biopsy samples of cervical lesions were studied. Sixteen had low grade lesions, 22 had high grade lesions and 28 had an epidermoid cancer. 
Viral types were identified with polymerase chain reaction, dot-blot hybridization and restriction fragment length polymorphism. MSI was determined using a panel of eight highly informative microsatellites. Results: Microsatellite instability in at least one locus was observed in 91, 56 and 69% of low grade lesions, high grade lesions and epidermoid carcinomas, respectively. MSI-High grade, MSI-Low grade instability and microsatellite stability were observed in 5, 60 and 46% of samples, respectively. Two of three samples with high grade instability had HPV 52 genotype. Other viral subtypes had frequencies that ranged from 78% to 100%, with the exception of HPV16 that was present in only 53% of samples with low grade instability. Conclusions: Two thirds of biopsy samples from cervical lesions had MSI, mechanism that can be involved in the first stages of cervical carcinogenesis. The low frequency of high grade instability, its association with HPV52 and the low frequency of HPV16 in samples with low grade instability, suggest different coadjutant mechanisms in cervical carcinogenesis <![CDATA[<b>Consumo de sustancias y conductas de riesgo en consumidores de pasta base de cacaína no consultantes a servicios de rehabilitación</b>]]> https://scielo.conicyt.cl/scielo.php?script=sci_arttext&pid=S0034-98872007000100007&lng=es&nrm=iso&tlng=es Background: In Chile, cocaine base paste (CBP) is the illegal substance that produces the highest rate of addiction. Nonetheless, a marginal number of users receive treatment each year. Aim: To compare the consumption patterns and risk behavior of CBP and cocaine hydrochloride (CH) users who do not attend rehabilitation services. Material and Methods: In a prospective research design, through a study methodology called Privileged Access Interview of hidden populations, 28 surveyors recruited 231 CBP users (group 1) and 236 CH users (group 2). 
The Risk Behavior Questionnaire was applied in four communities of Metropolitan Santiago, that have the highest prevalence of PBC and CH use. Results: CBP users showed higher schools drop-out and unemployment rates. Subjects of both groups were predominantly polysubstance and polyaddicted users. The severity of addiction to CBP of group 1 was significantly higher than the severity of addiction to CH of group 2 (5.5 versus 5.1: p<0.001). CBP users showed significantly higher rates of sexual risk behaviors, antisocial behavior, self infliction of injuries, suicide attempt and child neglect. Conclusions: A higher vulnerability was shown for users of CBP than those of CH. Attention is drawn to the need for developing community interventions in order to alter substance abuse and the risk behavior of these vulnerable groups <![CDATA[<b>Modelo de asignación de recursos en atención primaria</b>]]> https://scielo.conicyt.cl/scielo.php?script=sci_arttext&pid=S0034-98872007000100008&lng=es&nrm=iso&tlng=es Resource allocation in primary health care is a worldwide issue. In Chile, the state allocates resources to city halls using a mechanism called "per capita". However, each city hall distributes these resources according to the historical expenses of each health center. None of these methods considers the epidemiological and demographic differences in demand. This article proposes a model that allocates resources to health centers in an equitable, efficient and transparent fashion. The model incorporates two types of activities; those that are programmable, whose demand is generated by medical teams and those associated to morbidity, generated by patients. In the first case the health promotion, prevention and control activities are programmed according to the goals proposed by health authorities. In the second case, the utilization rates are calculated for different sociodemographic groups. 
This model was applied in one of the most populated communities of Metropolitan Santiago and proved to increase efficiency and transparency in resource allocation <![CDATA[<b>Obesidad en preescolares de la Región Metropolitana de Chile</b>]]> https://scielo.conicyt.cl/scielo.php?script=sci_arttext&pid=S0034-98872007000100009&lng=es&nrm=iso&tlng=es Background: In Chile, obesity is currently the main nutritional problem. Since prevention should start early in life, it is important to determine the prevalence of obesity during childhood according to age category. Aim: To determine and compare the evolution of the obesity prevalence and other anthropometric indicators in preschool children between 2002 and 2004. Material and methods: Twice a year, we analyzed the data of children aged 2 to 4 yrs, from day care centers belonging to the National Association of Day Care Centers located in Greater Santiago, from 2002 till 2004 (the number of children included on each point in time fluctuated between 3,500 and 10,000). Cross-sectional and longitudinal analyses were carried out to determine the evolution of obesity prevalence, weight for age (WA) and body mass index (BMI) Z scores (according to the Centers for Disease Control 2000 reference) on preschoolers who were 2 years old in March 2002 and that were followed 3 years, until November 2004. These parameters were compared by age and gender over time. Results: The prevalence of obesity varied between 11 and 13.6% in two-year old children and between 17% and 20% in three and four year olds. The cross-sectional analysis showed that WA and BMI Z scores were significantly lower at 2 years of age, while the longitudinal analysis clearly demonstrated that there was a sharp rise in obesity between 2 and 3 years of age. 
Conclusions: The prevalence of obesity is high in preschool children, especially among the 3 and 4 year-olds with a significant rise from 2 to 3 years of age <![CDATA[<b>Estado nutricional, consumo de alimentos y actividad física en escolares mujeres de diferente nivel socioeconómico de Santiago de Chile</b>]]> https://scielo.conicyt.cl/scielo.php?script=sci_arttext&pid=S0034-98872007000100010&lng=es&nrm=iso&tlng=es Background: A high prevalence of obesity is the main public health problem in Chilean school children. Aim: To compare the nutritional status, consumption of selected foods and extracurricular physical activity (PA) habits in school children of different socioeconomic levels as a baseline for developing effective educational interventions. Material and methods: Cross-sectional study that determined the body mass index, food consumption and physical activity with previously validated instruments in 202 and 358 girls from 3rd to 8th grade in schools of medium-high and low socioeconomic level (SEL) from Santiago, Chile, respectively. Results: Compared to their counterparts of low socioeconomic level (SEL), the prevalence of obesity was significantly lower in 8-9 year-old girls of medium high SEL (19% and 9%, respectively, p =0.012) and 12-13 year-old (12% and 2.5% respectively, p =0.008). Also median daily intake of dairy products was higher in girls of medium high SEL (250 and 470 ml/day, respectively). The intake of fruits and vegetables was similar (200 g/d); and the intake of bread was lower (230 and 70 g/day, respectively, p <0.01). Consumption of energy-dense foods was lower in 10-13 year-old girls of medium high SEL (80 and 50 g/day, respectively, p <0.01). 45% of 8-9 year-old girls and 35% of 12-13 year-old girls of both SEL engaged in PA four or more times per week (NS). Conclusions: Although the prevalence of obesity in girls of medium-high SEL was not as high as in those from low SEL, it is still high. 
There is a need for educational interventions to improve their food and PA habits and to promote an environment that enhances healthy behaviors <![CDATA[<b>Madres niñas-adolescentes de 14 años y menos</b>: <b>Un grave problema de salud pública no resuelto en Chile</b>]]> https://scielo.conicyt.cl/scielo.php?script=sci_arttext&pid=S0034-98872007000100011&lng=es&nrm=iso&tlng=es Background: Teenage fecundity rates are an indicator of epidemiological discrimination in developing countries. Aim: To study fertility rates of girls under 14 years of age in Chile from 1993 to 2003. Material and methods: Information of children born alive from mothers aged 10 to 15 years, was obtained from the Chilean National Institute of Statistics. Age segmented population data was obtained from the Ministry of Health. Trends were analyzed by regions and single ages. The rates in communities of the Metropolitan Region were compared. Results: Between 1993 and 2003, there was an increasing trend in fecundity rates, ratios and crude numbers. These rates duplicate from 14 to 15 years of age. In the Metropolitan Region, the fecundity ratios of communities with lower economical incomes is seven times greater than those with higher incomes. During 2003, the fecundity rates in Chile were 100 and 10 higher than those of Holland and Sweden in 1981. Conclusions: In developing countries with very low infant mortality rates such as Chile, the high fecundity rates of young girls is an indicator of a deficient human and social development. Sexual Education and Health Services for adolescents are essential to prevent this public health problem <![CDATA[<b>Cetoacidosisdiabética</b><b> reversible con metotrexato</b>: <b>Resistencia al tratamiento insulínico. 
Caso clínico</b>]]> https://scielo.conicyt.cl/scielo.php?script=sci_arttext&pid=S0034-98872007000100012&lng=es&nrm=iso&tlng=es We report a 42 year-old woman with a hypothyroidism and a mixed connective tissue disease treated with prednisone and methotrexate. The patient had normal blood glucose levels but when the methotrexate dose was tapered, she presented a diabetic ketoacidosis that required up to 520 units of insulin per day. Due to the intensification of the mixed connective tissue disease symptoms, the doses of methotrexate and prednisone were increased again with a simultaneous normalization of serum glucose levels and glucose tolerance. In the following six months, when the dose of methotrexate was tapered again, the hyperglycemia reappeared and was again controlled increasing the dose. Thirty months after the episode of keotacidosis, the patient was with a weekly dose of methotrexate, asymptomatic and with a normal glucose tolerance. Anti insulin antibodies were not detected and anti islet antibodies were indeterminate, due to interference with antinuclear antibodies. It is possible that the episode of ketoacidosis was unveiled by an autoimmune phenomenon <![CDATA[<b>Tratamiento quirúrgico de la isquemia mesentérica crónica</b>: <b>Caso clínico</b>]]> https://scielo.conicyt.cl/scielo.php?script=sci_arttext&pid=S0034-98872007000100013&lng=es&nrm=iso&tlng=es Although the classic therapy for chronic mesenteric ischemia is surgical revascularization, endovascular therapy is a new therapeutic option. We report a 55 year-old female, with a 2 years history of post prandial abdominal pain, diarrhoea, and weight loss, with occlusion of both mesenteric arteries and critical stenosis of the celiac artery. The initial treatment consisted in angioplasty and celiac artery stent placement in two occasions, with a brief symptomatic relief. 
Finally, a visceral artery bypass was performed, with good post operative outcome and complete symptomatic resolution at one year follow up. In our opinion endovascular therapy is a good therapeutic option for chronic mesenteric ischemia in high surgical risk patients, specially when dealing with stenotic injuries. It may also be a complement for patients who need to recover their nutritional status prior to revascularization surgery. On the other hand, due to the long term patency and symptomatic relief, surgical treatment is a good option in low risk patients <![CDATA[<b>Trombosis de la arteria renal después de suspender la terapia anticoagulante en una trasplantada renal con trombofilia</b>: <b>Caso Clínico</b>]]> https://scielo.conicyt.cl/scielo.php?script=sci_arttext&pid=S0034-98872007000100014&lng=es&nrm=iso&tlng=es Kidney graft loss because arterial thrombosis is not common and is related to risk factors such as recurrent vascular hemodialysis access thrombosis, collagen-vascular disease, repeated miscarriage, diabetes mellitus and thrombophilia. Patients having this last disorder have an increased risk of repeated thrombosis in successive transplants unless they receive anticoagulation therapy. We report a 51 year-old diabetic woman who had a history of recurrent vascular hemodialysis access thrombosis (both native and prosthetic) while on dialysis and received a cadaveric donor kidney. One month after transplantation she had axillary vein thrombosis complicated with pulmonary embolism and received anticoagulants for six months. Just days after stopping the anticoagulation, she became suddenly anuric due to renal artery thrombosis and complete graft infarction. The coagulation study showed moderate hyperhomocysteinemia and a significant protein C deficiency (39%). 
Days after nephrectomy she suffered a femoral vein thrombosis and anticoagulation was prescribed for life <![CDATA[<b>Cien años de la enfermedad de Alzheimer</b>: <b>La inmunoterapia ¿una esperanza?</b>]]> https://scielo.conicyt.cl/scielo.php?script=sci_arttext&pid=S0034-98872007000100015&lng=es&nrm=iso&tlng=es In 1906 Alois Alzheimer, described the cerebral lesions characteristic of the disorder that received his name: senile plaques and neurofibrillary tangles. Alzheimer's disease (AD) is now, 100 years after, the most prevalent form of dementia in the world. The longer life expectancy and aging of the population renders it as a serious public health problem of the future. Urgent methods of diagnosis and treatment are required, since the definitive diagnosis of AD continues to be neuropathologic. In the last 30 years several drugs have been approved to retard the progression of the disease; however, there are still no curative or preventive treatments. Although still in experimentation, the visualization of amyloid deposition by positron emission tomography or magnetic resonance imaging will allow in vivo diagnosis of AD. In addition, experiments with the amyloid vaccine are still ongoing, and very recent data suggest that intravenous gammaglobulins may be beneficial and safe for the treatment of AD <![CDATA[<b>Quistes renales, manifestación de diversas patologías</b>]]> https://scielo.conicyt.cl/scielo.php?script=sci_arttext&pid=S0034-98872007000100016&lng=es&nrm=iso&tlng=es Many diseases can be associated with kidney cysts and they may be classified as hereditary and non-hereditary renal cystic disease. The first group can be sub-classified as autosomal recessive cystic disease, such as autosomal recessive polycystic kidney disease and nephronophthisis, as autosomal dominant kidney disease such as autosomal dominant polycystic kidney disease, glomerulocystic disease and tuberous sclerosis, and as cysts associated with syndromes. 
Cystic dysplasia, multicystic dysplastic kidney, simple cyst, multilocular cysts, Wilm's tumor and acquired cystic kidney disease are classified in the second group. The genetic study of renal cysts is becoming increasingly important, due to the possible therapeutic interventions that could be devised in the future. The aim of this review is to provide a fast and easy clinical approach to renal cystc <![CDATA[<b>Sobre el origen ontogénico del ser humano</b>: <b>La solución científica</b>]]> https://scielo.conicyt.cl/scielo.php?script=sci_arttext&pid=S0034-98872007000100017&lng=es&nrm=iso&tlng=es Every living being is the result of a genome-environment interaction. Neither human oocytes nor spermatozoids have human functional genomes, but the zygote that they constitute may have a human functional genome and other functional genomes such as those of the hydatidiform mole, polyploids, and non-human living beings. When the zygotic human functional genome is integrated and activated, the biotic humanity is acquired. This may occur when the paternal chromatin decondenses; the nuclear environment and envelope of both nuclei are changed to constitute pronuclei; the replacement of sperm protamines by histones; genome imprinting modifications; centriole duplication; and more importantly, the fourfold genome replication. Other propositions on the origin of humans are: embryo implantation [6-7 days post fertilization, (dpf)]; the appearance of the antero-posterior axis; the limit for monozygote twining (13dpf) and the appearance of the neural tissue (16dpf). They are refuted because some mammals do not implant; embryo axes are present in the zygote; some animals regenerate complete individuals from each part in which they are divided; plants do not have neural system; a human whose brain was destroyed by cancer continues to be a human. 
Alternative propositions coming from philosophies, theologies, perceptive knowledge, beliefs and intuitions and based on conceptualizations like person, anima, soul, organization, socio-cultural relations are ideologically or religiously biased and based on irreducible beliefs such as faith. They lead to disagreement rather than to agreement <![CDATA[<b>Necesidad de prudencia frente a las promesas de la terapia celular </b>]]> https://scielo.conicyt.cl/scielo.php?script=sci_arttext&pid=S0034-98872007000100018&lng=es&nrm=iso&tlng=es <![CDATA[<b>CRÓNICA</b>]]> https://scielo.conicyt.cl/scielo.php?script=sci_arttext&pid=S0034-98872007000100019&lng=es&nrm=iso&tlng=es
1 February 2011

High-resolution resonant and nonresonant fiber-scanning confocal microscope

J. of Biomedical Optics, 16(2), 026007 (2011). doi:10.1117/1.3534781

Abstract

We present a novel, hand-held microscope probe for acquiring confocal images of biological tissue. This probe generates images by scanning a fiber-lens combination with a miniature electromagnetic actuator, which allows it to be operated in resonant and nonresonant scanning modes. In the resonant scanning mode, a circular field of view with a diameter of 190 μm and an angular frequency of 127 Hz can be achieved. In the nonresonant scanning mode, a maximum field of view with a width of 69 μm can be achieved. The measured transverse and axial resolutions are 0.60 and 7.4 μm, respectively. Images of biological tissue acquired in the resonant mode are presented, which demonstrate its potential for real-time tissue differentiation. With an outer diameter of 3 mm, the microscope probe could be utilized to visualize cellular microstructures in vivo across a broad range of minimally-invasive procedures.

Hendriks, Bierhoff, Horikx, Desjardins, Hezemans, ‘t Hooft, Lucassen, and Mihajlovic: High-resolution resonant and nonresonant fiber-scanning confocal microscope

1. Introduction

Microscopic imaging of living tissue in vivo could allow for real-time disease diagnosis.1 Various approaches have been investigated to develop miniature microscopes that are compatible with minimally invasive procedures. Most of these approaches involve either single fibers or fiber bundles to transmit and receive light (as reviewed in Refs. 2 and 3). Important design criteria are the spatial resolution, the field of view (FOV), and the contrast that can be achieved. Designs based on coherent fiber bundles have many advantages, including the potential for high levels of miniaturization and mechanical flexibility, 4, 5, 6 but are limited in resolution by the diameter and spacing of the fibers.
With some designs, Fresnel reflection of illumination light from the distal ends of the fibers can impose limitations on the dynamic range; fiber autofluorescence can also have confounding effects. Single-fiber solutions require an actuation method to move the fiber tip at the distal end of the probe. Methods employing piezomotors, 7, 8 microelectromechanical systems (MEMS), 9, 10 and tuning forks11 have been investigated. In most of these approaches, the objective lens system in front of the fiber distal end is not actuated, which results in constraints on the achievable numerical aperture (NA) and the FOV of the scanner. 12, 13 Different optical imaging modalities can be employed in a single-fiber optic scanner, such as confocal reflectance,13 confocal fluorescence,14 two-photon fluorescence, 15, 16 and optical coherence tomography. 17, 18 Most single-fiber scanning microscopes that have been previously demonstrated operate in resonant modes; as such, they may not be ideal for applications requiring longer acquisition times, such as two-photon microscopy and Raman spectroscopy. In this paper, we present a single-lens, high-resolution, electromagnetically controlled confocal fiber-scanning microscope with an outer diameter of 3.0 mm that can be operated in resonant and nonresonant modes. As a preliminary indication of the capability of this microscope to perform tissue differentiation, we present images obtained from tissues with confocal reflectance with and without polarization gating.

2. Materials and Methods

2.1. Optical Design

The scanner design involves a single optical fiber with a flexible distal end. The position of the distal end of the fiber is controlled by two sets of electromagnetic coils that allow for deflections in orthogonal directions transverse to the probe axis. To focus the light beam delivered from the fiber, an objective lens system is required.
This objective lens system in front of the distal end of the fiber could either be attached to the fiber or mounted on the housing of the scanner. When the lens system is mounted to the housing, the FOV that can be achieved with the given stroke of the fiber tip P_fiber scales with the ratio of the NA of the objective lens system, NA_obj, and the NA of the exit beam, NA_fiber. With NA_obj = 0.65 and NA_fiber ∼ 0.1, a stroke of P_fiber = 1.3 mm is required in order to achieve a FOV of 200 μm. The large ratio between the stroke distance and the FOV imposes severe constraints on the FOV for small-diameter fiber scanners. Another drawback of this approach is that complicated objective lens systems are required to allow for illumination of the lens to be performed at different angles while maintaining constant resolution. 12, 13, 19 In the approach taken in this study, the objective lens system is mounted to the movable part of the fiber at a fixed distance from the fiber tip. In this way, the FOV is equal to the lateral stroke of the fiber. Furthermore, the beam enters the lens on-axis, allowing for the use of a single lens with a high NA.20 The objective lens system in our scanner is plano-aspherical, constructed from poly(methylmethacrylate) (PMMA) (refractive index n = 1.492 and Abbe number V = 57.4), and mounted on a 0.2 mm thick AF45 Schott glass plate with a design wavelength of 780 nm (Fig. 1). For fast prototyping, the aspheric lens was directly diamond turned on a high-precision lathe. For this lens, NA_obj = 0.68; the entrance pupil was 0.82 mm and the focal length was 0.68 mm. The total on-axis thickness of the lens was 0.65 mm. The objective lens was positioned at a distance of 10 mm from the distal end of the fiber. The air gap between the objective and the proximal end of the exit glass window was 0.1 mm. The exit window had a thickness of 0.2 mm and was constructed of AF45 Schott glass.
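As a sanity check on the numbers above, the stroke-to-FOV scaling for a housing-mounted objective can be reproduced in a few lines (a sketch of ours; the function and variable names are not from the paper):

```python
# Quick check of the stated scaling for a housing-mounted objective:
# the achievable FOV is the fiber-tip stroke scaled by NA_fiber / NA_obj.
def fov_housing_mounted(stroke_m, na_fiber, na_obj):
    """FOV for a lens fixed to the housing, given the fiber-tip stroke."""
    return stroke_m * na_fiber / na_obj

# Paper values: NA_obj = 0.65, NA_fiber ~ 0.1, stroke of 1.3 mm.
fov = fov_housing_mounted(stroke_m=1.3e-3, na_fiber=0.1, na_obj=0.65)
print(f"FOV = {fov * 1e6:.0f} um")  # -> FOV = 200 um
```

The same expression shows why a small-diameter scanner with a housing-mounted lens struggles: a 6.5× ratio of NA_obj to NA_fiber demands a stroke 6.5× larger than the desired FOV.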
The objective lens focused the exit beam of the fiber at a distance of 0.1 mm beyond the distal end of the exit window that was optimized for immersion in a water-like environment (n = 1.33).

Fig. 1 The single plano-aspherical objective lens: (a) optical design layout and (b) a photograph of a manufactured objective lens (the line separation of the ruler is 0.5 mm).

In order to be flexible for different imaging modalities, especially for modalities involving short pulses such as two-photon imaging, chromatic aberration introduced by the objective lens must be low. 21, 22 With our objective lens system, chromatic aberration results in a time shift ΔT between the marginal ray and the principal ray. In order to maintain a short pulse width, the time shift must be smaller than the pulse width Δτ. According to Bor,22 the time shift is given by

|ΔT| = |(NA_obj² λ f) / (2c(n − 1)) · (dn/dλ)| = |(NA_obj² f λ) / (2c(λ_F − λ_C) V)|,   (1)

where λ is the wavelength, NA_obj is the numerical aperture of the objective, c is the speed of light, n is the refractive index, f the focal length of the lens, and V is the Abbe number of the lens material. The two wavelengths λ_F and λ_C are the Fraunhofer F- and C-spectral lines, given by λ_F = 486.13 nm and λ_C = 656.27 nm. For the objective lens system in our microscope, we find that ΔT = 42 fs, which is smaller than pulse widths of approximately 100 fs that are typical for two-photon microscopy applications.

2.2. Mechanical Design

A drawing of the mechanical construction as well as a photograph of the fiber scanning microscope is shown in Fig. 2.
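The quoted ΔT = 42 fs follows directly from the second form of Eq. (1); a short numerical check (our own, using the stated lens parameters):

```python
# Numerical check of Eq. (1), second form, with the stated lens parameters.
NA = 0.68                # objective numerical aperture
f = 0.68e-3              # focal length, m
lam = 780e-9             # design wavelength, m
V = 57.4                 # Abbe number of PMMA
c = 299_792_458.0        # speed of light, m/s
lam_F, lam_C = 486.13e-9, 656.27e-9  # Fraunhofer F and C lines, m

dT = NA**2 * f * lam / (2 * c * abs(lam_F - lam_C) * V)
print(f"Delta T = {dT * 1e15:.0f} fs")  # -> Delta T = 42 fs
```

This confirms the pulse-broadening margin: 42 fs is well below the ~100 fs pulse widths typical for two-photon imaging.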
It consists of the cylindrical fiber housing, which is constructed from a stainless steel cylindrical tube with an inner diameter of 0.25 mm and outer diameter of 0.5 mm. The proximal end of this tube is rigidly connected to the microscope housing; the distal part of this tube can move freely. The length of the fiber housing is 40 mm. We note that the mechanical properties of the scanner are determined by the fiber housing and not by the fiber itself. This property has the advantage that approximately the same resonant frequency can be maintained with different optical fibers. At the distal part of the fiber housing, an objective lens is mounted.

Fig. 2 (a) Schematic drawing (not to scale) and (b) photograph of the scanning fiber microscope.

Two independently driven driving coil pairs are fixed to the microscope housing, as shown in Fig. 2. In order to minimize the outer diameter of the microscope housing, the central axes of the coils are chosen to be perpendicular to the flux of the magnet. When a current is applied to one of the coil pairs, the magnet experiences Lorentz forces in a direction that depends on the sign of the current. By application of appropriate currents in the coil pairs, the fiber tip and lens can be arbitrarily positioned in the x- and y-lateral directions within the scanning area. A sensing coil is attached to the lens mount to measure the position of the fiber tip and lens with respect to the microscope housing. While the electromagnetic coupling between the magnet and driving coils delivers the drive forces, the electromagnetic coupling between the sensing coil and driving coils delivers the position information. The position measurement method is described in detail in Sec. 2.3. From a mechanical standpoint, the scanning part of the system can be modeled as a hollow tube with a distributed mass m (lens, lens mount, magnet, and sensing coil) at the distal end of the tube. This total distributed mass is m = 55 mg.
The hollow tube is fixed at the proximal end, while the distal end can move freely. Taking the shapes and materials into account, the stiffness of our system is calculated to be k = 37.5 N/m. The resonance frequency f_res is given by

f_res = (1/(2π)) √(k/m).   (4)

Equation 4 predicts that for our microscope, f_res is equal to 131 Hz. The microscopic image is formed by making a spiral movement with the distal end of the fiber. The field of view FOV_res of the scanner in the resonance mode can be estimated as follows. The electromechanical coupling constant k_em can be computed from the design of the driving coils and the location of the magnet and is determined to be k_em = 3.7 mN/A. The maximum allowed current through the driving coils, such that the temperature in the coils remains below 70 °C, is measured to be 0.42 A. To include a safety margin, we limited the maximum current to I_max = 0.35 A in our system. Finally, the quality factor Q (i.e., the amount of underdamping of the resonator) is designed to be larger than 50. From these parameters, the FOV is defined by

FOV_res = k_em I_max / (π b f_res),   (5)

with the damping coefficient b given by

b = √(km) / Q,   (6)

where k is the stiffness, m the distributed mass, and Q is the quality factor, yielding FOV_res = 3.46 mm. For a typical required FOV of 0.2 mm, the system has sufficient tolerance to cope, for instance, with manufacturing errors. This large FOV at resonant scanning shows that with nonresonant scanning, a significant FOV can still be achieved.
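The design values quoted above can be reproduced from Eqs. (4), (5), and (6) (a check script of ours; the variable names are not from the paper):

```python
import math

# Reproducing the mechanical design numbers with the stated parameters.
k = 37.5        # stiffness, N/m
m = 55e-6       # distributed end mass, kg
Q = 50          # designed quality factor
k_em = 3.7e-3   # electromechanical coupling constant, N/A
I_max = 0.35    # maximum drive current, A

f_res = math.sqrt(k / m) / (2 * math.pi)        # Eq. (4)
b = math.sqrt(k * m) / Q                        # Eq. (6), damping coefficient
fov_res = k_em * I_max / (math.pi * b * f_res)  # Eq. (5)
fov_static = 2 * k_em * I_max / k               # low-frequency (static) range

print(f"f_res = {f_res:.0f} Hz")                       # -> f_res = 131 Hz
print(f"FOV_res = {fov_res * 1e3:.2f} mm")             # resonant FOV headroom
print(f"static range = {fov_static * 1e6:.0f} um")     # -> static range = 69 um
```

To rounding, this reproduces f_res ≈ 131 Hz and FOV_res ≈ 3.5 mm, together with the static deflection range of about 69 μm that sets the nonresonant scan width.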
For example, if the scanning is performed at a very low frequency, the field of view FOV_non is then given by

FOV_non = 2 k_em I_max / k.   (7)

For our given parameters, Eq. 7 yields a prediction of FOV_non = 69 μm.

2.3. Electronic Design

In order to estimate the position of the fiber tip in real time during scanning, we let the currents through the x- and y-coils, I_x and I_y, consist of driving currents and sensing currents: I_x = I_dx + I_sx and I_y = I_dy + I_sy (see Fig. 3). The sensing currents are high-frequency sinusoidal signals with zero DC components. Their frequencies are much higher than the angular scanning frequency, so that they have a negligible effect on the lens displacement. Furthermore, the frequencies of the I_sx and I_sy signals differ from each other in order to independently measure positions of the lens in the x- and y-directions. The sensing currents I_sx and I_sy induce currents I_ix and I_iy in the sensing coil, respectively [see Fig. 2a]. The amplitude of the induced current I_ix (I_iy) represents the difference between the currents induced by the x- (y-) coil pairs. When the sensing coil is at the center of the x- (y-) coil pairs, the induced current I_ix (I_iy) is zero. Furthermore, the amplitudes of these induced currents are linearly dependent on the displacements of the measurement coil for the relevant displacements encountered in the scanner system.

Fig. 3 Schematic diagram of the control electronics. The position information of the objective lens coming from the sensing coil is compared with the required set-points. Depending on the deviation from the required set-points, the control unit adjusts the position of the lens.
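This two-tone position read-out can be illustrated with a toy model of synchronous detection (entirely our own sketch, not the authors' electronics; the sample rate, tone frequencies, and amplitudes are assumed):

```python
import numpy as np

# Toy model: two sensing tones at distinct frequencies are injected, and the
# sensing-coil signal is demodulated against each reference independently.
fs = 100_000                    # sample rate, Hz (assumed)
n = 10_000                      # 0.1 s worth of samples
t = np.arange(n) / fs
f_sx, f_sy = 5_000.0, 7_000.0   # sensing-tone frequencies (assumed)
ax, ay = 0.3, -0.6              # amplitudes proportional to the x/y offsets

ref_x = np.sin(2 * np.pi * f_sx * t)
ref_y = np.sin(2 * np.pi * f_sy * t)
coil = ax * ref_x + ay * ref_y  # idealized induced sensing-coil signal

# Synchronous detection: multiply by each reference and average (a stand-in
# for low-pass filtering). Cross terms and double-frequency terms average to
# ~0 over whole periods, leaving half the in-phase amplitude.
ax_est = 2 * np.mean(coil * ref_x)
ay_est = 2 * np.mean(coil * ref_y)
print(f"x ~ {ax_est:.3f}, y ~ {ay_est:.3f}")  # -> x ~ 0.300, y ~ -0.600
```

Because the two tones are orthogonal over the averaging window, the x- and y-displacement signals are recovered simultaneously and independently, which is the property the scanner's demodulation scheme relies on.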
In order to obtain the amplitudes of I_ix and I_iy, the signal from the sensing coil is independently multiplied by I_sx and I_sy, respectively, and then filtered with a low-pass filter. This demodulation method (i.e., the method of synchronous detection) allows for the simultaneous detection of the x- and y-positions of the sensing coil. Finally, a proportional-integral-derivative (PID) controller has been implemented in the control unit (see Fig. 3) to control the position of the lens.

2.4. Console Optics Design

The light source is a superluminescent light-emitting diode (EXALOS, EXS8010-2411), coupled into single-mode fiber (SM800-5.6-125). It has a spectrum of ∼25 nm width [full width at half maximum (FWHM)], centered at 785 nm, and a maximum output power of 5 mW. A light source with short coherence length was chosen to prevent interference fringes that result from interference between the light reflected by the sample and the (unwanted) light reflected by the polished face of the fiber inside the scanner. To reduce its reflectivity, the fiber face is polished at an angle. This introduces a linear phase variation over the mode reflected at the facet, greatly reducing the efficiency with which it couples back into the fiber. Also, due to refraction, the far-field pattern of the fiber mode shifts over the entrance pupil of the objective lens and the amount of light captured by the lens and focused onto the sample is diminished. The results of model calculations for a representative step-index fiber are displayed in Fig. 4. A polishing angle of 5 deg has been selected, giving a reduction of the facet reflectivity of about three orders of magnitude, while decreasing the amount of light captured by the lens by only about one-third.

Fig. 4 (a) Computed facet reflectivity and (b) fraction of power captured by the entrance pupil of the objective as a function of polishing angle of the fiber facet in the scanner.
In the computations, a step-index quartz fiber with a core radius of 2.32 μm and an NA of 0.13 has been assumed, with λ = 785 nm. The objective lens parameters are described in Sec. 2.1.

Figure 5a shows the optical system utilized for polarization-insensitive detection. A polarization-independent fiber-optical circulator (OFR, OC-3-780-FC/APC3) was used to direct reflected light to a detector. For samples with relatively high reflectivity, a large-area photoreceiver (New Focus, Model 2031) is used, while for low-reflectivity samples, a more sensitive avalanche photodiode module is used (Hamamatsu, C5460-01).

Fig. 5 Console optics: (a) single-mode (SM) fiber setup for the polarization-insensitive backscattered-light imaging mode; (b) polarization-sensitive setup with a polarization-maintaining fiber in the scanner.

Figure 5b shows the optical system utilized for polarization-sensitive detection. The incident light, collimated by lens L1 (part of a PAF-X-5-B FiberPort collimator module from ThorLabs), passes through a polarizing beam splitter cube (PBS, PSCLB-VR-780 from ThorLabs; extinction ratio >1000:1 at the design wavelength of 780 nm) and is focused into the polarization-maintaining scanner fiber (Nufern, PM-780-HP). A λ/2 waveplate provides some control over the polarization of the light incident on the PBS; it can be rotated to maximize the amount of light incident on the sample. The λ/4 waveplate between the PBS and the PM scanner fiber determines which polarization component of the reflected light is directed by the PBS toward the detector. When the axes of the λ/4 waveplate are parallel to those of the PBS, only cross-polarized light is detected; light of parallel polarization is included when the waveplate is rotated (however, the light reflected by the fiber facet will then also be detected to a larger degree). To convert the intensities measured during the spiral scanning into an image, we proceeded as follows.
The image on the screen was divided into equal rectangular bins onto which the segments of the spirals were mapped. Since the angular velocity and the pitch of the spiral movement were kept constant during scanning, the measurement time for each bin was longer for the central pixels than for the outer pixels. Aside from the polar-to-Cartesian map, the only image processing employed was the application of an offset and scaling to intensities.

3. Results and Discussion

3.1. Mechanical Properties

The measured frequency response functions of the scanning system in the x- and y-directions are shown in Fig. 6, revealing a resonance frequency of 127 Hz in the x-direction and 115 Hz in the y-direction. These values are in close agreement with the predicted value of 131 Hz. The electronic control system can correct for these differences in resonant frequencies, and we selected 127 Hz as the scanning frequency. The Q-factor is 62 in the x-direction and 47 in the y-direction, which also agrees well with the designed value of 50.

Fig. 6 Measured frequency response functions (position divided by the applied current) of the electromagnetic actuator in the (a) x- and (b) y-scanning directions.

With the actuator, a circular FOV with a maximum diameter of 190 μm can be imaged (see Fig. 7a). Note that this FOV is determined by the space available in the housing to make the stroke and not by the actuator. At a very low scanning speed (i.e., nonresonant scanning), it is possible to scan an area with a diameter of 69 μm, as shown in Fig. 7b. The nonresonant scan width agrees very well with our predicted value.

Fig. 7 Images of a Richardson microscope test slide (Ref. 25) made with the scanning fiber microscope at various scanning speeds: FOV of (a) 190 μm obtained in 1.2 s and (b) 69 μm obtained in 15 s.

3.2. Optical Resolution

The resolution that can be obtained by the fiber scanner system depends on the objective lens pupil filling.
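Returning briefly to the image formation described in Sec. 2.4, the spiral-to-grid binning can be sketched as follows (our own minimal version; the grid size and spiral sampling are assumed, not taken from the paper):

```python
import numpy as np

# Map samples from a constant-angular-velocity, constant-pitch spiral onto
# equal rectangular bins, normalizing by the per-bin sample count.
n_pix, fov = 64, 190e-6            # output grid and field of view (m)
turns, n_samp = 150, 75_000        # 150 spirals, samples along the trajectory

theta = np.linspace(0.0, 2 * np.pi * turns, n_samp)
r = (fov / 2) * theta / theta[-1]  # constant pitch: radius grows linearly
x, y = r * np.cos(theta), r * np.sin(theta)
intensity = np.ones(n_samp)        # stand-in for the detector samples

ix = np.clip(((x / fov + 0.5) * (n_pix - 1)).round().astype(int), 0, n_pix - 1)
iy = np.clip(((y / fov + 0.5) * (n_pix - 1)).round().astype(int), 0, n_pix - 1)

acc = np.zeros((n_pix, n_pix))
cnt = np.zeros((n_pix, n_pix))
np.add.at(acc, (iy, ix), intensity)  # sum of samples per bin
np.add.at(cnt, (iy, ix), 1)          # samples per bin (larger near center)
image = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
```

Dividing the accumulated intensity by the per-bin sample count averages out the longer dwell time near the center of the spiral, which is the effect the text notes for the central pixels.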
When we overfill the pupil of the objective lens, the spot width is given by the Airy distribution, having a FWHM of the intensity in the lateral direction of Δx = 0.51λ/NA, under the assumption that aberrations are negligible. At a center wavelength of 780 nm and an NA of 0.68, this formula predicts a transverse resolution Δx of 0.59 μm. In practice, however, the pupil filling will exhibit a distribution that is approximately Gaussian, with a rim intensity that depends on the choice of the fiber used in the scanner and the distance of the fiber end to the objective lens. The lateral resolution was measured with a reflective edge in the focal plane of the objective. Under the assumption that the spot has a Gaussian spatial distribution, the FWHM of the lateral distribution can be approximated by 0.92 times the 10–90% edge width. 23, 24 Figure 8a shows that this FWHM is 0.60 μm at a wavelength of 780 nm. To experimentally determine the axial resolution, the detector intensity was measured when moving a plane mirror surface through the object focal plane. As shown in Fig. 8b, the measured FWHM was 7.4 μm.

Fig. 8 The measured intensity on the detector when (a) moving an edge of a mirror in the lateral direction along the focal plane (i.e., edge response) and (b) moving a plane mirror surface through the object focal plane of the fiber scanner.

3.3. Imaging

The image contrast that can be achieved depends on the optical throughput of the fiber-objective system, as well as on the optical modality employed to generate optical contrast in the tissue. As a fraction of the light intensity that was coupled into the fiber from the source, the light intensity that reached the focal point was measured to be 0.34 for polarization-insensitive detection. This throughput is consistent with the calculated value shown in Fig. 4b.
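The resolution figures of Sec. 3.2 above can be checked with a few lines of arithmetic (ours; the 10–90% edge width below is a hypothetical value, chosen only to illustrate the conversion used for the measurement):

```python
# Diffraction-limited lateral FWHM for an overfilled pupil: dx = 0.51 * lam / NA.
lam = 780e-9   # center wavelength, m
NA = 0.68      # objective numerical aperture

dx_airy = 0.51 * lam / NA
print(f"lateral FWHM = {dx_airy * 1e9:.0f} nm")  # -> 585 nm, i.e. ~0.59 um

# Gaussian-spot relation used for the edge measurement:
# FWHM ~ 0.92 x the 10-90% edge width. Edge width here is hypothetical.
edge_10_90 = 0.65e-6
print(f"FWHM from edge = {0.92 * edge_10_90 * 1e6:.2f} um")  # -> 0.60 um
```

The first number reproduces the stated 0.59 μm diffraction limit; the second shows how a measured 10–90% edge width converts to the 0.60 μm FWHM reported from Fig. 8a.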
This throughput is determined by the fiber numerical aperture, the fiber-objective lens distance, the entrance pupil of the objective, and the losses in the objective lens system. In order to allow for the possibility of integrating different types of fibers with smaller numerical apertures than that of the polarization-insensitive fiber used in this study, there was substantial overfilling of the objective. To demonstrate the potential of the scanner, we present several images of tissue samples for polarization-insensitive and polarization-sensitive detection.

In Fig. 9, images taken of ex vivo tissues with the scanning fiber microscope with polarization-insensitive detection are shown. Figure 9a shows a zoomed-in image of the Richardson microscope test slide25 with a 50 μm FOV. Figure 9b shows an image of an ex vivo pig bronchus wall with a 120 μm FOV, in which contrast likely derives primarily from collagen fibers.

Fig. 9 Images acquired with the scanning fiber microscope using the backscattered-light imaging modality [see Fig. 5a]: (a) an image of the Richardson microscope test slide with 50 μm FOV, (b) pig bronchus wall with 120 μm FOV, where collagen present in the inner bronchus wall can be seen.

In Fig. 10, images taken of ex vivo tissue with the scanning fiber microscope with polarization-sensitive detection are shown. Each image had a FOV of 190 μm and was constructed with 150 spirals, so that the acquisition time was 1.2 s. Figure 10a shows an image of a rat skeletal muscle with the microscope positioned in a rigid stand during the acquisition, in which striations are clearly apparent. In Fig. 10b, the same tissue region shown in Fig. 10a was targeted, but in this case the scanner was held by hand during acquisition without a rigid stand. The striations remained visible, which demonstrates that our microscope is relatively insensitive to the confounding effects of hand motion.
Figure 10c shows an image of a rat hair with the texture of the outermost layer of the hair shaft (cuticle layer) clearly visible. Figure 10d shows an image of the inner surface of a mouse artery. In this image, the regions of high reflectance likely correspond to collagen fibers. Figure 10e shows a similar image of the artery wall with imaging performed at a different depth. In this image, fine structures that may correspond to elastin fibers are apparent. Figure 10f shows an image of a rat liver treated with 5% acetic acid, with areas of the nuclei of liver cells likely manifesting as regions of high reflectivity.

Fig. 10 Images acquired with the scanning fiber microscope using the polarization-sensitive reflectance optical modality [see Fig. 4b] with a FOV of 190 μm; (a) rat skeletal muscle with microscope in holder during image acquisition, (b) same as (a) but with microscope held with hand during image acquisition, (c) image of a rat hair, (d) image of a mouse artery where the white structures are likely the collagen fibers present in the artery wall, (e) image of a mouse artery where the white structures are likely elastin fibers present in the artery and (f) image of a rat liver treated with 5% acetic acid where the white areas are likely the nuclei of the liver cells.

3.4. Discussion

In the current design, the outer diameter of the scanner is 3 mm. We estimate that downscaling the scanner to approximately 2 mm can be done with the same performance. Smaller dimensions become challenging, however, because manufacturing a small-sized electromagnetic motor that can deliver a sufficient force is difficult at these scales. Furthermore, 3-dimensional imaging can be realized in the scanner by adding an actuator in the connection between the fiber housing and the microscope housing. Most of the fiber scanners reported in the literature have scanning speeds above 2.5 frames per second (fps).
Our current scanner design achieves a frame rate of 1 fps, which we showed to be sufficient to obtain well-resolved microscopic images when the scanner is held by hand while in contact with the tissue. Higher frame rates with the same spatial resolution could be achieved with our design (for instance, by reducing the length of the fiber housing), but this would limit the FOV that can be achieved in the nonresonant mode. The current optical design, in which the lens is attached to the fiber, allows high numerical apertures to be realized. Numerical apertures of 0.85 for single objectives smaller than 1 mm pupil diameter have been reported in Ref. 20. The achievable numerical aperture for a fixed entrance pupil diameter is limited by the required free working distance. When the objective lens is attached to the fiber, an additional exit window is required, which reduces the free working distance by approximately 75 μm compared to fiber scanners in which the objective is fixed to the microscope housing. The resolution achieved across the FOV is determined by the numerical aperture as well as the lens aberrations. For the objective attached to the fiber, the lens system is used only on-axis, resulting in a constant resolution throughout the FOV. This allows for a simple lens design, as compared to lens systems that are fixed to the microscope housing. The images shown in Fig. 10 illustrate the potential use of the scanner in relevant biological applications. For instance, the images in Figs. 10d and 10e show that certain structures in the blood vessels that are known to play a role in cardiovascular disease (see for instance Ref. 26) can be visualized, while Fig. 10f reveals structures that are relevant to determine diseased tissues in the field of oncology.

4. Conclusions

The design and implementation of a high-resolution fiber-scanning confocal microscope with an outer diameter of 3 mm were presented.
The images obtained from biological tissue demonstrate that microscopic-level imaging can be achieved even with the microscope held by hand. Further studies are required to determine the potential of this microscope to provide information relevant for real-time disease diagnosis. The microscope design allows for many different microscopic imaging modalities to be readily implemented, making it a powerful tool for many different clinical contexts.

Acknowledgments

We thank F. van Gaal (VDL), C. van der Vleuten, and R. van Rijswijk (MiPlaza) for their technical support and G. Braun and R. Harbers (Philips Research) for their experimental support. M. van der Mark and J. Schleipen (Philips Research) provided valuable feedback in the manuscript preparation phase. We thank M. van Zandvoort and R. Megens (Maastricht University) for their insights on the cardiovascular images.

References

1. P. Delaney and M. Harris, Chapter 26 in Handbook of Biological Confocal Microscopy, J. B. Pawley, Ed., Springer, New York (2006).
2. A. D. Mehta, J. C. Jung, B. A. Flusberg, and M. J. Schnitzer, "Fiber optic in vivo imaging in the mammalian nervous system," Curr. Opin. Neurobiol. 14(5), 617–628 (2004). 10.1016/j.conb.2004.08.017
3. B. A. Flusberg, E. D. Cocker, W. Piyawattanametha, J. C. Jung, E. L. M. Cheung, and M. J. Schnitzer, "Fiber-optic fluorescence imaging," Nat. Methods 2(12), 941–950 (2005). 10.1038/nmeth820
4. E. Laemmel, M. Genet, G. Le Goualher, A. Perchant, J-F. Le Gargasson, and E. Vicaut, "Fibered confocal fluorescence microscopy (Cell-viZio™) facilitates extended imaging in the field of microcirculation," J. Vasc. Res. 41(5), 400–411 (2004). 10.1159/000081209
5. B. A. Flusberg, A. Nimmerjahn, E. D. Cocker, E. A. Mukamel, R. P. J. Barretto, T. H. Ko, L. D. Burns, J. C. Jung, and M. J. Schnitzer, "High-speed, miniaturized fluorescence microscopy in freely moving mice," Nat. Methods 5(11), 935–938 (2008). 10.1038/nmeth.1256
6. J. Sun, C. Shu, B. Appiah, and R. Drezek, "Needle-compatible single fiber bundle image guide reflectance endoscope," J. Biomed. Opt. 15(4), 040502 (2010). 10.1117/1.3465558
7. T. Ota, "In situ fluorescence imaging of organs through compact scanning head for confocal laser microscopy," J. Biomed. Opt. 10(2), 024010 (2005). 10.1117/1.1890411
8. C. J. Engelbrecht, R. S. Johnston, E. J. Seibel, and F. Helmchen, "Ultra-compact fiber-optic two-photon microscope for functional fluorescence imaging in vivo," Opt. Express 16(8), 5556–5564 (2008). 10.1364/OE.16.005556
9. H-J. Shin, M. C. Pierce, D. Lee, H. Ra, O. Solgaard, and R. Richards-Kortum, "Fiber-optic confocal microscope using MEMS scanner and miniature objective lens," Opt. Express 15(15), 9113 (2007). 10.1364/OE.15.009113
10. J. T. C. Liu, M. J. Mandella, H. Ra, L. K. Wong, O. Solgaard, G. S. Kino, W. Piyawattanametha, C. H. Contag, and T. D. Wang, "Miniature near-infrared dual-axes confocal microscope utilizing a two-dimensional microelectromechanical systems scanner," Opt. Lett. 32(3), 256–258 (2007). 10.1364/OL.32.000256
11. P. Delaney, M. Harris, and R. G. King, "Fiber-optic laser scanning confocal microscope suitable for fluorescence imaging," Appl. Opt. 33(4), 573–577 (1994). 10.1364/AO.33.000573
12. M. D. Chidley, K. D. Carlson, R. R. Richards-Kortum, and M. R. Descour, "Design, assembly, and optical bench testing of a high-numerical-aperture miniature injection-molded objective for fiber-optic confocal reflectance spectroscopy," Appl. Opt. 45(11), 2545–2554 (2006). 10.1364/AO.45.002545
13. K. Carlson, M. Chidley, K-B. Sung, M. Descour, A. Gillenwater, M. Follen, and R. Richards-Kortum, "In vivo fiber-optic confocal reflectance microscope with an injection-molded plastic miniature objective lens," Appl. Opt. 44(10), 1792–1797 (2005). 10.1364/AO.44.001792
14. J. C. Jung, A. D. Mehta, E. Aksay, R. Stepnoski, and M. J. Schnitzer, "In vivo mammalian brain imaging using one- and two-photon fluorescence microendoscopy," J. Neurophysiol. 92(5), 3121–3133 (2004). 10.1152/jn.00234.2004
15. J. C. Jung and M. J. Schnitzer, "Multiphoton endoscopy," Opt. Lett. 28(11), 902–904 (2003). 10.1364/OL.28.000902
16. M. T. Myaing, D. J. MacDonald, and X. Li, "Fiber-optic scanning two-photon fluorescence endoscope," Opt. Lett. 31(8), 1076–1078 (2006). 10.1364/OL.31.001076
17. G. J. Tearney, M. E. Brezinski, B. E. Bouma, S. A. Boppart, C. Pitris, J. F. Southern, and J. G. Fujimoto, "In vivo endoscopic optical biopsy with optical coherence tomography," Science 276(5321), 2037–2039 (1997). 10.1126/science.276.5321.2037
18. X. Li, C. Chudoba, T. Ko, C. Pitris, and J. G. Fujimoto, "Imaging needle for optical coherence tomography," Opt. Lett. 25(20), 1520–1522 (2000). 10.1364/OL.25.001520
19. M. Kanai, "Condensing optical system, confocal optical system and scanning confocal endoscope," U.S. Patent No. 7338439 (2008).
20. B. H. W. Hendriks, M. A. J. van As, and P. J. H. Bloemen, "Miniaturisation of high-NA objectives for optical recording," Opt. Rev. 10(4), 241–245 (2003). 10.1007/s10043-003-0241-2
21. I. Walmsley, L. Waxer, and C. Dorrer, "The role of dispersion in ultrafast optics," Rev. Sci. Instrum. 72(1), 1–29 (2001). 10.1063/1.1330575
22. Z. Bor, "Distortion of femtosecond laser pulses in lenses and lens systems," J. Mod. Opt. 35(12), 1907–1918 (1988). 10.1080/713822325
23. J. M. Khosrofian and B. A. Garetz, "Measurement of a Gaussian laser beam diameter through the direct inversion of knife-edge data," Appl. Opt. 22(21), 3406–3410 (1983). 10.1364/AO.22.003406
24. M. Rajadhyaksha, R. R. Anderson, and R. H. Webb, "Video-rate confocal scanning laser microscope for imaging human tissue in vivo," Appl. Opt. 38(10), 2105–2115 (1999). 10.1364/AO.38.002105
25. Richardson Technologies Inc., T. M. Richardson, "Test slide for microscopes and method for the production of such slide," U.S. Patent No. 6381013 (2002), www.emsdiasum.com
26. R. T. A. Megens, M. A. M. J. van Zandvoort, M. G. A. oude Egbrink, M. Merkx, and D. W. Slaaf, "Two-photon microscopy on vital arteries: imaging the relationship between collagen and inflammatory cells in atherosclerotic plaques," J. Biomed. Opt. 13(4), 044022 (2008). 10.1117/1.2965542

Benno H. W. Hendriks, Walter C. J. Bierhoff, Jeroen J. L. Horikx, Adrien E. Desjardins, Cees A. Hezemans, Gert W. 't Hooft, Gerald W. Lucassen, Nenad Mihajlovic, "High-resolution resonant and nonresonant fiber-scanning confocal microscope," Journal of Biomedical Optics 16(2), 026007 (1 February 2011). https://doi.org/10.1117/1.3534781
Vue Watch - Find The Best Vue Plugins

Computed Properties and Watchers — Vue.js
Vue does provide a more generic way to observe and react to data changes on a Vue instance: watch properties. When you have some data that needs to change based on some other data, it is tempting to overuse watch - especially if you are coming from an AngularJS background.

Vue.js
Vue.js - The Progressive JavaScript Framework. Versatile. An incrementally adoptable ecosystem that scales between a library and a full-featured framework.

Vue.js Watchers Tutorial - Flavio Copes
Jun 09, 2018

Watch for Vuex State changes! - DEV

PlayStation Vue is dead. These are the best alternatives

In Vue we can watch for when a property changes, and then do something in response to that change. For example, if the prop colour changes, we can decide to log something to the console:

export default {
  name: 'ColourChange',
  props: ['colour'],
  watch: {
    colour() {
      console.log('The colour has changed!');
    }
  }
}

Getting Started | Vuex
Centralized State Management for Vue.js. Again, the reason we are committing a mutation instead of changing store.state.count directly is because we want to explicitly track it. This simple convention makes your intention more explicit, so that you can reason about state changes in your app better when reading the code.

CLI Service | Vue CLI

What the Tick is Vue.nextTick? - Vue.js Developers

There is a watch object created with two functions, kilometers and meters. In both functions, the conversion from kilometers to meters and from meters to kilometers is done. As we enter values inside either of the textboxes, whichever is changed, Watch takes care of updating both textboxes.

Learn How to use Vue.js Watchers - Coding Explained
Mar 13, 2017

Laravel Vue JS NewsPaper Project Part -11 | Create post
Start another step of the Laravel Vue JS newspaper project tutorial series. This part is among the most important in the full project. How to create a post data table for this project is explained in this video.

What is Vue.js - W3Schools
Vue.js provides built-in directives and user defined directives. Vue.js Directives. Vue.js uses double braces {{ }} as place-holders for data. Vue.js directives are HTML attributes with the prefix v-. Vue Example. In the example below, a new Vue object is created with new Vue().
Brown Sequard Syndrome: All You Need To Know [Learn Through A Video]

Brown Sequard Syndrome is a neurological syndrome that refers to a condition damaging one half of a person's spinal cord. To make understanding Brown Sequard syndrome easier, we will be talking about the syndrome in detail. This article about Brown Sequard syndrome is divided into the following parts.
1. Background
2. How the world got to know
3. What happens if one gets Brown Sequard syndrome
4. Brown Sequard syndrome symptoms
5. Brown Sequard syndrome epidemiology
6. Brown Sequard syndrome radiology
7. Brown Sequard syndrome etiology or causes
8. Brown Sequard syndrome diagnosis
9. Brown Sequard syndrome prognosis and treatment
10. Brown Sequard syndrome complications
11. Patient education
12. Brown Sequard syndrome physiopedia

Background

Brown Sequard disorder is an incomplete spinal cord lesion characterized by a clinical picture reflecting hemisection injury of the spinal cord, typically in the cervical cord region. It was first described by Charles Edouard Brown-Sequard in 1894, a famous physiologist. He stated that the hemisection in Brown Sequard disorder damages the neural tracts inside the spinal cord. These neural tracts carry information to and from the brain.

How Did The World Get To Know About It?

Charles Edouard Brown-Sequard, a famous physiologist, talked about Brown Sequard syndrome for the first time in 1894. He discovered this neurological disorder while examining a sea captain who was stabbed in the neck. He found that Brown Sequard syndrome is a condition showing an incomplete pattern of injury reflecting a hemisection of the spinal cord, which brings about weakness and paralysis on one side below the injury and loss of pain and temperature sensations on the opposite side.

What Happens in Brown Sequard Syndrome?
As mentioned earlier, Brown Sequard syndrome affects one half of the spinal cord; the hemisection damages neural tracts in the spinal cord that carry information to and from the brain. This results in a loss of sensations (pain, temperature, touch), as well as paralysis or loss of muscle function in certain parts of the body.

What Are The Symptoms?

Brown Sequard syndrome can be recognized by the following findings:
1. Loss of motor function, also known as hemiparaplegia
2. Loss of vibration sense
3. Loss of touch sense
4. Loss of position sense, also known as proprioception
5. Contralateral loss of pain sensation
6. Loss of temperature sensation
7. Loss of two-point discrimination

Other than the above mentioned, any sort of weakness on the ipsilateral side of the spinal injury can also be counted as one of the symptoms of Brown Sequard syndrome.

Brown Sequard Syndrome: Epidemiology

Around 11,000 new cases of spinal cord injury are recorded each year in the United States, including paraplegia and tetraplegia. However, since Brown Sequard syndrome involves damage to only one side of the spinal cord (only hemisection), it is rare; only 4% of spinal cord injuries are classified as Brown Sequard syndrome.

Brown Sequard Syndrome: Radiology

Radiology helps to determine and diagnose the etiology of Brown Sequard syndrome. Read on to learn what etiology is.

Brown Sequard Syndrome: Etiology / What Are The Causes?

Brown Sequard syndrome can be caused by the following:
1. Any sort of spinal cord tumor
2. Any spinal cord trauma (a puncture/wound)
3. Ischemia (blocking of a blood vessel)
4. Any infectious disease (tuberculosis)

The causes of Brown Sequard syndrome can be divided into two groups: traumatic and non-traumatic.
However, traumatic injuries are most often the reason behind Brown Sequard syndrome.

Traumatic Reasons
1. Stabbing
2. Car accidents
3. Gunshots
4. Blunt trauma
5. Fracture
6. Falling from a height

Non-traumatic Reasons
1. Vertebral disc herniation
2. Cyst
3. Tumors
4. Cystic disease
5. Hemorrhage
6. Ischemia
7. Decompression sickness

How Is It Diagnosed?

Brown Sequard syndrome is diagnosed through MRI (magnetic resonance imaging). Magnetic resonance imaging (MRI) is the imaging modality of choice for spinal cord lesions. The diagnosis of Brown Sequard syndrome is made based on history and physical examination. Brown Sequard syndrome is an incomplete spinal cord injury characterized by findings on clinical examination that reflect hemisection of the spinal cord (damage to one half of the spinal cord on either side).

However, if the cause of Brown Sequard syndrome was spinal cord trauma, there is a high chance that other injuries may also be present. Laboratory investigations may likewise be valuable in non-traumatic etiologies, such as infectious causes.

Brown Sequard Syndrome: Prognosis, The Likely Outcome

The prognosis of Brown Sequard syndrome varies from person to person depending on their strength and recovery process. The prognosis for the recovery of motor function in Brown Sequard syndrome is optimistic. Most of the 1-year motor recovery (one-half to two-thirds) occurs in the initial stages (the first 1 or 2 months) after the injury. Recovery may slow down over the next 3 to 6 months and can continue for up to 2 years following injury.

What Are The Possible Treatments?

Treatment of Brown Sequard syndrome can vary from person to person, focusing on preventing complications and addressing causes. It is mainly focused on the underlying cause of the syndrome.
In the early stages, it can be treated using a high dose of steroids in some cases, such as traumatic spinal cord injuries. Moreover, decompression surgery is recommended for patients with traumatic injuries or tumors, such as those from car accidents or stabbing.

Other than this, physical, recreational, and occupational therapy is important, as it helps the person stay mentally stable and become less dependent on others for everyday activities, improving quality of life with a multidisciplinary approach to spinal cord injury. Specific devices, like wheelchairs and limb supports, can help improve personal life and day-to-day activities for patients with Brown Sequard syndrome. If the patient experiences issues with breathing or swallowing, various aids can be applied; cervical collars can also be used depending on the level of injury.

Complications If Brown Sequard Syndrome Is Left Untreated

If Brown Sequard syndrome is left untreated, it can bring certain complications like:
1. Spinal shock
2. Depression
3. Pulmonary embolism
4. Infectious diseases (lungs and urinary tracts can be affected negatively)
5. Hypotension

Patient Education

Since there can be traumatic reasons for Brown Sequard syndrome, in most cases physical therapy and rehabilitation have been reported to lead to prompt symptom resolution. Brown Sequard syndrome has the best prognosis for ambulation of all spinal cord injuries, with up to 90% of people walking without assistive devices after recovery.

Brown Sequard Syndrome: Physiopedia

With Brown-Sequard disorder, a clean-cut hemisection is usually not seen. However, partial hemisection is evident, and it typically involves all the nerve tracts lying in the damaged region. Hemisection would create deficits in the following ways:

1. Dorsal Columns
Sensations responsible for fine touch, vibration, and two-point discrimination would be affected on the same side as the lesion. There are two ascending dorsal column tracts: the fasciculus gracilis, which carries sensory information from the lower trunk and legs, and the fasciculus cuneatus, which carries sensory information from the upper trunk and arms. These tracts carry sensations such as pressure, vibration, fine touch — which lets you localize where you were touched — and proprioception, which is an awareness of your body position in space.

2. Spinothalamic Tracts
These are responsible for pain, temperature, and crude touch, which would be affected contralateral to the injury, since these fibers ascend a level or two and then cross to the opposite side of the spinal cord.

3. Dorsal And Ventral Spinocerebellar Tracts
These carry sensations of unconscious proprioception. Injury affecting the dorsal spinocerebellar tracts causes ipsilateral dystaxia, and involvement of the ventral spinocerebellar tracts would cause contralateral dystaxia, as these fibers ascend and cross to the opposite side.

4. Horner's Syndrome
If the lesion is at or above T1, there will be an ipsilateral loss of sympathetic fibers, resulting in ptosis, miosis, and anhidrosis. This can also cause redness of the face due to vasodilation.

5. Corticospinal Tracts
Finally, there is the corticospinal tract, a descending pathway that carries motor information from the brain to the various muscles in the body and controls voluntary muscle movement. There would be an ipsilateral loss of movement at the site of the injury, with flaccid paralysis and lower motor neuron signs such as loss of muscle bulk, fasciculations, and decreased strength and tone.

For instance, if you accidentally touch a hot skillet, the sensation of pain and temperature is carried from the nerves in the skin of your fingers through a first-order neuron.
Brown Sequard Syndrome: Learn Through A Video

This video talks about Brown Sequard syndrome in detail.

Bottom Line

Brown Sequard syndrome is rare. It is a neurological disorder of the spinal cord that affects only one half of the spinal cord. It can cause one to lose sensory function. However, it can be treated according to its cause.

If you have any further queries related to this article, kindly comment below and our team members will respond to them very soon!
#!/bin/sh
## Set up a throwaway PostgreSQL instance on a RAM disk (macOS).
## In this case PostgreSQL 9.0 from MacPorts.
export PGCTL=/opt/local/lib/postgresql90/bin/pg_ctl
export CREATEUSER=/opt/local/lib/postgresql90/bin/createuser
export PGDATA=/Volumes/pgtmp/postgres

case $1 in
start)
    ## Make a RAM filesystem (1048576 x 512-byte sectors = 512 MB)
    diskutil erasevolume HFS+ "pgtmp" `hdiutil attach -nomount ram://1048576`
    ## Initialize a fresh cluster and start postgres
    ${PGCTL} -D ${PGDATA} init
    ${PGCTL} -D ${PGDATA} start
    ## Give the server a moment to accept connections, then create the test DB
    sleep 2;
    psql -c "CREATE DATABASE ckantest;" postgres
    ;;
stop)
    ## Stop postgres
    ${PGCTL} -D ${PGDATA} stop
    ## Poof! Detach the RAM disk
    umount /Volumes/pgtmp
    ;;
esac
Practical C++ Programming: Beginner Course | Zach Hughes | Skillshare

Lessons in This Class
27 Lessons (4h 24m)
1. Welcome 3:27
2. Installation of the Code - Blocks IDE 6:43
3. Anatomy of the Hello World Program 8:05
4. Data Types and Variables 13:46
5. Basic Output 12:06
6. Basic Input 11:34
7. Arithmetic 9:25
8. Concatenation 5:02
9. If Statements 13:56
10. Switch Statements 8:03
11. Practical Program #1 12:31
12. While and Do-While Loops 8:13
13. For Loops 6:50
14. Data Structures - Arrays 9:32
15. File Output 6:46
16. File Input 15:26
17. Advanced Input and Output Manipulation 12:05
18. Practical Program #2 16:59
19. Functions 7:01
20. Parameters 4:57
21. Pass by Reference 9:54
22. Function Overloading 8:22
23. String Functions 3:33
24. Random Number Generator 6:51
25. Project -Hangman (Part #1) 18:16
26. Project -Hangman (Part #2) 15:54
27. Project -Hangman (Part #3) 8:34

2,088 Students · 1 Project

About This Class
C++ is one of the most used programming languages. It is an object-oriented language, offering you the utmost control over interface, resource allocation and data usage. This class covers the basics of programming in C++.
Created for the beginner programmer, this class requires no prior knowledge of programming. The main aspects of the language are introduced in a logical, gradient manner with a step by step approach. This will provide you with a solid foundation for writing useful, correct, maintainable, and effective code. By the end of this class you'll have all the skills you need to start programming in C++. With this complete class, you'll quickly learn the basics, and then move on to more advanced concepts.

Teacher: Zach Hughes

Transcripts

1. Welcome: Hello. Welcome to Practical C++ Programming: The Beginner Course. My name is Zach and I will be your instructor. Now, before we get started with the syllabus, I'll go ahead and tell you a little bit about myself and some of my credentials. I am currently a student at Carleton University, where I am on my way to earn a bachelor's degree in computer science and a minor, and an associate's degree in electrical engineering. My electrical engineering associate's degree is actually coming from a community college in my area, and I do live in Texas, if you couldn't tell by my accent. So I hope that doesn't bother you too much, but I'll try not to make it sound too Texan when I'm recording. My background in programming involves a heavy use and heavy practice of C++. I've taken many semesters of C++ at my university.
I have actually taken three semesters of C++ total, and I've taken a semester in MATLAB, an engineering programming language, and I'm currently, on the side, programming Java for Android application development, and I've actually developed my own Android applications for the Google Play store. So with that being said, that's enough about me. Let's go ahead and look at what we're going to be learning in this course. So if you look on screen, I've kind of listed everything that we will be for sure going over in this class. But just remember that the class is not limited to this syllabus. So there's gonna be things in between these concepts right here that we will be going over, you know. So everything you see on screen is not everything that you will learn. You will actually learn much more than just everything you see here. So, you know, if you're a complete beginner in programming, I would say that this course is definitely the perfect course for you, because I'm not going to start with just the C++ principles. I'm going to actually introduce the basic programming principles in general to start the course off. So if you don't know anything about data types and variables, we're actually going to cover that right at the beginning of the course. And then we're gonna move on to how to use these programming concepts in C++ and develop our own useful applications. And when I say useful, you know, the course is called Practical C++ Programming. And that's because I think that C++ is a much more fun language to learn when you're using it in practical situations. So that's exactly what we're gonna do. We're going to develop a small business application, a simple calculator app, and then at the very end of the course, for a final project, we're going to develop a hangman game that you can show all your friends.
And hopefully, if I get enough students for this course and enough people leave good reviews and tell me that they want to see an advanced course, then that's what we're gonna do. I'm actually going to make an advanced C++ course after this, where we'll go into object-oriented design and everything like that. So stay tuned for this series, and I'm glad you're part of the course. Let's get started.

2. Installation of the Code - Blocks IDE: Hello, everyone. My name is Zach and I'm here with Practical C++: The Beginner Course. And in this tutorial, we will be going over the installation of the Code::Blocks IDE. Um, IDE stands for interactive development environment, and, ah, we will be using Code::Blocks as our IDE for this entire course. I chose Code::Blocks because, as I said in the introduction, it's actually what I began not only learning C++ in, but programming in general. So, uh, not only that, but it's also free. So I think it's a really good choice to begin your programming, um, you know, your programming journey. So as you can see, I have a web browser open. And if you go to Google, in the search bar just type "code blocks", as so. Um, the first link you will see is www.codeblocks.org, and this is where you're gonna want to go. You can either click on this first Code::Blocks link and then click on downloads, or you can just click on the downloads link below when the page loads, you know, depending on how my Internet is, ah, doing right now. But when the page does load, you're gonna be brought here and you will have several links, like download the binary release, download the source code, and retrieve source code from SVN. You are going to want to click on download the binary release. It will bring you to this page. Now, depending on what operating system you're on, you're going to click on something different than what I may be clicking on.
You know, if you're on Linux 32-bit or Linux 64-bit, we're gonna be looking at these boxes right here. For Mac OS X, you're gonna want to scroll all the way to the bottom, and they have a download link right here. Um, me, though, I'm on Windows 7, so I will come up here. And if you look, it doesn't say Windows 8 right here, but up here it says Windows, you know, 7 dash 8. So these, ah, these binary builds should work on Windows 8 and 8.1. In fact, I've actually downloaded it on Windows 8 and 8.1, so I know for a fact that it will work. If you look over here, there are two different links. There's BerliOS and sourceforge.net. I'm not that familiar with BerliOS, um, but I am familiar with SourceForge, and I use it for a lot of my downloads, so I would recommend using sourceforge.net. Now there are three different, um, types of binary releases that you can download. Now, when I first started programming, my instructor had us download this binary release right here, the second one on the list, which is perfectly fine, works great. Um, but as I got into more advanced C++ programming, where I'd start doing concurrent threads and, you know, different kinds of, ah, concurrent thread processing, multi-threaded processing, I needed this GCC 4.8.1 for my Code::Blocks to work with threads. It's a specific compiler. So I would recommend, if you plan on going more in depth with C++ and maybe taking a course after this, to go ahead and download this one. Because if you start getting into threads in C++, you will have to come back and download this compiler right here for Code::Blocks to work. Um, otherwise, you know, this right here is a great option as well. So either one is fine for this course. Um, you're gonna want to go ahead and click on sourceforge.net to continue on either one of the links. I'll click on the second one. It will take you to sourceforge.net and the countdown will begin for your download.
After the countdown, the .exe file should start downloading. Give it a second, and down at the bottom you can see the Code::Blocks 13.12 setup .exe downloading; it says there are still about ten minutes left on mine, so it might take a while. I'm not going to sit through this tutorial and wait for it to finish, simply because I already have it downloaded on my computer. But when it does finish downloading, you're going to want to launch the .exe file, and an install wizard will pop up. It's a very simple install wizard: basically just click Next on every window and it should install easily with no problems. After it's done installing, search for the program, either by using your Start search like this, or it might have put an icon in your taskbar like I have right here, and it may even have created a desktop shortcut. Whatever the case, launch Code::Blocks and give it a little bit of time, especially the first time you launch it; the first launch might take a little longer than you expect. Also, although it seems like it's taking a while to download on my computer, that's because my internet connection is quite slow right now. I just moved, and I have slow download speed because I haven't yet upgraded my internet, so at your house it will probably go a lot faster than mine; I think I have about eight megabits download speed right now. This is what's going to pop up when Code::Blocks launches, and in the next tutorial we are going to create a new project, discuss the hello-world program it creates, and go over the anatomy of our very first C++ program. So stay tuned, and I'll see you in the next tutorial. 3. Anatomy of the Hello World Program: Hello, everyone. Welcome to Practical C++, the beginner course.
I'm Zak, and in this tutorial we will be going over the hello-world program. So open up Code::Blocks. You're going to want to click "Create a new project" and then click "Console application". On mine it's in the top right corner of the window; on yours it may be somewhere different, but you want to click Console application and then hit Go. On the next window, hit Next until you get to the language selection; make sure you highlight C++, click Next again, and then give your project a name. I'm just going to call mine "Tutorial One". Then specify a folder to keep it in; make sure it's a folder you can find easily. Click Next, and leave all the default settings on the following window (these are simply directory and compiler settings), then hit Finish, and your project is created. Right now you don't see anything, but if you go over to the left and expand Sources, you'll see the main.cpp file (cpp stands for C plus plus), and if you double-click it, you'll see the code. Now, before we analyze this code, I want to show you how to run it and what it does. To do that on Windows you can hit F9, or on a Mac or Linux machine you go up to the Build menu and hit Build and run. The code will compile, and then you'll see a console window: it prints the words "Hello world" and then says "Process returned 0". Okay, so you can close out of this. Now that we know what the code does, we're going to look at how it does it. The main thing I want to show you in this tutorial, other than how to run and compile your first program, is how to type out what I call the skeleton of a C++ program: everything you need for your code to run, at least in this course, for every program that we will be writing together.
So I want to go ahead and take out that cout line, because it is not actually needed for the program to compile and run. If we take it out and hit F9 to build and run again, we still get "Process returned 0"; we just won't have "Hello world" printed to the console. That's fine: the program still ran and executed to the end with no errors, and it's a perfectly good program. If we exit out of this, we can now analyze everything that we need, which is everything you see remaining. Starting from the top, you'll see #include <iostream>. This line simply tells the program that it needs to include a C++ library known as iostream, which stands for input/output stream. Every program we write will need this line, which is why I include it in our skeleton: without it, your program loses its basic input and output functionality. Moving on, you'll see using namespace std. This line is not strictly necessary for your program to run; if we took it out right now, it would still run fine. We hit F9, everything goes fine, and we still get the same "Process returned 0" result. However, I do want this line in here for reasons I'll explain in the future. For now, just know that we include it as part of our skeleton because it will make your life easier when we start writing more code, and I'll show you why in future tutorials. So remember: using namespace std; and it ends with a semicolon. I know right now you're asking why this line ends with a semicolon while the #include line doesn't; we'll get to that in future tutorials, and it will all become a habit. For now, just know this.
This is the code that you will need in all of our C++ programs. Moving on to the next big chunk: this is known as your main function, and every C++ program we write needs one. To type it up, you simply write int (which stands for integer), then main, open parenthesis, close parenthesis, then your curly braces, with the return 0; statement inside. Now, in programming there are two different conventions for writing these braces, and I'll show you both now so you don't get confused later. One convention is the way you just saw it, where the opening brace goes on its own line below main. The other way you may see it is with the opening brace at the end of the main line and the closing brace down below. There's no difference in the code whatsoever; it will run just the same. It's purely a programming convention; there's no right or wrong way. Some people have strong opinions about why they do it a certain way, but it's all a matter of preference. Before we end this tutorial, I want you to go through and type this up with me so you can get in the habit of doing it. What's the first thing we need to do? We need to include the input/output stream C++ library so that we can output things to the console window. To do that, we write #include <iostream>. No semicolon on this line; again, we'll get in the habit of knowing where to put them and where not to. Next, although we don't need it for this program, you do want it as part of your skeleton for this class, and I want you to get in the habit of writing it in all of our programs: we want to use the standard namespace. How do we do that?
Remember, we type using namespace std, and that one does end with a semicolon. Moving on, the other really critical piece of code we need for our program to run is the main function. Remember, that starts with int (short for integer), then the name of the function, main, open parenthesis, close parenthesis, then our braces (formatted with whichever convention you decide to use), and then return the value zero. This is working code. If you take out some of it, the compiler will return an error and it will not run. For now, just know that everything in this code is needed; in future tutorials we will discuss why each piece is needed and what exactly it does. Let's move on to the next tutorial, where we will be discussing data types and variables. Thank you. 4. Data Types and Variables: Hello, everyone. Welcome to Practical C++, the beginner course. I'm Zak, and in this tutorial we will be discussing data types and variables. Before we get into the Code::Blocks editor, I want to do this on a notepad sheet real quick so I can discuss with you how these data types are declared and what they mean. So, what is a data type? A data type is basically a description of what we are using. For instance, in the real world, if we were going to use the letter B, to us this is known as a letter; that would be its data type. Or if we were talking about the number seven, in our world this is just called a number; again, that's a data type. But in programming, and in C++, we don't call them letters and numbers. So what do we call them? Well, let's start with a single letter, the character B.
Again, to us this is just a letter, but in programming it is called a character data type. The character data type is denoted, or encapsulated, with single quotation marks, like 'B'. So in C++ this letter B is character data, and you declare it with single quotation marks; when we get into example code you'll understand what I mean by that. Let's move on to a number, say seven. In C++ this is called an integer, and an integer is written just as it is: no quotation marks, nothing special. What about multiple letters, like the name Bill? That's four characters, but what is the thing as a whole? In our world it's called a word, but in C++ it is known as a string, which is alphanumeric data. For now, just know it's called string data, and you denote string data with double quotation marks, like "Bill". And I want to go over one last thing: decimal numbers, like 7.77. Is that an integer? No, it's not a whole number. So what is it called? It could actually be one of a couple of things, but for this course we're going to keep it simple: it's either a float or a double, and it's written just as 7.77. For this class we're going to use the word double, and the reason is that the compiler treats a literal like 7.77 as a double by default anyway. So for now we're just going to call it a double data type. Okay, let's move on to some actual code and practice declaring these four major data types that I've shown you here. Let's open up our code.
By the way, in the last tutorial the code wasn't zoomed in; if it was hard for you to read, hopefully this will be easier. Now, let's practice what we learned in the last tutorial in this project and write our skeleton, so to speak: everything we need for our code. We need the basic input/output library, so let's include iostream. We want to use the standard namespace. Then we need to declare our main function, and we need a return value for it, which, as we said, will be zero. This is our skeleton, everything we learned in the last tutorial. If you still haven't got this down yet, I suggest you practice it over and over until you can basically write this code in your sleep and get your program to compile and run with "Process returned 0". Now let's get into our data types and variables. We discussed what the data types are: a kind of description of what you're dealing with. But what's a variable? Well, a variable is kind of like a box, and your data type is a label on that box. That's how I want you to picture this: you have a box, and you have a label on that box. So let's say we put the letter B inside the box. The letter B is character data, right? We discussed this earlier. To declare character data, we write char, which stands for character. That's our label, char, that we're putting on the box. The box itself is user-defined, meaning we can name it whatever we want. I'm going to name it letter, because that's what's in this box: a letter. Then we need to declare what the letter is, so we put an equals sign.
We put an equals sign, and then, again, character data is denoted with single quotation marks; inside the single quotation marks we put our letter B, and we end the statement with a semicolon. This is declaring a variable with the data type char, and that character variable holds the letter B. Now, if you don't know what this is used for yet, that's fine; in the next tutorial we're going to discuss how we use these variables and what exactly they're for. For now we're just declaring them, and I'm giving you a visualization of what they are: a box with a label and something inside the box. In this case, it's a box that holds characters, and in the box is the character B. Let's move on to a number, a box that holds numbers. A whole number is an integer, so to declare a whole-number integer value we write int, which you probably remember from int main; we'll get into why you need it there in future tutorials. For now, let's focus on this and put a number in here. Let's call this variable number; again, it's user-defined, so if I wanted to call it jimmy, I could, but it's convention to call it something that represents what it holds. So we'll call it number, set it equal to the number seven, and end with a semicolon. Remember: no quotation marks here, just the number. That's another variable. Now let's do the other two. Let's make a variable that holds the name Bill. To do that, we write string, because that's the data type; remember, multiple letters, alphanumeric data, is called string data.
(String is technically a class, and I know you don't know what a class is yet, so for now just know that it works as a data type.) So: string, and we're going to call it name, equals the name in double quotation marks, then a semicolon. There's your variable for a name, or multi-character data. Now, with string, don't get confused into thinking it can only hold letters. If I wanted to put "Bill99---;" inside these quotation marks, that's fine: alphanumeric data will hold all of those characters, and it will not cause an error. So just know that; for now, let's use "Bill99" so you don't forget that you can actually put numbers in a string. Even if you wanted just "99", you could, as long as you have the double quotation marks; it's still string data in that case. For the last data type, let's do a decimal number. Remember, I said it could be float or double, but for this class we're using double. So: double, let's call it decimal, equals 7.77, semicolon. All of this is fine: if we run this program, it will not cause any errors. Everything runs, process returns zero, same outcome. All of this is happening behind the scenes, so of course you won't see anything different in the console window when we run it; you'll still get the same "Process returned 0". But the point is to show you how to declare these variables and data types, discuss what they are, and show that if you declare them right, you won't get an error. Now let me show you what happens if you declare one wrong. Say we change double to string on the decimal variable, without double quotation marks around the value. That is declared wrong, because you're calling it a string and you don't have your double quotation marks.
If you wanted it to be right, you'd have to put the value in double quotation marks; without them, the declaration is wrong. When you try to build and run, we'll get an error. See this red box? If you look down in the build log, you can scroll down and it says: error: conversion from 'double' to non-scalar type 'std::string' requested, and "Build failed: 1 error". That error will stop the program from compiling. If we change it back to double and build and run again, everything goes fine. So that's it for this tutorial. Let's move on to the next one, where we will learn about input and output, and after that some basic arithmetic and more fun stuff. Thank you for watching. 5. Basic Output: Hello, everyone. Welcome to Practical C++ for beginners. My name is Zak, and in this tutorial we're going to be discussing basic input and output, using the things we learned in the previous tutorials. As you can see on screen, I've already got our basic code typed up, what we've been calling the skeleton, including the return statement, return 0. For this tutorial, we're going to start with output, because we've already had a glimpse of it in the hello-world program. There we saw something along these lines, and when we ran it, the screen printed "Hello world" before we saw "Process returned 0". For now, we're going to take out the endl at the end, because I want to focus on something a little simpler first: just basic output. Then we'll move on to basic input. So right now we just have cout with "Hello world", and when we run it you'll see a small difference: there's not much space in the output.
It says "Hello world" and then immediately "Process returned 0"; later we'll get into why that happened when we took out that last bit of code, but for now let's leave it as is. So what is this? This is an output stream: cout. The "out" part refers to output, and the "c" stands for console, so when you see cout, think console output. The console is the big black window that pops up when we run our program. So when we write cout, then these two operators, and then specify a string (and it is a string because of the double quotation marks), "Hello world", the console will actually output the words "Hello world", and that's exactly what happens. Just to show you, we can also output an integer: we could output 9, and the console will print the number 9. That's basic output, but now I want to throw variables into this for a second. In the previous tutorial we discussed the variable letter; this time we'll use the letter Z. To declare a variable of one character, holding the letter Z, we write char, then a name of our choosing, say letterZ, equals 'Z' in single quotation marks, then a semicolon. There's our variable. If we want to output this variable, we just write cout, the operators, and then the name of our variable, letterZ. If we run this, we get what you'd expect: the single character Z. What this says is: console, output the variable we called letterZ.
In other words, output whatever this variable is holding, and it's holding the letter Z. And remember, we only got Z because that's what the variable holds, not because of its name. Let's rename it to something else, say box. When we cout box, the program goes to where we declared box and looks at what it's holding: the character Z. So when we console-output box, it outputs Z to the screen, and the process returns 0. Now let's back up for a second, because I didn't really explain these two symbols. What are they? They are the stream insertion operators; they feed values into the output stream. Any time you write cout, you want to output something, but you can't just write "cout box"; that's an error. You use the output stream operator, which is just two less-than signs: cout << box. And just for practice (I really want you to try this on your own so you can see all the different possibilities), if we put two of these lines of code one after the other, you get exactly that: two Zs, one after the other, because we wrote the statement twice. Now let's look at endl. What is that? It stands for "end line", and it's used a lot with basic output because it gives you spacing. If we write cout << endl, it works the same way: after the Z you'll see a gap, because we added a blank line. We can also put it on the same line as the variable, like cout << box << endl; that's the same thing, just all in one line of code.
Basically, you're going to get the same exact results; I just wanted to show you that there are many ways to do this, and I really want you to play with it on your own, because that's how you learn. For example, you could write cout << box << endl << box << endl; and when you run it you get just that: a Z, then a new line, then another Z, then a new line. So you really just need to play with this output, because you'll discover a lot of cool things. Let's leave that code there and declare a new variable below it, this time a decimal value. Remember, that's a double; we'll call the variable box2 and set it equal to 89.47, with a semicolon. That's our second variable. Now let's add cout << box2 << endl; and run it. What do you get? You get your two Zs, because we output box, ended the line, output box again on the next line, and ended that line; then on the next line we output box2, or rather what's in box2, 89.47; then we ended the line, and the process returned 0. You can see that if you play with this, you're going to learn, and I want you to do it: go through and use different data types. Let's try one more before we end this tutorial: the string type. We'll declare string address = "1400 College Drive". That's an address, and we had to use string because it's alphanumeric data; we couldn't use an integer, since it obviously has alphabetic characters in it too. That's why we used string for the address variable. If we want to output it, we can write cout, give it some space with two new lines, then output address, then add another endl. Let's output that.
Let's see what that looks like. We get our two Zs, our box2 variable, 89.47, and if you look down here, we added two new lines; that's where the blank space comes from. Then we output address, 1400 College Drive, which we declared in the string variable, and the process returned 0. So there you go. I want you to play with this: try it with character data, integer data, string data, and double (decimal) data, and practice outputting everything we learned here. Maybe try outputting your name and writing a sentence. For example, try string name = and then your name; mine is Zak. Then try to output that: end the line, then output name, then output " is teaching a class", then end the line again, and output all of it. Look at what you get: really cool output, because you're taking the name you declared in the string variable name and outputting it, then immediately after that, without ending the line, outputting the raw string " is teaching a class" (this one isn't in a variable; we're putting it straight into the output stream), then ending the line. Look at all the things you can do with that; you'll surprise yourself. I really want you to practice. Originally we were going to do both input and output in this video, but we're going to save input for the next one.
I really want you to practice this right now: declaring your variables, writing everything up, and seeing how you can output different things to the stream on your own. But that's all there is to this video, and I'll see you in the next tutorial. 6. Basic Input: Hello, everyone. Welcome to Practical C++ for beginners. My name is Zak, and in this tutorial we will be going over basic input. In the last tutorial we discussed basic output: things like declaring a string, say name = "Zak", and an integer value, age = 23, and then outputting them. We could write cout, the two less-than signs (don't forget those), then name, then the raw string " is " that we're inserting into the output stream, then age, then " years old", and even throw in an endl at the end. When we print that... oops, I'm getting an error, and the reason is that I wasn't paying attention; I'm sure you caught it if you were watching me. Age is an integer, and I accidentally put it in double quotation marks. Let's take out those double quotation marks and turn it back into an integer. There we go; now we shouldn't get an error. We build and run, no errors, and we get the output "Zak is 23 years old". If you've practiced enough, this should seem pretty easy to understand. But let's move on to what we're talking about now, and that is input: console input. If console output is cout, what do you think console input is? That's cin.
And where cout deals with two less-than signs, cin deals with two greater-than signs. Look at the difference between the two stream operators. Right now I know you're thinking, "Wow, I'm going to get those mixed up a lot." Believe me, I mixed them up all the time when I first started programming C++, but I promise that after lots of practice you will probably never mix them up again, because you'll get so used to using the right one. It is something you have to practice, though; it won't stick immediately. So remember: cin uses >>, and cout uses <<. So how do we use cin, console input? Let's leave the code above, give ourselves a little more space with another new line, and declare another variable: string name2, and we won't give it an actual value. See how up above we declared name as "Zak"? Here, name2 isn't holding anything right now. What we can do is output a prompt: "Enter name 2". So basically we'll have "Zak is 23 years old" output, and below that a prompt saying "Enter name 2", asking the user to enter something. To let the user type in a value from the keyboard, you write cin >> followed by the variable you want to hold the input. Since we're telling them to enter a name, we're going to use this variable to hold the name that they enter.
So we write cin >> name2, and what that does is: when the user enters a name, it stores whatever they typed into the variable name2. Then we can write cout, end a few lines, and say "You entered " followed by name2 and an endl. What this does is output "You entered " and then whatever name2 is holding at that point. Let's test it: build and run. You see "Zak is 23 years old"; that is the result of the first declaration and output stream at the beginning of our code. Then it says "Enter name 2", and you see a cursor blinking, which shows that the console is waiting for input. Right now we're at the cin >> name2 line: the prompt says "Enter name 2", and the console is waiting for us to input something; whatever we input gets stored into name2. So if we enter, say, Jimmy, and hit Enter, it says "You entered Jimmy", because we console-output "You entered " and then name2, and name2 is holding the value we entered. Now, what happens if we want to read something like an age? We declare age2 as an integer... oops, I forgot to change the prompt to age; let's build and run again. And of course this here needs to be age2 as well; that's where these errors are popping up, so let's fix that and run again. It asks "Enter age 2". Age, we declared, is an integer, so obviously we should enter an integer, such as 8, and it will say "You entered 8". But what happens if we enter character data instead, like "b39", which is string data?
With string data it says "You entered 0", because that's not a valid value for the type, so it gives us a garbage value. So any time you get a weird value that isn't what you expected, you want to go look at the data types you've declared and make sure they match the input from the user. Now, what I want you to do is practice using cin, console input. Maybe make yourself a little program that asks what your name is, and practice entering it with different data types. For instance, you could declare a char firstInitial and a char lastInitial, then cout "Enter first initial", then console input: cin >> firstInitial. Then give yourself some space with some endls and say "Enter second initial", do another console input, cin >> lastInitial, and then cout — give yourself some space — something along the lines of "Your initials are" followed by firstInitial and lastInitial. If you run that, it will say "Enter first initial", then "Enter second initial", then "Your initials are RZ", or whatever you typed. Just practice doing something like that: entering different data, making sure you declare your data types right, and mixing your input and output correctly. Practice using the stream operators, because cin uses greater-than greater-than and cout uses less-than less-than. I would say take a few hours practicing this and doing different scenarios — a program for your initials, then one that asks for your address, and so on — and just practice entering data and matching it with your data types.
In the next video we're going to do something a little more practical: we're going to use arithmetic, and after we get the math down, you're going to make a calculator or something. So I look forward to that. I'll see you in the next tutorial.

7. Arithmetic: Hello, everyone. Welcome to Practical C++, the beginner course. My name is Zak, and in this tutorial we will be discussing arithmetic, which is all the basic math functionality in a C++ program. To start off, I'm going to assume that all of you have been practicing declaring your variables, so I'm not going to explain that in depth; I assume you already have it down. To begin, we're going to start with simple addition and subtraction. Just follow along, and you should notice it's fairly straightforward. You start by declaring your variables, and then to use those two variables in an arithmetic operation you can do several things. You could declare a variable called result and have it hold the value of the addition num1 + num2; result will hold the value of that operation. If we output it, we can see the value after the operation, and you should see that it's 11. Same thing if we want to do subtraction: you just use a minus sign (a hyphen), and when you output that you should get negative one. So you can see that in C++, addition and subtraction are fairly straightforward. I also want to show you a few things with respect to hard-coding values: if we want to do num1 - 4, we can do that — we can hard-code the value right into the expression — and we get 1. And I also want to show you another output trick.
If we just want to output the result of num1 + num2, we can do that: we can output the expression directly and get 11. So there are lots of different things you can do with addition and subtraction, and all your basic order-of-operations rules apply here. If we output (num1 + num2) - 4, it follows the order of operations: it starts in the parentheses, does that operation, which results in 11, and then subtracts 4 to give you 7. We output that all at once and get 7. So play around with addition and subtraction and you'll find it's fairly easy, and there's a lot you can do with respect to output, order of operations, hard-coding values, et cetera. But let's move on to multiplication and division. Same thing here — we're just going to put the whole operation in the output stream. So we cout num1 times num2, and for multiplication it's not an x, as some of you may think; it's the asterisk (*). num1 * num2 gives us 30 in this operation, and when we output it we get 30. Multiplication is fairly easy, and as I said, order of operations applies again: if we write num1 * num2 + 7, we should get 37. And if you remember this from math class, you don't even need parentheses for that operation, because multiplication comes before addition — multiplication and division first, then addition and subtraction after. If we run it, we still get 37. Even if we write 7 + num1 * num2, just to prove it to you, it still does the multiplication first and then adds 7, and we get 37.
I just wanted to show the importance of order of operations in C++, because the rules still apply. So let's do something like this: let's change num2 to 30, and let's try some division. For division you use the forward slash (/): num2 / num1, which is 30 divided by 5, and it outputs 6. That's fairly simple. And you can do the same thing with order of operations again: if I want to add 2 to this, I get 8 no matter where I put it — unless I put it inside parentheses. Let's try something like num2 / (5 + num1). This is a cool order-of-operations example, because you have addition and division here, and you might say division goes first. Well, that's not true, because parentheses come before multiplication and division. So it does the parenthesized operation first, 5 + num1, which turns into 10, and then it does the division, 30 divided by 10, and it should output 3. If we run that, that's exactly what we get: 3. If you practice this stuff enough, it becomes fairly straightforward, and you'll realize you can do a lot of cool things with these arithmetic operations. One more main arithmetic operation I want to show you is the modulus operator, which is the percent sign (%) on the keyboard. What this does is return the remainder of a division operation. Just to show you, we're going to change num2 to 11 and leave num1 at 5, and write result = num2 % num1. Now, I want you to think about this.
This operator returns, to be held in result, the remainder of the division operation. If we divide num2 by num1, we get 11 divided by 5; 5 goes into 11 two times, with 1 as the remainder. So this operation stores the number 1 into result, and if we output result — oops, hold on one second, I think I hit the wrong key; there we go — we get 1, which is the remainder. I also want to show you that all the usual rules apply: you can do result + 4, cout result + 4, and that gives 5. I just wanted to show you that real quick. Going back to the modulus operator, let's do another one. How about this: let's do two modulus operators in a row, so you can see if you can guess what the value will be after the operation. Let's use the value 14 here, and output result % 2. Think about this: result is holding the remainder of the first operation, and then we're outputting the remainder of that value divided by 2. Think about that for a second, and try to guess what the output will be. If you guessed 0, you're correct. What's happening is that result holds the remainder of num2 divided by num1, which is 4, because 5 goes into 14 two times with 4 left over. Then we output the remainder of 4 divided by 2, and there isn't one — it's 0, because 2 goes into 4 evenly. So when we output this, we get 0, and it's that easy. That being said, that's all for arithmetic in this tutorial.
In the next tutorial, we're going to look at something called concatenation, which is sort of like addition with strings, and I'm sure you'll find that pretty interesting too. So I'll see you in the next tutorial, and thank you for watching.

8. Concatenation: Hello, everyone. Welcome to Practical C++, the beginner course. My name is Zak, and in this tutorial we will be discussing concatenation. Now, before we actually get into concatenation — which may sound like a difficult topic, and it's really not — I just want to discuss `using namespace std` one more time with you, to give you an idea of why it's in our code, and why I told you at the beginning of the series not to worry about it. The reason we put it in our code is to make our lives easier, and I want to show you why. When we do something as simple as cout << "Hello world" and we try to output that without `using namespace std`, all of a sudden our code falls apart and we get an error right here that says "error: 'cout' was not declared in this scope". Without getting into too much detail, this namespace provides the standard cout function and the output operators, so we need `using namespace std` just to do simple standard operations such as outputting "Hello world" to the screen and ending the line. Now, there is a way to get around this — obviously you could take it out and qualify the names another way — but I don't want to get into that yet, because that's a more advanced topic, and I wouldn't consider it a good topic to discuss with absolute beginners. That gets into namespaces, which I consider an advanced structure — it's similar to a class in a way, which goes into object-oriented design.
And that's not something I want to get into in this series with you, because I just want to cover the basics. When you have those down, maybe in a future series we'll go over advanced data structures and object-oriented programming. For now we're going to keep it simple, and we'll keep using `using namespace std` in our code. That being said, let's move on to concatenation, which in my opinion is a simple topic, even though it sounds difficult. All concatenation is, basically, is the addition of strings, and I want to show you what I mean. If we write string firstName = "Tom" and string lastName = "Jones", then we can do something like string fullName = firstName + " " + lastName — let's add a space in there — and we can output fullName, and it will output Tom, space, Jones. Let's run it: as you can see, Tom Jones appears in the console. That being said, that's basically all there is to concatenation. Now, there are a few rules. If you mess around with it, you'll find you can't do things like output "Tom" + " " + "Jones" using only raw string literals — when you output that, you get an error. You need a variable in between your raw strings: you need to be adding a raw string to a variable, or two variables together, when you do concatenation. If you understand that, great; if you don't, practice this concatenation topic and it will become simple to understand — when you can use concatenation and when you can't. Just messing around, you'll get enough errors to say, oh, okay, I get what he's saying. For instance, if I want to output "Jimmy" plus lastName, I can do that — I get Jimmy Jones. But if I want to write "Jimmy" + "Jones" as two raw literals, I can't; that will throw an error.
So that being said, that's basically all there is to string concatenation. There are some built-in library functions you can use, but we'll get into built-in functions later in this series. For now, I just want you to mess around with concatenation — and I wanted to show you why we have `using namespace std` in our code. Thank you for watching, and I'll see you in the next video.

9. If Statements: Hello, everyone. Welcome to Practical C++, the beginner's course. My name is Zak, and in this tutorial we will be discussing if statements. Now, if statements are a very important part of programming, and of C++ programming, because you can think of an if statement as a way for a computer to make a decision based on certain conditions being met. If you think about a small weather app, there might be an if statement that says: if it is raining, then show a cloud on screen; but if it is sunny, then show the sun on screen. That's what an if statement is — it says, if this is true, then do this. I want to show you: we're going to write if (true) — true is a Boolean value, and we'll go over that in a second as well — and then output "this code is ran". Let's add an endl. When we run this, it says "this code is ran", because the condition inside the parentheses is true, and inside those parentheses is where you put your condition.
If we put false instead, this code will not run: when we run it, you will not see the output; it just returns 0. And just to touch more on true and false: if you remember, in one of the first tutorials on data types, we talked about Boolean data, and for that you type bool, because we're declaring a Boolean data type. I don't think we actually did an example of a Boolean data type — maybe we discussed it; I'd have to check — but Boolean data is another data type that is either true or false. So we could write bool var1 = true and bool var2 = false, and then we can actually put the variable in the condition: if (var1), which is holding the value true, and this code will run — "this code is ran", as you can see. So in this tutorial, along with if statements, we're also learning about Boolean variables, which are a very important part of C++ programming, because all of these if statements are focused on whether the condition inside the parentheses is true or false. That being said, you don't necessarily have to have Boolean data in there. We could do something like this: int num1 = 5, and then if (5 > 3), run this code — that's the greater-than sign. If we do that, it says "this code is ran". I hard-coded 5 in there, but you could put num1 as well: if num1 is greater than 3, it will say "this code is ran". By the same token, though, if you write if (5 < 3), which it isn't — 5 is not less than 3, so that returns false — this code will not run. If we run it, you can see the process returned 0 and the output statement was not executed. So that's pretty simple stuff: you can make an if statement run based on whether the condition inside is true or false. And we're going to go more in depth with that later in this section, when we make our own practical app that you could use — I might even make some kind of number-guessing game.
I haven't decided yet, but either way, we're going to really show how we can use these if statements to make nicely flowing code that makes decisions based on user input. That being said, what if we want to add another branch to this if statement? Basically saying: if num1 is less than 3, say "num1 is less than three"; but if num1 is greater than 3, say "num1 is greater than three". To do that, we write else if (num1 > 3), brackets, and output "num1 is greater than three" with an endl. To make it more interesting, let's say "Enter a number" and read that number into num1, and then what they enter determines which code runs. When we enter the number, it's held in num1: if num1 is less than 3, the first block runs, but else if num1 is greater than 3, the second block runs. Watch what happens when we run it: "Enter a number" — we'll say 7. 7 is greater than 3, so the second block should run, and the first block should not. When we hit Enter, it says "num1 is greater than three", and only that code ran. So to run multiple checks together based on one value, you use if and then else if. And if you want a default — if all of the above are false — you add else and then brackets, and you don't add a condition to the else statement. That basically says: if all of these return false, then do this no matter what. So just thinking about it, what would the default be here? We said if num1 is less than 3, do this; if num1 is greater than 3, do that. Well, otherwise, that would mean num1 is equal to 3, right? So in the else we can say "num1 is equal to three".
And to show you that, we run it and enter 3, and it says "num1 is equal to three". We didn't even have to write a condition, because the way we coded it says: if this is false and this is false, then do this — if all else fails, do this — and that's what happened. At that same token, you don't really even need this else statement; if you just wanted to do another else if, you could write else if (num1 == 3). This may seem confusing to you at first, but in an if statement, when checking whether something is equal, you need two equals signs. It will probably take a little practice, but that's just how C++ — and even Java — is written: you need two equals signs to check an equality condition inside an if statement. That's why there are two equals signs here rather than one, because a single one would actually cause an error. So it's saying: if num1 is less than 3, do this; else if num1 is greater than 3, do this; else if num1 is equal to 3, then say "num1 is equal to three". And that's the same thing as just writing else, because those are really the only three possible outcomes. But you can see, if you had a whole bunch of different conditions, how you might want an else statement at the end to handle a default — something like "you didn't enter a number", because that's probably what would happen. In fact, I think if we entered a string, it would actually put a 0 or a garbage value into num1. So let's see. Let's run it real quick: if we enter 3 again, this code runs — "num1 is equal to three".
But let's see what happens if we don't enter a number — let's try to get a different branch to run. If we run it and just type in a star or something, it says "num1 is less than three". The reason it says that, even though we entered a star, is that num1 is a garbage value right now: we entered a star where an integer value was expected inside num1, so it put a garbage value in there, which is probably actually 0 — it probably just defaulted to 0. We can actually check: at the end of everything, after all these if statements, let's output num1. If we type in something like "Zach", it says "num1 is less than three", and the reason is that the value for num1 just happened to be 0 — that was the value at the memory address for num1. So you can see how important it is that the user enters a number, because if they enter a string, the first branch of code runs, which may not be what you want to happen. So in a program like this it may be a good idea to put emphasis on entering a number. There are obviously other ways you could handle it — for instance, there are try and catch blocks — but that's all advanced material again, so we won't worry about it. When you get into more advanced programming, you will be writing try/catch clauses and catching the exceptions that get thrown whenever the user enters the wrong data. That being said, that's pretty much all there is to if statements. I do think it's a good idea to go look up the comparison operators you can use — for instance, besides greater-than there's also >=, which means greater than or equal to.
You can also do <=, less than or equal to. Basically: if num1 is less than or equal to 3, it says "num1 is less than three"; if num1 is greater than or equal to 3, it says "num1 is greater than three". Let's run that and see whether both blocks run. If we enter 3, only the first one runs — but both really could have run if we had just used if instead of else if, because by writing else if we're attaching it to the earlier if statements. I want you to play around with that yourself right now, and just look at all the different operators. Another good one to look at is not-equal-to, written as an exclamation point and an equals sign (!=). That means not equal to, so it says: if num1 is not equal to 3, run this code. So before moving on to the next tutorial, I really want you to practice these if statements and watch where your code runs. You can use different data types too: for instance, if you had a string variable called name holding "Jim", you could run some code when name equals "Jim" — say "Welcome, Jim" or something like that. Just mess around and practice with these if statements. In the next tutorial, we're going to look at an alternative to if statements called switch statements, and you'll be able to decide on your own which you like using more in your code, and what the right situation is to use each one. I think you'll find it pretty interesting, so stay tuned.

10. Switch Statements: Hello, everyone. Welcome to Practical C++ programming, the beginner course. My name is Zak.
And in this tutorial, we will be discussing switch statements. Now, as I said in the previous tutorial, a switch statement is basically just an alternative to an if statement, but they are used in different scenarios. I'm going to go ahead and give you an example of what a switch statement looks like, and then discuss how it works. Let me type everything out — here is where the switch statement starts; oops, you put everything in brackets; let me give us some room — and then you put your cases in. So here is our switch statement, its basic functionality, and I typed it out because it's going to be easier for me to explain it to you like this. We have a variable called grade, and our grade is 'B'. Below that, we have our switch statement defined inside these brackets — everything in these brackets. To define a switch statement, you write the word switch, and then in parentheses next to it you put the variable you are analyzing; in this case, it's grade, so we put grade there. Then, inside your brackets, you put your cases: you write case, and then what you're comparing grade to. In this case, we're comparing it to different letter grades. The first one is case 'A', and here you put the code you want for case A: "You made a 90 or above". Then in case 'B' you do the same thing: you put your code here, "You made an 80 or above". You can repeat that for each of these, and we're going to go ahead and do it here so you get a full visualization of how the switch statement works. In case 'F': "You failed". So basically what's happening is this code runs, we have a grade of 'B', and then we look in the switch statement. We tell the switch statement to analyze grade, which is 'B'.
The variable that we're analyzing: is it an 'A'? Well, no, it's not. Is it a 'B'? Well, yes, it is, so we run this code. Is it a 'C'? No. Is it an 'F'? No. So this should be the only code that gets run — but I want to show you something before I fix it. Let's go ahead and run this program, and you can see it says "You made an 80 or above", "You made a 70 or above", "You failed". Well, that's interesting, because we made a B, and it skipped everything above B, but once it got to B it basically ran all the code below B as well. That's because in a switch statement you need to add a break: after each case's code, you write break, at the end of each case, to tell it to leave the switch statement. Switch statements have what I like to call a waterfall effect, meaning if you don't put your break right here to break out of the switch statement, then once this case matches, execution waterfalls down into the rest of the switch statement's code. So if we take out this break statement right here, it should run the B code and then the C code before it breaks. Let's check it out: as you can see, it says "You made an 80 or above", "You made a 70 or above", because it didn't break from the switch statement until it got to the break further down. So what we need to change, of course, is to add the break back, and then you'll see it only says "You made an 80 or above", because our grade is 'B'. And just to show you how we can take this further: let's prompt "Enter a letter grade", then use cin to hold our grade, and watch how this switch statement handles it. We enter a letter grade — we'll enter F — and it says "You failed", because basically it put the grade we entered into the switch statement, analyzed it, and looked through the cases: is it an A? No. Is it a B? No.
Is it a 'C'? No. Is it an 'F'? Yes — "You failed". And that's how a switch statement works. Okay, let's run it one more time: we'll enter an A, and it says "You made a 90 or above". So that's the basic functionality of a switch statement, and you can see how it's very similar to an if statement in checking which condition is met. And like the else in an if statement, a switch statement also has something similar, which is called default. So if we wanted to take out this F case, we could just write default: "You failed", which is the same thing as saying: if none of these matched, then obviously he failed, so go to the default part of the switch statement, which is "You failed". If we run that and enter an F, or even enter a D, it says "You failed", because if we enter a D, obviously A, B, and C aren't a D, so it goes to the default and says "You failed". But notice that the same code runs for anything that isn't A, B, or C. So if you want to restrict the user to entering only the correct letter grades, what you'd probably want to do is keep a case 'F' that outputs "You failed" with an endl, and then in the default say something along the lines of "You entered an invalid letter grade". Now when you run the code, if it's not A, B, C, or F — if you enter something like R — it says "You entered an invalid letter grade", because it goes to the default. So that's the basic functionality of switch statements, and I will see you in the next tutorial.

11. Practical Program #1: Hello. Welcome to Practical C++ programming. My name is Zak. In this tutorial, we're going to be taking a look at the first practical program that we're going to make together, and it's just going to be a simple calculator app.
The main focus here is that I want you to understand how we're going to structure this program and use the concepts we've already gone over to make it work the way we want. That being said, let's go ahead and begin. The way I want to structure this is: we're going to make a calculator that lets the user decide at the very beginning whether they want to do addition, subtraction, multiplication, or division, and to do that we're going to use a switch statement. So let's write switch — and, sorry about that guys, the rest of my brackets got deleted; there we go — just like that, and make sure you keep your return statement. In this switch statement goes the variable that we're checking. The way I want it to work is the program opens up and shows the numbers 1 through 4: 1 for addition, 2 for subtraction, 3 for multiplication, and 4 for division. So to do that, we're going to declare a variable: int choice, and we'll just leave it like that, uninitialized. Then we cout — actually, we want to let them know their options beforehand, so we'll output: "1: Addition", endl, "2: Subtraction", endl, "3: Multiplication", endl, and "4: Division", endl. This is what they're going to see on screen, and they'll have to make a choice about what they want to use. Then at the very end — let's give it a little more space — we'll say "Enter a choice", and we're going to hold that with a cin statement into our choice variable. If you need to, push pause and take all of this in, because this is all stuff we've covered in the previous tutorials, and it should be fairly straightforward to you at this point.
So at this point we're holding the number the user selected in choice. In the switch parentheses we put the variable we're analyzing — choice — and then write the cases: case 1, case 2, case 3, case 4, plus a default. In the default we'll print something like "Exiting: you entered an invalid number" — because if they don't enter 1, 2, 3, or 4, we print that message, fall straight through to return 0, and the program ends. That's how we'll handle bad input. Now, in each of these cases, so we don't forget, let's add brackets right away so everything is formatted and easy to see — case 3's code goes here, case 4's goes here — and let's also add our break statements up front. These break statements are very important for the way we're structuring this, because you don't want, say, multiplication and division running at the same time. It's good practice with switch statements to add the necessary breaks first, so you don't forget them. Pause if you need to and make sure you have everything right, because this switch is the skeleton of our calculator. Now that we have the choice read in, the rest is mostly the same in every case. In case 1 we'll print "Enter number one" — but first, up at the top:
Let's add our new variables. You may see people do this differently, but the convention I learned in school was to always declare your variables at the top of your main function — or at the top of whatever function you're in. So that's what we'll do; get in the habit of it. We have choice, then our switch statement below it. We're going to use doubles, in case the user wants floating-point values in their calculations: double numberOne, numberTwo;. Notice the comma — this is another way to declare variables. It means the same as writing double numberOne; and double numberTwo; on separate lines: two variables of the data type double, not initialized to anything. You could just as well write double numberTwo on its own line; it's a matter of preference. Now, the code in each case is pretty repetitive: cout << "Enter number one", then cin >> numberOne; a little spacing, then cout << "Enter number two" and cin >> numberTwo. Then we output the result — we'll print "Result = " in front, so the numbers don't just run off the edge of the screen —
and since case 1 is addition, we print numberOne + numberTwo, and then break. The code for the other cases is nearly identical, so you can copy and paste it — just fix your formatting afterward. Case 2 is subtraction, so the only thing you change is the operator: plus becomes minus. Case 3 is multiplication, so change it to the multiply sign; case 4 is division, so change it to the division sign. And there you go — our program should run just how we want, with the switch set up the way it needs to be. Save it, hit Build and Run, and let's see what happens. On screen we have our choices: addition, subtraction, multiplication, division. Let's pick subtraction: enter number one — 5; enter number two — 3; result equals 2, and the process ends. Subtraction works. Now let's try division, choice 4, because I want to show you something we may not have covered in arithmetic. For number one let's do 9, and for number two, 4. Nine divided by four obviously leaves a remainder — with straight integer division you don't get the fraction; it would just give you 2, because 4 goes into 9 two times. But here the result is 2.25, and that's an interesting point if you've practiced your arithmetic: we're getting a decimal because we're doing double-by-double division. Since both operands are doubles, the result is a double. But let's change these two to integers for a second.
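The behavior in question, side by side. These helper names are hypothetical — the lesson demonstrates this interactively in the calculator — but the two functions isolate exactly the difference being shown.

```cpp
// Integer division discards the fraction: 9 / 4 is 2.
int intDivide(int a, int b) {
    return a / b;
}

// Double division keeps it: 9.0 / 4.0 is 2.25.
double doubleDivide(double a, double b) {
    return a / b;
}
```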
We'll do that same problem — 9 divided by 4 — and this time we get 2, even though the real answer is 2.25. That's because it's whole-number division, with no room for a fractional result. If you want the fraction, the simple fix is to change the variables back to double. The alternative is to leave the inputs as integers and store the answer in a separate double result variable — but be careful here: double result = numberOne / numberTwo; with two int operands still prints 2, because the truncation happens before the assignment. To actually get 2.25, at least one operand has to be a double, for example by casting: result = (double)numberOne / numberTwo;. So that was our first practical program. I wanted to show you how we take everything we've learned through this course and apply it to practical situations. We used our switch statement here; one thing we didn't use was an if statement — but by the same token, you could swap the switch for an if/else chain: if choice equals 1, do this; else if choice equals 2, do this; and so on. So I challenge you to try this program both ways — with the switch statement, and then again with if statements. Thank you for watching; in the next section we move on to more intermediate programming topics.

12. While and Do-While Loops: Hello, and welcome to Practical C++ Programming, the beginner course. My name is Zak, and in this section we're going to discuss slightly more intermediate topics, starting with loops — specifically while and do-while loops.
This should be a pretty short tutorial, because we're not going to go too deep — we'll just cover how to use them, and once we get further into this section you'll see how often you'll actually use them in real situations. Let's define a while loop. You type the word while, then your parentheses for the condition, then your brackets — and anything inside the brackets runs as long as the condition is true. To be precise: the condition is checked at the beginning of the loop, the code runs, then the condition is checked again, and if it's still true, the code runs again. The best way to show this is an example. We'll write int run = 10;. Then inside the loop: cout << run << endl; followed by run = run - 1;. For the condition we'll write while (run >= 0) — remember, that operator means greater than or equal to zero. So this says: while run is greater than or equal to zero, do this. And since the end of each pass sets run equal to run minus one, the first time through run is 10, then 9, and so on down to 0, at which point the loop quits. Run it — it flashes by quickly, but it prints 10, 9, all the way down to 0, exactly what we want. One more thing I want to show you, which we may not have covered in the arithmetic section, because there really are a lot of arithmetic shortcuts in C++.
They're very interesting, and one of them is useful right here in our while loop. Where we wrote run = run - 1, C++ gives us a shorter way: run -= 1;. (I'll admit I typed it backwards on the first try — it's minus-equals, not equals-minus.) run -= 1 means exactly run = run - 1, just as run += 1 means run = run + 1. Run it again and you get the same result. Now, the dangerous thing about loops is that you can get caught in an infinite loop. If we'd written run += 1 instead — run = run + 1 — the variable would never reach zero, and the loop would never end. Let's hit Build and Run so you can see what happens. On screen the number just climbs, incredibly fast — you can see how quickly the processor churns through the loop; we've run this code over 100,000 times almost instantly. If you get caught in this situation on Windows, press Ctrl+C in the console — that shuts the program down. So remember: if you land in an infinite loop, Ctrl+C gets you out.
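For contrast with the runaway version, here is the terminating countdown with the -= shorthand, wrapped in a function (my own framing) so the sequence it would print is explicit.

```cpp
#include <vector>

// The lesson's countdown: run goes from `start` down to 0.
// Returns the values the loop would print, in order.
std::vector<int> countdown(int start) {
    std::vector<int> printed;
    int run = start;
    while (run >= 0) {
        printed.push_back(run); // the lesson does: cout << run << endl;
        run -= 1;               // shorthand for run = run - 1
    }
    return printed;
}
```

Because the body decreases run toward the condition's boundary, the loop is guaranteed to end; flipping -= to += is exactly the infinite-loop mistake described above.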
That's something you need to watch for: think your code through in your head before you run it, so you don't get caught in an infinite loop and have your program hang. You can see how different one character makes the code — -= versus += — between a loop that terminates and one that never does. With -=, once run drops below zero the condition is no longer true, the loop exits, and we go on to return 0. So that is a while loop. Now let me introduce the do-while loop. A do-while basically says: do everything in the brackets, and then check the condition at the end — do { ... } while (run >= 0); — note the slightly different syntax: the keyword do, then your brackets, then while with the condition, and then a semicolon. So what's the difference between checking the condition at the end rather than the beginning? It's this: the code always runs at least once. If we set run to -5, it's obviously not greater than or equal to zero — but when we run this, the body still executes one time and prints -5. That's the difference between a while and a do-while. You might ask, "when would I use that?" As we get into more practical examples, you'll see while and do-while used interchangeably depending on the situation — if you want your code to run at least once no matter what, a do-while is more appropriate than a while. But just so you can see it, we can still get the same result out of this code as with a while loop.
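A typical place the run-at-least-once behavior earns its keep is input validation — you always have to prompt at least once before you can check anything. This example is my own, not from the lesson; the stream parameters make it easy to exercise.

```cpp
#include <iostream>
#include <sstream>

// Keeps prompting until the user enters a choice in the range 1-4.
// Assumes the stream eventually supplies a valid number.
int readValidChoice(std::istream& in, std::ostream& out) {
    int choice = 0;
    do {
        out << "Enter a choice (1-4): ";
        in >> choice;
    } while (choice < 1 || choice > 4);  // checked AFTER each attempt
    return choice;
}
```

With a plain while loop you would need a duplicate prompt before the loop; do-while avoids that.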
We'll set run back to 10, leave the condition the same, and use run -= 1 — and we get the same result as before: 10 all the way down to 0. So that's an introduction to loops, with a focus on while and do-while loops; in the coming tutorials we'll get into for loops and more fun stuff like that, so stay tuned.

13. For Loops: Hello, and welcome to Practical C++ Programming, the beginner course. My name is Zak, and in this tutorial we will be discussing for loops. In the last tutorial we covered while loops, and I have to say, for loops are quite different — you'll see why in a minute. You set a for loop up much the same way: you write for, then the parentheses, then the brackets, just as you would with a while loop or an if statement. The part that confuses many people is what goes inside the for loop's parentheses and how it works, so let me explain it. First I'll set up a variable: int value = 0; — we just set value equal to zero. Then, in the for loop's parentheses, we declare an integer called index, set it equal to zero, and put a semicolon. Now stick with me for a second, because I know right now you're thinking: "what? You're declaring something inside the condition?" Well, this is not the whole condition — it's only a third of it. After int index = 0; we write index < 10; and then index++. I want you to sit here and breathe this in for a second, because I know it looks complicated, especially if this is your first time ever looking at a for loop.
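Assembled in one place, the header just spelled out — the lesson builds it piece by piece, and wrapping it in a function (my own framing) makes the sequence it produces visible.

```cpp
#include <vector>

// The header from the lesson, run to completion: index takes 0, 1, ..., 9.
// Returns each value the loop body would see.
std::vector<int> countToTen() {
    std::vector<int> seen;
    for (int index = 0; index < 10; index++) {
        seen.push_back(index); // the lesson prints each value with cout
    }
    return seen;
}
```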
So here's what's happening in this for loop header. First, we declare index as a variable and set it equal to zero. The middle part, index < 10, is the condition proper — what I'd call the condition, as distinct from the other two parts. It says: keep running this loop as long as index is less than 10. The third part is what I like to call the increment: it's how much the tested variable changes each time the loop runs. I mentioned in the last tutorial that run -= 1 is the same as run = run - 1; well, index++ is the same as index = index + 1 — it's just the easier way to write it. (Likewise, index-- is the same as index -= 1, which is the same as index = index - 1.) So keep that in mind: index++ increments index by one every time the code inside the loop runs. For the body, we'll write cout << value << endl; and add 5 to value each time through. Run it, and you'll see 0, 5, 10, all the way up to 45 — because the body runs until index reaches 10, with index incremented by one on every pass. Once index equals 10, index is no longer less than 10, the for loop ends, we drop down to return 0, and you get "process returned 0." To show you a little more of what's happening, let me actually take out value.
We'll output index instead, so you can actually see what happens to it throughout the loop. Hit Run — let me close the old console first — and you can see what happens to index: it goes from 0 all the way up to 9, and then the loop ends. I want you to practice with for loops: try different counting exercises, cycle through numbers, and even try changing the operator — say, index > 10 — and see how that changes things. (If we ran that right now, the body wouldn't execute at all; index starts at zero and is never greater than 10, so the program just returns.) Also try this: declare index before the loop — int index; above it — and in the header just write index = 0 instead of int index = 0. There are a few different ways you can set up a for loop. And I know that just counting through loops and adding numbers doesn't seem very practical yet, but I promise, by the end of this section you will see very useful, practical examples of these for loops. So stay tuned for the next tutorial. Thank you.

14. Data Structures - Arrays: Hello, and welcome to Practical C++ Programming. My name is Zak, and in this tutorial we will be discussing arrays — my introduction to data structures, because I see the array as the simplest data structure we can dig into without getting too advanced.
I really wanted to introduce arrays here because you can use for loops and while loops to populate them, and we'll get to that toward the end of this tutorial. But to start off: what is an array? The best way I can explain it is that it's a list. The way I was taught was to think about going to the grocery store, so let's do that together. We'll declare an array of strings and call it groceryList — this is our grocery list, and I want you to picture it that way, because it gives you a really good visualization. We're going to the store and we need to buy several things — eggs, milk, bread — and we put them all on the list. To do that, we first need to know how many items will be on the list; with an array, you state that with two brackets after the name — straight brackets, not curly ones. Inside those brackets goes a constant value. Variables are not allowed inside the brackets during the declaration of an array, and that's very important to remember: you have to know how many items you're going to populate your array with from the start. That said, let's assume we'll have exactly three items, so we put the value 3 in the brackets. The next part is putting the items in the list: write = and then curly brackets, and inside the curly brackets go the elements of the list. Since we declared groceryList with the string data type, the elements have to be strings. The first element is "eggs"; you separate each element with a comma, so then "milk", then "bread"; and you end the declaration with a semicolon.
So that's your first declaration of an array in C++, and it's fairly simple. You just have to remember that the constant number of items goes in the square brackets, and each item — each element, so to speak — goes in the curly brackets. You may be wondering when you'd ever use an array; the answer is all the time, and we'll get to that. But first I want to talk about that value 3. We wrote a literal 3, because we can't put a variable there: if we wrote int index = 3;, we would not be allowed to use index in the brackets during the declaration. I thought Code::Blocks sometimes let you get away with it, but no — I just tried to run it and got an error, precisely because we put a variable there. However, while we're on this topic, let me introduce constants. A constant is different from a variable in that it never changes — and all you have to do is add the const keyword (c-o-n-s-t): const int index = 3;. Run it, and now index is a constant value, and the program works. Keep in mind that once you add const, you are not allowed to change index later: if I try index = 2; or index++;, I immediately get an error, because values with the const keyword are not supposed to change anywhere in your program. The other thing to keep in mind is that it's a common C++ convention to write constants in ALL CAPITAL LETTERS, so that when you look through a program you automatically know what's a constant and what isn't.
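Putting the pieces together — an all-caps constant for the size plus the declaration itself. The small accessor function is my own wrapping so the declaration can be exercised; the lesson writes this directly in main.

```cpp
#include <string>

// All-caps name for the constant, per the convention just described.
const int SIZE = 3;

// Returns one item from the lesson's grocery list by position.
std::string groceryItem(int index) {
    std::string groceryList[SIZE] = {"eggs", "milk", "bread"};
    return groceryList[index];
}
```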
That's how you'll see it in most C++ programs — though index isn't really a good name here; something like SIZE is a better word for an array-size constant, and that's usually what you'll see in C++ programs that deal with arrays. Now, we covered for loops in the last tutorial, and I deliberately made arrays the very next topic so for loops would be fresh in your mind — because I want to show you how we use for loops and arrays together. Let's make a for loop: for (int index = 0; index < SIZE; index++). Think about that for a minute. We declare a new variable called index and set it to zero. The condition says: keep running this loop while index is less than SIZE — and SIZE we set to 3, which is also the size of our string array. And we increment index by one each time through. If we just cout << index inside the body, this loop should run exactly three times — and if you run it, that's exactly what you see: 0, 1, 2. Three passes, with index starting at zero. And that starting point brings us to something very important about arrays — if you don't understand this concept, arrays will get very confusing — which is how you access an array. To access groceryList, you write groceryList followed by square brackets, and inside the brackets goes the number of the element you want. So let's say we want to output eggs — no, let's say we want to output milk.
If we want to output milk, you'd think you'd put 2 in the brackets, right — it's the second item? Wrong, because the thing with computers — and arrays especially — is that they start counting at zero. To output the word "milk" you write groceryList[1]: eggs is index 0, milk is index 1, and bread is index 2. This is why you'll so often see for loops start at zero: for loops are constantly used with arrays (and vectors), and those all count from zero — which is why, most of the time, the loop variable in your programs is initialized to 0. That also means we can put index itself in the brackets: the first pass prints element 0, the next pass element 1, then element 2 — eggs, milk, then bread. And going back to a topic from the beginning: we said only constants are allowed in those brackets, yet here we have a variable. The constant rule only applies when you're declaring the array. When you're actually accessing it, variables are fine, as we're doing right here. Run the program and we get eggs, milk, bread — it prints the whole list for us. So there's a very practical example of using a for loop to iterate through a string array, which we declared as a grocery list. When we get further into this section, we're really going to take this to the next level, and I think you're going to enjoy it. Stay tuned, and thank you for watching.

15. File Output: Hello, and welcome to Practical C++ Programming, the beginner course. My name is Zak, and in this tutorial we will be discussing file output.
In earlier sections we covered basic console output, and I want to stress: don't get too concerned about file output — it's actually a lot simpler than it's going to seem at first. There will be a lot of new syntax, but if you look at it and practice it, you'll see how straightforward it really is. That said, the first thing we have to do is include a new library. We've been using #include <iostream>, which stands for input/output stream, and we still need that for the program to work — but now we add #include <fstream>, which stands for file stream. So we have the input/output stream, and now the file stream. The next step is to declare an output file stream — the handle we'll write through. We write ofstream, which stands for output file stream, then give it a name; we'll just call it outputFile. After the name, add parentheses and a semicolon, and inside the parentheses goes the name of the output file you want to use. If a file by that name has already been made in the current directory, you just enter that file name. If it's in a different directory, you have to specify the full path — something like "C:\\Users\\..." for the C drive — and note that you need double backslashes when writing paths in strings. Without going too deep (this really deserves its own tutorial, and I may make one): the first backslash in a string is treated as an escape character, so whenever you specify these paths, you need two backslashes for one to actually be read.
For this example, though, I'm just going to create a new file in the current directory, called names.txt — so the parentheses read ("names.txt"). Run it, and everything should come back clean: return 0, no errors. names.txt is the name of the output file that just got created — if we go to File > Open, there it is in the project directory, created for us when we ran the program. Now that the file exists, we can start writing to it — but first, it's always good practice to add a branching statement in case something goes wrong creating or finding the file. What I like to do is write if (!outputFile) — "if not outputFile" — which means: if the output stream evaluates to false, meaning the file couldn't be created or found, run this block. So if names.txt could not be created nor found, we'll output "File could not be found" and then return -5. The value can be anything distinctive, really — negative five, negative seven, negative ten — a number you can easily associate with an error. That way, when I run the program and it says "process returned -5," I know the file was not found. Since ours says "process returned 0," we know names.txt was created. So let's move on and actually write to this file.
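The create-and-guard sequence just described, condensed into a sketch of how it would sit at the top of main (the function wrapper is mine).

```cpp
#include <fstream>
#include <iostream>

// Opens names.txt for writing; returns the lesson's error code if the
// stream is in a bad state, and 0 on success.
int openForWriting() {
    std::ofstream outputFile("names.txt");
    if (!outputFile) {   // "if not outputFile": stream failed to open
        std::cout << "File could not be found" << std::endl;
        return -5;       // distinctive value, easy to spot in "process returned"
    }
    return 0;
}
```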
To do that, let's create a string: string name = "Zak"; — a deliberately easy example. When we make our practical program at the end of this section, we'll look at file output in more depth, but for now I just want to show you how it's performed in the simplest way I can. To write the name to the file, we use our output stream's handle — outputFile — with the same << output operators we've been using. Just as we write cout << to output to the screen, we write outputFile << name; to output to the file. Add return 0, run it, and everything finishes with no errors. Open names.txt and you can see "Zak" was written to the file. So that is a basic introduction to file output; in the next tutorial we'll discuss file input, which is actually a little more complicated. Stay tuned, and thank you for watching.

16. File Input: Hello everyone, and welcome to Practical C++ Programming. My name is Zak, and in this tutorial we will be discussing file input. As with file output, the first thing we need to do is include the right library — the same file stream library, fstream. We also need to declare a file handle, this time using the ifstream declaration: for output streams we used ofstream; for input streams we use ifstream, for input file stream. The other difference with input is that we need a file that already exists to read from — you don't want the file to be empty;
You want to have a file that has data in it, and that's what we're going to do: we'll use names.txt from the last tutorial. I'll go ahead and open it real quick so we can add some data. We'll use names such as Zack, Troy, Sam, Jim, Mark, Kristen, Margaret, Taylor, Jake, Sherry, and Francis — we're basically making a name list. Let's say we want to read these names in from this file and store them in a variable. To do that, we need to declare a file handle and give it the name of the file we're reading from, which is names.txt. So we'll say inputFile, reading from names.txt. And as with the output file, we'll say if (!inputFile), cout << "File not found" << endl, and return a value like negative six so we'll know it wasn't found. Let's go ahead and run it — we get process returned zero, so names.txt was found, as it should be, since it has already been created with all these names in it. Remember, this is just a cautionary thing: if your file for some reason comes up missing, the program will return negative six and you'll know it's gone. Again, this is a practical way to code your program to look for errors and problems. So, moving on, let's create something to store these names in. If you think about the things we've gone over, probably the perfect data structure to use is an array: we can make one array and store all the names in that array. One thing about an array, though, is this:
You could have an unknown number of names here, and obviously, if that were the case, you would probably want to use a different data structure. But since this is a beginner's course, we're going to go ahead and use an array and assume that we know how many names are in the file. If I teach an advanced course — depending on how well this one does — we'll definitely go into more advanced data structures and a better way to store this data when the number of names is not known. But for now we'll assume we know how many names are on this list, so let's count them up: 1, 2, 3, and so on up to 11. There are 11 names, so let's declare a constant, const int SIZE = 11, and use that for the size of our list. Then let's make the array. Like I said, it's kind of a convention to declare your arrays at the beginning of the function, so that's what we'll do. It's going to hold string values, because these are all alphanumeric, multi-character values. So we'll use a string and say namesList, with SIZE for the length, and declare it with an equals sign and braces containing one empty string. If we run that, it should not give us an error, and it doesn't. The reason is that Code::Blocks and most IDEs will see this and, instead of making you write an empty string 11 times to initialize this array, if you just put one empty string in there, the compiler assumes all the default values should be set to an empty string, which is what we want. We want the strings to start out empty so that we can put new ones in their place later.
So we're basically initializing this array to a bunch of empty strings. That said, let's get to where we can read in these file names. The best way to do it is with a pre-read and a post-read around a while loop, and you can play with this and try to figure out a better way if you want, but when you do, you're going to realize that the pre-read and post-read really is the best way to go about it. Often, if you don't use a pre-read and a post-read, you'll find that either you read the last name twice or you don't read the first name at all. That's why I like this strategy for reading from text files, and I'll show you exactly what I mean in a second. Beginning with the pre-read: we'll go ahead and comment it "pre-read". And if you didn't know — I know we haven't discussed this yet in any of the other tutorials — to comment code, you just use a double forward slash. If I write two forward slashes, I can write whatever I want after them and it won't affect the code. To keep track of what you're doing, it's a good idea to comment your code, especially when you get to pre-reads and post-reads, because it makes things easier to read when you come back to them. Starting with the pre-read, you use the file handle, which is inputFile, and then the input stream operator, which is greater-than greater-than (>>), and then you store the string taken from names.txt — sorry, I got my words twisted there for a second — you store the stream taken from this file into a variable declared here. Now, it's not advised to put this straight into an array.
So what we're going to do is declare string tempName and just leave it like that; we're going to put the data into a variable called tempName. The first time it reads, it's going to read "Zack" and store it in tempName. Now let's make the while loop — this will all make sense after we're done coding it, and you'll see why. We say while (!inputFile.eof()), and this .eof() is a function — we'll go over functions more in the next tutorial — that stands for "end of file". So basically this condition says: while the input file is not at the end of its file. The cursor starts here, and as we read through, the cursor moves down through this file as the while loop continues, until it gets to the end of "Francis". That's considered the end of file, because there's no more text in the file, and as long as we're not at the end of file, this while loop keeps looping. That's why I like to use it for my loop. Inside, at the end of each pass, comment "post-read" and do the same thing as the pre-read: read into tempName. This pre-read, this post-read, and this while loop are your basic setup for file input. I know right now you might be saying that's really complicated and doesn't make sense, but this is the best way to receive input from a file, and you'll see why. I want you to play with it and see if you can figure out a better way to do it, but I think once you've played with it for a while, you'll realize this is definitely the cleanest way to read text from an input file. So let's continue. You always want the post-read to be the last thing in your while loop, and your pre-read to be the last thing before your while loop.
So you never want anything between your pre-read and the start of your while loop, and you never want anything between your post-read and the end of your while loop. That's just a golden rule for file input. But any data processing you want to do can go in between, and that's exactly what we're going to do. Our array was namesList, so let's declare an integer, int index = 0, and say namesList[index] = tempName, starting at the first slot, and then index++. What that's going to do is walk through each name in this file: it starts with "Zack" at index zero, and namesList stores "Zack" because "Zack" is currently held in tempName; then it adds one to index, does the post-read, goes back to the top of the while loop, and puts the next name on the list, "Troy", into our array. And if you don't believe me, we'll hit run: you get no errors and nothing visibly happened, since as of right now there's no output, but I promise you it just populated the whole namesList array with the names in this file. To prove it, we'll use a for loop outside of this while loop: for (int i = 0; i < SIZE; i++), printing namesList[i] followed by endl, to prove that namesList is populated with the names in our names.txt file. If we run this now, it outputs our array — and right now it just says "Francis". So let's take a look at what went wrong there; something's gone wrong, because all we have is "Francis" here.
I believe it has something to do with the way we declared this array with the empty string, so let's see if taking that out fixes it real quick. Hit run — and it still says "Francis". So we're having an issue with our array handling, because I know for a fact we're getting the input file and storing it in tempName, and we're using the index to — there's your problem right there. Obviously, if index is declared at the beginning of the while loop and we're setting it to zero there, then each time the while loop runs, it sets index back to zero. So what we need to do is take this int index = 0 out and put it outside of our while loop; that way it's only set to zero once. Now when we run our program, we get all the names in our list. It was a minor mistake, but it completely changed the output of the program, so you really have to look out for that stuff. If you didn't quite catch the mistake: like I said, we had index = 0 at the top of our while loop, so every time the loop ran, it was setting index back to zero, which is why it has to be outside the while loop. I think you should definitely go through this program several times, because when I first started C++, I found file input to be a pretty complicated topic. So go through this program and this tutorial several times, practice the pre-read and the post-read, and I promise you, once you get it down, it will make a whole lot of sense and be very simple to you. So thank you for watching, and stay tuned. 17. Advanced Input and Output Manipulation: Hello. Welcome to Practical C++ Programming, the beginner course. My name is Zak. In this tutorial, we will be discussing advanced input and output manipulation, and to do that, I've already got the code that we used in the last tutorial on file input.
If you remember, we're just grabbing some names from this names.txt file and storing them in a variable when we read them in on our pre-read; then, as long as we haven't reached the end of file, we take this temp variable and copy it into our array at an index starting from zero, adding one each time as we go. The output was just this names list. But what I want to discuss are some tips and tricks for advanced file input/output manipulation — techniques that will help you whenever you get into funny situations. The first technique I want to talk about is what to do if you get a file with something like this at the top. This is a header, and many files have headers. If we run this program right now — we'd have to change the size to 12 first — it wouldn't crash, but let me make sure I have this right and save it first, because I haven't saved it. Now it's saved, and when we run it again, the header shows up at the top of our output: we read it in along with the names. But what if we don't want that? We don't want to populate our array with this header; we just want to skip it. Well, that's what we're going to do, using a function called ignore. Let's change the size back to 11, go down here before our pre-read, and specify with a function that we want to ignore that header. To do that, we access the function through our input stream file handle: inputFile.ignore. This function takes two parameters. The first is the number of characters we want to ignore, which is 255, and the reason it's 255 is that in a C++ console application there are 255 characters on each line of the console window. So if we specify 255, it will ignore this whole line, and the cursor will be moved to right here,
right before "Zack". The other parameter specifies what's known as a delimiter: a character that says, if you reach this character, quit ignoring and start reading. Here it's the newline character, and later in this tutorial we'll go over it in depth, because it's also an advanced formatting option I want to discuss with you. But this is the newline delimiter, which basically means: if you get to the end of this line, you'll reach a newline character, and I want you to quit ignoring there, because when you reach the newline, you end up right here, right before "Zack". That's what the delimiter does. So now, if we run this program again with the ignore function in, we only pick up the names we want, and the header is skipped, just like we wanted. Moving on — and go ahead and memorize this function, because you'll more than likely use it a lot with file input — we're going to discuss these special formatting characters. We'll do that at the bottom of our main function. We'll output "This is a newline character" and then put several \n's in — three newline characters: backslash-n, backslash-n, backslash-n. Essentially, putting these backslash-n's directly in the stream is the same thing as calling endl, but we can just use the formatting character and accomplish the same thing. If we run it, you'll see exactly what I mean: we get three new lines right below our output, because we added those newline characters. Let's move on to another example of these special characters: the tab character. We'll say "This is a tab character".
Then we'll enter four tab characters — backslash-t's — then the word "tab", and then a couple of backslash-n's to give us some space; remember, those are the same thing as newline characters. These are our tab characters, and you'll see what I mean in a second when we run it: you can see all the space between "This is a tab character" and "tab", and that's where the \t's come into play. So that's another special formatting option you can use. The other one I want to show you is quotation marks. We'll say "This is a quote", then backslash-quotation, the word "quote", backslash-quotation, and then a couple of newline characters. What this backslash-quotation does is escape the quotation mark inside the string, because if we take out the backslash, it breaks our string literal; so we have to have it in there, and it's just another special formatting option. When we run it, you'll see we can actually have quotation marks in our output: it says This is a quote and then "quote" in quotation marks. These escape characters will be useful to you in the future, and there are plenty more of them; I suggest you look some of them up and see what you can do with your output. OK, moving on, I want to cover a couple more things — one more about file input before we continue. Let's go into names.txt: what if we had something like "Troy Hodges" here, with a last name? If we change the size to 12, I'll show you what happens when we run this. Oops — we need to save it first. OK, now it's saved; let's run it one more time. Instead of saying "Troy Hodges", it says "Troy" and then "Hodges" on the next line.
That's not what we want; we want the whole name on the same line. What's happening is that in a file, as soon as the scanner gets to a whitespace character, it assumes that's the end of what we're reading and puts it into our temporary variable. So what we need is a function that will read this whole line and put it into a single variable, and that's what we're going to use. Just like we used inputFile.ignore, we're going to use another function, called getline, in our pre-read and post-read. With the getline function, you just type getline, specify the input stream you're using — inputFile — and then specify the variable you want to hold the line in, which we're calling temp. We'll do that for both, saying getline(inputFile, temp). That changes our pre-read and post-read so they read in the whole line instead of a single whitespace-delimited string, and when we run it, you'll see the difference: now we get "Troy Hodges" on one line, whereas before it was separated into two separate values. Now the whole value is held in one index of the array. So that's the getline function. The last thing I want to show you is what's called the I/O manipulation library — the input/output manipulation library. If you include it with #include <iomanip>, you can do some really cool things with output, and I'll show you what I mean. We'll come down here and say cout << left, which specifies left alignment, then setprecision(2), and then fixed. What this does is specify left alignment and set the precision to 2, which means decimal values — or any value, for that matter — will only hold two significant places.
So if you have the number 200, it would really just look like 20, because it wouldn't hold that other zero. What fixed does is say: take this setprecision and apply it only to the digits to the right of the decimal point. So now if you have 200, it will hold the whole number, 200.00; or if you have 200.134, it will hold 200.13. Let me show you what I mean. We'll make a double value, 21.792 — we've got to give it a name, so we'll call it doubleValue — and then we'll just output it: cout, a couple of newlines, doubleValue. When we do that, you'll see that we only get 21.79 and not 21.792, and that's because we used this I/O manipulation technique to set the precision to 2 after the decimal point. One last thing I want to show you is what's called setw, the set-width manipulator. We'll make another value, int integerValue = 227. Then we'll say cout — give us a little bit of space — setw(25), which is 25 characters, then doubleValue, then setw(25) again, then integerValue, and a newline. What this does is set the width to 25 characters for each value of output, and you'll see what I mean when we run it: we get 21.79, a width of 25 characters, and then 227. If you printed something else after it, you'd have another width of 25 in between, because we specified to put another one right there. So I suggest you play around with these advanced output and input manipulation techniques, and you'll learn you can do some really cool things. Thank you for watching, and stay tuned. 18. Practical Program #2: Hello. Welcome to Practical C++ Programming, the beginner course. In this tutorial, we will be building our second practical program.
It's going to be primarily a console application that you could maybe use in several business environments, and I'll show you what I mean. I already have an employees.txt file with two header lines — an "EMPLOYEE LIST" title on top, then "NAME" and "SALARY" — and then several employees' names with their salary on the right. You'll notice it has their first and last names, so that's going to be a tricky part of this program that we'll have to pay attention to. The primary focus of this program is to read in this file and display its contents in the console window, no matter what. If someone goes in and changes this file — maybe adds another name, "Jake Long", with another salary, say 82,000 — then whenever this file updates, we want our program to automatically pick up the change and add that name to the console output. That's something we're really going to focus on as we build this code, and afterward I'd really advise you to see if you can use file output and file input together to create a program of your own using everything we've learned: modify this program so that maybe pushing the number one lets you add a name to the list, and pushing number two lets you delete a name from the list, constantly updating this employee list file. The idea is for it to feel like a human resources program that companies could use to keep track of all the employees on their payroll. That being said, let's continue on and start coding this. The first thing we need to do is include our libraries, and we know we're going to be dealing with files a lot.
So let's go ahead and include the file stream library, fstream. The other library we want is the input/output manipulation library — since we're going to be printing this data out in the console window, we'll probably be doing a lot of output manipulation — so that's #include <iomanip>, if you remember from the last tutorial. Now that we have all the libraries we need, let's continue on to the main function and set up our file input, so to speak. We need to declare our input file handle with the ifstream declaration, and we'll give it a name: employeeFile, with "employees.txt" for the constructor. We'll go over constructors, maybe in a future class if I do an advanced C++ tutorial on classes and advanced data structures, but basically this is a function of ifstream, and all we're saying is that we want to create a file handle called employeeFile and have it automatically associated with this text file. I know we went over that in the file input tutorial; I just want to refresh your memory about the idea. Then let's set up our checking branch to make sure the file was found: if (!employeeFile), we'll say cout << "Employee text file not found", end the line a few times, and return negative nine. Let's run it to make sure we don't get any errors and that the file is found — and it appears it was, because the process returned zero rather than negative nine. So we're good to go. Now let's set up some variables to hold this data: a string employeeName variable and an int employeeSalary variable. These are the two variables we're going to use.
And since we're going to make this program so that you can update the file at any time, and we don't want to have to go in and change the size of an array, we're not going to use an array; instead, we're going to do our output inside the same while loop we use to read the input, and I'll show you what I mean. The first thing we need to do after declaring our variables is get rid of these headers, because we don't want to store them in any string. To do that, we use our ignore function: employeeFile.ignore with 255 characters and the newline delimiter — that takes care of the first line in employees.txt. Then we need to ignore the second line, so we say employeeFile.ignore(255, '\n') again, with the semicolon. Now that we've gotten rid of the headers, we can set up our pre-read and post-read in our while loop and start reading in this data and printing it out. Let's comment "pre-read" and put our pre-read right here. But remember, we want to hold the whole employee name in one variable, so we can't just read one word at a time; and we can't use getline to read the entire line either, or we'd pick up the salary along with the name. So we're going to have to make use of a delimiter — and, I first thought, a stream size — with the getline function. Let's try that now. For our pre-read we say getline, with employeeFile as our input stream; and since we might assume that no name will be more than 50 characters long — that would be a really long name, reaching nearly all the way to the beginning of the salary column —
we'll try using 50 as a stream-size limit. But first we need a variable to hold the result, so employeeName is our variable, and then we put in our stream size, 50. If we run that, we shouldn't get an error — OK, so we do get one, probably because we need a delimiter, so let's throw in a newline delimiter, since we shouldn't reach it. Let's see if that runs — and it still doesn't. So let's look at the function; let me go back real quick and see what we can actually pass here. getline takes the basic input stream and a character delimiter, so I'm pretty sure the only extra thing we can give it is the delimiter. Let's see if it works if we just put a delimiter on getline — and it does, just by putting in a delimiter. I was thinking maybe you could also pass in a size, but apparently that throws an error, so you can't: you can only pass a delimiter. So the delimiter we're going to use is a comma, and we're going to update our employees text file to separate the names and the salaries with a comma. It doesn't even have to be lined up perfectly or anything like that, but we will go ahead and tidy it up. So we have this text file, and I went ahead and made it a little prettier so it looks nice, with these commas separating the names from the salaries — because the getline function, as far as I can tell off the top of my head, doesn't take a string-size limit the way the ignore function does.
With the ignore function we could specify a string size and ignore up to a certain point, but with getline we're going to use a delimiter, which is the comma. Basically, what's happening with getline is that we're using the employeeFile stream, storing the data in the employeeName string variable, and what gets stored is everything up to this comma right here, which we set as the delimiter. Let's save that and throw in our while loop now: while (!employeeFile.eof()). Let's also add a "post-read" comment so we know where it is. Our post-read is, for now, the same as our pre-read: getline storing into employeeName with the comma delimiter, then a semicolon. But remember, the pre-read and post-read aren't complete yet, because we have the whole name but we also want the salary. To do that, we need to add a line to both the pre-read and the post-read, and that's exactly what we'll do. We extend the pre-read by a line: employeeFile, the input stream operator >>, then employeeSalary. And the same for the post-read — sorry about that — employeeFile >> employeeSalary, just like that. That concludes our pre-read and post-read, and if we run it, everything should run fine with no errors; we're actually reading all the data into variables. The thing is, we're constantly overwriting the same variables, because we don't have an array set up — but we don't need an array, because we're going to print everything we need inside this while loop. So let's go ahead and do that.
We're going to say cout and set up our output exactly the way we want it. At the top, we'll set up our output manipulation with cout << left; we don't really need setprecision or fixed, because we're not dealing with decimal values. So we just say cout << left up there, and then right here in the loop we say setw(25), then employeeName — and actually we won't even need a second setw after the name, since we're using left alignment — so it's setw(25), employeeName, and then I think if we just say employeeSalary and endl, that should give us the output we want. Let's see how that works. And there you go: we have a name, then the salary, then a new line; a name, a salary, a new line, just like we wanted. But let's make this a little prettier. Outside of the loop, right here before our ignore calls, we'll add some output: cout, a few tabs, "Human Resources Payroll List", end a few lines, then cout << setw(25) << "Full Name" << "Salary", and end a few lines again. What this output does is make things look nice: we get "Full Name" and "Salary" as a header. But "Salary" needs to go over further, and I think the reason is that for a name like Jimmy Clark, getline is actually picking up every character up to the comma. So even though we used the same width, the columns don't line up, because those stored values are actually a lot longer than just "Jimmy Clark": they include all the whitespace up to the comma we used as the delimiter. So let's just push "Salary" out a little further. It's going to be kind of a hit-or-miss type of thing.
For now we'll just use 35 and see what that looks like. On 35 it's nearly there, just over the top. We'll try maybe 37 and see what that looks like. 37 — perfect. So we have Human Resources Payroll List, then the header, Full Name and Salary, and then a whole list of everyone on payroll and their salary. And if we update this employees.txt file — I just want to show you — we could add Jimmy Johnson, put a comma, and give him, say, 13,000, and save it. When we run the program again, Jimmy Johnson will be added, because we put everything inside our while loop. That's what I wanted to do with this practical program: show you how you can update instantly just by using these variables inside the while loop, without having to use a data structure like an array. So I want you to play with this. I want you to figure out maybe a better way to do it on your own, if you can, and dive into more output manipulation. And obviously, if we changed the delimiter — say we pulled the comma way over to the right here in our text file — that would change exactly how our program looks. Let me just show you that real quick before I leave, because I want to show you that just by changing your text file, you actually change the output of your program, since we're using a comma as the delimiter. Let me fix this real quick. Since we're using the comma as the delimiter, the stored value for Jake Long, for instance, is actually this long — this many characters — and when we save it, you should be able to see the difference in our program. You can see that since the stored values are now shorter, these numbers shift over, and that's what I wanted to show you.
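Pulled together, the payroll reader described above might look like the following sketch. The readRecord and printPayroll helper names are mine, not the lesson's — in the lesson the same reads sit directly inside main's while loop against employeeFile.eof() as a pre-read and post-read, and the ignore call skips to the end of each line after the salary is read.

```cpp
#include <fstream>
#include <iomanip>
#include <iostream>
#include <string>
using namespace std;

// Read one "name,salary" record: getline with ',' as the delimiter grabs
// everything up to the comma, then >> grabs the salary, then ignore skips
// the rest of the line. Returns true while a record was actually read.
bool readRecord(istream& employeeFile, string& employeeName, double& employeeSalary) {
    if (!getline(employeeFile, employeeName, ',')) return false;
    employeeFile >> employeeSalary;
    employeeFile.ignore(100, '\n');
    return static_cast<bool>(employeeFile);
}

// Print the header and every record with the left / setw(25) formatting
// from the lesson (the salary column width is the hit-or-miss part).
void printPayroll(istream& employeeFile, ostream& out) {
    out << left;
    out << "\t\tHuman Resources Payroll List\n\n";
    out << setw(25) << "Full Name" << "Salary\n\n";

    string employeeName;
    double employeeSalary;
    while (readRecord(employeeFile, employeeName, employeeSalary))
        out << setw(25) << employeeName << employeeSalary << endl;
}
```

Calling printPayroll with an ifstream opened on employees.txt (and cout as the output stream) reproduces the lesson's list; any line added to the file shows up on the next run, exactly as demonstrated above.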
And that's why we had to push Salary over: to compensate for the delimiter being the comma and for how far over it was in the text file. So that's it for this tutorial. Stay tuned for the next section, when we start discussing functions and actually make our hangman game. Thank you. 19. Functions: Hello. Welcome to the Practical C++ Programming beginner course. My name is Zak, and in this tutorial we will begin discussing advanced topics in C++. Mainly, we're going to be dealing with functions and how to use them. And though many people may say this isn't really an advanced topic, I do consider it an introduction to advanced topics, because in C++ you will be using functions all the time, and they can get pretty complicated once you start throwing in templates, using structures as parameters, passing by pointers, and returning a pointer to a pointer. So basically, consider this an introduction to advanced topics. We're going to discuss everything we need to know about functions to get started and to build our Hangman game. That being said, let's get started. From the very beginning we've always had this main function — this whole thing right here is the main function, as we discussed. But I want to discuss the anatomy of a function, so to speak. So what is this int right here? Well, that int is the return type of the main function. A function can either have a return value, or it can have no return value and just be void. The main function always returns an integer value, and you can see that here where it says return 0. We can actually return whatever we want — let me show you. For example, we could return 8 right here, and when we run this program it will say Process returned 8. That's because the main function is returning the value 8.
By that same token, let's go ahead and change it back to zero, and let's create our own function that returns a value. Actually, our first function isn't going to return anything, and then we'll make a second function that does return something. So our first function is going to be void, meaning it doesn't have a return type, and to declare that you type void. We'll call our function — you can call it anything you want — printHello. Put the parentheses (that's where your parameters go; we'll discuss parameters in a later tutorial), and then put your brackets. Since this is a void function, there is no need to type return and then a value; that would actually throw an error, because the return type is void, meaning we don't return anything. This code will run just as it is, and all it does is basically return zero, because this function never gets called. But let's give this function some code to run. We'll go down a line and say cout << "Hello", then end the line. But when we run the code, we still won't get Hello printed on screen, and the reason why is that we have to call the function. To call this function, you simply type the name of the function in your main function: you say printHello, add your parentheses, and then add your semicolon. The code always starts running with your main function, so it comes here, and the first thing it does is look up the printHello function, go to it, and run the code inside. Then it returns — void — back to the main function, and main says Process returned 0. So when we run this code, that's what we get: the word Hello, and then Process returned 0.
That Process returned 0 is back in the main function. So let's make one more function — let's make one that returns a number. We'll say double getAge, and it returns the age. We could just say return 23, but what we're going to do is declare a variable: we'll say double age = 23.0, add a semicolon, and then say return age. Okay, and then let's make one more function that says string getName, and it will return a string value: return "Zach", just like that. So you're seeing various examples of functions being declared here, and these functions return different things, and they do it in different ways. Let's go to our main function and see if we can use these functions to make a cool message. First we'll call printHello, and then we'll say cout << getName() << " is " << getAge() << " years old" << endl. This right here basically says: call the function printHello, which will just say Hello and end the line. Then it outputs getName — when you call getName, it returns the value "Zach", so it's basically just going to output Zach right here, since we're using it in an output stream. Then " is ", then getAge, which returns age, which is 23. So it says Zach is 23 years old, then ends the line. Let's run it — and that's exactly what we get: Hello, Zach is 23 years old. That's all there is to functions for now. In future tutorials we'll go more in depth with them, but for now I want you to practice using the ideas we discussed in this tutorial, and I'll see you in the next lecture. Thank you. 20. Parameters: Hello. Welcome to Practical C++ Programming. My name is Zak.
And in this tutorial, we will be discussing function parameters. Now, function parameters are a pretty simple topic once you get your head wrapped around them. In the previous tutorial we discussed functions, and we did something like this: we'd write string printName, and basically we would just say string name = "Zach", and then return name. Then down here we did something like cout << printName(), and when we ran it, we got the name Zach to appear in the console. Well, with parameters, you can specify in the main function a value that you want to pass to the function you're calling. Meaning, basically, if we go up here and we want to return any name specified, we can add a parameter right here of data type string — because the name is probably going to be a string — and we'll call it name. Then this function will return the parameter name that is passed to it. So here, if we say cout << printName, we have to pass it a string value. We could pass "Jim", and when we run this, it will print the name Jim, because "Jim" is being passed in as name to this function, and it's returning name, which is Jim. And likewise, we can change this to "Sam", and then Sam will be printed, just like that. And just to show you, you can pass in multiple parameters — we could pass in an age. If we just ran this as it is, we would get an error, because we now have to pass in another value, so we'll pass in 17, and then it will run. Obviously it's still just going to print Sam, because we aren't doing anything with age, but I wanted to give you an introduction to parameters and how to use them in functions. So, let me do one more example before we move on, because I want to give you a little bit more insight on how to use these.
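The two stages just described, sketched out. The lesson edits one function in place; they're shown here as two separately named functions (names mine) so both forms are visible side by side:

```cpp
#include <iostream>
#include <string>
using namespace std;

// One parameter: whatever name the caller passes in is what gets returned.
string printName(string name) {
    return name;
}

// Multiple parameters: the caller now has to supply an age too, even
// though this version doesn't do anything with it yet, just like the demo.
string printNameWithAge(string name, int age) {
    (void)age;   // accepted but ignored for now
    return name;
}
```

So `cout << printName("Sam");` prints Sam, and `cout << printNameWithAge("Sam", 17);` still prints Sam — the 17 is required by the parameter list but unused.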
So right here we'll say string printName, and it will take the name as a parameter. Then let's make another one for an age: we'll call it getAge, and it will take a double — we'll just call it x. We won't call it age, just to show you that this is user-defined; you can call it whatever you want. And we'll return x. Okay. Then here's what we'll do. We'll declare string name and double age, then say cout << "Enter a name", then cin >> name. Then cout, give it a couple of new lines with the formatting characters we learned, and say "Enter an age", then cin >> age. Then we'll say cout, give it a couple of lines, and say "Your name is", then printName — that's what we called it — and we'll pass in the name we got from the keyboard input. Then we'll go down a line, just to show you that you can do this in Code::Blocks: it's all going to be seen as one line of code, even though it's on two different lines. We'll say "and you are", then getAge, passing in the age we got from the user input, then "years old", just like that. When we run it, it says enter a name — the name will be, we'll just say Jim — enter an age, 23. It says your name is Jim and you are 23 years old, and that was the result of passing those values to these functions as parameters. So that's it on parameters. Stay tuned, and we'll get to talking about passing by reference and function overloading. Thank you. 21. Pass by Reference: Hello. Welcome to the Practical C++ Programming beginner course. In this tutorial, we will be discussing passing by reference. So what is passing by reference? Well, let's go ahead and make a few more functions.
Let's just make one function — we'll say void printAge, and it will take an integer value called x. Okay. And let me show you something first, because I was going to save this for a different tutorial, but let's go ahead and cover it right now. If we run this program, everything runs fine, because printAge is declared above main. But if we move this function below main, it will compile fine until we call it — when we call printAge and pass it a value, we'll get an error. That's because the main function starts running, it sees printAge, it looks for it up here, and it's not there. So what we have to do is prototype it, and let me show you how. Up here, under using namespace std, you just type the signature of the function: void printAge(int x), then a semicolon. Now, whenever the code gets to printAge down here, it comes up here, looks at the prototype, and says, okay, I know this function exists; I'm going to go find it — and that's exactly what it does. From now on we're going to prototype our functions using this method instead of declaring them above main. So if we run it right now, you can see everything works fine, because we've prototyped the function. But let's dive into what we actually came here for, and that's to learn what passing by reference is. We have this function called printAge, and all it will do is cout << x. If we say printAge(7), we see that it prints out 7. But what happens if, for instance, we make a variable — int age = 7 — and pass in age? Then it prints out the age we passed in; it prints out 7. Okay, but what happens if, before we print it out, we change x?
We set x equal to 5. So we just passed 7 in for x, but then we say x = 5. Let's run that: cout prints 5 instead of 7, even though we passed 7 into the function. Okay, so we're changing it here — but the question is, since we're changing it in this function, is it also getting changed out in the main function where we passed in 7? Well, there's one way to find out: we can print out age after we run it and see what we get. We get 5 and then 7, so obviously age is not changing, except inside this function. Well, what do we do if we want to change age out in main, but do it from inside this function? Here we're changing x to 5 — what if we wanted to change the value that we're passing in to 5 as well? To do that, we have to pass by reference. And what that means is that instead of passing a copy of age into this parameter, we're going to pass the memory address of age into this parameter. To do that, you simply type the ampersand symbol before your variable name in the parameter. Now, we also have to put it in our prototype up here as well — don't forget to do that. So now when we run it, both print 5. And the reason why — let me pull this up so you can see both at the same time — is that here we're saying int age = 7, and as we declared in the prototype and in the function, when we pass in age right here, we're not passing in the value 7; we're passing in the memory address where this variable lives in memory. So now, in this function, x refers to the value at the memory address of age, which is 7, and it changes it to 5. That means it's also changing the value at age to 5 — it's changing both the local view and the actual variable to 5. So let me see if I can show you what I mean with one more quick example before we quit, because that may seem kind of confusing at first.
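Condensed, the two versions of printAge look like this (the ByValue/ByReference names are mine, so both can coexist in one file — in the lesson it's one function edited in place):

```cpp
#include <iostream>
using namespace std;

// Pass by value: x is a copy of whatever the caller passed in, so
// changing x here does not touch the caller's variable.
void printAgeByValue(int x) {
    x = 5;
    cout << x << endl;
}

// Pass by reference: the & means x refers to the caller's variable
// itself, so this assignment changes it back in main too.
void printAgeByReference(int& x) {
    x = 5;
    cout << x << endl;
}
```

With `int age = 7;`, calling printAgeByValue(age) prints 5 but leaves age at 7, while printAgeByReference(age) prints 5 and leaves age equal to 5 afterwards.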
So we're going to do one more example that's maybe a little more clear. We'll leave this function — we just won't call it — and make another function called void changeAddress. We'll pass in a string, and we'll call it address. And we need to prototype it up here, remember, so we'll say string changeAddress(string). Actually, in the prototype you can take out the parameter name; it just has to know that you're passing in a string. So in the prototype you could make the variable name y and down here x, and it wouldn't matter — though it's totally up to you if you want to leave it in, so you have a consistent look throughout your code. But I often just leave the data type, and the ampersand symbol if I'm passing by reference, in the prototype parameter. And here I'll do the same thing: I'll just say string-ampersand, which means we're passing in a string by memory address in this prototype, and then add a semicolon to finish the prototype. Now, in the changeAddress function, we're going to say address = "1400 College Drive". Okay, and that's all it's going to do: it takes the memory address that is provided, looks at the value at that memory address, and changes it to this. So let me show you. We'll say string myAddress = "2418 Willow Road", and then cout << "address before function call" << myAddress << endl. Then we'll call changeAddress, passing in myAddress — which is really the memory location of this variable — and then cout << "address after function call" << myAddress. So even though the variable myAddress is declared here as 2418 Willow Road, and it's not being changed anywhere in the main function, we're passing it to changeAddress as a memory address and changing it in there.
Okay, so it's changing to 1400 College Drive, and when we return, it's going to be different. So watch. Hmm — we have void changeAddress; let's see what went wrong here. We have string address equals 1400 College Drive, and it's complaining about a conflicting declaration. So let's make sure our prototype is fine — and right here is what's wrong: we had a string return type in the prototype. Let's change it to void and run it again. The address before the function call is 2418 Willow Road, but the address after the function call is 1400 College Drive, so you can see how it's actually changing the myAddress variable from inside the function. And just to prove it to you, we're going to take out that ampersand and run it again without it, and look at the difference: it says address before function call, 2418 Willow Road; address after function call, 2418 Willow Road. And that's basically the basics of passing by reference. In the next tutorial we're going to go over function overloading, so thank you for watching. 22. Function Overloading: Hello. Welcome to Practical C++ Programming. My name is Zak, and in this tutorial we will be going over function overloading. Function overloading is an interesting topic, and I found it pretty easy to understand — once you get your head wrapped around it, it's not too difficult. And even though we probably won't be using it in our final Hangman project, it's still something I think you should know as a beginner, so that when you see it, you understand what is happening. So what we're going to do is make a function called void printSalary, and this function is going to take an integer value. We'll write that function down here — void printSalary(int x) — and basically just say cout << endl << x << endl.
And if we run that, we'll get just what we expect: we'll call printSalary(20000), and when we run it, it just prints out 20000, which is exactly what we want. But what happens if we want to use the same function, but with multiple ways to call it? For instance, we'll say cout << "Enter your salary", then cin >> salary, and salary could be — well, for now we'll say it's an int salary — and we'll pass salary into the function out here, so whatever they enter gets passed in. So it says enter your salary, they enter 2300, and it prints 2300. Now, in this setup it won't entirely make sense, because you have to declare your data type, meaning whatever type you declare is what they'll have to enter anyway. But once you start programming with object-oriented design and different types of program architecture, you're going to see that sometimes you don't know what kind of data is coming in. Even though that won't really be the case here, we're going to pretend it is, so you can get your head wrapped around the whole concept. So let's assume someone suddenly enters a double salary. When we run it and they enter something like that, it only prints the integer part, because the function was taking an integer value. Well, what if we really wanted to print that full value? We can't with this printSalary function, because it only asks for an integer value. Or better yet, what if we changed it to double salary and passed in a string? Now, when it says enter your salary, it doesn't even run, because you can't pass in a string at all. But what if we did want to pass in a string, and have them type out twenty-three hundred dollars in words? Well, we can solve this problem with something called function overloading.
And to do that, you simply make multiple prototypes of the same function, but with different parameters. For instance, we'll say void printSalary taking a string, and we'll make another — void printSalary taking a double — and then we'll come down here, copy the function, reprint it, and just change the parameters, so this one takes a string value and this one takes a double value. Now we have the same function, but overloaded with different parameters, so we can expect pretty much any of these types to be inputted. So where we got an error before, we can run it again, and for printSalary we can actually enter "twenty-three hundred dollars" — and it says "twenty-three". I know it didn't say the whole phrase, because technically we didn't use getline; we're going to fix that real quick just so I can show you. You've seen getline before in a previous tutorial, but probably not with cin: we'll say getline with cin — instead of before, when we used it with an output file or input file stream — and store the result in salary. Now, when we run this and type "twenty-three hundred dollars", it says "twenty-three hundred dollars". By that same token, we can change this to a double, and it will use the overloaded double version of the function. Let's see what's going on here: we have cout, a double salary, and a printSalary function call with a double. And I believe getline is only going to work with string values anyway, so we'll just say cin >> salary. When we run it, we'll enter 2300.0246, and it actually reads the whole value — though it didn't show all four of those digits, because by default the output only shows a couple of digits after the decimal. We could also change double to float, which is very similar to a double.
Basically, they're both decimal values; they just take up a different number of bytes in memory. Let's make sure we don't have a prototype wrong up here — change that to float. I'm just showing you how to overload with different data types. Enter your salary: now 2300.2345. Again we get the same truncated output, and I'm thinking it's because it's defaulting to a limited number of digits after the decimal point. Let's check that out real quick: we'll say cout << setprecision(4) << fixed and see if that fixes it. We don't have iomanip included, so let's go ahead and include that. Let's run it — and there we go. Now we're getting the four digits after the decimal point. We had to specify the precision of four after the decimal ourselves, because the default applies no matter whether it's a float or a double data type. But needless to say, this tutorial was more about overloading functions, and that's basically what we did. Now, when we run it, I know we can tell a float is coming in, because we had to declare it here. But what I want you to understand is that in future classes, when you get into object-oriented programming, you may not know what kind of data is coming in, and that's where function overloading is important — oftentimes you don't know whether a string or a float is going to be passed into a function, so you have to prepare for all scenarios. So that's it for this tutorial. In the coming tutorials we're going to start building our Hangman game and conclude this course. Thank you for watching. 23. String Functions: Hello. Welcome to Practical C++ Programming. My name is Zak, and in this tutorial we will be discussing string functions. Now, I just want to go over this because it's something you will be using a lot throughout C++.
And I'm not going to be able to show you every string function, obviously, because that would be a whole video series, but I will show you the ones you'll probably find yourself using quite a bit. Remember what I told you at the beginning of this course: string isn't really a data type, but a class. Without diving too much into classes — a class is basically something you can create objects from, and those objects have specific functions. You might not be able to wrap your head around that just yet, so let me show you what I mean. Each time you create a variable of type string — for instance, string name = "Zach" — this variable has several functions built into it, because it is of type string, that we can use. For instance, we can say name.size(), and that will return the size of the variable name. So if we say cout << name.size(), when we run it, it prints out the size of name, which is 4 characters. By that same token, there's another function called name.length(), which does the exact same thing — it'll print out 4. So like I said, there are several string functions you can use, and all you have to do is type name-dot and all these functions pop up for you to see. If you just play around with them, you can kind of see what they do. For instance, let's use find: we'll use .find and search for 'c', and if we cout that, it will return 2, because that's the position of 'c' in the string — remember: 0, 1, 2. And if we type 'h' here and find 'h', it will return 3. That's what the find function does: you can find certain characters throughout the string. But if we type in a character that isn't in the string, such as 'j', it returns negative one or some other strange value.
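The member functions mentioned so far can be exercised like this. One note beyond the transcript: the "strange value" find returns for a missing character is the standard sentinel string::npos, and comparing against it is the usual way to check whether the character was found.

```cpp
#include <iostream>
#include <string>
using namespace std;

// Every variable of type string carries these member functions.
void stringDemo() {
    string name = "Zach";

    cout << name.size()    << endl;  // 4 characters
    cout << name.length()  << endl;  // same thing: 4
    cout << name.find('c') << endl;  // position 2 (counting from 0)
    cout << name.find('h') << endl;  // position 3

    // A character that isn't there: find returns string::npos, the
    // "strange value" from the lesson.
    if (name.find('j') == string::npos)
        cout << "'j' not found" << endl;
}
```

Typing `name.` in the editor pops up the full list (replace, substr, and so on), which is what the lesson encourages you to scroll through on your own.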
It's some kind of garbage-looking value, because obviously that's not a position anywhere in the string. You could, for instance, type in 'y' and you'll get another strange value. And that's how you can decide whether the character was found or not. So that's basically all I wanted to show you: each of these string objects — these variables — has its own built-in functions that you can use, such as find, size, length, and replace. They're all down here; you can scroll through and look at all of them. That's all I wanted to show you for this tutorial. I know it wasn't very much, but it's something I want you to play with on your own, and I'll see you in the next tutorial. 24. Random Number Generator: Hello. Welcome to Practical C++ Programming. My name is Zak, and in this tutorial we will be discussing how to create our own random number generator. Now, all a random number generator is is a function that returns a random number so that we can use it in our program. The reason I wanted to cover this is because, believe it or not, this is something many people like to figure out how to do so they can incorporate it into games or other programs that require some level of randomness. Now, if you were trying to figure out how to make a random number entirely on your own, you would have to create your own algorithm, and that would be quite a lengthy process. So what I recommend doing as a beginner C++ programmer, any time you're looking for some kind of functionality — such as randomness, like we're doing here — is going to cplusplus.com, as you can see up here, and just searching for what you're looking for. In this case I typed in random, and I ended up with this function called rand — you can see it here declared as int rand(void).
And if you just look through these documents, you can see how to use these libraries to create a random number generator. It's really quite simple, and they spell it out for you right here — how easy it is to get your program to spit out a random number. Well, about as random as you can get, anyway: with computers there's no way to make anything completely random, but you can at least make it appear random to the user. And that's exactly what we're going to do. You can bookmark this reference if you want to come back and read it later, but basically, in this program, all we're going to do is everything this reference page tells us to do to create our generator. So let's go back to our program. We're first going to include the libraries we need for a random number generator to work: include the stdlib.h file and then include the time.h file. You might be wondering what this time.h library is for. Well, our random number generator is going to be based off the internal clock of the machine, and it's going to incorporate that into its algorithm to come up with a random number. You'll see what I mean here in a second — I mean, it won't be extremely clear, but that's basically how this algorithm works: it gets the current time and throws that into a function, and that function is going to spit out a different number every single time, because the time is changing constantly, and depending on what the actual time is, the algorithm may spit out a completely different number than the one it spit out a moment ago. That being said, let's go ahead and create a function that's going to generate our number. We'll have it return an integer value, because we want to return an integer; we'll call it generateRandomNumber, and it won't take any arguments.
And then down here we're actually going to write out our function, so we'll write the same thing — generateRandomNumber — and in here is where we'll write our code. If you go to cplusplus.com and look at it, it's actually quite simple: you just include your libraries and then write this small call right here, which initializes the random seed from the internal clock of the computer. Then you simply write out your variable with the rand function, where the number you use with the modulo operator sets the range of possible random values, and then plus one. For instance, right here, their iSecret variable will randomly get a number between 1 and 10, because they have 10 specified right here. Let me show you what I mean. The first thing we have to do is type srand, then parentheses, and in these parentheses you type time, then another set of parentheses with NULL inside, and a semicolon — just like that. It may seem really unintuitive at first, but that's what the cplusplus.com reference tells us to do, so that's exactly what we're going to do. For certain functionality that you might need in your program, it's not that important to understand exactly how it works internally; you just need to know how to use it, and that's what I'm showing you right here. This is how you use the random number generator with the time.h and stdlib.h header files. So now let's go ahead and hold a value. You can create a variable — we'll just call it number — and set it equal to rand, then your parentheses, then the modulo operator and the number that you want the highest value to be — we'll put 50 down — and then plus one and a semicolon.
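Putting those pieces together, a sketch of the generator as described on the cplusplus.com rand reference (the cstdlib/ctime spellings are the C++ names for the stdlib.h and time.h headers mentioned above):

```cpp
#include <cstdlib>   // srand, rand -- the standard library header
#include <ctime>     // time -- the clock used to seed the generator

// Returns a pseudo-random number between 1 and 50. Seeding inside the
// function follows the lesson; in a bigger program you would normally
// call srand once in main instead, so repeated calls within the same
// second don't restart the sequence.
int generateRandomNumber() {
    srand(time(NULL));             // initialize the seed from the clock
    int number = rand() % 50 + 1;  // modulo gives 0..49, the +1 shifts to 1..50
    return number;
}
```

Returning generateRandomNumber() from main is what makes the "Process returned" line show a different value on each run, as demonstrated next.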
And this number, when this code runs, will be any number from 1 to 50, picked basically at random. So we return number there, and then up here in main we just call the function, generateRandomNumber. When we run the program, main calls that function, and you see the process returned 41, because main is calling generateRandomNumber and returning the value it gets back. It returned 41, but if we run it again, it'll give us a different number. This time it returned 50. We can keep running this over and over again, and every time it will be a different number from 1 to 50. And if we want to change the spread, we could change this to 200, for example, and then it'll be any number from 1 to 200. This time it was 96. So that's how you use a basic random number generator. I just kind of wanted to go over it with you so that you would know what we were doing when we make our final project, and also to show you what cplusplus.com is and how to use it to incorporate certain functionalities into your program. So thank you for watching, and I'll see you in the next tutorial. 25. Project -Hangman (Part #1): Hello. Welcome to Practical C++ Programming. My name is Zak, and in this tutorial we're going to start our Hangman game. Now, in all the previous tutorials we learned everything that we need to know to build this game, and I'm actually going to spread this game out across a series of three tutorials so that we can split it up nicely, and you can really understand how we're going to unfold this process and build the application as a whole. So in this first tutorial, we're just going to start out by building the main skeleton, so to speak, of our entire program. We're gonna lay out all our functions and everything that we're going to need.
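The pattern just described can be sketched as a small standalone function. The name and the self-seeding guard here are my own; the tutorial seeds in main with srand(time(NULL)) and stores the result in a local variable called number:

```cpp
#include <cstdlib>   // rand, srand
#include <ctime>     // time

// Returns a pseudo-random integer from 1 to max. rand() % max gives a
// value in 0..max-1, so the + 1 shifts the range up to 1..max. The
// generator is seeded from the system clock the first time this runs.
int generate_random_number(int max) {
    static bool seeded = false;
    if (!seeded) {
        srand(time(NULL));   // seed once per program run
        seeded = true;
    }
    return rand() % max + 1;
}
```

Note that rand() % 50 + 1 produces 1 through 50 inclusive, so strictly the low end is 1, not 0.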
That being said, let's go ahead and prototype all the functions that we know we're gonna need. So one of the functions that we're gonna need is to get a word from a word bank and return it, meaning basically we need a function that opens a file, looks inside the file, grabs a word, and then uses that as the word for our hangman. So to do that, we're just going to declare a function whose return type is string. We'll just call it getWord, and we won't give it any parameters, because it's just gonna do its own thing: go inside a word bank and get us a word. And since that's working with files, let's go ahead and include the library we're gonna need for that function, which is #include <fstream>, for file stream. Okay, now that we have that, we also want a function that prints the board. When I say board, I mean the man. So we want a function that prints out kind of a representation of how many lives the user has left. And to do that, we're gonna give it a void return type, because it's not gonna return anything, it's just gonna print something on screen. We'll call it printBoard, and this is going to take an integer value, and that integer value is basically just gonna be the amount of lives that we have left, because the amount of lives the user has left is going to determine how much of the man is drawn. So that's what that parameter is all about. And speaking of printing our board, we also need a function that prints the blanks, you know, the blanks for the word that gets returned. So we're going to declare a function that, since all it does is print something, is also gonna have a void return type. We'll just call it printBlanks, and we're gonna give that function two arguments.
And they're both gonna be of type string, because the first parameter is going to be the word that we get returned here, and the second parameter is going to be the letters that the user has already guessed. That's how it's gonna determine which blanks to print and which letters to print. And we'll go over all this as we go through these tutorials, and you'll see exactly how it's gonna work, okay? And let's go ahead and make another function that generates a random number, because we're gonna use the random number that we generate to decide what word to grab out of our word bank. We already did a random number generator tutorial, as you know, so this should be fairly familiar to you. We'll just call it generateRandomNumber, and a semicolon. So these are the prototypes that we're gonna use. If we remember a function that we might need later, or if we decide we want to create another function, we will, but for now these are all the ones I can think of off the top of my head. So that being said, let's go ahead and set up these functions. We'll say string getWord and set up our brackets. Then we'll set up void printBoard, and we'll call its parameter lives. Then void printBlanks, and that's going to take two parameters; the first one we'll call chosenWord, and the second one we'll call lettersGuessed. You'll see exactly why we're calling them that later. And the last one was our random number generator, and that one didn't take any parameters. So there we go. This is the basic skeleton. Now, I also want to go ahead and add some stuff to our main function while we're here. The way our main function is gonna work: we're gonna initialize our user lives, we'll call it userLives, to seven. And then basically we're going to say, while userLives is greater than zero, we want to do everything inside this loop.
And basically this loop is just gonna allow the user to keep guessing letters as long as their lives are greater than zero. And then we'll set a break statement in there somewhere, for if the word gets guessed correctly. So that's how we're gonna set up our main for now, and we'll add more stuff later. The other thing that I want to go ahead and do, since it's kind of fresh in our minds, is make our random number generator while we're in this tutorial. And to do that, we just have to include two libraries, if you remember: one was stdlib.h, and the other one was time.h. Those were the two libraries we need. Now let's go ahead and make the random number generator. We're just gonna say srand(time(NULL)). And remember, this is the function that we need, according to cplusplus.com and the reference that we used. And then we're pretty much just going to say return. You know, there's two ways to do this. You could say return rand, the % modulo operator, and then... I don't know how many words we're gonna have in our word bank, so we'll just say we're gonna have 10 for now: 10, plus one. And this right here will return a random integer value if we do that. We might have to come back later and change it. In fact, just to avoid the confusion, what we're gonna do is say int randomNumber equals rand() % 10 + 1, and then say return randomNumber. There we go. And if we go up here and, um, let me just comment this out real quick. Actually, I can't do that, sorry. This is another way to comment, by the way, with the slash and star, /* */, just like that.
That's a new way of doing it; I just wanted to explain that. Let's go ahead and test the random number. We'll say return generateRandomNumber() and just make sure it's giving us a random number. And it's not. Let's see, what's the reference... generate random number... there we go, we called it randomNumberGenerator. You were probably screaming at me whenever I called it that. So we call it generateRandomNumber. There we go. And it says process returned 3. Let's run it one more time. Process returned 3 again, a coincidence. There we go, process returned 1. So we're getting a random number every time. And while we're in this tutorial, I want to go ahead and make us a word bank. We've already got an old text file in here; let me delete that real quick. There we go. Let's just make us a new document. We'll call it wordlist.txt. We're gonna open it. Wordlist.txt, there we go, open. We'll give it a header: Word List. And we'll give it a couple of, you know, words. So we'll say giraffe, rhino, um, reavy, blue, truck, cricket, grasshopper, buzzard... just thinking of random words off the top of my head, there's not really a theme going on here. That's 1-2-3-4-5-6-7-8. Let's get two more. We'll say potato, and one really good word, we'll say Linux. There we go, that's 10 words. Not really an overall theme there, but it's the 10 words we'll use for our word bank for now. And let's go ahead, in this tutorial, and make our getWord function. Since file input and file output is kind of a thing we've been practicing for a while, we'll go kind of quick. We need to go ahead and make our variable.
We'll call it... ifstream; format that. It'll be an ifstream, we'll call it inputFile, then "wordlist.txt". And we'll say, you know, if not inputFile, we'll print out an error, but we won't return a code, because this has a string return type, so we won't actually be able to return an integer value here anyway. But we'll print, you know, "Error: word list not found". That will at least let us know if the word list wasn't found. Then let's go ahead and make another variable here. We'll call it tempWord; it'll be a string. Temp word, there we go. And actually, we're gonna need an array, so we'll say a string array... we'll just call it wordList. And let's go ahead and make a constant value: const int wordListSize = 10. You can put a comment here in the code, "change word list size here", just to let yourself know later, you know, if you make a bigger word bank. Now if you want to change it to 100, all you have to do is put 100 right there. We'll say wordList[wordListSize], which just initializes the whole thing to blank strings. And then, since we're gonna need a for loop, we'll make this index variable set to zero, because we're gonna use a for loop later in this function. So that's what we're gonna go ahead and do. Now we're going to do a pre-read, and we do have a header, so don't forget about the header that we need to get rid of. We're going to use our ignore function, so we'll say inputFile.ignore, for 255 bytes, and then our delimiter of newline. That will get rid of the header, remember. Then we'll do a pre-read, commented there just out of habit. We'll say inputFile >> tempWord, and then our while loop: while not inputFile.eof. Then I'll make a post-read.
Same thing as the pre-read, remember: inputFile >> tempWord. And then we're going to store everything that we get from this variable into wordList. To do that, we basically just use our index variable that we created up here. We'll say wordList[index], which started at zero, so wordList[0] = tempWord, and then index++. And then down here, once this loop is done, it's basically gonna have populated our entire wordList array with all the words in this word list. So what we can do, since we need to return a string value, but it needs to be a random string from our word list, is use our generateRandomNumber function to return a random index of this array, and so return a random word. And to do that, all we do is say return wordList, and then for the index, since it needs to be random, we'll say generateRandomNumber(), that's our function, semicolon. And it will return a random index of this word list, which is populated with these words. Just to kind of show you that, if we go up to our main function here... we were returning a random number before, so we'll go ahead and put that back to return 0. We're just going to cout getWord() and make sure it works. If we do that, we get the word blue. If we run it again... one second, let me save it... we run it again, oops, we get the word truck. Now we get the word Linux. So you can see we're getting new words every single time, reavy, all from our word list, grasshopper. And that's how our hangman game is gonna function, you know, it's gonna grab random words from this word list. That's pretty much all I want to do for this tutorial. In the next tutorial we're gonna go more in depth with printing our man and printing the blanks. But you can go through this tutorial a few times and really look at this.
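The getWord routine boils down to: skip the header line, then read whitespace-separated words into storage with the pre-read / post-read pattern. A testable sketch, where I parse from an in-memory string via istringstream instead of the actual wordlist.txt file, and the function name is my own:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Skip the first (header) line, then read every remaining
// whitespace-separated word: the same pre-read / post-read pattern the
// tutorial uses with its ifstream. (This eof-based loop assumes the
// file ends with a trailing newline.)
std::vector<std::string> parse_word_list(const std::string& contents) {
    std::istringstream input(contents);
    std::string line;
    std::getline(input, line);          // throw away the "Word List" header

    std::vector<std::string> words;
    std::string temp_word;
    input >> temp_word;                 // pre-read
    while (!input.eof()) {
        words.push_back(temp_word);     // store the word just read
        input >> temp_word;             // post-read
    }
    return words;
}
```

One caution about the indexing step back in the tutorial: with 10 words, rand() % 10 + 1 yields 1 through 10, which skips index 0 and can run one past the end of a 10-element array; rand() % 10, giving 0 through 9, is the safe index.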
getWord function, and see how we're using it in conjunction with the generateRandomNumber function to return a string at an index of this word list, and you'll find it's actually maybe a lot simpler than you were at first thinking. So thank you for watching and I'll see you in the next tutorial. 26. Project -Hangman (Part #2): Welcome to Practical C++ Programming: The Beginning Course. My name is Zak, and in this tutorial we will be continuing our Hangman application. So in this tutorial, I've kind of already got the printBoard code programmed. And the reason why I went ahead and did it is because you really don't want to sit here for 25 minutes and watch me code out all this, you know, nitty-gritty stuff that basically you can do on your own. All I'm doing is using my formatting tab operators, and I've kind of drawn out, with these standard characters, this hangman guy. The way I programmed it was: this function takes an integer of lives, and I use a switch case, and as the lives go down to zero, the man is fully drawn. But when the lives are up at five, the man isn't fully drawn, he's only halfway drawn, and when he has full lives, there's no man there at all. Basically, I mean, it's really easy code; you can draw it however you want. But for those of you who just want to use this... you know, I would say study it a little bit, but not too much, because it's pretty simple stuff. You can customize it yourself, you know, you could make it bigger if you want, or whatever, but this is the way that I usually do it, and I will provide this code for you in the resources tab of this lecture, so you can actually download this code and just copy and paste it into your program if you want, because, like I said, typing it all is kind of a hassle. But if you want to do it yourself, that's perfectly fine. So it's up to you.
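The printBoard idea, a switch over the lives remaining where fewer lives means more of the man is drawn, can be sketched much more compactly than the real drawing code. Here I return a short label per stage instead of printing ASCII art, just to show the switch shape; the stage names are my own, and the downloadable course code prints many cout lines per case instead:

```cpp
#include <string>

// A stand-in for printBoard(lives): a switch on the lives left, where a
// lower count means more of the man has been "drawn".
std::string board_stage(int lives) {
    switch (lives) {
        case 7:  return "empty gallows";
        case 6:  return "head";
        case 5:  return "head and body";
        case 4:  return "one arm";
        case 3:  return "both arms";
        case 2:  return "one leg";
        case 1:  return "both legs";
        default: return "fully drawn man";   // 0 lives: game over
    }
}
```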
But what I do want to do in this tutorial is work on our printBlanks function. And it's actually a pretty simple function that's just gonna print out the blanks and the letters of each word that we use. We're gonna have to use quite a bit of the string functions that we discussed in a previous tutorial to get what we want out of this printBlanks function. So to begin, all we're gonna do is create a for loop: integer i equals zero, and then i is gonna be less than... the parameter chosenWord. And all chosenWord is, is gonna be a word from our word list that was chosen by our getWord function with the random number generator, and whatever word is chosen, we're gonna pass into this function as chosenWord. So we want i to be less than chosenWord.size(). And remember, this was one of those string functions that we talked about in the previous tutorials. Then we'll just say i++ and open our for loop. Now inside this for loop, we want two things to happen. If the letter at, say, index zero of the chosen word... let's say the first letter of the chosen word is A, and A is in any of the indexes of lettersGuessed, then we want to print out A on screen. But if A is not in any of the indexes of lettersGuessed, then we want to print a blank. So to do that, we're gonna use more string functions. We're gonna say lettersGuessed.find. Remember, this is a string function native to all the string data types, or string objects, and we're gonna find the letter chosenWord.at(i). And what this is saying... this function is going to seem really complicated at first, but these are all string functions that I told you to study earlier in this section. All it's saying is: we're gonna take this string, this list of letters, and we're gonna look in it.
We're gonna look to see if this letter, chosenWord.at(i), is in there; that call just returns a single letter. So chosenWord.at(3) is going to return the fourth letter of the chosen word. Let's say the chosen word was truck. Then at index 3 it's going to return C, because 3 is actually 0-1-2-3, so it's the fourth letter, which is C. And all this is saying is: find, in lettersGuessed, that C. If it's found, it's gonna return something other than negative one, but if it's not found, it's gonna return negative one. So to determine whether it was found or not, we just say lettersGuessed.find(chosenWord.at(i)) not equal to negative one. And that means it was found; as long as this operation doesn't return negative one, we know the letter was found somewhere in the string. So this if statement is saying the letter was found, and all we do is cout chosenWord.at(i), and I'll add a space at the end of it just to give it some spacing. That's just saying, you know, output the letter of this word at a certain index. So really study that. Then the alternative, we'll just say else, because the alternative is that it was negative one, which means it wasn't found at all. If that's the case, we want to print out a blank, with a space at the end to give it some spacing. So this is the alternative; this means the letter wasn't found in lettersGuessed, so we leave it blank. And that's all there is to this function. Let's actually test it real quick. We'll save this, go up to our main function, and test it. Our main function is up here. We'll go ahead and say string word = getWord().
We'll output word real quick at the top of the screen so that you know what the word is. But then I also want to run printBlanks, and we're gonna pass in word as the chosen word. And then let's just pass in some letters ourselves. We'll pass in R, S, T, L, N, E; I think those are the most famous Wheel of Fortune letters. So these are the letters we're saying are guessed. And if we run this program, we should get no errors... hold on, it stopped working... if we run this program, here we go. The word was Linux, and since N was one of our letters guessed, we get the letter N. Now, you might see that the L is a capital L, and we had a lowercase l here. That's something I need to fix. Obviously, I have capital letters in my word list, so we're actually gonna change that. I'll change that in between tutorials, because obviously that's a bug in our program: we want all the words in our word list to be lowercase, because whenever we enter, you know, a lowercase l, that's not going to show up against a capital L, even though it should be there. Let's go ahead and run it one more time, just to show you. So, truck: as you see, the R was there, because we guessed an R; the T wasn't, because this is a capital T in truck and we have a lowercase t. So I will have to fix that small bug. There's a way to get around it in code, you know: you can check and basically say, whether it's a capital letter or a lowercase letter, count it anyway and fill in the blank, but you just have to add more code. If you want to do that, then I challenge you to go ahead and do it. But that's the basic functionality of printBlanks. And while we're here, we're gonna go ahead and take the comment off our while loop and add some basic stuff to it.
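The find-based check just walked through can be captured in a small helper. This version returns the revealed string instead of printing it, so it's easy to test; the name reveal is mine, and I compare against std::string::npos, which is what find actually returns on a miss (comparing against -1, as the tutorial does, happens to work too because npos is the largest size_t value):

```cpp
#include <string>

// Build the "blanks" line for the current game state: each letter of the
// chosen word that appears anywhere in lettersGuessed is shown, and every
// other position prints as an underscore, each followed by a space.
std::string reveal(const std::string& chosenWord,
                   const std::string& lettersGuessed) {
    std::string out;
    for (std::string::size_type i = 0; i < chosenWord.size(); i++) {
        if (lettersGuessed.find(chosenWord.at(i)) != std::string::npos) {
            out += chosenWord.at(i);   // letter was guessed: show it
        } else {
            out += '_';                // not guessed yet: blank
        }
        out += ' ';                    // spacing between positions
    }
    return out;
}
```

With the word "truck" and the Wheel of Fortune letters "rstlne", this reveals the t and r and blanks the rest.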
Since we have all our functions planned out and everything that we need, we can basically go ahead and add the rest of what we need to this loop. And all that is, is basically: we'll say printBoard, and pass in userLives, which is seven. Then let's give the user some instructions. We'll say cout, give it some newlines, and... well, first of all, we want to tell them what letters have been guessed. Let's actually create a string for that; we'll say string lettersGuessed, and then cout << "Letters guessed: " << lettersGuessed. There you go, now they can see what letters they've guessed. Then we'll say cout << "Enter a letter: ". Oops, sorry guys, it's "Enter a letter". Then we'll use cin. We'll say string, we'll just call it guess, so cin >> guess. There we go. So we're just telling them what they've guessed, we're printing out the board, which is gonna start out as a clean board, no man hanging from it, no letters guessed, we say enter a letter, and we read in the guess. Then the first thing we need to do is say lettersGuessed +=, remember, that's just gonna append a string to it; we'll add guess to it. Now lettersGuessed is gonna have the guess in it. And then... well, first of all, we need to get our word. That's another thing we need to add real quick. We'll say string word = getWord(). There we go. That's basically just gonna return, remember, a word from our word bank, and we store it in word. And then what we need to do is printBlanks, and we actually want to do that before this. So we'll say printBoard.
And then we'll give it a little bit of space and say printBlanks, passing in word and lettersGuessed. That'll print out all blanks the first time. Then lettersGuessed += guess. And then basically we want to check whether the guess was in the word. So let's go back now to printBlanks... actually, I think we could do that in there, and we could add it there if we wanted, but let's just go ahead and do it up here, in the main function; you could do it in either one. So after lettersGuessed += guess, we'll say: if word.find(guess) does not equal negative one, continue. Basically this just continues around the loop. Then we'll say else, because the else means they got the guess wrong, so we'll say userLives--. There we go. Let's run that and just make sure everything works okay. So if you look at that, we got our four blanks printed out, our hangman, and "Enter a letter". Let's enter R... there we go, and it goes ahead and draws more of him. If we enter, let's say, B, we don't lose a life. Another one, let's say L: look, it adds it to our blanks. E, there we go, E gets added to the blanks. U, B... and you can see the word was blue. But obviously there are still some bugs, because it's not letting us know when we win, and it keeps going... we can even enter multiple letters... then process returned 0 because our lives ran out. So we just need to fix a few minor things, like making sure that when lives hit zero it prints out our full guy. So that's pretty much what we wanted to go over in this tutorial. I do want to go ahead and add one more thing that you haven't seen before.
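The guess handling just described reduces to one decision: if the guessed letter is somewhere in the word, lives are untouched; otherwise they drop by one. A minimal sketch of that rule as a pure function (the name apply_guess is mine; the tutorial writes it inline with continue and userLives--):

```cpp
#include <string>

// Returns the player's remaining lives after one guess: a correct
// letter costs nothing, a miss costs one life.
int apply_guess(const std::string& word, char guess, int userLives) {
    if (word.find(guess) != std::string::npos) {
        return userLives;       // hit: same effect as the "continue" branch
    }
    return userLives - 1;       // miss: the userLives-- branch
}
```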
At the end of each of those branches, before we continue, I'm gonna add a line that says system("cls"), which tells the console to clear, and it'll actually make our game look a lot better. You'll see what I mean when we run it again. Let's run it, and we'll enter in a G, and as you can see, it's not running down the screen anymore like it was last time. That's because when we clear the screen, it's reprinting it all in the same exact spot, so it looks like it's not going anywhere. That's the kind of effect that we want. That system("cls") is something new, but something easy, and something you can use in other applications; I just wanted to go ahead and show you that. So in the next tutorial, we're gonna completely finish it up, then test our application and conclude our class. Thank you for watching and I'll see you in the next tutorial. 27. Project -Hangman (Part #3): Hello. Welcome to Practical C++ Programming. My name is Zak, and this is our final tutorial. So in this tutorial, I've kind of already typed up and fixed everything that we needed to fix. If you look at our wordlist.txt, you can see I changed everything to lowercase, so we don't have any conflicts between our user input and the word that gets chosen. And what we really needed to add was a way to decide whether the user has won or lost after each guess. To do that, the first thing I had to do was declare a global variable called flag. Now, I'm not sure if we went over global and local variables, but all a global variable is, is a variable that is declared outside of all the functions. You can see it's not in the main function; this variable is actually declared underneath all my prototypes. And what this does is it allows this variable to be used in all of my functions across the board. Now, this normally isn't recommended.
You definitely don't want to do this with all your variables, for privacy reasons, but in this case it's gonna work out perfectly for us. So I created a Boolean variable called flag. You can name it whatever you want; I just called it flag. And I initialized it to false, so when the program first starts up, it's gonna be false. Then, if we scroll down a little bit in our while loop, you can see that I have the flag set to true right at the beginning of the while loop, and then I have a condition that says if flag equals true, break. So I want to show what this is doing. At the start of each pass through the while loop, the program sets flag to true, and then, if the flag is still true after these functions run, it's gonna break from the while loop. And I want to show you where this flag would get changed, and that's in my printBlanks function. So let's go down to printBlanks and I'll show you what happens. In my printBlanks function, all I did was say that if this branch is executed, set flag to false. But if this branch never gets executed at all, flag is going to stay true, because every time this for loop runs, it's gonna be running this piece of code rather than this piece of code. Which basically means that throughout the life cycle of this for loop, if even one blank gets printed, the flag is going to be set to false, meaning the puzzle has not yet been solved. Flag equals false means the word has not been completed, because there's still a blank. But let's say this for loop runs all the way through and not one blank gets printed. Then flag never gets set to false. And if we go back up to our main function, flag never got set to false, so flag is still true. After printBlanks, it says if flag equals true, break. At that point, when you break, you come out here outside the while loop, and I added these two conditional statements.
Basically they say: if userLives equals zero, then obviously you broke out of this loop because userLives hit zero. If you recall, our while loop basically said, while userLives is greater than zero, keep doing this; so if userLives equals zero, you break out of the loop and come down here. And if you come down here after breaking out with userLives at zero, you run this code, and it says "You lose. The word was..." and it tells you the word. But if userLives is greater than zero, then obviously you broke out of this while loop in a different way, which was via the flag. So let's say userLives is at three: flag is set to true, printBlanks runs, no blanks were printed, basically meaning all the letters were guessed correctly, flag is still true, and you break out of this while loop. Well, then you're breaking out of this while loop while userLives is three, and since three is greater than zero, this code gets run instead of this code, and it says "You win". And that's basically the functionality of this code. It's actually rather simple, because we just created one more variable, and you just have to think about it a little bit. You know, we came down here and we set the flag to false if one blank got printed. If a blank at all gets printed, the flag is gonna be false, and you will never be able to break out of the while loop via the if statement that checks whether flag is true. And the only other way to break out is if lives is equal to zero, and if lives equals zero, then you lose. So that's how this works. And when we run it, I want to show you our finished product. We'll go ahead and... so, this first word, just because we only have 10 words and I've run it a few times, I know what it is: it's Linux.
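The flag logic spelled out above is equivalent to asking: does every letter of the word appear in lettersGuessed? A standalone sketch of that check (solved is my name for it; the tutorial threads a global bool through printBlanks instead of computing it directly):

```cpp
#include <string>

// True once every letter of the word has been guessed, i.e. the
// "flag stays true because no blank was printed" condition from the
// tutorial, written as a direct check.
bool solved(const std::string& word, const std::string& lettersGuessed) {
    for (std::string::size_type i = 0; i < word.size(); i++) {
        if (lettersGuessed.find(word.at(i)) == std::string::npos) {
            return false;   // this position would still print a blank
        }
    }
    return true;            // no blanks left: the player has won
}
```

Writing it this way avoids the global variable entirely: main can just check solved(word, lettersGuessed) after each guess and break when it's true.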
So I'll miss a few on purpose, and it's starting to draw our man, as you can see. But if we get all the letters right, so we'll say I, N, U, X... and you can see it's telling us the letters that we've guessed, which is something I wanted to add in... then it says "You win. The word was linux". Process returned 0, and you can see that that's how the game works. Let's run it again. I think we actually have the same word... no, we didn't; this is a different word, not Linux, because it picked a different word out of the word bank. And this time we're gonna try to get it wrong, so we'll just guess random letters. You can see it's drawing our man more and more, and it says "You lose. The word was truck". And that's the basic functionality of this program. It's a basic hangman game, but we used literally, you know, everything that we could from this course, so everything that we learned, we got to utilize for this program. That's kind of why I picked this project for the end-of-the-course final project. So what I challenge you to do is, you know, convert this program into something more advanced. Maybe use file output, or something else that we learned in this course, to save high scores. Maybe use a word bank with 100 different words, and then use file output, say, each time you run it, to save your score to a file, then check what the high score in that file is, print out that high score compared to your score, and keep updating the file every time you run it. I would recommend trying that, and just really getting good at these beginner concepts. Run through these last few tutorials a few times so you can see how we use these functions, because when you take an advanced C++ course, you're gonna have to know all this stuff really well.
So the last thing I ask is, if you really enjoyed this course and you learned a lot, give me some good feedback, and maybe leave a review if you can. And if you really, really liked it, you can leave a review and just kind of tell me, you know, that you'd like to see an advanced course, and if I get enough people interested, I will definitely make an advanced course and we'll do some more cool projects. But for now, thank you for watching, and thank you for being a part of this course. Goodbye.