Alcoholics Anonymous (AA) is often included as an adjunct to psychotherapy for individuals suffering from addiction. As a culture unto itself, AA has its own customs, philosophy, and language, including what is commonly referred to as AA slogans. This study investigated the frequency with which psychologists use these slogans as well as how familiar and comfortable they are with them. Additionally, this study investigated whether these variables were related to psychologists' work setting or percentage of addicted caseload. Using a mix of quantitative and qualitative methodologies, results indicate that more than 80% of respondents utilize AA slogans at least some of the time. Familiarity varied greatly depending on the slogan. Over 80% of those surveyed are at least somewhat comfortable using AA slogans in psychotherapy. Work experience in an addiction treatment setting was a mediating variable for familiarity as well as use of specific AA slogans, though not for overall use of AA slogans or levels of comfort. A higher rate of use and familiarity was related to a higher caseload of addicted patients, while comfort was not related to caseload. Frequency of slogan use, familiarity, and comfort were significantly positively related to frequency of referral to AA. Themes regarding reasons for discomfort using AA slogans as well as their clinical utility were also explored.
Library of Congress Subject Headings
Dissertations (PsyD) -- Psychology; Alcoholics Anonymous; Alcoholism -- Treatment; Alcoholism counseling
Date of Award
2010
School Affiliation
Graduate School of Education and Psychology
Department/Program
Psychology
Degree Type
Dissertation
Degree Name
Doctorate
Faculty Advisor
de Mayo, Robert R.
Recommended Citation
Randall, Sarah L., "Psychologists' use of, familiarity, and comfort with Alcoholics Anonymous slogans in psychotherapy" (2010). Theses and Dissertations. 94. | https://digitalcommons.pepperdine.edu/etd/94/ |
Q:
Math Parlor Trick
A magician asks a person in the audience to think of a number $\overline {abc}$. He then asks them to sum up $\overline{acb}, \overline{bac}, \overline{bca}, \overline{cab}, \overline{cba}$ and reveal the result. Suppose it is $3194$. What was the original number?
The obvious approach was modular arithmetic.
$(100a + 10c + b) + (100b + 10a + c) + (100b + 10c + a) + (100c + 10a + b) + (100c + 10b + a) = 3194$
$122a + 212b + 221c = 3194$
Since $122, 212, 221 \equiv 5 \pmod 9$ and $3194 \equiv 8 \pmod 9$,
$5(a + b + c) \equiv 8 \pmod 9$
So $a + b + c \equiv 7 \pmod 9$, i.e. $a + b + c = 7$, $16$, or $25$
Hit and trial produces the result $358$. Any other, more elegant method?
A:
The sum of all six combinations is $222(a+b+c)$
So, $3194+100a+10b+c=222(a+b+c)$
Since $222(a+b+c) > 3194$ and $3194/222 \approx 14.4$, we need $a+b+c \ge 15$
If $a+b+c=15, 100a+10b+c=222(15)-3194=136$
$\implies a+b+c=1+3+6=10\ne 15$
If $a+b+c=16, 100a+10b+c=222(16)-3194=358$
$\implies a+b+c=3+5+8=16$ as needed
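A quick sanity check is a brute-force search over all three-digit numbers; here is a short Python sketch (the function name is illustrative):

    # Brute-force recovery of the original number from the revealed sum.
    def recover(total):
        hits = []
        for n in range(100, 1000):
            a, b, c = n // 100, (n // 10) % 10, n % 10
            # the five rearrangements the audience member adds up
            others = [100*a + 10*c + b, 100*b + 10*a + c, 100*b + 10*c + a,
                      100*c + 10*a + b, 100*c + 10*b + a]
            if sum(others) == total:
                hits.append(n)
        return hits

    print(recover(3194))  # prints [358]

For $3194$ the search returns the single candidate $358$, matching the modular-arithmetic argument above.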
A:
Let $S$ be the sum. Then
$$S \bmod 10 = A$$
$$S \bmod 100 = B$$
$$A = 2a + 2b + c$$
$$\frac{B-A}{10} = 2a + b + 2c$$
$$\frac{S-B}{100} = a + 2b + 2c$$
Now just solve the system of equations for $a$, $b$ and $c$, keeping track of the carries between columns (each column sum can exceed $9$).
The original number is then $100a + 10b + c$.
Memorize this and the addition can be done in your head.
The teacher creates a positive learning environment to facilitate the personal, social, and intellectual development of students. In order to respond to the individual needs and abilities of students, the teacher must work closely with other staff, the administration, and other programs of the school district. The teacher is responsible to the building principal.
Essential Functions:
- Facilitate the personal, social, and intellectual development of students.
- Establish a positive learning environment and respond to the individual needs of students.
- Ensure that all activities conform to district guidelines.
- Communicate effectively with parents, members of the school district and community.
- React to change productively and handle other tasks as assigned.
- Support the value of an education.
- Support the philosophy and mission of the Casa Grande Elementary School District.
- Plan and implement effective lessons, using time, materials and resources effectively.
- Motivate students through effective communication and evaluative feedback.
- Display a thorough knowledge of curriculum and subject matter.
- Demonstrate awareness of the needs of students and provide for individual differences.
Qualifications:
- Individual must possess a Bachelor's degree from an accredited college/university.
- A valid state teaching certificate or proof of eligibility.
- Ability to fulfill requirements to be Appropriately Certified under ESSA.
- General knowledge of curriculum and instruction.
- Excellent organizational, communication and interpersonal skills. | https://jobboard.simplifaster.com/job/pe-k-5-teacher/ |
J Tomlinson is an established, privately-owned company with a substantial heritage and wealth of experience in delivering integrated building solutions tailored to public and private sector clients. Due to our continued success and on-going commitment to providing a world-class service to our customers, we are looking to recruit a Buyer.
Our extensive portfolio of services includes the provision of Care, Commercial Refurbishment, Repairs and Maintenance, Engineering Services, Regeneration Programmes, Energy Efficiency and Renewables, and Facilities Management. We offer a totally integrated building solution.
Job Description
Job Title: Buyer
Purpose: Undertake the material and plant procurement for nominated division(s) or business unit(s), as well as supporting selected strategic procurement initiatives.
Specific Responsibilities
Key Results and Accountabilities
· Aligned with the Group Procure to Pay Process and Procedures. Procurement of all material packages, making procurement recommendations for all projects on VE opportunities and supply chain routes to market within a nominated division or business unit(s). Lead on IFS issues
· Aligned with the Group Plant Hire Process and Procedures. Procure, raise and close plant orders as required.
· Material and Plant cost management and reporting of sites
· Able to proactively manage and resolve procurement and associated operational, commercial and finance day to day issues
· Directing specifications based on full understanding of group agreements
· Successful support of the work winning function, integrating as an embedded member of the team
Key Job Tasks
· Become Procurement champion for a nominated division or business unit(s), owning all aspects of operational procurement for the division and its projects.
· Liaise with all work streams within the nominated division(s) or business unit(s) and become an integral team member supporting and adding value wherever possible
· Uses Group Agreements and works with the Strategic Procurement function to feedback relevant information to support the ongoing procurement of Group Agreements.
· Negotiation and placing of all material and plant orders for given sites, making sure all materials are delivered on time and within budget.
· Conducting quotation analysis of packages, and negotiating best value – presenting recommendations for approval
· Proactively make recommendations on specifications to incorporate group deals, better performing products and to drive value engineering opportunities.
· Agree and maintain accurate procurement schedules with site operational and commercial teams, reporting associated information as required
· Discuss and agree in principle with project and site managers the best procurement methods for the site
· Proactively identify strategic procurement opportunities for the division and wider business
· Support the work winning function with attendance to tender launch meetings, advising the team on procurement VE opportunities and negotiation and agreement of costs to be used in divisional bids
· Challenge specification based on group supply chain, preferred manufacturer and third-party agreements
· Complete bi-annual/quarterly stock takes and advise on consignment and van stock opportunities
· Liaising with suppliers and sites regarding any key information or any issues raised
· Attend monthly contract reviews as required, highlighting spend trends and inform of any industry issues
· Identify new suppliers, initiatives and potential cost savings within the business
· Resolution of invoice queries, and full understanding of procurement/ accounting system
· Complete regular site visits to build relationships, monitor progress and fluctuation required for procurement schedules and highlight issues
· Attend supplier review meetings
· General administration
· Senior Buyer – support and mentor more junior members of the Procurement Team
Criteria
· Good communication skills – both written and verbal
· Able to build positive productive working relationships with stakeholders
· Able to work effectively as part of a team
· Systematic, organised approach
· Able to manage time and priorities effectively
· Positive, can do and pro-active attitude is essential
Education
Construction Management or Construction Commercial (HNC or equivalent), or Corporate membership of the Chartered Institute of Procurement & Supply (or significant progress towards achieving it)
What We Offer
In return we offer a competitive salary depending on experience and qualifications. We also offer attractive benefits that include pension, holiday purchasing scheme and life assurance.
How to Apply
To apply for this role please ensure that you fit the eligibility criteria above. Send your CV and a covering letter using the form below. We look forward to hearing from you.
Due to a high volume of applications we receive we regret that we cannot respond to each one and may only contact you if your application is successful.
J Tomlinson is proud to be recognised as a Disability Confident Committed employer. We are an equal opportunities employer and positively encourage applications from suitably qualified and eligible candidates regardless of sex, race, disability, age, sexual orientation, gender reassignment, religion or belief, marital status, or pregnancy and maternity. | https://www.jtomlinson.co.uk/careers/buyer/ |
East Africa is characterized by a rather dry annual precipitation climatology with two distinct rainy seasons. In order to investigate sea surface temperature driven precipitation anomalies for the region we use the algorithm of empirical orthogonal teleconnection analysis as a data mining tool. We investigate the entire East African domain as well as 5 smaller sub-regions mainly located in areas of mountainous terrain. In searching for influential sea surface temperature patterns we do not focus on any particular season or oceanic region. Furthermore, we investigate different time lags from 0 to 12 months. The strongest influence is identified for the immediate (i.e., non-lagged) influences of the Indian Ocean in close vicinity to the East African coast. None of the most important modes are located in the tropical Pacific Ocean, though the region is sometimes coupled with the Indian Ocean basin. Furthermore, we identify a region in the southern Indian Ocean around the Kerguelen Plateau which has not yet been reported in the literature with regard to precipitation modulation in East Africa. Finally, it is observed that not all regions in East Africa are equally influenced by the identified patterns.
1. Introduction
In contrast to other tropical areas, East Africa is characterized by a rather dry annual precipitation climatology. Rainfall exhibits a strong seasonal signal with two distinct rainy seasons throughout the year (Yang et al., 2014). The major rainy season, the so-called “long rains,” is from March until May (MAM), while the second rainy season from October to December (OND), the “short rains,” is more variable but usually centered around November. The modulation of these rainy seasons by regional to global sea surface temperature (SST) anomalies has been the focus of numerous studies in the past (e.g., Rocha and Simmonds, 1997; Mutai et al., 1998; Latif et al., 1999; Plisnier et al., 2000; Behera et al., 2005; Black, 2005; Marchant et al., 2007; Ummenhofer et al., 2009; Manatsa et al., 2012, 2014; Manatsa and Behera, 2013; Bahaga et al., 2015; Tierney et al., 2015). From these studies it becomes evident that, at least for the “short rains,” the Indian Ocean Dipole (IOD) plays a much bigger role than the El Niño Southern Oscillation (ENSO) in East Africa. There is, however, a clear tendency of most of these studies to (i) focus on particular seasons and/or (ii) focus on the influences of one or two predefined (coupled) ocean(-atmosphere) indices such as IOD or ENSO, to name the two most widely investigated for this area. There are, to the best of our knowledge, no climatological studies to date that have approached SST induced precipitation influences over East Africa in a holistic, data driven manner. Furthermore, most studies investigate SST-precipitation linkages for a rather broad regional area, lacking conclusions on local scales. We intend to fill these gaps by investigating potential SST driven precipitation anomalies for (i) the complete time interval between 1982 and 2010 and (ii) for several sub-regions of about 100 × 100 km in addition to the entire East African domain. So far, few studies exist that link SST influences and eco-climatological anomalies for selected local regions in East Africa, such as Chan et al. (2008) and Otte et al. (personal communication) for Mt. Kilimanjaro, but these are often based on in situ rain gauge observations which limit their area-wide significance, even for the rather small local domain. In this study, we approach the SST driven precipitation anomalies at Mt. Kilimanjaro in a spatially and temporally holistic manner. Area-wide high-resolution precipitation grids (see next section for details) are being used over the complete period between 1982 and 2010, not limiting the investigation to any seasonal period.
2. Materials and Methods
2.1. Data
Monthly SSTs between 60°N and 60°S for the entire globe are obtained from the NOAA SST product (NOAA OI SST V2, Reynolds et al., 2007). For precipitation information we use monthly Climate Hazards Group Infra Red Precipitation with Station data (CHIRPS) version 2.0 (Funk et al., 2014). CHIRPS is a 30+ year quasi global rainfall data set, which is available from 1981 until the recent present and has a resolution of 0.05° × 0.05° (longitude × latitude). It incorporates a number of satellite precipitation products including the Tropical Rainfall Measuring Mission (TRMM), rainfall fields from NOAA's Climate Forecast System version 2 (CFSv2) as well as in situ precipitation observations. For a detailed description of the incorporated data and the applied methodology for the creation of CHIRPS, the reader is referred to Funk et al. (2014). Figure 1 gives an overview of the precipitation domains used in this study. In addition to the entire East African region we also investigate SST driven precipitation anomalies for five small mountainous sub-regions, namely Lake Tana, Bale Mountains, Mt. Kenya, Mt. Kilimanjaro and Mt. Loleza (from north to south), to examine whether local differences in identified patterns can be observed. Note that for the entire East African domain CHIRPS data was re-sampled to 0.25° × 0.25° (longitude × latitude) to ensure acceptable computation times. Prior to the analysis, both data sets were tested for break points potentially stemming from changing observational input data over time. No such break points were found in either data set.
Figure 1. Mean annual precipitation between 1982 and 2010 in the response domains. Black squares show the locations of the small response domains.
2.2. Methods
In order to identify the influence of SST anomalies on East African rainfall we use Empirical Orthogonal Teleconnection (EOT) analysis as described in Appelhans et al. (2015). EOTs were first introduced to the international literature as an alternative to the classical approach of Empirical Orthogonal Functions (EOF) by van den Dool et al. (2000), who outline that both EOT and EOF are indeed very similar techniques, with the former producing less abstract results. EOTs carry a quantitative meaning in the form of explained variance, thus enabling intuitive interpretation of the results. In brief, the algorithm works as follows:
1. Each pixel of the predictor domain time series is regressed against all pixels of the response domain time series.
2. The predictor pixel with the highest sum in the coefficients of determination is identified as the base point of the first mode.
3. All pixels of the predictor and response domains are then again regressed against this base point to quantify the relationships between this point and the rest of the domains.
4. To identify further modes, steps 1–3 are repeated on the residuals of the preceding mode, thus ensuring orthogonality.
These steps can be repeated until a desired number of modes is identified. Apart from similarity in the temporal dimension (i.e., an identical number of data points over time), the algorithm can be applied to any two data series without any requirements such as identical spatial resolution or physical units of the data. For detailed descriptions of the mathematics behind EOT analysis the reader is referred to van den Dool et al. (2000) and van den Dool (2007). Here, we use SSTs as the predictor series and precipitation as the response series and limit our investigation to the first two modes. Both data sets were pre-processed to extract climatic signals inherent in the time series. Seasonality was removed by subtracting the long-term monthly mean. Background noise within the series was removed by principal component analysis (PCA), keeping those components that describe just above 80% of the variance inherent in the time series. Potential linear trends in the data sets were not removed as we explicitly want to capture possible changes in the relationships between the two data sets over time. In total we analyze 29 years of monthly SST and precipitation anomalies for the time period 1982–2010. We apply the EOT approach to moving chunks of 5 years of monthly observations. This is first carried out for the 60 time slices between 1982 and 1986. Then the application window is moved forward by 1 year so that the analysis is repeated with the next set of observations ranging from 1983 to 1987. This procedure is repeated until the last available 5 year chunk between 2006 and 2010 is analyzed, producing 24 sets of modes between 1982 and 2010 in total. For each of these chunks we analyze temporal lags between SSTs and precipitation from 0 to 12 months.
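To make the four algorithmic steps above concrete, the following is a minimal numerical sketch of the base-point search. It is illustrative only: the array names and shapes are assumptions, a simple least-squares slope (no intercept) is fitted to centred anomaly series, and the published analysis itself was carried out with the remote package for R (Appelhans et al., 2015).

    # Minimal sketch of the EOT base-point search (steps 1-4 above).
    # Assumed inputs (illustrative names):
    #   sst  : shape (n_months, n_sst_pixels)   -- predictor anomalies, centred
    #   prcp : shape (n_months, n_prcp_pixels)  -- response anomalies, centred
    import numpy as np

    def eot_modes(sst, prcp, n_modes=2):
        resid_sst, resid_prcp = sst.astype(float), prcp.astype(float)
        modes = []
        for _ in range(n_modes):
            # Step 1: correlate every predictor pixel with every response pixel
            # and sum the squared correlations per predictor pixel.
            sst_sd = resid_sst.std(axis=0) + 1e-12
            prcp_sd = resid_prcp.std(axis=0) + 1e-12
            r = (resid_sst.T @ resid_prcp) / resid_sst.shape[0]
            r /= np.outer(sst_sd, prcp_sd)
            r2_sum = (r ** 2).sum(axis=1)

            # Step 2: the predictor pixel with the highest summed r^2 is the base point.
            base = int(np.argmax(r2_sum))
            base_series = resid_sst[:, base]

            # Step 3: regress both domains on the base-point series (slope only,
            # since the series are centred).
            denom = base_series @ base_series
            slope_prcp = (resid_prcp.T @ base_series) / denom
            slope_sst = (resid_sst.T @ base_series) / denom
            modes.append({"base_point": base, "summed_r2": float(r2_sum[base])})

            # Step 4: subtract the fitted signal so the next mode is computed
            # on the residuals (orthogonality in time).
            resid_prcp = resid_prcp - np.outer(base_series, slope_prcp)
            resid_sst = resid_sst - np.outer(base_series, slope_sst)
        return modes

Applied to each 5-year chunk of 60 monthly anomalies, and to each lagged copy of the SST series (0–12 months), such a search yields the base points whose explanatory power is summarized in Figure 2.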
3. Results
In this section we focus on the description of the results found for the entire East African domain and provide references to the corresponding findings within the sub-domains (see Supplementary Materials) where appropriate. For each domain we identify a total of 624 base points (24 chunks * 13 lags * 2 modes) and associated SST regions, represented by the coefficients of determination between the base points and each pixel in the SST domain. In order to understand the temporal dynamics of these patterns, Figure 2 provides an overview over the combined explanatory power of modes 1 + 2 over all 312 chunk/lag combinations analyzed for the East African domain (left panel) and the corresponding correlation coefficients for mode 1 (center panel) and mode 2 (right panel) individually. The solid white squares highlight the 16 modes with highest explanatory power (the upper 0.95 quantile of explained variance in the left panel). In general, and irrespective of lag times, the 1980s, early 1990s and early 2000s reveal reduced influence, while the mid to late 1990s and mid to late 2000s show enhanced explanatory power. All of the 16 most important chunk/lag combinations occur during these periods. A detailed overview of the identified patterns for the 16 most dominant base points is given in Figures 4 and 5 for the leading modes and their respective secondary modes. It becomes evident that some of these patterns are rather solitary occurrences, e.g., “1995–1999 lag12,” “2001–2005 lag12,” and “2003–2007 lag00.” The latter clearly shows the influence of a positive IOD event in mode 1, which is also the mode that provides the vast majority of the explanatory power as can be seen in Figure 2. In line with previous studies, this positive IOD phase related mode exhibits the strongest absolute influence in our analysis. Given the close vicinity of the SST pattern to the response domain this is hardly surprising. In addition to these solitary patterns, we see a few patterns evolving that occur at least twice (indicated by the small letters in Figure 2). This is, however, only true for the leading modes in Figure 4. The second modes shown in Figure 5 do not show any pattern consistency, either relative to the leading modes or overall. This is true for all domains as can be seen in the respective figures in the Supplementary Materials. Figure 3 provides an overview of these labeled patterns averaged over all mode 1 occurrences together with the respective averaged correlation coefficients in the response domain. We see that the shortest lag time is found in the Indian Ocean (Figure 3a). This pattern is related to the negative phase of the IOD, exhibiting negative correlation to East African rainfall. This pattern is also found for the smaller domains that are located near or south of the equator, namely Mt. Kenya, Mt. Kilimanjaro and Mt. Loleza, even though the lag times differ slightly (see Figures S10a, S14a, and S18a in Supplementary Materials). Pattern 3b with a lag of 8 to 9 months is located in the extra-tropical Indian Ocean and is positively correlated with East African precipitation. The location just north of the Kerguelen Islands indicates that the Antarctic Polar Front plays a role here. Moore et al. (1999) report consistent modulation of SSTs around the Kerguelen Plateau, a large sub-marine topographical feature that poses a natural obstacle for the Antarctic Circumpolar Current (ACC). To the best of our knowledge, there are no previous studies linking this oceanic region to East African climate.
Yet, this is the only pattern we see in the large East African domain as well as in all small sub-domains, which highlights its importance for the region. Pattern 3c is located in the equatorial Atlantic and has a lag of 5–6 months. It is negatively correlated with East African precipitation and the only sub-domain to also reveal this pattern is the one surrounding Mt. Kenya. Pattern 3d with 9–10 months lag is located in the southern subtropical Atlantic Ocean and shows negative correlation coefficients with the response domain. In comparison to the other negatively correlated patterns, this exhibits the strongest signal, especially in the East African lowlands. This pattern as such is not found in any of the sub-domains, yet Lake Tana and Bale Mountains exhibit a similar pattern both in terms of location and lag times indicating that this pattern mainly influences the northern parts of the East African region. Pattern 3e is located in the south-eastern subtropical Pacific and is lagged by 10 months. Given that no sub-domain exhibits any patterns similar to this positively correlated influence on East Africa, it may be that this pattern is especially important for the lower elevated regions throughout the domain, as all our sub-domains are domains of complex terrain.
Figure 2. Left: Explained space-time variance of the East African response domain for modes 1 and 2 by chunk and lag. Center: Mean correlation coefficient between the identified base point and all pixels in the response domain by chunk and lag for mode 1. Right: Mean correlation coefficient between the identified base point and all pixels in the response domain by chunk and lag for mode 2. Labels a–e show the chunk/lag combinations that are used in Figure 3. Solid white squares denote upper 0.95 quantile in explained variance.
Figure 3. The five most important SST regions for the East African domain as indicated by the small letters in Figure 2. Left panels show mean coefficient of determination between each pixel of the predictor domain and the respective base points (red diamonds). Right panels show average correlation coefficient between each pixel in the response domain and the respective base points. Lag times are provided in the lower left corner of the left panels.
Figure 4. The 16 most important patterns in mode 1 (upper 0.95 quantile in explained variance) as identified by the white squares in Figure 2.
Figure 5. The 16 most important patterns in mode 2 (upper 0.95 quantile in explained variance) as identified by the white squares in Figure 2.
Regarding the East African lowland regions, it is generally observed that the correlations are higher for these areas than for the areas of complex terrain, regardless of domain size, pattern location or lag times. One potential reason for this could be found in the nature of the data, with gridded data being generally less accurate over areas of more heterogeneous topography. A further general observation is that the two oceanic basins surrounding the African continent seem to play a bigger role than the tropical Pacific Ocean, though isolated patterns do exist, and the tropical Pacific is at times coupled with the Indian Ocean, which points to influences from this basin. Regarding the consistency between first and second modes, which, if found, would indicate teleconnectivity, we do not find any patterns that re-occur within the second mode as they do in the first in any of the domains. This indicates that, should such precipitation modulating teleconnections between oceanic regions exist, they operate on temporal scales beyond the five-year chunks that we have investigated here.
4. Discussion
In this study we have used empirical orthogonal teleconnection analysis to identify global sea surface temperature regions that influence precipitation in East Africa and selected montane sub-regions within it. We did not limit our analysis to any season or any region; rather, we investigated SST influences in 5 year chunks for the entire period between 1982 and 2010 and lag times between 0 and 12 months. Our approach does not enable any process inference as it is purely statistical. There has been much focus in the international literature on understanding the dynamic links between IOD and ENSO and East African precipitation. Here, we found that the region around the Kerguelen Plateau in the southern Indian Ocean plays a role throughout the entire East African domain, regardless of scale, which has not been reported in the literature before. Furthermore, we found (i) that there are distinct times of enhanced influences, (ii) that the most influential SST regions are located in the Indian and Atlantic Ocean basins, (iii) that the Indian Ocean Dipole exhibits the highest explanatory power with regard to precipitation modulation in East Africa, (iv) that not all identified influences are equally important throughout the East African domain and (v) that lowland areas are generally more influenced than mountainous regions. Our findings suggest that any inference of previously reported SST influences on East African precipitation needs to be carefully tested when applied to smaller areas within the region, as it cannot be assumed that these will be of equal importance throughout the entire domain. This is also true for considerations over time. From our analysis it becomes evident that influential SST regions shift depending on the considered time span. Therefore, assumptions about regional SST influences in East Africa should be carefully considered when applied to investigations on local spatial and differing temporal scales. In addition, topography also influences correlations, so that this aspect needs to be considered as well. In light of the identified influential region around the Kerguelen Plateau, we suggest that this region be investigated more closely in the future, especially regarding the processes underlying this link to precipitation in the East African region. Our findings suggest that there are other regions that also play a role in influencing precipitation over the region and it would be desirable to understand the driving processes at a similar level of detail as we already do for IOD and ENSO.
Author Contributions
TA performed all analyses and wrote most of the manuscript. TN significantly contributed to the final submitted manuscript version.
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments
This work was carried out in the framework of the DFG Research Unit 1246 KiLi - Kilimanjaro ecosystems under global change: Linking biodiversity, biotic interactions and biogeochemical ecosystem processes, funded by the German Research Foundation (DFG, funding ID Ap 243/1-2, Na 783/5-1, Na 783/5-2). We are grateful for the constructive reviewer feedback, which was very helpful and improved the quality of the manuscript significantly.
Supplementary Material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/article/10.3389/feart.2016.00003
References
Appelhans, T., Detsch, F., and Nauss, T. (2015). remote: Empirical orthogonal teleconnections in r. J. Stat. Softw. 65, 1–19. doi: 10.18637/jss.v065.i10
Bahaga, T. K., Mengistu Tsidu, G., Kucharski, F., and Diro, G. T. (2015). Potential predictability of the sea-surface temperature forced equatorial east african short rains interannual variability in the 20th century. Q. J. R. Meteorol. Soc. 141, 16–26. doi: 10.1002/qj.2338
Behera, S. K., Luo, J.-J., Masson, S., Delecluse, P., Gualdi, S., Navarra, A., et al. (2005). Paramount impact of the indian ocean dipole on the east african short rains: a cgcm study. J. Clim. 18, 4514–4530. doi: 10.1175/JCLI3541.1
Black, E. (2005). The relationship between Indian Ocean sea-surface temperature and East African rainfall. Philos. Trans. A Math. Phys. Eng. Sci. 363, 43–47. doi: 10.1098/rsta.2004.1474
Chan, R. Y., Vuille, M., Hardy, D. R., and Bradley, R. S. (2008). Intraseasonal precipitation variability on Kilimanjaro and the East African region and its relationship to the large-scale circulation. Theor. Appl. Climatol. 93, 149–165. doi: 10.1007/s00704-007-0338-9
Funk, C., Peterson, P., Landsfeld, M., Pedreros, D., Verdin, J., Rowland, J., et al. (2014). A Quasi-Global Precipitation Time Series for Drought Monitoring. Technical Report 4, U.S. Geological Survey Data Series.
Latif, M., Dommenget, D., Dima, M., and Grötzner, A. (1999). The role of Indian Ocean sea surface temperature in forcing East African rainfall anomalies during December–January 1997/98. J. Clim. 12, 3497–3504.
Manatsa, D., and Behera, S. K. (2013). On the epochal strengthening in the relationship between rainfall of east africa and iod. J. Clim. 26, 5655–5673. doi: 10.1175/JCLI-D-12-00568.1
Manatsa, D., Chipindu, B., and Behera, S. (2012). Shifts in iod and their impacts on association with east africa rainfall. Theor. Appl. Climatol. 110, 115–128. doi: 10.1007/s00704-012-0610-5
Manatsa, D., Morioka, Y., Behera, S., Matarira, C., and Yamagata, T. (2014). Impact of mascarene high variability on the east african short rains. Clim. Dyn. 42, 1259–1274. doi: 10.1007/s00382-013-1848-z
Marchant, R., Mumbi, C., Behera, S., and Yamagata, T. (2007). The Indian Ocean Dipole – the unsung driver of climatic variability in East Africa. Afr. J. Ecol. 45, 4–16. doi: 10.1111/j.1365-2028.2006.00707.x
Moore, J. K., Abbott, M. R., and Richman, J. G. (1999). Location and dynamics of the antarctic polar front from satellite sea surface temperature data. J. Geophys. Res. Oceans 104, 3059–3073. doi: 10.1029/1998JC900032
Mutai, C., Ward, M., and Colman, W. (1998). Towards the prediction of the East Africa short rains based on sea surface temperature atmosphere coupling. Int. J. Climatol. 18, 975–997.
Plisnier, P., Serneels, S., and Lambin, E. (2000). Impact of enso on east african ecosystems: a multivariate analysis based on climate and remote sensing data. Glob. Ecol. Biogeogr. 9, 481–497. doi: 10.1046/j.1365-2699.2000.00208.x
Reynolds, R. W., Smith, T. M., Liu, C., Chelton, D. B., Casey, K. S., and Schlax, M. G. (2007). Daily high-resolution-blended analyses for sea surface temperature. J. Clim. 20, 5473–5496. doi: 10.1175/2007JCLI1824.1
Rocha, A., and Simmonds, I. (1997). Interannual Variability of South-Eastern African Summer Rainfall. Part 1: Relationships With Air–Sea Interaction Processes. Int. J. Climatol. 17, 235–265.
Tierney, J. E., Ummenhofer, C. C., and deMenocal, P. B. (2015). Past and future rainfall in the horn of africa. Sci. Adv. 1:e1500682. doi: 10.1126/sciadv.1500682
Ummenhofer, C. C., Gupta, A. S., and England, M. H. (2009). Contributions of Indian Ocean Sea surface temperatures to enhanced East African rainfall. J. Clim. 22, 993–1013. doi: 10.1175/2008JCLI2493.1
van den Dool, H., Saha, S., and Johansson, R. (2000). Empirical orthogonal teleconnections. J. Clim. 13, 1421–1435. doi: 10.1175/1520-0442(2000)013<1421:EOT>2.0.CO;2
van den Dool, H. M. (2007). Empirical Methods in Short-Term Climate Prediction. Oxford; New York: Oxford University Press.
Yang, W., Seager, R., Cane, M. A., and Lyon, B. (2014). The annual cycle of east african precipitation. J. Clim. 28, 2385–2404. doi: 10.1175/JCLI-D-14-00484.1
Keywords: climatology, sea surface temperatures, precipitation, Kilimanjaro, empirical orthogonal teleconnection
Citation: Appelhans T and Nauss T (2016) Spatial Patterns of Sea Surface Temperature Influences on East African Precipitation as Revealed by Empirical Orthogonal Teleconnections. Front. Earth Sci. 4:3. doi: 10.3389/feart.2016.00003
Received: 29 August 2015; Accepted: 11 January 2016;
Published: 09 February 2016.
Edited by:Jing-Jia Luo, Australian Bureau of Meteorology, Australia
Reviewed by:Federico Porcu, University of Bologna, Italy
Tomoki Tozuka, University of Tokyo, Japan
Copyright © 2016 Appelhans and Nauss. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. | https://www.frontiersin.org/articles/10.3389/feart.2016.00003/full |
The model is supposed to bring renewed prosperity to the United States but it brought more inequality and stripped safety net programs that actually helped most Americans. This lack of assistance means that struggling people are struggling even more and they have less money to spend and to put back into the economy. Since the creation of the Better Business Climate model, government spending on food stamps, unemployment insurance, and other social programs has been cut as
to recover from this depression. The unprecedented occurrences which happened in the late 1920’s and 1930’s caused much to change in America: socially, financially, and politically. Many laws and regulations were passed to prevent something similar from happening in the future, such as the Agricultural Adjustment Administration, the Federal Deposit Insurance Corporation, and the National Recovery Administration (Timeline). People who lived during the Great Depression often suffered because of it for the rest of their lives. People were forced to be stingy to survive, and after the depression was over they squandered their money on luxuries and necessities alike.
Even the number of hungry people in the world exceeds the total population of the US and the European Union. Extreme hunger and malnutrition remain a blockade to development and create a trap from which people cannot easily escape. Hunger and malnutrition mean less productive individuals, who are more susceptible to disease and often unable to earn more and improve their livelihoods. There are nearly 800 million people in this world who suffer from hunger worldwide, the majority
This affected the nation significantly: as the population decreased, fewer children would grow up to work for the nation, thus creating less income and therefore not increasing the nation’s GDP as much as it could. The number of immigrants accepted into Canada dropped to less than 12,000 in 1935 from 169,000 in 1929, a decline of more than 90%. The number of immigrants accepted into Canada never rose above 17,000 for the remainder of the decade. The number of deportations, however, rose from fewer than 2,000 people in 1929 to more than 7,600 in just under four years. In addition to the deportations, approximately 30,000 immigrants were forcibly returned to their original countries over the course of the decade, predominantly due to illness or unemployment.
It had all kinds of effects in countries both rich and poor. Cities and countries across worldwide markets were hit extremely hard; those hit hardest were dependent on heavy industry. Other sectors also hurt were construction, farming, mining and logging. Some economies had started to recover by the mid-1930s, but for many countries the negative effects of the Great Depression lasted until the start of World War 2.
The 2008-2009 Financial Crisis
The 2008-2009 financial crisis was the worst financial crisis since World War 2; it threatened the total collapse of large financial institutions all around the world, which in turn was prevented by the bailout of banks by national governments.
“Fifty percent of all child deaths are involved in undernourishment” (‘Facts About Hunger And Poverty’). Fifty percent is a huge number, especially when it comes to child deaths. Many people do not pay attention to their surroundings and often do not care that there are people in this world who are dying because of hunger. World hunger is mainly caused by food wastage, but remembering not to waste food can change everything. World hunger is resulting in many people suffering and even dying.
The time period in which the book was written is the 1930s, and it was a turbulent time for race relations. During that period an economic slump, called the Great Depression, affected many people’s lives, as it was the most severe depression ever experienced by an industrialized country. Factors like the Jim Crow laws and the second Ku Klux Klan also resulted in white people discriminating against black people. The Great Depression is an important era in the United States’ history. In the 1930s, the complications that came along with the Great Depression affected the public severely.
Whipps continues, “Densely populated Europe, which had seen a recent growth in the population of its cities, was a tinderbox for the disease” (2). Also, since resources were scarce, people were starving, which weakened their immune systems. There were many after-effects of the Black Death, such as labor shortages, loss of religious faith, and the displacement of Jewish people. Every factor played a part in causing the bubonic plague to become an
Chronic hunger and famine claim many victims each year. Women are more likely to be sick and to have smaller babies that die earlier, resulting in high levels of infant mortality. In areas where chronic hunger is a problem, communities are caught in a vicious cycle of malnutrition and death. Effects also include vulnerability to common illness; more than two million children die every year from dehydration caused by diarrhea. Malnourished children often lack the strength to survive a severe case of diarrhea.
What causes hunger is a fundamental question, with varied answers: poverty is the principal cause of hunger; harmful economic systems are a principal cause of poverty and hunger; conflict is a cause of hunger and poverty; hunger is itself a cause of poverty, and thus of further hunger; and climate change and micronutrient deficiencies also play a part. Moreover, circumstances such as unemployment will certainly increase poverty in such a way that parents and child-headed families will not be able to support and care for their families effectively. 2.3 Western Worldview on World Hunger: This worldview originates from the non-Communist states of Europe and North America, in contrast to the Eastern one. (Historically, the word is also associated with a genre of film, television drama, and novels.) This view of hunger is somewhat complicated, as it focuses mostly on individualistic characteristics, so believers of this view act in different ways: one may care for and give to those who need food by donating to charity, while another may own a charity that collects money from other organisations but directly or indirectly benefits from that money as an individual in terms of their own private life.
Even though this is a high poverty rate, the poverty rate in South Korea is 15% (World Bank). South Korea’s population growth did drop from 1979 to 2008, but it wasn’t as effective and didn’t help the poverty rate. Not only did China’s One Child Policy help with so many things, it also greatly helped reduce
The Great Depression of 1929 was one of America’s most consequential downfalls and crippled society for years. The depression caused many years of failure and poverty for almost all of society. The government’s role during these times was critical for turning around the economy. The depression had a major effect on the government’s power and involvement with the people and states. The government was less involved before the depression.
In 1929, the economy failed, unemployment rates soared, and almost every urban and rural family alike faced hardships. The Great Depression was in full effect and poverty gripped America. This economic depression lasted for about 12 years and grew into a horrific global problem. The depression was caused by the stock market crash of 1929, uneven prosperity, high supply and low demand, tight and loose monetary policies, as well as the reduction of foreign trade. As the financial calamity continued to worsen, Herbert Hoover, the 31st president (in office 1929–1933), worked to meet the difficulties facing the American people and their economy.
When a person hears the words “The Great Depression,” almost everyone thinks of the worst economic times in the United States. The Great Depression started in the late 1920s and continued through the 1930s. It was the worst worldwide economic downturn in history. It remains the most important economic event in American history still today. This tragic event caused hardship for millions of people and the failure of many businesses, banks, and farms.
Hunger, unmet nutritional needs, and the medical complications that result from these conditions are the leading causes of death among the population living in poverty. This is a huge problem that needs to be brought to the attention of the public. Poverty breeds hunger and malnutrition. A congressional investigation in 1968 revealed widespread hunger among the poor in communities across the United States. In the early 2000s these problems were still found in the majority of rural counties of central Appalachia, along with many other surrounding and similar areas (“Poverty”).
Horicon Marsh, WI has been formally recognized as a Wetland of International Importance by the Ramsar Convention of the United Nations. This renowned marsh is home to the Horicon Marsh Education & Visitor Center.
The Wildlife Education Program has been conducted at the marsh since the mid-1980’s. This program focuses on the abundant wildlife resources of the marsh, their ecology and applied management. Public naturalist programs, special events and school education programs aim to connect people with wildlife and their environment by providing outdoor education programs.
In 1992, Wisconsin Department of Natural Resources (DNR) purchased the former Flyway Clinic, a 16,000 square foot building located along Hwy 28, with the intent of developing this as an education center. The building had been abandoned and only the upper floor had actually been developed. This served as the DNR’s staff office in the Horicon Marsh area and tentative plans were drawn up to expand this to also serve as an education facility.
A non-profit Friends Group was established in 1994 as a fundraising organization to support this cause. The organization has provided countless hours of volunteer assistance to the education program. Following a long campaign, sufficient funds were raised to allow the hiring of an architect to develop the final construction plans. In the end, the Friends of Horicon Marsh Education & Visitor Center reached its goal of raising $1.9 million towards construction of the Center. The State of Wisconsin matched this through the Building Commission and additional funds were provided to DNR to renovate the office area to house its staff, creating a $4.8 million project. After 18 months of construction the new Education Center was completed in late March 2009.
The Education & Visitor Center brings a modern design and provides for enhanced visitor services. The lobby of the building features a spectacular Marsh Viewing Area, a Children’s Discovery area that provides seasonally changing hands-on activities for children to explore various facets of nature, a front desk providing visitor information, and the Flyway Gift Shop, which has a range of items for visitors to enhance and remember their experience at Horicon Marsh.
In the lower level are two classrooms, which can be opened up into one large room. The classrooms have direct access onto an outdoor patio featuring a giant map of Horicon Marsh. The patio leads visitors onto trails that travel down to the edge and through the marsh.
There is also an auditorium with seating for up to 100 people. It comes with an 8 x 12 foot wide screen, rear projection audio-visual system capable of high-definition projection. This system can project laptop computers, DVD and Blu-Ray discs, all of which can serve the needs for a wide range of meetings, conferences, workshops, and public education programs.
In August 2015 the Explorium opened in the lower level of the Center. The Explorium gives you a hands-on experience of Horicon Marsh life. See Horicon Marsh thousands of years before European settlement and witness how the current wetland came to be. The experience is narrated by a Clovis point arrowhead, which keeps visitors company throughout the journey as they view, listen, touch and even smell exhibits that document the changes to the marsh over time. Videos and interactive displays greet guests at every turn, encouraging audiences of any age to learn more about the history and ecology of Horicon Marsh.
All of these amenities serve to enhance the visitor experience at Horicon Marsh. The center creates a beautiful environment for exploring and learning at any time of year for visitors from all corners of the world. | https://www.horiconmarsh.org/education-center/about-the-center/ |
I have an Iomega 1TB portable external hard drive. I’ve been using it on my Windows laptop for the last year or so, basically for storing photos. I hadn’t used it in 2 months or so, and just went to move some more photos onto it, and the following message comes up:
“The disk is write protected. Remove the write protection or use another disk” – Try Again or Cancel.
I don’t understand. Nothing’s changed. I can still access all the photos on the hard drive, so I don’t think it’s corrupted. It’s just that I can’t copy any more new photos onto it. I’ve only used about 150GB so still heaps of room.
I’ve searched forums, and tried the diskpart command process for both the volume and the disk, and that doesn’t appear to have worked, even after rebooting. I would appreciate some thoughts from some experts (my IT knowledge is not very strong!!)!
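For reference, the diskpart sequence I tried was essentially the standard one for clearing the read-only attribute (the disk/volume numbers below are just placeholders for my drive):

    diskpart
    list disk
    select disk 1
    attributes disk clear readonly
    list volume
    select volume 2
    attributes volume clear readonly
    exit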
Thanks in anticipation. | https://www.makeuseof.com/answers/external-hard-drive-suddenly-become-write-protected/ |
I don’t know how to handle this Political Science question and need guidance.
You must choose an ethical dilemma you have run across, articulate that dilemma, as well as your point of view. Your posting should be full and complete so that the dilemma can be understood, the considerations, and what is your position on the dilemma.
| https://writinggeeks.net/apus-intelligence-studies-accountability-of-intelligence-agencies-paper/ |
Microbial communities are important regulators of organic matter decomposition and nutrient cycling in soils. In arid regions such as Arizona and New Mexico about half of the annual total precipitation occurs during the summer monsoon season when plant growth is rapid and nutrient requirements are high. This study will investigate how monsoon rains and plant growth impact the soil microbial community in these arid regions. Three separate experiments will be conducted to assess how soil microbial biota is influenced by 1) timing of the water addition, 2) degree of moisture, and 3) plant growth. Instantaneous changes in bacterial community composition are expected with watering, while a delayed response is predicted for the fungal community. In plots where moisture is withheld, changes in the structure of both bacterial and fungal communities are predicted to be less pronounced. Finally, plant removal is expected to prevent establishment of many fungal populations and some bacterial strains.
Arid land ecosystems are undergoing change with grasslands being replaced by desert shrubs. Understanding how the soil microbial community responds to seasonal rains is critical to understanding organic matter decomposition and nutrient cycling in desert ecosystems, which in turn, is important for understanding vegetation dynamics. A partnership with a local Native American-serving K-8 school will allow the researchers to bring their research into the classroom, with lessons centering on the three sisters planting scheme — a traditional Native American companion planting of corn, beans, and squash. | http://www.secheresse.info/spip.php?article56924 |
The Earth may be on the brink of a sixth mass extinction, due to human activity, according to the academic journal Science.
The Earth’s most recent mass extinction event occurred roughly 65 million years ago, when an asteroid wiped out 75% of all existing species, including the dinosaurs.
Commenting on the progression of Earth’s present defaunation, or loss of species, Science author Sacha Vignieri said, “human impacts on animal biodiversity are an under-recognized form of global environmental change. Among terrestrial vertebrates, 322 species have become extinct since 1500, and populations of the remaining species show 25% average decline in abundance.”
The team of biologists and ecologists who contributed to the study revealed that a third of all vertebrates on the planet are presently threatened or endangered. Vignieri cites “overexploitation, habitat destruction and impacts from invasive species” as ongoing threats, but warns that climate change due to human activity will emerge as the leading cause of defaunation. Likewise, diseases that come from pathogens introduced by humans have become a factor.
Paleoecologists estimate that modern man has driven approximately 1,000 species into extinction during our 200,000 years on the planet. Since the sixteenth century, man has killed off hundreds of animals, including the passenger pigeon and the Tasmanian tiger. According to the International Union for the Conservation of Nature, there are another 20,000 species threatened today.
However, research has suggested that the widening extinction trend can be reversed.
Humans presently use half of the planet’s unfrozen land for cities, logging or agriculture. Reforestation and restoration of lost habitats, coupled with relocation and recolonization efforts can assist in the “refaunation” of species driven from their native locales.
Based on data published in Nature in 2011, it will take a century or two to assure another mass extinction event at the present rate of global depredation. | https://www.webpronews.com/earth-facing-sixth-mass-extinction/ |
✅ Research Based: This article is based on research. Numbers in brackets are links to research papers and scientific articles from well-established and authoritative websites (all links open in a new window).
In this article, we are exploring Vitamin K – a group of compounds such as Vitamin K1 and Vitamin K2.
What is Vitamin K
Vitamin K is a nutrient essential for life and health. It is involved in many important body functions, such as blood clotting and maintaining healthy bones.
It is one of the most basic fat-soluble vitamins, i.e. it requires the presence of fat in order to be absorbed by the body.
The term vitamin K refers to a group of molecules with a similar chemical structure and action. In its natural form it is found in a wide variety of foods and in food supplements.
Vitamin K is classified into two main forms, vitamin K1 (phylloquinone) and vitamin K2 (menaquinone).
Vitamin K1 is mainly found in plant sources, such as green leafy vegetables (spinach, broccoli, lettuce, etc.), vegetable oils and certain fruits (blueberries, figs, etc.).
Related: Vitamin C: Sources, Health Benefits, Deficiency, Side effects, RDA
In contrast, vitamin K2 is mainly of microbial origin. Much of the daily vitamin K requirements in humans are produced by gut bacteria in the form of K2.
In nature, menaquinone is present in moderate amounts in animal products such as meat, milk, soy or eggs, and in fermented products such as traditional Natto food.
Although both types of vitamin K are equally beneficial to health, vitamin K2 (menaquinone) appears to have greater potency.
Unlike the other fat-soluble vitamins (A, D, E), vitamin K circulates in small amounts in the blood, as it is metabolized very quickly and then excreted. This means that when we ingest vitamin K1, only 30% to 40% is retained in our body, with the remaining 60 to 70% being eliminated through urine and stool.
Vitamin K Daily Requirements
Vitamin K requirements by age group are shown in the table below:
Vitamin K Foods
Foods that are considered good sources of vitamin K (in the form of vitamin K1) are vegetables, especially green leafy vegetables, while we can also get it from certain vegetable oils (e.g. soybean oil, rapeseed oil), fruits (kiwi, dried figs, avocados, blackberries, grapes, pear, mango, papaya) and nuts.
Meat, dairy products and eggs contain vitamin K1 at low levels, but vitamin K2 is found at higher levels in these foods. Also, fermented cheeses contain vitamin K2.
In more detail, the table below shows the vitamin K content of certain foods (mainly vitamin K1, unless the vitamin K2 content of the food is indicated in a note):
Vitamin K Health Benefits
- Protein Synthesis: Vitamin K is essential for the synthesis of proteins involved in blood clotting and bone metabolism, as well as other physiological functions.
One of the best known proteins that depends on the action of vitamin K and is directly involved in blood clotting is prothrombin (clotting factor II).
Osteocalcin is another vitamin K-dependent protein found in bone and regulates calcium deposition in bone, and it appears that the presence of vitamin K is essential for its synthesis.
In addition, vitamin K is absorbed from the small intestine, participates in lipid metabolism and is transported to the liver, where it is repackaged into very low-density lipoproteins (VLDL).
- Coagulation of blood: Vitamin K helps with blood clotting, stopping bleeding and reducing bruising, as well as supporting faster wound healing. Blood clotting is a highly complex process.
Many of the proteins involved in the formation of clotting need vitamin K for their proper action. Without the required amount of vitamin K there is an increased risk of bleeding.
- Cardiovascular health: One of the major causes of heart attack is the deposition of calcium in the arteries. Calcification makes the arteries harder and less elastic, and reduces their width, making them narrower.
Because vitamin K helps prevent calcium deposition in the vessels and tissues, it has been suggested that it indirectly contributes to cardiovascular protection. However, larger and better designed studies are needed to highlight the role of vitamin K in preventing cardiovascular disease.
- Osteoporosis: Osteocalcin is a vitamin K-dependent protein responsible for bone health, specifically for the deposition and removal of calcium in bone. Vitamin K helps calcium circulating in the blood to be stored in the bones.
According to studies, people who consume large amounts of vitamin K through food or supplements have stronger bones and are less likely to suffer a bone fracture in their lifetime, even if they already have osteoporosis.
The action of vitamin K is enhanced by the action of vitamin D, which is why it is recommended to be taken together for better absorption of calcium by the bones.
- Improving brain health and inflammation: Chronic conditions such as Alzheimer’s disease, cancer, Parkinson’s disease and heart failure are now widely recognized as involving inflammation.
In recent years, more and more studies have shown that vitamin K has anti-inflammatory properties, protecting brain cells from oxidative stress and the harmful effects of free radicals.
Vitamin K Deficiency
Vitamin K deficiency is not common and usually occurs only in people who have malabsorption problems or liver disease. In healthy people who eat a varied diet, vitamin K intake is very unlikely to fall low enough to affect blood clotting.
However, in case of vitamin K deficiency, the activity of prothrombin in blood is significantly reduced, resulting in bleeding. Similarly, deficiency of this vitamin may lead to the development of osteoporosis. The groups at risk of vitamin K deficiency are:
- Newborns and infants who are not given vitamin K supplementation in the first weeks (2nd–12th) of life, due to low vitamin K1 transfer across the placenta, low levels of clotting factors and the low vitamin K content of breast milk;
- People with malabsorption disorders (e.g. cystic fibrosis, celiac disease, ulcerative colitis, short bowel syndrome);
- Patients who have undergone bariatric surgery;
- People taking medications that interfere with vitamin K metabolism;
*Toxicity from increased vitamin K intake is unlikely to occur.
Vitamin K in Pregnancy
Vitamin K is particularly important during pregnancy, especially in cases of premature birth. Premature babies are prone to haemorrhagic syndrome of varying severity, related to the availability of vitamin K.
In particular, premature babies are likely to have a marked vitamin K deficiency: their intestinal tract may not yet be colonized by the micro-organisms that synthesize vitamin K, and the vitamin K stores in their liver may be insufficient.
Consequently, vitamin K supplementation may contribute significantly to the prophylaxis of premature infants from complications due to haemorrhage.
Vitamin K, which is naturally present in human breast milk, has been shown to be inadequate relative to that required by infants less than 6 months of age. Supplementation during breastfeeding improves the vitamin K content of breast milk and reduces the potential for neonatal haemorrhage.
Vitamin K Supplements
Below are some of the best Vitamin K supplements that you can buy online or via your local pharmacy.
Final Take
Vitamin K is important for protein synthesis, strong bones, brain and blood clotting. If you eat your vegetables, you probably don’t need a supplement.
Chicken tortilla soup incorporates all the signature seasonings and spices of Mexican cuisine, along with tender bites of chicken, plenty of flavorful vegetables and hearty beans, ideally served in steaming bowlfuls topped with creamy avocado and tortilla strips that become soft and silky as they soak into the savory broth. Such an appetizing complexity of flavors and textures can involve a long list of ingredients, but with a few shortcuts, such as using premade seasoning mixes and canned beans, the recipe is simplified without sacrificing flavor.
Total Time: 30 to 40 minutes | Prep Time: 10 minutes | Serves: 6 to 8
Ingredients:
- 2 tablespoons vegetable oil
- 3 boneless, skinless chicken breasts or 5 thighs, diced small
- 1 large onion, diced
- 1 red, yellow or orange bell pepper, diced
- 2 tablespoons, or 1 package, taco seasoning mix
- 1 10-ounce can chopped tomatoes with green chilies
- 1 15-ounce can black, pinto or kidney beans, drained
- 3 tablespoons tomato paste
- 32 ounces chicken broth
- 2 cups water
- 4 corn tortillas, sliced into strips
- 1 avocado, diced
- 1 bunch fresh cilantro
- 1 lime, cut into wedges
Directions:
- In a large saucepan over medium-high heat, add the vegetable oil, diced chicken, onion, bell pepper and taco seasoning. Cook, stirring frequently, until the chicken is opaque and no longer pink.
- Add to the pan the can of tomatoes with the juice, the drained beans, tomato paste, chicken broth and water. Stir, and bring the liquid to a gentle simmer.
- Simmer the soup for 15 to 20 minutes.
- Stir the tortilla strips and the leaves picked from a bunch of fresh cilantro into the soup immediately before serving.
- Ladle the soup into bowls. Over each bowl, squeeze the juice from a lime wedge, and add cubed avocado.
Writer Bio
Joanne Thomas has worked as a writer and editor for print and online publications since 2004. Her writing specialties include relationships, entertainment and food, and she has penned pieces about subjects from social media tools for Adobe to artists’ biographies for StubHub. Thomas has also written for such names as Disney, Hyundai, Michelob and USA Today, among others. She resides in California and holds a bachelor’s degree in politics from the University of Bristol, U.K. | https://oureverydaylife.com/13527197/easy-chicken-tortilla-soup-recipe/ |
Editor’s Note: The College of Arts & Sciences launches the Notable Alumni Awards, honoring 37 Notable Alumni in 2017 for broad accomplishments in their careers, a commitment to community service, and valuable contributions to Ohio University, the College of Arts & Sciences, and its students.
Josh McConaughy ’06 Anthropology
Ohio University alum Josh McConaughy strengthened his ties with OHIO Anthropology faculty and alumni after graduation, interacting professionally as he worked to preserve archaeological sites in the Midwest and coming back to campus to talk about career pathways with students.
He earned a B.A. in Anthropology in 2006 from the College of Arts & Sciences. He returned last fall to help the OHIO Archaeology Field School celebrate its 30th anniversary, and he participated in an Archaeology Alumni Panel to discuss stories of life after graduating from Ohio University.
After working in the real estate field for a short time, McConaughy was offered the position of field representative and associate director of the Midwestern Office of the Archaeological Conservancy, located in Columbus, Ohio.
The Archaeological Conservancy is the only nonprofit organization in the country that preserves important archaeological sites located on private property. McConaughy worked to preserve many sites all over the Midwest, including the Dorr II mound located in The Plains in Athens County.
McConaughy worked with Ohio University Anthropology faculty and staff to help preserve sites in Ohio and also partnered with other OHIO graduates working in the field of archaeology all over the Midwest.
He is currently doing contract work and living in Columbus, Ohio.
McConaughy enjoys traveling, and he just returned from a two-week trip to Italy with his family. He also enjoys working on renovation projects for the 100-year-old craftsman house he owns in Columbus. | https://www.ohio-forum.com/2017/10/notable-alumni-anthropology-alum-helps-preserve-sites-gives-career-advice/ |
The National Safety Council in Itasca, IL, is considering changes to the proposed ergonomic standard this month, after which the standard will be approved or submitted once again for public comments. A review of the proposed standard by Occupational Health Management reveals that the standard would provide a managerial structure for addressing cumulative trauma disorders (CTDs), but no real clinical guidance.
About 200 comments were received on the proposed standard when the comment period closed at the end of June, according to spokesman David Alexander of the National Safety Council. He tells OHM that the responses were evenly split between those suggesting changes and those endorsing the proposed ergonomic standard as it stands. The council's ergonomic committee will meet Oct. 29-30 in Los Angeles, following the National Safety Council's annual meeting, to discuss the comments and decide how to proceed with the proposal.
The committee must consider each individual comment and then decide to make changes or leave the proposal as it is. If there are no substantive changes, the committee can vote on whether to approve the proposal. If there are substantive changes made to the proposal, it must be made available for public comment once again.
The proposed ergonomic standard was approved by the committee on May 5, 1998, by more than a 3-to-1 margin. Intended as a guide for managers and occupational health professionals to help control work-related trauma disorders, the proposed standard was developed by 55 members on the committee representing business, labor, academia, and professional societies. In the May vote, 42 members voted for the proposed standard, eight voted against it, and five abstained. Once the committee provides final approval in the October vote or later, the standard will be accredited by the American National Standards Institute (ANSI) in Orlando, FL.
A copy of the proposed standard obtained by OHM suggests it is no more than an outline of the overall approach to addressing cumulative trauma disorders (CTDs) in the workplace, not a specific guide to preventing or controlling CTDs. If implemented as a final standard, the current proposal would serve as a justification for efforts to address the disorders, and it would provide a managerial structure for how to do so. But there is no clinical guidance or specific steps for preventing and controlling CTDs.
Focus on upper limb disorders
The committee is focusing on upper limb disorders. Though many of the concepts are applicable to CTDs in other parts of the body, the National Safety Council says it intends to address other body areas in separate substandards of the ergonomic standard.
In the proposed standard, the committee states that it is possible to quantify exposure to work-related CTD risk factors, and that it is possible to identify many work situations in which CTDs can occur. Overall, the committee concludes that CTDs can be controlled and managed to minimize impairment and disability, but the proposed standard notes that it is "not yet possible to specify precise quantitative work design parameters for a given level of risk in a given population."
These are some highlights of the steps outlined in the proposal for controlling CTDs:
- Employers should have a written program for managing CTDs.
- Employers must provide training to managers and employees regarding how to recognize the symptoms and signs of CTDs, procedures for reporting CTDs, risk factors, job interventions, and other issues.
- Employees must be given the opportunity to participate in the program.
- Employers should select health care professionals with experience in the evaluation and treatment of CTDs.
- Job analysis must be performed when it is determined that a CTD is work-related. The analysis also can be required when a CTD trend is observed in jobs that use similar tools or processes, when a problem job is identified from record reviews, when a problem persists after changes, and during the design phase of equipment, processes, or jobs.
- But a job analysis is not necessary if there is an "obvious" solution.
- Job design or redesign must be used for eliminating or reducing work-related risk factors for CTDs, "as much as technically and practically feasible."
Darkness exists everywhere, and in no place greater than those where spirits and curses still reside. Tread not lightly on ancient lands that have been discovered by this collection of intrepid authors.
In DARK TALES OF LOST CIVILIZATIONS, you will unearth an anthology of twenty-five previously unpublished horror and speculative fiction stories, relating to aspects of civilizations that are crumbling, forgotten, rediscovered, or perhaps merely spoken about in great and fearful whispers.
• In “Quetzalcoatl's Conquistador,” Hernán Cortés's plans to subdue Moctezuma’s Aztec empire do not include the intervention by the vengeful god Quetzalcoatl.
• In “The Small, Black God,” a young archaeologist’s dig site is visited by a panel of scientists with motives of their own.
• In “The Nightmare Orchestra,” we learn who, not what, are responsible for causing our nightmares.
Also, what is it that lures explorers to distant lands where none have returned? Where is Genghis Khan buried? What happened to Atlantis? Who will displace mankind on Earth? What laments have the Witches of Oz?
Answers to these mysteries, and other tales of archaeologists and scientists, treasure hunters, tragic royalty, and ghosts, are presented within this critically acclaimed anthology.
Including stories by: Joe R. Lansdale, David Tallerman, Jonathan Vos Post, Jamie Lackey, Aaron J. French, and twenty exceptional others.
** Nominee for the 2012 Bram Stoker Award® for Superior Achievement in an Anthology **
"As a boy, some of my favorite stories were those of lost lands and civilizations, made popular by such writers as H. Rider Haggard, A. Merritt, and Talbot Mundy. I daydreamed of falling through some hidden cave entrance into a lost and forgotten world (sans injury of course) and if asked about my career ambitions I would have answered that I wanted to be one of those specially lucky explorers. As I gradually became aware that such civilizations weren't terribly likely in our closely-examined world, that fantasy became a bit bruised. But now Eric J. Guignard brings back a bit of that magic with Dark Tales of Lost Civilizations, an anthology mixing the values of pulp fiction (returning us to a milieu where such stories seem more possible) with contemporary standards of fresh description. Here we have lost islands, civilizations on the brink, and uncharted lands imaginatively described with new mythologies. David Tallerman, Mark Lee Pearson, Jamie Lackey, Folly Blaine, Jonathan Vos Post, and JC Hemphill—to mention just a few—all shine, and the new Joe Lansdale piece with a unique slant on a western railroad story is a special treat."
—Steve Rasnic Tem, Bram Stoker and World Fantasy Award-winning author of novels (including his latest, Deadfall Hotel) and numerous collections of short fiction; www.m-s-tem.com
"Bright new voices offer chilling glimpses of the darkness beyond mere night."
—David Brin, author of Earth, The Postman, and Otherness; www.davidbrin.com
"I have to say that I was very pleasantly surprised at the quality and depth of the stories in this anthology. I found the stories to be very well written, filled with interesting characters and places.
If you love tales of lost civilizations you would be hard pressed to find a better group of tales gathered in one place.
That being said, here are a few of my favorites from the book:
Directions by Michael G. Cornelius; This was my favorite story in the collection. Those of you who follow my reviews know how much of an influence “The Wicked Witch of the West” had on shaping my love of being scared, of horror and of monsters.
This is simply a fabulous take on the Oz witches mythos and is one of the best short stories I’ve read this year.
Königreich der Sorge (Kingdom of Sorrow) by C. Deskin Rink; A really great story of the Nazis’ unquenchable search for power and what they discover. A really frightening tale of desecration and evil better left undisturbed.
The Nightmare Orchestra by Chelsea Armstrong; A first time published author presents a terrifying and unique look at the dreams that haunt us.
The Funeral Procession by Jay R. Thurston; Genghis Khan: who hasn’t heard of this amazing and feared conqueror? This tale takes you on a journey with an archaeology team in search of the burial ground of Genghis Khan and what they discover.
The Tall Grass by Joe R. Lansdale; In this tale Mr. Lansdale again shows us his considerable talents and why he is one of the best in the business. This story is atmospheric and frightening and shows that when someone tells you not to stray too far…don’t.
These were just a few of my favorites in the collection. I am sure you will find your own. “Dark Tales Of Lost Civilizations” is a great group of stories, especially for those of us that love tales of adventure and discovery, even if some of the discoveries are horrific. You would be remiss if you didn’t give this anthology a try and I highly recommend it."
—Peter Schwotzer, Famous Monsters of Filmland; www.famousmonsters.com
"Collected in Dark Tales of Lost Civilizations are 25 short stories from the horror and speculative fiction genres, unearthing our forgotten worlds and societies. The stories all begin with some known reality: a familiar legend, an interesting era, textbook chapter, or archeological site. Then, leaping into the void from there, each writer suggests a gruesome alternate history. The stories range from mildly disturbing to downright terrifying, although none are particularly visceral. Most are written in a conservative, suggestive style, relying on the reader’s own imagination to take the plunge from speculation to horror. This element keeps the collection rooted in the possible, making it scarier, perhaps, than the current saturation of seductive monster-based and slasher fiction. The prevailing understatement of gore makes the book a good choice for treating high school history students to a read-aloud on stormy afternoons.
Among my personal favorites was “Quivira”, by Jackson Kuhl. It begins with an old Sioux legend, a tragedy involving brothers mocking their gods. Kuhl’s prospecting hero brings the curse upon himself through greedy pillaging. The story is dark and comical, and Kuhl’s style is brisk. This would be a great piece to read in conjunction with Native American studies; short, pointed, and entirely in character with the original mythology.
“British Guiana, 1853” by Folly Blaine, is a cool piece done in chin-up, British imperialist style. Classic horror tension builds steadily from start to finish as the reader watches helplessly while the explorers, desperately frightened and warned away at every step, still insist on carrying onward to their doom. They open a vault made deliberately impassable; descend into terrifying darkness and stench; ignore a menacing, unearthly, drumbeat, and are climactically pursued into madness by the unnameable horror they unwittingly release. The writing is metaphorical and skillfully done.
“In Eden” by Cherstin Holtzman, is a satiric and original take on re-animation and the problems of keeping order in a wild west town in literal decay. Although the sheriff is only half a man, he makes a tough decision that affects the crumbling existence of what’s left of the population. Holtzman’s style is polished and understated, and he takes a surprisingly fresh angle on a well-trodden subject. Recommended for grades 6 and up."
—Sheila Shedd, Monster Librarian; www.monsterlibrarian.com
"Being, as I am, a huge fan of H. Rider Haggard and the like, I came to this collection with high expectations. That’s not to say that this book is limited to stories set in ancient lost cities, found in the remote, unexplored regions of the world. It has a much wider remit than that.
The collection starts strongly with, ‘Angel of Destruction’, a short tale of the birth of an immortal evil at the fall of Assyria. Cynthia D. Witherspoon is one of a number of writers, unfamiliar to me, who I’ll be watching out for in the future.
I was on more familiar ground with ‘The Door Beyond the Water’, by David Tallerman. Readers will likely recognise the Lovecraftian nature of this excellent story of ancient evil influencing men through dreams, but it also has much of Dunsany, Chambers and Hodgson about it, all of whom were, of course, huge influences on HPL.
Michael G. Cornelius’ ‘Directions’ is a little gem, which has gone on my personal shortlist of best short stories of 2012 for when the time for awards nominations comes around. It does stretch the boundaries of the collection a bit, but this tale of how the witches of Oz met their individual ends and how their destinies failed to live up to their expectations is an absolute delight.
One of the real lost civilizations we revisit in the book is that of the Aztecs. In ‘Quetzalcoatl’s Conquistador’, by Jamie Lackey, we find out what happens when the feathered serpent himself possesses the Spanish explorer, Hernán Cortés. Naturally, subsequent events take a different path to that recorded in our history books.
‘Königreich der Sorge (Kingdom of Sorrow), by C. Deskin Rink is the second Lovecraftian tale in this collection. In 1939, Dr. Werner von Eichmann Phd. M.D., following an ancient map, takes his team far North, into the Arctic Circle. They eventually discover a huge trapdoor, one that appears to have been purposely buried and hidden by a Russian expedition a couple of years previously. The story is cleverly presented as a series of reports, sent to his superior, Herr Generalfeldmarschall Wilhelm Keitel, and eventually from Major Joseph Müller, whose platoon is sent to find Eichmann’s expedition.
Sometimes less is more. ‘Bare Bones’, by Curtis James McConnell is just four pages, but it was my favourite in the book so far. How can the fully evolved Homo Sapiens skull be two million years old? What can our troubled scientists do with a discovery that completely invalidates everything they know about the evolutionary history of mankind? This one went straight into my personal list of best short stories of 2012.
Cherstin Holtzman’s ‘In Eden’ is a truly original zombie story with a difference. No flesh eating zombies these. It’s the old West, and in a small town named Eden, people were refusing to stay dead. They might go on forever, dead, but aware; their flesh rotting on their bones, until they either leave the town limits or someone does something about it. Only the sheriff seems to believe that something needs to be done, but if he fails to fix it, who will step up to help him?
‘Rebirth in Dreams’, by A.J. French, was interesting enough to have me searching Amazon for more of his work. It’s a weird metaphysical tale, which, in the words of the editor, is like a collaboration between Hunter S. Thompson and H.P. Lovecraft. Another one for my ongoing shortlist of the best short stories of the year.
Why did he insist that even his son refer to him as Dr. Phillips, and what is the terrible family tradition, passed down from father to son? ‘Sins of our Fathers’, by Wendra Chambers answers these questions in a manner which reminded me of a classic mystery/horror movie. Indeed, I could easily envision the cold, distant, secretive Dr. Phillips as played by Vincent Price.
‘Sumeria to the Stars’, by Jonathan Vos Post is an odd one. The author is a mathematician and physicist. He packs his story with enough science to plough a highway over the heads of readers better educated than me. However, he manages to keep the science-blinded reader interested. Archaeological evidence has been unearthed that shows the ancient Sumerians had knowledge of quantum physics and black holes. Teams of experts in various departments try to work out how. Was Von Daniken right? Was the Earth visited by an alien race, or was it time travellers from the future?
Joe R. Lansdale’s ‘The Tall Grass’ is one of the highlights of the collection. Why does the train stop in the middle of nowhere, for no reason? What is laying in wait for anyone who wanders too far away? It’s reminiscent of classic horror tales of an early time. Quiet, but creepy.
I made notes on all twenty-five stories as I read them. Then I brutally cut as many as I could from the final review, based on whether or not I’d come up with anything more interesting to say about them, other than, “I liked it. It was really good.”
There are genuinely no bad stories in this book. Some of them I cut simply because I couldn’t think of much I could say about them without giving away too many spoilers. Several stories made it on to my best of the year shortlist and the book itself is now on my best anthologies of the year shortlist."
—David Brzeski, The British Fantasy Society; www.britishfantasysociety.co.uk
"Dark Tales of Lost Civilizations begins with an introduction by the book’s editor, Eric J. Guignard. The introduction is very well written. It asks poignant questions, and reads like a cross between a Rod Serling narrative and an article from the National Geographic magazine. In fact, Guignard continues introductions by placing one in front of each story to give it a brief synopsis. This is surprisingly effective and increases the interest by the reader.
This book is not horror. Instead, I would try to type it into a mix of the sci-fi and fantasy genre, along with a large helping of history. The premise of Dark Tales of Lost Civilizations is to showcase different tales of adventure and yes, lost civilizations, some ancient, some more recent and some futuristic. The stories can be compared to those of Sir Arthur Conan Doyle’s The Lost World and H. Rider Haggard’s King Solomon’s Mines.
Because this is an anthology of twenty-five stories, I don’t have room to critique them all. Therefore I will discuss my favorites in the order that they appeared in the book.
“Quivira” by Jackson Kuhl is a colorful and lively story that includes Sioux Native American folklore told with humor. Lyddy was in New Mexico on a quest for gold when “a man who resembles his twin” shows up dead. An entertaining story.
“Quetzalcoatl’s Conquistador” by Jamie Lackey is a realistic retelling of an actual historic event that originally took place in the 1500s. Spanish conquistador Hernán Cortés led an expedition that caused the fall of the Aztec Empire, and this story twists the truth…but only by a little. This is a well-researched yarn that is realistic and exciting.
“Gestures of Faith” by Fadzlishah Johanabas stands out for its beautifully descriptive prose. Johanabas, a neurosurgeon in Malaysia, manages to court us with flowery fiction that includes Isis, Mount Olympus, and an Oracle that talks to Poseidon. This story would appeal to fans of Middle Earth.
“Bare Bones” by Curtis James McConnell is one of my favorites in this book. Fast paced and humorous, this one is in-your-face with action. A two-million-year-old skull is found, or is it? Why does carbon dating say it is old, but its features say it is modern? Is it de-evolution or time travel? My only regret with McConnell’s story is that I didn’t grab it first for The Horror Zine.
“The Nightmare Orchestra” by Chelsea Armstrong is told from a child’s point of view. Skye doesn’t understand why his father forbids him to play with “the dreamers.” This story contains good character development and is a strange but compelling tale.
“Buried Treasure” by Rob Rosen is another personal favorite. What modern wonders of today will be archaic in the future? A 500-year-old map is the ticket to adventure. On a planet gone dry, water is worshipped as a god. But this water is man-made in a very surprising twist.
I was pleasantly surprised to see a story written by Joe R. Lansdale included in this book, who is one of my all-time favorite writers. And “The Tall Grass” lives up to Lansdale’s high standards of quality. I thoroughly enjoyed the character’s trip in 1901 on a train that always seems to break down in the middle of the night at a prairie of tall grass. The excitement begins when a passenger decides to explore the grass, and encounters frightening creatures within. “The Tall Grass” is probably the one story in the book that could be classified as horror. A real gem.
Of course all anthologies have their share of clunkers, and this one certainly does. Some of the fiction in Dark Tales of Lost Civilizations delves into so many explanations that the stories are bogged down under the weight of details. Others go off on unnecessary tangents, making me think, “Huh? What is this story about?” And there were one or two that were so slow in pace that my eyes glazed over and I could barely keep them open. I was disappointed that Eric J. Guignard, an accomplished writer in his own right, did not include one of his own works.
But overall, this is an anthology worth your time. Which stories would be your favorites depends upon what time period in history fascinates you the most. Dark Tales of Lost Civilizations seems to cover a lot of interesting ground, from ancient Mount Olympus to modern day. I liked this book and believe you will too."
—Jeani Rector, The Horror Zine; www.thehorrorzine.com
"This is a brilliant anthology of 25 stories that will capture the hearts and imagination of anyone who grew up like I did on a diet of Boys Own Adventures, Alan Quatermaine, and other tales of derring-do. Grab a copy of this book and let your imagination run free for a time."
—Ginger Nuts of Horror, book reviews; http://thegingernutcase.blogspot.com
"Many readers might think they knew what to expect from this book, just from the title. They would be wrong. Mr. Guignard does an astonishing job of expanding the apparent range of his title into a varied and colorful collection of almost everything under the sun, or rather, everything hidden away from the sun.
Who knew there were so many kinds of lost civilizations? The civilizations visited in these stories range from historically documented civilizations—either trampled under the march of history, as in Jamie Lackey’s story, “Quetzalcoatl’s Conquistador,” or active participants in the trampling, as in “The Funeral Procession” by Jay R. Thurston—to the entirely mythical, like that of “Gilgamesh and the Mountain” by Bruce L. Priddy. In between these two extremes, we find an intriguing half historical, half legendary lost society in Jackson Kuhl’s “Quivira”.
There are lost civilizations drawn from uncharted islands (“The Island Trovar” by JC Hemphill), from nameless ruins reeking of antiquity and better left unexplored (“The Door Beyond the Water” by David Tallerman or “Königreich der Sorge (Kingdom of Sorrow)” by C. Deskin Rink) or even from fictional sources like Oz (“Directions” by Michael G. Cornelius).
Nor is it just the lost civilizations themselves that are varied. Rather, it is the changing mood and tone that keeps this anthology so fresh. The reader assumes there will be horror (since it is mentioned in the subtitle) and will not be disappointed by Chelsea Armstrong’s “The Nightmare Orchestra”. The reader also suspects there will be something spooky, and Joe R. Lansdale provides a deliciously creepy yarn in “The Tall Grass”.
But most readers will not expect a laugh-out-loud futuristic comedy like Gitte Christenson’s “Whale of a Time”. While it is not surprising that most of these tales deal with magic, or at least with a lost technology so advanced it might as well be magic, in “Sumeria to the Stars” by Jonathan Vos Post we find a hard science SF tale. There is no horror, or even real magic, in “To Run a Stick Through a Fish” by Mark Lee Pearson, only bittersweet grace. And nobody could possibly expect the sheer quirky originality of “Bare Bones” by Curtis James McConnell.
Advancements in technology have helped humans reach where they are today: they have helped build modern cities, enabled science to cure diseases, connected people across oceans, and helped put a man on the moon. As algorithms become more sophisticated, however, the human race finds itself at a crossroads, as the very same technology makes us question our own existence!
My research mainly explores consumer reactions to algorithmic decision making. I study how consumers process and react to the information (e.g., recommendations, feedback) provided by algorithms or humans. In addition to this line of inquiry, I also study how to motivate consumers to make more effective donations. I employ a mix of methods to address my research questions, including lab/online panel studies, field experiments, meta-analysis, and secondary data analysis.
JOURNAL PUBLICATIONS
- Yalcin, Gizem, Sarah Lim, Stefano Puntoni, and Stijn van Osselaer (in press), “Thumbs Up or Down: Consumer Reactions to Decisions by Algorithms versus Humans,” Journal of Marketing Research, https://doi.org/10.1177/00222437211070016.
- Yalcin, Gizem, Stefano Puntoni, Erlis Themeli, Stefan Philipsen, and Evert Stamhuis (in press), “Perceptions of Justice by Algorithms,” Artificial Intelligence and Law, https://doi.org/10.1007/s10506-022-09312-z.
- Paolacci, Gabriele and Gizem Yalcin (2020), “Benevolent Partiality in Prosocial Preferences,” Judgment and Decision Making, 15 (2), 173–181. | https://www.gizemyalcin.com/what-i-do.html |
In the fight against cancer, early and accurate detection is critical to success.
The ability to identify potential biomarkers related to the development of specific cancers allows patients to be apprised of their risk. Then they can make informed choices about medical treatment or lifestyle changes to combat that risk.
Associate professor of statistics Dr. Shuying Sun and her students conduct advanced statistical and bioinformatic work to identify and catalogue a broad assortment of cancer-linked biomarkers. Sun performs complex interdisciplinary research requiring expertise in mathematics, statistics, genetics, bioinformatics and computer science. She has published more than 20 peer-reviewed research articles in the fields of statistical genetics and bioinformatics.
Bioinformatics is the science of collecting and analyzing complex biological data such as DNA sequencing data; statistical genetics is the use of statistical tools to identify genetic and epigenetic factors related to the development of disease. Sun uses these methods to integrate different types of health data and accurately describe networks of interaction between different genes.
For example, much of Sun’s research focuses on DNA methylation, an epigenetic event that can affect the functions of genes. In recent papers, Sun has used statistical methods to identify and analyze the effects of methylation on gene expression as potential biomarkers for breast cancer and other disorders. Her research findings can aid in cancer screening, early detection, possible treatments and the development of even more effective methods of cancer study.
Epigenetics: “Epi-” meaning “on top of or in addition to,” epigenetics is the study of changes to gene expression resulting from environmental factors.
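As a rough illustration of the kind of screen such an analysis might involve, the Python sketch below flags genes whose methylation levels are strongly negatively correlated with their expression across samples. It is a toy example only: the correlation cutoff, the data layout and the use of a simple Pearson screen are assumptions for illustration, not Sun's published methodology.

```python
from statistics import correlation, StatisticsError  # correlation requires Python 3.10+

def methylation_expression_screen(methylation, expression, threshold=-0.5):
    """Flag genes whose methylation is strongly anti-correlated with expression.

    `methylation` and `expression` each map gene -> list of per-sample values.
    Illustrative only: published analyses rely on far more sophisticated
    statistical and bioinformatic models than a single Pearson cutoff.
    """
    hits = {}
    for gene, beta in methylation.items():
        expr = expression.get(gene)
        if not expr or len(expr) != len(beta) or len(beta) < 2:
            continue
        try:
            r = correlation(beta, expr)   # Pearson correlation across samples
        except StatisticsError:
            continue                      # constant values: correlation undefined
        if r <= threshold:                # hypermethylation associated with silencing
            hits[gene] = r
    return hits
```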
Additionally, each summer Sun works with undergraduates and high school students in the Texas State Mathworks Honors Summer Math Camp (HSMC) to train young researchers in the field of statistics and bioinformatics. Students in the HSMC work in three-person teams, with a faculty member acting as a mentor, to conduct original math research projects that may be submitted to various contests and journals.
Sun’s mentees have been published in highly respected academic journals and have gone on to study and conduct research for elite schools including Harvard, MIT and Stanford. Sun says that while it can be challenging to mentor students who come into the program with no research experience, the opportunity to see them grow and succeed is extremely rewarding as a teacher and mentor. | https://www.research.txstate.edu/research-highlights/2020/statistics-and-bioinformatics-for-cancer-data-analysis.html |
Animals use visual, tactile and chemosensory cues to recognize food sources and conspecifics. Interestingly, insects rely to a large extent on the sense of smell to assess food quality and/or potential mates. Investigation of the insect olfactory system and other sensory signals provides a better understanding of how chemical cues (semiochemicals) drive insect interactions with other living organisms. Specifically, I study emission and perception of signals in pest insects and their multitrophic interactions with plants and microbes. Investigating their chemical ecology generates opportunities to develop novel and safe tools for sustainable pest management.
Teaching
I participate in the following courses:
-Project based research training (Code: LB0066) (Dept. of Biosystems and Technology - Dept. Plant Protection Biology, SLU) (2019)
-Master course in Insect Chemical Ecology (Code: BI0914) (Dept.Plant Protection Biology, SLU) (2019).
-Master course in Horticultural Systems and Future Challenges (Dept. Plant Protection Biology, SLU) (Code: Bl1309) (2018, 2019).
Furthermore, I am currently involved as mentor for students doing their ‘Gymnasiearbete’ as well as for PROA-students at SLU.
Research
My research concerns odour signals that mediate communication between insects, and between insects, micro-organisms, and plants. My current project focuses on the spotted wing drosophila pest (SWD), Drosophila suzukii. Like other drosophilids, SWD is closely associated with microorganisms. The overall goal is to understand the relevance of microbial signals in the ecology and behavior of SWD flies, with emphasis on the role of yeast and bacterial odors in sexual, food- and host-finding behavior. Ultimately, integrating this biological knowledge should facilitate the development of sustainable tools to disrupt host–insect interactions in pest insects.
My expertise is on Chemical Ecology, with an emphasis on sensory physiology, behavioral assays, and field experiments.
Background
-2012. Bachelor´s degree in Biological Sciences, Universidad de la República, Uruguay.
-2016. Master´s Degree in Biological Sciences – Ecology and Evolution, Universidad de la República – PEDECIBA, Uruguay.
- Since 2018, Doctoral studies in Biological Sciences – Chemical Ecology, Swedish University of Agricultural Sciences, Sweden
Supervision
Co-supervision of Isabella Kelman’s Master’s thesis:
“The behavioural response of Drosophila suzukii to fermentation products” MSc Biology-Horticultural Science, Swedish University of Agricultural Sciences, Alnarp, 2018.
Selected publications
- Mansourian Suzan, Enjin Anders, Jirle Erling, Ramesh Vedika, Rehermann Guillermo, Becher Paul, E. Pool John & Stensmyr Marcus. (2018). Wild African Drosophila melanogaster Are Seasonal Specialists on Marula Fruit. Current Biology. 28. 10.1016/j.cub.2018.10.033.
- Rehermann Guillermo, Altesor Paula, Mcneil Jeremy & González Ritzel Andrés. (2016). Conspecific females promote calling behavior in the noctuid moth, Pseudaletia adultera. Entomologia Experimentalis et Applicata. 159. 362-369. 10.1111/eea.12448. | https://www.slu.se/en/ew-cv/guillermo-rehermann/ |
Copyright © 2014 Stefano Abbate et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Healthcare technologies are slowly entering into our daily lives, replacing old devices and techniques with newer intelligent ones. Although they are meant to help people, the reaction and willingness to use such new devices by the people can be unexpected, especially among the elderly. We conducted a usability study of a fall monitoring system in a long-term nursing home. The subjects were the elderly with advanced Alzheimer’s disease. The study presented here highlights some of the challenges faced in the use of wearable devices and the lessons learned. The results gave us useful insights, leading to ergonomics and aesthetics modifications to our wearable systems that significantly improved their usability and acceptance. New evaluating metrics were designed for the performance evaluation of usability and acceptability.
1. Introduction
Healthcare technology using wireless sensors has reached a high level of maturity and reliability and hence these devices are now being deployed in homes/nursing homes for use in managing people’s health. To take full advantage of the penetration of these pervasive systems in people’s well-being and reap their full benefits, the technologies must be minimally invasive and must be accepted by users willingly.
A necessary condition for acceptance is the awareness of benefits to the user population in using the system. Since young adults are well acquainted with modern sensor devices, they willingly accept the introduction of new technologies in their care process. In contrast, among the elderly, who are the main beneficiary population of these monitoring devices, there is still reluctance on their part to use them. Even though an increasing number of the elderly are aware of the advantages of a pervasive health monitoring system, they rarely understand how it works.
With the increase in Alzheimer’s disease (AD) among the elderly population, there is a crucial need for technological support in their care process. Since there is no cure yet to reverse the cognitive decline among these individuals, technology could help them safely perform their normal living activities.
Unfortunately, people affected by AD have difficulties in understanding their health conditions, and the use of a device or systems that could help in their day-to-day activities. Some of them have a different perception of objects and are prone to forget using them or be adamant in not using them. In our initial study we found that even the simplest interactive devices such as wristband buttons or call buttons to alert a caregiver in an emergency situation are unlikely to be used by them.
Despite the technological maturity of healthcare devices and networking, little effort has been done to assess their usability and acceptability before deployments in homes and nursing homes. Health monitoring platforms developed so far have mainly focused on the functionalities using specific sets of sensors, vendor specific software, and protocols, for which usability issues have not been sufficiently addressed.
We addressed this gap by undertaking this exploratory study on how to increase the usability and acceptability of our wearable monitoring system to a small group of elderly affected by AD. In the study, we used the wireless accelerometer and electroencephalograph (EEG) logger integrated in our minimally invasive monitoring sensor (MIMS) system , with the aim of detecting possible falls and their preconditions. The wireless monitoring system used did not require any interaction with the subjects. However, the living environment should include a network infrastructure with wired alert systems to connect to central nurse care stations and/or wireless networks to hand-held devices carried by the nurses.
We defined ad hoc usability and acceptability parameters and evaluated them during a month long field test with long-term nursing home residents. The results gave us some important insights, leading to ergonomics and aesthetics modifications to our system that significantly improved its usability and acceptance.
2. Related Work
A number of systems engaged in health monitoring are surveyed here to compare our approach to those of others. Though every system studied here has been in deployment for a long period of time, it was disappointing to observe that none of them reported on users' acceptance of the system. Only technology descriptions were made public about these systems.
For example, the survey by Cao et al. of enabling technologies for wireless body area networks discussed the network characteristics, such as the type of wireless connection (Bluetooth, ultrawideband, ZigBee) and the path loss of the signal sent by body sensors according to their placement on the body and to the radio frequency used. Performance evaluation and usability study results are missing.
A wearable monitoring system called SATIRE collects the motion and location information of a subject. SATIRE requires sensors inserted in the garment worn by the user without the need of user’s interaction. The sensors collect and store data locally. Periodically, the data is uploaded to a base station for further analysis and archive. The paper presents the design which includes a layered architecture (for both the base station and the sensors). Real-world testing and adaptability study of this system are not known.
The MIThril LiveNet system is another distributed mobile system for real-time monitoring and analysis of the health status of an individual. The MIThril architecture offers many features to perform distributed sensing, classification in real-time, and context aware applications. It makes use of a PDA which should be worn by the patient at all times: body worn sensors send data through a network infrastructure for exchange of information and a machine learning infrastructure is used for classification of gathered data. Again, the usability and adaptability results are not reported.
Finally, the experience gained by the authors in developing fall monitoring systems presented in [5, 6] definitely proved that a usability study is important to provide the required metrics to evaluate the performance of a health monitoring system in real-world applications, especially with the elderly population.
3. Materials and Methods
The equipment we used for the usability study was based on the MIMS system described in our earlier work. We developed MIMS with the aim of providing a flexible and scalable platform for building a comprehensive and customizable health monitoring system, which guarantees interoperability among different sensor systems. MIMS can be easily integrated with any wireless communication system already in place and with any existing networked alert system. Figure 1 shows the complete monitoring system, consisting of four sensing systems.
System 1 is a fall detector. It consists of a wireless sensor node (based, in our case, on the Shimmer 2R platform) able to sense human movements using an embedded accelerometer. The microcontroller (MSP430 family) can perform on-chip analysis and communicate with a base station using a Bluetooth module or IEEE 802.15.4 radio, as shown in Figure 2. Being battery powered, small, and lightweight, the device can be conveniently worn near the waist. The device runs a simple yet very reliable fall detection algorithm. In a nutshell, every time the acceleration reaches a given threshold, samples belonging to a fixed time window around the event are sent to the base station, which in turn analyzes the pattern of accelerations and decides whether the event was due to an activity of daily living (i.e., a false alarm) or to a real fall. In the case of a real fall, the system informs the caregiver through the alert system.
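To make the scheme concrete, the following is a minimal Python sketch of a threshold-and-window fall classifier of the kind outlined above, as it might run on the base station. It is not the actual MIMS code: the sampling rate, threshold values and window length are illustrative assumptions.

```python
import math

SAMPLE_RATE_HZ = 50          # assumed accelerometer sampling rate
IMPACT_THRESHOLD_G = 2.5     # assumed impact threshold that triggers the analysis
POST_IMPACT_WINDOW_S = 2.0   # assumed window inspected after the impact
REST_THRESHOLD_G = 1.2       # close to 1 g: the body is motionless after the impact

def magnitude(sample):
    """Euclidean norm of a 3-axis accelerometer sample, in g."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def is_fall(samples):
    """Classify a fixed window of (x, y, z) samples sent by the sensor node.

    The window is flagged as a fall when a high-acceleration impact is
    followed by a sustained period of near-rest (the person lying still),
    which is the basic pattern used by threshold-based detectors.
    """
    mags = [magnitude(s) for s in samples]
    try:
        impact = next(i for i, m in enumerate(mags) if m >= IMPACT_THRESHOLD_G)
    except StopIteration:
        return False  # no impact: treat as an activity of daily living
    post = mags[impact + 1 : impact + 1 + int(POST_IMPACT_WINDOW_S * SAMPLE_RATE_HZ)]
    if not post:
        return False
    # Fall if the subject stays close to 1 g (motionless) for most of the window.
    return sum(m < REST_THRESHOLD_G for m in post) / len(post) > 0.9
```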
System 2 consists of a wireless electrophysiology sensor (based, in our case, on the Enobio platform) which is able to capture the brain activity of a person in real time. Four digital electrodes are attached to the Enobio communication module and placed in the Enobio headband. Data from such electrodes/channels are wirelessly transferred to a base station using the IEEE 802.15.4 low power radio standard. The base station is represented by the Enobio USB receiver connected to a PC. Two wired ear clip electrodes (potential ground and potential ground feedback) act as references for sensed signals. Enobio (see Figure 3) is worn like a hat and can record not only brain activity but also heart activity through an electrocardiogram (ECG) and eye movements through an electrooculogram (EOG). The Enobio software is a Java application that allows (i) wireless communication between the Enobio and PC; (ii) data recording; (iii) data display; (iv) forwarding of data to other clients. Data is coded as a simple ASCII file of tab-delimited columns and can also be exported to the very common scientific format EDF (European Data Format). We used System 2 to analyze EEG potentials during the different stages of sleep, with the aim of studying brain signal patterns preceding a fall.
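For illustration, a minimal Python reader for the tab-delimited ASCII export described above could look like the sketch below. The fixed four-channel layout is an assumption; real Enobio logs may carry extra columns (timestamps, trigger marks) that would need to be skipped.

```python
import csv

def load_enobio_ascii(path, n_channels=4):
    """Read a tab-delimited Enobio log into one list of samples per channel.

    Assumes one sample per row and one column per channel; non-numeric rows
    (e.g. headers) are skipped.
    """
    channels = [[] for _ in range(n_channels)]
    with open(path, newline="") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) < n_channels:
                continue  # malformed line
            try:
                values = [float(row[ch]) for ch in range(n_channels)]
            except ValueError:
                continue  # header or non-numeric line
            for ch, value in enumerate(values):
                channels[ch].append(value)
    return channels
```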
System 3 is composed of ambient sensors such as pressure pads, which are placed in the care environment to monitor lying on a bed or sitting on a chair, and volumetric motion detectors, which cover an area of 10.7 × 9.1 meters and are used to detect motion and activities such as entering/leaving a room. Door sensors are placed on the top of doors, windows, and drawers to detect when they are opened or closed; the toilet mat sensor detects both presence on or near the toilet to monitor bathroom activity and potential safety issues such as falls near to the toilet; emergency buttons can be placed where accidents are likely to happen; they are 7.6 centimeters in diameter; when pressed, they send an alert to the caregiver or to an emergency response service. The main panel acts as a base station and collects data from sensors exploiting the radio channel and a proprietary protocol (General Electric); the base station has battery backup and is equipped by a GSM transceiver and an interface to the landline.
System 4 is a camera-based monitoring system using the internet to continuously stream the video recording human activity (visual motion detection) over a selective region and over a specified observation period (e.g., at nighttime). Two types of cameras have been used: a fixed wireless camera (ADCV510) and a pan/tilt camera (ADC-V610PT). They both have the live resolution options 640 × 480, 320 × 240, and 176 × 144, whereas the recording resolution options are 640 × 480 and 320 × 240; the recording compression is based on MPEG-4. The video motion detection can be configured with three different windows having adjustable sensitivity and thresholds. The streaming of video to the internet relies on a wireless Wi-Fi router and standard encryption (WEP, WPA, or WPA2); cameras can also be connected using a standard Ethernet connection. They are designed to work with the Alarm.com hosted video service which provides a surveillance solution. High-quality live and recorded videos are available to customers through web browsers or via mobile apps. Users can set and recall “preset” views or manually pan and tilt the camera remotely.
In the case of resource constraints for sensing, processing, storing, and communicating, some computational operations are delegated to the Virtual Hub, which is a base station running on a smart phone environment. The Virtual Hub receives data from the subsystems and is connected to the local healthcare information system through a friendly graphical user interface. A thorough description of the MIMS platform can be found in our earlier work.
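The sketch below gives a highly simplified picture of the Virtual Hub's role as a base station: subsystem drivers publish events, and alert-worthy ones are forwarded to the alert system. The class name, event format and alert rule are illustrative assumptions, not the MIMS API.

```python
import queue

class VirtualHub:
    """Toy event hub: collects subsystem events and forwards alerts."""

    def __init__(self, alert_sink):
        self.events = queue.Queue()
        self.alert_sink = alert_sink   # e.g. a gateway to the nurse call system

    def publish(self, source, kind, payload=None):
        """Called by a subsystem driver (fall detector, EEG logger, ...)."""
        self.events.put({"source": source, "kind": kind, "payload": payload})

    def run_once(self):
        """Process one queued event; forward anything alert-worthy."""
        event = self.events.get()
        if event["kind"] in {"fall_detected", "emergency_button"}:
            self.alert_sink(event)
        return event

# Hypothetical usage: print stands in for the real alert channel.
hub = VirtualHub(alert_sink=print)
hub.publish("system1", "fall_detected", {"room": "12B"})
hub.run_once()
```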
System 1 and System 2 are examples of active monitoring sensor system (AMSS). As they are based on wearable devices, they offer advantages in terms of continuous monitoring, cost, and efficiency. However, their acceptability strongly depends on the level of usability. System 3 and System 4, instead, were not considered in this study as they are environmental systems, whose major concerns are intrusion and privacy rather than usability.
3.1. Measuring Usability and Acceptability
According to human engineering principles, the design of a system must follow the users’ needs, fears, mental models, self-learning ability, social behavior, lifestyle, and fashion tastes. In fact, an accurate knowledge of end users can be achieved only by observing them closely.
In the case of monitoring people with AD/dementia living in a home or a nursing home, providing suitable care to them requires 24 × 7 continuous monitoring of their everyday activities. Some of their regular daily activities are walking in a corridor, watching television, and, with the help of caregivers, having breakfast, lunch, dinner, and medication; some subjects have a small nap in the afternoon. They are prone to disorientation and wandering at any time of the day or night, and statistics show that they are more prone to falls compared to the general elderly population. To compensate for psychomotor deficiencies, various medical equipment is used, such as canes, crutches, and wheelchairs. However, not all the subjects are able to understand (or remember) that they need to use them during their walking activity to prevent a fall.
In this context, we define usability as the level at which a device can assist a user without interfering with his/her normal activities of daily living. Acceptability is defined by the constraints that guide the designer toward factors satisfying one’s needs and, therefore, people’s willingness to use the system. The following are the evaluation criteria we developed to measure usability and acceptability:
(1) willingness to use (WTU);
(2) easiness to learn (ETL);
(3) time to accept (TTA);
(4) willingness to keep (WTK);
(5) number of errors (NOE) due to incorrect interactions;
(6) level of satisfaction (LOS);
(7) interference with activities of daily living (IWA).
The ranking of evaluation criteria is shown in Table 1.
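As an illustration of how such criteria can be recorded and summarized per subject, the short Python sketch below averages ordinal scores across subjects. The numeric scale and the sample values are assumptions made for the example; the study itself uses the ranking defined in Table 1.

```python
CRITERIA = ["WTU", "ETL", "TTA", "WTK", "NOE", "LOS", "IWA"]

def summarize(scores_by_subject):
    """Average each criterion across subjects.

    `scores_by_subject` maps a subject id to a dict of criterion -> ordinal
    score (here an assumed 1-5 scale); missing criteria count as zero.
    """
    totals = {c: 0.0 for c in CRITERIA}
    for scores in scores_by_subject.values():
        for c in CRITERIA:
            totals[c] += scores.get(c, 0)
    n = len(scores_by_subject) or 1
    return {c: totals[c] / n for c in CRITERIA}

# Hypothetical scores for two subjects:
print(summarize({
    "subject_1": {"WTU": 4, "ETL": 3, "TTA": 2, "WTK": 4, "NOE": 1, "LOS": 4, "IWA": 2},
    "subject_2": {"WTU": 2, "ETL": 2, "TTA": 3, "WTK": 1, "NOE": 2, "LOS": 3, "IWA": 3},
}))
```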
Results of Nielsen’s research have shown that a usability study can be suitably performed with up to 5 subjects, because the behavior of users does not change significantly as their number increases. This matches the obvious fact that performing small tests does not require huge investments in devices, and it makes the test feasible when, as in our case, it is done on a very critical population such as the elderly affected by AD. Of course, this does not hold for statistical studies, but here we are interested only in qualitative results, as the goal is to gain insights for improving the design of wearable devices based on feedback received from users to increase usability and adaptability.
Based on these observations, we tried to get the maximum benefit-cost ratio by delivering the usability and acceptability test to four subjects with advanced-age AD. The four subjects involved in the test were from 75 to 92 years old. All of them were at advanced stages of dementia progression, with associated abnormal behaviors, which limited the usability study. In particular, the AD subjects were at levels 5/6 of the Reisberg stage and scored below 12 out of 30 (severe cognitive impairment) on the MMSE (Folstein test). All the patients were in long-term care.
It should be mentioned here that, for 24 × 7 monitoring of AD individuals, a number of trials had to be conducted before a full data collection was accomplished, with the result that some of these tests could not be completed on consecutive days. Some contributing factors to the difficulty in monitoring were age, disease, and associated behavior. Nevertheless, the observations were invaluable.
4. Results and Discussion
Table 2 summarizes the results of the study using Systems 1 and 2 of the MIMS platform. Experiences and reactions of the subjects in adapting to the wearable devices and to the monitoring sessions are reported in terms of the usability and acceptability criteria described above, together with observed reactions from the subjects.
Overall, the study showed that with a few modifications to the way devices are placed, colored, or integrated with clothing, and after some convincing stories about the importance of wearing the devices, AD individuals eventually wore and benefited from the monitoring technologies. Since the Shimmer sensor only has to be placed on the waist, System 1 achieved higher usability and acceptance than System 2. From the study, we found that integrating the sensor with clothing, so that it could be considered an everyday accessory, made it better accepted. In doing so, the device must be prevented from choking the individual, and the difference in dress code between women and men must be considered. Since the device is sensitive to movements, careful wearing practices must be observed while placing/removing the device, and touching, meddling with, or breaking the device should be prevented during specific activities such as lying on a bed (during the afternoon nap) or sitting on a chair.
Therefore, we adopted the following two solutions. (1) The device was integrated onto a belt buckle (see Figure 4(a)). A leather style gave the buckle a retro aesthetic that made it suitable for both men and women. (2) The device was attached to a Velcro strap belt. Two small straps were crossed to hold the device firmly and properly placed on the waist (Figure 4(b)). For women, the device was hidden under the shirt/vest.
A significant effort was necessary to improve the usability of System 2, as the Enobio sensor must be worn like a hat during night sleep. The main problem arose from the presence of a bulky battery pack and transmitter on the user’s nape, and this initially made it almost impossible to carry out sleep tests. During the study, we repeatedly modified the device in order to improve its acceptability (measured by WTU) and to make it as unobtrusive as possible in order to avoid the user feeling embarrassed while wearing it (measured by WTK).
To improve the ergonomics of the Enobio sensor, we moved the battery/transmitter from the back of head/neck to the top of the head on a belt. For acceptability, we observed that the elderly enjoy wearing caps in the night to keep them warm; so we worked on the aesthetic side by embedding the sensor in a bonnet style cap with a light texture to prevent sweating (see Figure 5). However, it happened that some users took the hat off before getting into a deep sleep. This problem was partially solved by asking them to wear the hat without the sensing device during daytime. In this way, they got acquainted with wearing the hat and no longer noticed the sensing device was embedded in the hat during the sleep time.
An important factor to be considered here is the color of the device’s enclosure. Colors have different impacts and meanings in one’s space or environment. Bright colors or color combinations can help visually impaired people in understanding the surroundings. Warm colors such as orange red, pink, yellow, brown, and their shades are favorable for identifying objects. Cool colors such as blue, green, purple, and their shades are useful to give an impression of coolness, discretion, and serenity. The study gave us evidence that when a device comes in one’s favorite colors, it is easier to make it acceptable, as it happened with subject 1 to whom we provided the Enobio with a pink cap.
5. Conclusions
Technologies applied to healthcare are meant to improve wellness among people. However, not everyone easily accepts such technologies as designed by engineers. The usability study showed that the design and development of a monitoring device must consider its target users’ preferences before it can be broadly deployed. Nontechnical factors, depending on both the users and the environment, must be considered to achieve quick adaptability and reap the benefit of improved care.
The wireless sensor devices developed within our fall monitoring project were tested to assess their usability among AD elderly in a long-term care home. This rare opportunity gave us many insights leading to positive changes in our system in terms of ergonomics and aesthetics, as well as some modifications to our system architecture. Though the sample size was small due to the complexity in conducting the tests and the difficulty to manage AD subjects for test during day and night, we were able to achieve a qualitative usability assessment.
Patients’ unawareness of the system’s benefits is a major concern. For example, the Enobio EEG wearable sensor was formerly tested with healthy subjects informed of the sleep study. Even though they reported a slight discomfort, they were always conscious of wearing the hat and often restricted their movements while being in bed. During the study, our patients showed different reactions. They did not understand the sleep tests being performed. Some of them thought that the hat was to keep them warm. While some did not move at all in bed, assuming a supine position for the entire night, others kept removing the hat. As a result, their sleep was interrupted by the testing, as the hat had to be put back into the correct position several times.
The study suggests that ergonomic and aesthetic modifications are necessary to improve the level of usability and acceptability, especially in an elderly user population. Analysis of the users’ dress code was fundamental to figure out a comfortable and easily wearable solution. Typically, the elderly are attached to a specific aesthetic dress code, characteristic of their likes/dislikes. They prefer simple, loose, and comfortable dress and therefore the focus should be on a retro style. Unfortunately, such loose dresses make it difficult to put wearable devices close to the body in order to monitor accurately. At the same time, care must be taken such that these devices should not cause itching, rashes, or skin diseases if worn too tight.
From a manufacturing point of view, the devices worn by a patient must be robust and waterproof to avoid accidental damage (e.g., the sensor can fall into a sink full of water, or it can be thrown away or tampered with). The device should also provide a switch or a special combination of buttons in order to be activated and deactivated by the nurse before placing it. A battery indicator is another element that would help nurses to identify if the device needs to be charged. The caregivers’ interface is fundamental for understanding how to assign each sensing node to a person and to check the history and general status. In particular, the sensing devices should periodically send a message to signal that they are working correctly and, in this case, they will also provide an update to a localization system.
From the technical point of view, it was found that the deployment of a monitoring system should consider the existing communication network infrastructure and the range of sensors in a pretesting phase. Feedback from such testing would enable modifications to the system’s functions in order to improve its performance and adaptability. For example, in designing the Shimmer-based prototype we initially set the sensors to work within a 100-meter range from the base station, without considering that our subjects were used to walking back and forth along the corridors, even far from their bedrooms. After the study, we realized the need of extending the coverage of the sensors for continuous monitoring. A possible solution would have been to adopt a multihop routing protocol, with each node running a message forwarding program besides the monitoring one. However, this would have drastically reduced the nodes’ battery lifetime and, in the end, the system’s usability. We identified the right solution by distinguishing nodes into two types: sensing nodes, worn by the individuals, and forwarding nodes, connected to power outlets at fixed locations in the nursing home. Sensing nodes send data to the base station through the forwarding nodes, by selecting the closest node at one-hop distance. Forwarding nodes run a routing algorithm that guarantees no packet loss, which is a critical requirement because the system cannot tolerate undelivered alarms.
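To make the sensing/forwarding split above more concrete, the following sketch shows how a sensing node might pick its one-hop forwarder. It is only an outline under stated assumptions: the authors do not publish their selection logic, and the use of received signal strength as the measure of "closest", together with all names and data structures, is ours.

```python
# Sketch of the two-tier relaying idea described above. This is not the authors'
# implementation: the node identifiers, the use of RSSI as a proxy for "closest
# at one-hop distance", and the function names are all assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ForwarderInfo:
    node_id: int
    rssi_dbm: float   # stronger (less negative) signal assumed to mean closer

def choose_forwarder(candidates: list) -> Optional[ForwarderInfo]:
    """Pick the one-hop forwarding node with the strongest signal, if any."""
    if not candidates:
        return None   # a sensing node would buffer and retry: alarms must not be lost
    return max(candidates, key=lambda f: f.rssi_dbm)

# A battery-powered sensing node hears three mains-powered forwarding nodes:
heard = [ForwarderInfo(1, -71.0), ForwarderInfo(2, -55.5), ForwarderInfo(3, -80.2)]
best = choose_forwarder(heard)
print(f"send data towards the base station via forwarder {best.node_id}")
```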
As final remarks, even though a cost analysis is beyond the scope of this paper, we would like to stress the fact that, as argued by analysts addressing the problem of why technology innovation tends to increase the cost of healthcare rather than making things simpler and cheaper, the best way technology can save costs is if it is used to better organize the healthcare system. In this sense, systems like the MIMS platform offer an automatic and seamless monitoring system, enabling continuous data collection otherwise very difficult or even impossible to obtain. The bare technological cost of the MIMS equipment used for our experiments, which glues together known and mature technologies, is in the order of 200 USD and 2,000 USD for Systems 1 and 2, respectively. However, we must be aware that the market cost of such systems is driven by healthcare stakeholders other than the research laboratories, whose role is only to propose proof-of-concept, innovative healthcare systems. In commercial use, the entire set of the ambient monitoring systems shown in Figure 1 is being deployed by healthcare companies in homes and nursing homes of some North American provinces, for an approximate cost of 100 USD per month. So, when deployed in large numbers, there are considerable savings in healthcare cost with manageable equipment cost.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors wish to express their appreciation for the administrators, nurses, and families of the residents from the Kings Way Care, New Brunswick, Canada, who participated in the study. The environment monitoring equipment support provided to the researchers by industries Care Link Advantage and BeClose was valuable for the study. This work was partially funded by the Fondazione Cassa di Risparmio di Lucca, Italy, under the project “Fall Detection Alarm System for Elderly People” (FiDo). Special thanks are due to Arnoud Simonin and Anais Zeifer, the intern students from the U of Limoges, for their help during testing and in the ergonomic development of the hat solution.
References
- S. Abbate, M. Avvenuti, and J. Light, “MIMS: a minimally invasive monitoring sensor platform,” IEEE Sensors Journal, vol. 12, no. 3, pp. 677–684, 2012.
- H. Cao, V. Leung, C. Chow, and H. Chan, “Enabling technologies for wireless body area networks: a survey and outlook,” IEEE Communications Magazine, vol. 47, no. 12, pp. 84–93, 2009.
- R. K. Ganti, T. F. Abdelzaher, P. Jayachandran, and J. A. Stankovic, “SATIRE: a software architecture for smart AtTIRE,” in Proceedings of the 4th International Conference on Mobile Systems, Applications and Services (MobiSys '06), pp. 110–123, ACM, June 2006.
- M. Sung and A. S. Pentland, “Livenet: health and lifestyle networking through distributed mobile devices,” in Proceedings of the Workshop on Applications of Mobile Embedded Systems (WAMES ’04), Boston, Mass, USA, 2004.
- S. Abbate, M. Avvenuti, F. Bonatesta, G. Cola, P. Corsini, and A. Vecchio, “A smartphone-based fall detection system,” Pervasive and Mobile Computing, vol. 8, no. 6, pp. 883–899, 2012.
- S. Abbate, M. Avvenuti, G. Cola, P. Corsini, J. Light, and A. Vecchio, “Recognition of false alarms in fall detection systems,” in Proceedings of the IEEE Consumer Communications and Networking Conference (CCNC '11), pp. 23–28, January 2011.
- Shimmer Research, 2010, http://www.shimmer-research.com.
- Enobio by Starlabs, 2010, http://starlab.es.
- S. J. Guastello, Human Factors Engineering and Ergonomics: A System Approach, Lawrence Erlbaum Associates, New York, NY, USA, 2006.
- M. Avvenuti, C. Baker, J. Light, D. Tulpan, and A. Vecchio, “Non-intrusive patient monitoring of Alzheimer's disease subjects using wireless sensor networks,” in Proceedings of the World Congress on Privacy, Security, Trust and the Management of e-Business (CONGRESS '09), pp. 161–165, IEEE, August 2009.
- R. V. Pedroso, F. G. D. M. Coelho, R. F. Santos-Galduróz, J. L. R. Costa, S. Gobbi, and F. Stella, “Balance, executive functions and falls in elderly with Alzheimer's disease (AD): a longitudinal study,” Archives of Gerontology and Geriatrics, vol. 54, no. 2, pp. 348–351, 2012.
- J. Nielsen, “How Many Test Users in a Usability Study?” Jakob Nielsen's Alertbox, 2012, http://www.useit.com/alertbox/number-of-test-users.html.
- J. S. Skinner, “The Costly Paradox of Health-Care Technology,” MIT Technology, 2013, http://www.technologyreview.com/news/518876/the-costly-paradox-of-health-care-technology/. | https://www.hindawi.com/journals/ijta/2014/617495/ |
Race Traits: very dark skin, always with light-colored eyes [light blue, green, or gold].
Hair colors range from dark brown to black.
Lifestyle: Adapted to living life in the harsh deserts, the Rakkel seem to actually thrive in it. They live deep in the desert, coming to the coastlines frequently to trade with the Vess'ri. Nomads by necessity, they tend not to form large cities - instead moving in small tribes. There is one fairly large city on the desert side of the mountains, where the ruling family lives.
The Massachusetts Veterinary Medical Association
For the advancement and protection of the veterinary medical profession in Massachusetts by promoting the betterment of animal health and well-being, enhancing the human-animal bond, safeguarding public health, supporting legislative advocacy and providing excellence in continuing education.
Mental Health Resources
Explore resources to help cope with stressors such as burnout and compassion fatigue, as well as info on maintaining mental health and wellness.
Learn More
FAQs
Browse some of our most frequently asked questions. Don't see an answer to your question? Contact us and we'll find an answer!
Learn More
Legislative Initiatives
Advocacy is one of the most important benefits MVMA provides member veterinarians. As a unified voice, we can affect positive change for the profession. Learn about our process and key legislative priorities. | https://www.massvet.org/?page=CharityLandingPage |
Lymphocytes in the blood are cells that play a central role in immunity and comprise subsets of cells having different functions, such as T cells, B cells and natural killer (NK) cells. The T cells are not a uniform group of cells but include functionally differing subsets of cells which are known as CD4 T cells, CD8 T cells or naive T cells.
The individual types of cells mentioned above have surface proteins (antigens) characteristic of the respective cell types. The number and ratio of each type of cell have conventionally been determined by staining with a labeled monoclonal antibody against the relevant antigen and then performing flow cytometric analysis. Also, functions of individual lymphocytes have conventionally been determined by measuring a protein (cytokine) related to proliferative activity, or by measuring proliferation of the individual lymphocytes under culture conditions.
Using the aforementioned method, the inventors of the present Application have revealed that the subset configuration of lymphocytes and functions thereof could vary or degrade as a result of aging (refer to Non-patent Documents 1 and 2).
Non-patent Document 1: “Differential age-change in the number of CD4+, CD45RA+ and CD4+CD29+ T cell subsets in the human peripheral blood”, Mechanism of Ageing and Development, Utsuyama M., Hirokawa K., Kurashima C, Fukayama M., Inamatsu T., Suzuki K., Hashimoto W. and Sato K., Vol. 63-1, pp. 57-88, Mar. 15, 1992.
Non-patent Document 2: Katsuiku Hirokawa, “Aging and Immunity”, Nippon Ronen Igakkai Zasshi (Journal of The Japan Geriatrics Society), vol. 40-6, pp. 543-552, November 2003.
While individual data given in these Non-patent Documents show the ratios and functions of individual cell subsets, the data do not necessarily represent the comprehensive immunity of humans.
White blood cells exist at a concentration of 4000-8000 cells/μl in peripheral blood of a healthy person. The white blood cells include (I) granular leukocytes having segmented nuclei and neutrophilic granules and (II) mononuclear cells each having a round nucleus. Most of the mononuclear cells are lymphocytes, while a portion of them are monocytes (macrophages) having a function of phagocytosis. Among these mononuclear cells, the lymphocytes play a major role as immune cells.
The lymphocytes are made up of various subgroups having different functions. Three larger subgroups are those of T cells, B cells and NK cells.
The T cells are further divided into two large groups called CD4 T cell and CD8 T cell subsets. Either the T cells or the B cells include memory cells which have received antigenic stimulation due to infection or the like and naive cells which have not experienced any antigenic stimulation.
While the above discussion has shown a rough grouping of the lymphocytes, these groups respectively have different functions. This means that the immunity of a human is made up of the total capability of the functions of the various subsets having such different functions.
Therefore, a subset having one function does not represent the entirety of an individual.
As explained thus far in detail, it is possible to observe the functions of a wide variety of lymphocyte subsets and proportions thereof among the entire lymphocytes. Specifically, although it is possible to acquire cell population data, such as a T cell count of 1540 cells/μl, a B cell count of 105 cells/μl and an NK cell count of 225 cells/μl, it is not obvious how individual items of these data correlate with the immunity of a human. In other words, there is a problem that no method is available to an individual person for objectively evaluating the level of his or her own immunity (immune function).
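As a purely illustrative aside, the example figures quoted above can be expressed as proportions of the total lymphocyte count, as sketched below; the calculation and naming are not taken from the document, which argues that such data alone do not represent overall immunity.

```python
# Illustration only, not part of the patent's proposed method: turn the example
# subset counts quoted above (cells per microlitre) into proportions. The point
# made in the text is precisely that such raw figures do not by themselves give
# an objective measure of a person's overall immunity.
counts_per_ul = {"T cells": 1540, "B cells": 105, "NK cells": 225}

total = sum(counts_per_ul.values())            # 1870 cells/ul in this example
proportions = {name: 100.0 * n / total for name, n in counts_per_ul.items()}

for name, pct in proportions.items():
    print(f"{name}: {pct:.1f}% of {total} lymphocytes/ul")
```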
| |
Summary:
My client is a Biopharmaceutical company who is looking for a Maintenance Technician to join their team in Carlow. The successful candidate will be responsible for maintaining and troubleshooting process devices, instrumentation and controls in support of vaccine manufacturing.
Responsibilities:
- Provide effective technical support to production in all aspects of machine and equipment maintenance, installation and modification; perform detailed maintenance, calibrations and troubleshooting.
- Required to operate and clean the process equipment as necessary.
- Maintain process and production equipment, ensuring ongoing preventative maintenance, equipment troubleshooting and repairs to ensure continuous, reliable and repeatable operation of all equipment; drive Total Productive Maintenance.
- Ensure effective management and equipment shutdown scheduling, ensuring resources are available, thereby minimizing downtime.
- Support maintenance planning and preventative maintenance through completion of planned and emergency work orders, calibrations, etc.
- Document maintenance work, including upgrades, made to existing equipment, including preventative maintenance performed and parts used, to ensure appropriate documentation and records of repair history.
- Operate and monitor production support equipment, using MES/DCS and PLC based systems, to ensure optimum equipment uptime and target outputs, whilst facilitating continuous process improvements using Lean Principles.
- Operate, troubleshoot and repair complex systems which may include CIP, Autoclaves, Glassware Washers, production vessels, HVAC, Isolators, compressed gasses, plant steam/condensate, bulk chemical distribution and waste water treatment under minimal supervision in a highly regulated, cGMP environment.
- Interpret P&ID’s, equipment/system layouts, wiring diagrams, and specifications in planning and performing maintenance and repairs.
- Support continuous improvement by leading and active participation in repairs, upgrades, preventative maintenance and system failure investigations and investigation reports, execution/development of change control, and contribution to Kaizen events as appropriate.
- Perform root cause analysis of system failures, substandard equipment performance, using standard tools and methods, to resolve machine and system issues e.g. FMEA, Fishbone diagrams, 5 why’s etc.; implement subsequent corrective action through the change management system.
- Supply information and technical data for securing spare parts and equipment asset entry into the CMMS; liaise directly with vendors on supply of parts, upgrades to systems etc as necessary.
- Assist in general facility upkeep and provide responsive customer support with emphasis on customer satisfaction.
- Participate effectively in writing/revising/rolling out accurate operational procedures, training materials and maintenance procedures for various IPT systems; ensure all work is carried out in line with same.
- Leadership activities including selection, development, coaching and day to day management.
- Ensure that the team receives appropriate resources and programmes to develop technical and other skills, to complete their jobs whilst stimulating personal growth and development in line with role.
- Develop and maintain training programmes required to comply with Global Policies, Procedures and Guidelines, regulatory requirements and execute current Good Manufacturing Practices (cGMP) in the performance of day to day activities and all applicable job functions.
Qualifications & Experience:
- Time serviced Apprenticeship or equivalent Certificate/Diploma in an Engineering or related discipline is required.
- 5 years’ experience in a similar role; ideally in a manufacturing, preferably GMP setting.
- Troubleshooting and maintaining process instrumentation and equipment.
- Understanding of mechanical/electrical/instrumentation/pneumatic processes.
- Sterile filling processes.
- Knowledge of regulatory/code requirements to Irish, European and International Codes, Standards and Practices.
- Report, standards, policy writing skills required.
- Proficiency in Microsoft Office and job related computer applications required.
- Lean Six Sigma Methodology experience desired.
- Effective communication and interpersonal skills to interface effectively with all levels of colleagues in a team environment, and with external customers.
- Understand the specific responsibilities of all departments as they relate to ones own department, understanding the business processes ones department supports. | https://job-openings.monster.ie/maintenance-technician-carlow-carlow-ie-tandem-project-management/223530914 |
Harbor Homes of Martha’s Vineyard is looking at a possible winter shelter space to accommodate homeless people during the day and overnight.
Homelessness continues to be a problem for Martha’s Vineyard, and with housing issues and the economic impacts of the COVID-19 pandemic touching many here, that problem is only growing. But a number of organizations, like Harbor Homes, the Houses of Grace, and the Martha’s Vineyard Hospital, are working together to figure out a way to keep people safe and comfortable during the cold winter months.
Initially, the volunteer base established within the Vineyard faith community (through Houses of Grace) and the broader Island community would serve homeless people at St. Andrew’s Episcopal Church in Edgartown, but a large portion of those volunteers are in the susceptible age range of 65 and older.
Because of this, combined with the stringent space requirements laid out by major public health institutions, community homelessness advocates were forced to restructure and rethink how they might deliver accommodations to people this winter.
Karen Tewhey, Harbor Homes executive director, said she was approached by a community organization that offered a space for a winter shelter that would be able to accommodate around 15 people.
Tewhey said the conversation is still being had, and Harbor Homes may sign a lease soon.
According to Tewhey, the original plan was to locate smaller spaces to house the majority of the homeless population on the Island. But she said having multiple locations would require more staffing, which is currently very limited.
Advertisements will be posted soon for two paid positions: a part-time program coordinator who will oversee scheduling, and a shelter provider who will sleep at the shelter and oversee general operations. Both of these are temporary positions, from November through March, Tewhey said.
As of now, Harbor Homes is hoping the larger shelter space will meet the needs of the community. She said there are subcommittees formed for food issues and COVID protocols that will work to create a structured system for any shelter options. The hospital is working closely with Harbor Homes to ensure that any space will be safe for occupants.
In addition to paid staff, Tewhey said they are also looking for volunteers in order to possibly extend hours and serve other needs in the homeless population.
According to Tewhey, the prospective shelter would be available 24 hours a day for the winter season, so it is possible that the space could serve as both a daytime and nighttime shelter.
“That is really important, because there is currently no warming shelter established,” Tewhey said.
Although hot meals will not be prepared onsite due to COVID health restrictions, Tewhey said, the shelter will be giving out bagged meals. She added that the shelter cannot accept donated food like casseroles or other baked goods because of health restrictions.
“Preparing, serving, cleaning. We are really trying to make this process as simple and risk-free as possible. We are using all paper goods, and utilizing food that is easy to prepare and serve,” Tewhey said.
The shelter will serve a core group of people who will be identified and kept track of. When people show up at the shelter, their information will be processed and their temperature taken. They will also undergo an onsite screening.
If anyone is found to have been exposed to COVID, or is showing symptoms, Tewhey said, Harbor Homes is working on procedures to quarantine and mitigate exposure to others in the shelter.
“The tricky thing with COVID is, What if someone who is frequenting the shelter does become ill? We are looking into what the options are to quarantine that person, both immediately onsite, then what the long-term options are,” Tewhey said.
Volunteers or staff who are conducting screenings at the shelter do not have to have a health background, but will be trained by a medical professional. The shelter will also need individuals who have training in mental health issues.
Harbor Homes treasurer and Houses of Grace volunteer Marjorie Mason said the majority of the community is aware how the homelessness leadership on the Island has had to rapidly reinvent how the shelter program can work.
Apart from a prospective shelter, Mason said she wants to appeal to landlords and homeowners whose homes sit largely unoccupied during the winter months.
She referred to a program in Falmouth called Belonging to Each Other, a winter program where landlords accept a modest amount of rent and accommodate homeless people in their houses. Professional social service and health workers monitor and assist in those spaces, and make sure procedures are adhered to and oversight is administered.
“That is absolutely lifesaving,” Mason said. “On the Vineyard, we kind of have a mindset toward any rental property being used for maximum profit — that is a habit of thinking that really could be re-examined. I am positive there are landlords in our community who would be willing to revisit that way of thinking.”
On the Cape and Islands, Mason said there are more than 30 people per year who die as a result of homelessness. “This issue needs to be part of the larger conversation,” she said. “There are terrible consequences to having people who might not only be underserved, but neglected entirely. I think little by little, the word is getting out to the community.”
With COVID, Mason said, there will be more evictions, and more people who are losing their financial stability due to various issues. Apart from people who could find themselves newly homeless, there are people who continue to live in substandard housing and dangerous conditions outdoors, she said.
“The hospital and the boards of health are aware of this, but the public is only dimly aware of this,” Mason said. “People who love this Island need to understand that there are third world crises happening right here, right now.”
The Rev. Vincent (“Chip”) Seadale, rector at St. Andrew’s and member of Houses of Grace, said he hopes there will be a space secured by the first of December. He added that he is excited and heartened to see additional community involvement in supporting the homeless population on-Island.
“Being a leader of a faith community on the Island, my heart rejoices to see how much interest, caring, and involvement there are from so many community members,” Seadale said. | https://www.mvtimes.com/2020/10/26/possible-homeless-shelter-space-found/ |
The two-day 12th Asia Pacific Economic Cooperation (APEC) Senior Disaster Management Officials Forum (SDMOF) ended in Kokopo yesterday with recommendations to APEC economies on improving warning and reducing the impact of disasters in the Asia Pacific region.
The forum ended with a summary of recommendations to be presented to the concluding Senior Officials Meeting and Ministerial Meeting to be held in Port Moresby in November, 2018.
Among the recommendations, senior disaster management officials recommended that APEC economies:
Promote disaster resiliency through enhancing efficiency and coverage of all-hazards early warning systems by building a networked platform to exchange data, information and analysis for improving forecasting skills and human capacities based on proactive information dissemination and effective disaster risk communication strategies.
These recommendations were discussed under four different sessions, which identified four main considerations to improve the effectiveness of multi-hazard monitoring and warning systems to reduce disaster risk:
1. The Collection and Synthesis of Research, Data and Analysis for Effective Warning, including approaches and experiences with the synthesis of hazard research, monitoring, horizon-scanning, data collection and analysis generation which yield effective seasonal and non-seasonal warnings, including the use of digital technologies.
2. Communicating the Warning Message, including research and global good practice on communicating effective and actionable warning messages which lead to timely action to avoid or reduce harm. This may also include effective practices in pre-disaster community-level mobilisation and education to enhance understanding of risks and warning.
3. The Delivery of the Warning Message, including approaches and experiences with the use of digital and other technologies to deliver targeted warning messages in time and to the correct locations to enable actions to reduce or avoid harm to at-risk communities while avoiding unnecessary concern to communities not at risk. This includes situations where digital and other new technologies have been successfully integrated or combined with traditional communication methods.
4. Localizing the Warning, including approaches and experiences with localized warning systems where centralized warning systems are not effective due to time, distance and connectivity constraints, including how centralized and localized warning systems can be linked or networked to provide pre-warning synthesized analysis, with attention to digital technologies as appropriate.
The forum was held at Kokopo, East New Britain, Papua New Guinea on 25 and 26 September 2018 and was officially opened by the Honorable Kevin Isifu, Minister for Inter-Government Relations of Papua New Guinea, and the Honorable Nakikus Konga, Governor of East New Britain Province of Papua New Guinea.
The more than 90 delegates who attended the forum included senior disaster management officials from 9 APEC member economies, namely Australia; People's Republic of China; Japan; New Zealand; the Philippines; Chinese Taipei; the United States; Vietnam and Papua New Guinea, as well as representatives from UNISDR, ADPC, ADRC, UN agencies, and international and local private sectors.
The 12th SDMOF focused on “Advancing the Multi Hazard Early Warning Systems for Emergency Preparedness and Disaster Risk Management”, where senior disaster management officials and regional key stakeholders considered the context of a “new normal” for disaster risk management, in which a combination of natural and human factors is leading to disasters which are more complex, more frequent, and increasingly difficult to anticipate and manage.
One of the two SDMOF Co-Chairs and Climate Change and Development Authority Managing Director Mr. Ruel Yamuna said “with this new normal, the forum highlighted that timely hazard monitoring and warning, the efficient use of digital technologies, and warnings which result in timely and effective action, have become increasingly important to enhance disaster risk management to ensure economic growth in the region”.
He said the senior disaster officials expressed their sincere gratitude and appreciation to Papua New Guinea for the hospitality and for organizing the forum in raising the issues of early warning. The senior disaster officials look forward to further collaboration on emergency preparedness in Chile, 2019. | https://ccda.gov.pg/news/apec-economies-improve-warning-and-reduce-impact-disasters-within-asia-pacific-region |
This gorgeous custom-built home is located on a quiet cul-de-sac adjacent to the Castle Rock community pool. Custom features abound in this home from the barrel-vault in the entryway to the stone wall where the spacious open-concept living area and kitchen meet. The large kitchen overlooks the living and breakfast rooms and features granite counter-tops, travertine backsplash, and stainless steel appliances. The master suite has French doors that open to the covered back patio and features a spa-like bathroom with gigantic soaking tub and walk-in shower with multiple shower heads and bench. The split-bedroom plan features large guest bedrooms and a guest bathroom with plenty of storage and counter space. | https://schilton.agtowncb.com/for-sale/4107-rocky-mountain-court-college-station-tx-77845/149-153641 |
I’m Paul Willson. I am a brown belt in Ju Jutsu, a centuries-old Japanese martial art.
Netflix recently released the documentary Age of Samurai telling the story of the end of the Sengoku period or Warring States period from 1467 to 1615. The documentary dramatises the story, with commentary from historians, of the rise to power of Oda Nobunaga, Toyotomi Hideyoshi and Tokugawa Ieyasu and their eventual unification and control of the whole of Japan.
The Sengoku period, or Warring States period, was a period of constant warfare in Japan in which any semblance of central control dissolved and the various lords, or daimyo, fought amongst themselves for power, land and wealth. Think of Game of Thrones or the Wars of the Roses. For over a thousand years Japanese Emperors, with a few exceptions, held only a ceremonial position. Real political power was in the hands of hereditary Shoguns, who were in effect military dictators and who normally gained control of Japan by defeating the previous Shogunate in battle.
In martial arts we often have a romanticised idea of Bushido. The Sengoku period was a last man standing civil war. Military defeat meant death by beheading or seppuku (ritual suicide) of the daimyo and their family (including their children even if they were young). Quite often if a daimyo fell those in their pay simply switched sides. Obedience of weaker daimyo and important families and adherence of agreements was gained by the taking of hostages. Loyalty was given only to the strong and honour was rare. The winner of course was someone nobody ever expected.
The story begins with the rise of Oda Nobunaga, the heir to the daimyo of the small and unimportant Owari province. Oda Nobunaga was a belligerent and opportunistic character who first secured power in Owari province after the death of his father, then ambushed and defeated the far superior army of Imagawa Yoshimoto at the Battle of Okehazama as it marched through Owari province on its way to the then capital of Kyoto, and then began a bloodthirsty campaign to unify Japan, slaughtering anyone who stood in his way and earning himself the nickname of the Demon Daimyo after the massacre on Mount Hiei and his destruction of the warrior monks of the Enryaku-ji monastery. We then had a perfect storm of three men coming together who were the only ones with the vision, strategic talent and tactical ability to unite Japan.
You will have noticed my use of the words bloodthirsty, massacre, slaughter and destruction. These words do not exaggerate events. The death toll of Oda Nobunaga’s campaigns, which killed men, women and children, civilians and combatants alike, was horrific, and his nickname of the Demon Daimyo was well deserved, as he shocked even those living in a time when violent deaths and atrocities were common.
The reason for their success was down to what they did. Early on Oda Nobunaga saw the value of arquebuses (early firearms) introduced previously by the Portuguese. His innovative use of arquebuses at the Battle of Nagashino defeated the Takeda clan, who had one of the best, if not the best, cavalry forces in Japan at the time. Oda Nobunaga also had the ability to spot and promote talent. Toyotomi Hideyoshi was only a foot soldier, but he was able to rise through the ranks to become one of Oda Nobunaga’s most trusted advisors at a time when this was virtually impossible, and Tokugawa Ieyasu, who had fought for Imagawa Yoshimoto and would probably have been beheaded after the Battle of Okehazama, also became an important advisor to Oda Nobunaga.
Even Oda Nobunaga’s death in the Honno-ji incident couldn’t stop the unification of Japan, as we moved to the next stage of the story in which Toyotomi Hideyoshi rose to supreme power and completed the unification of Japan. Now don’t think Toyotomi Hideyoshi and Tokugawa Ieyasu liked or trusted each other. However, they did respect each other and came to the conclusion that being allies would serve both their interests. The pair were joined by the highly talented tactician Date Masamune, the One-Eyed Dragon of Oshu, a man who cut out his own deformed eyeball after suffering smallpox to prove he was worthy to be a daimyo when even his own mother wanted him killed. His brazen disregard of Toyotomi Hideyoshi’s authority and his tactical ability forced Toyotomi Hideyoshi to offer an alliance and prestige to Date Masamune, which allowed Toyotomi Hideyoshi to exert control over northern Japan without having to face resistance and spill a great deal of blood.
After the disastrous invasion of Korea and the death of the now mentally unstable Toyotomi Hideyoshi, Tokugawa Ieyasu came to power as Shogun after patiently waiting for decades, outmaneuvering the four other guardians of Toyotomi Hideyoshi’s son and winning the Battle of Sekigahara and the Siege of Osaka, bringing an end to the Sengoku period and creating a hereditary shogunate that brought over 200 years of peace to Japan.
The dramatisation of events did a good job of bringing the Sengoku period to life. Obviously the makers of the documentary could only do so much to bring the horrors of battle to life, but the smaller events were very well done. What the dramatisation could not do, the historians did well to fill in. They really made an effort to emphasise the horrific nature and scale of late Sengoku era battles, with battlefields where bullets flew through the air as two large armies slashed and stabbed at each other, and to strip away any romanticised notions of the period. Their passion for the era was on full display.
If you, as I am, have an interest in history and want to learn about this violent and complex period (which really brought an end to the age of the samurai, since during the Edo period under the Tokugawa Shogunate the samurai became bureaucrats rather than soldiers), it is really worth watching.
The defining characteristic of a ronin is that he was a former samurai separated from service to a daimyo. The kanji that spell out the term "ronin" are literally translated as "wave person," as if he were set adrift to be tossed upon the waves of life. Sometimes, the term "ronin" is translated as "masterless samurai". There are quite a few chambara/jidaigeki films featuring ronin as main characters, including the very famous film Seven Samurai (Shichinin no Samurai) in which some Sengoku Period farmers hire ronin to protect their farms from bandits. In most of the films, the ronin characters are amazingly skilled swordsmen. They are sometimes completely villainous, lecherous, and greedy; more often, these film ronin are noble heroes who stand up for oppressed farmers and townspeople. The reality for most ronin was usually quite different than that portrayed in most films.
Strictly speaking, the term "samurai" means "servant" and designates a bushi (a warrior member of the buke class) who was a daimyo or retainer; samurai received a set stipend, given out in terms of koku (measurements of rice). Those samurai who were the shogun's direct retainers were known as hatamoto (bannermen). So strictly speaking, the term "ronin" refers to bushi who were not samurai retainers. However, many people throughout the ages have used the term "samurai" as a generic term indicating any bushi.
Ronin were allowed to continue to bear a family name and wear the distinctive two swords that they wore when they were clan samurai. However, they effectively existed outside of the official class structure (samurai, farmers, artisans, merchants) that existed from the late Sengoku period through the Edo period. Most ronin lived in poverty without fixed incomes.
Becoming a Ronin
A bushi usually became ronin in one of four ways:
- A clan or fief was defeated and abolished in battle, or the shogunate authorities reduced a fief in size or abolished the fief entirely (this is what happened with the well-known 47 ronin of Ako han who eventually mounted an attack upon the man they saw as being responsible). The samurai involved all would become ronin. Unless the lord of that fief took his retainers with him to his new fief, the samurai in his service would become ronin.
- A samurai was dismissed from service by his daimyo. During the Tokugawa era, according to the Buke Shohatto, no daimyo was allowed to take into service a ronin who had been dismissed by his original daimyo.
- A samurai voluntarily left his fief, with or without his daimyo’s permission, and thus become a ronin.
- A bushi was born as a ronin; he was the son of a ronin.
Ronin during the Sengoku Period
During the Sengoku era (1467-1603), there were numerous inter-clan conflicts. Many samurai changed masters during this time. A bushi who came from a defeated clan could attach himself to another clan and serve as a samurai retainer. It is unclear as to whether or not there were greater numbers of ronin (created by the defeat of clans) or whether there were greater numbers of daimyo seeking samurai retainers during this time. This situation probably would have fluctuated according to specific conditions and events.
After the battle of Sekigahara in 1600, Tokugawa Ieyasu emerged triumphant, becoming the first of a long line of Tokugawa shoguns and establishing peace and order throughout the country that lasted over 250 years. Many fiefs, mainly those connected with the Toyotomi clan, were abolished during the years following the 1615 siege of Osaka Castle in which the Toyotomi were defeated. At that time, around 500,000 ronin existed, without any income or means of support. These unengaged bushi were a persistent problem for the Tokugawa bakufu. There were at least two ronin rebellions during the 17th century. The first, led by Yui Shôsetsu, was aborted before the actual attack; Shôsetsu and some colleagues disemboweled themselves before capture, while other conspirators were captured, tortured and executed. There was a second unsuccessful ronin rebellion in the latter part of the 17th century. The Tokugawa bakufu, at the beginning and middle of the 17th century, engaged in a campaign of suppression, advising daimyo against allowing ronin to enter their fiefs; law-abiding ronin engaged in making some sort of living were allowed to stay. Later on, more liberal government policies were put into place; daimyo and officials were encouraged to take more ronin into their service as samurai. However, this option could only serve a minority of ronin; in an era of peace, few clans needed the large number of samurai that they would need in times of war. The majority of ronin were basically left to fend for themselves. By the end of the 17th century, the number of bushi – clan samurai plus ronin – had been reduced considerably.
Ronin during the Edo Period
Sixty-one daimyô lost their domains during the first fifty years of Tokugawa rule, most of them as the result of failing to properly name an heir in accordance with the stipulations and regulations set down by the shogunate. These attainders made roughly 150,000 samurai, as much as one-fifth of all the samurai in Japan, into ronin. Many of these newly lordless bushi traveled to Edo to seek new work; many failed to find work, and many turned to crime or other violent lifestyles. Many of these men joined forces opposing the shogunate in battles such as the Osaka Campaigns of 1614-1615 and the Shimabara Rebellion of 1637-1638.
After a few generations had passed since the end of the Sengoku period, the majority of Edo period bushi became distanced from actual martial experience and were not particularly skilled with swords or other weapons, even if they did study martial arts in clan dojo. During the Tokugawa era, most clan samurai performed bureaucratic duties for their domains rather than engage in war or martial pursuits. The reality was unlike what many chambara/jidaigeki films that are set during the Edo period show (i.e. most Japanese historical films). Most Edo period samurai who became ronin would thus not be able to establish themselves as strong swordsmen who would bring justice and keep peace for commoners in exchange for room and board, as they do in many of these films. Some Edo period ronin even ended up selling their sword blades, replacing them with bamboo blades.
Kumazawa Banzan wrote a telling summary of conditions faced by ronin during the 17th century: "Today, the worst off of these people are the ronin. There are innumerable occasions of their starving to death during the frequent famines. Even rich harvests and the consequent lowering of the price of rice would not give much relief to those who are already hard up. Every year there are cases of starvation which are unknown to the general public."
The options open to a ronin during the Tokugawa era (1603-1868) were few. One option would have been engaging in criminal activities, becoming a highwayman or being hired by a yakuza gang as a bodyguard. A ronin, strong in martial arts, could engage in a musha shugyô (“warrior’s journeys”), traveling the width and breadth of Japan, engaged in learning and teaching martial arts. Traditionally, such a ronin would be homeless, sleeping under the skies or in temples; he would earn his rice by such chores as chopping wood or working as a common laborer. He could offer martial arts lessons to commoners; it is strongly speculated that the 17th century swordsman Miyamoto Musashi, who spent most of his life as a ronin, earned some of his keep that way. There were also a number of cases of ronin traveling overseas as mercenaries in foreign countries or as pirates and raiders (wakô).
A ronin with a family or who desired a more settled life would have a few other options, most which were not related to martial arts. He could teach in terakoya (neighborhood temple schools for commoner’s children). Sometimes, as depicted in some films, a ronin would earn his living, engaging in piecework handicrafts, fashioning fans, umbrellas, inkbrushes, insect cages, women’s hair combs and the like, selling his handcrafted wares to wholesalers; these were occupations also performed by low-ranking clan samurai needing extra earnings to survive.
A ronin was able to renounce his buke status and become either a farmer, artisan, or merchant; this option would likely become feasible only if he had connections with well-established commoner families to acquire land or learn a trade.
Ronin during the Bakumatsu
During the Bakumatsu Period (mid-19th century - 1868), many ronin found new opportunities to take action in the conflicts; many samurai left their fiefs and became ronin, joining up either with the Loyalist side (advocating the overthrow of the Tokugawa bakufu) or with groups such as the Shinsengumi (advocating preserving the shogunate). These conflicts during the Bakumatsu period eventually led to the Meiji Period and ended the era of the bushi. The final Tokugawa shogun abdicated in 1868. The daimyô domains were abolished in 1871. In 1876, the wearing of swords was outlawed.
References
- Hall, John Carey, translator. Buke Shohatto (The Tokugawa Legislation, Yokohama 1910). This is the text of the laws that mainly concern the conduct and behavior of those in the Buke class during the Tokugawa period.
- Kumazawa Banzan, translated from Japanese by Tsunoda Ryusaku, William Theodore de Bary, and Donald Keene. "Development and Distribution of Wealth" included in Sources of Japanese History, Vol. I, compiled by Tsunoda Ryusaku, William Theodore de Bary, Donald Keene (Columbia University Press, New York, 1958 ) Kumazawa Banzan was a late 17th century bushi who was born a ronin and lived much of his life as a ronin. He was a poltical reformer who wrote many treatises. In this particular article, he discussed the general economy, the reform of government; among other points, he advocated relief for ronin suffering hardships.
- Sansom, George. History of Japan: 1615-1867, Stanford University Press June, 1963. This is a text of the general history of Japan during the Tokugawa period. There is a section that contains a general summary of how ronin fared during this time, including brief accounts of two different ronin rebellions.
- Tokitsu Kenji, translated from French by Cherad Kodzin Kohn. Miyamoto Musashi, His Life And Writings, Weatherhill; New Ed edition, June, 2006. A detailed biography and analysis of Miyamoto Musashi. Among other topics, discusses the particular issues that faced Miyamoto, a ronin who spent most of his life engaged in a musha shugyo.
- Yamakawa Kikue, translated by Kate Nakai. Women of the Mito Domain: Recollections of Samurai Family Life, Stanford University Press, March, 2001. Not very much specifically about ronin, but good information about samurai clan life during the late Tokugawa period.
- Roberts, Luke. Performing the Great Peace: Political Space and Open Secrets in Tokugawa Japan. University of Hawaii Press, 2012. p. 76.
This invention relates to insoles for use in footwear.
An insole comprises a heel portion to support the heel of the foot, an arch portion to lie below the arch of the foot and a front portion on the side of the arch portion remote from the heel portion. An insole normally comprises a soft resilient material which cushions the foot in use. The insole may also be an orthotic device which is normally shaped to provide support for the arch of the foot. Such an orthotic device is usually made to the specification of a podiatrist and as such is quite an expensive item.
According to one aspect of the invention there is provided an insole consisting of
a main part which in use is inserted into footwear and which comprises a heel portion, an arch portion and a front portion; and
a deformable member located at the arch portion which is capable of relatively easy permanent deformation.
By the term “relatively easy permanent deformation” is meant that the item can be deformed by hand but, when so deformed, takes a generally permanent shape from which it would normally not be altered by the weight of the foot standing on the insole. A particular deformable member capable of relatively easy permanent deformation is a member comprising a thin strip of mild steel, typically a strip of mild steel of about one millimetre in thickness. The deformable member would normally be placed only at the arch portion of the insole.
The main part normally comprises flexible resilient sheet material such as expanded ethyl vinyl acetate. Conveniently on the underside of the main part at least the arch portion thereof is an additional support of flexible sheet material which preferably lies under the deformable part so that the deformable member may be sandwiched between the main part and the additional support. The additional support preferably also extends under the heel portion of the main part.
At the heel portion of the insole there is preferably an opening which is covered on the upper side of the main part by an upper member normally formed by part of the same material as the main part so that in use the spur of the heel would rest on the upper member above the opening so that there would be sufficient “give” to protect the spur of the heel from shock during walking.
Desirably a closure member is provided for the opening, the closure member comprising a bottom part and an upper part which fits within the opening. All the above parts are preferably bodies of revolution.
On the upper or preferably the lower side of the main part there are preferably markings to enable a user to cut the main part to the appropriate size to fit into footwear of various sizes.
Embodiments of the invention will now be described by way of example with reference to the accompanying drawings.
In the drawings:—
FIG. 1
is an underside view of an insole of the invention,
FIG. 2
is a plan of the insole,
FIG. 3
is an exploded perspective view of the underside of the insole, and
FIG. 4
is a detail exploded view of the insole with an additional support, and
FIG. 5
FIG. 1
5
5
is a section on line - of ,
10
10
12
12
Referring now to the drawings there is shown an insole for a shoe. The insole comprises a base part that is of a shape that would fit into a very large shoe. The base part is made of a resilient flexible sheet material conveniently expanded ethyl vinyl acetate of about three millimetres thickness.
12
14
16
18
The main part comprises a forward portion , a heel portion and an arch portion which in use support the front of the foot, the heel and the arch respectively.
20
12
20
22
10
22
The underside of the main part is of a light colour. On the underside of the forward portion there is printed a number of guidelines which correspond respectively to the shape of the forward part of the sole of shoes of different sizes. Reference numerals are printed on the underside to indicate the size of the shoe that the insole would fit if cut on the each particular guideline .
16
18
24
24
12
Running under the heel portion and extending under the arch portion and part of the forward portion is an intermediate member which is made of a resilient flexible sheet material conveniently expanded ethyl vinyl acetate of about five millimetres thickness. The intermediate member is bonded to the underside of the main part .
24
12
26
26
28
30
28
30
32
26
26
24
12
Sandwiched between the intermediate member and the base part is a deformable member formed from mild steel sheet of one millimetre thickness. This deformable member has a concave inner wall and a convex outer wall . The walls and are joined by rounded end parts to give the deformable member a generally kidney shape. This deformable steel part is bonded to the facing surfaces of the intermediate member and the main part .
16
12
24
34
36
34
34
24
The heel portions of the main part and the rear of the intermediate member have registering semi-circular rear portions . A circular opening of a radius about three quarters of the radius of the rear portion and being concentric with the rear portion is formed in the intermediate member .
16
38
36
38
12
16
40
42
44
36
42
44
24
38
42
44
On the upper side of the heel portion is a disc-shaped cover member located coaxially with the opening . The cover member comprises the same material as the base part . On the underside of the heel portion is a plug comprising a circular bottom member on which is coaxially mounted a smaller diameter circular projection that fits within the opening . The annular portion of the bottom member outside the projection rests under and supports the intermediate member . The cover member , the bottom member and the projection all are made from the same sheet material as the intermediate part.
14
12
22
10
26
40
44
36
10
In use, the forward portion of the base part is cut along an appropriate guide line to fit a selected shoe. The user will now manipulate by hand the insole so that the deformable steel member is bent to an appropriate shape to provide support for the arch of the user. The plug is fitted to the insole with the projection fitting into the opening . The insole is inserted into a shoe.
The user can now insert his/her foot into the shoe. The insole 10 will provide a resilient support for the foot. The steel part 26 will provide a generally permanent support for the arch of the foot. The spur of the heel of the user will be located above the opening 36 so that there will be substantial “give” when the heel of the shoe meets the ground to protect the heel of the user. However, excess distortion of the cover member 38 is prevented by the plug 40.
It will be seen that the intermediate parts serve to raise the heel to accommodate the weight transfer on to the ball of the foot.
In a modification as illustrated in FIG. 5, an annular accessory 46 is provided. The accessory, which is made of the same material as the plug 40, has an outer diameter the same as the bottom member 42 of the plug and a central opening 48 of the same diameter as the opening 36. The accessory 46 is applied to the underside of the insole 10 between the insole and the bottom member 42 of the plug 40 with the opening 48 registering with the opening 36 and with the projection 44 fitting into the opening 48. The accessory 46 provides additional resilience to the heel part of the insole 10 and thus improved comfort for the heel of the user. In addition the annular accessory builds up the height of the insole at the heel portion 16.
The accessory 46 is preferably provided with bonding material on each of the annular surfaces, covered with protective sheeting. This protective sheeting would be removed before the application of the accessory to the plug and to the intermediate member so that the accessory will bond thereto.
One or more identical additional annular accessories can be provided lying coaxially with accessory 46 to build up still further the height of the insole 10 at the heel portion. This will help compensate a user who has legs of slightly different length. Furthermore the height of the heel portion will accommodate the weight transfer on to the ball of the foot and indeed two insoles, one in each shoe, should be used for this purpose.
We have found that the steel member 26 can be deformed by hand into the appropriate shape to provide an arch support but will be of sufficient strength that it takes a generally permanent shape from which it would not normally be distorted back to the original position by the weight of the body of the user being applied to the part 24 through the arch of the foot of the user. In other words the steel member 26 is capable of relatively easy deformation as herein defined. Thus the insole may act as an orthotic without the costs normally incurred in purchasing a pre-manufactured item. Subsequent changes can also be made by hand to the deformed portion of the steel part if desired.
As is apparent, a single size insole may be provided which is capable of use in most sizes of shoes by the user cutting along the appropriate guideline 22.
We have also found that ethyl vinyl acetate is particularly satisfactory because it is water repellant, odour free and washable.
In a modification (not shown) the main part and the intermediate member, with the deformable steel member located appropriately therein, may be formed by injection moulding. Similarly the plug member 40 and the accessory/ies 46 may be formed by injection moulding. In this case the guidelines and reference numerals may be formed as indentations or low projections instead of being printed.
The invention is not limited to the precise constructional details hereinbefore described. For example, the insole may comprise other resilient material. The thickness of the base portion 12 may vary from about one millimetre to six millimetres. At the heel portion the height of the insole may extend to about eighteen millimetres. The size of the steel part may vary as desired and may indeed extend across the entire width of the arch part of the insole. Any other material which is capable of relatively easy deformation may be used in substitution for the steel part.
The colour of the insole may vary provided that the guide lines and reference numerals are clearly visible. | |
Department of Assessor — See Ch. 4.
Department of Planning and Development — See Ch. 49.
Sanitation Commission — See Ch. 57.
Plumbing — See Ch. 170.
Zoning — See Ch. 213.
Editor's Note: Former Ch. 93, Bureau of Administrative Adjudication, adopted 9-7-2004 by L.L. No. 24-2004, was repealed 7-18-2006 by L.L. No. 19-2006.
A.
There shall be in the Town of Babylon a Board of Grievances and Appeals (BGA), consisting of the then sitting voting members of the Sanitation Commission.
B.
No act of the Board of Grievances and Appeals shall be deemed to have become effective unless such action shall have been approved by at least three of the voting members.
C.
The BGA shall have no authority over labor and employment issues or contractual grievances or criminal violations of the Town Code or tax issues.
D.
The BGA will not have authority to hear or decide any issues involving the Tax Assessor's office, the Planning Board, the Zoning Board, the Plumbing Board, the Prior Nonconforming Use Board, the Sanitation Commission or the Two-Family Review Board.
The BGA, in addition to the duties prescribed by this chapter and any matter referred to it by resolution of the Town Board, may hear, address and adjudicate said matters, including but not necessarily limited to the following:
A.
The supervision of any licenses or permits other than building permits, certificates of occupancy or any permit/license issued by a Building Inspector or Fire Marshal, such as, but not limited to, beach permits, boat slips, camping permits, parade permits, dog licenses, etc.
B.
The efficiency and reliability of the quality and consistency of the service rendered to the inhabitants of the Town by Town employees and departments.
C.
The rental, use and occupancy of any Town-owned land, facility or building.
D.
All licenses awarded pursuant to a bid or a request for proposals (RFP) granting license to a non-Town entity that may perform certain acts or services or supply certain materials.
E.
To hear such complaints, grievances and appeals and to order the appropriate remedy.
F.
To adjudicate violations by imposing fines, penalties and/or conditions, including the suspension or revocation of licenses and privileges, on the use or occupancy of Town property or facilities.
G.
To hear, address and adjudicate appeals from the decision of any commissioner, administrative employee, department or facility. The BGA may sustain or overturn the decision, order or charge complained of.
H.
The BGA shall have the power and authority to create, modify and establish the rules and procedures for appeals to the BGA, hearings regarding said appeals, licensing issues and the administration of Town privileges, licenses, rentals, vendors and the use, enjoyment and occupancy of any Town property or facility.
All rules and regulations adopted by the Sanitation BGA hereunder shall be filed in the office of the Town Clerk. | https://ecode360.com/6806073 |
Posted 22 December 2013. Updated 26 December 2014.
What happens to us in life comes from two basic sources: the decisions we take ourselves, and events outside our control, including the decisions of other people.
When appraising people at PwC and advising people who I am mentoring, I have noticed that successful people accept responsibility for their decisions. They recognise that they have taken specific decisions, and if those decisions have led to a poor result they accept responsibility for the result and then look for ways to improve the position.
Conversely unsuccessful people often fail to acknowledge that the bad outcomes they experience in life are the natural consequences of the bad decisions that they have taken.
I used this as the theme for my 15th "Thought for the week" on BBC Radio Manchester delivered this morning which is reproduced below. As always, I was introduced as Co-Chair of the Muslim Jewish Forum of Greater Manchester because I want to promote the organisation and what it does.
I often mentor younger people. About a year ago, I was mentoring someone. Let’s call him Fred, not his real name.
I asked Fred to tell me about his career so far, which frankly has not gone well.
Part way through a long tale of many jobs which had gone nowhere, I pulled him up. I told Fred what was bothering me.
For every job, his basic story was the same. “I was at this firm, and this other person did something, so it worked out badly.”
None of his stories took the form “I chose to do this, and the following consequence happened.”
Fred saw his career only as a series of things that had happened to him, not as a series of thing that he had caused to happen.
Psychologists talk about the “locus of control” or in plain English, the place of control.
If you see control as internal, you believe that what happens in life is mainly down to your own actions and decisions.
If you see control as external, you believe that what happens in life is mainly down to what other people do.
All of us take decisions, small ones every day, such as what to eat, and big ones occasionally such as who to marry or what career to follow.
When I look back at my decisions, I can see that some of them were good, some were pretty average, and some were terrible. However I accept responsibility for every decision, and for the results that followed.
I studied hard at school, so I was able to go to Cambridge. I ate too much, so I became fat.
What does religion have to say about this?
Judaism, Christianity and Islam all teach that God controls everything that happens in the world. As it says in the Gospel of Matthew, no sparrow can fall to the ground except by the will of God.
However God’s power does not excuse us from taking responsibility for our own decisions.
Taking good decisions today is impossible if you deny any responsibility for the results of yesterday’s decisions.
Psychologists consider that where an individual places the "locus of control" ("locus" is Latin for "place"), whether it is internal or external, is an important part of analysing that individual's personality. The concept was developed by Julian B. Rotter in 1954 and there is a very informative article about it on Wikipedia.
There is also an easier to read article "What Is Locus of Control?" by Kendra Cherry, who is described as a Psychology Expert.
You can test your own beliefs regarding the locus of control by taking a questionnaire which is based on Rotter's work. I did not find the questionnaire particularly useful since it was obvious for each multiple choice answer what questionnaire outcome would result from selecting the answer. However if you answer each question honestly, the questionnaire will tell you whether you have an internal or external locus of control.
My firm view is that people who have an internal locus of control are more likely to succeed, measuring success by traditional measures such as climbing a corporate hierarchy, earning more money as an employee or running a profitable business. However I have not undertaken any research to validate this.
I recommend reading the article "Leader Career Success & Locus of Control Expectancy" by Prof. Kurt April, PhD (University of Cape Town, South Africa & Ashridge, UK), Babar Dharani, MBA (University of Cape Town, UK), and Kai Peters, MBA (Ashridge, UK). While the authors' own research leads them to "conclude that higher levels of successes are achieved by individuals with an external locus of control expectancy" what makes the paper valuable is the extensive literature search it contains and its bibliography, much of which points the opposite way. | https://www.mohammedamin.com/Success-tips/Accept-responsibility-for-your-decisions.html |
RAID 10 vs RAID 5: Performance, Cost, Space, and HA
DISCLAIMER: I am not a SAN storage expert but I have spent a lot of time looking into SAN storage systems from the business side and I thought I’d share some of my conclusions.
It seems that the proverbial question is how to balance the performance, cost, usable space, and availability of a storage solution. Any DBA will ask you to give him RAID 10 on small fast disks. Anyone paying the bills will ask “Why can’t I use half the disks I bought?”
I took a couple hours with your friendly neighborhood spreadsheet and did the math. I base my calculations on EMC Clariion storage and tried to follow the EMC best practices guide as much as possible.
According to the best practices, I started my calculations based on a necessary performance level consisting of total IOPS, read percentage, and write percentage.
Then, using the following formulas, I calculate the actual disk IOPS required to provide the requested performance:
- RAID 5 (4+1 Groups)
Disk IOPS = (Read % * Required IOPS) + (Write % * RAID5 write penalty * Required IOPS)
- RAID 10
Disk IOPS = (Read % * Required IOPS) + (Write % * RAID10 write penalty * Required IOPS)
The RAID 5 write penalty in a 4+1 RAID group is 4 while the RAID 10 write penalty is 2.
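To make the arithmetic concrete, here is a small sketch of the same calculation (Python; the 2000 IOPS target and the 70/30 read/write split are just example numbers, not figures from the EMC guide):

```python
def backend_disk_iops(required_iops, read_fraction, write_penalty):
    """Translate front-end IOPS into the back-end disk IOPS the RAID group must sustain."""
    write_fraction = 1.0 - read_fraction
    return (read_fraction * required_iops) + (write_fraction * write_penalty * required_iops)

# Example: 2000 front-end IOPS at 70% read / 30% write
raid5_iops = backend_disk_iops(2000, 0.70, 4)   # RAID 5 (4+1) write penalty = 4 -> 3800.0
raid10_iops = backend_disk_iops(2000, 0.70, 2)  # RAID 10 write penalty = 2 -> 2600.0
```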
Before you even put this in a spreadsheet you know what it will tell you-
- In a 100% Read Only environment RAID 5 and RAID 10 will give the same performance. RAID 5 may use less disks to do it but not necessarily.
- In a 100% Write Only environment, RAID 5 will require twice as many disk IOPS and almost twice the number of disks.
- Anywhere in between those two extremes, the more writes required, the less number of RAID 10 disks you will need to achieve the performance.
If we stop there, it doesn’t seem like there is any point in using RAID 5 since even in the best case scenario, there is only a partial chance that we will use less disks. That is where the cost and space effectiveness issues come in.
- Space Effective Storage Allocation
If I want 2000 IOPS, 100% Read Only, I can do that using 15 x 146GB 15k RPM disks in RAID 5 or in RAID 10. In RAID 5 I will get ~1.5TB net space while in RAID 10 I will get ~1TB.
- Cost Effective Storage Allocation
So far, we have compared different RAID types using the same size and speed disks and we saw that theoretically we can use less disks to reach the same performance but at the expense of usable disk space.
If we use bigger disks for the RAID 10, does it make up for the lost space? What effect does using RAID 10 with fewer large disks as opposed to RAID 5 with lots of smaller disks have on the cost of my solution?
That brings us back to the spreadsheet. Using the required disk IOPS we can figure out the required number of physical disks of each type. For the sake of comparison I use the following information which I found on the Internet (your mileage may vary):
- 146GB 4GbFC 15k RPM, 140 IOPS, $1256
- 300GB 4GbFC 10k RPM, 120 IOPS, $1348
- 1TB 4Gb SATA II 7.2k RPM, 80 IOPS, $2088
For each of these I calculate the minimum number of physical disks required for to reach the required IOPS with the required read/write profile for both RAID 10 and RAID 5. Then I figure in the RAID group sizes and calculated the usable disk space.
Using the prices above, I calculate the price per TB of disk space in each RAID configuration and find:
- 146GB, RAID 5 (4+1): $11.91K/TB
- 300GB, RAID 5 (4+1): $6.35K/TB
- 1TB, RAID 5 (4+1): $2.87K/TB
- 146GB, RAID 10 (4+1): $19.01K/TB
- 300GB, RAID 10: $10.15K/TB
- 1TB, RAID 10: $4.59K/TB
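For what it's worth, the spreadsheet math behind these $K/TB figures can be approximated as below. The disk prices are the ones quoted above; the 10% capacity overhead factor is my own assumption, chosen only so the output lands in the neighbourhood of the numbers listed (the exact usable capacity of a formatted Clariion disk will differ):

```python
DISKS = {
    "146GB 15k FC":  {"tb": 0.146, "price": 1256},
    "300GB 10k FC":  {"tb": 0.300, "price": 1348},
    "1TB 7.2k SATA": {"tb": 1.000, "price": 2088},
}
USABLE_FACTOR = 0.9  # assumed fraction left after formatting/overhead

def raid5_price_per_tb(disk, group=5):
    d = DISKS[disk]                              # 4+1 group: one disk's worth of parity
    usable_tb = (group - 1) * d["tb"] * USABLE_FACTOR
    return group * d["price"] / usable_tb

def raid10_price_per_tb(disk):
    d = DISKS[disk]                              # mirrored pair: half the raw capacity
    usable_tb = d["tb"] * USABLE_FACTOR
    return 2 * d["price"] / usable_tb

for name in DISKS:
    print(f"{name}: RAID 5 ${raid5_price_per_tb(name)/1000:.2f}K/TB, "
          f"RAID 10 ${raid10_price_per_tb(name)/1000:.2f}K/TB")
```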
What is really interesting here is how close the 300GB RAID 10 is to the 146GB RAID 5! Is this a coincidence?
Looking at the IOPS/TB relationship and $K/IOPS, we find that the ratios are dependent on the read/write profile of the required IOPS. Given the similar Price/TB of 300GB RAID 10 and 146GB RAID 5, I look there for a price/performance/disk space sweet spot.
The following table shows the difference between 146GB RAID 5 IOPS/TB and 300GB RAID 10 IOPS/TB.
Each column represents a different Read percentage (the Write percentage is the inverse).
Negative numbers mean that for this Read percentage and IOPS requirement, RAID 10 gives more IOPS/TB of disk. Positive numbers mean that RAID 5 gives better IOPS/TB.
What you see from this is that for any read workload under 70%, you will get more IOPS/TB from 300GB 10k RPM disks using RAID 10 than you will with RAID 5 on 146GB 15k RPM disks.
Even if you hit 80%, RAID 5 will gain less than 100 IOPS over the RAID 10 configuration and you are still better off paying less for your disks- let the cache do its job. Combine all this with our previous conclusion – that the 300GB RAID 10 configuration is ~$1.75K less expensive per TB and I say you have a winner. | https://yonahruss.com/architecture/raid-10-vs-raid-5-performance-cost-space-and-ha.html |
Niger, which is formally known as the Republic of Niger and is bordered by Chad, Libya, Nigeria, Benin, Mali, Algeria and Burkina Faso, is the largest country in West Africa.
This country is named after the Niger River and is a landlocked country stretching over an area of approximately 1,270,000 square kilometers. Covered mostly by sand dunes and desert plains along with savannahs in the southeast portion, Niger is also inhabited mostly by Muslims.
Niamey
Niamey, which is the center of economy, cultural and administrative of Niger, is the largest and capital city of this West African country. This destination possesses a mixture of African and European cultures that are noticeable with their architectures and other structures. Niamey is the most populated place in the country, which is comprised of a huge percentage of Muslim inhabitants. Niamey, which lies on the river banks of the gentle Niger River, is known for its modern and colonial heritages.
Zinder
Zinder, which is the first capital city of Niger before it was changed to Niamey, is the second largest city in this West African country. This destination is where the infamous Sultan’s Palace is located and is also known for its huge market found in Sabon Gari or the new town. Zinder is atypical from other African towns as it may not have much to offer but its historical importance and heritages say it all.
W National Park
W National Park, which is a UNESCO World Heritage site that shares management with Benin and Burkina Faso, covers around 10,000 square kilometers across three African countries. This destination is home to various species of wildlife including aquatic species which is abundant across the park.
W National Park Niger portion is where tiger bush plateau distribution is visible. Wildlife animals that can be found in W National Park include buffalos, bush elephants, African leopards, cheetahs, baboons and warthogs.
Air Mountains
Considered as one of the largest ring dike on earth, Air Mountains is found in the center of the Sahara Desert in Niger. This destination is one of the most scenic views that Africa has to offer because of the unique formations made up of volcanic materials. Air Mountains, which bears spectacular Neolithic rock arts, is a known wildlife reserve with a number of wildlife species inhabiting the area.
Tenere Desert
Covering a vast area of sand dunes and desert plains, Tenere Desert is found in the heart of the Sahara Desert in Niger. This destination showcases the spectacularly picturesque view of dry land overlooking Air Mountains in the west Djado Plateau in its northeast and Lake Chad in the south. Tenere Desert also offers captivatingly amazing landscapes of sand and desert plains.
Ayorou
Known for its hippo tour along the Niger River, Ayorou is recommended for a weekend getaway. This destination offers a relaxing ambiance far from the hustle and bustle of the city buzz. Ayorou, where people are accommodating and friendly, is known for its Sunday market where genuine cultural products are sold.
Agadez
An important road for trade and commerce from the 11th century until today, Agadez also played a significant role during the Sahel era. This destination is an Islamic center of learning, wherein its famous landmarks include the Agadez Mosque and the Sultanate Palace. Agadez is also known for its markets including their camel market as well as leatherwork and silver products.
Musee Nationale du Niger
Established in 1959, Musee Nationale du Niger is also known as the Musee National Boubou Hama and is located in Niger’s capital city Niamey. This destination is home to various artifacts including archeological, ethnological and cultural collections. Musee Nationale du Niger is one of Africa’s gems and its stunning Hausa architecture design makes it even more precious.
Grande Marche
Considered as the most important commercial center in Niger and the largest in Niamey, Grande Marche is where local and traditional products and handicrafts are being sold. This destination is also an important tourist attraction as it promotes local products and magnetizes over 20,000 tourists all year. Grand Marche is also the major commercial center in Niger where some imported products are being sold in more than 4,000 shops.
Koure
Home of the West African giraffes, Koure is a rural town that is located near the country’s capital Niamey. This destination is believed to be where the last herd of this endemic species of giraffe is found, which is why it is being preserved. During the rainy seasons, giraffes move closer to the road to find food which makes it easier to watch the animals. | https://backpacker-footsteps.com/work-and-travel/work-and-travel-africa/niger/ |
TECHNICAL FIELD
BACKGROUND ART
PRIOR ART DOCUMENTS
Non-patent Documents
SUMMARY OF THE INVENTION
PROBLEM TO BE SOLVED BY THE INVENTION
MEANS TO SOLVE THE PROBLEM
ADVANTAGEOUS EFFECTS OF THE INVENTION
BRIEF DESCRIPTION OF THE DRAWINGS
EMBODIMENT
(System overall configuration)
(Polar code)
(Method 1)
<Encoding according to Method 1>
<Decoding according to method 1>
(Method 2)
<Coding according to method 2>
<Decoding according to method 2>
(Method 3)
<Coding according to method 3>
<Decoding according to method 3>
(Combinations of methods 1-3)
(Summary and advantageous effects of methods 1-3)
(Variant 1)
(Apparatus configuration)
<User apparatus>
<Base station 20>
<Hardware configurations>
(Summary of embodiment)
(Supplement to embodiment)
DESCRIPTION OF REFERENCE SIGNS
The present invention relates to a communication apparatus used as a user apparatus or a base station in a radio communication system.
Concerning 3GPP (3rd Generation Partnership Project), a radio communication scheme called 5G has been studied for a further increase in the system capacity, a further increase in the data transmission rate, a further reduction in the delay in the radio section, and so forth. Concerning 5G, various radio technologies have been studied for satisfaction of a requirement for implementation of the throughput greater than or equal to 10 Gbps and the delay in the radio section less than or equal to 1 ms. Because there is a high possibility that a radio technology different from LTE will be adopted for 5G, a radio network that supports 5G will be referred to as a new network (NR: New Radio) in 3GPP and thus is distinguished from a radio network that supports LTE.
For 5G, mainly three use cases, i.e., eMBB (extended Mobile Broadband), MTC (massive Machine Type Communication), and URLLC (Ultra Reliability and Low Latency Communication) are assumed.
For example, for eMBB, a further increase in the rate and a further increase in the capacity are demanded, whereas, for mMTC, it is demanded to connect to a great number of terminals and reduce power consumption; and, for URLLC, it is demanded to improve reliability and reduce the delay. In order to satisfy these requirements, it is also required to satisfy requirements also concerning channel encoding indispensable for mobile communication.
A candidate for satisfying the requirements is Polar codes (Non-patent document 1). Polar codes are error correcting codes with which it is possible to implement characteristics near the Shannon limit, on the basis of the idea of channel polarization. Furthermore, by using a simple Successive Cancelation Decoding (SCD) method as a Polar code decoding method, it is possible to implement superior characteristics of requiring a low operation amount and low power consumption. As decoding methods for Polar codes, a Successive Cancellation List Decoding (SCLD) method improving characteristics of SCD and a CRC-aided SCLD method using a CRC (Cyclic Redundancy Check) further improving the characteristics are known (Non-patent document 2). According to a CRC-aided SCLD method, a plurality of sequences (bit sequences) having high likelihoods are obtained, and thereamong, a single sequence that is successful in CRC judgement is selected as a final decoding result.
Non-patent Document 1: E. Arikan, "Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels", IEEE Trans. Inf. Theory, vol. 55, no. 7, pp. 3051-3073, July 2009.
Non-patent Document 2: Nobuhiko Miki and Satoshi Nagata, "Application of Polar Codes to Mobile Communication Systems and 5G Standardization Activity", IEICE technical report, vol. 116, no. 396, RCS2016-271, pp. 205-210, January 2017.
Non-patent Document 3: 3GPP TS 36.321 V14.1.0 (2016-12).
According to NR, it is assumed to use Polar codes for a downlink control channel.
According to existing LTE, a base station attaches a CRC (hereinafter, the value used for the CRC check is simply referred to as a "CRC") to downlink control information, masks the CRC using a RNTI (Radio Network Temporary Identifier), encodes the resulting information, and transmits it to a user apparatus. In its decoding process for the received information, the user apparatus performs a judgement using the CRC obtained by unmasking with the RNTI that the user apparatus itself holds, to determine whether the information is addressed to the user apparatus itself.
However, for a case of using Polar codes that have not been used in existing LTE, it is not apparent how to use a RNTI. Depending on a method to apply a RNTI, there is a high possibility of a user apparatus erroneously determining that control information addressed to the user apparatus itself is not proper information (that is, a CRC check failure) or of erroneously determining that control information not addressed to the user apparatus itself is control information addressed to the user apparatus itself. Such a possibility is called a false alarm rate, which may be referred to as a false detection rate.
In addition, for a case of, for example, performing Polar encoding on payloads of downlink control information or the like, there is a case where, although payload sizes are different, bit lengths for encoding are the same. Therefore, it is not easy for a Polar decoder to identify a payload size.
Note that it is assumed that Polar codes and identifiers such as RNTIs will be used for, not only downlink communication from a base station to a user apparatus but also uplink communication from a user apparatus to a base station and sidelink communication between user apparatuses. Therefore, problems such as those mentioned above also occur in, not only downlink communication from a base station to a user apparatus but also uplink communication from a user apparatus to a base station and sidelink communication between user apparatuses. Apparatuses such as user apparatuses and base stations will be generally referred to by communication apparatuses.
The present invention has been made in consideration of the above-mentioned points, and an object is to make it easier, in a radio communication system, for a reception side to identify a payload length upon decoding an encoded bit sequence.
According to a disclosed technology, a communication apparatus including an encoding unit configured to generate a second coded bit sequence by encoding according to a second encoding scheme a frozen bit sequence and a second bit sequence that includes a first bit sequence and a first coded bit sequence generated by encoding the first bit sequence according to a first encoding scheme; and a transmission unit configured to transmit a transmission signal generated from the second coded bit sequence is provided. The communication apparatus determines the second coded bit sequence on the basis of a length of the second bit sequence.
Thanks to the disclosed technology, a technology is provided that makes it easier, in a radio communication system, for a reception side to identify a payload length upon decoding an encoded bit sequence.
FIG. 1A is a configuration diagram of a radio communication system according to an embodiment of the present invention including a base station 20 and a user apparatus 10.
FIG. 1B is a configuration diagram of a radio communication system according to an embodiment of the present invention including a user apparatus 10 and a user apparatus 15.
FIG. 2 illustrates an example of Polar encoding.
FIG. 3 illustrates an example of decoding a Polar code.
FIG. 4 illustrates an example of decoding a Polar code.
FIG. 5 illustrates an example of decoding a Polar code.
FIG. 6A illustrates an encoding process according to a method 1 and illustrates an outline of a flow of the process.
FIG. 6B illustrates an encoding process according to the method 1 and illustrates operations of encoding.
FIG. 7A illustrates a decoding process according to the method 1 and illustrates an outline of a flow of the process.
FIG. 7B illustrates a decoding process according to the method 1 and illustrates operations of decoding.
FIG. 8A illustrates an encoding process according to a method 2 and illustrates an outline of a flow of the process.
FIG. 8B illustrates an encoding process according to the method 2 and illustrates operations of encoding.
FIG. 9 illustrates an encoding process according to the method 2.
FIG. 10A illustrates a decoding process according to the method 2 and illustrates an outline of a flow of the process.
FIG. 10B illustrates a decoding process according to the method 2 and illustrates operations of decoding.
FIG. 11 illustrates a decoding process according to the method 2.
FIG. 12 illustrates an example of a shortened Polar code.
FIG. 13A illustrates an encoding process according to a method 3 and illustrates an outline of a flow of the process.
FIG. 13B illustrates an encoding process according to the method 3 and illustrates operations of encoding.
FIG. 14 illustrates an encoding process according to the method 3.
FIG. 15A illustrates a decoding process according to the method 3 and illustrates an outline of a flow of the process.
FIG. 15B illustrates a decoding process according to the method 3 and illustrates operations of decoding.
FIG. 16 illustrates a decoding process according to the method 3.
FIG. 17 illustrates a comparison among the methods.
FIG. 18 illustrates effects.
FIG. 19 illustrates an encoding process according to a variant 1.
FIG. 20 illustrates an example (1) of an encoding process according to the variant 1.
FIG. 21 illustrates an example (2) of an encoding process according to the variant 1.
FIG. 22 illustrates one example of a functional configuration of a user apparatus 10.
FIG. 23 illustrates one example of a functional configuration of a base station 20.
FIG. 24 illustrates hardware configurations of a user apparatus 10 and a base station 20.
Below, an embodiment of the present invention (a present embodiment) will be described. Note that the embodiment that will now be described is merely one example and embodiments to which the present invention is applied are not limited to the embodiment that will now be described.
Existing technologies may be appropriately used for a radio communication system according to the present embodiment to operate. In this regard, the existing technologies include, for example, existing LTE. However, the existing technologies are not limited to the existing LTE.
For the embodiment that will now be described, terms such as PDCCH, DCI, and RNTI will be used. In this regard, these terms will be used for convenience of description; signals, functions, and so forth of these terms or the like may be called other names.
The present embodiment uses Polar codes, which are merely one example; the present invention can be applied in the same way to other codes in which known bits such as frozen bits are transmitted. As long as a reception side can implement decoding successively on the basis of likelihoods of a received signal, codes other than Polar codes may be applied. For example, it is possible to apply the present invention to each of LDPC (Low Density Parity Check) codes and convolutional codes. In addition, Polar codes used by the present embodiment may be called another name.
According to the present embodiment, as an example of an error detection code, a CRC is used. However, an error detection code applicable to the present invention is not limited to a CRC. According to the present embodiment, a target of code/decoding is control information. However, the present invention is applicable to information other than control information. According to the present embodiment, a RNTI is used as an identifier. However, the present invention is applicable to an identifier other than a RNTI.
According to the present embodiment, a user apparatus uses a RNTI as a medium for identifying a signal transmitted to the user apparatus itself. However, a RNTI is merely one example. The present invention can be applied to not only a RNTI but also, for example, another identifier such as a user ID unique to a user apparatus. Furthermore, such an identifier may be assigned for each user apparatus and, may be applied to each plurality of user apparatuses. In addition, such an identifier may be previously determined according to a specification.
Concerning the present embodiment, downlink communication is used as a main example. However, the present invention can be applied in the same way to uplink communication and sidelink communication.
FIGs. 1A and 1B illustrate configurations of radio communication systems according to the present embodiment. The radio communication system illustrated in FIG. 1A includes a user apparatus 10 and a base station 20. Although FIG. 1 illustrates the single user apparatus 10 and the single base station 20, this is an example, and there may be a plurality of user apparatuses 10 and a plurality of base stations 20.
The user apparatus 10 is a communication apparatus having a radio communication function such as a smartphone, a cellular phone, a tablet, a wearable terminal, and a module for M2M (Machine-to-Machine) communication, wirelessly connects to the base station 20, and uses various communication services provided by the radio communication system. The base station 20 is a communication apparatus that provides one or more cells and performs radio communication with the user apparatus 10. According to the present embodiment, the duplex scheme may be a TDD (Time Division Duplex) scheme and may be a FDD (Frequency Division Duplex) scheme.
In the configuration illustrated in FIG. 1A, the base station 20, for example, encodes, using Polar codes, information obtained from adding a CRC to downlink control information (DCI), and transmits the coded information through a downlink control channel (for example, PDCCH (Physical Downlink Control Channel)). The user apparatus 10 decodes information, encoded with the use of Polar codes, according to a successive cancelation decoding (SCD) method or the like.
It is also possible to apply Polar codes to uplink control information. In this case, for example, the user apparatus 10 encodes, using Polar codes, information obtained from adding a CRC to uplink control information (UCI), and transmits the coded information through an uplink control channel (for example, PUCCH (Physical Uplink Control Channel)). The base station decodes information, encoded with the use of Polar codes, according to a successive cancelation decoding (SCD) method or the like.
FIG. 1B illustrates a case where sidelink communication is performed between user apparatuses as another example of a radio communication system according to the present embodiment. In a case of applying Polar codes to sidelink, for example, a user apparatus 10 encodes, using Polar codes, information obtained from adding a CRC to sidelink control information (SCI), and transmits the coded information through a control channel (for example, PSCCH (Physical Sidelink Control Channel)). A user apparatus 15 decodes information, encoded with the use of Polar codes, according to a successive cancelation decoding (SCD) method or the like. The same operations are carried out also in a case of communication from the user apparatus 15 to the user apparatus 10.
According to the present embodiment, Polar codes are used. Therefore, encoding and decoding Polar codes will be described. Because encoding and decoding Polar codes themselves are well known, only an outline will now be described (see Non-patent document 1 for details).
Concerning Polar codes, through repetitious combining and splitting, a plurality of channels are converted to polarized communication paths, and thus, the channels are classified into channels having high quality and channels having low quality. Then, information bits are allocated to channels having high quality and frozen bits that are known signals are allocated to channels having low quality. FIG. 2 illustrates an encoder for Polar codes for a case of 3 repetitions. As illustrated in FIG. 2, the encoder has a configuration where communication paths are coupled through an exclusive-or operation.
Inputs to the Polar encoder have N = 2^n bits, i.e., u_0, ..., and u_(N-1). Assuming that K bits (v_0, ..., v_(K-1)) are information bits, (N-K) bits are frozen bits. Coded bits that are output from the encoder have N bits (x_0, ..., x_(N-1)). FIG. 2 illustrates an example of N = 8 and K = 4. Note that, in the description of the present embodiment, the value of a bit may be referred to by "bit".
Polar encoding can be expressed in matrix form as x = u·G_N, where the generator matrix G_N is built from Kronecker powers of the kernel F = [[1, 0], [1, 1]]; this matrix corresponds to the encoder of FIG. 2.
A frozen bit may be any bit as long as the bit is known on a transmission side and a reception side. In many cases, 0 is used as a frozen bit.
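For reference only (not part of the embodiment), the following minimal Python sketch implements the butterfly structure of such an encoder; the information-bit positions chosen for the N = 8, K = 4 example are an assumed reliability ordering, since the actual positions depend on the channel polarization design:

```python
import numpy as np

def polar_encode(u):
    """Compute x = u * F^(Kronecker n) over GF(2); N = len(u) must be a power of two."""
    x = np.array(u, dtype=int)
    n = x.size
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            # upper half of each butterfly absorbs the XOR, lower half passes through
            x[i:i + step] = (x[i:i + step] + x[i + step:i + 2 * step]) % 2
        step *= 2
    return x

# N = 8, K = 4 example: frozen positions carry 0, information positions carry v_0..v_3.
# The information set {3, 5, 6, 7} is only an assumed (typical) choice.
N, info_set = 8, [3, 5, 6, 7]
v = [1, 0, 1, 1]                       # information bits ("target information + CRC" in the text)
u = np.zeros(N, dtype=int)             # frozen bits = 0
u[info_set] = v
coded = polar_encode(u)
```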
As a basic decoding method to decode a Polar code, a Successive Cancellation Decoding (SCD) method will now be described. According to the successive cancellation decoding method, on a reception side, a likelihood (specifically, for example, a log-likelihood ratio (LLR)) obtained by demodulation for each bit is input to a decoder and a predetermined calculation is performed on the likelihoods in sequence. Thus, transmitted bits are successively decoded from u_0. Specifically, a likelihood of each transmitted bit is calculated and, on the basis of the likelihood, a value of the bit is determined. Concerning a frozen bit, the value of the frozen bit is used as a decoding result.
FIGs. 3-5 illustrate an example of a successive calculation. Through the respective steps illustrated in FIGs. 3-5, decoding of u_0, u_1, and u_2 is implemented. In the drawings, f denotes a calculation not directly using known information (values of bits for which decoding results have already been obtained, values of frozen bits); g denotes a calculation using known information. In decoding a Polar code, u_0, ..., and u_(i-1) need to be known to decode u_i. Therefore, decoding should be performed in the order of u_0, u_1, u_2, ....
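As an illustration of the f and g calculations mentioned above, one common LLR-domain realization (the min-sum approximation, given here only as an example and not taken from the embodiment) is:

```python
import numpy as np

def f_node(a, b):
    """f: combines two LLRs without any decided bit
    (min-sum approximation of 2*atanh(tanh(a/2)*tanh(b/2)))."""
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g_node(a, b, u_hat):
    """g: combines two LLRs using an already-known partial sum u_hat (0 or 1),
    i.e. a frozen bit or a previously decoded bit."""
    return b + (1 - 2 * u_hat) * a

def hard_decision(llr):
    return 0 if llr >= 0 else 1   # positive LLR -> bit 0, negative -> bit 1
```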
Below, as encoding and decoding methods according to the present embodiment, a method 1, a method 2, and a method 3 will be described. The method 3 is a main method of the present invention. To the method 3, the method 1 and/or the method 2 may be combined. Therefore, also the method 1 and the method 2 will be described as methods according to the present embodiment. Note that the present invention is not limited to the method 3.
In the description for each method, downlink communication of transmitting from a base station 20 to a user apparatus 10 is assumed. However, also to uplink communication from a user apparatus 10 to a base station 20 and sidelink communication between user apparatuses, the same methods as the methods 1-3 for encoding and decoding that will be described now can be applied. A target to which each method is applied is not limited to control information.
Hereinafter, information such as downlink control information that is a target of encoding will be referred to by "target information". In each drawing, target information is expressed as "info" (an abbreviation of information). Frozen bits are expressed as "frozen".
With reference to FIGs. 6A and 6B, an encoding process according to the method 1 will now be described. FIG. 6A illustrates an outline of a flow of a process in the base station 20. The base station 20 attaches a CRC to target information and masks the CRC with a RNTI as in existing LTE (step S1). Masking according to the present embodiment means performing an exclusive-or operation on a per bit basis. Masking may be called scrambling. A RNTI is an identifier to identify a user apparatus and/or a channel and may be of one of various types (Non-patent document 3). For example, a C-RNTI is a RNTI for transmitting or receiving user data; a SPS (Semi Persistent Scheduling)-RNTI is a RNTI for transmitting and receiving data concerning SPS; a P-RNTI is a RNTI for transmitting and receiving paging; and a SI-RNTI is a RNTI for transmitting and receiving broadcast information (system information to be broadcasted). The base station 20 selects a RNTI, depending on a current operation, to use for masking.
The base station 20 performs Polar encoding on information obtained in step S1 (step S2) and performs rate matching on the coded information through puncturing or the like (step S3). A transmission signal is produced from the coded information on which rate matching has been thus performed and the transmission signal is transmitted wirelessly.
With reference to FIG. 6B, an encoding operation will be described in detail. As illustrated in FIG. 6B, the base station 20 attaches a CRC to information that includes frozen bits and target information, and performs masking on the CRC with a RNTI. The CRC on which masking has been performed with the RNTI is indicated as a CRC'. Note that the base station 20 may calculate the CRC from only the target information and may calculate the CRC from information that includes the frozen bits and the target information.
The base station 20 encodes the thus generated "frozen bits + target bits + CRC''' to obtain a coded block.
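A rough sketch of this CRC attachment and masking step is shown below. The 16-bit CRC polynomial, the example payload and the example RNTI value are all assumptions made for illustration; they are not values fixed by the embodiment.

```python
def crc_bits(bits, poly=0b10001000000100001, width=16):
    """Bitwise CRC over a list of 0/1 values (polynomial x^16 + x^12 + x^5 + 1,
    used here purely as an example choice)."""
    reg = 0
    for b in list(bits) + [0] * width:   # append 'width' zeros: standard CRC long division
        reg = (reg << 1) | b
        if reg >> width:
            reg ^= poly
    return [(reg >> (width - 1 - i)) & 1 for i in range(width)]

def mask(bits_a, bits_b):
    """Exclusive-or, bit by bit (lengths assumed equal)."""
    return [a ^ b for a, b in zip(bits_a, bits_b)]

info = [1, 0, 1, 1, 0, 0, 1, 0]                   # example target information
rnti = [int(b) for b in format(0x5B, "016b")]     # example 16-bit RNTI value
crc_masked = mask(crc_bits(info), rnti)           # the CRC' of the text
to_encoder = info + crc_masked                    # frozen bits are then attached and Polar-encoded
```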
With reference to FIGs. 7A and 7B, a decoding process according to the method 1 will be described. FIG. 7A illustrates an outline of a flow of a process in the user apparatus 10. The user apparatus 10 demodulates, for example, a signal received at a search space of a PDCCH, performs a decoding process (step S11), applies a RNTI on the information thus obtained in the decoding process, and performs CRC check (step S12). For a case where the CRC check is successful, the user apparatus 10 uses the obtained target information.
With reference to FIG. 7B, a decoding operation will be described in detail. The user apparatus 10 decodes a code block received from the base station 20. Then, the user apparatus 10 performs unmasking on a CRC' with a RNTI and performs CRC check with the use of the thus obtained CRC. For a case where the CRC check is successful, the user apparatus 10 determines that the target information is addressed to the user apparatus 10 and uses the target information. In addition, the user apparatus 10 can determine, from the type of the RNTI with which the CRC check is successful, the type of the channel (data).
The user apparatus 10 may use only SCD in decoding; may use a Successive Cancellation List Decoding (SCLD) method; and may use a CRC-aided SCLD method using a CRC.
In a case of using SCLD, the user apparatus 10 determines L sequences having a high likelihood as surviving paths (L will be referred to as a list size), determines the sequence having the highest likelihood as a decoding result, applies a RNTI to the decoding result, and performs CRC check.
In a case of using CRC-aided SCLD, the user apparatus 10 applies a RNTI to and performs CRC check on each of the L sequences, and uses a sequence for which CRC check is successful as a decoding result.
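Conceptually, the selection among the L surviving paths can be sketched as follows (the path layout, with frozen bits already removed and the masked CRC at the tail, is an assumption of this sketch; crc_bits() and mask() are the example helpers shown earlier):

```python
def pick_crc_passing_path(candidate_paths, rnti, info_len, crc_len):
    """Return the information bits of the first path (assumed ordered best-first by
    likelihood) whose unmasked CRC checks out; None signals a CRC/decoding failure."""
    for path in candidate_paths:
        info = path[:info_len]
        crc_rx = path[info_len:info_len + crc_len]
        if mask(crc_rx, rnti) == crc_bits(info):
            return info
    return None
```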
With reference to FIGs. 8A, 8B, and 9, an encoding process according to the method 2 will be described. FIG. 8A illustrates an outline of a flow of a process in the base station 20. The base station 20 calculates a CRC and attaches the CRC to target information (step S21).
The base station 20 Polar encodes information in which a RNTI is applied to frozen bits (step S22) and performs rate matching on the coded information through puncturing or the like (step S23). A transmission signal is generated from the coded information on which the rate matching has been performed and is transmitted wirelessly.
With reference to FIG. 8B, an encoding operation will be described in detail. In a Polar encoding process, the base station 20 attaches frozen bits to "target information + CRC". Instead, the base station 20 may produce "target information + frozen bits" before attaching a CRC, calculate a CRC from "target information + frozen bits", and attach the CRC to "target information + frozen bits".
Then, the base station 20 performs masking on the frozen bits included in "target information + CRC + frozen bits" using a RNTI. For example, assuming that the bit length of the frozen bits is the same as the bit length of the RNTI and all of the frozen bits are 0, the frozen bits on which a RNTI masking has been performed are the same bits as those of the RNTI.
The bit length of frozen bits may be different from the bit length of a RNTI. For example, it will now be assumed that the bit length of a RNTI is 4 bits, which have the values (a_0, a_1, a_2, a_3), and the bit length of frozen bits is 8 bits, each of which has the value 0. In this case, the base station 20, for example, performs masking on the frozen bits using "RNTI + RNTI" and obtains (a_0, a_1, a_2, a_3, a_0, a_1, a_2, a_3) as frozen bits on which the masking has been performed. In this case, to use "RNTI + RNTI" for masking (i.e., using connected RNTIs) is known by the user apparatus 10. Alternatively, to use "RNTI + RNTI" may be indicated by the base station 20 to the user apparatus 10 through higher layer signaling or broadcast information.
In a case where the bit length of a RNTI is greater than the bit length of frozen bits, the base station 20, for example, applies a hash function to a RNTI to shorten the RNTI to make the RNTI have the same length as the length of the frozen bits, and uses the shortened RNTI for a masking process. To use the RNTI to which the hash function has been applied for a masking process is known also by the user apparatus 10. Alternatively, to use the RNTI to which the hash function has been applied for a masking process may be indicated by the base station 20 to the user apparatus 10 through higher layer signaling or broadcast information.
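The length adaptation described in the last two paragraphs might look roughly like the sketch below; the repetition rule and the SHA-256-based shortening are illustrative choices only (the embodiment only requires that whatever rule is used is known to, or signaled to, the user apparatus 10). With all-zero frozen bits, masking with the RNTI simply yields the RNTI-derived pattern itself.

```python
import hashlib

def rnti_as_frozen(rnti_bits, frozen_len):
    """Build the frozen-bit pattern from the RNTI for any frozen-field length."""
    if frozen_len >= len(rnti_bits):
        # frozen field longer than (or equal to) the RNTI: repeat the RNTI ("RNTI + RNTI")
        reps = (frozen_len + len(rnti_bits) - 1) // len(rnti_bits)
        return (rnti_bits * reps)[:frozen_len]
    # RNTI longer than the frozen field: shorten it, e.g. via a hash (illustrative choice)
    digest = hashlib.sha256(bytes(rnti_bits)).digest()
    hashed_bits = [(byte >> (7 - i)) & 1 for byte in digest for i in range(8)]
    return hashed_bits[:frozen_len]
```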
As illustrated in FIG. 8B, the base station 20 encodes "target information + CRC + frozen'" generated as mentioned above to obtain a code block.
FIG. 9 illustrates an encoding process for a case of N = 8; N - K = 4. As illustrated in FIG. 9, bits (a_0, a_1, a_2, a_3) of a RNTI, as frozen bits to which the RNTI has been applied, are input to an encoder; information bits are input to the encoder. The information bits are "target information + CRC".
According to the example illustrated in FIG. 9, from among 8 coded bits, x'_0 and x'_1 are punctured; resource mapping and so forth are then performed on the remaining 6 bits, which are then transmitted. Puncturing is used to, for example, make the number of bits to be transmitted match a transmission resource amount. Concerning a puncturing method for a Polar code, such as a method to determine bits to be punctured, there are various existing methods (example: QUP (quasi-uniform puncturing)), which can be used.
Next, with reference to FIGs. 10A, 10B, and 11, a decoding process according to the method 2 will be described. FIG. 10A illustrates an outline of a flow of a process in the user apparatus 10. A user apparatus 10 demodulates a signal received, for example, at a search space of a PDCCH; performs a decoding process on the thus obtained information (a code block) using a RNTI (step S31); and performs CRC check on the thus obtained information (step S32).
With reference to FIG. 10B, a decoding operation will be described in detail. Now, it will be assumed that frozen bits (frozen') at an encoding side are a RNTI itself; this fact is known by the user apparatus 10. Therefore, the user apparatus 10 performs a decoding process assuming that frozen bits are the RNTI. Thereafter, the user apparatus 10 performs CRC check. If the CRC check is successful, the user apparatus 10 determines that the target information is target information addressed to the user apparatus 10 and uses the target information. In addition, the user apparatus 10 can determine, from the type of the RNTI with which the CRC check has been successful, the type of the channel (data). Also according to the method 2, the user apparatus 10 may use SCD, may use SCLD, and may use CRC-aided SCLD in decoding, as according to the method 1.
FIG. 11 illustrates a decoding process corresponding to the encoding process (N = 8; N - K = 4) of FIG. 9. As illustrated in FIG. 11, a RNTI is used as frozen bits. As described above, in a decoding process, upon decoding u_i, known values (frozen bits or decoded bits) u_0, ..., u_(i-1) are used. Therefore, in the decoding process illustrated in FIG. 11, a_0, a_1, and a_2 are used for decoding u_3; a_0, a_1, a_2, and a_3 are used for decoding u_5, u_6, and u_7.
For example, in a case where there are a plurality of RNTIs available to the base station 20, the user apparatus 10 uses each of the RNTIs as frozen bits to perform a decoding process. Then, the user apparatus 10 can determine that a RNTI resulting in CRC check being successful is a RNTI applied by the base station 20. For example, in a case where, as a result of the user apparatus 10 using each of a P-RNTI and a SI-RNTI to perform a decoding process and CRC check, CRC check is successful for a case of using the SI-RNTI, the user apparatus 10 can determine that it will receive broadcast information.
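The blind detection over candidate RNTIs described here can be pictured as the loop below. sc_decode() is only a placeholder for whatever SC/SCL decoder is used (it is assumed to return the information bits followed by the CRC, with the frozen positions stripped); rnti_as_frozen() and crc_bits() are the example helpers sketched earlier.

```python
def blind_detect(llrs, frozen_positions, candidate_rntis, info_len, crc_len):
    """Try each candidate RNTI as the frozen-bit pattern; a passing CRC identifies
    both the payload and the channel type (e.g. paging vs. broadcast)."""
    for name, rnti_bits in candidate_rntis.items():
        frozen_values = rnti_as_frozen(rnti_bits, len(frozen_positions))
        decoded = sc_decode(llrs, frozen_positions, frozen_values)  # placeholder decoder
        info = decoded[:info_len]
        crc_rx = decoded[info_len:info_len + crc_len]
        if crc_rx == crc_bits(info):
            return name, info            # e.g. ("SI-RNTI", broadcast payload)
    return None, None                    # no candidate passed: not addressed to this UE
```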
According to the method 3, an encoding side uses a shortened Polar code where the length of bits (information bits + frozen bits) that are input is shortened. FIG. 12 illustrates one example of a shortened Polar code. In the example illustrated in FIG. 12, to an encoder to which 8 bits are input, information bits + frozen bits having 6 bits are input. In this case, 2 bits are used as padding bits. Then, coded bits corresponding to the padding bits are punctured. A reception side performs a decoding process assuming that likelihoods of the punctured bits are determined as, for example, likelihoods (for example, +∞) that indicate 0. The punctured bits are known on the reception side.
With reference to FIGs. 13A, 13B, and 14, an encoding process according to the method 3 will be described. FIG. 13A illustrates an outline of a flow of a process in the base station 20. The base station 20 calculates a CRC for target information and attaches the CRC to the target information (step S41). The base station 20 attaches padding bits, performs Polar encoding (step S42), and performs rate matching (shortening) through puncturing on the coded information (step S43). A transmission signal is produced from the coded information on which the rate matching has been performed and is transmitted wirelessly.
With reference to FIG. 13B, an encoding operation will be described in detail. The base station 20 attaches frozen bits to "target information + CRC" in a Polar encoding process. It is also possible that the base station 20 produces "frozen bits + target information" before attaching a CRC, calculates a CRC from "frozen bits + target information", and attaches the CRC to "frozen bits + target information".
Then, the base station 20 attaches padding bits to "frozen bits + target information + CRC". The bit length of "frozen bits + target information + CRC + padding bits" is a bit length (N = 2^n) of an input of the Polar encoder.
Padding bits according to the present embodiment are bits that become a RNTI through an encoding process. In other words, padding bits are obtained from applying an inverse function of encoding to a RNTI. The bit length of padding bits may be the same as or different from the bit length of a RNTI.
For example, if the bit length of padding bits is greater than the bit length of a RNTI, for example, information where a plurality of RNTIs are connected together (for example, RNTI + RNTI) is used as a RNTI to generate padding bits, and a decoding process that will be described later is performed. If the bit length of padding bits is smaller than the bit length of RNTI, for example, a hash function is applied to the RNTI to shorten the RNTI, the shortened RNTI is used as a RNTI to generate padding, and a decoding process that will be described later is performed.
The padding bits (bit positions and their values) are known in the user apparatus 10. In detail, the user apparatus 10 has the same inverse function as an inverse function that the base station 20 has, and calculates the padding bits from a RNTI using the inverse function. Alternatively, the base station 20 may indicate an inverse function used by the base station 20 to the user apparatus 10 through higher layer signaling or broadcast information.
In addition, in a case where the base station 20 uses padding bits longer than a RNTI or in a case where the base station 20 uses padding bits shorter than a RNTI as mentioned above, the fact is known by the user apparatus 10. It is also possible that the base station 20 indicates the fact to the user apparatus through higher layer signaling or broadcast information.
The base station 20 encodes "frozen bits + target information + CRC + padding bits" to obtain coded information. The coded information includes a code block that is the coded information of "frozen bits + target information + CRC" and a RNTI that is coded information of padding bits. The base station 20 punctures the RNTI in a rate matching process; therefore, in FIG. 12, the corresponding section is indicated as shortened.
FIG. 14 illustrates an encoding process for a case of N = 8 (2^3). Assuming that K denotes the bit length of information bits and M denotes the bit length of "frozen bits + information bits", the bit length of frozen bits is M - K and the bit length of padding bits is N - M. FIG. 14 illustrates a case of K = 4 and M = 6; frozen bits have 2 bits (u_0, u_1) and padding bits have also 2 bits (u_6, u_7).
As illustrated in FIG. 14, the frozen bits (u_0, u_1), the information bits (u_2 - u_5), and the padding bits (u_6, u_7) are input to an encoder and coded bits are output. The information bits correspond to "target information + CRC". Coded bits (x'_6, x'_7) corresponding to the padding bits (u_6, u_7) become a RNTI. The bits of the RNTI are punctured; resource mapping is performed on the remaining 6 bits, which are then transmitted.
Even in a case where, for example, the length of input information (target information + CRC + frozen bits) is N = M = 2^n, it is possible to apply the method 3 by shortening the input information.
Next, with reference to FIGs. 15A, 15B, and 16, a decoding process according to the method 3 will be described. FIG. 15A illustrates an outline of a flow of a process in the user apparatus 10. As illustrated in FIG. 15A, the user apparatus 10 demodulates a signal received at, for example, a search space of a PDCCH, performs a decoding process on the thus obtained information (a likelihood for each bit) using a RNTI (step S51), and performs CRC check on the thus obtained information (step S52). In the decoding process, the values of frozen bits and the values of padding bits, as known information, are used.
With reference to FIGs. 15B and 16, a decoding operation will be described in detail. It will now be assumed that, on an encoding side, bits obtained from encoding padding bits are used as a RNTI and are punctured. The user apparatus 10 performs a decoding process using received bits and the RNTI as values of punctured bits (the section "shortened" in FIG. 15B). Thereafter, the user apparatus 10 performs CRC check. If the CRC check is successful, the user apparatus 10 determines that the target information is directed to the user apparatus 10 and uses the target information. In addition, the user apparatus 10 can determine, from the type of the RNTI with which the CRC check has been successful, the type of the channel (data). Also according to the method 3, as according to the methods 1 and 2, the user apparatus 10 may use SCD, may use SCLD, and may use CRC-aided SCLD in the decoding.
FIG. 16 illustrates a decoding process corresponding to the encoding process (N = 8, K = 4, and M = 6) of FIG. 14. As illustrated in FIG. 16, as inputs to a decoder (the right side in FIG. 16), the likelihoods of respective bits are input. For the punctured bits, the likelihoods indicating the values of the RNTI are used. For example, if the value of a bit is 0, a positive great value (for example, +∞) is used as a likelihood; if the value of a bit is 1, a negative great value (for example, -∞) is used as a likelihood.
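As one hedged illustration of this likelihood assignment (a sketch only, in which a large finite value stands in for ±∞ and the positions and RNTI bit values are assumptions), the decoder input could be assembled as follows:

```python
import numpy as np

BIG = 1e9   # stands in for +/- infinity

def decoder_input_llrs(received_llrs, punctured_positions, rnti_bits):
    """Build the length-N likelihood vector fed to the Polar decoder.

    received_llrs: likelihoods of the transmitted (non-punctured) coded bits.
    punctured_positions: coded-bit indices shortened at the transmitter.
    rnti_bits: assumed values of those punctured bits, taken from the RNTI.
    """
    N = len(received_llrs) + len(punctured_positions)
    known = dict(zip(punctured_positions, rnti_bits))
    llrs = np.empty(N)
    it = iter(received_llrs)
    for i in range(N):
        # bit value 0 -> large positive likelihood, bit value 1 -> large negative
        llrs[i] = (BIG if known[i] == 0 else -BIG) if i in known else next(it)
    return llrs

# Example for N = 8 with positions 6 and 7 shortened and RNTI bits (1, 0):
print(decoder_input_llrs([0.7, -1.2, 2.3, 0.1, -0.4, 1.8], [6, 7], [1, 0]))
```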
The user apparatus 10 performs the decoding process using known information for the values (u0, u1) of frozen bits and the values (u6, u7) of padding bits on the output side.
For example, for a case where a plurality of RNTIs are applicable by the base station 20, the user apparatus 10 performs, for each of the RNTIs, a decoding process using the likelihoods corresponding to the respective bits of the RNTI as the inputs. In response to CRC check being successful, the user apparatus 10 can determine that the RNTI used in the CRC check is the RNTI applied by the base station 20. For example, in a case where the user apparatus 10 uses each of a P-RNTI and a SI-RNTI to perform a decoding process and CRC check resulting in CRC check being successful with the SI-RNTI, the user apparatus 10 can determine that the user apparatus 10 will receive broadcast information from the base station 20.
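A hedged sketch of this blind-detection loop is given below; build_llrs, decode, and crc_ok are hypothetical injected callables (for example, the likelihood construction above, an SC/SCL decoder, and a CRC checker), not functions defined in the patent.

```python
def blind_detect(received_llrs, punctured_positions, candidates,
                 build_llrs, decode, crc_ok):
    """Try each candidate RNTI as the assumed value of the punctured bits.

    candidates: mapping such as {"P-RNTI": [1, 0], "SI-RNTI": [0, 1]}.
    Returns the name of the RNTI whose CRC check succeeds (identifying the
    channel type) together with the decoded information bits, or (None, None).
    """
    for name, rnti_bits in candidates.items():
        llrs = build_llrs(received_llrs, punctured_positions, rnti_bits)
        info_bits = decode(llrs)        # e.g. SCD, SCLD, or CRC-aided SCLD
        if crc_ok(info_bits):
            return name, info_bits
    return None, None
```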
Note that, in the above-mentioned example, the transmission side punctures all of the bits corresponding to the RNTI. However, such a manner is one example. Instead, the transmission side may puncture some bits from among all of the bits corresponding to the RNTI and may puncture none of the bits corresponding to the RNTI. Even in a case of not performing puncturing, the reception side can perform the decoding process illustrated in FIG. 16.
Furthermore, concerning the method 3, a RNTI is used as padding bits; and, as values to be used as likelihood inputs on a decoding side, values obtained from applying an inverse function to the RNTI may be used.
The base station 20 may combine, in an encoding process according to the method 3, for example, the method 1 and/or the method 2. For example, the base station 20 performs masking on a CRC and/or frozen bits illustrated in FIG. 13B using a RNTI. In a case where a RNTI is used for masking of frozen bits, the user apparatus 10 performs a decoding process using the RNTI as frozen bits. In a case where a RNTI is used for masking of a CRC, the user apparatus 10 performs unmasking of the CRC, obtained from a decoding process, using the RNTI to perform CRC check.
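The masking itself can be pictured as a simple XOR, as in the hedged sketch below; the bit lengths and the repetition of the RNTI to the field length are assumptions made only for illustration.

```python
def xor_mask(bits, rnti_bits):
    """XOR a field (CRC or frozen bits) with the RNTI, repeating the RNTI
    as needed; applying the same operation twice restores the original."""
    return [b ^ rnti_bits[i % len(rnti_bits)] for i, b in enumerate(bits)]

crc  = [1, 0, 1, 1, 0, 0, 1, 0]       # example CRC bits
rnti = [0, 1, 1, 0]                   # example RNTI bits
masked = xor_mask(crc, rnti)          # performed by the base station 20
assert xor_mask(masked, rnti) == crc  # unmasking by the user apparatus 10
```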
By performing combining as mentioned above, the false detection rate seems to be improved in comparison to a case of not performing combining.
FIG. 17 illustrates features of the methods 1-3. According to the method 1, a RNTI is applied to a CRC and, after a decoding process, the RNTI is used for unmasking. According to the method 2, a RNTI is applied to frozen bits and the RNTI is used before a decoding process. According to the method 3, a RNTI is applied to padding bits and the RNTI is used before a decoding process.
Concerning a FAR (False Alarm Rate or a false detection rate), according to the method 1, a FAR depends on the length of a CRC. Therefore, in order to improve a FAR, more CRC bits are needed, resulting in an overhead increase. According to the method 2, it is possible to reduce a false detection rate without increasing the overhead. However, characteristics of the method 2 may be unstable or robustness may be insufficient. According to the method 3, it is possible to reduce a false detection rate depending on the length of padding bits. Note that it is possible to use some of the frozen bits as padding bits, whereby it is possible to reduce the overhead with respect to the method 3.
FIG. 18 illustrates FAR evaluation results of the methods 2 and 3. In FIG. 18, "Frozen RNTI" denotes the method 2 and "Shortened RNTI" denotes the method 3. The abscissa axis of FIG. 18 denotes Es/N0 (a signal-to-noise ratio); the ordinate axis denotes a FAR. As illustrated in FIG. 18, it can be seen that the method 3 has a better FAR than the method 2. For example, assuming that 2 RNTIs differ by only 1 bit (for example: RNTI = 0000; RNTI = 0001), it is possible to identify them according to the method 3 better than the method 2.
The method 1 is similar to a scheme according to the existing LTE and therefore, implementation may be relatively easier. According to the method 2, because a RNTI is applied to frozen bits, padding bits need not be attached; furthermore, on a decoding side, unmasking of a CRC and so forth are not needed. Therefore, concerning these points, the process loads seem lower. As described above, the method 3 has an advantageous effect of having a better FAR.
Next, a variant 1 of the above-described methods 1-3 will be described. Points at which the variant 1 is different from the methods 1-3 will now be described. Therefore, points not particularly specified may be the same as or similar to those of the methods 1-3.
FIG. 19 illustrates an encoding process according to the variant 1. As illustrated in FIG. 19, there is a case where, although a payload size is changed, bit lengths after encoding are the same.
Although payloads having different payload sizes are input to an encoding unit 111 or 211, the same bit sizes are obtained as Payload size 1 and Payload size 2 illustrated in FIG. 19 in a case of encoding using Polar codes. Because a Polar code can have a configuration such as being nested with another code, the same coded bit lengths may be output even if payloads having different payload sizes are input. Note that an assumed payload is, for example, downlink control information; if the bit length of downlink control information cannot be identified, a problem occurs in obtaining the downlink control information. Therefore, the coded bit sequence is made different for a different payload size so that the payload size can be identified.
For example, frozen bit values to be used for Polar encoding may be changed on a per payload size basis. For example, all of frozen bits may be made to have values "1"; frozen bits may have such values as "101010" where "10" are repeated a predetermined number of times; and frozen bits may have any previously determined values. Note that a frozen bit length may be determined appropriately.
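A possible sketch of this per-payload-size selection of frozen bit values is shown below; the mapping from payload sizes to patterns is purely hypothetical and only reproduces the examples mentioned in the text (all ones, repeated "10", or any previously determined values).

```python
def frozen_bits_for(payload_size, frozen_len):
    """Return frozen bit values chosen as a function of the payload size."""
    if payload_size == 40:                            # hypothetical size
        return [1] * frozen_len                       # all bits set to 1
    if payload_size == 60:                            # hypothetical size
        pattern = [1, 0] * ((frozen_len + 1) // 2)    # "1010..." repeated
        return pattern[:frozen_len]
    return [0] * frozen_len                           # default pattern
```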
FIG. 20 illustrates an example (1) of an encoding process according to the variant 1. "Frozen bits" illustrated in FIG. 20 denote frozen bits; "Payload" denotes a payload. By inputting frozen bits having different values on a per payload size basis in an encoding process using a Polar code, it is possible to identify a payload size upon a decoding process of a decoding unit 112 or 212.
It is also possible to change CRC bit values included in a payload, for example. Specifically, a method for generating a CRC may be changed on a per payload size basis. For example, a binary expression of a CRC may be changed; the length of a CRC may be changed through repetitions performed on the basis of a payload size; and a scrambling bit sequence for a CRC may be changed. For example, CRC bits obtained from repetitions performed until a payload size becomes 40 bits may be scrambled with a bit sequence "0101000".
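As a hedged sketch of this per-payload-size CRC adaptation (repetition followed by scrambling), the following assumes a hypothetical repetition target and a hypothetical mapping from payload sizes to scrambling sequences, following the "0101000" example above:

```python
def adapt_crc(crc_bits, payload_size, scramble_by_size):
    """Repeat the CRC bits up to the length of the size-specific scrambling
    sequence and XOR the two; scramble_by_size maps size -> bit sequence."""
    seq = scramble_by_size[payload_size]
    reps = (len(seq) + len(crc_bits) - 1) // len(crc_bits)
    repeated = (crc_bits * reps)[:len(seq)]
    return [c ^ s for c, s in zip(repeated, seq)]

# Example: for a 40-bit payload, scramble with "0101000".
print(adapt_crc([1, 0, 1, 1], 40, {40: [0, 1, 0, 1, 0, 0, 0]}))
```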
In addition, for example, a CRC may be initialized with additional bits.
FIG. 21 illustrates an example (2) of an encoding process according to the variant 1. As illustrated in FIG. 21, in a normal case where additional bits are not used, a payload includes a "Sequence X" and a "CRC". The "CRC" is generated from the "Sequence X".
In a case of using additional bits, a payload includes "Additional bits", a "Sequence X", and a "CRC". The "CRC" is generated from the "Additional bits" and the "Sequence X". Thereamong, bits to be transmitted include the "Sequence X" and the "CRC" with exclusion of the "Additional bits". In other words, what is Polar encoded is the "Sequence X" and the "CRC". A decoding side performs a decoding process on received "Sequence X" and "CRC" assuming that the "Sequence X" and "CRC" are those encoded with "Additional bits". "Additional bits" may be previously determined on a basis of a payload size. For example, "Additional bits" may be a bit sequence: each bit of the bit sequence is 1 and the bit sequence has a predetermined length. In addition, "Additional bits" may be a bit sequence that is a predetermined nonzero bit sequence and has a predetermined length.
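A minimal sketch of this use of additional bits is given below; zlib.crc32 merely stands in for whatever CRC the system actually uses, and the concrete additional bit sequence is an assumption based on the all-ones example above.

```python
import zlib

def crc_over_prefixed_payload(additional_bits, sequence_x):
    """CRC computed over "Additional bits + Sequence X", although only
    "Sequence X + CRC" is Polar encoded and transmitted."""
    return zlib.crc32(bytes(additional_bits + sequence_x)).to_bytes(4, "big")

additional = [1] * 8                      # known bits chosen per payload size
seq_x = [0, 1, 1, 0, 1]                   # the transmitted information bits
tx_crc = crc_over_prefixed_payload(additional, seq_x)

# The decoding side prepends the same assumed additional bits before checking.
rx_ok = crc_over_prefixed_payload(additional, seq_x) == tx_crc
print(rx_ok)
```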
Furthermore, for example, coded bit values may be changed on a per payload size basis. Scrambling is performed on the basis of a payload size or on the basis of the type of the DCI format of a payload. For example, it may be previously determined that, in a case of a DCI format 01, scrambling with a sequence (010101...) obtained by repeating "01" is performed.
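This last option can likewise be sketched as a simple format-dependent scrambling; the mapping below (DCI format 01 to a repeated "01" sequence) follows the example in the text, while any other entries would be assumptions.

```python
def scramble_coded_bits(coded_bits, dci_format):
    """Scramble the Polar-coded bit sequence with a sequence selected by the
    payload size or DCI format (here: "01" repeated for DCI format 01)."""
    base = {"01": [0, 1]}.get(dci_format, [0])
    return [b ^ base[i % len(base)] for i, b in enumerate(coded_bits)]

print(scramble_coded_bits([1, 1, 0, 1, 0, 0], "01"))   # -> [1, 0, 0, 0, 0, 1]
```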
It is also possible to perform encoding on a per payload size basis by combining a plurality from among a change in frozen bit values, a change in CRC bit values, and a change in coded bit values mentioned above.
As described above, according to the variant 1, by performing, on a per payload size basis, a change in a frozen bit sequence, a change in a CRC bit sequence, or a change in a coded bit sequence, a decoding side can identify a payload size. In other words, in a radio communication system, for a reception side to decode a coded bit sequence, identifying of a payload size can be made easier.
Next, a functional configuration example of the user apparatus 10 and the base station 20 implementing the above-described processing operations will be described.
FIG. 22 illustrates one example of a functional configuration of the user apparatus 10. As illustrated in FIG. 22, the user apparatus 10 includes a signal transmission unit 101, a signal reception unit 102, and a setup information management unit 103. The functional configuration illustrated in FIG. 22 is merely one example. As long as the operations concerning the present embodiment can be implemented, function classifications and names of functional units can be any classifications and names.
The signal transmission unit 101 generates a transmission signal from transmission data and transmits the transmission signal wirelessly. The signal reception unit 102 receives various signals wirelessly and obtains information of a higher layer from a received signal of a physical layer.
The setup information management unit 103 stores various sorts of setup information received from the base station 20 through the signal reception unit 102. The setup information management unit 103 also stores previously set setup information. The contents of the setup information are, for example, one or a plurality of RNTIs, known bit values, and so forth. In addition, the setup information management unit 103 may store an inverse function to be used to calculate values of padding bits.
As illustrated in FIG. 22, the signal transmission unit 101 includes an encoding unit 111 and a transmission unit 121. The encoding unit 111 performs a coding process according to the method 3. For example, the encoding unit 111 is configured to encode (for example, Polar encoding) known bit values, information bit values, and padding bit values, to generate coded information. In addition, the encoding unit 111 has a function to calculate a CRC and includes the CRC in the information bit values. Furthermore, in addition to a coding process according to the method 3, the encoding unit 111 may perform a coding process according to the method 1 and/or a coding process according to the method 2.
The transmission unit 121 is configured to generate a transmission signal from coded information generated by the encoding unit 111 and wirelessly transmit the transmission signal. For example, the transmission unit 121 punctures some of bit values of coded information through rate matching and modulates the punctured coded information to generate modulation symbols (complex-valued modulation symbols). In addition, the transmission unit 121 maps modulation symbols to resource elements, thus generates a transmission signal (for example, an OFDM signal or a SC-FDMA signal), and transmits the transmission signal through an antenna of the transmission unit 121. The transmission signal is received by, for example, another communication apparatus (for example, the base station 20 or the user apparatus 15).
The signal transmission unit 101 of the user apparatus 10 need not have a function to perform Polar encoding.
The signal reception unit 102 includes a decoding unit 112 and a reception unit 122. The reception unit 122 demodulates a signal received from another communication apparatus to obtain likelihoods of the respective bits of coded information obtained from an encoding process (for example, a Polar encoding process). For example, the reception unit 122 performs FFT on a received signal, obtained from a detection process, to obtain signal elements of respective subcarriers, and obtain log-likelihood ratios of respective bits using a QRM-MLD method or the like.
The decoding unit 112, for example, as described above with reference to FIG. 16, uses likelihoods and likelihoods corresponding to a predetermined identifier (for example, a RNTI) to decode coded information. In addition, the decoding unit 112 performs a checking operation using an error detection code (for example, a CRC) on information obtained from coded information through a decoding process, and, if the checking operation is successful, determines that the information is a final decoding result.
FIG. 23 illustrates one example of a functional configuration of the base station 20. As illustrated in FIG. 23, the base station 20 includes a signal transmission unit 201, a signal reception unit 202, a setup information management unit 203, and a scheduling unit 204. The functional configuration illustrated in FIG. 23 is merely one example. As long as the operations concerning the present embodiment can be implemented, function classifications and names of functional units can be any classifications and names.
The signal transmission unit 201 has functions to generate a transmission signal to be transmitted to the user apparatus 10 and transmit the transmission signal wirelessly. The signal reception unit 202 has functions to receive various signals transmitted by the user apparatus 10 and obtain information of a higher layer from a received signal of a physical layer, for example.
The setup information management unit 203 stores, for example, known setup information. The contents of the setup information are, for example, one or a plurality of RNTIs and known bit values. In addition, the setup information management unit 203 may store an inverse function to be used to calculate values of padding bits.
The scheduling unit 204, for example, allocates resources that the user apparatus 10 uses (resources for UL communication, resources for DL communication, or resources for SL communication) and sends allocation information to the signal transmission unit 201. The signal transmission unit 201 transmits downlink control information including the allocation information to the user apparatus 10.
As illustrated in FIG. 23, the signal transmission unit 201 includes an encoding unit 211 and a transmission unit 221. The encoding unit 211 performs a coding process according to the method 3. For example, the encoding unit 211 is configured to encode (for example, Polar encoding) known bit values, information bit values, and padding bit values, to generate coded information. In addition, the encoding unit 211 has a function to calculate a CRC and includes the CRC in the information bit values. Furthermore, in addition to a coding process according to the method 3, the encoding unit 211 may perform a coding process according to the method 1 and/or a coding process according to the method 2.
The transmission unit 221 is configured to generate a transmission signal from coded information generated by the encoding unit 211 and wirelessly transmit the transmission signal. For example, the transmission unit 221 punctures some of the bit values of the coded information through rate matching and modulates the punctured coded information to generate modulation symbols (complex-valued modulation symbols). In addition, the transmission unit 221 maps the modulation symbols to resource elements, thus generates a transmission signal (for example, an OFDM signal or a SC-FDMA signal), and transmits the transmission signal through an antenna of the transmission unit 221. The transmission signal is received by, for example, another communication apparatus (for example, the user apparatus 10).
The signal reception unit 202 includes a decoding unit 212 and a reception unit 222. The reception unit 222 demodulates a signal received from another communication apparatus to obtain likelihoods of the respective bits of coded information obtained from an encoding process (for example, a Polar encoding process). For example, the reception unit 222 performs FFT on a received signal, obtained from a detection process, to obtain signal elements of respective subcarriers, and obtain log-likelihood ratios of respective bits using a QRM-MLD method or the like.
The decoding unit 212, for example, as described above with reference to FIG. 16, uses likelihoods and likelihoods corresponding to a predetermined identifier (for example, a RNTI) to decode coded information. In addition, the decoding unit 212 performs a checking operation using an error detection code (for example, a CRC) on information obtained from coded information through a decoding process, and, if the checking operation is successful, determines that the information is a final decoding result.
Note that the signal reception unit 202 of the base station 20 need not have a function to perform Polar decoding.
The functional configuration diagrams (FIGs. 22 and 23) used in the description of the above-mentioned embodiment illustrate blocks in function units. These functional blocks (configuration units) are implemented by any combination of hardware and/or software. In this regard, means for implementing the various functional blocks are not limited. That is, each functional block may be implemented by one device that is a physical and/or logical combination of a plurality of elements. In addition, each functional block may be implemented by two or more devices that are physically and/or logically separated and directly and/or indirectly (for example, in a wired and/or wireless manner) connected together.
Further, for example, each of the user apparatus 10 and the base station 20 according to the embodiment of the present invention may function as a computer that performs processes according to the present embodiment. FIG. 24 illustrates one example of hardware configurations of the user apparatus 10 and the base station 20 according to the present embodiment. Each of the above-described user apparatus 10 and base station 20 may be configured as a computer apparatus that physically includes a processor 1001, a memory 1002, a storage 1003, a communication device 1004, an input device 1005, an output device 1006, a bus 1007, and so forth.
Note that, below, the term "device" may be read as a circuit, a unit, or the like. The hardware configurations of the user apparatus 10 and the base station 20 may include one or more of each of the devices 1001-1006 illustrated, and may be configured not to include some of the devices 1001-1006 illustrated.
Each of the functions of the user apparatus 10 and the base station 20 is implemented as a result of hardware such as the processor 1001 and the memory 1002 reading predetermined software (program) and thereby the processor 1001 performing operations to control communication by the communication device 1004 and control reading data from and/or writing data to the memory 1002 and the storage 1003.
The processor 1001 controls the entirety of the computer by causing an operating system to operate, for example. The processor 1001 may include a central processing unit (CPU) that includes an interface for a peripheral device, a control device, an arithmetic device, a register, and so forth.
The processor 1001 reads a program (a program code), a software module, or data from the storage 1003 and/or the communication device 1004 onto the memory 1002, and thus implements various processes according to the read information. As the program, a program that causes the computer to perform at least some of the operations described above for the above-mentioned embodiment is used. For example, the signal transmission unit 101, the signal reception unit 102, and the setup information management unit 103 of the user apparatus 10 illustrated in FIG. 22 may be implemented by a control program that is stored in the memory 1002 and operates with the processor 1001. Further, for example, the signal transmission unit 201, the signal reception unit 202, the setup information management unit 203, and the scheduling unit 204 of the base station 20 illustrated in FIG. 23 may be implemented by a control program that is stored in the memory 1002 and operates with the processor 1001. In this regard, it has been described that various processes described above are implemented by the single processor 1001. However, the various processes may be implemented by two or more processors 1001 simultaneously or sequentially. The processor 1001 may be implemented by one or more chips. Note that the program may be transmitted from a network through an electric communication line.
The memory 1002 is a computer readable recording medium and may include, for example, at least one of a ROM (Read-Only Memory), an EPROM (Erasable Programmable ROM), an EEPROM (Electrically Erasable Programmable ROM), a RAM (Random Access Memory), and so forth. The memory 1002 may be called a register, a cache, a main memory (a main storage), or the like. The memory 1002 can store a program (program codes), a software module, or the like executable for implementing processes according to the embodiment of the present invention.
The storage 1003 is a computer readable recording medium and may include, for example, at least one of an optical disc such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disk, a magneto-optical disc (for example, a compact disc, a digital versatile disc, or a Blu-ray (registered trademark) disc), a smart card, a flash memory (for example, a card, a stick, or a key drive), a floppy (registered trademark) disk, a magnetic strip, and so forth. The storage 1003 may be called an auxiliary storage device. The above-described recording medium may be, for example, a suitable medium such as a database, a server, or the like that includes the memory 1002 and/or the storage 1003.
The communication device 1004 is hardware (a transmission and reception device) for performing communication between computers through a wired and/or wireless network and may also be called, for example, a network device, a network controller, a network card, a communication module, or the like. For example, the signal transmission unit 101 and the signal reception unit 102 of the user apparatus 10 may be implemented by the communication device 1004. Further, the signal transmission unit 201 and the signal reception unit 202 of the base station 20 may be implemented by the communication device 1004.
The input device 1005 is an input device (for example, a keyboard, a mouse, a microphone, a switch, a button, a sensor, or the like) that receives an input from the outside. The output device 1006 is an output device (for example, a display, a speaker, a LED light, or the like) that performs outputting to the outside. The input device 1005 and the output device 1006 may have an integrated configuration (for example, a touch panel).
Further, various devices such as the processor 1001 and the memory 1002 are connected together via the bus 1007 for performing communication of information. The bus 1007 may be configured by a single bus and may be configured by different buses corresponding to the various devices.
Further, each of the user apparatus 10 and the base station 20 may include hardware such as a microprocessor, a digital signal processor (DSP), an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), or a FPGA (Field Programmable Gate Array). The hardware may implement some or all of the various functional blocks. For example, the processor 1001 may be implemented by at least one of these types of hardware.
As described above, according to the present embodiment, a communication apparatus includes an encoding unit configured to generate a second coded bit sequence by encoding according to a second encoding scheme a frozen bit sequence and a second bit sequence that includes a first bit sequence and a first coded bit sequence generated from encoding the first bit sequence according to a first encoding scheme; and a transmission unit configured to transmit a transmission signal generated from the second coded bit sequence. The communication apparatus determines the second coded bit sequence on the basis of a length of the second bit sequence.
Thanks to the configuration, as a result of a change in a frozen bit sequence, a change in a CRC bit sequence, or a change in a coded bit sequence being performed on the basis of the payload size, a decoding side can identify the payload size. In other words, in a radio communication system, for a reception side decoding a coded bit sequence, identifying the payload size is made easier.
The frozen bit sequence may be determined on the basis of the length of the second bit sequence. Thus, a frozen bit sequence can be changed on the basis of a payload size.
A bit sequence to be used to scramble the first coded bit sequence may be determined on the basis of the length of the second bit sequence. Thus, scrambling of a CRC bit sequence can be changed on the basis of a payload size.
The first coded bit sequence may be generated from encoding an additional bit sequence and the first bit sequence according to the first encoding scheme. Thus, a CRC bit sequence generating method can be changed on the basis of the additional bits.
The additional bit sequence may be a bit sequence: each bit of the bit sequence is 1 or the bit sequence is a predetermined nonzero bit sequence; and the bit sequence has a predetermined length. Thus, additional bits can be changed on the basis of a payload size.
A bit sequence to be used to scramble the second coded bit sequence may be changed on the basis of the length of the second bit sequence. Thus, scrambling of a Polar encoded bit sequence can be changed on the basis of a payload size.
As described above, according to the present embodiment, a communication apparatus used in a radio communication system includes an encoding unit configured to generate a second coded bit sequence by encoding according to a second encoding scheme an input frozen bit sequence and a payload bit sequence that includes an information bit sequence and a first coded bit sequence generated from encoding the information bit sequence according to a first encoding scheme; and a transmission unit configured to generate a transmission signal from the second coded bit sequence generated by the encoding unit and transmit the transmission signal. The communication apparatus makes the generated second coded bit sequence different on the basis of a length of the payload bit sequence.
Thanks to the configuration, as a result of a change in a frozen bit sequence, a change in a CRC bit sequence, or a change in a coded bit sequence being performed on the basis of the payload size, a decoding side can identify the payload size. In other words, in a radio communication system, upon decoding of a coded bit sequence by a reception side, it is possible to easily identify the payload length.
The frozen bit sequence may be changed on the basis of the length of the payload bit sequence. Thus, a frozen bit sequence can be changed on the basis of a payload size.
A bit sequence to be used to scramble the first coded bit sequence may be determined on the basis of the length of the payload bit sequence. Thus, scrambling of a CRC bit sequence can be changed on the basis of a payload size.
An additional bit sequence may be determined on the basis of the length of the payload bit sequence and the coded bit sequence obtained according to the first encoding scheme may be generated from encoding the additional bit sequence and the information bit sequence according to the first encoding scheme. Thus, a CRC bit sequence generating method can be changed on the basis of the payload size.
The additional bit sequence may be a bit sequence: each bit of the bit sequence is 1 or the bit sequence is a predetermined nonzero bit sequence; and the bit sequence has a predetermined length. Thus, additional bits can be changed on the basis of a payload size.
A bit sequence to be used to scramble the second coded bit sequence may be changed on the basis of the length of the payload bit sequence. Thus, scrambling of a Polar encoded bit sequence can be changed on the basis of a payload size.
As described above, according to the present embodiment, a communication apparatus used in a radio communication system includes an encoding unit configured to generate coded information by performing predetermined encoding on input known bit values, information bit values, and padding bit values; and a transmission unit configured to generate a transmission signal from the coded information generated by the encoding unit and transmit the transmission signal. The padding bit values are values to be converted to a predetermined identifier through the above-mentioned coding. The predetermined identifier is used by another communication apparatus that receives the transmission signal to decode the coded information.
Thanks to the configuration, a technology is provided, according to which, in a radio communication system where a transmission side transmits coded information to which a predetermined identifier is applied and a reception side uses the predetermined identifier to detect the information, the reception side can have a satisfactory false detection rate.
The transmission unit may puncture from the coded information the predetermined identifier obtained from the above-mentioned coding. Thereby, it is possible to reduce the number of bits of the transmission signal.
Furthermore, according to the present embodiment, a communication apparatus used in a radio communication system includes a reception unit configured to demodulate a signal received from another communication apparatus to obtain likelihoods of respective bits of coded information coded according to predetermined coding; and a decoding unit configured to use the likelihoods and likelihoods corresponding to a predetermined identifier to decode the coded information.
Thanks to the configuration, a technology is provided, according to which, in a radio communication system where a transmission side transmits coded information to which a predetermined identifier is applied and a reception side uses the predetermined identifier to detect information, the reception side can have a satisfactory false detection rate.
The decoding unit uses, as known bit values used for the decoding, frozen bit values and known padding bit values, for example. Thereby, it is possible to properly implement decoding that uses known information (for example, Polar decoding).
The decoding unit may perform a check using an error detection code on information obtained from decoding the coded information and, in response to the check being successful, may determine that the information is a final decoding result. Thereby, it is possible to properly determine correctness of received information.
Thus, the embodiment of the present invention has been described. However, the disclosed invention is not limited to such an embodiment of the present invention, and the person skilled in the art will understand various variants, modifications, alternatives, replacements, and so forth. Although specific numerical values have been used as examples for promoting understanding of the invention, the numerical values are merely examples unless otherwise noted, and any other suitable values may be used instead. Classifications of items in the above description are not essential to the present invention, contents described in two or more items may be used in combination if necessary, and contents described in an item may be applied to contents described in another item (unless a contradiction arises). The boundaries between the functional units or the processing units in the functional block diagrams do not necessarily correspond to the boundaries of physical components. Operations of a plurality of functional units may be physically implemented by a single component and an operation of a single functional unit may be physically implemented by a plurality of components. Concerning the operation procedures described above for the embodiment of the present invention, the orders of steps may be changed unless a contradiction arises. For the sake of convenience for describing the operations, the user apparatus 10 and the base station 20 have been described with the use of the functional block diagrams. These apparatuses may be implemented by hardware, software, or a combination thereof. Each of software functioning with a processor of the user apparatus 10 according to the embodiment of the present invention and software functioning with a processor of the base station 20 according to the embodiment of the present invention may be stored in any suitable recording medium such as a random access memory (RAM), a flash memory, a read-only memory (ROM), an EPROM, an EEPROM, a register, a hard disk (HDD), a removable disk, a CD-ROM, a database, or a server.
Providing of information may be implemented not only according to the embodiment of the present invention described herein but also by another method. For example, providing of information may be implemented with the use of physical layer signaling (for example, DCI (Downlink Control Information) or UCI (Uplink Control Information)), higher layer signaling (for example, RRC (Radio Resource Control) signaling, MAC (Medium Access Control) signaling, broadcast information (a MIB (Master Information Block), or a SIB (System Information Block)), or another signal, or a combination thereof. Further, RRC signaling may be called a RRC message, and, for example, may be a RRC Connection Setup message, a RRC Connection Reconfiguration message, or the like.
Each embodiment of the present invention described herein may be applied to a system that uses a suitable system such as LTE (Long Term Evolution), LTE-A (LTE-Advanced), SUPER 3G, IMT-Advanced, 4G, 5G, FRA (Future Radio Access), W-CDMA (registered trademark), GSM (registered trademark), CDMA2000, UMB (Ultra Mobile Broadband), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, UWB (Ultra-WideBand), or Bluetooth (registered trademark); and/or a next-generation system expanded on the basis thereof.
Concerning the operation procedures, sequences, flowcharts, and so forth according to each embodiment described herein, the orders of steps may be changed unless a contradiction arises. For example, concerning the methods described herein, the various step elements are illustrated in the exemplary orders and are not limited to the illustrated specific orders.
The specific operations performed by the base station 20 described herein may in some cases be performed by an upper node. It is clear that the various operations performed for communication with the user apparatus 10 can be performed by the base station 20 and/or another network node (for example, a MME, a S-GW or the like may be cited, but not limited thereto) in a network that includes one or more network nodes including the base station 20. In the above, the description has been made for the case where the another network node other than the base station 20 is a single node as an example. In this regard, the another network node may be a combination of a plurality of other network nodes (for example, a MME and a S-GW).
Each embodiment described herein may be solely used, may be used in combination with another embodiment, and may be used in a manner of being switched with another embodiment upon implementation.
By the person skilled in the art, the user apparatus 10 may be called any one of a subscriber station, a mobile unit, a subscriber unit, a wireless unit, a remote unit, a mobile device, a wireless device, a wireless communication device, a remote device, a mobile subscriber station, an access terminal, a mobile terminal, a wireless terminal, a remote terminal, a handset, a user agent, a mobile client, a client, and other suitable terms.
By the person skilled in the art, the base station 20 may be called any one of a NB (NodeB), an eNB (evolved NodeB), a gNB, a base station, and other suitable terms.
The term "to determine" used herein may mean various operations. For example, "to determine" may mean to consider having "determined" to have performed judging, calculating, computing, processing, deriving, investigating, looking up (for example, looking up a table, a database, or another data structure), or ascertaining, or the like. Also, "to determine" may mean to consider having "determined" to have performed receiving (for example, receiving information), transmitting (for example, transmitting information), inputting, outputting, or accessing (for example, accessing data in a memory), or the like. Also, "to determine" may mean to consider having "determined" to have performed resolving, selecting, choosing, establishing, comparing, or the like. That is, "to determine" may mean to consider having "determined" a certain operation.
Words "based on" or "on the basis of" used herein do not mean "based on only" or "on the basis of only" unless otherwise specified. That is, the words "based on" or "on the basis of" mean both "based on only" and "based on at least" or both "on the basis of only" and "on the basis of at least".
As long as any one of "include", "including", and variations thereof is used herein or used in the claims, this term has an intended meaning of inclusiveness in the same way as the term "comprising". Further, the term "or" used herein or used in the claims has an intended meaning of not exclusive-or.
Throughout the present disclosure, in a case where an article such as a, an, or the in English is added through a translation, the article may be of a plural form unless the context clearly indicates otherwise.
Note that, in the embodiment of the present invention, an encoding scheme using a CRC is one example of a first encoding scheme. Polar codes, LDPC codes, and convolutional codes are examples of a second encoding scheme. Additional bits are an example of an additional bit sequence.
Thus, the present invention has been described in detail. In this regard, it is clear for the person skilled in the art to understand that the present invention is not limited to the embodiment of the present invention described herein. The present invention can be implemented in a modified or changed mode without departing from the spirit and the scope of the present invention determined by the descriptions of the claims. Therefore, the description herein is for an illustrative purpose and does not have any restrictive meaning for the present invention.
The present international patent application is based on and claims priority to Japanese patent application No. 2017-229496 filed November 29, 2017; the contents of Japanese patent application No. 2017-229496 are incorporated herein by reference in their entirety.
10, 15 user apparatuses
101 signal transmission unit
102 signal reception unit
103 setup information management unit
20 base station
201 signal transmission unit
202 signal reception unit
203 setup information management unit
204 scheduling unit
1001 processor
1002 memory
1003 storage
1004 communication device
1005 input device
1006 output device
This Term Newsletter will provide you with information about your children’s upcoming teaching and learning experiences in the new term. These newsletters are provided to all parents across the school, so that everyone is aware of the great things happening at our school, even if you don’t have a child in this level. Of course, you may choose to read only those Newsletters affecting your children. There is also plenty of additional information provided via Compass throughout the school year, so please make a point of touching base regularly.
When we think of the character traits and qualities we hope to instil in our young learners, a few come to mind – persistence, resilience and kindness. Over the past two terms, our Grade 5 learners have demonstrated their ability to rise above the ongoing pressures of isolation and remote learning and adapt in ways we’ve never seen before. Their commitment to completing online learning tasks, responding to feedback and pushing themselves to improve has been inspiring. We observed many writing skills develop as students experimented with characterisation, description, language features and improved punctuation. We celebrated a range of authors and their texts and analysed the structure and impact of many narratives. Results in maths were overwhelming, with many students achieving growth well above expectation! Despite the physical space between teachers and students, we can all reflect on remote learning as being the best possible experience given the challenges we faced. Teachers ought to be commended on their ongoing commitment to developing high quality learning tasks. Students ought to be congratulated on their engagement levels each and every day.
As Term 4 begins, we will inevitably hit the ground running. Literacy and numeracy assessments will begin immediately, allowing teachers to gain a greater insight into the individual needs of every child. Our teaching and learning program will continue to be rich, ensuring student learning is targeted. The term is going to be busy, but incredibly exciting. We are hopeful that the ‘downward trend’ continues and we will be able to finish the year on-site, surrounded by friends and peers.
The Grade 5 teachers would like to thank all families for their incredible support and flexibility throughout this difficult time. We look forward to working together for the remainder of the year to ensure all children succeed!
Numeracy
In Term 4, students will have an opportunity to extend their learning about location, transformation and angles, including symmetry, grid references and mapping. We will continue to teach a range of problem solving skills throughout the term and provide opportunities for students to practise and apply many multiplication, division, addition and subtraction strategies in various open and closed settings.
Literacy
Students will continue to participate in sustained independent reading throughout Term 4. They will have ownership over the texts they choose to read and will establish and review reading goals through teacher-student conferences and peer discussions. We hope to see many students inspired by the texts they read and use these to develop their own identities as authors. Our writing sessions will continue to centre on the Writing Workshop model, where students will be explicitly taught skills associated with seeding new ideas, drafting texts, revising, editing and publishing. The ‘Author’s Chair’ will be one of the many ways we continue to showcase and celebrate student learning during our literacy sessions.
Inquiry
The Inquiry theme, ‘Our Place in the World’, will continue into Term 4. Students will investigate and research many Asian countries and strengthen their knowledge of the associated cultures, rituals and celebrations. | https://www.laralake.vic.edu.au/newsletter-article/2020-term-4-curriculum-news-grade-5 |
Healthcare Recession: How Can Providers Help Their Patients and Themselves?
June 18, 2020
When we think of a recession, we often tend to exempt healthcare facilities from the conversation. After all, people are constantly in need of healing and care. Because of this, many tend to think of a “healthcare recession” as nothing more than fiction and believe the industry as a whole is…
Understanding Hospital Readmission Rates and the Need to Lower Them
April 28, 2020
Healthcare providers are in a constant state of improving their care capabilities in order to better treat those that seek their care. And contrary to popular beliefs, this transcends care that’s only administered while a patient is residing within a facility. Acute care and emergency treatment…
Preparing for Healthcare Consumerism with Healthcare Price Transparency
April 16, 2020
Patient empowerment through education and engagement programs have been rising in popularity within the healthcare space as it embraces value-based care. Because of this push, we’ve seen an increase in medical devices such as vital tracking wearables and medical tablet pcs loaded up with patient…
Going Paperless in Healthcare: The Benefits and the Roadblocks
April 14, 2020
Patient data has become the cornerstone of healthcare in recent years. And while the term “data” may immediately conjure up images of computer databases and digital spreadsheets, for quite some time, the healthcare space actually preferred to use physical paper charting in order to record this…
Triage Practices and How They’re Evolving
April 7, 2020
Healthcare, like most other industries, is in a constant state of improving operations through the use of new technology. This technology can span from medical computers and EHR records that compile patient notes to automated chatbots that speak to patients when doctors can’t and so much more. And…
The Who, What, When, and Why of Healthcare Industry Cybersecurity
March 26, 2020
Healthcare and cybersecurity have a very closely entwined relationship. The healthcare industry is filled to the brim with medical computers and EHR records housing valuable patient data. Cybercriminals are often chomping at the bit for this kind of information as it can sell for quite the pretty…
What is SOAP Notes? How Standardized Notes Improve Healthcare
March 12, 2020
It’s been a general understanding for some time that the way doctor’s take notes needs to be improved. Doing so would not only enhance patient care, it would cut down on physician burnout as well. It was believed that EHRs were to be the solution to this need in the healthcare space. Unfortunately,…
Healthcare Chatbot Use Cases Blend Tech with Patient Empathy
March 5, 2020
Despite the sheer abundance of technology used in care facilities such as medical computers, AI, surgical robots, and more, healthcare is a very personal, social discipline. After all, doctors and nurses are often working with patients at their lowest and in the most need of a human touch. Many of…
5G Healthcare Use Cases and How to Prepare for Them
March 3, 2020
5G has been the talk of the town for years and not just in healthcare. Industry, retail, tech, every industry under the sun seems to be singing 5G’s praises despite most of these discussions focusing mostly on hypothetical benefits. This is because 5G hasn’t been around for very long and has hardly…
Sign Up
Join our 35,000+ subscriber newsletter and discover the latest in the healthcare, industrial, and enterprise technology communities. | https://www.cybernetman.com/blog/tag/healthcare/ |
The invention relates to the technical field of fluorescent materials, and discloses a fluorescent material with multimode fluorescence characteristics, a preparation method and application. The fluorescent material takes CaAl2O4 as a matrix, Mn2+ and Cr3+ ions as a luminescence center and Ge4+ as a charge control agent; the chemical general formula of the fluorescent material is Ca1-xAl2-2y-2zO4: xMn2+, yCr3+, zGe4+, in which 0.0001 ≤ x ≤ 0.005, 0.0001 ≤ y ≤ 0.005, 0.2 ≤ z ≤ 0.5, and x, y and z are mole numbers. The fluorescent material for the CaAl2O4: xMn2+, yCr3+, zGe4+ (0.0001 ≤ x ≤ 0.005, 0.0001 ≤ y ≤ 0.005, 0.2 ≤ z ≤ 0.5) pavement luminescent paint is successfully synthesized through a traditional high-temperature solid-phase reaction method. The fluorescent material is highly sensitive to components, excitation wavelength and temperature, so that the fluorescent material can output tunable dynamic multicolor radiation, convertible green light and red light signals and convertible green long afterglow and red long afterglow signals by changing test conditions. | 
"Looking back to the past, no effort to beg pardon and to seek to repair the harm done will ever be sufficient,.... Looking ahead to the future, no effort must be spared to create a culture able to prevent such situations from happening, but also to prevent the possibility of their being covered up and perpetuated." - Pope Francis, Aug. 20, 2018 Letter "to the people of God"
Click here for the Diocesan Response to Abuse Crisis &
the List of Credibly Accused Clergy in the Diocese of Providence
"We carry with us these days the pain and hope of all who may feel let down by the Church. Yet, we find ourselves grateful for the reminder that the future does not rest with any of us alone, but rather belongs to God. Hope is to be found in Christ. In Him, hope becomes unshakable."
This invention relates generally to a system and method for creating variable rate application maps for applying dispensing materials to a field. In particular, the present invention relates to a system for creating variable rate application maps which allow the user to vary the dispensing rate of dispensing materials at various field locations depending upon different field conditions at different field locations. In particular, the present invention relates to a geographic information system for maintaining geographic field data and other data for site specific farming applications.
Typically, dispensing apparatus for dispensing materials (such as fertilizer, seeds, etc.) to a field have applied such materials uniformly across a field irrespective of varying field conditions across the field. Such application of materials at a constant rate without consideration to varying field conditions may not provide optimal efficiency or yield. Accordingly, it is desirable to vary the dispensing rates of materials depending upon varied field conditions. Various field conditions, such as soil characteristics and nutrient levels, affect plant growth. Accordingly, it is desirable to provide varying application rates of dispensing materials to accommodate for varied field conditions.
Systems are already known which are capable of evaluating soil nutrient levels and other field conditions. Thus, it is desirable to use such field characteristics to determine optimum or desired dispensing levels at varied locations. The criteria for determining desired dispensing rates and data available may vary. Thus, it is desirable to have a dynamic system for generating variable rate application maps for use with a controller with the flexibility to evaluate dispensing rates depending upon selected data and varied criteria. Additionally, it is desirable to have a system for creating application maps which may consider varied available data for the purpose of achieving optimum dispensing rates for various materials.
The present invention relates to a dynamic system for creating application maps for use with a controller for a dispensing apparatus for dispensing materials to a field. The application maps determine variable rates for application of dispensing materials depending upon varied field conditions. Thus, dispensing materials are dispensed at variable rates across the field depending upon the particular field conditions at a particular field location. Preferably, the system includes a geographic information system for storing field data relative to data type and a georeferenced field location. Preferably, spatial field data is georeferenced relative to longitudinal and latitude coordinates for storage, access and manipulation of said data for creating georeferenced application maps in one embodiment of the invention. The system also includes a means for storing field boundary data for correlating spatial data relative to a specific field.
Preferably, the system includes user interface means for selectively defining various application rate equations for determining rates of application for a particular dispensing material based upon particular field data desired for various field locations. The application rate equations are selectively defined by a user relative to desired relationships between selected data and desired output. A processor is operably associated with the stored field data and the user interface means for defining various application rate equations for use in determining varied application rates for a particular field for a particular dispensing material. The application rate equation correlate selected data and desired output to produce a variable rate application map for a field.
| |
Summary: Ares is the seventh book in O’Connor’s very successful Olympians series of graphic novels. In fact, I was amazed to see that we’ve already gotten to book 7, because that means I’ve missed quite a few in the middle. For those who are interested, we featured the first book, Zeus (reviewed here), and last year we hosted a guest post from the author in honor of the publication of Aphrodite, which was book 6.
I’ve already talked endlessly about my love of Greek (and all other) mythology, so I won’t go into that. But I do want to say that Ares was never my favorite character of the bunch. In that respect, I suppose he represents a side of humanity that I relate to less—the warlike, gleefully conquering, savage side. (Ironic, since my zodiac sign is Aries.) So I was interested to see how O’Connor handled his story, especially given that all I remembered clearly of the stories from my mythology book was something about Ares sowing the teeth of some monster in the ground and reaping a crop of fierce warriors. What the author does is quite clever: we get to see Ares in his element, in context, set against the backdrop of one of the greatest battles in history, the Trojan War. In the process, he comes up against his own divine cousin Athena, also a goddess of war, but of strategy and tactics, based on wisdom.
Peaks: As the battle between the Greeks and Trojans rages, we see—from the gods’ vantage—the whole sorry escapade with Patroclus disguising himself as Achilles to rile up the troops and get them raging for blood, and then Hector killing Patroclus thinking he’s Achilles, and then Achilles going after Hector in a battle to the death. Yep, you’ve got it: it’s Homer’s Iliad. It’s a clever way to flesh out the story of Ares and really show him being a war god, and it’s also a fun way to show the gods in one of their many times of strife, their battle echoing the one on the ground and driving home the point that the Greek pantheon was anthropomorphic, reflecting human traits and preoccupations.
O’Connor brings this all to life in a very readable, traditional comic-book style, with plenty of action but a minimum of actual gore, and little flourishes of anachronistic humor that the contemporary reader will enjoy. It’s got a high production value, and quite a bit of wonderful educational extras at the end (don’t miss the G(r)eek Notes for some entertaining glimpses into the author’s thought process), but at the same time this is a friendly, fun series that in no way feels didactic.
From time to time, the flashing back and forth between the action at Olympus and the action on the ground had me a little confused, in terms of who was who and which gods supported which mortals, and so I had to fall back on my background knowledge of the Iliad to get it straight in my mind. Ultimately, though, since this is NOT the actual Iliad, it isn’t critical to have all those details straight in my head for the purposes of this book.
Conclusion: Overall, this is another excellent addition to the series, and a fantastically fun way to get younger readers interested in Greek mythology. Mythology buffs like myself will almost always enjoy a new retelling, and this one sticks quite close to the original tales, so I can see the series being a great addition to either a home or classroom library. Now, if you’ll excuse me, I gotta go catch up on all those volumes I missed…but in the meantime, be sure to check out the other entries in this blog tour–including blog buds Charlotte, Mary Ann, and Mary Lee! Thanks to Gina Gagliano at First Second for organizing the celebration.
I received my copy of this book courtesy of First Second. You can find ARES: BRINGER OF WAR by George O’Connor at an online e-tailer, or at a real life, independent bookstore near you! | http://writingya.com/?p=455 |
MOOCs are arriving in the business sector, especially within Human Resource Development and customer training. While some (larger) companies offer their own MOOCs, and others use external MOOCs to complement their HR offers, the awareness and perception level among HR managers in general, particularly across Europe, is still lower than might be assumed. Besides unawareness, administrative barriers and lack of experience have been identified as the main reasons.
European HR experts seem to observe the development first before taking action themselves. On the other hand, employees worldwide are already a strong part of the movement. There is a significant bottom-up movement of professional lifelong learners in MOOCs who intend to improve their job performance, upgrade their skills, and optimize their career options. As no specific studies are available for the European context, this paper sets out to answer the research question of whether this bottom-up movement in the uptake of MOOCs by employees can be confirmed for Europe. It further discusses which opportunities and challenges this would imply for both European companies and employees.
The methodology applied is a combination of desk and field research following a mixed-methods approach of quantitative and qualitative analysis. Starting from findings derived from the literature and prior studies, a MOOC targeting a business audience was designed and delivered on a European MOOC platform. The MOOC was accompanied by in-depth evaluation with quantitative and qualitative analysis. In addition to pre- and post-course surveys among the participants, 21 business and educational experts were interviewed in focus groups before and after the MOOC.
Findings confirm that a bottom-up movement is taking place in Europe. The MOOC was not an official part of any company HR programme, yet it showed significant participation from professional learners with clearly job- and career-related motives. In addition, a high potential for MOOCs as a complementary HR offer was acknowledged, though realizing it demands a change of company culture first and local adaptation second. Bottom-up participation would remove some of the key administrative barriers identified and could establish a co-creation approach to HR programmes by managers and proactive employees with MOOC experience.
Author's Biography
Christian Friedl
Christian is senior lecturer at the Department of Management at FH JOANNEUM - University of Applied Sciences in Graz, Austria. His research focuses on entrepreneurship/
Christian holds a Master's degree in Business Administration and Environmental System Sciences and a Postgraduate Master's degree in European Project and Public Management, and is currently pursuing his PhD in the field of Corporate Entrepreneurship Education at the University of Graz. Besides academia, he worked in the music industry for 14 years, on and behind the stage. | http://www.emadridnet.org/index.php/en/28-eventos-y-seminarios/1035-flexible-self-directed-and-bottom-up-are-employees-overtaking-their-human-resource-departments-with-moocs
Dear European fire safety professionals, colleagues and friends,
This is the fourth time that the European Fire Safety Alliance together with its partners invites you to participate in the European Fire Safety Week (EUFSW2022).
As in previous editions, we are going to discuss with leading experts and decision-makers the challenges facing Europe in the field of fire safety.
This year we will mainly focus on how to place EU citizens' safety at the heart of the energy transition, but we will not forget about other important problems defined in the European Fire Safety Action Plan.
View the full program (available soon) and register! We look forward to welcoming you.
Best regards, | https://www.f-e-u.org/events/european-fire-safety-week-14-18-november-2022/ |
TECHNICAL FIELD
BACKGROUND ART
CITATION LIST
PATENT LITERATURE
SUMMARY OF INVENTION
TECHNICAL PROBLEM
SOLUTION TO PROBLEM
ADVANTAGEOUS EFFECTS OF INVENTION
BRIEF DESCRIPTION OF DRAWINGS
DESCRIPTION OF EMBODIMENTS
[Rubber component]
<Copolymer>
<Method for preparing copolymer>
(Polymerization method)
(Polymerization initiator for anionic polymerization)
(Anionic polymerization method)
(Hydrocarbon solvent used in anionic polymerization)
(Randomizer used in anionic polymerization)
<Modifier>
(Structure silica (linear silica))
(Silane coupling agent)
(Antioxidant)
(Softener)
(Vulcanizing agent)
(Vulcanization accelerator)
(Vulcanization activator)
(Other components)
<Preparation of rubber composition>
<Pneumatic tire>
EXAMPLES
<Analysis of copolymer>
(Measurement of weight average molecular weight Mw)
(Determination of copolymer structure)
<Synthesis of copolymer>
(Copolymer (1))
(Copolymers (2) to (15))
<Examples and Comparative Examples>
<Evaluated item and test method>
(Fuel economy)
(Wet grip performance (1))
(Wet grip performance (2))
(Dry grip performance)
(Average primary particle size, average length, and average aspect ratio of silica)
REFERENCE SIGNS LIST
The present invention relates to a rubber composition and a pneumatic tire formed using the composition.
Recent concerns about resource or energy saving and environmental protection created a growing social demand for reducing carbon dioxide emissions. In the automotive industries, various strategies to reduce carbon dioxide emissions, such as weight reduction of vehicles and use of electric energy, have been attempted.
A common goal to be achieved by all vehicles is improved fuel economy, which can be achieved by improvement of the rolling resistance of tires. Another growing need for vehicles is improved driving safety. The fuel economy and safety of vehicles largely depend on the performance of tires used. The vehicle tires are increasingly required to have improved fuel economy, wet grip performance, handling stability, and durability. These properties of tires depend on various factors, such as the structure of tires and materials contained, and in particular depend on the properties of rubber compositions used for their treads, which are tire components to be in contact with a road. Accordingly, many technical improvements of tire rubber compositions have been considered and proposed, and are practically employed.
Tire tread rubber should meet the following requirements: low hysteresis loss for improved fuel economy; and high wet-skid resistance for improved wet grip performance. Low hysteresis loss and high wet-skid resistance are opposing properties, and improvement of either one of these properties is not enough to solve the above problems. One typical strategy to provide improved tire rubber compositions is to use improved materials, specifically to use rubber materials (e.g. styrene butadiene rubber, butadiene rubber) with an improved structure or to use reinforcing fillers (carbon black, silica), vulcanizing agents, and plasticizers with an improved structure or an improved composition.
A strategy to improve the fuel economy and wet grip performance together while maintaining the balance between them is to use silica as filler. Unfortunately, silica is difficult to disperse because of its strong self-aggregation properties. The strategy is needed to overcome this problem. Patent Literature 1 discloses a method for producing a rubber composition with good fuel economy and good wet grip performance by mixing a zinc aliphatic carboxylate and chain end-modified styrene butadiene rubber with a specific compound containing nitrogen and silicon. Still, there is a need for other methods.
Patent Literature 1: JP 2010-111754 A
An object of the present invention is to provide a rubber composition that can solve the above problems, and improve the fuel economy, wet grip performance, and dry grip performance together while maintaining the balance between them, and a pneumatic tire, a component (in particular, a tread) of which includes the rubber composition.
The present invention relates to a rubber composition containing: a rubber component containing a copolymer; and a silica, wherein the copolymer is obtained by copolymerization of 1,3-butadiene, styrene, and a compound represented by formula (I) below, has an amino group at a first chain end and a functional group containing at least one atom selected from the group consisting of nitrogen, oxygen, and silicon at a second chain end, and has a weight average molecular weight of 1.0×10⁵ to 2.5×10⁶, and the silica has an average length W1 between branched particles Z-Z inclusive of the branched particles Zs of 30 to 400 nm, wherein the branched particles Zs are each adjacent to at least three particles;
wherein R1 represents a C1 to C10 hydrocarbon group.
The present invention also relates to a rubber composition, obtained by mixing a silica sol and a copolymer, wherein the copolymer is obtained by copolymerization of 1,3-butadiene, styrene, and a compound represented by formula (I) below, has an amino group at a first chain end and a functional group containing at least one atom selected from the group consisting of nitrogen, oxygen, and silicon at a second chain end, and has a weight average molecular weight of 1.0×10⁵ to 2.5×10⁶;
wherein R1 represents a C1 to C10 hydrocarbon group.
The functional group is preferably an alkoxysilyl group, and is more preferably a combination of an alkoxysilyl group and an amino group.
The amino group at the first chain end is preferably an alkylamino group or a group represented by the following formula (II):
wherein R11 represents a divalent C2 to C50 hydrocarbon group optionally containing at least one of nitrogen and oxygen atoms.
The group represented by the formula (II) is preferably a group represented by the following formula (III):
wherein R12 to R19, which may be the same or different, each represent a hydrogen atom or a C1 to C5 hydrocarbon group optionally containing at least one of nitrogen and oxygen atoms.
The copolymer preferably has, in addition to the amino group, an isoprene unit at the first chain end.
The copolymer preferably contains 0.05 to 35% by mass of the compound represented by the formula (I).
The copolymer is preferably obtained by copolymerizing 1,3-butadiene, styrene, and the compound represented by the formula (I) using a compound containing a lithium atom and an amino group as a polymerization initiator, and modifying a polymerizing end of the resulting copolymer with a modifier containing a functional group containing at least one atom selected from the group consisting of nitrogen, oxygen, and silicon.
The modifier is preferably a compound represented by the following formula (IV), (V), or (VI):
wherein R21, R22, and R23, which may be the same or different, each represent an alkyl, alkoxy, silyloxy, carboxyl, or mercapto group, or a derivative of any of these groups; R24 and R25, which may be the same or different, each represent a hydrogen atom or an alkyl group; and n represents an integer;
wherein R26, R27, and R28, which may be the same or different, each represent an alkyl, alkoxy, silyloxy, carboxyl, or mercapto group, or a derivative of any of these groups; R29 represents a cyclic ether group; and p and q each represent an integer;
wherein R30 to R33, which may be the same or different, each represent an alkyl, alkoxy, silyloxy, carboxyl, or mercapto group, or a derivative of any of these groups.
The polymerization initiator preferably contains an alkylamino group or a group represented by the following formula (II):
wherein R11 represents a divalent C2 to C50 hydrocarbon group optionally containing at least one of nitrogen and oxygen atoms.
The group represented by the formula (II) is preferably a group represented by the following formula (III):
wherein R12 to R19, which may be the same or different, each represent a hydrogen atom or a C1 to C5 hydrocarbon group optionally containing at least one of nitrogen and oxygen atoms.
The polymerization initiator preferably contains an isoprene unit.
The rubber component contains the copolymer in an amount of not less than 5% by mass based on 100% by mass of the rubber component.
The rubber composition preferably contains the silica in an amount of 5 to 150 parts by mass relative to 100 parts by mass of the rubber component.
The silica preferably has an average aspect ratio W1/D determined between branched particles Z-Z inclusive of the branched particles Zs of 3 to 100, wherein D is an average primary particle size.
The silica preferably has an average primary particle size D of 5 to 1000 nm.
The rubber composition preferably contains a silane coupling agent in an amount of 1 to 20 parts by mass relative to 100 parts by mass of silica.
The rubber composition is preferably for use as a rubber composition for a tire tread.
The present invention further relates to a pneumatic tire, formed from the rubber composition.
The present invention provides a rubber composition containing a rubber component containing a copolymer, and a specific silica, wherein the copolymer is obtained by copolymerizing 1,3-butadiene, styrene, and a compound represented by the formula (I), has an amino group at a first chain end and a functional group containing at least one atom selected from the group consisting of nitrogen, oxygen, and silicon at a second chain end, and has a weight average molecular weight in a specific range. This composition improves the fuel economy, wet grip performance, and dry grip performance together while maintaining the balance between them, and can be used for tire components (in particular, treads) to prepare pneumatic tires that are excellent in these performance properties.
[Fig. 1] Fig. 1 is a schematic view illustrating branched particles Zs.
[Fig. 2] Fig. 2 is a schematic view indicating the average primary particle size D, the average length (W1) between branched particles Z-Z inclusive of the branched particles Zs, and the average length (W2) between branched particles Z-Z exclusive of the branched particles Zs of the silica.
The rubber composition of the present invention contains a rubber component containing a copolymer, and a silica (structure silica (linear silica)), wherein the copolymer is obtained by copolymerization of 1,3-butadiene, styrene, and a compound represented by formula (I) below, has an amino group at a first chain end and a functional group containing at least one atom selected from the group consisting of nitrogen, oxygen, and silicon at a second chain end, and has a weight average molecular weight of 1.0×10⁵ to 2.5×10⁶, and the silica has an average length W1 between branched particles Z-Z inclusive of the branched particles Zs of 30 to 400 nm, wherein the branched particles Zs are each adjacent to at least three particles.
(In the formula, R1 represents a C1 to C10 hydrocarbon group.)
The main chain of the copolymer is modified with the compound represented by the formula (I). The compound (in particular, oxygen in the compound) interacts with the filler to improve the dispersibility of the filler, and constrain the copolymer. This results in low hysteresis loss and, in turn, in improved fuel economy, and provides good wet grip performance and good dry grip performance. The amino group at the first chain end and the functional group at the second chain end of the copolymer also cause an interaction between the filler and both ends of the copolymer to improve the dispersibility of the filler and constrain the copolymer. Similarly, this results in low hysteresis loss and, in turn, in improved fuel economy, and provides good wet grip performance and good dry grip performance. The combination of the units derived from the compound represented by the formula (I), the amino group at the first chain end, and the functional group at the second chain end of the copolymer synergistically improves the fuel economy, wet grip performance, and dry grip performance.
In general, the addition of a functional group to a chain end of a polymer having a functional group at the main chain (a main chain-modified polymer) (or in other words, modification into a main chain- and chain end-modified polymer) does not always result in improvement in the above-mentioned performance properties. This is because different functional groups have different affinities for the filler. The very important factor to successfully improve the performance properties is combination of functional groups. In the present invention, the combination of the units derived from the compound represented by the formula (I), the amino group at the first chain end, and the functional group at the second chain end is very good. This good combination is presumed to synergistically improve the fuel economy, wet grip performance, and dry grip performance.
Conventional rubber compositions containing granular silica can have improved wet grip performance but fail to have improved fuel economy and improved dry grip performance at the same time. By contrast, the use of the structure silica results in less amount of occluded rubber (rubber that is enclosed by silica aggregates so that it cannot be deformed), which is formed by aggregation of silica particles, and therefore reduces local stress concentration, i.e., local strain. This reduces the hysteresis loss of a tire at low tensile elongation (low strain) and thus reduces rolling resistance. Additionally, in tires at high tensile elongation (high strain) (e.g. during sudden braking or sharp turning), the structure silica becomes oriented along the circumferential direction of the tire tread. This orientation causes rubber areas around the structure silica particles to exponentially deform, and thus increases the hysteresis loss. Accordingly, the dry grip performance is improved. Moreover, the combined use of the copolymer and the structure silica synergistically increases their improving effects. Owing to these effects, the fuel economy, wet grip performance, and dry grip performance can be improved together to high levels while maintaining the balance between them.
The rubber composition containing the structure silica of the present invention can be prepared by, for example, mixing the copolymer and a silica sol.
The "copolymer" as used herein is included in the concept of the term "rubber component".
In the formula (I), R1 is a C1 to C10 hydrocarbon group. If the number of carbon atoms is more than 10, higher costs may be required. Additionally, the fuel economy, wet grip performance, and dry grip performance may not be sufficiently improved. In order for the resulting polymer to have higher effects of improving the fuel economy, wet grip performance, and dry grip performance, the number of carbon atoms is preferably 1 to 8, more preferably 1 to 6, and still more preferably 1 to 3.
Examples of hydrocarbon groups for R1 include monovalent aliphatic hydrocarbon groups, such as alkyl groups, and monovalent aromatic hydrocarbon groups, such as aryl groups. In order for the resulting polymer to have higher effects of improving the fuel economy, wet grip performance, and dry grip performance, R1 is preferably an alkyl group, and more preferably a methyl or tert-butyl group.
In order for the resulting copolymer to have higher effects of improving the fuel economy, wet grip performance, and dry grip performance, compounds represented by the following formula (I-I) are preferred among compounds represented by the formula (I).
(In the formula (I-I), R1 is defined as above for R1 in the formula (I).)
Examples of the compound represented by the formula (I) include p-methoxystyrene, p-ethoxystyrene,
p-(n-propoxy)styrene, p-(tert-butoxy)styrene, and m-methoxystyrene. These may be used alone, or two or more of these may be used in combination.
The copolymer preferably contains the compound represented by the formula (I) in an amount of not less than 0.05% by mass, more preferably not less than 0.1% by mass, still more preferably not less than 0.3% by mass. Additionally, the amount is preferably not more than 35% by mass, more preferably not more than 20% by mass, still more preferably not more than 10% by mass, particularly preferably not more than 5% by mass, and most preferably not more than 2% by mass. If the amount is less than 0.05% by mass, the effects of improving the fuel economy, wet grip performance, and dry grip performance may not be obtained; if the amount is more than 35% by mass, higher costs may be required.
The copolymer preferably contains styrene in an amount of not less than 2% by mass, more preferably not less than 5% by mass, still more preferably not less than 10% by mass, particularly preferably not less than 15% by mass. Additionally, the amount is preferably not more than 50% by mass, more preferably not more than 30% by mass, still more preferably not more than 25% by mass, and particularly preferably not more than 22% by mass. If the amount is less than 2% by mass, the wet grip performance and dry grip performance may be degraded; if the amount is more than 50% by mass, the fuel economy may be degraded.
The amount of 1,3-butadiene in the copolymer is not limited at all, and can be appropriately determined according to the amounts of other components. The amount is preferably not less than 15% by mass, more preferably not less than 20% by mass, and still more preferably not less than 60% by mass. Additionally, the amount is preferably not more than 97% by mass, more preferably not more than 85% by mass, and still more preferably not more than 80% by mass. If the amount of 1, 3-butadiene is less than 15% by mass, the wet grip performance and dry grip performance may be degraded; if the amount is more than 97% by mass, the fuel economy may be degraded.
The amounts of the compound represented by the formula (I), 1,3-butadiene, and styrene in the copolymer can be determined by the method described below in EXAMPLES.
The amino group (a primary amino group, secondary amino group, or tertiary amino group) at the first chain end may be an acyclic amino group or a cyclic amino group.
Examples of acyclic amines from which acyclic amino groups are derived include monoalkylamines, such as 1,1-dimethylpropylamine, 1,2-dimethylpropylamine, 2,2-dimethylpropylamine, 2-ethylbutylamine, pentylamine, 2,2-dimethylbutylamine, hexylamine, cyclohexylamine, octylamine, 2-ethylhexylamine, and isodecylamine; dialkylamines, such as dimethylamine, methylisobutylamine, methyl(t-butyl)amine, methylpentylamine, methylhexylamine, methyl(2-ethylhexyl)amine, methyloctylamine, methylnonylamine, methylisodecylamine, diethylamine, ethylpropylamine, ethylisopropylamine, ethylbutylamine, ethylisobutylamine, ethyl(t-butyl)amine, ethylpentylamine, ethylhexylamine, ethyl(2-ethylhexyl)amine, ethyloctylamine, dipropylamine, diisopropylamine, propylbutylamine, propylisobutylamine, propyl(t-butyl)amine, propylpentylamine, propylhexylamine, propyl(2-ethylhexyl)amine, propyloctylamine, isopropylbutylamine, isopropylisobutylamine, isopropyl(t-butyl)amine, isopropylpentylamine, isopropylhexylamine, isopropyl(2-ethylhexyl)amine, isopropyloctylamine, dibutylamine, diisobutylamine, di-t-butylamine, butylpentylamine, dipentylamine, and dicyclohexylamine; and laurylamine and methylbutylamine. These acyclic amines are converted into acyclic amino groups when a hydrogen atom bonded to the nitrogen of the acyclic amines is released.
Preferred acyclic amino groups are alkylamino groups (formed by releasing a hydrogen bonded to the nitrogen of the monoalkylamines and dialkylamines), and dialkylamino groups (formed by releasing a hydrogen bonded to the nitrogen of the dialkylamines) are more preferred, because these groups improve the fuel economy, wet grip performance, and dry grip performance more synergistically with the units derived from the compound represented by the formula (I) and the functional group at the second chain end. These alkylamino and dialkylamino groups preferably contain a C1 to C10 alkyl group, more preferably a C1 to C3 alkyl group.
Examples of cyclic amines from which cyclic amino groups are derived include aziridine, 2-methylaziridine, 2-ethylaziridine, compounds containing a pyrrolidine ring (pyrrolidine, 2-methylpyrrolidine, 2-ethylpyrrolidine, 2-pyrrolidone, succinimide), piperidine, 2-methylpiperidine, 3,5-dimethylpiperidine, 2-ethylpiperidine, 4-piperidinopiperidine, 2-methyl-4-piperidinopiperidine, 1-methylpiperazine, 1-methyl-3-ethyl piperazine morpholine, 2-methylmorpholine, 3,5-dimethylmorpholine, thiomorpholine, 3-pyrroline, 2,5-dimethyl-3-pyrroline, 2-phenyl-2-pyrroline, pyrazoline, 2-methylimidazole, 2-ethyl-4-methylimidazole, 2-phenylimidazole, pyrazole, pyrazole carboxylic acid, α-pyridone, γ-pyridone, aniline, 3-methylaniline, N-methylaniline, and N-isopropylaniline. These cyclic amines are converted into cyclic amino groups when a hydrogen atom bonded to the nitrogen of the cyclic amines is released.
Preferred cyclic amino groups are compounds represented by formula (II) below because these groups improve the fuel economy, wet grip performance, and dry grip performance more synergistically with the units derived from the compound represented by the formula (I) and the functional group at the second chain end.
(In the formula, R11 represents a divalent C2 to C50 hydrocarbon group optionally containing a nitrogen and/or oxygen atom.)
R11 is a divalent C2 to C50 (preferably C2 to C10, more preferably C3 to C5) hydrocarbon group.
Examples of such hydrocarbon groups include C2 to C10 alkylene groups, C2 to C10 alkenylene groups, C2 to C10 alkynylene groups, and C6 to C10 arylene groups. In particular, such alkylene groups are preferred.
Among the groups represented by the formula (II), preferred are groups represented by the following formula (III).
(In the formula, R12 to R19, which may be the same or different, each represent a hydrogen atom or a C1 to C5 hydrocarbon group optionally containing a nitrogen and/or oxygen atom.)
Examples of C1 to C5 (preferably C1 to C3) hydrocarbon groups for R12 to R19 are the same hydrocarbon groups as listed above for R1. Among them, alkyl groups are preferred, and methyl and ethyl groups are more preferred.
R12 to R19 are each preferably hydrogen. More preferably, all of R12 to R19 are hydrogen.
The copolymer preferably has, in addition to the amino group, isoprene unit(s) (unit(s) represented by formula (VII) below) at the first chain end. This structure improves the fuel economy, wet grip performance, and dry grip performance more synergistically with the units derived from the compound represented by the formula (I) and the functional group at the second chain end. In particular, the combination of an alkylamino group and isoprene unit(s) is more preferred, and the combination of a dialkylamino group and isoprene unit(s) is still more preferred. For example, groups represented by the formula (A) are suitable.
(In the formula, s represents an integer of 1 to 100 (preferably 1 to 50, more preferably 1 to 10, and still more preferably 1 to 5.))
(In the formula, s represents an integer of 1 to 100 (preferably 1 to 50, more preferably 1 to 10, still more preferably 1 to 5.))
Examples of the functional group containing at least one atom selected from the group consisting of nitrogen, oxygen, and silicon at the second chain end include amino, amide, alkoxysilyl, isocyanate, imino, imidazole, urea, ether, carbonyl, carboxyl, hydroxyl, nitrile, and pyridyl groups.
The functional group at the second chain end is preferably an alkoxysilyl, amino, or ether group, and is more preferably a combination of an alkoxysilyl group and an amino group, because these groups improve the fuel economy, wet grip performance, and dry grip performance more synergistically with the units derived from the compound represented by the formula (I) and the amino group at the first chain end.
Examples of amino groups include the same groups as listed above for the amino group at the first chain end. In particular, alkylamino groups are preferred, and dialkylamino groups are more preferred. These alkylamino and dialkylamino groups preferably contain a C1 to C10 alkyl group, more preferably a C1 to C3 alkyl group.
Examples of alkoxysilyl groups include methoxysilyl, ethoxysilyl, propoxysilyl, and butoxysilyl groups. These alkoxysilyl groups preferably contain a C1 to C10 alkoxy group, more preferably a C1 to C3 alkoxy group.
The copolymer of the present invention can be prepared by, for example, copolymerizing 1, 3-butadiene, styrene, and the compound represented by the formula (I) using a compound containing a lithium atom and an amino group as a polymerization initiator, and modifying a polymerizing end of the polymer with a modifier that contains a functional group containing at least one atom selected from the group consisting of nitrogen, oxygen, and silicon. The following specifically describes how to prepare the copolymer.
The copolymerization of monomer components including styrene, 1,3-butadiene, and the compound represented by the formula (I) can be accomplished by any polymerization method without limitation, and specifically any of solution polymerization, vapor phase polymerization, and bulk polymerization can be used. In particular, solution polymerization is preferred for reasons of stability of the compound represented by the formula (I). The polymerization may be carried out in either a batch-wise or continuous manner.
In the case of solution polymerization, a solution having a monomer concentration (a combined concentration of styrene, 1, 3-butadiene, and the compound represented by the formula (I)) of not lower than 5% by mass is preferably used. The monomer concentration is more preferably not lower than 10% by mass. The use of a solution having a monomer concentration of less than 5% by mass provides only a small amount of the copolymer, and may increase costs. The monomer concentration of the solution is preferably not more than 50% by mass, and more preferably not more than 30% by mass. A solution having a monomer concentration of more than 50% by mass is too viscous to stir, and therefore may not allow the polymerization to successfully proceed.
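As a quick numerical illustration of the concentration window above, the following minimal Python sketch computes the combined monomer concentration of a hypothetical polymerization charge and checks it against the preferred 5 to 50% by mass range; the masses used are assumptions chosen for the example, not values taken from this description.

```python
def monomer_concentration(m_styrene_g, m_butadiene_g, m_compound_i_g, m_solvent_g):
    """Combined monomer concentration (mass %) of the polymerization solution."""
    m_monomers = m_styrene_g + m_butadiene_g + m_compound_i_g
    conc = 100.0 * m_monomers / (m_monomers + m_solvent_g)
    return conc, 5.0 <= conc <= 50.0

# Hypothetical charge: 20 g styrene, 75 g 1,3-butadiene, 5 g p-methoxystyrene in 400 g n-hexane
conc, ok = monomer_concentration(20.0, 75.0, 5.0, 400.0)
print(f"monomer concentration = {conc:.1f} mass %, within 5-50%: {ok}")
```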
In the case of anionic polymerization, a compound containing a lithium atom and an amino group is preferably used as a polymerization initiator. This use results in a conjugate diene polymer (living polymer) having an amino group at the polymerization initiation end and an active polymerization site at the other end.
Since the amino group of the polymerization initiator (the compound containing a lithium atom and an amino group) itself will remain at the polymerization initiation end, the amino group is suitably a group as listed above as the acyclic or cyclic amino group. Preferred forms are also the same.
The compound containing a lithium atom and an amino group can be prepared by, for example, reacting a lithium compound and an amino group-containing compound (e.g. a lithium amide compound).
The lithium compound is not limited at all, and preferred examples include hydrocarbyllithiums. Preferred are hydrocarbyllithiums having a C2 to C20 hydrocarbyl group, and specific examples include ethyllithium, n-propyllithium, isopropyllithium, n-butyllithium, sec-butyllithium, tert-octyllithium, n-decyllithium, phenyllithium, 2-naphtyllithium, 2-butyl-phenyllithium, 4-phenyl-butyllithium, cyclohexyllithium, cyclopentyllithium, and a reaction product of diisopropenylbenzene and butyllithium. Among these, n-butyllithium is particularly suitable.
Since the amino group of the amino group-containing compound will remain at the polymerization initiation end, the amino group-containing compound may suitably be a compound as listed above as the acyclic amine from which the acyclic amino group is derived or the cyclic amine from which the cyclic amino group is derived (in particular, a pyrrolidine ring-containing compound). Accordingly, the amino group-containing compound is preferably an alkylamino group-containing compound (a monoalkylamine or dialkylamine), and more preferably a dialkylamino group-containing compound (dialkylamine). The preferred number of carbon atoms in the alkyl group of the alkylamino or dialkylamino group is as defined for the acyclic amino group.
The amino group-containing compound is preferably a compound having a group represented by the formula (II), and more preferably a compound having a group represented by the formula (III). Preferred examples of groups represented by the formulas (II) and (III) are as listed above for the cyclic amino group.
The reaction between the lithium compound and the amino group-containing compound can be carried out under any conditions without limitation. For example, the lithium compound and the amino group-containing compound are dissolved in a hydrocarbon solvent, and reacted at 0 to 80°C for 0.01 to 1 hour. The lithium compound and the amino group-containing compound are used at a molar ratio [(lithium compound) / (amino group-containing compound)] of, but not limited to, 0.8 to 1.5, for example.
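For the molar ratio check above, a short sketch such as the following can be used; the molar masses and charge masses are rough assumptions chosen for illustration (here n-butyllithium and pyrrolidine), not figures prescribed by this description.

```python
# Approximate molar masses in g/mol; assumed values for this illustration only.
M_N_BULI = 64.06        # n-butyllithium
M_PYRROLIDINE = 71.12   # pyrrolidine

def initiator_molar_ratio(m_li_g, m_amine_g, M_li=M_N_BULI, M_amine=M_PYRROLIDINE):
    """Molar ratio (lithium compound)/(amino group-containing compound)."""
    ratio = (m_li_g / M_li) / (m_amine_g / M_amine)
    return ratio, 0.8 <= ratio <= 1.5

ratio, ok = initiator_molar_ratio(6.4, 7.1)   # hypothetical charge masses in grams
print(f"Li/amine molar ratio = {ratio:.2f}, within 0.8-1.5: {ok}")
```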
The hydrocarbon solvent used in the reaction is not limited at all, and is preferably a C3 to C8 hydrocarbon solvent. Examples thereof include propane, n-butane, isobutane, n-pentane, isopentane, n-hexane, cyclohexane, propene, 1-butene, isobutene, trans-2-butene, cis-2-butene, 1-pentene, 2-pentene, 1-hexene, 2-hexene, benzene, toluene, xylene, and ethylbenzene. These may be used alone, or two or more of these may be used in combination.
The compound containing a lithium atom and an amino group (e.g. a lithium amide compound) can be prepared by reacting the lithium compound and the amino group-containing compound, or alternatively, a commercial product may be used. In the case of reacting the lithium compound and the amino group-containing compound, the lithium compound and the amino group-containing compound may be reacted before being combined with the monomer components, or may be reacted in the presence of the monomer components. Since the amino group-containing compound is more reactive than the monomer components, the reaction between the lithium compound and the amino group-containing compound preferentially proceeds even in the presence of the monomer components.
Examples of lithium amide compounds include lithium hexamethyleneimide, lithium pyrrolidide, lithium piperidide, lithium heptamethyleneimide, lithium dodecamethyleneimide, lithium dimethylamide, lithium diethylamide, lithium dibutylamide, lithium dipropylamide, lithium diheptylamide, lithium dihexylamide, lithium dioctylamide, lithium di-2-ethylhexylamide, lithium didecylamide, lithium-N-methylpiperazide, lithium ethylpropylamide, lithium ethylbutylamide, lithium ethylbenzylamide, lithium methylphenethylamide, and compounds represented by formula shown below. In particular, lithium pyrrolidide, lithium dimethylamide, and lithium diethylamide are preferred.
Other preferred examples of the compound containing a lithium atom and an amino group include compounds containing an amino group and isoprene unit(s) (unit(s) represented by formula (VII) below). These compounds improve the fuel economy, wet grip performance, and dry grip performance more synergistically with the units derived from the compound represented by the formula (I) and the functional group at the second chain end.
(In the formula, s represents an integer of 1 to 100 (preferably 1 to 50, more preferably 1 to 10, still more preferably 1 to 5.))
In particular, compounds containing an alkylamino group and the isoprene unit(s) are preferred, and compounds containing a dialkylamino group and the isoprene unit(s) are more preferred. For example, compounds represented by the formula below are preferred. Compounds represented by the formula below include the compound of the formula with s=2, sold by FMC Lithium under the name AI-200.
(In the formula, s represents an integer of 1 to 100 (preferably 1 to 50, more preferably 1 to 10, still more preferably 1 to 5.))
The anionic polymerization to produce the copolymer using the compound containing a lithium atom and an amino group as a polymerization initiator can be accomplished by any method without limitation, and conventional known methods can be used. Specifically, styrene, 1,3-butadiene, and the compound represented by the formula (I) are anionically polymerized in an inert organic solvent, such as a hydrocarbon solvent (e.g. an aliphatic, alicyclic, or aromatic hydrocarbon compound), using the compound containing a lithium atom and an amino group as a polymerization initiator and optionally a randomizer. After the anionic polymerization is completed, known antioxidants, alcohols to stop the polymerization, and other agents may be optionally added.
The hydrocarbon solvent is preferably one having 3 to 8 carbon atoms, and examples include propane, n-butane, isobutane, n-pentane, isopentane, n-hexane, cyclohexane, propene, 1-butene, isobutene, trans-2-butene, cis-2-butene, 1-pentene, 2-pentene, 1-hexene, 2-hexene, benzene, toluene, xylene, and ethylbenzene. These may be used alone, or two or more of these may be used in combination.
The randomizer is a compound that controls the microstructure of conjugated diene units in the copolymer (for example, to increase the content of 1,2-butadiene units), and the distribution of monomer units in the copolymer (for example, to randomize the distribution of butadiene units and styrene units in a butadiene-styrene copolymer). The randomizer is not limited at all, and any of compounds conventionally known as randomizers can be used. Examples include ethers and tertiary amines, such as dimethoxybenzene, tetrahydrofuran, dimethoxyethane, diethylene glycol dibutyl ether, diethylene glycol dimethyl ether, bistetrahydrofurylpropane, triethylamine, pyridine, N-methylmorpholine, N,N,N',N'-tetramethylethylenediamine, and 1,2-dipiperidinoethane. Other examples include potassium salts, such as potassium-t-amylate, and potassium-t-butoxide, and sodium salts, such as sodium-t-amylate.
The randomizer is preferably used in an amount of not less than 0.01 molar equivalents, more preferably not less than 0.05 molar equivalents, relative to the polymerization initiator. The use of less than 0.01 molar equivalents of the randomizer tends to have a small effect and result in insufficient randomization. Additionally, the amount of randomizer is preferably not more than 1000 molar equivalents, and more preferably not more than 500 molar equivalents, relative to the polymerization initiator. The use of more than 1000 molar equivalents of the randomizer tends to largely change the rate of the reaction of monomers and still result in insufficient randomization.
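A minimal sketch of this equivalence check, with hypothetical molar charges, might look as follows (the figures are illustrative assumptions, not values from the examples).

```python
def randomizer_equivalents(mol_randomizer, mol_initiator):
    """Molar equivalents of randomizer relative to the polymerization initiator."""
    eq = mol_randomizer / mol_initiator
    return eq, 0.01 <= eq <= 1000.0

# Hypothetical charge: 0.5 mmol of tetramethylethylenediamine per 1.0 mmol of initiator
eq, ok = randomizer_equivalents(0.5e-3, 1.0e-3)
print(f"randomizer = {eq:.2f} molar equivalents, within 0.01-1000: {ok}")
```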
The modification with the modifier can be accomplished by any method without limitation, and known methods can be used. For example, a copolymer having a modified main chain is synthesized by anionic polymerization, and the copolymer is contacted with the modifier so that the anionic end of the copolymer reacts with the functional group of the modifier to modify the end of the copolymer. Typically, the modifier is reacted with the copolymer in an amount of 0.01 to 10 parts by mass relative to 100 parts by mass of the copolymer.
Examples of the modifier include 3-glycidoxypropyltrimethoxysilane, (3-triethoxysilylpropyl)tetrasulfide, 1-(4-N,N-dimethylaminophenyl)-1-phenylethylene, 1,1-dimethoxytrimethylamine, 1,2-bis(trichlorosilyl)ethane, 1,3,5-tris(3-triethoxysilylpropyl)isocyanurate, 1,3,5-tris(3-trimethoxysilylpropyl)isocyanurate, 1,3-dimethyl-2-imidazolidinone, 1,3-propanediamine, 1,4-diaminobutane, 1-[3-(triethoxysilyl)propyl]-4,5-dihydroimidazole, 1-glycidyl-4-(2-pyridyl)piperazinc, 1-glycidyl-4-phenylpiperazine, 1-glycidyl-4-methylpiperazine, 1-glycidyl-4-methylhomopiperazine, 1-glycidylhexamethyleneimine, 11-aminoundecyltriethoxysilane, 11-aminoundecyltrimethoxysilane, 1-benzyl-4-glycidylpiperazine, 2-(3,4-epoxycyclohexyl)ethyltrimethoxysilane, 2-(4-morpholinodithio)benzothiazole, 2-(6-aminoethyl)-3-aminopropyltrimethoxysilane, 2-(triethoxysilylethyl)pyridine, 2-(trimethoxysilylethyl)pyridine, 2-(2-pyridylethyl)thiopropyltrimethoxysilane, 2-(4-pyridylethyl)thiopropyltrimethoxysilane, 2,2-diethoxy-1,6-diaza-2-silacycloootane, 2,2-dimethoxy-1,6-diaza-2-silacyclooctane, 2,3-diohloro-1,4-naphthoquinone, 2,4-dinitrobenzenesulfonyl chloride, 2,4-tolylene diisocyanate, 2-(4-pyridylethyl)triethoxysilane, 2-(4-pyridylethyl)trimethoxysilane, 2-cyanoethyltriethoxysilane, 2-tributylstanyl-1,3-butadiene, 2-(trimethoxysilylethyl)pyridine, 2-vinylpyridine, 2-(4-pyridylethyl)triethoxysilane, 2-(4-pyridylethyl)trimethoxysilane, 2-lauryl thioethyl phenyl ketone, 3-(1-hexamethyleneimino)propyl(triethoxy)silane, 3-(1,3-dimethylbutylidene)aminopropyltriethoxysilane, 3-(1,3-dimethylbutylidene)aminopropyltrimethoxysilane, 3-(2-aminoethylaminopropyl)trimethoxysilane, 3-(m-aminophenoxy)propyltrimethoxysilane, 3-(N,N-dimethylamino)propyltriethoxysilane, 3-(N,N-dimethylamino)propyltrimethoxysilane, 3-(N-methylamino)propyltriethoxysilane, 3-(N-methylamino)propyltrimethoxysilane, 3-(N-allylamino)propyltrimethoxysilane, 3,4-diaminobenzoic acid, 3-aminopropyldimethylethoxysilane, 3-aminopropyltriethoxysilane, 3-aminopropyltrimethoxysilane, 3-aminopropyltris(methoxydiethoxy)silane, 3-aminopropyldiisopropylethoxysilane, 3-isocyanatopropyltriethoxysilane, 3-glycidoxypropyltriethoxysilane, 3-glycidoxypropyltrimethoxysilane, 3-glycidoxypropylmethyldimethoxysilane, 3-diethylaminopropyltrimethoxysilane, 3-diethoxy(methyl)silylpropyl succinic anhydride, 3-(N,N-diethylaminopropyl)triethoxysilane, 3-(N,N-diethylaminopropyl)trimethoxysilane, 3-(N,N-dimethylaminopropyl)diethoxymethylsilane, 3-(N,N-dimethylaminopropyl)triethoxysilane, 3-(N,N-dimethylaminopropyl)trimethoxysilane, 3-triethoxysilylpropyl succinic anhydride, 3-triethoxysilylpropyl acetic anhydride, 3-triphenoxysilylpropyl succinic anhydride, 3-triphenoxysilylpropyl acetic anhydride, 3-trimethoxysilylpropyl benzothiazole tetrasulfide, 3-hexamethyleneiminopropyltriethoxysilane, 3-mercaptopropyltrimethoxysilane, (3-triethoxysilylpropyl)diethylenetriamine, (3-trimethoxysilylpropyl)diethylenetriamine, 4,4'-bis(diethylamino)benzophenone, 4,4'-bis(dimethylamino)benzophenone, 4'-(imidazol-2-yl)-acetophenone, 4-[3-(N,N-diglycidylamino)propyl]morpholine, 4-glycidyl-2,2,6,6-tetramethylpiperidinyloxy, 4-aminobutyltriethoxysilane, 4-vinylpyridine, 4-morpholinoacetophenone, 4-morpholinobenzophenone, m-aminophenyltrimethoxysilane, N-(1,3-dimethylbutylidene)-3-(triethoxysilyl)-1-propaneamine, N-(1,3-dimethylbutylidene)-3-(trimethoxysilyl)-1-propaneamine, N-(1-methylethylidene)-3-(triethoxysilyl)-1-propaneamine, N-(2-aminoethyl)-3-aminopropylmethyldiethoxysilane, 
N-(2-aminoethyl)-3-aminopropylmethyldimethoxysilane, N-(2-aminoethyl)-3-aminopropyltriethoxysilane, N-(2-aminoethyl)-3-aminopropyltrimethoxysilane, N-(2-aminoethyl)-11-aminoundecyltriethoxysilane, N-(2-aminoethyl)-11-aminoundecyltrimethoxysilane, N-(2-aminoethyl)-3-aminoisobutylmethyldiethoxysilane, N-(2-aminoethyl)-3-aminoisobutylmethyldimethoxysilane, N-(3-diethoxymethylsilylpropyl)succinimide, N-(3-triethoxysilylpropyl)-4,5-dihydroimidazole, N-(3-triethoxysilylpropyl)pyrrole, N-(3-trimethoxysilylpropyl)pyrrole, N-3-[amino(polypropyleneoxy)]aminopropyltrimethoxysilane, N-[5-(triethoxysilyl)-2-aza-1-oxopentyl]caprolactam, N-[5-(trimethoxysilyl)-2-aza-1-oxopentyl]caprolactam, N-(6-aminohexyl)aminomethyltriethoxysilane, N-(6-aminohexyl)aminomethyltrimethoxysilane, N-allyl-aza-2,2-diethoxysilacyclopentane, N-allyl-aza-2,2-dimethoxysilacyclopentane, N-(cyclohexylthio)phthalimide, N-n-butyl-aza-2,2-diethoxysilacyclopentane, N-n-butyl-aza-2,2-dimethoxysilacyclopentane, N,N,N',N'-tetraethylaminobenzophenone, N,N,N',N'-tetramethylthiourea, N,N,N',N'-tetramethylurea, N,N'-ethyleneurea, N,N'-diethylaminobenzophenone, N,N'-diethylaminobenzophenone, N,N'-diethylaminobenzofuran, methyl N,N'-diethylcarbamate, N,N'-diethylurea, (N,N-diethyl-3-aminopropyl)triethoxysilane, (N,N-diethyl-3-aminopropyl)trimethoxysilane, N,N-dioctyl-N'-triethoxysilylpropylurea, N,N-dioctyl-N'-trimethoxysilylpropylurea, methyl N,N-diethylcarbamate, N,N-diglycidylcyclohexylamine, N,N-dimethyl-o-toluidine, N,N-dimethylaminostyrene, N,N-diethylaminopropylacrylamide, N,N-dimethylaminopropylacrylamide, N-ethylaminoisobutyltriethoxysilane, N-ethylaminoisobutyltrimethoxysilane, N-ethylaminoisobutylmethyldiethoxysilane, N-oxydiethylene-2-benzothiazolesulfenamide, N-cyclohexylaminopropyltriethoxysilane, N-cyclohexylaminopropyltrimethoxysilane, N-methylaminopropylmethyldimethoxysilane, N-methylaminopropylmethyldiethoxysilane, N-vinylbenzylazacycloheptane, N-phenylpyrrolidone, N-phenylaminopropyltriethoxysilane, N-phenylaminopropyltrimethoxysilane, N-phenylaminomethyltriethoxysilane, N-phenylaminomethyltrimethoxysilane, n-butylaminopropyltriethoxysilane, n-butylaminopropyltrimethoxysilane, N-methylaminopropyltriethoxysilane, N-methylaminopropyltrimethoxysilane, N-methyl-2-piperidone, N-methyl-2-pyrrolidone, N-methyl-ε-caprolactam, N-methylindolinone, N-methylpyrrolidone, p-(2-dimethylaminoethyl)styrene, p-aminophenyltrimethoxysilane, γ-glycidoxypropyltrimethoxysilane, γ-methacryloxypropyltrimethoxysilane, (aminoethylamino)-3-isobutyldiethoxysilane, (aminoethylamino)-3-isobutyldimethoxysilane, (aminoethylaminomethyl)phenethyltriethoxysilane, (aminoethylaminomethyl)phenethyltrimethoxysilane, acrylic acid, diethyl adipate, acetamidopropyltrimethoxysilane, aminophenyltrimethoxysilane, aminobenzophenone, ureidopropyltriethoxysilane, ureidopropyltrimethoxysilane, ethylene oxide, octadecyldimethyl(3-trimethoxysilylpropyl)ammonium chloride, glycidoxypropyltriethoxysilane, glycidoxypropyltrimethoxysilane, glycerol tristearate, chlorotriethoxysilane, chloropropyltriethoxysilane, chloropolydimethylsiloxane, chloromethyldiphenoxysilane, diallyl diphenyltin, diethylaminomethyltriethoxysilane, diethylaminomethyltrimethoxysilane, diethyl(glycidyl)amine, diethyldithiocarbamic acid 2-benzothiazolyl ester, diethoxydichlorosilane, (cyclohexylaminomethyl)triethoxysilane, (cyclohexylaminomethyl)trimethoxysilane, diglycidylpolysiloxane, dichlorodiphenoxysilane, dicyclohexylcarbodiimide, divinylbenzene, diphenylcarbodiimide, diphenylcyanamide, 
diphenylmethanediisocyanate, diphenoxymethylchlorosilane, dibutyldichlorotin, dimethyl(acetoxy-methylsiloxane)polydimethylsiloxane, dimethylaminomethyltriethoxysilane, dimethylaminomethyltrimethoxysilane, dimethyl(methoxy-methylsiloxane)polydimethylsiloxane, dimethylimidazolidinone, dimethylethyleneurea, dimethyl dichlorosilane, dimethylsulfamoyl chloride, silsesquioxane, sorbitan trioleate, sorbitan monolaurate, titanium tetrakis(2-ethylhexyoxide), tetraethoxysilane, tetraglycidyl-1,3-bisaminomethylcyclohexane, tetraphenoxysilane, tetramethylthiuram disulfide, tetramethoxysilane, triethoxyvinylsilane, tris(3-trimethoxysilylpropyl)cyanurate, triphenylphosphate, triphenoxychlorosilane, triphenoxymethyl silicon, triphenoxymethylsilane, carbon dioxide, bis(triethoxysilylpropyl)amine, bis(trimethoxysilylpropyl)amine, bis[3-(triethoxysilyl)propyl]ethylenediamine, bis[3-(trimethoxysilyl)propyl]ethylenediamine, bis[3-(triethoxysilyl)propyl]urea, bis[(trimethoxysilyl)propyl]urea, bis(2-hydroxymethyl)-3-aminopropyltriethoxysilane, bis(2-hydroxymethyl)-3-aminopropyltrimethoxysilane, tin bis(2-ethylhexanoate), bis(2-methylbutoxy)methyl chlorosilane, bis(3-triethoxysilylpropyl)tetrasulfide, bisdiethylaminobenzophenone, bisphenol A diglycidyl ether, bisphenoxyethanolfluorene diglycidyl ether, bis(methyldiethoxysilylpropyl)amine, bis(methyldimethoxysilylpropyl)-N-methylamine, hydroxymethyltriethoxysilane, vinyltris(2-ethylhexyloxy)silane, vinylbenzyldiethylamine, vinylbenzyl dimethylamine, vinylbenzyl tributyltin, vinylbenzylpiperidine, vinylbenzylpyrrolidine, pyrrolidine, phenylisocyanate, phenylisothiocyanate, (phenylaminomethyl)methyldimethoxysilane, (phenylaminomethyl)methyldiethoxysilane, phthalic amide, hexamethylene diisocyanate, benzylidene aniline, poly(diphenylmethane diisocyanate), polydimethylsiloxane, methyl-4-pyridyl ketone, methylcaprolactam, methyltriethoxysilane, methyltriphenoxysilane, methyl laurylthiopropionate, and silicon tetrachloride.
The modifier is preferably a compound represented by any one of formulas (IV), (V), and (VI) below, more preferably a compound represented by the formula (IV) or (V), and still more preferably a compound represented by the formula (IV) because these compounds improve the fuel economy, wet grip performance, and dry grip performance more synergistically with the units derived from the compound represented by the formula (I) and the amino group at the first chain end.
(In the formula, R21, R22, and R23, which may be the same or different, each represent an alkyl, alkoxy, silyloxy, carboxyl (-COOH), or mercapto (-SH) group, or a derivative of any of these groups; R24 and R25, which may be the same or different, each represent a hydrogen atom or an alkyl group; and n represents an integer.)
(In the formula, R26, R27, and R28, which may be the same or different, each represent an alkyl, alkoxy, silyloxy, carboxyl (-COOH), or mercapto (-SH) group, or a derivative of any of these groups; R29 represents a cyclic ether group; and p and q each represent an integer.)
(In the formula, R30 to R33, which may be the same or different, each represent an alkyl, alkoxy, silyloxy, carboxyl (-COOH), or mercapto (-SH) group, or a derivative of any of these groups.)
As for compounds represented by the formula (IV), examples of alkyl groups for R21, R22, and R23 include C1 to C4 (preferably C1 to C3) alkyl groups such as a methyl group. Examples of alkoxy groups for R21, R22, and R23 include C1 to C8 (preferably C1 to C6, more preferably C1 to C4) alkoxy groups such as a methoxy group. The term "alkoxy group" is intended to include cycloalkoxy and aryloxy groups. Examples of silyloxy groups for R21, R22, and R23 include silyloxy groups (e.g. trimethylsilyloxy and tribenzylsilyloxy groups) having C1 to C20 aliphatic or aromatic groups as substituents.
As for compounds represented by the formula (IV), examples of alkyl groups for R24 and R25 include the alkyl groups mentioned above (the alkyl groups listed for R21, R22, and R23).
In order to ensure larger effects of improving the fuel economy, wet grip performance, and dry grip performance, R21, R22, and R23 are each preferably an alkoxy group, and R24 and R25 are each preferably an alkyl group.
For reasons of availability, n (integer) is preferably 0 to 5, more preferably 2 to 4, and most preferably 3. If n is 6 or more, higher costs are required.
Specific examples of the compound represented by the formula (IV) include
3-(N,N-dimethylamino)propyltriethoxysilane and 3-(N,N-dimethylamino)propyltrimethoxysilane, which are already listed above as examples of the modifier. In particular, 3-(N,N-dimethylamino)propyltrimethoxysilane is preferred.
As for compounds represented by the formula (V), R26, R27, and R28 are defined as above for R21, R22, and R23 of compounds represented by the formula (IV). In order to ensure large effects of improving the fuel economy, wet grip performance, and dry grip performance, R26, R27, and R28 are each preferably an alkoxy group.
As for compounds represented by the formula (V), examples of cyclic ether groups for R29 include cyclic ether groups containing one ether bond, such as an oxirane group, cyclic ether groups containing two ether bonds, such as a dioxolane group, and cyclic ether groups containing three ether bonds, such as a trioxane group. In particular, in order to ensure large effects of improving the fuel economy, wet grip performance, and dry grip performance, cyclic ether groups containing one ether bond are preferred, and an oxirane group is more preferred. The number of carbon atoms in these cyclic ether groups is preferably 2 to 7, and more preferably 2 to 4. Additionally, cyclic ether groups with a ring structure free of unsaturated bonds are preferred.
For reasons of availability and reactivity, p (integer) is preferably 0 to 5, more preferably 2 to 4, and most preferably 3. If p is 6 or more, higher costs are required.
For reasons of availability and reactivity, q (integer) is preferably 0 to 5, more preferably 1 to 3, and most preferably 1. If q is 6 or more, higher costs are required.
Specific examples of compounds represented by the formula (V) include 3-glycidoxypropyltrimethoxysilane and 3-glycidoxypropyltriethoxysilane, which are already listed above as examples of the modifier. In particular, 3-glycidoxypropyltrimethoxysilane is preferred.
As for compounds represented by the formula (VI), R30 to R33 are defined as above for R21, R22, and R23 of compounds represented by the formula (IV). In order to ensure larger effects of improving the fuel economy, wet grip performance, and dry grip performance, R30 to R33 are each preferably an alkoxy group.
Specific examples of compounds represented by the formula (VI) include tetraethoxysilane and tetramethoxysilane, which are already listed above as examples of the modifier. In particular, tetraethoxysilane is preferred.
In addition to the compounds represented by the formulas (IV), (V), and (VI), N-(3-triethoxysilylpropyl)-4,5-dihydroimidazole, silicon tetrachloride, and the like are also preferably used as the modifier.
In the present invention, after the modification reaction with the modifier, known antioxidants, alcohols to stop the polymerization, and other agents may be optionally added.
The weight average molecular weight Mw of the copolymer is 1.0×10⁵ to 2.5×10⁶. If the Mw is less than 1.0×10⁵, the fuel economy may be degraded; if the Mw is more than 2.5×10⁶, the processability may be degraded. The lower limit of the Mw is preferably not less than 2.0×10⁵, more preferably not less than 3.0×10⁵, and the upper limit is preferably not more than 1.5×10⁶, and more preferably not more than 1.0×10⁶.
The Mw can be appropriately controlled by, for example, varying the amount of polymerization initiator used in the polymerization, and can be determined by the method described below in EXAMPLES.
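As an illustrative check only, a measured Mw (for example, a polystyrene-equivalent value from GPC) can be classified against the ranges above as in the following sketch; the sample value is a hypothetical assumption.

```python
def classify_mw(mw, lower=1.0e5, upper=2.5e6, pref_lower=3.0e5, pref_upper=1.0e6):
    """Classify a weight average molecular weight against the claimed and preferred ranges."""
    if not (lower <= mw <= upper):
        return "outside the 1.0x10^5 to 2.5x10^6 range"
    if pref_lower <= mw <= pref_upper:
        return "within the more preferred range"
    return "within the claimed range"

print(classify_mw(4.4e5))   # hypothetical GPC result
```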
The amount of the copolymer based on 100% by mass of the rubber component is preferably not less than 5% by mass, more preferably not less than 10% by mass, and still more preferably not less than 40% by mass. If the amount is less than 5% by mass, the effects of improving the fuel economy, wet grip performance, and dry grip performance may not be obtained. The amount of the copolymer is preferably not more than 90% by mass, more preferably not more than 80% by mass, and still more preferably not more than 60% by mass. If the amount is more than 90% by mass, higher costs are required, and additionally the abrasion resistance may be degraded.
The copolymer may be used in combination with other rubber materials. Preferred examples of other rubber materials include diene rubbers. Examples of diene rubbers include natural rubber (NR) and synthetic diene rubbers. Examples of synthetic diene rubbers include isoprene rubber (IR), butadiene rubber (BR), styrene butadiene rubber (SBR), acrylonitrile butadiene rubber (NBR), chloroprene rubber (CR), and butyl rubber (IIR). In particular, in order to provide fuel economy, wet grip performance, and dry grip performance together while maintaining the balance between them, NR, BR, and SBR are preferred. More preferably, all of NR, BR, and SBR are used in combination with the copolymer. These rubber materials may be used alone, or two or more of these may be used in combination.
The amount of NR based on 100% by mass of the rubber component is preferably not less than 5% by mass, and more preferably not less than 10% by mass. Additionally, the amount is preferably not more than 40% by mass, and more preferably not more than 30% by mass. The use of NR in an amount within the range mentioned above provides fuel economy, wet grip performance, and dry grip performance together while maintaining the balance between them.
The amount of BR based on 100% by mass of the rubber component is preferably not less than 5% by mass, and more preferably not less than 8% by mass. Additionally, the amount is preferably not more than 30% by mass, and more preferably not more than 20% by mass. The use of BR in an amount within the range mentioned above provides fuel economy, wet grip performance, and dry grip performance together while maintaining the balance between them.
The amount of SBR based on 100% by mass of the rubber component is preferably not less than 5% by mass, and more preferably not less than 10% by mass. Additionally, the amount is preferably not more than 95% by mass, more preferably not more than 90% by mass, still more preferably not more than 75% by mass, and particularly preferably not more than 50% by mass. The use of SBR in an amount within the range mentioned above provides fuel economy, wet grip performance, and dry grip performance together while maintaining the balance between them.
The structure silica (linear silica) used in the present invention includes particles (hereinafter, branched particles Zs) each of which is adjacent to at least three particles, and has a branched structure formed by branched particles Zs and their adjacent particles. The "branched particle Zs" corresponds to particles Zs that are each adjacent to at least three other particles, as shown in Fig. 1, which is a schematic view illustrating branched particles. Structure silicas include those having a branched structure (for example, see Fig. 2) and those having no branched structure. Structure silica having no branched structure easily aggregates, and practically does not exist.
The structure silica has an average length (W1 in Fig. 2) between branched particles Z-Z, inclusive of the branched particles Zs, of not less than 30 nm, preferably not less than 40 nm. If W1 is less than 30 nm, the dry grip performance may not be sufficiently improved. Also, W1 is not more than 400 nm, preferably not more than 200 nm, and still more preferably not more than 100 nm. If W1 is more than 400 nm, the hysteresis loss tends to be increased and the fuel economy tends to be degraded.
The structure silica preferably has an average primary particle size (D, see Fig. 2, which is a schematic view of structure silica including branched particles) of not less than 5 nm, more preferably not less than 7 nm. If D is less than 5 nm, the hysteresis loss tends to be increased and the fuel economy tends to be degraded. Also, D is preferably not more than 1000 nm, more preferably not more than 100 nm, and still more preferably not more than 18 nm. If D is more than 1000 nm, the dry grip performance may not be sufficiently improved.
The structure silica preferably has an average aspect ratio (W1/D) determined between branched particles Z-Z, inclusive of the branched particles Zs, of not less than 3, more preferably not less than 4. If the ratio is less than 3, the dry grip performance may not be sufficiently improved. Also, W1/D is preferably not more than 100, and more preferably not more than 30. If W1/D is more than 100, the hysteresis loss tends to be increased and the fuel economy tends to be degraded.
In the present invention, the D, W1, and W1/D of silica can be determined by analyzing silica dispersed in a vulcanized rubber composition using a transmission electron microscope. For example, in the case where each particle shown in Fig. 2 is spherical, W1/D is 5.
The amount of the structure silica relative to 100 parts by mass of the rubber component is not less than 5 parts by mass, preferably not less than 10 parts by mass, and more preferably not less than 30 parts by mass. If the amount is less than 5 parts by mass, the addition of the structure silica may result in insufficient effects. Additionally, the amount of the structure silica is not more than 150 parts by mass, preferably not more than 120 parts by mass, more preferably not more than 100 parts by mass, and still more preferably not more than 70 parts by mass. If the amount is more than 150 parts by mass, the rubber composition has high rigidity, and may have bad processability and poor wet grip performance.
The proportion of the structure silica based on 100% by mass in total of the structure silica and carbon black is preferably not less than 60% by mass, more preferably not less than 85% by mass, and still more preferably not less than 95% by mass. The upper limit thereof is not limited at all. The use of the structure silica in an amount within the range mentioned above improves the fuel economy, wet grip performance, and dry grip performance together to high levels while maintaining the balance between them.
In the present invention, the structure silica is preferably used with a silane coupling agent. The silane coupling agent is not limited at all, and those widely used in the tire industries can be used. Examples thereof include sulfide silane coupling agents, mercapto silane coupling agents, vinyl silane coupling agents, amino silane coupling agents, glycidoxy silane coupling agents, nitro silane coupling agents, and chloro silane coupling agents. In particular, sulfide silane coupling agents, such as bis(3-triethoxysilylpropyl)tetrasulfide, bis(2-triethoxysilylethyl)tetrasulfide, bis(3-triethoxysilylpropyl)disulfide, and bis(2-triethoxysilylethyl)disulfide, are suitably used. In particular, in order to ensure effects of improving the reinforcing property of the rubber composition, bis(3-triethoxysilylpropyl)tetrasulfide and 3-trimethoxysilylpropylbenzothiazolyltetrasulfide are preferred. These silane coupling agents may be used alone, or two or more of these may be used in combination.
The amount of silane coupling agent is preferably not less than 1 part by mass, and more preferably not less than 2 parts by mass, relative to 100 parts by mass of the structure silica. If the amount of silane coupling agent is less than 1 part by mass, the rubber composition before vulcanization is too viscous, and therefore tends to be difficult to process. Additionally, the amount of silane coupling agent is preferably not more than 20 parts by mass, more preferably not more than 15 parts by mass, and still more preferably not more than 10 parts by mass, relative to 100 parts by mass of the structure silica. If the amount of silane coupling agent is more than 20 parts by mass, effects proportional to the amount may not be obtained, and higher costs may be required.
The rubber composition of the present invention may optionally contain an antioxidant. The antioxidant can be appropriately selected from amine compounds, phenol compounds, imidazole compounds, metal salts of carbamic acid, waxes, and the like.
Examples of softeners include petroleum softeners, such as process oil, lubricating oil, paraffin, liquid paraffin, petroleum asphalt, and petrolatum; fatty oil-based softening agents such as soybean oil, palm oil, castor oil, linseed oil, rapeseed oil, and coconut oil; waxes such as tall oil, factice, beeswax, carnauba wax, and lanolin; and fatty acids such as linoleic acid, palmitic acid, stearic acid, and lauric acid. The softener is preferably used in an amount of not more than 100 parts by mass, more preferably not more than 10 parts by mass relative to 100 parts by mass of the rubber component. The use thereof within such a range is less likely to degrade the wet grip performance.
The rubber composition of the present invention may optionally contain a vulcanizing agent. The vulcanizing agent may be an organic peroxide or a sulfur-containing vulcanizing agent. Examples of organic peroxides include benzoyl peroxide, dicumyl peroxide, di-t-butyl peroxide, t-butyl cumyl peroxide, methyl ethyl ketone peroxide, cumene hydroperoxide, 2,5-dimethyl-2,5-di(t-butylperoxy)hexane, 2,5-dimethyl-2,5-di(benzoylperoxy)hexane, 2,5-dimethyl-2,5-di(t-butylperoxy)hexine-3, and 1,3-bis(t-butylperoxypropyl)benzene. Examples of sulfur-containing vulcanizing agents include sulfur and morpholine disulfide. Among these, preferred is sulfur.
The rubber composition of the present invention may optionally contain a vulcanization accelerator. Examples of the vulcanization accelerator include sulfenamide vulcanization accelerators, thiazole vulcanization accelerators, thiuram vulcanization accelerators, thiourea vulcanization accelerators, guanidine vulcanization accelerators, dithiocarbamic acid vulcanization accelerators, aldehyde-amine vulcanization accelerators, aldehyde-ammonia vulcanization accelerators, imidazoline vulcanization accelerators, and xanthate vulcanization accelerators. These may be used alone, or two or more of these may be used in combination.
The rubber composition of the present invention may optionally contain a vulcanization activator. The vulcanization activator may be stearic acid, zinc oxide, or the like.
The rubber composition of the present invention may optionally contain other compounding agents and additives used in tire rubber compositions and general rubber compositions, such as reinforcing agents, plasticizers, and coupling agents. These compounding agents and additives can be used in amounts commonly employed.
The rubber composition of the present invention can be prepared by any of conventional methods without limitation. The composition is prepared by, for example, mixing the ingredients under commonly used conditions by an ordinary method using a kneader such as a Banbury mixer or a mixing roll.
In particular, in order to easily prepare the rubber composition of the present invention in which structure silica is formed, it is preferred that a silica sol be mixed with the rubber component including the copolymer using a rubber kneader. More preferably, the rubber composition is prepared by a method including the following steps:
(I) a base mixing step of mixing the rubber component containing the copolymer, a silica sol, and optionally agents such as carbon black, a silane coupling agent, zinc oxide, stearic acid, a softener, an antioxidant, and a wax at 80 to 180°C (preferably at 90 to 170°C) for 3 to 10 minutes;
(II) a final mixing step of mixing a mixture obtained in the base mixing step with a vulcanizing agent and a vulcanization accelerator at 30 to 70°C (preferably at 40 to 60°C) for 3 to 10 minutes; and
(III) a vulcanizing step of vulcanizing an unvulcanized rubber composition obtained in the final mixing step at 150 to 190°C (preferably at 160 to 180°C) for 5 to 30 minutes.
The preferred amount of silica sol, calculated as silica, is as described for the structure silica.
If the materials are mixed in toluene, which is a good solvent for rubber, in a mixing step (e.g. the base mixing step) for forming the structure silica, the resulting structure silica tends to have an excessively large W1. Therefore, the mixing is preferably carried out without toluene.
The term "silica sol" herein refers to a colloid solution in which silica is dispersed in a solvent. The silica sol is not limited at all, and is preferably a colloid solution in which slender particles of silica are dispersed in a solvent because the structure silica is readily formed. A colloid solution (organosilica sol) in which slender particles of silica are dispersed in an organic solvent is more preferred. The "slender particles of silica" herein refers to chain-like structures (secondary particles) of silica consisting of multiple spherical or granular primary particles linked. Either linear or branched structures may be used.
Any solvent for dispersing silica can be used without limitation, and preferred examples are alcohols, such as methanol and isopropanol. Isopropanol is more preferred.
The silica (secondary particles) in the silica sol preferably consists of primary particles with an average particle size of 1 to 100 nm, more preferably 5 to 80 nm.
The average particle size of primary particles is determined as the average (average diameter) of the particle sizes of 50 primary particles visually measured in photographs taken by a transmission electron microscope JEM 2100FX available from JEOL Ltd.
In the case of slender particles of silica (secondary particles), the average size of primary particles is determined as the average of the thickness (diameter) measured at 50 randomly selected points of the silica (secondary particles) in an electron microscope photograph. In the case of connected bead-shaped silica (secondary particles) with recessed portions, it is determined as the average of the diameter of each of 50 beads in an electron microscope photograph. In the case of beads each having longer and shorter diameters, that is, slender beads, their short diameter is measured.
The silica (secondary particles) in the silica sol preferably has an average particle size of 20 to 300 nm, more preferably 30 to 150 nm. The average particle size of the silica (secondary particles) can be determined by dynamic light scattering, specifically as follows.
The average particle size of the silica (secondary particles) is measured using a laser particle analyzing system ELS-8000 available from Otsuka Electronics Co., Ltd. (based on cumulant analysis). The measurement is carried out at a temperature of 25°C and an angle between incoming light and the detector of 90° in a number of measurement cycles of 100, and the refraction index of water (1.333) is input as the refraction index of the dispersion solvent. The measurement is typically carried out at a concentration of about 5×10^-3 % by mass.
The silica (secondary particles) can be prepared by, for example, the method disclosed in claim 2 and relevant parts in the description of WO 00/15552, the method disclosed in Japanese Patent No. 2803134, and the method disclosed in claim 2 and relevant parts in Japanese Patent No. 2926915.
Specific examples of the silica (secondary particles) in the present invention include "'SNOWTEX-OUP" (average secondary particle size: 40 to 100 nm) available from Nissan Chemical Industries, Ltd., "SNOWTEX-UP" (average secondary particle size: 40 to 100 nm) available from Nissan Chemical Industries, Ltd., "SNOWTEX PS-M" (average secondary particle size: 80 to 150 nm) available from Nissan Chemical Industries, Ltd., "SNOWTEX PS-MO" (average secondary particle size: 80 to 150 nm) available from Nissan Chemical Industries, Ltd., "SNOWTEX PS-S" (average secondary particle size: 80 to 120 nm) available from Nissan Chemical Industries, Ltd., "SNOWTEX PS-SO" (average secondary particle size: 80 to 120 nm) available from Nissan Chemical Industries, Ltd., "IPA-ST-UP" (average secondary particle size: 40 to 100 nm), and "Quartron PL-7" (average secondary particle size: 130 nm) available from Fuso Chemical Co., Ltd. In particular, IPA-ST-UP is preferred because structure silica can be successfully formed.
The use of the rubber composition of the present invention thus obtained provides a pneumatic tire whose fuel economy, wet grip performance, and dry grip performance are improved together while maintaining the balance between them. The rubber composition can be used for any components of tires, and is suitable for treads and side walls.
The pneumatic tire of the present invention can be manufactured by an ordinary method using the above-described rubber composition.
Specifically, an unvulcanized rubber composition containing the above-mentioned components is extruded and processed into the shape of a desired tire component such as a tread, and assembled with other tire components into an unvulcanized tire by an ordinary method using a tire building machine. This unvulcanized tire is then heated and pressed in a vulcanizer. In this way, the pneumatic tire is manufactured.
The present invention is more specifically described with reference to examples, but the present invention is not limited to these examples.
The chemical agents used in synthesis and polymerization reactions are described below. These agents were purified in accordance with common methods, if necessary.
n-Hexane: product of Kanto Chemical Co., Inc.
Styrene: product of Kanto Chemical Co., Inc.
1,3-Butadiene: product of Tokyo Chemical Industry Co., Ltd.
p-Methoxystyrene: product of Kanto Chemical Co., Inc. (a compound represented by the formula (I))
p-(tert-Butoxy)styrene: product of Wako Pure Chemical Industries, Ltd. (a compound represented by the formula (I))
Tetramethylethylenediamine: product of Kanto Chemical Co., Inc.
Modifier A-1: dimethylamine available from Kanto Chemical Co., Inc.
Modifier A-2: pyrrolidine available from Kanto Chemical Co., Inc.
Modifier A-3: AI-200 available from FMC Lithium (a compound represented by the following formula (s=2))
n-Butyllithium: 1.6 M n-butyllithium in hexane available from Kanto Chemical Co., Inc.
Modifier B-1: tetraethoxysilane available from Kanto Chemical Co., Inc.
Modifier B-2: 3-glycidoxypropyltrimethoxysilane available from AZmax. Co.
Modifier B-3: 3-(N,N-dimethylamino)propyltrimethoxysilane available from AZmax. Co.
2,6-tert-Butyl-p-cresol: NOCRAC 200 available from Ouchi Shinko Chemical Industrial Co., Ltd.
Copolymers prepared as described below were analyzed by the following methods.
The weight average molecular weight Mw of the copolymers was determined using a gel permeation chromatograph (GPC) (GPC-8000 series available from Tosoh Corporation, detector: differential refractometer, column: TSKGEL SUPERMULTIPORE HZ-M available from Tosoh Corporation) relative to polystyrene standards.
In order to determine the structure of the copolymers, the copolymers were analyzed using a device of JNM-ECA series available from JEOL Ltd. Based on the results, the amounts of 1,3-butadiene, compounds represented by the formula (I) (p-methoxystyrene and p-(tert-butoxy) styrene), and styrene in the copolymers were calculated.
A heat-resistant container was sufficiently purged with nitrogen, and charged with n-hexane (1500 ml), styrene (100 mmol), 1,3-butadiene (800 mmol), p-methoxystyrene (5 mmol), tetramethylethylenediamine (0.2 mmol), Modifier A-1 (0.12 mmol), and n-butyllithium (0.12 mmol). The mixture was stirred at 0°C for 48 hours. Then, Modifier B-1 (0.15 mmol) was added thereto, and the mixture was stirred at 0°C for 15 minutes. Thereafter, an alcohol was added to stop the reaction, and 2,6-tert-butyl-p-cresol (1 g) was added to the reaction solution. Subsequently, a copolymer (1) was obtained by reprecipitation purification. The weight average molecular weight of the copolymer (1) was 500,000, the amount of the compound represented by the formula (I) (the amount of alkoxystyrene units) was 1.1% by mass, and the amount of styrene (the amount of styrene units) was 19% by mass.
[Table 1]

| Copolymer | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| n-Hexane (ml) | 1500 | 1500 | 1500 | 1500 | 1500 | 1500 | 1500 | 1500 | 1500 | 1500 | 1500 | 1500 | 1500 | 1500 | 1500 |
| Styrene (mmol) | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 150 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| 1,3-Butadiene (mmol) | 800 | 800 | 800 | 800 | 800 | 800 | 800 | 800 | 800 | 800 | 800 | 800 | 800 | 800 | 800 |
| p-Methoxystyrene (mmol) | 5 | 5 | 5 | 5 | - | - | - | - | - | - | - | 5 | 5 | - | - |
| p-(t-Butoxy)styrene (mmol) | - | - | - | - | 5 | 5 | 5 | 20 | 1 | - | 5 | - | - | - | - |
| Tetramethylethylenediamine (mmol) | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 |
| Modifier A-1 (mmol) | 0.12 | 0.12 | 0.12 | - | 0.12 | 0.12 | 0.12 | 0.12 | 0.12 | - | - | 0.12 | - | 0.12 | - |
| Modifier A-2 (mmol) | - | - | - | 0.12 | - | - | - | - | - | - | - | - | - | - | - |
| Modifier A-3 (mmol) | - | - | - | - | - | - | - | - | - | - | 0.12 | - | - | - | - |
| n-Butyllithium (mmol) | 0.12 | 0.12 | 0.12 | 0.12 | 0.12 | 0.12 | 0.12 | 0.12 | 0.12 | 0.12 | - | 2 | 0.12 | 0.12 | 0.12 |
| Modifier B-1 (mmol) | 0.15 | - | - | - | 0.15 | - | - | - | - | - | - | 0.15 | - | - | 0.15 |
| Modifier B-2 (mmol) | - | 0.15 | - | - | - | 0.15 | - | - | - | - | - | - | - | - | - |
| Modifier B-3 (mmol) | - | - | 0.15 | 0.15 | - | - | 0.15 | 0.15 | 0.15 | - | 0.15 | - | - | - | - |
| 2,6-tert-Butyl-p-cresol (g) | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| Weight average molecular weight (×10^5) | 5 | 4.7 | 4.5 | 4.8 | 4.8 | 4.9 | 4.7 | 5.5 | 4.6 | 4.6 | 4.7 | 0.3 | 5 | 5 | 5 |
| Amount of compound of formula (I) (%) | 1.1 | 1.2 | 1.3 | 1.1 | 1.2 | 1.2 | 1.1 | 5.8 | 0.2 | - | 1.2 | 1.1 | 1.1 | 1.1 | 1.1 |
| Amount of styrene (%) | 19 | 19 | 19 | 19 | 19 | 19 | 19 | 23 | 19 | 19 | 20 | 18 | 19 | 19 | 19 |
Copolymers were synthesized in the same manner as that for the copolymer (1). Table 1 shows the characteristics of the polymers.
Chemicals used in the examples and comparative examples are listed below.
NR: RSS#3
BR: UBEPOL BR150B available from Ube Industries, Ltd.
SBR: SL574 available from JSR Corp.
Copolymers (1) to (15): synthesized as described above
Silica A: Organosilica sol IPA-ST-UP available from Nissan Chemical Industries, Ltd. (silica sol with slender particles of silica dispersed in isopropanol (average particle size of silica (secondary particles) determined by dynamic light scattering: 40 to 100 nm), silica content: 15% by mass) (the amounts shown in Tables 2 and 3 are the amounts of silica in the organosilica sol.)
Silica B: ULTRASIL VN3 (silica particles, N2SA: 175 m2/g, available from EVONIK DEGUSSA)
Silane coupling agent: Si 69
(bis(3-triethoxysilylpropyl)tetrasulfide, available from EVONIK DEGUSSA)
Antioxidant: NOCRAC 6C
(N-1,3-dimethylbutyl-N'-phenyl-p-phenylenediamine) available from Ouchi Shinko Chemical Industrial Co., Ltd.
Stearic acid: stearic acid available from NOF CORP.
Zinc oxide: zinc oxide #1 available from Mitsui Mining & Smelting Co., Ltd.
Sulfur: powdered sulfur available from Tsurumi Chemical Industry Co., Ltd.
Vulcanization accelerator (1): NOCCELER CZ
(N-cyclohexyl-2-benzothiazolylsulfenamide) available from Ouchi Shinko Chemical Industrial Co., Ltd.
Vulcanization accelerator (2) : NOCCELER D (diphenylguanidine) available from Ouchi Shinko Chemical Industrial Co., Ltd.
Each of the combinations of materials shown in Tables 2 and 3 except the sulfur and vulcanization accelerators was mixed in a 1.7-L Banbury mixer available from KOBE STEEL, LTD. at 80 to 180°C for 5 minutes to obtain a kneaded mixture. Next, the sulfur and vulcanization accelerator were added to the kneaded mixture, and they were mixed using an open roll mill at 50°C for 5 minutes to obtain an unvulcanized rubber composition. A portion of the unvulcanized rubber composition was vulcanized at 170°C for 12 minutes into a vulcanized rubber composition.
Another portion of the unvulcanized rubber composition was formed into a tread shape, and assembled with other tire components into an unvulcanized tire using a tire building machine. The tire was vulcanized at 170°C for 12 minutes to obtain a test tire (tire size: 195/65R15).
The vulcanized rubber compositions and test tires thus obtained were evaluated for their performance by the methods described below.
The tan δ of the vulcanized rubber compositions was measured using a spectrometer available from Ueshima Seisakusho Co., Ltd. at a dynamic strain of 1%, a frequency of 10 Hz, and a temperature of 50°C. The measured value is expressed as an index using the equation shown below. A higher index indicates smaller rolling resistance and better fuel economy.
(Fuel economy index) = (tan δ of Comparative Example 1) / (tan δ of each formulation) × 100
The wet grip performance was evaluated using a flat belt friction tester (FR5010 Series) available from Ueshima Seisakusho Co., Ltd. A cylindrical rubber test piece with a width of 20 mm and a diameter of 100 mm was prepared from each vulcanized rubber composition. The slip ratio of the test pieces on a road surface was varied from 0 to 70% at a speed of 20 km/hour, a load of 4 kgf, and a road surface temperature of 20°C, and the maximum value of the friction coefficient detected during the variations was read. The measured value is expressed as an index using the equation shown below. A higher index indicates higher wet grip performance.
(Index of wet grip performance (1)) = (maximum friction coefficient of each formulation) / (maximum friction coefficient of Comparative Example 1) × 100
The test tires were mounted on the wheels of an FR car (engine size: 2000 cc) made in Japan. In a test course with a wet road surface to which water had been sprinkled, the running distance required for the vehicle to stop after braking at 70 km/h (i.e., the braking distance) was measured. The measured value is expressed as an index using the equation shown below. A higher index indicates higher wet grip performance.
(Index of wet grip performance (2)) = (braking distance of Comparative Example 1) / (braking distance of each formulation) × 100
The dry grip performance was evaluated using a flat belt friction tester (FR5010 Series) available from Ueshima Seisakusho Co., Ltd. A cylindrical rubber test piece with a width of 20 mm and a diameter of 100 mm was prepared from each vulcanized rubber composition. The slip ratio of the test pieces on a dry road surface was varied from 0 to 50% at a speed of 20 km/hour, a load of 4 kgf, and an outside temperature of 30°C, and the maximum value of the friction coefficient detected during the variations was read. The measured value is expressed as an index using the equation shown below. A higher index indicates higher dry grip performance.
(Index of dry grip performance) = (maximum friction coefficient of each formulation) / (maximum friction coefficient of Comparative Example 1) × 100
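For readers who prefer to see the index arithmetic spelled out, the short sketch below computes the four indices from the formulas above; the numerical values and function names are hypothetical placeholders, not data from the examples.

```python
# Illustrative computation of the performance indices defined above.
# All measured values below are hypothetical placeholders.

def ratio_index(numerator: float, denominator: float) -> float:
    """Generic index: (numerator / denominator) x 100."""
    return numerator / denominator * 100.0

# Hypothetical measurements; Comparative Example 1 is the reference formulation.
tan_delta_ref, tan_delta_test = 0.120, 0.100   # tan delta at 10 Hz, 50 deg C
mu_wet_ref, mu_wet_test = 0.95, 1.02           # maximum friction coefficient, wet
braking_ref, braking_test = 42.0, 39.5         # braking distance [m] from 70 km/h
mu_dry_ref, mu_dry_test = 1.10, 1.18           # maximum friction coefficient, dry

print("Fuel economy index:        ", ratio_index(tan_delta_ref, tan_delta_test))
print("Wet grip performance (1):  ", ratio_index(mu_wet_test, mu_wet_ref))
print("Wet grip performance (2):  ", ratio_index(braking_ref, braking_test))
print("Dry grip performance index:", ratio_index(mu_dry_test, mu_dry_ref))
```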
A test piece was cut out from the tread of each test tire, and the silica dispersed therein was observed using a transmission electron microscope to calculate the average primary particle size (D), average length between branched particles Z-Z inclusive of the branched particles Zs (W1 in Fig. 2), average length between branched particles Z-Z exclusive of the branched particles Zs (W2 in Fig. 2), average aspect ratio determined between branched particles Z-Z inclusive of the branched particles Zs (W1/D), and average aspect ratio determined between branched particles Z-Z exclusive of the branched particles Zs (W2/D) of the silica. The silica was measured at 30 points, and the average was employed.
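As a rough illustration of how the shape parameters reported for the silica could be aggregated from such TEM readings, the sketch below averages a set of per-point measurements; the numbers and variable names are assumptions for illustration only, not values from this study.

```python
# Sketch: averaging TEM measurements of the dispersed silica (hypothetical data).
# Each tuple holds (primary particle diameter D_i [nm],
#                   branch-to-branch length including branch particles W1_i [nm],
#                   branch-to-branch length excluding branch particles W2_i [nm]).
measurements = [
    (15.0, 75.0, 55.0),
    (16.5, 82.0, 60.0),
    (14.0, 68.0, 50.0),
    # ... the text specifies 30 measurement points per sample
]

n = len(measurements)
D = sum(m[0] for m in measurements) / n
W1 = sum(m[1] for m in measurements) / n
W2 = sum(m[2] for m in measurements) / n

print(f"D  = {D:.1f} nm")
print(f"W1 = {W1:.1f} nm, W1/D = {W1 / D:.1f}")
print(f"W2 = {W2:.1f} nm, W2/D = {W2 / D:.1f}")
```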
As shown in Tables 2 and 3, the compositions of the examples, which contained a rubber component containing a copolymer obtained by copolymerization of 1,3-butadiene, styrene, and a compound represented by the formula (I), having an amino group at a first chain end and a functional group containing at least one atom selected from the group consisting of nitrogen, oxygen, and silicon at a second chain end, and having a weight average molecular weight within a specific range, and a specific silica, showed improved fuel economy, wet grip performance, and dry grip performance together while maintaining the balance between them.
Comparisons between Comparative Examples 1, 6, and 7 and Example 1, between Comparative Examples 1, 6, and 8 and Example 2, and between Comparative Examples 1, 6, and 9 and Example 3 revealed that the combination of the copolymer and the specific silica synergistically improves the fuel economy, wet grip performance, and dry grip performance.
Z: Branched particle | |
This is a quick lecture on the basic meaning of dependent and independent variables in Media and Communication research. This lecture is also relevant to students in other behavioural science disciplines.
A variable is defined as anything that has a quantity or quality that varies. The independent variable, when manipulated, brings about changes in the dependent variable. Watch the videos below to learn the meaning of dependent and independent variables, and how to identify them in a research topic on media and communication. | https://massmediang.com/dependent-and-independent-variables-in-media-and-communication-research-a-quick-pidgin-english-lecture-video/
Introduction {#Sec1}
============
Similar to many of his 9-year-old school peers, Brian was put on psychostimulants after complaints of poor concentration and impulsivity that met ADHD diagnostic criteria. Despite a remarkable improvement in his academic performance, parents and teachers noticed a reduction in appetite and weight loss after the onset of the medication. Moreover, when not under the effects of medication, inattention and impulsivity rebounded, creating innumerable embarrassments for him and his family. His parents are now considering neurofeedback---a non-pharmacological and non-invasive intervention that has shown promising results in managing the ADHD symptoms in the long run and without side effects \[[@CR1]\].
Despite being the most often applied and accepted treatments for ADHD, recent large-scale studies and meta-analyses have demonstrated limitations of psychostimulants and behavioral therapy. Thus, research and the development of non-pharmacological treatments such as neurofeedback have been recommended. To date, however, the clinical value of neurofeedback is still debated, with evaluations ranging from "efficacious and specific" \[[@CR2], [@CR3]\] to "fails to support neurofeedback as an effective treatment for ADHD". \[[@CR4]•\] In this contribution, we will introduce neurofeedback and review the application of neurofeedback to ADHD as well as its past and current evidence in the treatment of ADHD. We will also attempt to reconcile these seemingly discrepant research findings.
Current Treatment Approaches in ADHD {#FPar1}
------------------------------------
Several guidelines exist for the diagnosis and treatment of children who have or are suspected of having ADHD. Among these are international, national, and various regional guidelines for general practitioners. Additionally, there are guidelines for youth aid and youth protective services.
Traditionally, the treatment of ADHD consists of pharmacotherapy, often complemented by behavioral therapy based on parent management training and mediation training for parents and teachers \[[@CR5]\]. Additionally, classroom interventions, academic interventions, and peer-related interventions are being used as psychosocial therapeutic approaches \[[@CR6]\]. Regarding pharmacotherapy, the administration of methylphenidate is often the method of choice (e.g., Ritalin, Concerta, Equasym, Medikinet); however, D-amphetamine, as well as non-psychostimulants, such as atomoxetine and guanfacine, are prescribed too \[[@CR7]\]. Over the past years, the Multimodal Treatment Study of Children with ADHD and follow-up studies (the so-called MTA studies) have provided ample research regarding stimulant medication, behavioral treatments, their combination, and self-chosen community care. Results demonstrate that both stimulant medication and a combined treatment had a clear clinical benefit in the short term, but in the long term group differences attenuate, as assessed after 24 months, as well as after 6 and 8 years \[[@CR8]\]. These findings, in combination with studies indicating the potential side effects of pharmacotherapy \[[@CR9]•, [@CR10]\], partial drug response \[[@CR7]\], and the time and cost intensiveness of combining treatments due to the involvement of multiple professionals \[[@CR6]\], have resulted in a growing interest in the development of alternative non-pharmacological treatments in ADHD.
For instance, computerized cognitive--based training approaches (e.g., working-memory and attention training) aim to reduce ADHD core symptoms and tackle neuropsychological functioning. Research into this topic is still in the early stages and more controlled studies regarding the effects on ADHD core symptoms are required \[[@CR11]\]. Another alternative treatment method for ADHD which is already more extensively studied in the past is neurofeedback. In the following paragraphs, we will (i) introduce neurofeedback, (ii) present standard protocols for ADHD, (iii) review the past and current evidence in the treatment of ADHD, and (iv) depict the current status of institutional and professional regulation of the clinical implementation of neurofeedback.
Definition, History, and Mechanism of Action of Neurofeedback {#FPar2}
-------------------------------------------------------------
Despite the recent popularity of neuromodulation techniques, neurofeedback is for the most part still unknown territory. Neurofeedback is based on a brain-computer interface (BCI) and is implemented by a software system and a processing pipeline, altogether consisting of five elements (Fig. [1](#Fig1){ref-type="fig"}) \[[@CR12]•\]. Neurofeedback measures the participant's own brain activity, which is pre-processed (steps 1 and 2). Pre-selected brain parameters (a specific frequency band or a brain potential) are calculated online (step 3) and translated to signals that are fed back to the user in real time (step 4). Thus, selected features of brain activity are made perceivable for the participant. Through this feedback, the participant (step 5) can learn to self-regulate his or her own brain activity to directly alter the underlying neural mechanisms of cognition and behavior.

Fig. 1 Overview of neurofeedback: the neurofeedback pipeline and three areas of neurofeedback application. The pipeline includes the five most important processing steps and parts of a neurofeedback system
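To make the five processing steps more concrete, here is a deliberately simplified sketch of one pass through such a loop, with band power as the trained parameter and a binary reward as the feedback signal; the sampling rate, band, threshold, and simulated data are all assumptions, and this is in no way a clinical implementation.

```python
# Minimal, illustrative neurofeedback loop (toy example, not a clinical system).
import numpy as np
from scipy.signal import welch

FS = 250                   # assumed sampling rate in Hz
BAND = (12.0, 15.0)        # trained frequency band (an SMR-like band, chosen as an example)
THRESHOLD = 1.0            # arbitrary reward threshold for the trained parameter

def trained_parameter(epoch: np.ndarray) -> float:
    """Step 3: compute the pre-selected brain parameter (band power) for one epoch."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    mask = (freqs >= BAND[0]) & (freqs <= BAND[1])
    return float(psd[mask].mean())

def feedback_signal(value: float) -> str:
    """Step 4: translate the parameter into a signal perceivable by the participant."""
    return "reward" if value > THRESHOLD else "no reward"

# Steps 1-2 (acquisition and pre-processing) are simulated here with random data;
# step 5 (learning) is what the participant does in response to the feedback.
for _ in range(5):                      # five 1-second epochs
    epoch = np.random.randn(FS)         # stand-in for a cleaned EEG epoch
    value = trained_parameter(epoch)
    print(f"band power = {value:.3f} -> {feedback_signal(value)}")
```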
It has been proposed that neurofeedback is based on principles of operant conditioning and procedural skills learning. Due to these learning mechanisms, neuroplasticity is expected to take place during neurofeedback training either via Hebbian plasticity or anti-Hebbian/homeostatic plasticity. Such intrinsic regulatory mechanisms are believed to prevent extreme states of brain activity, such as pathologically high or low synaptic strengths or oscillatory states; for further reading, see \[[@CR13]•\].
Nowadays, neurofeedback is used in three ways: (i) as a therapeutic tool to normalize deviating brain activity and treat neurocognitive disorders, (ii) as a so-called peak performance training to enhance cognitive performance in healthy participants, and (iii) as an experimental method to investigate the causal role of neural oscillations in cognition and behavior. More precisely, the neurofeedback research is dominated by two streams: clinical research and neuroscientific inspired research, which is mainly based on recent methodological and technical innovations, as well as on an increasing knowledge about the neural correlates of behavior and cognition. Some examples of recently developed EEG neurofeedback protocols are the upregulation or downregulation of high alpha \[[@CR14], [@CR15]\], the upregulation of frontal beta \[[@CR16]\], and frontal midline theta \[[@CR17]\], but also neurofeedback protocols using fMRI neurofeedback \[[@CR18]•\].
Historically, neurofeedback dates back to the initial discovery of the human electroencephalogram (EEG) by Hans Berger. Only 6 years after this breakthrough, two French researchers---Gustave Durup and Alfred Fessard---first reported that the EEG alpha rhythm could be subject to classical conditioning \[[@CR19]\], which is thought to be one of the basic premises of neurofeedback. This initial observation was followed up by more systematic studies in the early 1940s that further demonstrated that all of the Pavlovian types of conditioned responses could be demonstrated on the "EEG alpha blocking response" \[[@CR20]\]. In a follow-up study, Jasper and Shagass \[[@CR21]\] investigated further whether participants could also exert voluntary control over this alpha blocking response. In this study, they had participants press a button, which would switch the lights on and off, and use subvocal verbal commands when pressing the button (e.g., "Block" when pressing the button and "Stop" when releasing it). After five sessions, the subject was able to voluntarily suppress alpha activity while the lights were off (a condition where normally synchronous alpha would be present). Despite these early developments, it was only in the 1970s that these same principles were applied more systematically, and the first clinical implications were described in the literature. These developments were motivated by the discovery of the anticonvulsant effects of sensori-motor rhythm (SMR) neurofeedback in cats \[[@CR22]\] and subsequently humans \[[@CR23]\]. The presumed role of SMR modulation in motor behavior was followed by the first demonstrations of the positive effects of SMR neurofeedback in hyperkinetic disorder \[[@CR24]\]. Around the same 1960--1970 period, voluntary control over a slow brain potential called the contingent negative variation (CNV) or "bereitschaftspotential" (readiness potential, due to the property of this potential to emerge when preparing for action, e.g., when waiting in front of a traffic light) was first reported \[[@CR25]\], which laid the foundation of another well-known neurofeedback approach, namely slow cortical potential (SCP) neurofeedback. The first application of SCP neurofeedback in ADHD was reported in 2004 \[[@CR26]\]. The initial findings described above, such as SMR and TBR neurofeedback, resulted in what is currently known as "frequency band neurofeedback."
Standard Protocols with ADHD {#Sec2}
============================
Theta/beta (4--7 Hz/12--21 Hz) ratio (TBR) neurofeedback strives to decrease theta and/or increase beta power in central and frontal locations. This protocol directly targets important electrophysiological characteristics such as high theta/beta ratios, high theta power, and/or low beta power commonly observed in children (for a review, see \[[@CR27]\]) and adults with ADHD \[[@CR28]--[@CR30]\]. Recent randomized controlled trials suggest that 30 to 40 sessions of TBR neurofeedback were as effective as methylphenidate in reducing inattentive and hyperactivity symptoms and were even associated with superior post-treatment academic performance \[[@CR31], [@CR32]\]. It has been proposed that the effects of TBR neurofeedback on ADHD might be explained by the learned self-regulation of attention \[[@CR33]\] as evidenced by enhanced amplitude of endogenous evoked-related potentials such as the P300 \[[@CR34]\]. However, more neuroscientific evidence is needed to determine the specific mechanisms by which TBR neurofeedback might impact cognitive functioning in ADHD.
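As an aside, the theta/beta ratio that this protocol targets is straightforward to quantify offline; the fragment below uses the 4--7 Hz and 12--21 Hz bands mentioned above, while the sampling rate, electrode site, and signal are placeholder assumptions.

```python
# Illustrative offline computation of the theta/beta ratio (TBR) for one EEG channel.
import numpy as np
from scipy.signal import welch

fs = 250                                # assumed sampling rate in Hz
eeg = np.random.randn(60 * fs)          # placeholder for 60 s of cleaned EEG (e.g., at Cz)

freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)
theta_power = psd[(freqs >= 4) & (freqs <= 7)].mean()
beta_power = psd[(freqs >= 12) & (freqs <= 21)].mean()

print(f"theta/beta ratio: {theta_power / beta_power:.2f}")
```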
SMR neurofeedback training over the sensori-motor strip (predominantly in the central right hemispheric region) was first applied to ADHD children by Lubar and colleagues \[[@CR24], [@CR35]\], based on the functional association of the sensori-motor rhythm with behavioral inhibition and the promising results in reducing cortical excitability in epileptics obtained by Sterman, MacDonald, and Stone \[[@CR36]\]. Lubar's seminal studies revealed that the beneficial hyperactivity-reducing effects of a combined SMR/theta neurofeedback training were maintained after psychostimulants were withdrawn in hyperactive children.
Studies suggest that SMR neurofeedback training reduces inattentive and hyperactive/impulsive symptoms in ADHD children to the same extent as TBR training, with a comparable number of treatment sessions. However, the two protocols might achieve the same results through distinct mechanisms. Arns, Feddema, and Kenemans \[[@CR37]\] provided evidence that ADHD patients trained with the SMR protocol showed decreased sleep onset latency (SOL) and improved sleep quality midway through treatment in comparison to those administered TBR. A mediation analysis revealed that this normalized sleep mid-treatment was responsible for the improved inattention post-treatment. The improvements in ADHD symptoms following SMR training might hence be the result of the vigilance stabilization mediated by the regulation of the locus coeruleus noradrenergic system, whose activation has been shown to impact the sleep spindle circuitry \[[@CR38]\]. This explanation seems to be in line with previous indications that patients with ADHD present delays in SOL \[[@CR39]\] and that SMR training increases sleep spindle density and improves sleep quality in healthy adults \[[@CR40]\].
Another standard protocol is the self-regulation of SCPs \[[@CR41], [@CR42]••\], typically trained over around 35 sessions. SCP neurofeedback is based on the learned self-regulation of cortical activation and inhibition, which are associated with negative and positive deflections of slow cortical potentials, respectively. These periodical shifts from electrical positivity to negativity have been described as a phasic tuning mechanism in the regulation of attention \[[@CR43]\], as shown by the enhanced reaction time, stimulus detection, and short-term memory during the negative shift phase \[[@CR44]\]. Since SCPs, of which the CNV is an example, are closely associated with preparatory motor responses with a maximal topographic representation in the motor areas, the vertex is usually the site of choice for training. Unlike TBR and SMR protocols, which are typically unidirectional (i.e., instructions either require the participant to increase or decrease the power of the EEG parameter), the self-regulation of SCPs usually involves training in generating both cortical activation and inhibition. In the case of ADHD, the therapeutic focus is on promoting an increase in the firing probabilities of the underlying cortical areas (i.e., negativation). Another difference relative to frequency neurofeedback is that in SCP neurofeedback the learning trials are higher in number and considerably shorter in duration. Interestingly, it has been hypothesized that SCP training might also be associated with improvements in sleep. The training of slow oscillations, in particular of negative slow direct-current shifts, during SCP neurofeedback might exert control over the sleep spindle circuit and therefore facilitate the transition from wakefulness to sleep \[[@CR45]\].
Current Status of Efficacy of Standard Protocols for Neurofeedback in ADHD {#Sec3}
==========================================================================
As with any emerging new treatment, knowledge of the technical aspects of the treatment, proper standards, and education are crucial for appropriately evaluating the merits and pitfalls of neurofeedback. Unfortunately, the unfounded assumption that "neurofeedback = neurofeedback" is often made. Neurofeedback can differentially impact brain functioning depending on the kind of protocol and implementation, in the same way that different pharmacological treatments (e.g., antidepressants and analgesic drugs) do. As an illustration, neurofeedback treatments such as the earlier mentioned SMR, TBR, and SCP neurofeedback are well investigated and effective in the treatment of ADHD, while other approaches such as posterior alpha enhancement have been found not to be effective (for a review, see \[[@CR3]\]).
Especially when restricted to standard protocols such as the TBR, SMR, and SCP protocols \[[@CR3]\], neurofeedback is a well-investigated treatment for ADHD. This has become evident from several meta-analyses \[[@CR2], [@CR46]••, [@CR47]\], including a critical meta-analysis from the European ADHD Guidelines Group (EAGG) that also conducted a sensitivity analysis focused on so-called "blinded" ratings (i.e., teacher reports only) \[[@CR4]•\]. Blinded ratings usually have lower effect sizes than ratings by people most proximal to the child and therefore least blinded (e.g., parents), and both rating types are only modestly correlated \[[@CR48]\]. One explanation for this may be that the rating types focus on different aspects of ADHD symptoms. This is reflected in studies showing different rating-ADHD aspect associations: for instance, parent ratings of hyperactive-impulsive behaviors were found to be correlated with genetics \[[@CR49]\], whereas teacher ratings have been shown to be associated with medication effects \[[@CR50]\], most likely due to the fast onset of action of psychostimulants. To come back to the latter meta-analysis \[[@CR4]•\], the researchers did not find an effect of neurofeedback in general on teacher-rated ADHD symptoms, but there was an effect when the analysis was restricted to the above-mentioned "standard protocols." Finally, a recent meta-analysis that included 10 RCTs and specifically looked at long-term effects of neurofeedback, compared to active treatments (including psychostimulants) and semi-active treatments (e.g., cognitive training), found that after on average 6 months of follow-up, the effects of neurofeedback were superior to semi-active control groups and no different from active treatments including methylphenidate \[[@CR46]••\]. Interestingly, this meta-analysis confirmed the trend for medication effects to diminish with time, and for the effects of neurofeedback---without additional sessions being conducted---to increase with time. These data suggest a promising aspect of neurofeedback, namely long-term efficacy. Currently, one of the largest and most comprehensive double-blind multisite RCTs is being carried out: the International Collaborative ADHD Neurofeedback study (ICAN). This study consists of a cross-site investigation team with different backgrounds in ADHD treatment approaches, assessing 140 participants in total (see the study design in \[[@CR51]\]), and results are expected to be published in 2019.
Current Status of Institutional and Professional Regulation of Clinical Neurofeedback Implementations {#Sec4}
=====================================================================================================
Although standard protocols turn out to be efficacious and specific, the practical implementation of neurofeedback as a clinical therapy is currently not regulated. This applies to the educational standards, medical safety, and the usage of standard protocols indicated for specific disorders such as ADHD. The lack of regulation and agreed-upon standards comes with the danger of patients being treated with ineffective neurofeedback protocols applied by unlicensed personnel (or, even worse, by people without any health-related background). For instance, although practitioners should stick to standard protocols with functional specificity of the frequency and topographic locations, clinical practice often deviates from what is recommended by research. The lack of regulation and missing standards have furthermore caused a surge in commercially driven applications and proclaimed "innovations" of neurofeedback protocols and implementations. Several studies have now demonstrated that some of those "innovations" and implementations do not work. One example of such an ineffective technique is the SmartBrain neurofeedback approach using the "NASA patented engagement index" with Sony PlayStation feedback \[[@CR51], [@CR52]\]. Additionally, there is no evidence in favor of the efficacy of unconventional neurofeedback protocols used in some neurofeedback clinics \[[@CR53]\] and frequently advertised applications such as *Z* score and LORETA neurofeedback \[[@CR54]\]. Unfortunately, these proclaimed innovations and commercially driven applications only add noise to the ongoing debate on neurofeedback efficacy and risk "throwing the baby out with the bathwater." Above all, this demonstrates the need for further research into the effectiveness of already available and newly developed neurofeedback protocols (i.e., the number of sessions, targeted brain area, selected brain parameter, working mechanism) in addition to proper "agreed-upon standards" and training within the field of neurofeedback.
Neurofeedback researchers and practitioners can affiliate with scientific and professional organizations at the international and national level. On an international level, there are mainly two societies. The Society of Applied Neuroscience (SAN) (<http://www.applied-neuroscience.org/>) is an EU-based nonprofit membership organization for the advancement of neuroscientific knowledge and the development of innovative applications for optimizing brain functioning (such as neurofeedback with EEG, fMRI, NIRS). The International Society for Neurofeedback & Research (ISNR; <https://www.isnr.org>) is a membership organization aimed at supporting scientific research in applied neurosciences and promoting education in the field of neurofeedback, albeit not always clearly separating commercial and objective interests. Other neurofeedback societies or organizations are often connected to certain neurofeedback equipment manufacturers and have (seemingly) conflicting interests. Furthermore, the Biofeedback Certification International Alliance (BCIA) offers a broader international licensure that also includes biofeedback ([www.BCIA.org](http://www.bcia.org)).
Conclusions {#Sec5}
===========
Recent years have witnessed a renewed interest in neurofeedback in response to the lack of long-term effects of both medication and behavioral therapy and the side effects of medication. Herein, we provide evidence for the efficacy and specificity of standard neurofeedback protocols, namely theta/beta, sensori-motor rhythm, and slow cortical potential training. In line with the guidelines for rating evidence developed by the APA, "standard" neurofeedback protocols have been considered to be "Efficacious and Specific, Level V" in the treatment of ADHD (AAPB Guidelines: \[57\]).
However, there are currently no uniform standards regarding training courses for neurofeedback that are accepted by expert associations, whether nationally, in the EU, or in the USA. When performing neurofeedback in a therapeutic context, thorough basic training, a solid technical understanding of the medical devices, the software, and the EEG caps, as well as continuing education, are imperative. Regarding medical safety when performing neurofeedback in a clinical context, neurofeedback devices (hardware: amplifiers and EEG caps; neurofeedback software) are likewise not strictly regulated. However, it is essential that, besides the absolute minimum technical requirements under the Medical Device Regulation (MDR, EU 2017/745), neurofeedback devices be regulated by both the CE marking (which confirms that a medical device meets the essential MDR requirements) and a European equivalent of the Food and Drug Administration (FDA). The FDA enforces laws to protect the consumer's health, safety, and pocketbook. Such potential regulating mechanisms could be implemented by the European medicines regulatory network. In short, the tasks ahead concern regulating neurofeedback as a therapy, developing internationally accepted binding standards for education and neurofeedback implementation, and the qualification of neurofeedback trainers.
Last but not least, Brian---now 4 years later---successfully discontinued his medication under medical supervision. With neurofeedback, his impulsivity symptoms were strongly reduced and he gained control over his concentration, and he is now doing well in high school.
This article is part of the Topical Collection on *Attention-Deficit Disorder*
**Publisher's Note**
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Stefanie Enriquez-Geppert and Diede Smit each declare that they have no conflict of interest. Miguel G Pimenta declares that he is a lecturer in neurofeedback for the neuroCare Group (Munich, Germany). Martijn Arns (MAr) reports research grants and options from Brain Resource (Sydney, Australia); owns stock in and serves as Chief Scientific Adviser of the neuroCare Group (Munich, Germany) and Director and Researcher of Research Institute Brainclinics (Nijmegen, Netherlands); is a consultant on a National Institute of Mental Health, US-funded iCAN study (CNG 2013); and is a co-inventor on four patent applications (A61B5/0402; US2007/0299323, A1; WO2010/139361 A1; one pending) related to EEG, neuromodulation, and psychophysiology (not related to neurofeedback). MAr declares no ownership or financial gains for these patents - just authorship.
This article does not contain any studies with human or animal subjects performed by any of the authors.
| |
Five Ideas to Transform a Soldier's Life
A healthcare strategist proposes five policy changes to benefit U.S. veterans who served in Iraq and Afghanistan wars. Veterans Day, November 11, 2009, is the time to do more for soldiers returning from combat.
November 9, 2009 (Newswire.com) - Washington, D.C.---Veterans Day is Wednesday, November 11, 2009. Since 9/11, more than 1.6 million men and women have served in the U.S. armed forces in Afghanistan and Iraq.
"War changes people," stated Noe Foster, CEO of theStrategist. "Research confirms that the battle does not end when combat soldiers return home. For many the battle intensifies."
A large number struggle with Post Traumatic Stress Disorder (PTSD), Traumatic Brain Injury (TBI), substance abuse, violent rages, and strained family relationships.
Combat veterans need more than lip service and parades.
Noe Foster outlines five policy changes that promise to transform the life of a soldier and help him or her win the personal battle at home:
1. Presume Post Traumatic Stress Disorder (PTSD) will occur. One in three soldiers will return from war with PTSD. The probability increases dramatically with multiple deployments. Treat all combat soldiers in theatre and at home as if they were already experiencing PTSD. Lift the 5-year diagnosis time period attached to PTSD disability benefits. Screening for PTSD is flawed since symptoms can be masked for many years. Like other mental health conditions, PTSD carries a debilitating stigma that hampers early intervention.
2. Prevent Traumatic Brain Injuries (TBI) from occurring. One in five combat veterans returns home with a TBI. Repeated deployments increase the likelihood of an occurrence exponentially. A war-related TBI is most often caused by exposure to a detonated IED or by a motor vehicle accident. Improved safety devices like head restraints need to be explored.
3. Prepare job-ready Guard and Reserve soldiers. Help them translate their experience in a combat zone into skills local job markets want. National Guard and Reserve troops make up 48% of the armed forces in the war zones. Many, while downrange, have received "Dear John" letters from employers whose businesses have collapsed and closed.
4. Auto-enroll Guard and Reserve soldiers who have been deployed to war into the Veterans Affairs health and benefit system. Only 20% of veterans now use the VA healthcare system. Many fail to register for these benefits.
5. Search for and identify children of active duty Guard and Reserve troops. Provide school-based mental health wellness programs to help them cope with a parent's deployment and return. While in Iraq, Afghanistan, or at home, soldiers worry for the health and happiness of their children. Studies reveal that children of deployed troops experience significantly more stress than their friends.
Noe Foster is the CEO and founder of the Strategist, a healthcare advisory firm. | https://www.newswire.com/five-ideas-to-transform-a-soldier/10330 |
Background: While most common diaper-related conditions are easily resolved, the diaper region may be the site of a variety of tumors (either benign or malignant) and other abnormalities that may require completely unique treatment approaches. Objectives: This review sought to catalogue the various conditions and complications that may arise in the diaper area during the first few years of life. Methods: To identify studies included in this review, computerized searches were undertaken in the PubMed and Medline databases using the term tumors of the diaper region with the following terms: tumors, malformations, diaper region, and infant. Searches were limited to studies published between 1995 and 2014. Results: The most common types of tumor in the diaper region are called infantile hemangiomas, which vary in presentation between superficial, deep, segmental, and abortive or minimal. Vascular malformations may also occur in the diaper region, in either isolation or as part of a condition that affects the development of blood vessels, soft tissues, and bones. A range of soft tissue tumors and hamartomas may also occur in the diaper region. Other recently described rare conditions are plaque-like myofibroblastic tumor of infancy (PMTI), and dermatofibrosarcoma protuberans, which can manifest as congenital or acquired lesions. Conclusions: A range of conditions may arise in the diaper area during the first years of life that may require attention. Vigilant monitoring by parents and pediatricians, appropriate identification, and diagnosis and treatment will help retain health for these young patients. | https://ohsu.pure.elsevier.com/en/publications/beyond-infections-tumors-and-malformations-of-the-diaper-area |
Q:
Terms in second-order logic
I am having a hard time trying to find a formal and explicit definition of the syntax of second-order logic. I understand there may be small differences in one formalization w.r.t. another (just like, in the formalization of a first-order language, the set of connectives is often different - just because you can build the missing ones from the others), but there are some gaps that I am unable to fill in myself.
In particular, I was wondering what should be considered the set of second-order terms and I would like to be pointed to some reference book which states this (sufficiently) explicitly.
Let me elaborate a bit more on what I found:
in van Dalen's "Logic and Structure" (5 ed, ch. 5) the author first introduces the symbols of a second-order alphabet, and then defines the set of second-order formulas. However, there's no equality among the symbols, and it's not explicitly mentioned what are the terms he uses for building the formulas. If terms were just the first-order terms they would require a symbol which is not in the alphabet.
in Libkin's "Elements of finite model theory" (https://homepages.inf.ed.ac.uk/libkin/fmt/fmt.pdf, ch. 7), a second-order language is explicitly described as an extension of a FO language. He describes what first-order terms are and then... he just forgets to say what the SO terms are, and a few lines below just says that $t$ and $t'$ are "terms", without mentioning their order. Should I assume SO terms are exactly FO terms? I feel this would somehow be against the next reference I am mentioning.
in Enderton's "A mathematical introduction to logic" (ch. 4) the author introduces two sorts of second-order variables (one for predicate variables and one for function variables). It seems that the SO terms should be the FO terms plus the ones obtained by applying a function variable to FO terms. This is a bit confusing though, as in other books I didn't find the possibility of quantifying explicitly over two different sorts of second-order variables. I know you can always consider a function as a set (with certain properties), but formally this changes the set of what we should call "second-order terms".
I guess this may not be extremely relevant for the development of the theory, but honestly I feel that this is not a good reason for not having a formal definition to start with. I took a look at several other books I'm not mentioning here (for brevity), so please point to some reference only if you are certain that it solves my doubts.
A:
One issue is that, although a first-order language has only one type of basic variables, for individuals, a general language for second order logic has an infinite collection of types of basic variables. Here is one of the more inclusive definitions for the set of types.
There is a type named "$0$" for "first order" variables ($x^0$, $y^0$, $\ldots$) which are meant to range over the individuals of a given model.
For each $n$, there is a type for variables that range over $n$-ary relations $R(x^0_1, \ldots, x^0_n)$ on individuals.
For each $n$, there is a type for variables that range over $n$-ary functions $f^0(x^0_1, \ldots, x^0_n)$ which take individuals and return an individual.
There are an infinite number of variable symbols for each of these types, and each variable symbol is of only one of the types.
The signature for a particular second-order theory can then have:
Constant symbols of any of the types of variables (e.g. a constant symbol $+$ for a binary function, a constant symbol $0$ for an individual, a constant symbol $<$ for a binary relation between individuals, etc). In particular, this includes all the kinds of symbols that could be in a first-order signature.
Third-order function symbols that take a finite number of terms (each of one of the types above) and return an object of one of the types above.
Third-order relation symbols that take a finite number of terms (each of one of the types above).
It is situation dependent whether to include an equality relation for each type. Equality symbols could be omitted entirely, included for only some types, or included for all types. If they are included, the appropriate logical axioms also need to be assumed in the deductive system.
Given all that, the terms of a second-order logic in a given signature are defined by analogy with first-order logic:
Variables of any of the types are terms. Some of these are function variables, and some basic terms may be function symbols from the signature.
Constant symbols from the signature are terms.
The function symbols of the signature, with any variables substituted into them, are terms of the appropriate type.
If $F^\rho(x^\sigma, x^\tau)$ is a function term that takes inputs of types $\sigma$ and $\tau$, and $t^\sigma$ and $t^\tau$ are terms of the appropriate types, then $F^\rho(t^\sigma, t^\tau)$ is a term of type $\rho$. The same holds for function terms of each arity. For example, if $f^0(x^0_1)$ is a function term and $t^0$ is a term of type $0$ then $f^0(t^0)$ is also term of type $0$.
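For instance, with an illustrative signature (chosen here only for the example, not fixed by anything above) containing an individual constant $0$ and a binary function constant $+$, and with individual variables $x^0, y^0$, a unary function variable $f^0$, and a unary relation variable $R$, the expressions
$$x^0, \qquad 0, \qquad f^0(x^0), \qquad x^0 + f^0(y^0)$$
are all terms of type $0$. The relation variable $R$ is a term of its own relation type but produces no further terms; instead it is used to form atomic formulas such as $R(x^0 + f^0(y^0))$, which can then be quantified, e.g. $\forall R\, \exists f^0\, \forall x^0\; R(f^0(x^0))$.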
In many settings, we can get by with only part of this general syntax, just as in first-order logic we can often simplify the syntax to make a purely functional or purely relational language.
As usual, we can replace $n$-ary function symbols with $(n+1)$-ary relation symbols, although we will need to add axioms or clauses saying that the relation symbols define functions.
In theories of arithmetic we have a pairing function which is a bijection between the set of pairs of individuals and the set of individuals. In such settings, we can reduce our relation symbols to just unary relations. For this reason, second-order arithmetic is often axiomatized as having only two types: individuals of type $0$ and unary relations on individuals. In that case there are no function variables and no quantifiers for function variables.
Alternatively, some authors axiomatize second-order arithmetic or higher-order arithmetic with only function symbols, so there are no relation variables and no quantifiers over relations. This convention is common, in particular, in constructive higher order arithmetic. For example, the intuitionistic theory of Heyting arithmetic in all finite types, $\text{HA}^\omega$, is often axiomatized with only function variables, as is its second-order variant $\text{HA}^2$.
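To make the relational convention concrete (this is an illustration, not a quotation from any particular axiomatization): in second-order arithmetic with only individual variables $x, y, \ldots$ and unary relation (set) variables $X, Y, \ldots$, induction can be stated without any function variables as
$$\forall X\,\bigl(X(0) \wedge \forall x\,(X(x) \to X(x+1)) \to \forall x\, X(x)\bigr),$$
writing $x+1$ for the successor of $x$. Conversely, an $n$-ary function symbol $f$ that one wishes to avoid can be traded for an $(n+1)$-ary relation symbol $R_f$ together with the axiom $\forall x_1 \cdots \forall x_n\, \exists! y\, R_f(x_1, \ldots, x_n, y)$, as mentioned above.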
Second-order logic can be generalized to higher-order logic in all finite types. The syntax I am sketching here is just a restriction of that syntax to the special case where all objects in a model are of appropriately low types.
A:
In my neck of the logical woods, the canonical text on second-order logic is Stewart Shapiro's (slightly oddly titled) Foundations Without Foundationalism: A Case for Second Order Logic (OUP: Oxford Logic Guides 17).
Shapiro starts the formal exposition of various second order logics in his Chap 3. There's an account of the new formation rule for terms (added to the existing first order rules) on p. 62.
| |
The cancellation of the new M4 relief road marks an opportunity for Cymru to break with the unsustainable transportation policies of the past, especially our chronic over-reliance on cars.
However, there are barriers to be overcome before we can even begin seeking appropriate solutions.
The first barrier is the notion that investing in transport infrastructure is essential for economic growth.
While the growth in transport volume has closely followed rates of economic growth, correlation is not causation. Further, as the (UK) Secretary of State for Transport was advised by 32 professors of transport in 2013:
“Recent evidence from the UK and internationally shows signs of road traffic growth leveling (sic) off, even after accounting for lower than anticipated economic growth…which the Department for Transport has never forecast…”
They noted…
“…a range of views as to the importance of new transport infrastructure in stimulating economic growth. The evidence base is not as strong as you, or we, might wish it to be.”
Our service and knowledge-led economy should not be assumed to be as tightly coupled to road traffic for its success as it once might have been.
The second barrier is that economic growth, elusive and limited though that has often been, and in the form that has massively undermined our life support systems, cannot continue without catastrophic consequences for all life on Earth. There is no Planet B.
It is essential that we re-focus locally and internationally on other opportunities for growth – personal, social, cultural, societal – and leave gross material consumption behind.
The third barrier, and possibly the most difficult to overcome, is the extreme reluctance of elites in all countries to accept that business as usual is no longer an option.
Many of the powerful have major investments in what are likely to become stranded assets – the oil and gas industry being the most notable example – and will resist change.
Many large companies will become dinosaurs, unable to change fast enough to contribute to solutions, thus remaining parts of the problem.
Solutions
Starting with a relatively clean sheet, I offer the following proposals for what might we do in Cymru:
- Secure full devolution of all transport powers
It is obvious that we are better placed to resolve transportation (and all other) issues in our own country. Devolution of all transport powers would give us an opportunity to set up a coherent and integrated national policy framework for passengers and freight, focusing on safety, access (not mobility), efficiency, equity and affordability.
- Revise the Highway Code to give absolute rights of way to pedestrians and people in wheelchairs in all shared space (eg. footpaths, supermarket carparks, cycleways)
We must redress the balance between people and vehicles, prioritising safety in our planning and design. While desirable, adopting Vision Zero in Cymru would require more far-reaching changes to the Highway Code.
- Pedestrianise urban and suburban centres
Switching to less expensive forms of transportation would allow more of our disposable income to be directed into more labour-intensive sectors, including retail and leisure. Greening our urban centres would improve their ambience and reduce air pollution.
- Adopt strict or presumed liability in relation to any collisions
Strict liability assumes that in a collision, the driver of the higher-powered vehicle is presumed to be at fault, as in most of Europe. This applies to cyclist-pedestrian collisions as well, reinforcing the right of way for pedestrians.
- Reduce the number of carparks in urban centres, parking buildings, private carparks and public facilities by 10% every year for at least 5 years and use the space released for landscaping, cycle parking, public amenity…
Peak traffic flows are largely driven by the availability of relatively cheap parking in urban centres.
- Redesign urban bus services so they can be utilised as a network with efficient interchanges with all other modes and routes
Most bus services operate on commuter routes, and few passengers know much about the rest of the network. Interchange design, especially with other modes, and maps are often poor.
- Complement mainline services with community buses
Community buses can be provided at reasonable cost, and quickly become part of the social fabric in suburbs and towns. On-demand services need have no fixed routes, and act as feeders to mainline bus routes and other modes.
- Provide on-demand bus services in rural areas
Scheduled services in areas of low population require heavy subsidies. Community buses are increasingly popular, providing lifeline services in remote areas. They should be strongly supported.
- Install high-quality broadband and promote homeworking
In Cymru there are still areas where broadband is non-existent or much too slow. Computer-based homeworking has major potential for boosting the productivity of and employment in local areas – if there is fast broadband.
- Stop urban centre growth in favour of creating decentralised activity centres to minimise commuting
The dominance of our mini-London-on-the-Taff, a purported growth pole, has done little for the rest of Cymru, its commuters sucking the life out of what have become dormitory suburbs in the Valleys. Commuting is a waste of time, energy, money and effort, and is unsustainable.
- Build low-cost Light Rail Vehicles (LRVs) and deploy them rapidly on-street in Cardiff, Newport and other large centres and their environs with efficient interchanges with other modes
Trams (of appropriate scale) would revolutionise our urban centres, as they have in 600 cities worldwide. Apart from cycling, trams are the most efficient form of urban transportation.
Building our own trams (initially using refurbished rolling stock), rolling Light Rail (LR) track at Port Talbot and manufacturing simplified overhead would provide an effective solution for Cardiff, Newport, Swansea and other towns currently over-run by cars. LR routes should also be designed to carry freight.
The skillsets made redundant by Ford in Bridgend are transferable and should not be squandered.
- Connect major towns by LR
Coupled to a decentralisation and regionalisation strategy, on-road trams would provide sustainable links where heavy rail would be uneconomic. Restoring the coastal rail route is unlikely to prove economic for heavy rail, but it may well be viable for light rail.
- Regionalise administration into 4 – 5 regions and develop appropriate infrastructure in regional centres and their hinterlands
Local government is too small to be effective and fails to attract the necessary talent. Regions would plug the infrastructure gaps and encourage local businesses to increase their range.
- Review plans for the Metro to electrify the core Valley lines utilising simplified overhead and low-profile track, and phase-out dual and triple mode powercars
Poor decisions have been made about rail electrification in Cymru, building in additional cost (and long-term diesel pollution). Lower cost options appear to have been ignored.
- Abandon tram-train
Tram-train is either wholly inappropriate for urban streets (out of scale) or uneconomic to operate. Tram-train is a hybrid that fails to meet the requirements of the different transportation tasks performed by mainline rail and by light rail and is sub-optimal in each role.
- Electrify the mainline to Fishguard after rationalising the route west of Cardiff
It is a false economy not to electrify the whole of the line, though consideration could be given to interchanging with light rail west and north of Carmarthen.
- Provide railheads for freight on integrated heavy and light rail services
Moving freight off-roads to enhance safety and efficiency would be greatly facilitated by small-scale but widely-deployed cross-dock infrastructure.
- Provide passing bays and lanes on major road routes within Cymru
Slow traffic unnecessarily reduces inter-city travel times in Cymru. By providing additional space for tractors, caravans and trucks to pull in, faster traffic could proceed with less hindrance.
- Provide more viewing points adjacent to major roads
Our country is beautiful, but the lack of safe stopping places on highways restricts our views of it. Many such places should provide facilities for travellers – clean toilets and waste receptacles, for example.
Much better interpretations of place, and of walking and cycle tracks would also improve our access to the countryside.
- Plan for straightened north – south (heavy) rail and road routes wholly within Cymru
The pattern of our highways and rail routes reflects the historic dominance of England and its extraction of Cymrian resources (these days, highly-skilled people commuting to Bristol, London and Manchester). These routes are largely east-west.
The Cymru of our future will require good communications between north and south by both road and (heavy) rail. We should develop indigenous tunnelling expertise to assist in straightening and shortening these and other routes.
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to the divalent chromate ion oxidation of an allylic halide or a halomethylated aromatic to an allylic or aromatic aldehyde.
2. Description of the Prior Art
Kulka, Am. Perfumer Aromat., 69, 31 (February, 1957) and 70, 31 (November, 1957), teaches the oxidation of p-methylbenzyl chloride with aqueous sodium dichromate in the presence of sodium bicarbonate. After 20 hours of reflux, p-tolualdehyde in 90 percent yield was recovered by steam distillation. The use of a catalyst is not reported.
Cardillo et al., J.C.S. Chem. Comm., 190 (1976) and Tetrahedron Let., 44, 3985-6 (1976), teach the reaction between an alkyl, allylic or benzylic halide and potassium dichromate (K₂CrO₄) in hexamethylphosphoramide (HMPA) in the presence of molar quantities of a crown ether to obtain moderate to good yields of aldehydes and ketones. Due to the general insolubility of K₂CrO₄ in HMPA, Cardillo et al. supported K₂CrO₄ on an insoluble polymer matrix.
SUMMARY OF THE INVENTION
According to this invention, a process for preparing an aldehyde of the formula A(CHO)ₙ (I), wherein A is an allylic, aryl or an inertly-substituted allylic or aryl radical and n is an integer of at least 1, the process comprising contacting in a basic, liquid, biphasic mixture and at reactive conditions a compound of the formula A(CH₂X)ₙ (II), wherein X is chloride or bromide, preferably chloride, and A and n are as previously defined, with divalent chromate ion, is improved by contacting II with the chromate ion in the presence of a catalytic amount of a quaternary ammonium and/or phosphonium salt. The improved process is characterized by short reaction times, high yields and good selectivity.
DETAILED DESCRIPTION OF THE INVENTION
The compounds of II are allylic halides, halomethylated aromatics and inertly-substituted allylic halides and halomethylated aromatics. As used herein, "halide", "halo-" and like terms refer to chloride and bromide but not fluoride, iodide or astatine, "allylic" refers to compounds containing the moiety C=C-C (III), and "inertly-substituted" means that the allylic or aromatic compound or radical can bear 1 or more substituents that are essentially nonreactive toward the process reagents or products at the process conditions. Typical inert substituents include: alkyl and alkoxy radicals of 1 to about 18 carbon atoms, halogen (other than the halogen in the halomethyl (-CH₂X) moiety), carboxyl, nitro, aryl, aryloxy, hydroxyl, ethylenic unsaturation, etc. Where A in II is a phenyl radical, these inert substituents can be ortho, meta and/or para to the halomethyl moiety.
A in II can be any suitable allylic, aryl or inertly-substituted allylic or aryl radical. Representative radicals include: allylic and inertly-substituted allylic radicals of the formula R'₂C=C(R)- (IV), wherein R and each R' are individually hydrogen, an aliphatic, alicyclic, aryl or an inertly-substituted aliphatic, alicyclic or aryl radical and the open valence is the bond which links the radical to the halomethyl moiety; aryl radicals, such as phenyl, naphthyl, anthracyl, phenanthracyl, etc.; and inertly-substituted aryl radicals, such as phenethyl, hydroxyphenyl, hydroxynaphthyl, phenoxyphenyl, biphenyl, triphenyl, methoxybiphenyl, etc. Preferably, A in II is either an allylic or inertly-substituted allylic radical where, in IV, R is hydrogen or methyl and each R' is individually hydrogen or an alkyl radical of 1 to about 8 carbon atoms, or phenyl or an inertly-substituted phenyl radical. Most preferably, A in II is phenyl and II is benzyl halide, especially benzyl chloride.
n Is the number of halomethyl moieties attached to A. The size of n is dependent upon A; the larger A is, generally, the larger n can be. n Is at least and preferably 1.
Any source of divalent chromate ion can be used in the practice of this invention. "Divalent chromate ion" here means an ion of the formula CrO₄²⁻, wherein the chromium atom has a valence of plus 6. Representative of the many known sources of chromate ion are the alkali metal chromates, the alkaline earth metal chromates, silver chromate, lead chromate, etc. The alkali metal chromates and magnesium chromate are preferred to the other sources of chromate ion because of their greater solubility in water. For reasons of convenience and general availability, sodium dichromate is the preferred source of chromate ion.
The catalysts here used are quaternary ammonium and phosphonium salts (here termed collectively "onium" salts) and are known in the art as phase transfer catalysts. The salts are described by Starks and Napier in U.S. Pat. No. 3,992,432 and British Patent 1,227,144 and by Starks in the J. Amer. Chem. Soc., 93, 195 (1971). Suitable onium salts have a minimum solubility of at least about 1 weight percent in both the organic phase and the aqueous phase at 25° C. The ammonium salts are preferred over the phosphonium salts and benzyltrimethyl-, benzyltriethyl- and tetra-n-butyl ammonium chlorides and bromides are most preferred.
As a further illustration of the onium salts here used, suitable onium salts are represented by the formula [R"R'"R^IVR^VQ]⁺ An⁻ (VI), wherein Q⁺ is a quaternized atom of nitrogen or phosphorus, R"-R^V are hydrocarbyl groups, e.g., alkyl, aryl, aralkyl, cycloalkyl, etc., and R" can join with R'", or R'" with R^IV, etc. to form a 5- or 6-membered heterocyclic compound having at least one quaternized nitrogen or phosphorus atom in the ring and also containing one nonadjacent atom of oxygen or sulfur within the ring. Typically, R"-R^V are hydrocarbyl groups of 1 to about 16 carbon atoms each, with a combined minimum total of about 10 carbon atoms. Preferred onium salts have from about 10 to about 30 carbon atoms.
The neutralizing anion portion of the salt, i.e., An⁻ in VI above, may be varied to convenience. Chloride and bromide are the preferred anions, but other representative anions include fluoride, iodide, tosylate, acetate, bisulfate, etc. The following compounds serve as a further illustration: tetraalkyl ammonium salts, such as tetra-n-butyl-, tri-n-butylmethyl-, tetrahexyl-, trioctylmethyl-, hexadecyltriethyl- and tridecylmethyl ammonium chlorides, bromides, iodides, bisulfates, tosylates, etc.; aralkyl ammonium salts, such as tetrabenzyl-, benzyltrimethyl-, benzyltriethyl-, benzyltributyl- and phenethyltrimethyl ammonium chlorides, bromides, iodides, etc.; aryl ammonium salts, such as triphenylmethylammonium fluoride, chloride or bromide, N,N,N-trimethylanilinium chloride, N,N,N-triethylanilinium bromide, N,N-diethylanilinium bisulfate, trimethylnaphthylammonium chloride, p-methylphenyltrimethylammonium chloride or tosylate, etc.; 5- and 6-membered heterocyclic compounds containing at least 1 quaternary nitrogen atom in the ring, such as N,N-dibutylmorpholinium chloride, N-decylthiazolium chloride, etc.; and the corresponding phosphonium salts.
Stoichiometric amounts of divalent chromate ion and halomethyl moiety of II are used in the practice of this invention. Although an excess of either component can be used, such a practice is generally disfavored. Excess divalent chromate ion can cause some loss of product (aldehyde) by further oxidizing the product to a carboxylic acid, e.g., benzaldehyde to benzoic acid. Excess halomethyl moieties (equivalents) result in incomplete conversion of the halomethyl moiety to the corresponding aldehyde.
A catalytic amount of the onium salt is required in the practice of this invention. The concentration will vary with the reagents employed; however, best results are generally achieved where the onium salt concentration is from about 1 mole percent to about 30 mole percent based upon the allylic halide or halomethylated aromatic (or halomethyl equivalents). Onium salt concentrations of about 2 mole percent to about 10 mole percent are preferred.
The reaction medium of this invention is a biphasic mixture of an aqueous phase and an organic phase. The aqueous phase contains the source of divalent chromate ion, the onium salt and typically an alkaline buffer. The organic phase contains the allylic halide or halomethylated aromatic. The reaction medium is typically agitated throughout the course of the oxidation.
Temperature and pressure are not critical to this invention as long as the biphasic mixture remains a liquid. A temperature of about 20° C. to about 100° C. is typically employed, with a temperature of about 40° C. to about 70° C. preferentially employed. The oxidation can be conducted at reduced, atmospheric or superatmospheric pressure. Autogenous, usually atmospheric, pressure is preferred.
Although this process is usually conducted neat, it can be conducted in the presence of an inert, essentially water-immiscible organic solvent. Typical solvents include benzene, chlorobenzene, o- dichlorobenzene, hexane, methylene chloride, chloroform, carbon tetrachloride, and the like. Sufficient solvent to dissolve the allylic halide or halomethylated aromatic is used and preferably the amount of solvent used is equal in volume to the amount of aqueous medium employed.
The reaction medium of this invention is basic, i.e., has a pH value in excess of 7. Preferably, the reaction medium has a pH value between about 7 and 10 and this value can be obtained and maintained by the use of any suitable alkaline buffer. Representative buffers include sodium carbonate, potassium carbonate, etc. Sufficient alkaline buffer is used to maintain a pH value in excess of 7 throughout the process.
The following example is an illustrative embodiment of this invention. Unless otherwise indicated, all parts and percentages are by weight.
SPECIFIC EMBODIMENTS
EXAMPLE
A 250 ml 3-neck, round-bottom flask fitted with a magnetic stirrer and reflux condenser was charged with benzyl chloride (37 g, 0.3 mole) and deionized water (150 ml). The two immiscible liquid layers were stirred rapidly and charged with sodium carbonate (6 g, 0.057 mole) and Adogen® 464 (10 g, 0.02 mole, 7 mole percent), a quaternary ammonium salt having three C₈-C₁₀ alkyl groups and one methyl group manufactured by Archer Daniels Midland Co. Sodium dichromate of the formula Na₂Cr₂O₇·2H₂O (34.1 g, 0.14 mole) was then added slowly, and after complete addition of the sodium dichromate, the reaction mixture was heated to reflux. After 2 hours of reaction, gas chromatographic analysis indicated that better than 90 percent of the benzyl chloride had been converted to benzaldehyde.
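The proportions reported above can be cross-checked with a few lines of arithmetic. The sketch below is not part of the original disclosure; it uses only the mole quantities stated in this example and assumes the usual dichromate-to-chromate conversion in which each dichromate ion furnishes two divalent chromate ions.

    # Cross-check of the proportions stated in the Example (mole values as reported).
    benzyl_chloride_mol = 0.30   # benzyl chloride, i.e., halomethyl equivalents
    dichromate_mol = 0.14        # Na2Cr2O7.2H2O charged
    catalyst_mol = 0.02          # Adogen 464 quaternary ammonium salt

    # Catalyst loading relative to halomethyl equivalents
    # (the Example reports about 7 mole percent).
    loading = 100 * catalyst_mol / benzyl_chloride_mol
    print(f"catalyst loading: {loading:.1f} mole percent")  # -> 6.7

    # Assuming each dichromate ion yields two divalent chromate ions, the
    # chromate : halomethyl ratio is close to the stoichiometric 1 : 1.
    chromate_mol = 2 * dichromate_mol
    print(f"chromate/halomethyl ratio: {chromate_mol / benzyl_chloride_mol:.2f}")  # -> 0.93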
CONTROL
The above example was repeated except that no catalyst was employed, i.e., Adogen® 464 was not used. After 20 hours of reflux, less than 90 percent of the benzyl chloride had been converted to benzaldehyde.
In both the example and control, the sodium dichromate was converted to the divalent chromate ion according to the following reaction: Cr₂O₇²⁻ + CO₃²⁻ → 2 CrO₄²⁻ + CO₂. The divalent chromate ion is converted by the oxidation of the halomethyl moiety to a chromium ion of plus 3 valence which can, if desired, be reoxidized to a chromium ion of plus 6 valence. This provides recyclability and thus good ecology and good economics. Carbon dioxide is released throughout the process.
A comparison of the results between the example and the control demonstrates the improved characteristics of this invention. Not only was the process of the example completed in a shorter time, but it also generated a better yield of benzaldehyde.
Although this invention has been described in detail by the preceding example, such detail is for the purpose of illustration only and is not to be construed as a limitation upon the invention. Many variations can be had upon the preceding example without departing from the spirit and scope of the appended claims. | |
279 So.2d 436 (1973)
Clinton GEOHAGAN, Administrator of the Estate of Barbara Geohagan Evans, Deceased
v.
GENERAL MOTORS CORP. and McDaniel Motor Co.
SC 101
Supreme Court of Alabama.
May 24, 1973.
Rehearing Denied July 5, 1973.
*437 Tipler, Fuller & Barnes, Andalusia, for appellant.
Powell & Sikes, Andalusia, for appellee, General Motors Corp.
Rushton, Stakely, Johnston & Garrett, Montgomery, for appellee, McDaniel Motor Co.
HARWOOD, Justice.
This is a products liability case involving a claim for wrongful death instituted by the administrator of the estate of plaintiff's decedent against General Motors Corp. (General Motors), the manufacturer of the vehicle involved, and McDaniel Motor Co. (McDaniel), the retail vendor. The complaint, as finally amended, presented two counts; viz, Count One-D and Count Two-C, to which the defendants ultimately plead the general issue in short by consent. In substance Count One-D charges that the combined negligence of the defendants proximately resulted in the death of the plaintiff's intestate, Barbara Geohagan Evans. Count Two-C alleges that in connection with the sale of the vehicle the defendants impliedly warranted its suitability and fitness, the breach of which resulted in the fatal injury to plaintiff's intestate.
The trial court, at the conclusion of the testimony, granted each defendant's request for the affirmative charge as to Count Two-C and submitted the case to the jury on Count One-D only. This appeal is taken from the judgment rendered by the Circuit Court in accordance with the jury's verdict in favor of the defendants-appellees, and from the Circuit Court's action in granting defendants' request for the affirmative charge as to Count Two-C.
During her lifetime, Barbara Geohagan Evans was married to Rodney Evans. Mr. Evans was killed in military service subsequent to the time of his wife's fatal crash and prior to the date this suit was commenced by George Geohagan, Barbara's father, as administrator of her estate. On September 30, 1967, Rodney Evans purchased a new 1968 Chevrolet Camero from McDaniel. The car was undisputedly a General Motors product. In June of 1968, Rodney Evans and Barbara Geohagan were married and during their mutual lifetime the automobile involved was used by each of them as the family car. Barbara was driving the car alone on September 5, 1968, along Highway 52 between Opp and Samson, Alabama, at which time the car left the road at a high rate of speed, ran *438 into a culvert and crashed into a utility pole. Barbara was killed in the crash.
The factual theory of the plaintiff's case was that the crash resulted from defective motor mounts, the failure of which caused the engine to shift within the engine compartment locking the accelerator at full throttle. Since this court's opinion rests solely on the question of law presented by the pleadings, no further recitation of the facts is deemed necessary. Suffice it here to observe that the evidence in support of the opposing contentions of plaintiff and of defendants relating to the cause of the crash was in conflict.
The matter urged by the appellant as error pertains to those assignments of error to the effect that the lower court erred in giving the affirmative charge for the defendants (appellees) as to the breach of implied warranty count, i. e., Count Two-C.
The basic issue here considered is whether an action for breach of implied warranty will legally sustain a claim for wrongful death. Realizing that this is a cause of first impression in Alabama, we take special note of the fact that we have been favored with excellent briefs by counsel for each of the parties.
In Alabama, as generally elsewhere, punitive damages are not recoverable for breach of contract. Wood v. Citronelle-Mobile Gathering System, 5th Cir., 409 F.2d 367.
As stated by de Graffenried, J., in Millsap v. Woolf, 1 Ala.App. 599, 56 So. 22:
"A warranty, in the sale of a chattel, is a collateral undertaking on the part of the seller as to the quality of or title to the subject of the sale. It may be express or implied. * * *
"As a warranty, express or implied, is a contract, the good faith of the seller in making it is not material. In actions for breach of warranty, the only questions are: Was there a contract of warranty? If so, has there been a breach? And if so, the amount of damages suffered by the purchaser thereby. Scott v. Holland, 132 Ala. 389, 31 So. 514.
"There is a clear distinction between an action for a breach of warranty and one for deceit in the sale of a chattel; in the first case the action is ex contractu and in the second, ex delicto. 30 Am. & Eng.Ency.Law, p. 129; Scott v. Holland, supra. * * *"
Regardless of the view in the earlier development of the action of breach of warranty that it was based on tort, certainly as the action developed it was regarded as contractual, and such was the view of our cases at the time of the passage of the Uniform Commercial Code by the Alabama Legislature in 1965, which code was to be effective at midnight on 31 December 1966.
Damages for breach of a contract (or breach of warranty) are awarded to put a party in the same position he would have occupied if the contract had not been violated. Coastal States Life Ins. Co. v. Gass, 278 Ala. 656, 180 So.2d 255. On the other hand, damages under our wrongful death statutes are punitive in nature, and are not compensatory. Crenshaw v. Alabama Freight, Inc., 287 Ala. 372, 252 So.2d 33. Such view is compatible only with the concept that any action permissible under our wrongful death acts must in nature be in tort and not in contract.
The Alabama Uniform Commercial Code is contained in Act No. 549, 1965 Acts of Alabama. This Act appears as Secs. 1-101 through 9-505, of Title 7A, Michies Recompiled Code of Alabama 1958 (1966 Added Volume). For convenience we will refer to the provisions of the Act as they appear in the Recompiled Code.
*439 Secs. 1-102(1) and 1-102(2) (a) and (b) are as follows:
"(1) This title [Uniform Commercial Code] shall be liberally construed and applied to promote its underlying purposes and policies.
"(2) Underlying policies and purposes of this title are
"(a) to simplify, clarify and modernize the law governing commercial transactions ;
"(b) to permit the continued expansion of commercial practices through custom, usage and agreement of the parties." (Emphasis ours.)
Thus it is crystal clear that the purpose of the legislature in passing our version of the Uniform Commercial Code was to regulate commercial transactions. By no stretch of the imagination can it be deemed that actions for wrongful death are commercial transactions.
Our decisions since the enactment of our wrongful death acts have made it clear that such acts are intended to protect human life, to prevent homicide, and to impose civil punishment on takers of human life. The damages awarded are punitive in nature. The personal representative in prosecuting a wrongful death action acts as an agent of legislative appointment for declaring the public policy evidenced by the wrongful death acts. An action under our wrongful death acts comes into being only on death from some wrongful act. See innumerable citations and annotations under Sec. 123, Title 7, Code of Alabama 1940.
Thus a wrongful death action differs entirely from an action for a breach of warranty, express or implied, in a contract, for as respects liability for breach of a warranty the good faith, or lack of faith, in promisor in making the contract of warranty is immaterial. Attalla Oil & Fertilizer Co. v. Goddard, 207 Ala. 287, 92 So. 794.
The above principles were well settled by the decisions of this court (and the courts of many of our sister states) at the time of the passage of our Uniform Commercial Code. Our wrongful death acts, and the decisions of this court thereunder have been the law of this state for decades.
Where a statute enumerates certain things on which it is to operate, the statute is to be construed as excluding from its effect all things not expressly mentioned. Champion v. McLean, 266 Ala. 103, 95 So.2d 82.
We do not see how the legislature could have more clearly expressed the operative scope of the Alabama Uniform Commercial Code than it did in the Section 1-102(2), Subsections (a) and (b) of Title 7A, above mentioned, i. e., that the underlying purpose and policy of the act was "to simplify, clarify, and modernize the law governing commercial transactions," and "to permit the continued expansion of commercial practices through custom, usage and agreement of the parties." (Emphasis ours.)
So far as can be determined from a reading of our Uniform Commercial Code, there is not one word, sentence, paragraph, clause, or section which in anywise even suggests that for the breach of an express or implied warranty in a contract any person is given a right to maintain an action for a wrongful death. On the other hand, the precision with which the legislature has defined the purpose and policy of the act, limiting the same to commercial transactions, clearly demonstrates that it was not the intent of the legislature in enacting the Uniform Commercial Code to create a wrongful death action in case of a breach of warranty of the contract involved.
This precise point was before the U.S. District Court for the Northern District of Alabama in Knight, Admr. v. Collins, et al., 327 F.Supp. 97 (1971). In an opinion by Pointer, J., it was set forth:
"To the extent that plaintiff seeks to bring a cause for breach of contract *440 within the Homicide ActTitle 7, § 119, or its adult companion, Title 7, § 123 the Alabama Supreme Court has clearly foreclosed the way. Thaggard v. Vafes, 218 Ala. 609, 119 So. 647 (1928). Moreover, this same decision assumes the rule that an action ex contractu for damage to the person causing death would not otherwise survive (insofar as the right of action held by the deceased). There may be good reason to question the correctness of the reasoning underlying such a rulesee, e.g., Smedley, `Wrongful DeathBasis of Common Law Rules,' 13 Vanderbilt Law Review 605 (1960)but our role is limited here to discerning the rule adopted by the Alabama Supreme Court. In a similar situation, Judge Lynne of this district court ruled that a count for breach of implied warranty causing death could not be maintained under Alabama law. (Unpublished ruling, Wheeler v. General Motors Corp., U.S.Dist.Court, N.D.Ala., # 70-315). The Fifth Circuit has reached the same conclusion as to a comparable Florida statute. Latimer v. Sears, Roebuck & Co., 285 F.2d 152 (5th Cir. 1960)."
It is to be noted that in the special concurrence of Faulkner, J., and the dissenting opinion of Jones, J., in Battles v. Pierson Chevrolet, 290 Ala. 98, 274 So.2d 281, it is stated that Georgia does not allow recovery for death in a breach of warranty action. Apparently, such is also the rule in Florida. To like effect see Post v. Manitowoc Engineering Co., 88 N.J.Super 199, 211 A.2d 386; Foran v. Carangelo, 153 Conn. 356, 216 A.2d 638; Bloss v. Dr. C. R. Woodan Sanitarium Co., 319 Mo. 1061, 5 S.W.2d 367; Wadleigh v. Howson, 88 N.H. 365, 189 A. 865. See also annotation in 86 A.L.R.2d 316.
We hold that no contractual cause of action for wrongful death is created by our Uniform Commercial Code arising from a breach of warranty, and that actions for wrongful death can arise in this state and be processed only under our wrongful death acts.
The lower court therefore did not err in withdrawing from the jury's consideration Count Two-C, and the judgment here appealed from is due to be affirmed.
Affirmed.
MERRILL, COLEMAN, BLOODWORTH and McCALL, JJ., concur.
MADDOX, J., concurs specially.
HEFLIN, C. J., and FAULKNER and JONES, JJ., dissent.
MADDOX, Justice (concurring specially).
I agree with the majority that a wrongful death action cannot be maintained for breach of an implied warranty, but since the dissenting opinions discuss the effect of the Uniform Commercial Code, Title 7A, Section 2-318 and its controlling effect on this litigation, I desire to express my separate views on this question, which is one of first impression insofar as I can determine. I can appreciate some of the views the dissenters have about what effect the adoption of the Uniform Commercial Code, with amendments, had on products liability law in Alabama. I think its adoption had a tremendous impact. For one thing, its adoption loosened the privity requirement, vertically and horizontally, in product liability cases. One legal writer thinks that the adoption of the "non-uniform" Commercial Code transformed "the Alabama personal injury warranty action, from the contractual beast it has historically seemed to be, into a new animal enjoying a predominantly tort pedigree."[1] I am unable to agree such a transformation occurred and that the Legislature changed the nature of the warranty action from one sounding in contract to one sounding in *441 tort. Admittedly, the Legislature stripped the "warranty" action of much of its former contract regalia. In other words, while the "warranty" action was stripped of much of its contract cloak, it was not stripped of its name.
I am also aware that it is generally accepted that the action for breach of warranty originally was considered to be tortious in nature. Some of the legal scholars think the name of the product liability action has always been tort and that it has merely paraded around in "warranty" or contract clothing for some 40 or 50 years. Prosser classified "warranty" as "a freak hybrid born of the illicit intercourse of tort and contract, (which) had always been recognized as bearing to some extent the aspects of a tort." Prosser, The Fall of the Citadel, 50 Minn.L.Rev. 791, 800 (1966).
Dean Prosser states that the trouble always lay with the use of the word "warranty" and he may be right. He maintains the "warranty" theory has been from the outset only a rather transparent device to accomplish the desired result of strict liability. He pointed out in an article entitled "The Fall of the Citadel," 50 Minn.L. Rev. 791, 802, as follows:
"Although the writer was perhaps the first to voice it, the suggestion was sufficiently obvious that all of the trouble lay with the one word `warranty,' which had been from the outset only a rather transparent device to accomplish the desired result of strict liability. No one disputed that the `warranty' was a matter of strict liability. No one denied that where there was no privity, liability to the consumer could not sound in contract and must be a matter of tort. Why not, then, talk of the strict liability in tort, a thing familiar enough in the law of animals, abnormally dangerous activities, nuisance, workmen's compensation, libel, misrepresentation and respondeat superior, and discard the word `warranty' with all its contract implications? ..."
Unquestionably, by adoption of the Uniform Commercial Code, the Legislature intended to provide, and did provide, consumer protection which was unavailable before in instances where products were not reasonably safe. However, in granting this consumer protection the Legislature used the word "warranty," which had acquired a special meaning in the field of products liability. While the Legislature knocked out the requirement of privity in product liability cases, horizontally and vertically, in my opinion, I do not think it changed the nature of the action for breach of warranty from ex contractu to ex delicto. In arriving at this belief, I recognize that there are decisions which hold that in products liability cases, regardless of the form of the action, that the tort aspects of warranty call for the application of a tort rather than a contract rule in allowing recovery for wrongful death. But many cases have held to the contrary, on the ground that the gist of warranty has become contract, and it is not included within the wrongful death statutes. W. Prosser, Law of Torts 635, § 95 (4th ed. 1971). See also, Annotation: Action ex contractu for damages caused by death, 86 A.L.R.2d 316, 317 (1962), where it is stated:
"While there is some authority to the contrary, it appears to be generally recognized that in absence of statute an action ex contractu is not the appropriate remedy to recover damages resulting from the death of another."
I believe Alabama has consistently recognized an action for breach of warranty to be contractual in nature. Consequently, I cannot interpret Section 2-318 of the Commercial Code to state that the breach of an express or implied warranty is an action ex delicto and therefore a "wrongful act" under Alabama's Wrongful Death Statute.
This court has held that the breach of a contract is not a wrongful or negligent act *442 under our Wrongful Death Statute. Thaggard v. Vafes, 218 Ala. 609, 119 So. 647 (1928). See also Knight v. Collins, 327 F. Supp. 97 (N.D.Ala.1971); cf. Latimer v. Sears Roebuck and Co., 285 F.2d 152 (5th Cir. 1960). Contra, Chrobak v. Textron, Inc., Civil No. 1012-S (M.D.Ala., filed Sept. 2, 1969) (unpublished ruling) (Interpreting Tit. 7A, § 2-318).
As I understand the position taken by the dissenters, they feel that Alabama has, by the passage of the Uniform Commercial Code, with amendments, established a "public duty," the breach of which is a "wrongful act" under our Wrongful Death Act. I can agree basically with the position taken by the dissenters in this respect, but I must point out that the plaintiff below did not allege this "public duty." Plaintiff did not allege its breach by the defendants. On the contrary, the plaintiff alleged that the defendants "warranted expressly or by implication that said automobile was fit for normal and ordinary use and operation as intended and was of merchantable quality" and "that as a proximate result and consequence of said breach of warranty by the defendants, as aforesaid, plaintiff's intestate suffered such severe injuries that she died ..." I believe that most lawyers and judges would classify this pleading as an ex contractu action. I do.
Had the plaintiff alleged a cause of action under the tort doctrine of strict liability as spelled out in the Restatement (Second) of Torts, § 402A (1965), I believe the giving of the affirmative charge would have been improper. § 402A of the Restatement (Second) of Torts provides:
"(1) One who sells any product in a defective condition unreasonably dangerous to the user or consumer or to his property is subject to liability for physical harm thereby caused to the ultimate user or consumer, or to his property, if
"(a) the seller is engaged in the business of selling such a product, and
"(b) it is expected to and does reach the user or consumer without substantial change in the condition in which it is sold.
"(2) The rule stated in Subsection (1) applies although
"(a) the seller has exercised all possible care in the preparation and sale of his product, and
"(b) the user or consumer has not bought the product from or entered into any contractual relation with the seller."
In other words, I believe that this Court, in view of the policy expressed in the Uniform Commercial Code to protect users and consumers and persons affected by products, and in view of the recent trends in the development of the law in product liability cases, would adopt the doctrine of strict liability set out in the Restatement. I believe that this Court might hold that those protected against harm included not only users or consumers but any person who may be affected by the goods and who is personally injured. The justification for allowing such an ex delicto action for strict liability has been said to be that the seller, by marketing his product for use and consumption, has undertaken and assumed a special responsibility toward any member of the consuming public who may be injured by it. Properly presented, I believe this Court might approve the strict liability doctrine. But the warranty count here under consideration makes no attempt to claim under the theory of strict liability.
I believe that the comment in the Restatement (Second) of Torts on § 402A, above quoted, sustains my view that the rule of strict liability which I suggest should be available in personal injury actions, whether the injury is fatal or nonfatal. But I also believe that such actions are not governed by the provisions of the Uniform Sales Act, or those of the Uniform *443 Commercial Code, as to warranties. Comment "m" states, in part:
"A number of courts, seeking a theoretical basis for the liability, have resorted to a "warranty," either running with the goods sold, by analogy to covenants running with the land, or made directly to the consumer without contract. In some instances this theory has proved to be an unfortunate one. Although warranty was in its origin a matter of tort liability, and it is generally agreed that a tort action will still lie for its breach, it has become so identified in practice with a contract of sale between the plaintiff and the defendant that the warranty theory has become something of an obstacle to the recognition of the strict liability where there is no such contract. There is nothing in this Section which would prevent any court from treating the rule stated as a matter of `warranty' to the user or consumer. But if this is done, it should be recognized and understood that the `warranty' is a very different kind of warranty from those usually found in the sale of goods, and that it is not subject to the various contract rules which have grown up to surround such sales.
"The rule stated in this Section does not require any reliance on the part of the consumer upon the reputation, skill, or judgment of the seller who is to be held liable, nor any representation or undertaking on the part of that seller. The seller is strictly liable although, as is frequently the case, the consumer does not even know who he is at the time of the consumption. The rule stated in this Section is not governed by the provisions of the Uniform Sales Act, or those of the Uniform Commercial Code, as to warranties; and it is not affected by limitations on the scope and content of warranties, or by limitation to `buyer' and `seller' in those statutes. Nor is the consumer required to give notice to the seller of his injury within a reasonable time after it occurs, as is provided by the Uniform Act. The consumer's cause of action does not depend upon the validity of his contract with the person from whom he acquires the product, and it is not affected by any disclaimer or other agreement, whether it be between the seller and his immediate buyer, or attached to and accompanying the product into the consumer's hands. In short, `warranty' must be given a new and different meaning if it is used in connection with this Section. It is much simpler to regard the liability here stated as merely one of strict liability in tort."
Consequently, under present law, I think a party who suffers a non-fatal injury has two routes he could take. He can allege that there was an express or implied warranty, that it was breached and as a proximate result of the breach he suffered damages. In such cases, the Uniform Commercial Code does not require him to show privity. I personally think that in non-fatal injury cases, he could sue under the theory of strict liability which I have herein set forth. In death cases, I think an action for breach of warranty would be inappropriate for the reasons I have set forth, that is, because this Court and a majority of other courts have so held such remedy is inappropriate. In death cases, the appropriate remedy might be a suit under the so-called manufacturer's liability doctrine or under the doctrine of strict liability set forth in the Restatement (Second) of Torts.
The plaintiff below had a negligence count against both defendants which went to the jury. The plaintiff presented much evidence that the defendants manufactured, sold, or serviced a product which was not reasonably safe and that plaintiff's intestate was killed as a proximate result of the defective product, but the jury returned a verdict in favor of the defendant on this negligence count. Plaintiff assumed the higher burden of proving that "his intestate suffered said injuries and died as aforesaid as a proximate consequence of the combined negligence of said defendants *444 in that the said defendant, General Motors Corporation, a corporation, negligently designed, engineered, manufactured, assembled or sold said automobile for use as a transportation vehicle in a dangerous condition... and the defendant, McDaniel Motor Company, a partnership, negligently sold and serviced said defective automobile and its components ..." As is stated in Comment "a" under Restatement (Second) of Torts, § 402A:
". . . The rule stated here is not exclusive, and does not preclude liability based upon the alternative ground of negligence of the seller, where such negligence can be proved." (Emphasis added.)
Having selected the theories upon which he would proceed to fasten liability on the defendants, we cannot pass on what might have been. As to the warranty count, I think he selected an inappropriate remedy. As to his negligence count, on which the jury found against him, I think he selected the alternative which required more proof than had he elected to proceed under the theory of strict liability, but that was a pleading choice.
Since the majority does not discuss in detail some of the points I have discussed, I took the liberty to express gratuitously my personal views on this matter of first impression in this special concurring opinion. My views are my own and should not be taken to express the thinking of either the majority or minority of this Court.
JONES, Justice (dissenting).
I respectfully dissent.
In ruling, as a matter of law, that the warranty count was legally insufficient, the trial court stated, "Punitive damages are not allowed for breach of contract". Two concepts are inherent in this ruling: First, the legal nature and judicial purpose of "punitive" damagesthe exclusive measure of damages in actions brought under the Alabama Wrongful Death Statute[1]; and secondly, the legal nature and character of an action for breach of implied warranty in Alabama. Before summarizing the respective contentions of the parties, I will first briefly review the history of the cause of action for wrongful death, and then I will analyze the present status of the nature of an action for breach of implied warranty.
The legal conclusion that an individual action for personal injuries abated with the death of an individual is based on the ancient maxim: Action personalis moritur cum personaa personal action dies with the person. Although the judicial etiology of this principle is questionable,[2] it ultimately became ingrained in the fabric of the common law. Lord Ellenborough, in Baker v. Bolton, 1 Camp. 493, 170 Eng. Rep. 1033 (K.B., 1808), laid down the rule that there was no cause of action for wrongful death. Virtually every legal scholar who has considered the rule has criticized it ("whose forte was never common sense", says Dean Prosser).[3]
". . . no reason has ever been assigned for the existence of this rule which would satisfy an enlightened court of modern times." Harris v. Nashville Trust Co., 128 Tenn. 573, 162 S.W. 584, 586 (1914).
Despite its manifest harshness, however, this was the status of the law when Alabama became a state in 1819 and by its Constitution adopted the common law of *445 England.[4] Historians have credited the industrial revolution, its impact in augmenting the mobility of society and the concomitant increase in fatal accidents, with the intensification of the public's rejection of this execrable rule: Death did not create liability; rather death extinguished liability. The conscience of society was ultimately satisfied in England in 1846 by the passage of Lord Campbell's Act. All fifty American states presently have wrongful death statutes. While most of these statutes are modeled after Lord Campbell's act, which raises a new cause of action for the benefit of certain designated beneficiariesmeasuring damages by a broadened concept of pecuniary loss to the family survivors, a minority of the states have death acts in the nature of survival statutesmeasuring recovery by the loss to decedent's estate; and two states, Alabama and Massachusetts, have statutes which, by judicial interpretation, are penal in nature measuring damages in accordance with the degree of defendant's culpability. That the Alabama Wrongful Death Statute created a new cause of action that was unknown to the common law has ofttimes been observed by this Court. Parker v. Fies & Sons, 243 Ala. 348, 10 So.2d 13 (1942) ; Breed v. Atlanta, B. & C. R. Co., 241 Ala. 640, 4 So.2d 315 (1941); Kennedy v. Davis, 171 Ala. 609, 55 So. 104 (1911). Thus, it can be seen that its manifest purpose was to afford redress in cases where no redress obtained at the common law and thereby to ameliorate the harsh rule that denied recovery if the injured party died, while permitting damages if the person lived.
It is only against the background of this historical perspective that the true nature of such punitive damages, as are permitted in death cases in Alabama, can be understood.
". . . the purpose and result of the suit therein provided were not a mere solatium to the wounded feelings of surviving relations, nor compensation for the last [sic] earnings of the slain. We think the statute has a wider aim and scope. It is punitive in its purposes. Punitive of the person or corporation by which the wrong is done, to stimulate diligence and to check violence, in order thereby to give greater security to human life; `to prevent homicides.'" The South and North Ala. RR. Co. v. Sullivan, 59 Ala. 272, at pp. 278, 279 (1877).
I now turn my attention to an analysis of the present nature of an action for breach of implied warranty. The trial court construed Count Two-C as an ex contractu action sounding purely in contract. It is essential to note that in Count Two-C the plaintiff did not attempt to characterize the nature of his action as either contractual or tortious; he merely plead a "breach of implied warranty". The contractual characterization is, as we have taken pains to observe, the trial court's legal conclusion.
Purely from a historical standpoint, it is generally agreed that the action of breach of warranty was originally tortious in nature, having its origin in misrepresentation or deceit.
"In its inception the liability was based on tort, and the action was on the case." Prosser, Law of Torts (4th Ed.) p. 634.
Over the years, because of its close association with the law of sales, the action for breach of warranty gradually acquired a contractual flavor. It was this association that compelled this Court long ago to hold:
"The warranty of the seller of personal property does not, as a rule impose any liability upon him as to third persons who are in no way a party to the contract." Birmingham Chero-Cola Bottling Co. v. Clark, 205 Ala. 678, p. 680, 89 So. 64, p. 65 (1921).
*446 More recently it was noted in the case of Harnischfeger Corp. v. Harris, 280 Ala. 93, 190 So.2d 286 (1966):
"In effect, we are requested to overturn the long-existing rule in this jurisdiction that there must be privity of contract between a seller and a person injured by a defect in the article sold who seeks to recover for such injury in an action against the seller for a breach of warranty. (citing). Although this is a `judge-made' rule which could be changed by another `judge-made' rule, we entertain the view that, because of its long existence as a part of the jurisprudence of this State, it would be more appropriate for its demise to be effectuated by legislative action, if it is to be overturned." Ibid, p. 97, 190 So.2d p. 289.
The Legislature has since spoken and by the express language of the Code of Alabama, Title 7A, § 2-318, a seller's warranty is specifically made applicable to any "natural person if it is reasonable to expect that such person may use, consume or be affected by the goods and who is injured in person by breach of the warranty". I must parenthetically note that it is perfectly clear, both from the language of our Homicide Act and from our cases, that an action for wrongful death is the legal equivalent to an action for injury to the person:
"Although the personal injury has resulted in death, yet the action is for the personal injury ..." Ala. Great Southern Ry. Co. v. Ambrose, 163 Ala. 220, 50 So. 1030 (1909).[5]
The effect of § 2-318 is to affirmatively extend the ambit of privity so as to embrace all natural persons who might reasonably be expected to use, consume, or be affected by the product.[6]
The Alabama version of the Uniform Commercial Code was in effect at all pertinent times referred to in the plaintiff's complaint and governs the relationship that existed between the plaintiff and the defendants insofar as Count Two-C is concerned. We must, therefore, look to this act to determine the present status and legal personality of an action for breach of implied warranty. That is to say, the requirement of privity having been legislatively resolved, the question as recast now becomes: Is the legal nature of an action for breach of implied warranty under the Alabama "non-uniform" Commercial Code such a "wrongful act or omission" as will sustain a claim under the Alabama Wrongful Death Statute?
It is in this posture and against this historical context that the appellant urges three theories for the proposition that this Court should allow recovery in a claim for wrongful death based on an action for breach of implied warranty.
Theory No. One: The common law rule of Baker v. Bolton, supra, prohibiting recovery for wrongful death applied only to actions ex delicto and did not extend to contract actions. Lord Campbell's Act, and its American progeny, necessarily spoke only to this deficiency. Therefore, since causes of action in contract have never been declared extinguished by death *447 (and, consequently, not under the Baker v. Bolton influence), the present action is cognizable at the common law, separate and apart from the purview of the Alabama Wrongful Death Statute.[7] The premise upon which this theory is based is that, should this Court construe the nature of an action for breach of implied warranty as being "purely contractual", it should nevertheless be maintainable under common law principles.
Theory No. Two: A historical analysis of the action for breach of warranty indicates that its origin is in misrepresentation or deceit; and, consequently, such actions are not purely contractual and, therefore, are maintainable under our Wrongful Death Statute.
Theory No. Three: The Alabama Legislature, by its enactment of a "non-uniform" version of the Uniform Commercial Code, has so infused tortious characteristics into the nature of the implied warranty that the breach of such warranties is a "wrongful act" as contemplated by the Alabama Wrongful Death Statute.
As to these contentions, the defendants reply that the present action is governed by the Alabama Wrongful Death Statute, which permits only the recovery of punitive damages, and further that punitive damages are not recoverable in actions ex contractu. The defendants each rely principally on Treadwell Ford, Inc. v. Leek, 272 Ala. 544, 133 So.2d 24 (1961) in support of their position on this issue.
It is not necessary for this Court to consider, nor do the pleadings properly present, the first of the plaintiff's theories. The trial court's ruling was made on the assumption that the provisions of Title 7, § 123, governed. The failure of the record to reflect that this theory was specifically called to the trial court's attention compels me to pretermit any consideration of this theory and I agree that this case is controlled by our Wrongful Death Statute. The trial court should not be reviewed or reversed on a question of law not clearly presented in the proceedings below. Head v. Triangle Const. Co., 274 Ala. 519, 157 So. 2d 389 (1963).
Likewise, plaintiff's "Theory No. Two" is without merit. Although an academic reconsideration of the historical origin of an action for breach of warranty might have predicted a different result, the Alabama Supreme Court, prior to the enactment of Title 7A, did hold that an action for breach of implied warranty was essentially contractual. Birmingham Chero-Cola Bottling Co., v. Clark, supra.
Plaintiff's "Theory No. Three" only is deemed applicable, and to this contention the defendants' reply must be considered for it correctly expresses the basis of the lower court's ruling. In referring to Title 7A, I have described it as a "non-uniform" version of the Uniform Commercial Code. This description is as significant as it is appropriate. Specifically, with reference to "injuries to the person in the case of consumer goods", the Alabama legislature incorporated five separate amendments to the Uniform Commercial Code which must be considered in determining the legal nature of warranties implied by operation of the provisions of Title 7A:
(A). Subsection (5) was added to Section 2-316 for the purpose of prohibiting the seller from excluding or modifying his liability for damages for injuries to the person in the case of consumer goods.
(B). Section 2-318 of the UCC was amended so as to exclude the phrase "who is in the family or household of his buyer or who is a guest in his home", which phrase appeared in the Uniform Commercial Code as the limiting description of the term "natural person".
(C). Section 2-714 was amended so as to add the following quoted language at
*448 the end of Subsection (2) of the uniform version:
". . . and nothing in this section shall be construed so as to limit the seller's liability for damages for injury to the person in the case of consumer goods. Damages in an action for injury to the person include those damages ordinarily allowable in such actions at law."
(D). Section 2-719 dealing with the seller's privilege to contractually modify or limit the buyer's remedy was amended so as to add Subsection (4) which provides:
"Nothing in this section or in the preceding section shall be construed so as to limit the seller's liability for damages for injury to the person in the case of consumer goods."
(E). Section 2-725 relating to the statute of limitations was amended so as to add the quoted phrase at the conclusion of Subsection (2):
"... however, a cause of action for damages for injury to the person in the case of consumer goods shall accrue when the injury occurs."
The obvious import of each of these amendments is to amplify the legal rights of the buyer in the posture of a products liability case beyond the scope of the Uniform Commercial Code; and to this extent they reflect a legislative intent that is harmonious with the judicial trend expressed in a growing majority of cases over the country. See Prosser, The Fall of the Citadel, 50 Minn.L.Rev. 791 (1966). The amendments, in their composite effect, make it clear that the intention of the Alabama Legislature in adopting a modified version of the Commercial Code was to provide the consumer, at least in cases involving "injury to the person", with a right of action for breach of warranty the nature of which is as much, if not more, tortious as it is contractual. See Springfield v. Williams Plumbing Supply Co., 249 S.C. 130, 153 S.E.2d 184 (1967); Chairaluce v. Stanley Warner Management Corp., 236 F. Supp. 385 (D. C., Conn., 1964). Additionally, §§ 1-102 and 1-106 mandate a liberal construction with respect to such remedies.[8]
The purpose of tort law, at least since the beginning of the 20th Century, has been to provide a civil remedy in situations where the plaintiff's legally protected interests have been injured by the defendant's violation of publicly imposed duties.[9] The legal personality of warranties, which arise in connection with transactions governed by the Alabama Commercial Code, is compatible with traditional tort concepts in that a breach of warranty thereunder is a violation of a publicly imposed duty. The character of the event necessary to invoke the right and remedy created by the Alabama Wrongful Death Statute is defined by the Statute as any "wrongful act, omission or negligence". In King v. Henkie, 80 Ala. 505, 60 Am.Rep. 119 (1876), this Court held:
"The condition that the action must be one which could have been maintained by the deceased had it failed to produce death, or had not death ensued, has no reference to the nature of the loss or injury sustained, or the person entitled to recover, but to the circumstances attending the injury, and the nature of the wrongful act or omission which is made the basis of the action."
This Court in Thaggard v. Vafes, 218 Ala. 609, 119 So. 647 (1927), noted by way of dicta that a "mere breach of contract" is not a wrongful or negligent act, within the meaning of the statute giving a right of action for wrongful death. The plaintiff *449 in Thaggard expressly laid his complaint in negligence, and for this reason the language quoted above was not necessary to the Court's opinion which, in fact, held that the complaint properly averred a negligent breach of the defendant's duty. Thaggard was a malpractice suit in which the administratrix of the plaintiff's estate alleged in substance that the defendant, a practicing physician, undertook for reward to treat the plaintiff's intestate, and that he "so negligently conducted himself in that regard that plaintiff's intestate died as a proximate consequence of defendant's negligence". The reasoning in support of the actual holding in Thaggard is in harmony with our opinion here. The Court in Thaggard recognized that, in the absence of pleading affirmatively averring a breach of contract, the underlying relationship between a physician and his patient is not "necessarily contractual" and is not, therefore, a "mere breach of contract".
Similarly, I would hold that the warranties that arise by operation of the Alabama Commercial Code, out of the relationship between the "seller" of a product and "any natural person who might reasonably be expected to use, consume or be affected by" the product, are in the nature of a public duty imposed by law and are not "necessarily contractual" or a "mere contract"; the breach of such warranties is, therefore, maintainable in an action brought under the Alabama Wrongful Death Statute.[10]
The contention most stringently urged by the defendants, and the one expressed by the trial judge in granting the affirmative charge, is the proposition that punitive damages will not lie for a breach of contract. Treadwell Ford, Inc. v. Leek, supra, cited by the defendants, although so holding, was not a wrongful death case. It is true that punitive damages are not ordinarily recoverable in actions for breach of contract. 22 Am.Jur.2d, Damages, § 245. It is also true that damages under the Alabama Wrongful Death Statute are punitive. Airheart v. Green, 267 Ala. 689, 104 So.2d 687 (1957).
Judicial juxtaposition of these two rules, however, does not compel the conclusion that an action for breach of implied warranty under the Alabama Wrongful Death Statute would not permit a recovery for punitive damages. Or, stated another way, it does not necessarily follow that an action for wrongful death may not be maintained based on breach of warranty. Our decisions do not allow recovery of punitive damages in a purely personal injury case for simple negligence but do permit their recovery in an action for wanton misconduct. Following the defendants' reasoning, we would be forced to conclude that a death case based on simple negligence would not lie. The clear wording of our statute, permitting recovery for "[any] wrongful act, omission or negligence", illustrates the fallacy of this reasoning. A contrary rule would have the effect of increasing the degree of culpability contemplated by our statute as the requisite for recovery in wrongful death actions.
The punitive aspect of the damages permitted in actions brought under the Alabama Homicide Statute relates to the nature and amount of the recovery rather than the underlying right of recovery; it is not the nature of the recoverable damages that permits the maintenance of a "wrongful death action" but the circumstances attending the injury, and the nature of the wrongful act or omission which is made the basis of the action. Breed v. Atlanta, *450 B. & C.R. Co., supra; King v. Henkie, supra. The sense in which damages recoverable under the Alabama Homicide Act are deemed punitive is sui generis and the term is not used in the identical sense when applied to actions involving wanton misconduct or intentional injury. For example:
(1) A wrongful death action does not abate by the death of the defendant although he can no longer be punished. Bagley v. Grime, 283 Ala. 688, 220 So.2d 876 (1969); Campbell v. Davis, 274 Ala. 555, 150 So.2d 187 (1962).
(2) Punitive damages may be awarded for simple negligence where the injury results in death. Southern Ry. Co. v. Sherrill, 232 Ala. 184, 167 So. 731 (1936); see also Drummond v. Drummond, 212 Ala. 242, 102 So. 112 (1924).
(3) In wrongful death actions against joint defendants the damages are not divided according to the relative culpability of each defendant. Bell v. Riley Bus Lines, 257 Ala. 120, 57 So.2d 612 (1952).
(4) If by the same wrongful act the defendant causes the death of two people, he cannot in the second case mitigate his responsibility by showing that he has already been sufficiently punished by a verdict in the first case. Kansas City M. & B. R. R. Co. v. Sanders, 98 Ala. 293, 13 So. 57 (1893).
The wording of our wrongful death statute does not characterize the recovery but simply permits "such damages as the jury may assess". Our earliest cases correctly, I think, discerned a legislative intent to equate the value of all human life and established a rule of recovery which reflects that the cardinal factor of culpability is the taking of a human life, regardless of the financial status of the victim, with the amount of recovery keyed to the degree of culpability. Daniel Construction Co. v. Pierce, 270 Ala. 522, 120 So.2d 381 (1959); Richmond & Danville Railroad Co. v. Freeman, 97 Ala. 289, 11 So. 800 (1892); L. & N. R. R. Co. v. Perkins, 1 Ala.App. 376, 56 So. 105 (1911).
I conclude, therefore, that the previous state of actions arising from breach of warranty has been fundamentally changed by the legislative enactment of the U.C.C. The U.C.C. clearly imposes a public duty with respect to an implied warranty of fitness of consumer goods, and the breach of that duty resulting in personal injury may be redressed by recovery of "those damages ordinarily allowable in such actions at law". Further, when such personal injury results in death, the Alabama Wrongful Death Statute governs that remedy and "those damages ordinarily allowable".
One basic theme runs throughout the majority opinion: that the purpose of the legislature in passing the Alabama version of the U.C.C. was to regulate commercial transactions and that an action for wrongful death is not a commercial transaction. I would merely point out that no cause of action, be it for wrongful death, breach of contract, or negligence, is a commercial transaction.
It is unfortunate, it seems to me, that the majority opinion looks to the ultimate cause of action, rather than to the underlying transaction, to determine a party's right to relief for personal injury, fatal or nonfatal, resulting from a breach of warranty and the public duty imposed by the Code arising therefrom. The underlying transaction giving rise to a cause of action, which should be looked to in determining the party's right to relief, is, in this case, the sale of a defective car. This is the commercial transaction which forms the basis of the instant suit, and which ties the case into the U.C.C. The underlying transaction (the sale of the car), rather than the resulting injury or damage sustained, should be the determinative factor.
A literal application of the rationale of the majority opinion would exclude recovery for personal injury, nonfatal as well as fatal. It is my view that such an interpretation *451 ignores the liberal remedies afforded by the Code to the consumer public in products liability cases. Likewise, I am puzzled by the failure of the majority opinion to affirm, overrule, or even mention the long-established rule of Ambrose, supra, reaffirmed by Harris, supra, to the effect that the statutory use of the words "personal injury" includes the ultimate injury: death. Did our legislature in its passage of the U.C.C. not have the right to assume that this Court would follow its own established precedents in giving effect to the remedies provisions of the Code?
I would, therefore, reverse and remand.
HEFLIN, C. J., concurs.
FAULKNER, Justice (dissenting).
I respectfully dissent. It appears to me that the legislature has provided us with a good map and compass, which the majority has read to mean one thing, and I, in the minority, interpret to mean another. I know of no words in the English language any plainer than those used in the Uniform Commercial Code. Section 2-318 of the U.C.C. provides that an action may be brought by any natural person for personal injury resulting from the breach of a warranty. The majority holds that a wrongful death action is not an action for personal injury. I know of but two ways for a person to die. One is by natural causes. The other is by internal or external injury to the body sufficient to produce death.
Count One-D in this case alleges injuries which resulted in death.
This Court has held in Ambrose, supra, that a wrongful death action is an action for personal injury.
Would this Court, by the use of a time machine, digress to the period before Lord Campbell's Act, and hold that there is no cause of action for wrongful death? Apparently so, because they have certainly reached a medieval result here. I cannot distinguish death resulting from a breach of warranty and death resulting from a tort. In both instances, the person is very dead. I suppose, in view of the majority opinion, that if the dead man had a choice, he had rather go by tort than by breach of warranty. In such instance he may go to his happy hunting ground knowing that the wrongdoer would have to make an accounting for his tortious act, whereas if injury resulted from breach of warranty, the wrongdoer would pray for his death.
The majority go further and hold that punitive damages will not lie for breach of a contract. Simplistically, the majority call contract exactly the same thing as warranty, whereas the best authorities hold that warranty is a hybrid form of action resting somewhere between tort and contract. Justice Jones correctly states that damages for death resulting from breach of warranty awarded under the Homicide Statute are punitive in nature and cannot precisely be equated with punitive damages in an ordinary tort action.
In view of the majority opinion, I believe this question should be taken up by the legislature for clarification of this very important point of law.
NOTES
[1] McDonnell, The New Privity Puzzle: Products Liability Under Alabama's Uniform Commercial Code, 22 Ala.L.Rev. 455, 484 (1970).
[1] The rule is the same whether the action is governed by §§ 119 or 123, Title 7. Code of Alabama 1940, as amended. Louisville & Nashville R. R. Co. v. Bogue, 177 Ala. 349, 58 So. 392 (1912).
[2] Smedley, Wrongful DeathBasis of Common Law Rules, 13 Vanderbilt L. Rev. 605 (1960).
[3] See, e. g., Winfield, Death as Affecting Liability in Tort, 29 Columbia L.Rev. 239 (1929); Malone, The Genesis of Wrongful Death, 17 Stanford L.Rev. 1043 (1965).
[4] Smith v. United Construction Workers, 271 Ala. 42, 122 So.2d 153 (1960); Title 1, § 3, Code of Alabama 1940.
[5] While this case was overruled as to its holding on the venue question, the language quoted above has been reaffirmed by this Court in Harris v. Elliott, 277 Ala. 421, 171 So.2d 237 (1965).
[6] The Official Comments numbered 2 and 3 following § 2-318 in the 1966 Recompilation of Title 7A are not appropriate and do not apply to the language of § 2-318 as actually enacted by the Alabama Legislature. Comments 2 and 3 were obviously drafted by the editors of the Commercial Code as being applicable to the official "Uniform" version of § 2-318. The actual wording of § 2-318 incorporated in the Alabama Act is similar to the language originally employed by the drafters of the UCC. Permanent Editorial Board for the Uniform Commercial Code, Report No. 3, p. 13 (1967). See Freedman, Products Liability Under the Uniform Commercial Code, The Practical Lawyer (April, 1964); Bailey, Sales Warranties, Products Liability and the UCC, 4 Willamette L.J. 291; see also McDonnell, The New Privity Puzzle, 22 Ala.L.Rev. 455.
[7] See Gaudette v. Webb, 284 N.E.2d 222 (Mass., 1972); Moragne v. States Marine Lines, Inc., 398 U.S. 375, 90 S.Ct. 1772, 26 L.Ed.2d 339 (1970).
[8] See also Tiger Motor Co. v. McMurty. 284 Ala. 283, 224 So.2d 638 (1969).
[9] Winfield, The Foundation of Liability in Tort, 27 Columbia L.Rev., p. 1 (1927); Ashby v. White, 92 Eng.Rep. 120 (1703); Nixon v. Herndon, 273 U.S. 536, 47 S.Ct. 446, 71 L.Ed. 759 (1927).
[10] For cases from other jurisdictions in accord with the present holding, see Kelley v. Volkswagenwerk, 110 N.H. 369, 268 A.2d 837 (1970); Dagley v. The Armstrong Rubber Co., 344 F.2d 245 (7th Cir., Ind., 1965); Schuler v. Union News Co., 295 Mass. 350, 4 N.E.2d 465 (1936); Zostautas v. St. Anthony De Padua Hospital, 23 Ill.2d 326, 178 N.E.2d 303 (1961); Breach of Warranty as a Basis for a Wrongful Death Action, 51 Iowa L.Rev. 1010 (1966); Annot. 86 A.L.R.2d 316.
| |
Having more than one mentor certainly worked in my favor when I started my business.
When it comes to good business advice, there’s no such thing as too much. For many entrepreneurs, during the startup stage of their business, a mentor is a crucial part of the process.
Mentors can be your sounding board in a number of areas. Whether you need to talk through setting SMART targets, work out how to overcome an obstacle, or just have someone there when you need to vent, your mentor is there for you.
You and your business will thrive with that kind of support behind you. In fact, having more than one mentor certainly worked in my favor when I started my business.
Having access to multiple mentors has been of great help to me! Getting a business off the ground requires a number of different skills. And access to various skills and knowledge will not always be available in one mentor alone.
I also found that I needed the advice and support of different people at different times in my business. As business increased and I sought to refresh my five-year business plan, the expertise of my original mentor was spot on for strategic planning and growth, but when I needed help with marketing and social media strategy, I had to look elsewhere.
At first, I felt a bit disloyal. I considered that I should stick with the one person who steered me in the right direction from the start, but as my original mentor pointed out, each mentor has his or her own unique contribution to make. As business became more complex, it was absolutely okay to bring in more specialist support.
As it turned out, a second and then a third mentor offered many benefits. It was like having my own small, informal board of directors.
My company deals with a number of overseas contracts. There were times when work built up and it became difficult to prioritize. While my mentors did not “do the work” for me, talking it through helped me decide what to accept and what to turn down.
Having more than one pair of eyes (and ears) meant that any decisions I made were reasoned ones. My mentors aided in the evaluation process, which alleviated a ton of pressure. Working with a handful of mentors also cultivated my assertiveness, which has stood me in good stead several times since then.
One of my mentors was a former co-worker who had left the corporate world several years before me and was now working from home. Having the opportunity to talk through the obstacles she had faced meant that, when I encountered similar problems, I knew how to deal with them.
Also, having a mentor who had worked on an international level and who could give advice around logistics, invoicing and payment in another currency was invaluable.
As I added mentors, the likelihood that one or more of them had faced similar challenges also increased, which was very reassuring.
I had to step it up if I wanted to be taken seriously in business. I needed to present myself and my company in a certain light. Having a mentor I could talk through the process with beforehand was very beneficial.
It also gave me the confidence to walk into meetings and get the responses I wanted. For example, I discussed presentation techniques with one mentor and carried out a practice meeting with a second; it was really beneficial to bounce ideas off various mentors.
In creative industries there are many different areas in which you can branch out and develop. Having access to diverse mentors with a range of experience in business has been a springboard for my company.
Talking through a specific area of business with more than one person has given me an objective overview. My mentors have also opened my eyes to considering new areas of diversification; ways I wouldn’t have thought about because I was always in “head down, get the work completed” mode.
A final benefit of having several mentors? They have helped to keep me focused and energized, particularly when times were tough, and my business has benefited in the same way!
Emilly Hadrill is the owner of Emilly Hadrill Hair & Extensions, a company that specialises in hair and hair extensions. With multiple locations across Australia, including the Gold Coast, Melbourne, and Brisbane, Emilly knows she is fortunate to have had the right guidance in growing her business, and remains as passionate about providing quality hair related products and services today as the day she started her business. Connect with Emilly on LinkedIn. | https://yfsmagazine.com/2017/01/09/why-you-should-have-multiple-business-mentors/ |
Introduction
============
Cardiovascular quality measures for inpatient care have undergone a rapid evolution over the past three decades. Isolated efforts at simply measuring quality have developed into national programs dedicated toward the public reporting of hospital performance on a number of quality measures. Most recently, performance on quality measures has become closely tied to hospital and physician payments. With the implementation of the Affordable Care Act (ACA), further changes in how we measure, report, and pay for quality health care will continue in coming years.
At their core, quality measures in cardiovascular care are meant to improve the quality of care delivered to patients, and in doing so, improve patient‐relevant outcomes such as mortality, hospital readmission, and patient experience. However, the relationship between quality measures and hard outcomes has been inconsistent, and thus, problematic for policymakers and clinical leaders who aim to use these measures to effectively drive improvements in cardiovascular care. In this review, we examine the evidence behind three major mechanisms of quality improvement: measurement alone, public reporting, and pay‐for‐performance. In characterizing the successes and failures that have occurred as part of each of these mechanisms, we provide a framework to inform future quality improvement efforts.
Quality Improvement Through Process‐of‐Care Measurement
=======================================================
History of Quality Measurement
------------------------------
The earliest steps toward quality improvement involved simply creating and implementing basic mechanisms for quality measurement. In the 1950s, the Joint Commission on Accreditation of Healthcare Organizations (formerly JCAHO, now The Joint Commission, TJC) began mandating hospital compliance with a set of "Minimum Standards" of quality ([Figure](#fig01){ref-type="fig"}), which were later incorporated into the process of hospital accreditation under the ORYX Initiative of the 1990s.^[@b1]--[@b2]^ As part of ORYX, accredited hospitals were required to regularly provide TJC with a subset of performance data to identify areas in need of improvement.
Cardiovascular care was among the first areas in medicine in which standardized quality measurement was attempted on a national scale. In 1992, the Health Care Financing Administration (HCFA, now the Center for Medicare and Medicaid Services, CMS), began measuring and tracking a series of disease‐specific process‐of‐care measures for Medicare patients under the Health Care Quality Improvement Initiative.^[@b3]^ The chosen measures were based on evidence‐based guidelines at the time for prevention and treatment of multiple conditions, including acute myocardial infarction (AMI), heart failure, and stroke. In concert with early TJC efforts, the program intended to provide hospitals with performance data and nonpunitively highlight areas for improvement.
The Cooperative Cardiovascular Project (CCP) was a program under the Health Care Quality Improvement Initiative that defined and measured adherence to evidence‐based practices for AMI care,^[@b4]--[@b5]^ such as the use of thrombolytics or aspirin during hospitalization, and the receipt of beta‐blockers and angiotensin‐converting enzyme (ACE) inhibitors at discharge. Initial data from this program demonstrated widespread deficiencies in care: among "ideal candidates," 69% of patients received thrombolytics and 83% received aspirin during hospitalization; at discharge, only 45% of patients received beta‐blockers.^[@b4]^ CMS later launched the National Heart Failure (NHF) project in 1999,^[@b6]^ which aimed to define and measure adherence to standards for high quality care for heart failure, such as measurement of left ventricular (LV) systolic function, and the use of ACE inhibitors in patients with LV systolic dysfunction. This program similarly found suboptimal and variable performance across hospitals: the range of appropriate evaluation of LV systolic function spanned from 21% to 66%, while adherence to ACE inhibitor therapy for eligible patients spanned from 51% to 93%.^[@b7]^
In concert with these federal efforts, the 1980s and 1990s saw the growth of national registries to track, measure, and improve quality in cardiovascular care. The Society for Thoracic Surgeons (STS) introduced the STS National Database to track risk‐adjusted outcomes in adult cardiac and general thoracic surgery for both internal quality improvement and public reporting purposes, while the National Registry of Myocardial Infarction began to track and measure practice patterns and outcomes for AMI patients. In 1997, the American College of Cardiology developed the National Cardiovascular Data Registry to consolidate clinical data in cardiovascular care. Early registry‐based studies confirmed the widespread underuse of thrombolytic therapy, aspirin, and beta‐blockers, particularly in elderly patients and patients with delayed AMI presentations.^[@b8]^ Perhaps more importantly, the registries were the first large‐scale efforts to track patient outcomes in addition to process measures, indicating a marked change from the CMS and TJC programs.
Borne out of a public‐private nonprofit partnership, the National Quality Forum (NQF) was formed in 1999 to set national standards of healthcare quality. Specifically, the NQF defined quality metrics, organized data collection, and reported standards in accordance with recommendations from the President\'s Advisory Council.^[@b9]^ Importantly, the NQF improved public access to quality data while playing a key role in introducing new quality metrics for adoption by CMS, with an eventual emphasis on outcome measures.
Efforts at quality measurement were further bolstered by the public release of the Institute of Medicine\'s landmark study, *Crossing the Quality Chasm*, in 2001.^[@b10]^ In response to the pervasive quality gaps described therein, TJC introduced identical quality measures as CMS beginning in 2002 and required over 3000 of its accredited hospitals to submit performance data on at least 2 of 4 condition‐specific measures: AMI, heart failure, pneumonia, and pregnancy‐related conditions.^[@b11]^ In addition to standardizing quality measurement, TJC provided hospitals with quarterly performance reports to motivate improvement. Studies of this quality measurement scheme after 2 years of its implementation showed a 3% to 33% improvement from baseline in the proportion of patients receiving appropriate care for AMI, heart failure, and pneumonia,^[@b11]^ with the lowest‐performing hospitals at baseline showing the greatest improvement.
By the early 2000s, quality measurement in cardiovascular care involved a multilevel framework, with both public and private contributions at the state and national levels. A growing body of evidence confirmed widespread variability in adherence to guideline‐based process measures, suggesting that simply defining quality metrics did not necessarily translate into adoption by clinicians. Nevertheless, such frameworks set the stage for understanding whether adherence to process‐based care led to improved patient outcomes.
Relationship Between Quality Measurement and Patient Outcomes
-------------------------------------------------------------
Early data on the relationship between measurement and outcomes in cardiovascular care showed inconsistent correlations. One of the first studies to address this, a small, patient‐level observational study of stroke care in 3 New Zealand hospitals, showed significant differences in process‐of‐care scores (as assessed by obtaining a head CT, performing a swallow study prior to feeding, and completing a multidisciplinary care meeting) between non‐survivors and survivors at the time of hospital discharge. Yet, paradoxically, of the 3 hospitals studied, the hospital with the poorest process score demonstrated the best case‐mix‐adjusted outcomes of death and functional status at the time of discharge.^[@b12]^
The process‐outcome relationship for AMI care has also been shown to be significant, though variable in magnitude. A study from the Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes With Early Implementation of the American College of Cardiology (ACC)/American Heart Association (AHA) Guidelines (CRUSADE) trial demonstrated a strong correlation between processes of care and outcomes, with every 10% increase in composite adherence to process measures associated with a 10% decrease in in‐hospital mortality.^[@b13]^ However, a larger hospital‐level study of AMI care for Medicare patients showed that while receipt of beta‐blockers and aspirin at time of discharge was associated with lower risk‐standardized 30‐day mortality rates, when taken together, performance on process measures explained only 6% of hospital‐level variation in risk‐standardized, 30‐day mortality rates.^[@b14]^
The process‐outcome relationship for heart failure has also been shown to be modest. In the Organized Program to Initiate Lifesaving Treatment in Hospitalized Patients With Heart Failure (OPTIMIZE‐HF), a patient‐level registry designed to promote guideline‐based care for heart failure patients, none of the process‐of‐care measures were found to be associated with lower mortality at 60 or 90 days, and only ACE‐inhibitor or angiotensin receptor blocker use at discharge was associated with lower readmission and later post‐hospitalization mortality.^[@b15]^ Ironically, though beta‐blocker use at discharge was not established as a process measure for heart failure performance at the time, it was shown to be strongly associated with reduced mortality rates. More recent studies have confirmed this weak overall process‐outcome relationship in heart failure care: analyses from the American Heart Association\'s Get With The Guidelines Program for heart failure (GWTG‐HF) demonstrated that more frequent measurement of LV ejection fraction and use of ACE inhibitors or angiotensin receptor blockers in patients with LV dysfunction did not translate into lower 30‐day mortality rates, but did result in small, but significant, reductions in 30‐day readmission rates.^[@b16]^
Limitations of Quality Measurement in Improving Patient Outcomes
----------------------------------------------------------------
While process measures are intuitively valid as quality metrics, their impact on outcomes remains limited. One possible reason for this modest relationship is simply that some process‐of‐care measures are not designed to impact short‐term mortality. For example, measuring ejection fraction or counseling patients on smoking cessation, while good care practice, are unlikely to have an immediate impact on short‐term mortality. Furthermore, as guideline‐based cardiovascular care has become codified, there is little between‐hospital variability in adherence to some process‐of‐care measures, making it difficult to detect associated differences in mortality rates. Finally, risk‐adjusted mortality measurements may suffer from residual confounding by clinical complexity or socioeconomic factors, thus limiting the ability to determine the true relationship between process‐of‐care and mortality. Nonetheless, in spite of their weak relationship with outcomes, process measures continue to be useful quality metrics due to their inherent face validity as well as their independent utility in ensuring the delivery of high‐quality, guideline‐based care.
Quality Improvement Through Public Reporting
============================================
History of Public Reporting
---------------------------
As quality measurement took hold in American hospitals, the public release of hospital performance on quality measures emerged as the natural next step in incenting quality improvement. The rationale behind public reporting was twofold: first, making performance data public could provide a powerful incentive for clinicians and leaders to improve; second, it would empower consumers to make choices based on hospital and physician performance.
Public reporting was initiated at the state level prior to its use as a national strategy. In 1989, New York (NY) State began reporting risk‐adjusted mortality rates for coronary artery bypass grafting (CABG) surgery by hospital and surgeon. Pennsylvania (PA) followed suit in 1992, reporting CABG outcomes as well as costs of care; Massachusetts (MA) initiated a public reporting program for CABG outcomes in 2002. These states have also initiated programs for public reporting of percutaneous coronary intervention (PCI): New York in 1991, Pennsylvania in 2001, and Massachusetts in 2005.^[@b17]^
However, the first large‐scale, national endeavor to publicly report hospital quality data began in the early 2000s, when the Hospital Quality Alliance (HQA) was borne out of a collaborative venture between CMS, TJC, and several medical professional organizations. To support the HQA efforts, Congress passed the Medicare Modernization Act of 2003,^[@b18]^ which tied hospitals\' participation in public reporting to annual payment updates, effectively incenting hospitals to report data to CMS on 10 evidence‐based process measures for the management of AMI, heart failure, and pneumonia---essentially the same set of metrics that had been collected by TJC and HCFA in earlier years. The first set of HQA data was released in 2004, and for the first time, the American public could access quality data on nearly all U.S. hospitals on a centralized website, Hospital Compare.^[@b19]^
Relationship Between Public Reporting and Patient Outcomes
----------------------------------------------------------
The first studies to examine public reporting\'s impact on outcomes were from the state‐level CABG reporting programs. Initial results suggested that public reporting in NY led to decreases in CABG mortality over time, which was initially attributed to de‐selection of surgeons with high mortality rates and improvements in processes of care in response to reporting.^[@b20]--[@b22]^ However, subsequent work showed comparable decreases in states without public reporting,^[@b23]--[@b24]^ suggesting that these improvements might have been the result of secular trends rather than public reporting, per se. Studies of the PCI public reporting programs have found no overall difference in mortality rates for reporting versus nonreporting states.^[@b17]^
In contrast, the first evaluations of the Hospital Compare national public reporting program were positive: though baseline performance on these metrics was variable,^[@b25]^ studies showed that overall performance on process measures improved significantly over the first 2 years of public reporting.^[@b11]^ Perhaps even more impressive was the finding that higher performance on these process measures was associated with lower risk‐adjusted mortality rates for AMI, heart failure, and pneumonia, though the differences were, again, small.^[@b26]--[@b27]^ A follow‐up study examining the first 3 years of the program also showed that improvement in performance over time was associated with improved outcomes for AMI: a 10‐point increase in performance on process measures was associated with a 0.6% reduction in 30‐day mortality rates and a 0.5% reduction in 30‐day readmission rates.^[@b28]^ There were minimal effects for heart failure and pneumonia, however. Nonetheless, this study raised the possibility that public reporting of the same metrics that had been simply measured for many years might be a key innovation in incenting meaningful improvements in patient outcomes.
However, more recent studies of Hospital Compare have painted a less rosy picture, suggesting that the improvements in mortality might be more the result of underlying hospital quality than about the publicly reported measures themselves. For example, a recent study showed that of the 180 hospitals in the top quintile of mortality rates for AMI, fewer than one‐third (31%) were in the top quintile of the composite process score, and that together, the HQA process measures explained only 6% of hospital‐level variation in 30‐day mortality rates.^[@b14]^ Perhaps most striking is the recent finding that while mortality rates for AMI, HF, and pneumonia improved in the period after the introduction of Hospital Compare, the improvement essentially followed the trends in mortality prior to the program, suggesting that the addition of public reporting did not lead to a more rapid improvement in mortality rates than was occurring under quality measurement alone.^[@b29]^
Limitations of Public Reporting in Improving Patient Outcomes
-------------------------------------------------------------
The major limitation of public reporting is the concern that it may lead physicians to avoid high‐risk patients in order to avoid poor outcomes. Studies examining this in the context of CABG surgery have been equivocal: while one study found an increase in the number of patients transferred to the Cleveland Clinic from NY State after the initiation of CABG reporting,^[@b30]^ another demonstrated that the risk profile of patients receiving CABG in NY State actually worsened after the adoption of public reporting, and NY State residents who received CABG surgery in‐state were of higher‐risk than those who received surgery out‐of‐state.^[@b22]^ Whether this was due in part to greater attention to coding of medical comorbidities as a response to public reporting is unclear.
Racial and ethnic minorities are another group that may be perceived to be at higher risk of poor outcomes, and thus may be at risk of decreased access to surgical care under public reporting. Two studies have examined this issue using CABG data from the NY experience, one of which found that disparities between black and white patients in rates of CABG increased in NY State after the adoption of public reporting^[@b31]^; the other demonstrated that non‐whites in NY State were more likely to be treated by surgeons with high mortality rates after the adoption of reporting.^[@b32]^
A study looking at differences in case mix and outcomes for PCI in NY found a significantly lower propensity to undergo PCI in NY than in Michigan (a nonreporting state) for AMI,^[@b33]^ and an analysis of a registry of patients with cardiogenic shock demonstrated that NY patients in shock were less than half as likely as non‐NY patients in shock to undergo PCI.^[@b34]^ Beyond the NY experience alone, a recent national study demonstrated that the 3 states with mandatory public reporting of PCI outcomes (NY, PA, and MA) had significantly lower rates of use of this procedure for patients with an AMI. This was associated with higher mortality for patients with ST‐elevation myocardial infarction, though overall mortality rates in the AMI population were unaffected.^[@b17]^
Some of this reduction in use likely is due to reporting‐induced risk aversion among physicians; a survey of interventional cardiologists in NY State found that 89% of respondents felt that reporting had influenced their decision on whether to intervene in critically ill patients, although this study did not include data on actual practices.^[@b35]^ Another study examining hospitals\' response to identification as an "outlier" for mortality rates after PCI in Massachusetts showed that the risk profile of PCI patients at outlier institutions was significantly lower after public identification as an outlier, suggesting that risk aversion increased among PCI operators at outlier institutions as a result.^[@b36]^
In summary, the experience with public reporting demonstrates little evidence that reporting is associated with improvement on either process‐of‐care or patient outcomes for cardiovascular disease, above and beyond quality measurement alone, and demonstrates that avoidance of high‐risk patients is a real consequence of these programs. Thus, it remains unclear whether the net effect of public reporting is positive or negative. Indeed, the absence of randomized trials of public reporting and the existing observational data limits our ability to conclusively assess its net effect on patient outcomes, and the expectation of conclusive evidence may be unwarranted at this stage. As such, future rigorous trials are required in order to fully exclude the possible small‐to‐moderate benefits of public reporting. There are certainly many benefits to public reporting that may not be captured in process‐of‐care or outcome measurement, such as increased transparency and improved trust from patients and other consumers. In this context, it is unlikely that public reporting will cease any time soon. In fact, the Hospital Compare program has expanded to include measures of patient satisfaction, surgical quality, and nursing home ratings, among others.^[@b19]^ However, whether these benefits will be worth the potential unintended consequences remains to be seen.
Quality Improvement Through Pay‐for‐Performance
===============================================
History of Pay‐for‐Performance
------------------------------
Paying for performance (P4P) is the newest quality improvement effort that has been used on a national scale for cardiovascular care. Certainly, P4P has strong face validity in that when the appropriate financial incentives are in place, there is likely to be a strong stimulus to improve on the part of both clinicians and hospital administrators. Furthermore, as cost control became a major concern for policymakers, P4P gained traction as a strategy for maximizing quality while prioritizing cost effectiveness.
As was the case in public reporting, state‐level experiments predated federal ones. Large‐scale examples of P4P include the Hawaii Medical Service Association Hospital Quality Service and Recognition P4P Program (HQSR),^[@b37]^ and the Blue Cross Blue Shield of Michigan (BCBSM) Participating Hospital Agreement Incentive Program,^[@b38]^ both of which were launched in 2001. Similar to many prior quality programs, both programs used process‐of‐care measures for AMI, heart failure, and pneumonia in assessing quality. While both programs offered bonus financial incentives, neither included a financial penalty for poor performance. Only the Hawaii Medical Service Association program included outcome measures as part of their quality assessment. Both programs emphasized absolute performance over improvements from baseline.
The first nationwide foray into hospital‐level P4P began in 2003 when 421 hospitals were invited by CMS to participate in the Premier Hospital Quality Incentives Demonstration (HQID), with 252 hospitals ultimately joining the program and providing data for analysis.^[@b39]^ HQID offered payment bonuses to hospitals based on their performance on a set of disease‐specific process measures, which were very similar to measures established by the HQA for AMI, CHF, and pneumonia. Hospitals in the highest deciles of performance qualified for a financial bonus while those with the poorest performance were susceptible to a financial penalty.
Relationship Between Pay‐for‐Performance and Patient Outcomes
-------------------------------------------------------------
Studies of the Hawaii P4P program found modest benefit: after 4 years of the HQSR, there were significant decreases in risk‐adjusted complication rates and lengths‐of‐stay for surgical and obstetric procedures, as well as improvements in patient satisfaction with emergency department care as measured by individual hospital surveys.^[@b37]^ The BCBSM P4P program was associated with improvements in processes of care from 2000 to 2003, with more patients receiving aspirin after AMI (87% to 95%), beta‐blocker after AMI (81% to 93%), and ACE inhibitors for heart failure (70% to 80%); but again, outcomes were not assessed.^[@b38]^
Studies of the HQID program showed greater improvements in adherence to guideline‐based process measures over a 2‐year period in Premier hospitals as compared with hospitals without these incentives, though this study did not evaluate patient outcomes.^[@b40]^ However, another study released in the same year, with a broader comparison group, showed no significant difference in a composite measure of the 6 CMS‐rewarded processes between HQID versus non‐HQID hospitals, and no evidence to suggest that improvements in mortality were greater at HQID hospitals.^[@b41]^ Even more problematic was a report examining performance on process measures and patient outcomes at the 5‐year mark of the program, which demonstrated no difference between HQID and non‐HQID hospitals on any of the metrics and no difference in mortality rates between the 2 groups.^[@b42]^
Overall, the available evidence on large‐scale hospital pay‐for‐performance programs for cardiovascular disease suggests that these programs have led to only very modest, if any, improvement in either processes or outcomes of care beyond that achieved with quality measurement or public reporting. However, the success of pay‐for‐performance as a quality‐improvement mechanism most likely lies in the details, such as the size of the incentive, baseline performance levels, and a hospital\'s inherent ability to improve and respond adequately to such incentives.^[@b43]^ It is feasible that alternative designs to pay‐for‐performance programs, for example including larger incentives, targeting incentives at particularly high‐impact measures, and considering both group and individual performance evaluation, may yield better results.^[@b44]^
New National Quality Improvement Efforts
========================================
While many of the quality improvement and public reporting efforts described above continue, there are a number of new quality‐improvement efforts for cardiovascular disease emerging at the national level, most notably Value‐Based Purchasing (VBP) and the Hospital Readmissions Reduction Program (HRRP).
Value‐Based Purchasing
----------------------
The VBP program is a national P4P program that represents an attempt to fundamentally shift Medicare from a passive payer of services into an active purchaser of quality health care. Based largely on the same quality metrics and payment incentives as the Premier HQID, VBP starts with a 1% "holdback" of Medicare payments, and hospitals can earn bonuses ≤1% based on a complex formula rewarding performance, improvement, and consistency on processes of care and patient experience; mortality rates and efficiency metrics will be phased in during future iterations of the program.^[@b45]^
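For readers who find a concrete illustration helpful, the sketch below implements the general "better of achievement versus improvement" scoring idea that underlies VBP‐style programs. It is a deliberately simplified sketch, not the CMS methodology: the thresholds, benchmarks, equal measure weights, and the linear mapping from total score to payment adjustment are all illustrative assumptions.

```python
# Simplified sketch of "better of achievement vs. improvement" scoring.
# Thresholds, benchmarks, weights, and the linear payment exchange below are
# illustrative assumptions, not the actual CMS Value-Based Purchasing rules.

def measure_score(rate, baseline_rate, threshold, benchmark):
    """Score one measure on a 0-10 scale, taking the higher of achievement
    (position between the national threshold and benchmark) and improvement
    (position between the hospital's own baseline and the benchmark)."""
    def scaled(lo, hi):
        if hi <= lo:
            return 10.0
        return max(0.0, min(10.0, 10.0 * (rate - lo) / (hi - lo)))
    return max(scaled(threshold, benchmark), scaled(baseline_rate, benchmark))

def payment_multiplier(measure_scores, holdback=0.01):
    """Map the mean measure score (0-10) to a payment multiplier: with a 1%
    holdback, the lowest scores forfeit the holdback and the highest scores
    earn back roughly an extra 1% (a linear exchange function is assumed)."""
    fraction = sum(measure_scores) / (10.0 * len(measure_scores))  # 0..1
    return 1.0 - holdback + 2.0 * holdback * fraction

# Example: strong improvement on one measure, middling achievement on another.
scores = [
    measure_score(rate=0.92, baseline_rate=0.80, threshold=0.90, benchmark=0.98),
    measure_score(rate=0.88, baseline_rate=0.85, threshold=0.84, benchmark=0.95),
]
print(round(payment_multiplier(scores), 4))  # multiplier applied to base payments
```

A design of this kind lets a hospital that starts from a low baseline earn credit for improvement before it reaches the national benchmark, which is one of the program's stated aims.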
While the long‐term benefits of VBP in terms of improving hospital quality remain to be seen, it is of concern that the Premier HQID program on which VBP is based has led to little improvement beyond secular trends. Moreover, the fairly small amount of payment that will be at risk for hospitals^[@b46]^ suggests that there should be at least some degree of skepticism about its likely impact on quality, and in turn, on patient outcomes. Other concerns have been raised about whether the VBP penalties will be too punitive for hospitals that disproportionately care for poor patients. A recent simulation of the VBP program suggested that despite overall improvement nationally on quality metrics, hospitals in disadvantaged areas would continue to have lower performance levels in comparison to hospitals in better‐resourced areas, leading to significantly higher financial penalties.^[@b47]^ Other studies have shown that safety‐net hospitals are more likely to be penalized under VBP, particularly on measures of patient experience.^[@b48]^ Nevertheless, these prior studies have been modeling exercises, and can only serve as predictive models of the future of VBP. It is possible that the majority of hospitals will be able to respond constructively to the financial incentives created in the ACA.
The Hospital Readmissions Reduction Program
-------------------------------------------
The HRRP is a program that reduces Medicare payments to hospitals with higher‐than‐expected readmission rates for AMI, heart failure, and pneumonia.^[@b49]^ The intent of the program is to place increasing attention on good discharge practices, encourage enhanced communication with outpatient providers, and reduce fragmentation of care. Initial reports from the year leading up to the implementation of the HRRP suggest slight drops in readmission rates nationally,^[@b50]^ which is an encouraging early signal for the potential success of this program.
However, there are concerns about the HRRP as well. For instance, while readmission rates have a high degree of face validity, prior studies have shown that only approximately 27% of readmissions are "preventable."^[@b51]^ Further, there is little relationship between typical measures of hospital quality and readmission rates.^[@b52]--[@b54]^ Another potential concern with the HRRP is the inverse relationship that has been demonstrated between mortality and readmission rates for HF in particular, though the mechanism underlying this relationship is poorly understood.^[@b55]--[@b56]^ Finally, readmissions may be influenced by patient socioeconomic complexity, as well as by community resources,^[@b57]--[@b58]^ which are not adjusted for in the CMS penalty scheme. Perhaps reflective of these issues, early research on the impact of the HRRP demonstrates that large hospitals, teaching hospitals, and safety‐net hospitals are currently receiving the highest penalties.^[@b59]^ Whether or not this will have a significant negative downstream impact on these hospitals is not yet known.
Methodological Issues in Cardiology Quality Measures
====================================================
Defining Metrics
----------------
Although process measures remain minimally correlated with outcomes and may represent clinical concepts that are somewhat inaccessible to patients,^[@b60]^ they do have independent value as a marker of a hospital\'s ability to provide widely accepted, guideline‐based clinical care. To this end, the ACC/AHA Task Force on Performance Measures released a report in 2005 outlining attributes of optimal performance measures, including interpretability, actionability, clear numerator and denominator calculation, and feasibility.^[@b61]^ As the number and complexity of quality metrics proliferate, adhering to these recommendations will be increasingly important.
Using Appropriate Analytics to Test Quality Measures\' Impact
-------------------------------------------------------------
The earliest studies on quality measurement programs were limited by the absence of a comparison group and lack of adjustment for secular trends toward improvement, thus creating the illusion of success when in reality the improvements seen were simply reflective of larger national trends in care. This is particularly important for cardiovascular care, where such trends have resulted in falling mortality rates for AMI and minimal variability in process adherence across U.S. hospitals. Future studies of quality metrics should include a comparison group whenever possible, be sufficiently powered in sample size to overcome the issues of variability and adequately tease apart the low signal‐to‐noise relationship between process and outcomes, and account for secular trends.
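To make the role of a comparison group and adjustment for secular trends concrete, the sketch below is a minimal, purely illustrative difference-in-differences analysis on simulated hospital data; it is not drawn from any study cited here, and the variable names, effect sizes, and use of the Python statsmodels library are assumptions chosen only for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical hospital-quarter panel: half the hospitals join a reporting
# program midway through, and a secular improvement trend affects everyone.
n_hosp, n_quarters = 200, 12
df = pd.DataFrame({
    "hospital": np.repeat(np.arange(n_hosp), n_quarters),
    "quarter": np.tile(np.arange(n_quarters), n_hosp),
})
df["treated"] = (df["hospital"] < n_hosp // 2).astype(int)  # program hospitals
df["post"] = (df["quarter"] >= 6).astype(int)               # program period

secular_trend = 0.3 * df["quarter"]              # improvement shared by all hospitals
true_effect = 1.5 * df["treated"] * df["post"]   # assumed effect of the program
df["adherence"] = 70 + secular_trend + true_effect + rng.normal(0, 3, len(df))

# The treated:post interaction separates the program effect from the shared
# secular trend, which a simple pre/post comparison would misattribute.
model = smf.ols("adherence ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["hospital"]}
)
print(model.params["treated:post"])  # recovers roughly the assumed 1.5-point effect
```

In this toy setting, an uncontrolled pre/post comparison within program hospitals would report the secular improvement plus the program effect, whereas the interaction term recovers only the latter.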
Another analytic issue is determining the appropriate level at which to conduct studies of quality improvement. Analyses at the patient level allow a more granular study, but are difficult to conduct due to issues of privacy and limitations in data collection; hospital‐level analyses allow for ease of measurement but are constrained by the loss of specific information at larger study units and the inability to fully control for confounders. Indeed, some have argued that the absence of a strong, consistent relationship between process and outcome measures may be the result of ecological fallacy in falsely generalizing hospital‐level analyses to the patient level.^[@b62]^ Given such tension, future studies should employ hierarchical analyses when feasible to allow for the adequate examination of patient, hospital, and health system factors in achieving quality.
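As one hedged illustration of what a hierarchical analysis might look like, the sketch below fits a random-intercept model to simulated patient records nested within hospitals, again using the Python statsmodels library; the data, variable names, and effect sizes are invented for illustration, and a binary outcome such as 30-day mortality would instead call for a mixed-effects logistic model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical patient records nested within hospitals: hospitals differ in
# baseline quality, and patient risk varies within each hospital.
n_hosp, per_hosp = 100, 80
hospital = np.repeat(np.arange(n_hosp), per_hosp)
hospital_effect = rng.normal(0, 2, n_hosp)[hospital]   # between-hospital variation
patient_risk = rng.normal(0, 1, len(hospital))         # patient-level confounder
process = rng.normal(0, 1, len(hospital)) + 0.5 * hospital_effect

# A continuous quality score keeps the example simple.
outcome = (50 + 1.0 * process - 2.0 * patient_risk
           + hospital_effect + rng.normal(0, 3, len(hospital)))

df = pd.DataFrame({"hospital": hospital, "process": process,
                   "patient_risk": patient_risk, "outcome": outcome})

# Random intercepts for hospitals let patient-level associations be estimated
# without ascribing between-hospital differences to individual patients.
model = smf.mixedlm("outcome ~ process + patient_risk", df, groups=df["hospital"])
result = model.fit()
print(result.summary())
```

A purely hospital-level regression on the same data would conflate the patient-level association with between-hospital differences, which is the ecological concern raised above.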
Finally, the methods used to assess outcomes themselves are important to consider. In the absence of randomized controlled trials, it is difficult to ensure equal distribution of confounders in comparison groups. Current models, which often rely on administrative data, may have inadequate ability to account for differences in patient population and case mix that may impact hospital performance, and possibly augment the temptation for risk‐averse behavior among physicians and hospital leaders.^[@b63]^
Accounting for "Gaming"
-----------------------
As the pressure to comply with quality measures mounts, there is growing incentive for physicians and hospitals to "game" the system to make their performance appear better. There are a growing number of "exclusions" from quality metrics^[@b64]^ as well as data suggesting that hospitals may game the system by reclassifying patients into or out of publicly reported diagnoses.^[@b65]^ Upcoding, in which hospitals code a higher number of diagnoses to make patients appear "sicker" and, therefore, risk‐adjusted outcomes appear better, also occurs.^[@b66]--[@b67]^ One strategy to combat gaming may be the move to broader outcome metrics, such as all‐cause mortality or all‐cause readmission rates, though these metrics have their own limitations and do not fully deal with issues of upcoding.
Involvement of Industry in Quality Improvement
----------------------------------------------
Finally, it is worth noting that several national efforts at quality measurement, including OPTIMIZE, CRUSADE, and the National Registry of Myocardial Infarction mentioned previously, as well as the Acute Decompensated Heart Failure National Registry (ADHERE), were industry‐sponsored. This deserves particular consideration when quality improvement is measured by the uptake of sponsored products. For example, OPTIMIZE and the Registry to Improve the Use of Evidence‐Based Heart Failure Therapies in the Outpatient Setting (IMPROVE‐HF) trials put in place specific strategies to increase the use of beta‐blockers and implantable cardioverter defibrillators produced by their respective industry sponsors, prior to the inclusion of these strategies in formal ACC/AHA guidelines.
Multiple prior studies have shown that industry‐sponsored studies are more likely to publish results favorable to the sponsor than non‐industry‐sponsored studies,^[@b68]^ and that this "industry bias" is independent of an otherwise expected "risk of bias."^[@b69]^ While the quality of trial methods in industry‐sponsored studies has been shown to be at least as good as, if not better than, that in non‐industry‐sponsored efforts,^[@b68]^ recent studies have shown that knowledge of industry sponsorship negatively influences physicians\' perceptions of study quality and lowers the propensity to change clinical behaviors based on trial findings, regardless of a study\'s true methodologic rigor.^[@b70]^ Such implications may be important when identifying strategies for promoting changes in clinical behavior related to quality improvement.
Conclusions
===========
Quality metrics for cardiovascular disease are here to stay, though their utility in improving patient outcomes remains unclear. Measuring quality does seem to improve quality for processes of care, but unless these process measures are closely linked to patient‐relevant outcomes, such as mortality, hospital readmission, or patient experience, they may not have maximal impact. Public reporting of quality metrics thus far has not been shown to have positive impacts on outcomes, and though reporting may have value in improving transparency and promoting patient trust in the health care system, future programs should be designed with unintended consequences of risk aversion in patient selection in mind. Finally, pay‐for‐performance continues to have tremendous face validity as a quality improvement approach, in spite of its somewhat limited success on a national scale thus far. Future attempts at pay‐for‐performance may benefit from creating incentives that are large enough to influence provider behavior, measuring performance in a minimally complex and clinically relevant manner, and focusing on high‐impact metrics like mortality. However, these too are likely subject to unintended consequences in terms of patient selection.
Quality measurement in cardiovascular care remains an active area for innovation and continued evaluation. More than ever before, the study of the impact of quality improvement efforts on patient outcomes will be crucial to improve cardiovascular health in the coming years.
Dr Joynt was supported by NIH grant 1K23HL109177‐01 from the National Heart, Lung, and Blood Institute. The funder had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the manuscript.
None.
Lucknow vs Gujarat IPL 2022 scorecard: The two new franchises have been the best teams of the tournament so far.
Lucknow Super Giants will take on Gujarat Titans in the 57th league game of the Indian Premier League 2022. The match will be played at the MCA Stadium in Pune.
Both these new franchises have played well in the tournament so far. KL Rahul has been the best batter of the Lucknow side, whereas Avesh Khan is their best bowler with 14 wickets. Gujarat Titans have lost their last two matches, and they rely heavily on their lower order.
Lucknow vs Gujarat IPL 2022 scorecard
Lucknow Super Giants and Gujarat Titans faced each other earlier in IPL 2022, in what was the first game of the season for both teams. Lucknow Super Giants had the worst possible start in that game when KL Rahul was out for a golden duck. Quinton de Kock, Manish Pandey and Evin Lewis also flopped.
Ayush Badoni and Deepak Hooda both scored half-centuries in the lower order, and Lucknow managed to score 158 runs. Mohammed Shami scalped three wickets in the match, whereas Varun Aaron picked up a couple.
Lucknow Super Giants’ bowlers bowled well throughout the innings, but the Gujarat Titans’ lower order saw the team through. Hardik Pandya and David Miller played decent knocks, but Rahul Tewatia’s innings was the game-changer. Tewatia scored 40 runs in just 24 balls to seal the game for the Gujarat Titans by 5 wickets.
Have LSG and GT qualified for the 2022 playoffs?
Lucknow Super Giants and Gujarat Titans have each won eight games in the tournament so far and sit at 16 points. Generally, 16 points are enough to qualify for the playoffs. Neither team has officially qualified yet, but both are very likely to go through.
The winner of this game will officially qualify for the playoffs, and may also seal a top-two spot in the points table rather than just a place in the top four. So, this game is really vital for both sides.
Looking for a mentor? 5 questions to guide you
Picking a mentor to help guide you throughout your career — especially as you’re starting out in real estate — can make a world of difference. If you’re looking around for the right person, here are a few things to consider.
So, you’ve passed your examinations and received your license — now what? When you first start out your career in this industry, you’ll find there’s a lot more you’ll need to learn.
Learning about the real estate business from an experienced and trusted adviser can lay a strong foundation toward a path to success. A mentor is a great resource you can run ideas by — someone who can help you navigate challenging situations, assist in problem-solving and of course, celebrate your successes along the way.
Sometimes, the role of a mentor is organic, and two people will naturally form this type of relationship. Other times, the decision of choosing a mentor requires more purposeful consideration.
I’m lucky to have been both a mentee and now a mentor, and I have found both roles to be incredibly rewarding. If you are currently searching for a mentor, here are some points to consider when selecting the right person for your real estate career.
1. Who do you admire?
When considering a mentor, you have to first ask yourself — who do you admire in this business?
You should select a mentor who you look up to for many reasons, including ethics, integrity, passion for the business, drive and determination. This part of the selection process should be the most organic.
2. Do your views and philosophies align?
Most successful mentor-mentee relationships result in close friendships and lifelong bonds. As with any relationship, you can’t force what should come naturally in order to make a partnership work.
Now that you have identified your potential mentor, it’s time to take a closer look at how well you align on your views and philosophies about the business, work ethic and morals. Sometimes this synergy reveals itself over time, but it can also be addressed at the outset of the relationship.
Being open and honest is the best way to move forward. That way, you’ll discover if there are any fundamental differences that could be a detriment to the partnership later down the road.
3. Does this person have time to mentor me?
Once you have selected someone who you believe will be the perfect mentor, you have to ask some logistical questions. One of the most important things that you will need from mentors is their time.
Typically, people who make great mentors have full schedules and a boatload of responsibilities. That said, you want to make sure they can realistically meet your expectations and needs.
When having the initial conversation with your mentor, make sure to identify the time you both are willing to take for the relationship. This could be a weekly call, Zoom, coffee, dinner, etc. Just make sure you establish a cadence and a means of communication that works for both of you — something that all parties will be able to honor.
4. Does this person know your goals?
Once you are ready to ask someone to mentor you, there are a few things to discuss during the initial conversation. First, consider their time. The person you are asking is likely highly successful in their career, meaning their time is also in demand. Make sure to show that you appreciate them considering your request.
To start off the relationship, ask potential mentors if they can make time for a meeting in the coming weeks. It is also important to clearly explain the guidance you are seeking. Describe the kind of advice you need and your long-term goals.
Think about this, and articulate what you are seeking from the beginning. Finally, confirm with your potential mentor that you are prepared to commit the time and effort to make the most of their advice.
5. Can you return the favor?
Don’t forget — the relationship between a mentor and a mentee is a two-way street. Although typically new to the profession, mentees can bring fresh eyes, new ideas and creative thinking to improve systems and processes.
When speaking with a mentor, be prepared to share areas where you think you can bring value to them as well. Perhaps you have extensive knowledge of new social media practices, a background in marketing or writing, or in-depth knowledge of accounting or lending practices.
Don’t be shy to offer up suggestions for improvement or ideas for consideration. In my experience, I have always enjoyed working with people who bring innovative thoughts and ideas to the table and that is what helps us all move the real estate industry forward.
A mentor and a mentee can each have a profound, mutually beneficial impact on the other. While a mentee can gain valuable experience from a trusted adviser, a mentor can also enjoy a fresh perspective on the business, with innovative thoughts and ideas.
Santiago Arana is a managing partner at The Agency, in Los Angeles. Connect with him on Instagram.
The Estonian chief of defense met with key leaders and Soldiers of 1st Battalion, 68th Armor Regiment, to discuss lessons learned during the Fort Carson unit’s recent nine-month rotation in support of Atlantic Resolve.
In addition to receiving feedback from the 3rd Armored Brigade Combat Team, 4th Infantry Division, Soldiers on their experiences while training with the Estonian army, Gen. Riho Terras’s Nov. 20, 2017, visit included marksmanship training with members of the 10th Special Forces Group (Airborne) and touring Fort Carson facilities.
Terras spoke about the importance of having the 1st Bn., 68th Armor Reg., Soldiers training with his organization.
“Eight tanks are better than four,” said Terras. “I feel that the U.S. Soldiers’ presence played a crucial role in the deterrence of any possible enemy. I think us working together shows that we are a stronger force, and a force to be reckoned with.”
Soldiers from 1st Bn., 68th Armor Reg., conducted numerous training exercises in the vicinity of the Central Training Area near Tapa, Estonia, for nearly four months.
Lt. Col. Jonathan S. Kluck, commander, 1st Bn., 68th Armor Reg., reminisced about the time he spent training in Estonia.
“While we were in Estonia … synchronizing our elements was a challenge at first, but they were very welcoming and committed to the mission,” he said.
Capt. Todd Pepino, commander, Company C, 1st Bn., 68th Armor Reg., also shared what he experienced while training with Estonian soldiers.
“We worked alongside our Estonian partners conducting live-fire exercises,” said Pepino. “This was a great opportunity to learn and understand how they operate.”
Company C executed a variety of training, from individual marksmanship to armored vehicle maneuvers.
Monetary policy report to the Congress.
Report submitted to the Congress on February 20, 1990, pursuant to the Full Employment and Balanced Growth Act of 1978.(1)
MONETARY POLICY AND THE ECONOMIC OUTLOOK FOR 1990
The U.S. economy recorded its seventh consecutive year of expansion in 1989. Although growth was slower than in the preceding two years, it was sufficient to support the creation of 2 1/2 million jobs and to hold the unemployment rate steady at 5 1/4 percent, the lowest reading since the early 1970s. On the external front, the trade and current account deficits shrank further in 1989. And while inflation remained undesirably high, the pace was lower than many analysts--and, indeed, most members of the Federal Open Market Committee (FOMC)--had predicted, in part because of the continuing diminution in longer-range inflation expectations.
In 1989, monetary policy was tailored to the changing contours of the economic expansion and to the potential for inflation. Early in the year, as for most of 1988, the Federal Reserve tightened money market conditions to prevent pressures on wages and prices from building. Market rates of interest rose relative to those on deposit accounts, and unexpectedly large tax payments in April and May drained liquid balances, restraining the growth of the monetary aggregates in the first half of the year. By May, M2 and M3 lay below the lower bounds of the annual target ranges established by the FOMC.
Around midyear, risks of an acceleration in inflation were perceived to have diminished as pressures on industrial capacity had moderated, commodity prices had leveled out, and the dollar had strengthened on exchange markets, reinforcing the signals conveyed by the weakness in the monetary aggregates. In June, the FOMC began a series of steps, undertaken with care to avoid excessive inflationary stimulus, that trimmed 1 1/2 percentage points from short-term interest rates by year-end. Longer-term interest rates moved down by a like amount, influenced by both the System's easing and a reduction in inflation expectations.
Growth of M2 rebounded to end the year at about the midpoint of the 1989 target range. Growth of M3, however, remained around the lower end of its range, as a contraction of the thrift industry, encouraged by the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA), reduced needs to tap M3 sources of funds. The primary effect of the shrinkage of the thrift industry's assets was a rechanneling of funds in mortgage markets, rather than a reduction in overall credit availability; growth of the aggregate for nonfinancial sector debt that is monitored by the FOMC was just a bit slower in the second half than in the first, and this measure ended the year only a little below the midpoint of its range.
Thus far this year, the overnight rate on federal funds has held at 8 1/4 percent, but other market rates have risen. Increases of as much as 1/2 percentage point have been recorded at the longer end of the maturity spectrum. The bond markets responded to indicators suggesting a somewhat greater-than-anticipated buoyancy in economic activity--which may have both raised expected real returns on investment and renewed some apprehensions about the outlook for inflation. The rise in yields occurred in the context of a general runup in international capital market yields, which appears to have been in part a response to emerging opportunities associated with the opening of Eastern Europe; this development had particularly notable effects on the exchange value of the West German mark, which rose considerably relative to the dollar, the yen, and other non-European Monetary System currencies.
Monetary Policy for 1990
The Federal Open Market Committee is committed to the achievement, over time, of price stability. The importance of this objective derives from the fact that the prospects for long-run growth in the economy are brightest when inflation need no longer be a material consideration in the decisions of households and firms. The members recognize that certain short-term factors--notably a sharp increase in food and energy prices--are likely to boost inflation early this year, but they anticipate that these factors will not persist. Under these circumstances, policy can support further economic expansion without abandoning the goal of price stability.
To foster the achievement of those objectives, the Committee has selected a target range of 3 to 7 percent for M2 growth in 1990. Growth in M2 may be more rapid in 1990 than in recent years and yet be consistent with some moderation in the rate of increase in nominal income and restraint on prices; in particular, M2 may grow more rapidly than nominal GNP in the first part of this year in lagged response to last year's interest rate movements. Eventually, however, slower M2 growth will be required to achieve and maintain price stability (table 1).
The Committee reduced the M3 range to 2 1/2 to 6 1/2 percent to take account of the effects of the restructuring of the thrift industry, which is expected to continue in 1990. A smaller proportion of mortgages is likely to be held at depository institutions and financed by elements in M3; thrift institution assets should continue to decline, as some solvent thrift institutions will be under pressure to meet capital standards and insolvent thrift institutions will continue to be shrunk and closed, with a portion of their assets carried, temporarily, by the government. While some of the assets shed by thrift institutions are expected to be acquired by commercial banks, overall growth in the asset portfolios of banks is expected to be moderate, as these institutions exercise caution in extending credit. An increase in lender--and borrower--caution more generally points to some slowing in the pace at which nonfinancial sectors take on debt relative to their income in 1990. In particular, recent developments suggest that leveraged buyouts and other transactions that substitute debt for equity in corporate capital structures will be noticeably less important in 1990 than in recent years. Moreover, a further decline in the federal sector's deficit is expected to reduce credit growth this year. In light of these considerations, the Committee reduced the monitoring range for debt of the nonfinancial sectors to 5 to 9 percent.
The setting of targets for money growth in 1990 is made more difficult by uncertainty about developments affecting thrift institutions. The behavior of M3 and, to a more limited extent, M2 is likely to be affected by such developments, but there is only limited basis in experience to gauge the likely effect. In addition, in interpreting the growth of nonfinancial debt, the Committee will have to take into account the amount of Treasury borrowing (recorded as part of the debt aggregate) used to carry the assets of failed thrift institutions, pending their disposal. With these questions adding to the usual uncertainties about the relationship among movements in the aggregates and output and prices, the Committee agreed that, in implementing policy, they would need to continue to consider, in addition to the behavior of money, indicators of inflationary pressures and economic growth as well as developments in financial and foreign exchange markets.
Economic Projections for 1990
The Committee members, and other Reserve Bank presidents, expect that growth in the real economy will be moderate during 1990. Most project real GNP growth over the four quarters of the year to be between 1 3/4 and 2 percent--essentially the same increase as in 1989, excluding the bounceback in farm output after the 1988 drought. It is expected that this pace of expansion will be reflected in some easing of pressures on domestic resources; the central tendency of forecasts is for an unemployment rate of 5 1/2 to 5 3/4 percent in the fourth quarter (table 2).
Certain factors have caused an uptick in inflation early this year. Most notably, prices for food and energy increased sharply as the year began, reflecting the effect of the unusually cold weather in December. However, these run-ups should be largely reversed in coming months, and inflation in food and energy prices for the year as a whole may not differ much from increases in other prices.
Given the importance of labor inputs in determining the trend of overall costs, a deceleration in the cost of labor inputs is an integral part of any solid progress toward price stability. Nominal wages and total compensation have grown relatively rapidly during the past two years, while increases in labor productivity have diminished. With prices being constrained by domestic and international competition, especially in goods markets, profit margins have been squeezed to low levels. A restoration of more normal margins ultimately will be necessary if businesses are to have the wherewithal and the incentive to maintain and improve the stock of plant and equipment.
Unfortunately, the near-term prospects for a moderation in labor cost pressures are not favorable. Compensation growth is being boosted in the first half of 1990 by an increase in social security taxes and a hike in the minimum wage. The anticipated easing of pressures in the labor market should help produce some moderation in the pace of wage increases in the second half of 1990, but the Committee will continue to monitor closely the growth of labor costs for signs of progress in this area.
Finally, the recent depreciation of the dollar likely will constitute another impetus to near-term price increases, reversing the restraining influence exerted by a strong dollar through most of last year. Prices of imported goods, excluding oil, increased in the fourth quarter after declining through the first three quarters of 1989. The full effect of this upturn likely will not be felt on the domestic price level until some additional time has passed.
Despite these adverse elements in the near-term picture, the Committee believes that progress toward price stability can be achieved over time, given the apparently moderate pace of activity. In terms of the consumer price index, most members expect an increase of between 4 and 4 1/2 percent, compared with the 4.5 percent advance recorded in 1989.
Relative to the Committee, the Administration currently is forecasting more rapid growth in real and nominal GNP. At the same time, the Administration's projection for consumer price inflation is at the low end of the Committee's central-tendency range. In its Annual Report, the Council of Economic Advisers argues that, if nominal GNP were to grow at a 7 percent annual rate this year--as the Council is projecting--then M2 could exceed its target range, particularly if interest rates fall as projected in the Administration forecast. As suggested above, monetary relationships cannot be predicted with absolute precision, but the Council's assessment is reasonable. And, although most Committee members believe that growth in nominal GNP more likely will be between 5 1/2 and 6 1/2 percent, a more rapid expansion in nominal income would be welcome if it promised to be accompanied by a declining path for inflation in 1990 and beyond.
THE PERFORMANCE OF THE ECONOMY IN 1989
Real GNP grew 2 1/2 percent over the four quarters of 1989, 2 percent after adjustment for the recovery in farm output from the drought losses of the prior year. This rate of growth of GNP constituted a significant downshifting in the pace of expansion from the unsustainably rapid rates of 1987 and 1988, which had carried activity to the point that inflationary strains were beginning to become visible in the economy. As the year progressed, clear signs emerged that pressures on resource utilization were easing, particularly in the industrial sector. Nonetheless, the overall unemployment rate remained at 5.3 percent, the lowest reading since 1973, and inflation remained at 4 1/2 percent despite the restraining influence of a dollar that was strong for most of the year.
The deceleration in business activity last year reflected, to some degree, the monetary tightening from early 1988 through early 1989 that was undertaken with a view toward damping the inflation forces. Partly as a consequence of that tightening, the U.S. dollar appreciated in the foreign exchange markets from early 1988 through mid-1989, contributing to a slackening of foreign demand for U.S. products. At the same time, domestic demand also slowed, more for goods than for services. Reflecting these developments, the slowdown in activity was concentrated in the manufacturing sector: Factory employment, which increased a total of 90,000 over the first three months of 1989, declined 195,000 over the remainder of the year, and growth in manufacturing production slowed from 5 1/2 percent in 1988 to only 1 3/4 percent last year. Employment in manufacturing fell further in January of this year, but that decline was largely attributable to temporary layoffs in the automobile industry, and most of the affected workers have since been recalled.
As noted above, the rate of inflation was about the same in 1989 as it had been in the preceding two years. While the appreciation of the U.S. dollar through the first half of the year helped to hold down the prices of imported goods, the high level of resource utilization continued to exert pressure on wages and prices. In that regard, the moderation in the expansion of real activity during 1989 was a necessary development in establishing an economic environment that is more conducive to progress over time toward price stability.
The Household Sector
Household spending softened significantly in 1989, with a marked weakening in the demand for motor vehicles and housing. Real consumer spending on goods and services increased 2 1/4 percent over the four quarters of 1989, 1 1/2 percentage points less than in 1988. Growth in real disposable income slowed last year, but continued to outstrip growth in spending, and, as a result, the personal saving rate increased to 5 3/4 percent in the fourth quarter of 1989.
The slackening in consumer demand was concentrated in spending on goods. Real spending on durable goods was about unchanged from the fourth quarter of 1988 to the fourth quarter of 1989--after jumping 8 percent in the prior year--chiefly reflecting a slump in purchases of motor vehicles. Spending on nondurable goods also decelerated, increasing only 1/2 percent in 1989 after an advance of 2 percent in 1988. The principal support to consumer spending came from continued large gains in outlays for services. Spending on medical care moved up 7 1/2 percent in real terms last year, and now constitutes 11 percent of total consumption expenditures--up from 8 percent in 1970. Outlays for other services rose 3 1/4 percent, with sizable increases in a number of categories.
Sales of cars and light trucks fell 3/4 million units in 1989, to 14 1/2 million. Most of the decline reflected reduced sales of cars produced by U.S.-owned automakers; a decline in sales of imported automobiles was about offset by an increase in sales of foreign nameplates produced in U.S. plants. The slowing in sales of motor vehicles was most pronounced during the fourth quarter of 1989, reflecting a "payback" for sales that had been advanced into the third quarter and a relatively large increase in sticker prices on 1990-model cars. Although part of this increase reflected the inclusion of additional equipment--notably the addition of passive restraint systems to many models--consumers nevertheless reacted adversely to the overall increase in prices. Beyond these influences, longer-run factors appear to have been damping demand for autos and light trucks during 1989; in particular, the robust pace of sales earlier in the expansion seems to have satisfied demand pent up during the recessionary period of the early 1980s. The rebuilding of the motor vehicle stock suggests that future sales are likely to depend more heavily on replacement needs.
Residential investment fell in real terms through the first three quarters of 1989, and with only a slight upturn in the fourth quarter, expenditures decreased 6 percent on net over the year. Construction was weighed down throughout 1989 by the overbuilding that occurred in some locales earlier in the decade. Vacancy rates were especially high for multifamily rental and condominium units. In the single-family sector, affordability problems constrained demand, dramatically so in those areas in which home prices had soared relative to household income.
Mortgage interest rates declined more than a percentage point, on net, between the spring of 1989 and the end of the year, helping to arrest the contraction in housing activity; however, the response to the easing in rates appears to have been muted somewhat by a reduction in the availability of construction credit, likely reflecting, in part, the tightening of regulatory standards in the thrift industry and the closing of several insolvent institutions. Exceptionally cold weather also hampered building late in the year, but a sharp December drop in housing starts was followed by a record jump in activity last month.
The Business Sector
Business fixed investment, adjusted for inflation, increased only 1 percent at an annual rate during the second half of 1989 after surging 7 3/4 percent during the first half. Although competitive pressures forced many firms to continue seeking efficiency gains through capital investment, the deceleration in overall economic growth made the need for capacity expansion less urgent, and shrinking profits reduced the availability of internal finance.
Spending on equipment moved up briskly during the first half of 1989, with particularly notable gains in outlays for information-processing equipment -- computers, photocopiers, telecommunications devices, and the like. However, equipment outlays were flat in the second half of the year; growth in the information processing category slowed sharply, and spending in most other categories was either flat or down. Purchases of motor vehicles dropped sharply in the fourth quarter from the elevated levels of the second and third quarters. There were a few exceptions to the general pattern of weakness during the second half. Spending on aircraft was greater in the second half of 1989 than in the first half, and would have increased still more had it not been for the strike at Boeing. Outlays for tractors and agricultural machinery moved up smartly; spending on farm equipment has been buoyed by the substantial improvements over the past several years in the financial health of the agricultural sector. Over the four quarters of 1989, total spending on equipment increased 6 percent in real terms -- about 1 percentage point below the robust pace of 1988.
Business spending for new construction edged down 1/2 percent in real terms during 1989 -- the second consecutive yearly decline. Commercial construction, which includes office buildings, was especially weak; vacancy rates for office space remain at high levels in many areas, lowering prospective returns on new investment. Outlays for drilling and mining, which had dropped 20 percent over the four quarters of 1988, moved down further in the first quarter of 1989; later in the year, drilling activity revived as crude oil prices firmed. The industrial sector was the most notable exception to the overall pattern of weakness: Real outlays increased 11 percent in 1989, largely because of construction that had been planned in 1987 and 1988 when capacity in many basic industries tightened substantially and profitability was improving sharply.
As noted above, the slowdown in investment spending during the second half of last year likely was exacerbated by the deterioration in corporate cash flow. Before-tax operating profits of nonfinancial corporations dropped 12 percent from the fourth quarter of 1988 to the third quarter of 1989 (latest data available); after-tax profits were off in about the same proportion. Reflecting the increased pressures from labor and materials costs -- and a highly competitive domestic and international environment -- before-tax domestic profits of nonfinancial corporations as a share of gross domestic product declined to an average level of 8 percent during the first three quarters of 1989, the lowest reading since 1982. At the same time, taxes as a share of before-tax operating profits increased to an estimated 44 percent in the first three quarters of 1989; since 1985, this figure has retraced a bit more than half of its decline from 54 percent in 1980.
Nonfarm business inventory investment averaged $21 billion in 1989. Although the average pace of accumulation last year was slower than in 1988, the pattern across sectors was somewhat uneven. Some of the buildup in stocks took place in industries -- such as aircraft -- where orders and shipments have been strong for some time now. But inventories in some other sectors became uncomfortably heavy at times and precipitated adjustments in orders and production. The clearest area of inventory imbalance at the end of the year was at auto dealers, where stocks of domestically produced automobiles were at 1.7 million units in December -- almost three months' supply at the sluggish fourth-quarter sales pace. In response, the domestic automakers implemented a new round of sales incentives and cut sharply the planned assembly rate for the first quarter of 1990. Elsewhere in the retail sector, inventories moved up substantially relative to sales at general merchandise outlets. Overall, however, most sectors of the economy have adjusted fairly promptly to the deceleration in sales and appear to have succeeded in preventing serious overhangs from developing.
The Government Sector
Budgetary pressures continued to restrain the growth of purchases at all levels of government. At the federal level, purchases fell 3 percent in real terms over the four quarters of 1989, with lower defense purchases accounting for the bulk of the decline. Nondefense purchases also declined in real terms from the fourth quarter of 1988 to the fourth quarter of 1989; increases in such areas as the space program and drug interdiction were more than offset by general budgetary restraint that imposed real reductions on most other discretionary programs.
In terms of the unified budget, the federal deficit in fiscal year 1989 was $152 billion, slightly smaller than in 1988. Growth in total federal outlays, which include transfer payments and interest costs as well as purchases of goods and services, picked up a bit in fiscal year 1989. Outlays were boosted at the end of the fiscal year by the initial $9 billion of spending by the Resolution Trust Corporation. On the revenue side of the ledger, growth in federal receipts also increased in fiscal 1989. The acceleration occurred in the individual income tax category, but strong increases also were recorded in corporate and social security tax payments.
Purchases of goods and services at the state and local level increased 2 1/2 percent in real terms over the four quarters of 1989, down more than a percentage point from the average pace of the preceding five years. Nonetheless, there were some areas of growth. Spending for educational buildings increased, and employment in the state and local sector rose 350,000 over the year, largely driven by a pickup in hiring by schools. Despite the overall slowdown in the growth of purchases, the budgetary position of the state and local sector deteriorated further over the year; the annualized deficit of operating and capital accounts, which excludes social insurance funds, increased $6 billion over the first three quarters of 1989 and appears to have worsened further in the fourth quarter.
The External Sector
The U.S. external deficits improved somewhat in 1989, but not by as much as in 1988. On a balance-of-payments basis, the deficit on merchandise trade fell from an annual rate of $128 billion in the fourth quarter of 1988 (and $127 billion for the year as a whole) to $114 billion in the first quarter of 1989. Thereafter, there was no further net improvement. The appreciation in the foreign exchange value of the dollar between early 1988 and mid-1989 appears to have played an important role in inhibiting further progress on the trade front. During the first three quarters of 1989, the current account, excluding the influence of capital gains and losses that are largely caused by currency fluctuations, showed a deficit of $106 billion at an annual rate -- somewhat below the deficit of $124 billion in the comparable period of 1988.
Measured in terms of the other Group of Ten (G-10) currencies, the foreign exchange value of the U.S. dollar in December 1989 was about 3 percent above its level in December 1988, but the dollar has moved lower thus far in 1990. In real terms, the net appreciation of the dollar during 1989 in terms of the other G-10 currencies was about 5 percent as consumer prices rose somewhat faster here than they did abroad, on average. Over the year, the dollar moved lower on balance against the currencies of South Korea, Singapore, and especially Taiwan. From a longer perspective, the modest uptrend on balance in the dollar over the past two years marked a sharp departure from the substantial weakening seen during the 1985-87 period.
The behavior of the dollar differed greatly between the two halves of 1989. In the first half, the dollar appreciated 12 percent in terms of the other G-10 currencies, while depreciating against the currencies of South Korea and Taiwan. The dollar fluctuated during the summer, and later in the year unwound most of the prior appreciation, as U.S. interest rates eased relative to rates abroad and in response to concerted intervention in exchange markets in the weeks immediately after the September meeting of Group of Seven officials and to events in Eastern Europe. In the second half of the year, the dollar rose against the currencies of South Korea and Taiwan while depreciating in terms of the Singapore dollar. Over the course of 1989, the dollar appreciated nearly 16 percent against the Japanese yen and 14 percent against the British pound, but it depreciated slightly against the German mark, the Canadian dollar, and most other major currencies.
On a GNP basis, merchandise exports increased about 11 percent in real terms over the four quarters of 1989--roughly 4 percentage points less than in 1988. This deceleration took place despite continued strong growth in economic activity in most foreign industrial countries (with the exception of Canada and the United Kingdom), and appears to have reflected, in large part, the effect on U.S. competitiveness of the dollar's appreciation and the more rapid U.S. inflation over 1988 and much of 1989. Exports were also depressed in the fourth quarter of 1989 by several special factors, including the Boeing strike. The volume of agricultural exports increased about 11 percent in 1989--a bit faster even than the robust pace of 1988. The value of agricultural exports rose much less, however, as agricultural export prices reversed the drought-induced increases of the previous year.
Merchandise imports excluding oil expanded about 7 percent in real terms during 1989, with much of the rise accounted for by imports of computers. Imports of oil increased 6 percent from the fourth quarter of 1988 to the fourth quarter of 1989, to a rate of 8.3 million barrels per day. At the same time, the average price per barrel increased almost 40 percent, and the nation's bill for foreign oil jumped 45 percent.
The counterpart of the current account deficit of $106 billion at an annual rate over the first three quarters of 1989 was a recorded net capital inflow of about $60 billion at an annual rate and an unusually large statistical discrepancy, especially in the second quarter. More than half of the recorded net inflow of capital reflected transactions in securities, as foreign private holdings of U.S. securities rose nearly $50 billion (half of the increase being in holdings of U.S. Treasury securities), while U.S. holdings of foreign securities increased a bit less than $20 billion. Net direct investment accounted for another substantial portion of the inflow; foreign direct investment holdings in the United States rose more than $40 billion, and U.S. holdings abroad rose only half as much. Over the first three quarters of 1989, foreign official assets in the United States increased almost $15 billion, but this increase was more than offset by the increase in U.S. official holdings of assets abroad, largely associated with U.S. intervention operations to resist the dollar's strength.
Labor Markets
Employment growth slowed in the second half of 1989; nonetheless, nonfarm payrolls increased nearly 2 1/2 million during the year. The bulk of this expansion occurred in the service-producing sector. By contrast, the manufacturing sector shed 100,000 jobs. These job losses were more than accounted for by declines in the durable goods industries and appeared to reflect the slump in auto sales, the weakening in capital spending, and the effects of a stronger dollar on exports and imports.
Despite the slowdown in new job creation, the overall balance of supply and demand in the labor market remained steady over the year. The civilian unemployment rate, which had declined about 1/2 percentage point over the twelve months of 1988, finished 1989 at 5.3 percent--unchanged from twelve months earlier. Moreover, there was no increase in the number of "discouraged" workers--those who say they would re-enter the labor force if they thought they could find a job. Nor was there any net increase in workers who accepted part-time employment when they would have preferred full-time work. The proportion of the civilian population with jobs reached a historic high.
Reflecting the tightness of labor markets and the persistence of inflation expectations in the range of 4 to 5 percent, according to surveys, the employment cost index for wages and salaries in nonfarm private industry increased 4 1/4 percent over the twelve months of 1989--about the same as in 1988. Benefit costs continued to rise more rapidly than wages and salaries last year, with health insurance costs remaining a major factor; nonetheless, the rate of growth in overall benefit costs slowed in 1989, in part because of a smaller increase in social security taxes than in 1988. Total compensation--including both wages and salaries and benefits--rose 4 3/4 percent during 1989. Compensation growth in the service-producing sector--at 5 percent--continued to outpace the gain in the goods-producing sector by about 3/4 percentage point.
A slowdown in the growth of productivity often accompanies a softening in the general economy, and productivity gains were lackluster in 1989. Output per hour in the private nonfarm business sector increased only 1/2 percent over the four quarters of the year--1 percentage point below the rate of increase in 1988. In the manufacturing sector, productivity gains during the first half of 1989 kept pace with the 1988 average of 3 percent; in the second half, however, productivity growth slowed to an annual rate of 2 1/4 percent. Reflecting both the persistent growth in hourly compensation and the disappointing developments in productivity, unit labor costs in private nonfarm industry rose 5 percent over the four quarters of 1989--the largest increase since 1982.
Price Developments
Inflation in consumer prices remained in the neighborhood of 4 1/2 percent for the third year in a row, as the level of economic activity was strong and continued to exert pressures on available resources. During the first half of the year, overall inflation was boosted by a sharp run-up in energy prices and a carry-over from 1988 of drought-related increases in food prices. However, inflation in food prices slowed during the second half, and energy prices retraced about a third of the earlier run-up. Prices for imported goods excluding oil were little changed over 1989, on net, and acted as a moderating influence on consumer price inflation.
Food prices increased 5 1/2 percent at the retail level, slightly more than in 1988 when several crops were severely damaged by drought. Continued supply problems in some agricultural markets in 1989--notably a poor wheat crop and a shortfall in dairy production--likely prevented a deceleration from the drought-induced rate of increase in 1988. At the same time, increases in demand, including sharp increases in exports of some commodities, also appear to have played a role. Still another impetus to inflation in the food area last year evidently came from the continuing rise in processing and marketing costs.
Consumer energy prices surged 17 percent at an annual rate during the first six months of 1989, before dropping back 6 percent in the second half. During the first half of the year, retail energy prices were driven up by increases in the cost of crude oil. The increase in gasoline prices at midyear was exaggerated by the introduction of tighter standards governing the composition of gasoline during summer months. Gasoline prices eased considerably in the second half, reflecting a dip in crude oil prices and the expiration of the summertime standards. Taking the twelve months of 1989 as a whole, the increase in retail energy prices came to a bit more than 5 percent. Heating oil prices jumped sharply at the turn of the year, reflecting a surge in demand caused by December's unusually cold weather. The spike in heating fuel prices largely reversed itself in spot markets during January of this year, but crude oil prices remained at high levels.
Consumer price increases for items other than food and energy remained at about 4 1/2 percent in 1989. Developments in this category likely would have been less favorable had the dollar not been appreciating in foreign exchange markets through the first half of 1989. The prices of consumer commodities excluding food and energy decelerated sharply, and this slowdown was particularly marked for some categories in which import penetration is high, including apparel and recreational equipment. Given the dollar's more recent depreciation, however, the moderating effect of import prices on overall inflation may be diminishing. Indeed, prices for imported goods excluding oil turned up in the fourth quarter of 1989, after declining earlier in the year. In contrast to goods prices, the prices of nonenergy services--which make up half of the overall consumer price index--increased 5 1/4 percent in 1989, 1/4 percentage point more than in 1988. The pickup in this category was led by rents, medical services, and entertainment services.
At the producer level, prices of finished goods increased 7 1/2 percent at an annual rate during the first half--almost twice the pace of 1988 -- before slowing to an annual rate of increase of 2 1/2 percent over the second half. In large part, developments in this sector reflected the same sharp swings in energy prices that affected consumer prices. At earlier stages of processing, the index for intermediate materials excluding food and energy decelerated sharply during the first half of the year and then edged down in the second half. For the year as a whole, this index registered a net increase of only 1 percent, compared with more than 7 percent in 1988. The sharp deceleration in this category appears to have reflected a relaxation of earlier pressures on capacity in the primary processing industries, and the influence of the rising dollar through the first half of last year. Also consistent with the weakening in the manufacturing sector and the strength of the dollar, the index for crude nonfood materials excluding energy declined 3 3/4 percent over the year, and spot prices for industrial metals moved sharply lower during the year, in part because of large declines for steel scrap, copper, and aluminum.
MONETARY AND FINANCIAL DEVELOPMENTS DURING 1989
In 1989, the Federal Reserve continued to pursue a policy aimed at containing and ultimately eliminating inflation while providing support for continued economic expansion. In implementing that policy, the Federal Open Market Committee maintained a flexible approach to monetary targeting, with policy responding to emerging conditions in the economy and financial markets as well as to the growth of the monetary aggregates relative to their established target ranges. This flexibility has been necessitated by the substantial variability in the short-run relationship between the monetary aggregates and economic performance; however, when viewed over a longer perspective, those aggregates are still useful in conveying information about price developments.
As the year began, monetary policy was following through on a set of measured steps begun a year earlier to check inflationary pressures. By then, however, evidence of a slackening in aggregate demand, along with sluggish growth of the monetary aggregates, suggested that the year-long rise in short-term interest rates was noticeably restraining the potential for more inflation. But, after an increase of 1/2 percentage point in the discount rate at the end of February, the Federal Reserve took no further policy action until June. Over the balance of 1989, the Federal Reserve moved toward an easing of money market conditions, as indications mounted of slack in demand and lessened inflation pressures. The easing in reserve availability induced declines in short-term interest rates of 1 1/2 percentage points; money growth strengthened appreciably, and M2 was near the middle of its target range by the end of 1989. The level of M3, on the other hand, remained around the lower bound of its range, with its weakness mostly reflecting the shifting pattern of financial intermediation as the thrift industry retrenched. The growth of nonfinancial debt was trimmed to 8 percent in 1989, about in line with the slowing in the growth of nominal GNP, and ended the year at the midpoint of its monitoring range.
Implementation of Monetary Policy
In the opening months of the year, the Federal Open Market Committee, seeking to counter a disquieting intensification of inflationary pressures, extended the move toward restraint that had begun almost a year earlier. Policy actions in January and February, restraining reserve availability and raising the discount rate, prompted a further increase of 3/4 percentage point in short-term market interest rates. Longer-term rates, however, moved up only moderately; the tightening apparently had been widely anticipated and was viewed as helping to avoid an escalation in underlying inflation. Real short-term interest rates--nominal rates adjusted for expected price inflation--likely moved higher, though remaining below peak levels earlier in the expansion; these gains contributed to a strengthening of the foreign exchange value of the dollar over this period, while the growth of the monetary aggregates slowed as the additional policy restraint reinforced the effects of actions in 1988.
As evidence on prospective trends in inflation and spending became more mixed in the second quarter, the Committee refrained from further tightening and in June began to ease pressures on reserve markets. As the information on the real economy, along with the continued rise in the dollar, suggested that the outlook for inflation was improving, most long-term nominal interest rates fell as much as a percentage point from their March peaks; the yield on the bellwether thirty-year Treasury bond moved down to about 8 percent by the end of June. The decline in interest rates outstripped the reduction in most measures of investors' inflation expectations, so that estimated real interest rates fell from their levels earlier in the year. These declines in nominal and real interest rates, however, were not accompanied by declines in the foreign exchange value of the dollar. Rather, because of better-than-expected trade reports and political turmoil abroad, the dollar strengthened further.
In July, when the FOMC met for its semiannual review of the growth ranges for money and credit, M2 and M3 lay at, or a bit below, the lower bounds of their target cones. This weakness, reinforcing the signals from prices and activity, contributed to the Committee's decision to take additional easing action in reserve markets. The Committee reaffirmed the existing annual target ranges for the monetary and debt aggregates and tentatively retained those ranges for the next year, since they were likely to encompass money growth that would foster further economic expansion and moderation of price pressures in 1990.
Late in the summer, longer-term interest rates turned higher, as several releases of economic data suggested reinvigorated inflationary pressures. With growth in the monetary aggregates rebounding, the Committee kept reserve conditions about unchanged until the direction of the economy and prices clarified.
Beginning in October, amid indications of added risks of a weakening in the economic expansion, the FOMC reduced pressures on reserve markets in three separate steps, which nudged the federal funds rate down to around 8 1/4 percent by year-end, about 1 1/2 percentage points below its level when incremental tightening ceased in February. Over those ten months, other short- and long-term nominal interest rates fell about 1 to 1 1/4 percentage points; and most major stock price indexes reached record highs at the turn of the year, more than recovering the losses that occurred on October 13. Reflecting some reduction in inflation anticipations over the same period, estimated short- and long-term real interest rates fell somewhat less than nominal rates, dropping probably about 1/2 to 3/4 percentage point. Still, most measures of short- and long-term real interest rates remained well above their trough levels of 1986 and 1987 -- levels that had preceded rapid growth in the economy and a buildup of inflationary pressures.
Over the last three months of the year and into January 1990, the foreign exchange value of the dollar declined substantially from its high, which was reached around midyear and largely sustained through September. The dollar fell amid concerted intervention undertaken by the G-7 countries in the weeks immediately after a meeting of the finance ministers and central bank governors of these countries in September. The dollar continued to decline in response to the easing of short-term interest rates on dollar assets and increases in rates in Japan and Germany. The German currency appreciated particularly sharply as developments in Eastern Europe were viewed as favorable for the West German economy, attracting global capital flows. Rising interest rates in Germany likely contributed to an increase in bond yields in the United States early in 1990, even as U.S. short-term rates remained essentially unchanged. More important, however, for the rise in nominal, and likely real, long-term rates in the United States were incoming data pointing away from recession in the economy and from any abatement in price pressures, especially as oil prices moved sharply higher.
Behavior of Money and Credit
Growth in M2 was uneven over 1989, with marked weakness in the first part of the year giving way to robust growth thereafter. On balance over the year, M2 expanded 4 1/2 percent, down from 5 1/4 percent growth in 1988, placing it about at the midpoint of its 1989 target range of 3 to 7 percent. The slower rate of increase in M2 reflected some moderation in nominal income growth as well as the pattern of interest rates and associated opportunity costs of holding money, with the effects of increases in 1988 and 1989 outweighing the later, smaller drop in rates (table 3).
M2 has grown relatively slowly over the past three years, as the Federal Reserve has sought to ensure progress over time toward price stability. There appears to be a fairly reliable long-term link between M2 and future changes in inflation. One method of specifying that link is to estimate the equilibrium level of prices implied by the current level of M2, assuming that real GNP is at its potential and velocity is at its long-run average, and compare that to actual prices. The historical record suggests that inflation tends to rise when actual prices are below the equilibrium level and to moderate when equilibrium prices are below actual. At the end of 1986, the equilibrium level of prices was well above the actual level, reinforcing the view that the risks weighed on the side of an increase in inflation; at the end of 1989, that equilibrium price had moved into approximate equality with the actual price level, indicating that basic inflation pressures had steadied.
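Read literally, that comparison is an equilibrium-price ("P-star") calculation: the equation of exchange is solved for the price level using the current M2 stock, the long-run average velocity of M2, and potential real GNP, and the result is compared with the actual price level. A minimal sketch of the arithmetic follows; every number in it is an illustrative placeholder, not actual 1989 data.

```python
# Illustrative sketch of the equilibrium ("P-star") price-level calculation
# described above. All figures are placeholders, not actual 1989 data.

m2 = 3150.0        # M2 money stock, billions of dollars (hypothetical)
v_star = 1.65      # long-run average velocity of M2 (hypothetical)
q_star = 4900.0    # potential real GNP, billions of constant dollars (hypothetical)
p_actual = 1.06    # actual price level, implicit deflator index (hypothetical)

# Equation of exchange (M * V = P * Q) solved for the equilibrium price level.
p_star = m2 * v_star / q_star

gap = (p_star - p_actual) / p_actual * 100.0
print(f"P* = {p_star:.3f}, actual P = {p_actual:.3f}, gap = {gap:+.1f}%")
# A positive gap (equilibrium price above actual) has historically signalled
# rising inflation; a gap near zero, as described for end-1989, suggests that
# basic inflation pressures have steadied.
```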
In 1989, compositional shifts within M2 reflected the pattern of interest rates, the unexpected volume of tax payments in the spring, and the flow of funds out of thrift deposits and into other instruments. Early in the year, rising market interest rates buoyed the growth of small-denomination time deposits at the expense of more liquid deposits, as rates on the latter accounts adjusted only sluggishly to the upward market movements. The unexpectedly large tax payments in April and May contributed to the weakness in liquid instruments as those balances also were drawn down to meet tax obligations. As market interest rates fell, the relative rate advantage reversed in favor of liquid instruments and the growth in liquid deposits rebounded, boosted as well by the replenishment of accounts drained by tax payments.
The M1 component of M2 was especially affected by the swings in interest rates and opportunity costs last year, and in addition was buffeted by the effects of outsized tax payments in April. After its rise of 4 1/4 percent in 1988, M1 grew only 1/2 percent in 1989, with much of the weakness in this transactions aggregate occurring early in the year. By May, M1 had declined at an annual rate of about 2 1/2 percent from its fourth-quarter 1988 level, reflecting a lagged response to earlier increases in short-term interest rates and an extraordinary bulge in net individual tax remittances to the Treasury. From May to December, M1 rebounded at a 4 percent rate as the cumulating effects of falling interest rates and post-tax-payment rebuilding boosted demands for this aggregate. M1 velocity continued the upward trend that resumed in 1987, increasing in the first three quarters before turning down in the fourth quarter of 1989.
The shift of deposits from thrift institutions to commercial banks and money fund shares owed, in part, to regulatory pressures that brought down rates paid by some excessively aggressive thrift institutions. Beginning in August, the newly created Resolution Trust Corporation (RTC) targeted some of its funds to pay down high-cost deposits at intervened thrift institutions and began a program of closing insolvent thrift institutions and selling their deposits to other institutions--for the most part, banks. On balance, the weak growth of retail deposits at thrift institutions appears to have been about offset by the shift into commercial banks and money market mutual funds, leaving M2 little affected overall by the realignment of the thrift industry.
M3 was largely driven, as usual, by the funding needs of banks and thrift institutions; under the special circumstances of the restructuring of the thrift industry, it was a less reliable barometer of monetary policy pressures than is normally the case. After expanding 6 1/4 percent in 1988, M3 hugged the lower bound of its 3 1/2 to 7 1/2 percent target cone in 1989, closing the year about 3 1/4 percent above its fourth quarter of 1988 base. In 1989, bank credit growth about matched the previous year's 7 1/2 percent increase, but credit at thrift institutions is estimated to have contracted a bit on balance over the year, in contrast to its 6 1/4 percent growth in 1988. This weakness in thrift credit directly owed to asset shrinkage at savings and loan institutions insured by the Savings Association Insurance Fund; credit unions and mutual savings banks expanded their balance sheets in 1989. In addition, funds paid out by the RTC to thrift institutions and to banks acquiring thrift deposits directly substituted for other sources of funds. As a result, thrift institutions lessened their reliance on managed liabilities, as evidenced by the decline of 14 3/4 percent over the year in the sum of large time deposits and repurchase agreements at thrift institutions. Institution-only money market mutual funds were bolstered by a relative yield advantage, as fund returns lagged behind declining market interest rates in the second half of the year; these funds provided the major source of growth for the non-M2 component of M3. On balance, the effects of the thrift restructuring dominated the movements in M3, and the rebound in M2 in the second half of the year did not show through to this broader aggregate. As a consequence, the velocity of M3 increased 3 percent in 1989, 1 1/4 percentage points faster than the growth in M2 velocity, and its largest annual increase in twenty years.
Many of the assets shed by thrift institutions were mortgages and mortgage-backed securities, but this appears to have had little sustained effect on home mortgage cost and availability. The spread between the rate on primary fixed-rate mortgages and the rate on ten-year Treasury notes rose somewhat early in the year, but thereafter remained relatively stable. The share of mortgages held in securitized form again climbed in 1989, facilitating the tapping of a broader base of investors. Diversified lenders, acting in part through other intermediaries, such as federally sponsored agencies, mostly filled the gap left by the thrift institutions. However, some shrinkage of credit available for acquisition, development, and construction appeared to follow from limits imposed by the FIRREA on loans by thrift institutions to single borrowers, though the reduction in funds available for these purposes probably also reflected problems in some residential real estate markets.
Aggregate debt of the domestic nonfinancial sectors grew at a fairly steady pace over 1989, averaging 8 percent, which placed it near the midpoint of its monitoring range of 6 1/2 to 10 1/2 percent. Although the annual growth of debt slowed in 1989, as it had during the preceding two years, it still exceeded the 6 1/2 percent growth of nominal GNP. Federal sector debt grew 7 1/2 percent, about 1/2 percentage point below the 1988 increase--and the lowest rate of expansion in a decade--as the deficit leveled off. Debt growth outside the federal sector eased by more to average 8 1/4 percent, mostly because of a decline in the growth of household debt. Mortgage credit slowed in line with the reduced pace of housing activity, and consumer credit growth, though volatile from month to month, trended down through much of the year. The growth of nonfinancial business debt slipped further below the extremely rapid rates of the mid-1980s. Corporate restructuring continued to be a major factor buoying business borrowing, although such activity showed distinct signs of slowing late in the year as lenders became more cautious and the use of debt to retire equity ebbed.
The second half of 1989 was marked by the troubling deterioration in indicators of financial stress among certain classes of borrowers, with implications for the profitability of lenders, including commercial banks. In the third quarter, several measures of loan delinquency rates either rose sharply or continued on an uptrend. Delinquency rates on closed-end consumer loans at commercial banks and auto loans at "captive" auto finance companies were close to historically high levels. At commercial banks as a whole in 1989, both delinquency and charge-off rates for real estate loans were little changed from the previous year. Still, problem real estate loans continued to be a drag on the profitability of banks in Texas, Oklahoma, and Louisiana; in the second half, such loans emerged as a serious problem for banks in New England. On the other hand, smaller, agriculturally oriented banks continued to recover from the distressed conditions of the mid-1980s. Since 1987, agricultural banks have charged off loans at well below the national rate, and their nonperforming assets represented a smaller portion of their loans than that for the country as a whole.
The upswing in the profitability of insured commercial banks that began in 1988 only extended through the first half of 1989. A slowing in the buildup of loan loss provisions, along with improvements in interest rate margins, contributed to these gains, with the money center banks showing the sharpest turnaround. Information for the second half of 1989, although still incomplete, clearly points to an erosion of these profit gains, in part because of problems in the quality of loans. Several money center banks sharply boosted their loss provisions on loans to developing countries, while evidence of rising delinquency rates on real estate and consumer loans suggested more widespread weakening. Despite these developments, the spread of rates on bank liabilities--certificates of deposit and Eurodollar deposits--over comparable Treasury bill rates narrowed early in 1990. [Tabular Data 1 to 3 Omitted]
(1)The charts for the report are available on request from Publications Services, Board of Governors of the Federal Reserve System, Washington, D.C. 20551. | https://www.thefreelibrary.com/Monetary+policy+report+to+the+Congress-a08841899 |
In our last lesson we looked at the Consumer Confidence number, an indicator which gauges the mood of the consumer in the US economy. In today’s lesson we are going to look at an economic indicator which combines many of the leading components of the indicators we have studied thus far into one indicator meant to give insight into where the economy is heading several months ahead of time.
Published around the 21st of each month, the Conference Board’s Index of Leading Economic Indicators is made up of 10 sub-indices which tend to move ahead of the overall economy.
The 10 sub-indices are:
1. The average weekly hours worked by manufacturing workers - Before hiring or firing employees, manufacturing firms will normally increase worker hours when demand requires it or cut back on worker hours when demand falters, which is why this is included as a leading economic indicator.
2. The average number of initial applications for unemployment insurance – As is probably obvious, an increase in the number of applications for unemployment insurance means more people out of work; those people will have less money to spend, which, all else being equal, points to a weaker economy and market sell-offs.
3. The amount of manufacturers' new orders for consumer goods and materials - An increase in the amount of new orders should indicate a pickup in demand and vice versa.
4. The speed of delivery of new merchandise to vendors from suppliers - A leading indicator because an increase in demand can cause delivery times to lengthen as suppliers have trouble keeping up with the new demand.
5. The amount of new orders for capital goods unrelated to defense – Another way of looking at new orders, which should lead the business cycle, as pickups in new orders indicate rising demand.
6. The amount of new building permits for residential buildings – As builders try to anticipate demand, new building permits normally move higher ahead of demand, which is why this is considered a leading economic indicator.
7. The S&P 500 stock index – As we will learn in our lessons on the stock market, the S&P 500 Index includes the stock prices of the 500 largest companies in the US. And, as we have learned in past lessons, markets anticipate the future, which makes changes in the prices of the stocks that make up this index a leading indicator of future economic activity.
8. The inflation-adjusted money supply (M2) – In simple terms, this is watched because it is a measure of bank lending, which tends to increase ahead of economic expansion and decrease ahead of economic contraction, making it a leading economic indicator.
9. The spread between long and short interest rates – As we learned in our lessons on interest rates, normally the shorter the term of the loan, the lower the interest rate one will pay. This is considered a leading economic indicator because when the spread between short-term and long-term interest rates narrows, it indicates that market participants are anticipating Fed interest rate cuts, which normally come during economic slowdowns, and vice versa.
10. Consumer sentiment – This indicator, which measures how optimistic consumers are about the economy, is a leading indicator because when consumers feel that the economy is not good and not going to get better, they will normally pull back on spending, causing an economic slowdown, and vice versa. A rough sketch of how these ten components can be combined into a single composite index is given below.
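The sketch below is only a toy illustration: the Conference Board's actual standardization factors, sign conventions, and adjustments differ, and every value and weight shown is made up.

```python
# Toy illustration of combining leading sub-indicators into a composite index.
# The Conference Board's actual standardization factors and procedure differ;
# every value and weight below is made up for illustration.

# Month-over-month contributions, already sign-adjusted so that a positive
# number is "good" for future growth (e.g. a drop in jobless claims is positive).
components = {
    "avg_weekly_hours_manufacturing":  0.2,
    "initial_unemployment_claims":     0.4,
    "new_orders_consumer_goods":       0.5,
    "vendor_delivery_speed":           0.3,
    "new_orders_nondefense_capital":   0.4,
    "residential_building_permits":   -0.6,
    "sp500_stock_index":               1.2,
    "real_m2_money_supply":            0.1,
    "long_minus_short_rate_spread":    0.8,
    "consumer_expectations":           0.3,
}

# Equal weights here; in practice each series is scaled by a standardization
# factor so that volatile components do not dominate the composite.
weight = 1.0 / len(components)

change = sum(weight * value for value in components.values())
print(f"Composite leading index change this month: {change:+.2f}%")
```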
Information and data processing is required in virtually all areas of human endeavor. The sheer volume of documents that an organization must contend with has become increasingly problematic. The ability to rapidly obtain relevant documents from the huge store of available documents has become a key to an organization's success. Electronic document handling is the foundation for the future of information processing, with millions upon millions of papers being converted into electronic images every day.
Electronic documents such as images, emails, reports, web pages, etc. are generated at a tremendous rate. Indexing and classifying these electronic documents into manageable databases has become a mandatory task, without which efficient and accurate retrieval and searching of information and data among such documents is impossible. As is well known, the cost of classifying documents manually is extremely high. As the number of documents being digitally captured and distributed in electronic format increases, there is a growing need for techniques and systems to quickly classify digitally captured documents.
At one time document classification was done manually. An operator would visually scan and sort documents by document type. This process was tedious, time consuming, and expensive. As computers have become more ubiquitous, the quantity of new documents including on-line publications has increased greatly and the number of electronic document databases has grown almost as quickly. As the number of documents being digitally captured and distributed in electronic format increases, the old, manual methods of classifying documents are simply no longer practical. Similarly, the conversion of information in paper documents is an inefficient process that often involves data entry operators transcribing directly from original documents to create keyed data.
A great deal of work on document handling and analysis has been done in the areas of document management systems and document recognition. Specifically, the areas of page decomposition and optical character recognition (OCR) are well developed in the art. Page decomposition involves automatically recognizing the organization of an electronic document. This usually includes determining the size, location, and organization of distinct portions of an electronic document. For example, a particular page of an electronic document may include data of various types including paragraphs of text, graphics, and spreadsheet data. The page decomposition would typically be able to automatically determine the size and location of each particular portion, as well as the type of data found in each portion. Certain page decomposition software goes further than merely determining the type of data found in each portion, and will also determine format information within each portion. For example, the font, font size, and justification may be determined for a block containing text.
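As a loose illustration of what such a decomposition step might return, the sketch below models a page as a list of typed regions with bounding boxes and optional format attributes; all class and field names are invented for the example and are not taken from any particular product.

```python
# Sketch of a possible output structure for page decomposition.
# Class and field names are invented for illustration only.
from dataclasses import dataclass, field
from enum import Enum

class RegionType(Enum):
    TEXT = "text"
    GRAPHIC = "graphic"
    SPREADSHEET = "spreadsheet"

@dataclass
class Region:
    x: int                  # left edge of the portion, in pixels
    y: int                  # top edge of the portion, in pixels
    width: int
    height: int
    kind: RegionType
    attributes: dict = field(default_factory=dict)  # e.g. font, size, justification

@dataclass
class DecomposedPage:
    page_number: int
    regions: list

page = DecomposedPage(
    page_number=1,
    regions=[
        Region(60, 40, 480, 320, RegionType.TEXT,
               {"font": "Times", "size_pt": 11, "justification": "left"}),
        Region(60, 400, 480, 180, RegionType.GRAPHIC),
    ],
)
print(f"Page {page.page_number} has {len(page.regions)} regions")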
As may be appreciated, OCR involves converting a digital image of textual information into a form that can be processed as textual information. Since electronically captured documents are often simply optically scanned digital images of paper documents, page decomposition and OCR are often used together to gather information about the digital image and sometimes to create an electronic document that is easy to edit and manipulate with commonly available word processing and document publishing software. In addition, the textual information collected from the image through OCR is often used to allow documents to be searched based on their textual content.
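A minimal sketch of that combination is shown below, using the open-source Tesseract engine through the pytesseract package; the image path is hypothetical and the packages are assumed to be installed.

```python
# Minimal sketch: run OCR on a scanned page image and build a tiny inverted
# index so the document can be searched by textual content.
# Assumes pytesseract and Pillow are installed; "scanned_page.png" is a
# hypothetical path to an optically scanned digital image of a paper document.
from PIL import Image
import pytesseract

image = Image.open("scanned_page.png")

# Convert the digital image of textual information into processable text.
text = pytesseract.image_to_string(image)

# Trivial inverted index: word -> positions where it occurs.
inverted_index = {}
for position, word in enumerate(text.lower().split()):
    inverted_index.setdefault(word, []).append(position)

print("distinct words recognized:", len(inverted_index))
```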
In today's information society, individuals often require information and data acquired by others relating to them, making the freedom and ability to obtain such information a necessity. Organizations, both commercial and governmental, are required to provide such information upon request. However, documents so provided cannot contain confidential and/or otherwise secret information and data. As such, redaction is required before certain documents can be sent out. As may be appreciated, the redaction process is extremely costly, both in time and money, if performed manually. Finally, data capturing (data entry and coding) has become very important. In many situations, data must be captured and populated into databases for data mining, searching, and processing. When performed manually, these tasks are extremely costly.
There have also been a number of systems proposed that deal with classifying and extracting data from multiple document types. There are also systems available for automatically recognizing a candidate form as an instance of a specific form contained within a forms database based on the structure of lines on the form. These systems rely, however, on the fixed structure and scale of the documents involved.
Additionally, expert systems have been proposed using machine learning techniques to classify and extract data from diverse electronic documents. One such expert system proposed is described in U.S. Pat. No. 6,044,375, entitled “Automatic Extraction of Metadata Using a Neural Network.” Since machine learning techniques generally require a training phase that demands a large amount of computational power, such classification systems operate more efficiently if the document type of a new document is known.
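To make the training-phase idea concrete, the sketch below shows a generic supervised text classifier built with scikit-learn; this is an illustration only, not the neural-network approach of the cited patent, and the toy training data are invented.

```python
# Generic illustration of supervised document classification. This is not the
# method of U.S. Pat. No. 6,044,375; the training data below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "invoice number total amount due payment terms",
    "invoice remit to purchase order subtotal tax",
    "patient name date of birth diagnosis treatment plan",
    "mortgage loan principal interest rate borrower signature",
]
train_labels = ["invoice", "invoice", "medical_record", "loan_form"]

# Training phase: the computationally expensive step referred to above.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(train_texts, train_labels)

# Classifying a new, unseen document is then cheap.
new_document = "amount due invoice total payable within 30 days"
print(classifier.predict([new_document])[0])
```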
From the foregoing it will be apparent that there is still a need for an improved system and process for document recognition that is capable of understanding the contents of electronic documents.
| |
Australian research has shown consistently that young people living in households where English is spoken are more likely to smoke than those living in households where a language other than English is the first language.1,2 Although some groups of adult males speaking a language other than English (LOTE) at home may have a higher prevalence of smoking than English-speaking adult males (see Chapter 1, Section 1.8), these patterns are not (or not yet) apparent in the early years of secondary school (years 7 and 8).1 Rissel and colleagues3 found that in a group of year 10 and 11 pupils (aged approximately 15–17 years) in Sydney, students from an English-speaking background were much more likely to be current regular smokers (27%) than teenagers from Arabic (16%), Vietnamese or Southeast Asian backgrounds (8%). Teenagers from Vietnamese, Southeast Asian and Chinese backgrounds were also more likely to report that their families had rules at home about smoking, that they were usually supervised, and that they had lesser amounts of pocket money than other ethnic groups. Each of these factors independently correlates with a lower uptake of smoking (see Sections 5.12 and 5.14).
An earlier Sydney-based study by Tang and colleagues also showed that young adolescents (aged 12–13) who spoke a LOTE at home were much less likely to smoke than children from an English-speaking background.1 This study found that the factor of greatest influence in smoking uptake among children speaking a LOTE at home was whether their close friends smoked. The authors speculate that these lower rates may be due to stricter cultural attitudes opposing smoking among adolescents; students may be more likely to socialise with other children speaking the same LOTE at home and sharing the same cultural attitude, hence reducing the likelihood of peer smoking pressures; and/or that tobacco advertising had failed to reach these groups.
Prevention programs targeted for culturally and linguistically diverse populations in Australia are discussed in Chapter 7, Section 7.19.7.
The higher prevalence of smoking in Aboriginal and Torres Strait Islander adults means that many young Indigenous people live in settings in which smoking is the norm. It is also likely that factors such as poorer school connectedness and other socio-demographic issues connected with disadvantage contribute to higher rates of uptake.13 Smoking among Aboriginal and Torres Strait Islander children and teenagers, including influences on smoking behaviour, is discussed in greater detail in Chapter 8, Section 4. | http://tobaccoinaustralia.org.au/chapter-5-uptake/5-10-cultural-background |
Dr Nelia Drenth emphasises why self-care is a lifestyle and not an emergency response.
Self-care; the buzzword of our time. A word that is used so easily, and says so much but becomes complicated when we think of it as a verb.
I recently attended my husband’s 50-year matric reunion with him and the words I heard the most when we said our goodbyes, were “Take care of yourself.”
Since then I’ve been asking myself if we really mean these words when we utter them. Do we really think about it when someone gives us such good advice? Because it really is good advice!
But then we continue with life…as if it will never end, and rush from point A to point B, with no time for self-care.
What does self-care mean?
According to Psychcentral.com, self-care is “any activity that we do deliberately in order to take care of our mental, emotional, physical, and spiritual health.”
Be kind to yourself
Why is it easier to concentrate on the well-being of others, rather than on the well-being of yourself?
I’ll tell you why. It’s because it asks for action from the self. The same self that is so overwhelmed by life. It can also be caused by the monster, we call guilt.
Guilt creeps up on us so easily when we don’t care continuously for our loved ones, jobs, the poor, the sick, and the who-knows-what-else. And, we feed that monster because we have no time to jump into action with self-care.
We are entering a change in season which most are looking forward to. A change in season could give rise to a change in personal well-being. It starts with taking a step back and examining yourself.
Consider what gives you pleasure, what you’re grateful for, who loves you, and who wants you in their lives. Think about the legacy you will leave behind. Will it be a legacy of the busy one, the fun one, the one who cared for him/herself as much as he/she cared for all those around him/her?
Share your story
Self-care is not just face masks, bath bombs or drinking tea with friends. It includes making time to share your own story. This allows you to shed your tears about the sad and difficult times, to identify the things you’re grateful for, to stick to your personal goals, and to enjoy life.
The wonderful thing of sharing your story is that you have choices:
- You can share with a friend, and you decide what you want to share.
- You can share by journaling, blogging, or writing an email to yourself. And, even here, you decide what you want to share.
In sharing your story, allow yourself to be mindful. Be in the moment and calmly experience your senses. What do you feel, see, hear, smell and taste when you are sharing your story?
The Oxford Dictionary defines mindfulness as “a mental state achieved by focusing one’s awareness on the present moment, while calmly acknowledging and accepting one’s feelings, thoughts, and bodily sensations.”
Remember, that you can’t pour from an empty cup. Consider your successes, challenges, and strategies related to each of the areas of well-being.
Build on successes
Self-care efforts need to build on successes. Think of times when you were more successful in an area of self-care. For instance, you may recall that you had better success with exercise when you exercised in the morning.
But, because we all experience challenges, it’s good to share these with people in your social system to get advice on how to manage self-care.
Self-care doesn’t just happen. It needs a structured plan, which means that we must acknowledge that our cups are empty. It also means that we must stop, breathe and think about our lives, and then act.
What is your plan for?
- Physical well-being
- Emotional well-being
- Spiritual well-being
- Mental well-being
Maybe many of the well-being actions, that you identified above, are already on your bucket list. Why then wait for later years to tick the items off?
Self-care is attainable. It involves self-kindness and self-awareness. It means recognising and accepting your own humanness. Let’s make the following our motto for the last few months of 2019: “I care for myself because I care for others.”
MEET OUR EXPERT – Dr Nelia Drenth
Dr Nelia Drenth is a palliative care social worker in private practice in Pretoria, Gauteng. She presents workshops on psychosocial palliative care and bereavement counselling and has a passion for social work in healthcare. | https://www.buddiesforlife.co.za/importance-self-care/ |
We started our journey from Maharagama at around 6.00am; there is a special bus to Hatton from Maharagama. Our crew was three. We arrived at Ginigathhena at 9.00 a.m., got into the Nallathanni bus and got down at the 4th mile post, which is just after passing Nortonbridge. From this junction it was a 1.5–2km walk for us on a good motorable road.
On the way we came across an old bridge; as we passed the bridge we had to take a right-hand turn and continue walking. All the way up to the falls, on our right-hand side was the Maskeliya Oya feeding the waterfall, and on the left-hand side were the 7 Virgin hills, which made us forget about our initial objective, Lakshapana Falls. The sight of them was magnificent; just looking at them tempted us to climb.
Bit of history
On 12.04.1974, a Martinair McDonnell Douglas DC-8-55, a Dutch airliner, crashed into the Seven Virgins hill range, resulting in 191 casualties.
We came to a place with a name board on our left-hand side saying “Lakshapana Jaya Bima Waththa”. At this place, on the right-hand side, you will see a footpath going downwards (easily missed). We travelled down a few steps and there we were, on top of the falls. I have been to many waterfalls, but this one was remarkably wide at the top. We explored the whole area, though it would have been more interesting if someone with geography knowledge had been with us; believe me, it really looked like a mini grand canyon to us. Over millions of years water has flowed through these valleys, leaving only imprints on the rock.
We went to the edge of the waterfall; it’s very reachable, and if the weather is fine (no rain) you can get down into this small pit (see the picture) to get a nice photograph. To explore the other side we had to go upstream to cross over, because even attempting to cross the small water stream there is very dangerous. When we reached the other side of the stream we noticed a pointed tip protruding out at the top of the waterfall; this is a dream spot for a photographer to capture the beauty of the fall, but it’s also very, very dangerous (my advice is not to attempt to go there). We found a safe area upstream to have a swim and stayed there about 45 minutes before we left.
After exploring the top of the falls we came back to the tarred road and travelled a few hundred meters further down, where we came to a place with a board saying “Lakshapana Falls”. From here onwards it was a descent through private land up to the base. The good thing is that there are cement steps up to the waterfall, but the uphill return is a bit difficult.
At the base there was a scene you can never forget; we took many photographs of it, but it still seems not enough. It is indeed one of the most beautiful waterfalls in Sri Lanka. We attempted to get close to the base, but the rocks were deadly slippery, so sadly we abandoned the attempt.
Quote from the Sri Lanka Waterfalls website
This very popular 129m fall is thought to derive its name from the presence of iron ore (laksha) in the rocks over which the water flows. The fall was said to house a labyrinth of tunnels, one of which still exists. Superstitious villagers tell how during Halloween, a golden melon bobs up and down in the water.
The Laksapana Reservoir, where the fall is found, is used by power stations at New Laksapana, Canyon and Polpitye Samanala resulting in a certain amount of water depletion. A number of villages including Laksapana, Pathana, Kiriwaneliya, Muruthatenna, Kottalena, Hunugala and Belumgala surround the fall.
The fall is 660m above sea level in the Nuwara Eliya Ambagamuwa Korale at the Ginigathhena Divisional Secretariat. The most convenient route is the Hatton–Maskeliya road. Take this road for 18km from Hatton, where a footpath leads down past the Pathana village to the fall. Alternatively, take the Laksapana road from the Kaluganga River junction for 14km to the Laksapana Temple. The fall is just 2.5km from here.
The closest town is Ginigathhena, and the hotels of Dick Oya are 50km away. | http://trips.lakdasun.org/lakshapana-falls129m-and-the-scenic-seven-virgins.htm |
---
abstract: 'Since the Randall-Sundrum 1999 papers, braneworlds have been a favourite playground to test string inspired cosmological models. The subject has developed into two main directions: elaborating more complex models in order to strengthen the connection with string theories, and trying to confront them with observations, in particular the Cosmic Microwave Background anisotropies. We review here the latter and see that, even in the simple, “paradigmatic", case of a single expanding brane in a 5D anti-de Sitter bulk, there is still a missing link between the “view from the brane" and the “view from the bulk" which prevents definite predictions.'
author:
- Nathalie
date: October 22nd 2002
title: 'Cosmological perturbations of an expanding brane in an anti-de Sitter bulk : a short review'
---
Introduction
============
Since the now classic 1999 papers by Randall and Sundrum \[2\], there has been a growing interest in gravity theories in spacetimes with large extra dimensions and the idea that our universe may be a four dimensional singular hypersurface, or “brane", in a five dimensional spacetime, or “bulk".
The (second) Randall-Sundrum scenario, where our universe is represented by a four dimensional quasi-Minkowskian edge of a double-sided perturbed anti-de Sitter spacetime, or “$Z_2$-symmetric" bulk, was the first model where the linearized Einstein equations were found to hold on the brane, apart from small $1/r^2$ corrections to Newton’s potential \[2\] \[5\].
Cosmological models were then soon to be built, where the brane, instead of flat, is taken to be a Robertson-Walker spacetime, and it was shown that such “braneworlds" can tend at late times to the standard Big-Bang model and hence represent the observed universe \[4\].
The subject has since developped into two main directions :
On one hand, more complex models were elaborated, in order to turn the Randall-Sundrum scenario from a “toy" to a more “realistic" low energy limit of string theories. That included considering two brane models, studying the dynamics and stabilisation of the distance between the branes (the “radion"), allowing for colliding branes, as well as turning the 5D cosmological constant into a scalar field living in the bulk, correcting Einstein’s equations with a Gauss-Bonnet term, etc.
On another hand, effort has been devoted to try and confront these models with observations, in particular the Cosmic Microwave Background anisotropies. In order to do so, various set ups to study the perturbations of braneworlds have been proposed and compared to the perturbations of standard, four dimensional, Friedmann universes.
We concentrate here on the simple, “paradigmatic", case of a single expanding brane in a 5D anti-de Sitter bulk and briefly review the 40 odd papers dealing with the cosmological perturbations of this toy model. As we shall see, they all have up to now stalled on the problem of solving, in a general manner, the Israel “junction conditions" (that is the Einstein equations integrated across the brane) which relate the matter perturbations on the brane and the perturbations in the bulk.
The brane gravity equations
===========================
A first approach to obtain the equations which govern gravity on the brane, called for short the “view from the brane", is to project the bulk 5D Einstein equations, ${\cal G}_{AB}=\Lambda\gamma_{AB}$, on the brane. To do so it is convenient to (1) introduce a gaussian normal coordinate system where the brane is located at $y=0$, (2) expand the metric in Taylor series in $y$, (3) write the Einstein 5D equations at lowest order in $y$, (4) relate, by means of the Israel junction conditions, the first order term of the Taylor expansion of the metric (that is, the extrinsic curvature of the brane) to the brane tension and stress-energy tensor, and, (5), get the Shiromizu-Maeda-Sasaki (SMS) equations for gravity on the brane \[3\] $$G_{\mu\nu}=8\pi GT_{\mu\nu}+{(8\pi G)^2\over\Lambda}S_{\mu\nu}+E_{\mu\nu}$$ $$D_\mu T^\mu_\nu=0$$ $$-R=8\pi G T+{(8\pi G)^2\over\Lambda}S\quad\Longleftrightarrow\quad E=0$$ where $G$ is Newton’s constant, where $G_{\mu\nu}$ and $R$ are the brane Einstein tensor and Ricci scalar, where $S_{\mu\nu}$ is some tensor quadratic in $T_{\mu\nu}$, and where the projected Weyl tensor $E_{\mu\nu}$ is related to the second order term of the Taylor expansion of the metric (see also \[32\]).
If we impose the brane to be a (flat) Robertson-Walker type universe with scale factor $a$ and Hubble parameter ${\dot a\over a}\equiv H$, then the second equation is the conservation equation : $$\dot\rho+3H(\rho+p)=0$$ $\rho$ and $p$ being the energy density and pressure of the brane cosmological fluid ; the third equation gives the modified Friedmann (BDEL) equation \[4\] : $$H^2={8\pi G\over 3}\rho\left(1+{4\pi G\rho\over\Lambda}\right)+{c\over a^4}$$ and the first gives $E_{\mu\nu}$ (which is zero if $c=0$), that is the metric off the brane up to second order in $y$. By iteration one gets the (BDL) metric everywhere in the bulk \[1\]. It looks complicated but one soon realizes \[8\] that (for $c=0$) the bulk is nothing but 5D anti-de Sitter spacetime (see also \[11\]).
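For later reference it may be useful to spell out the two limiting regimes of this equation (taking $c=0$), which follow directly from it: $$H^2\simeq{8\pi G\over 3}\rho\quad{\rm for}\quad\rho\ll{\Lambda\over 4\pi G}\ ,\qquad H^2\simeq{(8\pi G)(4\pi G)\over 3\Lambda}\,\rho^2\quad{\rm for}\quad\rho\gg{\Lambda\over 4\pi G}\ ,$$ so that the standard Friedmann law (and hence standard Big-Bang cosmology) is recovered at late times, while at early times, in the high density regime, $H$ grows like $\rho$ rather than $\sqrt\rho$.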
The “view from the bulk", on the other hand, consists in considering a 5D Einstein manifold (such that ${\cal G}_{AB}=\Lambda\gamma_{AB}$) and imposing a foliation by maximally symmetric 3-spaces. One then immediately gets, from Birkhoff’s theorem, that the 5D manifold is 5D anti-de Sitter spacetime (Schwarzschild-AdS5 if $c\neq0$). In coordinates adapted to the symmetries of the bulk (e.g. conformally minkowskian if the brane is spatially flat), the bulk metric looks simple ($ds^2={6\over\Lambda(X^4)^2}\eta_{AB}dX^AdX^B$), but the equation for the brane is (slightly) more complicated than in gaussian normal coordinates ($X^4=\sqrt{6\over\Lambda}{1\over a}\equiv A$, $X^0=\sqrt{1+A'^2}$), see \[6\] and e.g. \[25\].
Braneworld perturbations : the view from the brane
==================================================
In this approach, one concentrates on perturbing the SMS equations around the BDEL, Friedmann-modified, background brane solution (with $c=0$).
A first step is to assume $E_{\mu\nu}=0$ at linear order. The SMS equations then differ from the standard Einstein equations only by the presence of the $S_{\mu\nu}$ term. Standard perturbation theory (either in a “covariant" \[30\] or “gauge invariant" formulation) can then be applied. One result one can reach is, for example, that the conditions for inflation on the brane are different from the standard 4D case because of the presence of $S_{\mu\nu}$; as a consequence, the initial density spectrum is enhanced, and the initial gravitational spectrum less so, as compared to the standard 4D case. See e.g. refs \[9\] \[21\] \[31\] \[38\] \[46\].
A second step consists in isolating in $E_{\mu\nu}$ the bits which prevent the SMS equations from closing on the brane. It turns out that it is its transverse traceless part, $P_{\mu\nu}$. However when one considers scalar perturbations only, then $P_{\mu\nu}=D_\mu D_\nu P$, and this term drops out of the SMS equations on super Hubble scales. The system is then closed on large scales but, instead of involving only one master variable (the perturbation of the inflaton basically), it also involves $E_{00}$, which acts as a kind of second scalar field and induces, on top of the standard inflationary adiabatic perturbations, isocurvature ones \[14\] \[22\] \[30\].
In \[26\], the authors recovered the previous results using a gauge invariant, rather than covariant, formalism. They also showed that, if the SMS equations for the initial density perturbation spectrum indeed close on the brane, the equations governing the $g_{00}$ perturbation were, on the other hand, not closed, so that the Sachs-Wolfe contribution to the CMB anisotropies could not be predicted without further knowledge of the bulk.
One can nevertheless write a Boltzmann code including the contributions of $S_{\mu\nu}$ and $E_{\mu\nu}$ \[39\]. If one then assumes some specific behaviour for $E_{\mu\nu}$ \[37\], the CMB anisotropies can be calculated, see \[45\] for preliminary results.
The view from the bulk in coordinates adapted to the brane
==========================================================
As we have already mentioned, the anti-de Sitter metric looks complicated when written in gaussian coordinates in which the equation for the brane is $y=0$. One can however perturb this BDL metric \[15\] and write the perturbation of the 5D Einstein tensor in terms of the perturbation of the 4D Einstein tensor plus terms involving first and second $y$-derivatives of the metric perturbations. The first derivative terms are expressed in terms of the perturbations of the stress-energy tensor of matter on the brane thanks to the Israel junction conditions. As for the second order derivatives they remain undetermined and can be interpreted as some kind of extra “seeds" in the 4D perturbation equations \[23\]. Of course these final equations must be equivalent, and were shown to be equivalent \[23\], to the perturbed SMS equations, with the second order derivative identified with the projected Weyl tensor $E_{\mu\nu}$.
A drawback of this choice of gauge is, first, that the perturbation equations are very complicated, so that the regularity conditions which one must impose on the bulk perturbations at the AdS5 horizon have not yet been implemented; second, the brane bending effect is swept under the rug \[25\] \[34\], which renders the boundary conditions even more difficult to implement.
However some partial results could be obtained \[19\] \[41\]. For example, if the brane is 4D de Sitter spacetime, then the equations for the tensor perturbations $T$ can be integrated by separation of the $t$ and $y$ variables (see \[36\] for an explanation of this simplification). They can be written as an infinite tower of modes ($T=\int dm\,\phi_m(t){\cal E}_m(y)$), and the normalisation of the ${\cal E}_m(y)$ part imposed by the bulk boundary conditions \[2\] \[5\], first suppresses the non zero modes, and, second, yields a modified normalisation of $\phi_0(t)$ when one quantizes it. Hence a spectrum of gravitational waves different from the standard 4D one \[19\]. The same procedure was applied to the vectorial modes \[24\], with the somewhat surprising result that they can be normalized only if some matter vorticity is present.
The 5D “longitudinal gauge" adopted by a number of authors \[16\] \[27\] \[28\] is the closest to the very commonly used 4D longitudinal gauge, which allows to write the perturbation equations under a fairly familiar form. However, since this gauge is completely fixed, the brane cannot a priori be placed at $y=0$ ; it can be placed at $y=0$ only if there are no matter anisotropic stresses. To include brane bending and anisotropic stresses it is therefore necessary to go to another gauge if one wants to keep the brane at $y=0$, which spoils a bit the form of the perturbation equations. Of course the perturbation equations in the gaussian normal and the longitudinal gauge must be equivalent, and were shown to be \[34\].
In order to ease the passage from one gauge to another, various fully gauge invariant formalisms were proposed \[12\] \[13\] \[20\] \[29\] \[33\] \[40\], which have the advantage of expressing the scalar, vector and tensor perturbations in terms of 3 independent master variables whose evolution equations are known. Connection with the gaussian normal and the longitudinal gauge was performed in \[29\] \[34\] \[42\].
The view from the bulk in coordinates adapted to the bulk
=========================================================
As we have mentioned earlier, the AdS5 spacetime metric is very simple when written in conformally minkowskian coordinates. The perturbation equations are also very simple in this coordinate system and can be explicitly solved in, e.g., the standard transverse-traceless gauge, see \[18\] \[25\] \[35\] \[43\], or in a gauge invariant way \[33\]. In that background coordinate system, the regularity conditions on the graviton modes at the AdS5 horizon can also easily be discussed, see e.g. \[7\] \[18\] \[25\]; for example one may only keep the outgoing modes. Finally the brane bending degree of freedom is also easily taken into account by perturbing the position of the brane.
It is then straightforward to write the extrinsic curvature of this bent brane in a perturbed AdS5 spacetime and relate it, by means of the Israel junction conditions, to the perturbations of the stress-energy tensor of the matter inside the brane \[25\]. In this approach then, the perturbations of the brane stress-energy tensor are determined in terms of the brane bending and the (regular) 5D graviton modes. They can then be split into perturbations of the brane matter fields on one hand, and “seeds" on the other. If one assumes or imposes the absence of seeds, this approach gives straightforwardly the allowed brane bending and 5D gravitons compatible with this constraint. Particular cases have been studied, for example the case of an inflating brane \[25\]. The serious drawback of this "view from the bulk" approach is that the connection with the previous ones is not straightforward, has not been done yet and hence has not yet given the missing piece of information, that is the expression of the projected Weyl tensor $E_{\mu\nu}$ which is needed in order to implement the existing Boltzmann codes to yield the CMB anisotropies.
There exists however a more promising approach \[18\] \[35\], which consists in starting, as above, with the brane bending and the (regular) 5D graviton modes expressed as perturbations of AdS5 spacetime in conformally minkowskian coordinates, and then in performing the (“large") coordinate transformation which brings the conformally minkowskian coordinates to the gaussian normal (BDL) ones before projecting them on the brane. However, here too, the connection with the “view from the brane" approaches has not been completed yet, although partial results have already been reached, for example the fact that the isocurvature brane mode found in \[22\] would correspond to a divergent bulk mode \[35\] \[43\].
Conclusion
==========
To summarize the situation in one paragraph: when one treats the braneworld cosmological perturbations in a strict brane point of view, one stumbles on the problem of finding the expression for the projected Weyl tensor, and that alone prevents one from predicting the CMB anisotropies. When one attempts to treat the problem by looking at the bulk perturbations in coordinate systems adapted to the brane, say, gaussian normal, then one obtains discouragingly complicated perturbation equations that can be solved only in the very particular dS4 brane case. Finally, when one adopts a coordinate system adapted to the bulk geometry, then the brane perturbations are obtained under a form which is very different from the familiar 4D perturbation equations, so that standard Boltzmann codes cannot be used to yield the CMB anisotropies.
The problem of computing the CMB anisotropies generated by braneworlds will be solved when the gap between the two approaches is bridged, along the lines of \[42\] \[43\] or \[44\]. In order to do so, it may prove useful to study toy models, such as the case of a de Sitter brane, or induced gravity models in which the bulk is nothing but 5D Minkowski spacetime.
\[1\] P. Binétruy, C. Deffayet, D. Langlois, “Non conventional cosmology from a brane universe", hep-th/9905012
\[2\] L. Randall, R. Sundrum, “A large mass hierarchy from small extra dimension", hep-th/9905221 ; L. Randall, R. Sundrum, “An alternative to compactification", hep-th/9906064
\[3\] T. Shiromizu, K. Maeda, M. Sasaki, “The Einstein equations on the 3-brane world", gr-qc/9910076
\[4\] P. Binétruy, C. Deffayet, U. Ellwanger, D. Langlois, “Brane cosmological evolution on a bulk with a cosmological constant", hep-th/9910219
\[5\] J. Garriga, T. Tanaka, “Gravity in the Randall-Sundrum brane world", hep-th/9911055
\[6\] D. Ida, “Brane world cosmology", gr-qc/9912002
\[7\] M. Sasaki, T. Shiromizu, K. Maeda, “Gravity, stability and energy conservation on the Randall Sundrum brane world", hep-th/9912233
\[8\] S. Mukohyama, T. Shiromizu, K. Maeda, “Global structure of exact cosmological solutions in the brane world". hep-th/9912287
\[9\] R. Maartens, D. Wands, B. Bassett and I. Heard, “Chaotic inflation on the brane", hep-th/9912464
\[10\] S.W. Hawking, T. Hertog, H.S. Reall, “Brane new world", hep-th/0003052
\[11\] N. Deruelle, T. Dolezel, “Brane vs shell cosmologies in Einstein and Einstein-Gauss-Bonnet theories", gr-qc/0004021
\[12\] S. Mukohyama, “Gauge invariant gravitational perturbations of maximaly symmetric spacetimes", hep-th/0004067
\[13\] H. Kodama, A. Ishibashi and O. Seto, “Brane-world cosmology, gauge invariant formalism for perturbation", hep-th/0004160
\[14\] R. Maartens, “Cosmological dynamics on the brane", hep-th/0004166
\[15\] D. Langlois, “Brane cosmological perturbations", hep-th/0005025
\[16\] C. van de Bruck, M. Dorca, R. Brandenberger, A. Lukas, “Cosmological perturbations in brane world theories : formalism", hep-th/0005032
\[17\] G. Hogan, “Gravitational waves from mesoscopic dynamics of the extra dimensions", astro-ph/0005044
\[18\] K. Koyama, J. Soda, “Evolution of cosmological perturbations in the brane world", hep-th/0005239
\[19\] D. Langlois, R. Maartens, D. Wands. “Gravitational waves from inflation on the brane", hep-th/0006007
\[20\] S. Mukohyama, “Perturbation of junction condition and doubly gauge invariant variables", hep-th/0006146
\[21\] E. Copeland, A. Liddle, J. Lidsey, “Steep inflation : ending braneworld inflation by gravitational particle production", astro-ph/0006421
\[22\] C. Gordon, R. Maartens, “Density perturbations in the brane-world", hep-th/0009010
\[23\] D. Langlois, “Evolution of cosmological perturbations in a brane-universe", hep-th/0010063
\[24\] H. Bridgman, K. Malik, D. Wands, “Cosmic vorticity on the brane", hep-th/0010133
\[25\] N. Deruelle, T. Dolezel, J. Katz, “Perturbations of braneworlds", hep-th/0010215
\[26\] D. Langlois, R. Maartens, M. Sasaki, D. Wands, “Large-scale cosmological perturbations on the brane", hep-th/0012044
\[27\] C. van de Bruck, M. Dorca, “On cosmological perturbations on a brane in an anti-de Sitter bulk", hep-th/0012073
\[28\] M. Dorca, C. van de Bruck, “Cosmological perturbations in brane worlds : brane bending and anisotropic stresses" , hep-th/0012116
\[29\] H. Kodama, “Behavior of cosmological perturbations in the brane world model", hep-th/0012132
\[30\] R. Maartens, “Geometry and dynamics of the brane-world", gr-qc/0101059
\[31\] G. Huey, J. Lidsey, “Inflation, braneworlds and quintessence", astro-ph/0104006
\[32\] N. Deruelle, J. Katz, “Gravity on branes", gr-qc/0104007
\[33\] S. Mukohyama, “Integro-differential equation for brane-world cosmological perturbations", hep-th/0104185
\[34\] H. Bridgman, K. Malik, D. Wands, “Cosmological perturbations in the bulk and on the brane", astro-ph/0107245
\[35\] K. Koyama, J. Soda, “Bulk gravitational field and cosmological perturbations on the brane", hep-th/0108003
\[36\] D.S. Gorbunov, V.A. Rubakov, S.M. Sibiryakov, “Gravity waves from inflating brane or mirrors moving in ads5", hep-th/0108017
\[37\] J.D. Barrow, R. Maartens, “Kaluza-Klein anisotropy in the CMB", gr-qc/0108073
\[38\] A. Liddle, A. N. Taylor, “Inflaton potential reconstruction in the braneworld scenario", astro-ph/0109412
\[39\] B. Leong, P. Dunsby, A. Challinor, A. Lasenby, “1+3 covariant dynamics of scalar perturbations in braneworlds", gr-qc/0111033
\[40\] S. Mukohyama, “Doubly-gauge-invariant formalism of brane-world cosmological perturbations", hep-th/0202100
\[41\] D. Wands, “String-inspired cosmology", hep-th/0203107
\[42\] C. Deffayet, “On brane world cosmological perturbations", hep-th/0205084
\[43\] J. Soda, K. Koyama, “Cosmological perturbations in brane world", hep-th/0205208
\[44\] A. Riazuelo, F. Vernizzi, D. Steer, R. Durrer, “Gauge invariant cosmological perturbation theory for braneworlds", hep-th/0205220
\[45\] B. Leong, A. Challinor, R. Maartens, A. Lasenby, “Braneworld tensor anisotropies in the CMB", astro-ph/0208015
\[46\] P.R. Ashcroft, C. van de Bruck, A.C. Davis, “Suppression of entropy perturbations in multi-field inflation on the brane", hep-th/0208411
| |
Simone Biles Withdraws From Floor, Still Might Compete In Balance Beam
U.S. superstar gymnast Simone Biles has pulled out of the individual final in the floor exercise, leaving one event in which she might still compete at the Tokyo Olympics.
"Simone has withdrawn from the event final for floor and will make a decision on beam later this week," USA Gymnastics said. "Either way, we're all behind you, Simone."
Biles suddenly pulled out of the team final earlier this week after a difficult first vault, and later said that she didn't feel that she was there mentally and was dealing with a phenomenon called the twisties. She also pulled out of the individual all-around final and the individual competitions in vault and uneven bars.
Since Biles withdrew, she's been actively supporting and cheering on her teammates.
USA Gymnastics has not said whether another U.S. gymnast will take her place in the floor final. Jade Carey qualified for floor along with Biles, and their teammate Jordan Chiles had the next-highest score for a U.S. athlete.
Copyright 2021 NPR. To see more, visit https://www.npr.org. | https://www.wyso.org/npr-news/2021-07-31/simone-biles-withdraws-from-floor-still-might-compete-in-balance-beam |
National Academies of Sciences, Engineering, and Medicine
Taking social risk factors into account is critical to improving the prevention and treatment of acute and chronic illness. Social workers are specialists in providing social care who have a long history of working within health care delivery, and in-depth training and credentialing. With expertise in patient and family engagement, assessment, care planning, behavioral health, and systems navigation, social workers identify and address multiple factors that contribute to health and well-being.
Carrie Dorn, MPA, LMSW
NASW played a lead role in conceiving and funding The National Academies of Sciences, Engineering and Medicine (NASEM) Consensus Study Report, Integrating Social Care into the Delivery of Health Care: Moving Upstream to Improve the Nation’s Health. This landmark report recognizes that social workers are specialists in identifying and addressing social needs, and it includes numerous recommendations to ensure that the nation’s health care systems address the many factors that contribute to health.
On April 16th each year, NASW celebrates National Healthcare Decisions Day to empower individuals to engage in advance care planning (ACP). As individuals, families, and communities face the COVID-19 pandemic, the importance of conversations about health care treatment preferences comes into new focus. During this public health emergency, health care social workers have a crucial role in communicating pertinent health information to family members and loved ones and advocating on behalf of patients in health settings.
During the COVID-19 crisis, states are leveraging the Medicaid program to expand health insurance coverage, respond to the health needs of individuals, and ease practice restrictions. These measures inform social work services, and they support efforts to minimize health risks for providers and clients. With varied responses at the state level, best practices are emerging and informing advocacy opportunities.
NASW is pleased that the Trump Administration has withdrawn an anticipated proposed rule calling for additional Medicaid eligibility restrictions.
Social workers, like many health and behavioral health professionals, are concerned about the impact of coronavirus disease 2019 (COVID-19) on their well-being, the people to whom they provide services, their families, and others in the community.
Social workers are in a unique position to promote disease prevention efforts (including disseminating accurate information from trusted sources), and to help address anxiety and other concerns that are arising as a result of this public health crisis.
This publication uses Emile Durkheim’s theory of suicide to understand suicidal distress and to identify circumstances that create suicide potential in the elderly. | https://www.socialworkers.org/Practice/Health/Health-Tools |
In Tayrona National Natural Park (Colombian Caribbean), abiotic factors such as light intensity, water temperature, and nutrient availability are subjected to high temporal variability due to seasonal coastal upwelling. These factors are the major drivers controlling coral reef primary production as one of the key ecosystem services. This offers the opportunity to assess the effects of abiotic factors on reef productivity. We therefore quantified primary net (Pn) and gross production (Pg) of the dominant local primary producers (scleractinian corals, macroalgae, algal turfs, crustose coralline algae, and microphytobenthos) at a water current/wave-exposed and -sheltered site in an exemplary bay of Tayrona National Natural Park. A series of short-term incubations was conducted to quantify O2 fluxes of the different primary producers during non-upwelling and the upwelling event 2011/2012, and generalized linear models were used to analyze group-specific O2 production, their contribution to benthic O2 fluxes, and total daily benthic O2 production. At the organism level, scleractinian corals showed highest Pn and Pg rates during non-upwelling (16 and 19 mmol O2 m^-2 specimen area h^-1), and corals and algal turfs dominated the primary production during upwelling (12 and 19 mmol O2 m^-2 specimen area h^-1, respectively). At the ecosystem level, corals contributed most to total Pn and Pg during non-upwelling, while during upwelling, corals contributed most to Pn and Pg only at the exposed site and macroalgae at the sheltered site, respectively. Despite the significant spatial and temporal differences in individual productivity of the investigated groups and their different contribution to reef productivity, differences for daily ecosystem productivity were only present for Pg at the exposed site, with higher O2 fluxes during non-upwelling compared to upwelling. Our findings therefore indicate that total benthic primary productivity of local autotrophic reef communities is relatively stable despite the pronounced fluctuations of environmental key parameters. This may result in higher resilience against anthropogenic disturbances and climate change, and Tayrona National Natural Park should therefore be considered as a conservation priority area. | https://www.altmetric.com/details/2646528
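A minimal sketch of the kind of generalized linear model described above, assuming a hypothetical data frame with producer group, season, site, and measured O2 flux (all values and column names are invented for illustration, not taken from the study):

```python
# Hypothetical GLM for group-specific benthic O2 fluxes; the data below are
# invented for the example and do not come from the study itself.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "producer": ["coral", "coral", "turf", "turf",
                 "macroalgae", "macroalgae", "cca", "cca"],
    "season":   ["non_upwelling", "upwelling", "upwelling", "non_upwelling",
                 "non_upwelling", "upwelling", "upwelling", "non_upwelling"],
    "site":     ["exposed", "sheltered", "exposed", "sheltered",
                 "exposed", "sheltered", "exposed", "sheltered"],
    "o2_flux":  [16.0, 12.0, 19.0, 11.0, 9.5, 8.0, 3.0, 2.5],  # mmol O2 m^-2 h^-1
})

# Gaussian GLM: O2 flux explained by producer group, season, and site
model = smf.glm("o2_flux ~ producer + season + site",
                data=df, family=sm.families.Gaussian()).fit()
print(model.summary())
```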
How to Write Paper Topics for Higher Education
Many aspiring and professional teachers find themselves struggling to get their pen in the air when asked to write a paper topic for higher education. For many students, it can seem like the land of dreams because there is so much fun and excitement when they finally graduate. However, it can be difficult to write about subjects that they have never even heard of, let alone tried to learn.
The first step to writing paper topics for higher education is to make sure that you're prepared. Try to spend a day or two doing research online about the subjects that are important to you. The more research you do, the better prepared you will be for writing your paper topic. You will be able to brainstorm about topics that interest you and build an outline to guide you through the paper.
Next, you will want to decide on a topic that you know a lot about. There are lots of ways to do this, including looking at the topic in the encyclopedia or referring to a book or magazine that is specifically about that topic. If you do not know a lot about the topic, then you should use some of the free resources that are available online to research the topic.
The next step in writing paper topics for higher education is to focus on specific research. For example, if you want to research military issues, you can research that topic by reading books, magazines, newspapers and blogs on the subject. Then, you should choose one topic and focus on that topic.
Make sure that you keep all of your references close at hand. This way, you will not forget any information that you have researched. Also, make sure that you don't forget any acronyms or abbreviations used in your research, so that you will be ready for your paper when it comes time to submit it.
The most important thing to remember when writing paper topics for higher education is to be clear and concise. Your paper should be just as informative and useful as a student journal article, but you need to write it quickly. The goal is to write on a topic that you know about, so make sure that you have a topic that you feel confident enough about to explore.
To summarize, the best way to learn how to write paper topics for higher education is to focus on your own interests. Keep in mind that you may be given a short topic to research and you may not be able to make a choice on that topic. However, by choosing one that you are interested in, you will be able to write a great paper.
Overall, you should be excited about writing paper topics for higher education. You should find that the more you do it, the better your chances of producing a quality paper. You should also be encouraged by the encouragement you will receive when you hear from the academic department chairperson that you were chosen to present your paper topic for higher education.
Ethical Relativism Paper Topics
When students enter college, they should be aware of the ethical relativism paper topics that are covered. This may be a bit of a surprise for some students, but there is a very good reason for it. It's good for education and it's good for the development of future leaders. This course of study is usually covered by most colleges and universities that are looking to advance their programs and goals.
Ethical relativism means that different types of ethical standards apply in different situations. Students should be exposed to this concept and given the opportunity to learn what ethical relativism is and how it can affect them. They should also be exposed to the ways that certain societies make use of ethical relativism in order to increase the level of ethics in society. Students who take courses in the humanities may find this a fascinating and informative way to get acquainted with a new concept or facet of morality.
Colleges are increasingly becoming more socially conscious as a way to deal with issues that affect their students, and this includes how the students are raised and what the average student in society thinks of them. Ethical relativism will not necessarily go out of style. As a matter of fact, it is likely to be a more accepted way of looking at morality and ethics in the coming years. More people are becoming aware of the concepts of moral relativism.
If your current student is not familiar with this concept or when they finally discover it on campus, the best thing to do is to inform them of it. One of the best ways to accomplish this is to use ethical relativism paper topics. Of course, some students will know little about this concept, but if your student can read this article and understand the concept, then they will have an easier time when they are taking their ethics class.
The idea of relativism comes from the philosophical concept of relative values. That is, one person's opinion is perceived as true while another's is not. The reason behind this concept is that each person has their own individualistic perspective that they hold to be the most valid view or opinion. In order to make sense of this, we need to look at the different types of values and their importance to the world and society.
First off, there is absolute or non-relativism. Absolute values and morals do not change because of the existence of other opinions. What is accepted is the way that the world should be at all times. Non-relativism holds that moral values change based on the needs of the time. For example, the need for respect during wartime or even while recognizing and punishing those who cause harm or crime will influence the relative values of different people.
Next up is context-relativism. This is basically a middle ground between absolute and relativism. Instead of judging one value through another, context-relativists judge a value by its relevance to situations that already exist. For example, a value that is relevant in the workplace could be very different from a value in an after-school activity or recreational activity.
Relativism is an idea that most students are not sure about. It is important that they know about it so that they can get a good understanding of what they are learning in ethics class. By reading ethical relativism paper topics, they can make sense of what it means to them.
| https://freeresearchpapers586.weebly.com/
Building on 20+ years of successful experience supporting teens and families, I utilize a strengths-based approach which focuses on teen- and family-strengths as they manage crisis or maximize opportunities. Adolescence is a hugely challenging time during which one can often benefit from a trusted coach and confidant.
Specialties
- Career difficulties
- Bipolar disorder
- Depression
- Coping with life changes
- Coaching
Stress, Anxiety, LGBT, Relationship issues, Family conflicts, Trauma and abuse, Eating disorders, Parenting issues, Anger management, Compassion fatigue, ADHD, Cancer, Codependency, Control Issues, Domestic Violence, Emptiness, Family Problems, Forgiveness, Impulsivity, Jealousy, Life Purpose, Men's Issues, Midlife Crisis, Money and Financial Issues, Narcissism, Panic Disorder and Panic Attacks, Phobias, Post-traumatic Stress, Prejudice and Discrimination, Process addiction (porn, exercise, gambling), Self-Harm, Self-Love, Sexual Assault and Abuse, Sexuality, Traumatic Brain Injury, Workplace Issues, Young Adult Issues
Clinical approaches: | https://www.teencounseling.com/michael-beavers/ |
The EAWOP Early Career Summer School for Advanced Work and Organizational Psychology (W/O) brings together 36 early career scholars from all over Europe for the eighth time in 2022. This event presents the unique opportunity for young researchers to meet with fellow researchers and prominent professors and to discuss their own work as well as aspects of being a researcher. Committed to this biannual, extraordinary meeting, and certain about its potential to be a transformative event for young scholars in the field of W/O psychology, the Cyprus Institute of Marketing and its independent Cyprus Centre for Business Research are hosting the 8th EAWOP Early Career Summer School in Protaras, Cyprus, from the 7th to the 11th of June 2022.
Summer School Activities
The summer school consists of a variety of activities, including:
Invited senior scholars
Summer School Objectives
The 8th EAWOP Early Career Summer School for Advanced Work and Organizational psychology will empower European research in the field of W/O psychology, set key guidelines and help in establishing strong networks for future collaborative research. Specifically, the planned summer school has six specific objectives:
1. To increase the quality of European W/O psychology research by supporting and providing early stage researchers the chance to interact with senior scholars in the area. This will allow them to obtain better understanding of how to conduct impactful research, get feedback on current research projects and to generate new and interesting ideas for collaborative research.
2. To enhance participants’ valuable skill-sets in production of high quality research, funding application and management of research projects as well as editing and publication in leading European and international journals.
3. To generate and share feedback concerning future career steps and get insights into aspects of a successful scientific career by senior scholars to facilitate participants’ career development in academia and become prolific and rigorous researchers.
4. To foster and establish European research collaborations by creating a solid network of ambitious up-and-coming researchers. This will further strengthen the European W/O community among participants and senior academics who will be involved as keynotes or session facilitators in the summer school.
5. To bring together scientist and practitioner perspectives of W/O Psychologists and support the application of scientific outcomes; at the same time, to enlighten the scientific community about actual needs of practitioners.
6. To raise awareness among participants about real-world challenges of modern organizations and discuss key elements of engaged scholarship and actionable knowledge. In an attempt to effectively bridge the gap between academia to practice, we choose to invite scholars whose work cuts across the fields of academia and consulting.
Participation
Eligible participants at the summer schools are:
Organizing Committee:
About Cyprus
Learn more about island of Cyprus here, here, and here.
The provisional programme for this year’s summer school is available here. | http://eawop.com/next-school2 |
Organic dried Malatya apricots: typical sweet, aromatic apricots with no off taints or odours.
Ingredients: Organic Apricots 95%, Organic Rice Flour 5%. Organic Rice Flour is used during dicing as an anti-caking agent to keep the pieces free flowing.
Allergens: none of the FSA standard 14 allergens.
Whilst every effort is made to avoid cross-contamination, it is impossible for us to guarantee that our products are completely free of allergens due to the nature of our shop environment and transportation.
A full list of our products, producers and ingredients is available on request if in doubt. | https://weytogo.store/products/chopped-apricots-org |
The Pygmalion EffectWednesday, February 19th, 2014
Much has been written about the Pygmalion Effect, but what is it? More importantly, how can it affect you and your organization?
The Pygmalion Effect, sometimes referred to as a “self-fulfilling prophecy”, occurs when the expectations we have for another (be they positive or negative) influence that person’s performance. The phenomenon has been studied and documented numerous times both in business and education. | http://www.crmlearning.com/blog/index.php/tag/the-pygmalion-effect/ |
How do you see yourself? Are you a mentor? Are you a leader? Are you a supervisor? You have probably thought about these roles and what they mean, but have you considered yourself as a coach? In this blog post, I will be exploring the coaching role in early childhood education (ECE). I have thought of myself as a mentor and a leader. I have not previously considered myself as a coach, but I now see the benefits of examining coaching from the perspective of early learning. Coaching in ECE is evolving. There is growing evidence that unlike other professional development/learning approaches, it has the potential to lead to very positive outcomes (O’Keefe, 2014). According to O’Keefe (2014), ECEs often engage in professional development experiences by themselves, and these are usually one-time, lecture-style trainings. According to the New York Early Childhood Professional Development Institute, this is not enough time for learning to be internalized. When an educator who works in a team attends a training alone, they may struggle to implement new strategies without the collaborative support of their colleagues. While webinars and online trainings have become increasingly popular, especially in this past year, these modalities can be impersonal and can fail to engage educators “who have questions about specific students and challenges unique to their own practices” (Frazier, 2018, p. 3). Coaching, when implemented well, looks very different: it lasts longer, and it is grounded in an educator’s practice (O’Keefe, 2014).
Coaching is an individualized approach to professional learning “where educators work towards specific teaching goals with support and feedback from a designated colleague or expert”. Mentoring is usually “a peer-to-peer relationship between a more-experienced and less-experienced educator”. Supervising is “between an educator and the person who has direct managerial responsibility over them” (O’Keefe, 2014, p. 4). These are important roles but are they effective when the desired outcome is professional learning? In my Ontario context, leaders are often referred to as “pedagogical leaders”. Pedagogical leadership is guiding the study of the teaching and learning process alongside educators. Pedagogical leaders can be anyone with a strong knowledge of theory and practice, experience, and a commitment to ongoing learning. Watch this video to learn more about pedagogical leaders.
Pedagogical leaders can increase their effectiveness if they see their role from a coaching perspective. “Coaching is designed to build capacity for specific professional dispositions, skills, and behaviors and is focused on goal-setting and achievement for an individual or group”. Above all, “it is a relationship-based process” led by someone who “serves in a different professional role than the recipient(s)” (National Association for the Education of Young Children National Association of Child Care Resource and Referral Agencies, 2011).
Early childhood educators understand the importance of relationships and trust. How does a coach build trust? Start from a position of love and kindness. A legendary football coach once said that …
Sometimes love can be tough. It is not always easy to address practices that seem outdated or ineffective. Kindness is a combination of acceptance and compassion. There are so many challenges in the early learning profession, and these have intensified almost beyond measure with the pandemic. Accept those who you coach and offer them compassion. In the past, I approached professional learning with judgement. Now, I see those whom I support from a position of strength and capacity. I believe in their competence to go forward on their journey. Now rather than a judge, I model the role of the coach.
The coach models. They understand that they are engaged in a parallel process. Their strengths-based, reflective interactions serve as a model for the interactions that the educator has with colleagues, families, and children (Jablon, 2016). The coach acknowledges the experience and skills that the educator brings to their work, just as the educator should acknowledge the experiences of children and families. When “the coach challenges the educator to experiment with new practices” they do so while maintaining trust and by being “knowledgeable, dependable, and optimistic” (Frazier, 2018, p. 4). The pedagogical leader as coach demonstrates the same dispositions that they want to cultivate in others. Dispositions are ways in which a person is inclined to behave. Professional dispositions are the tendencies to think and act in certain ways that are valued by the field (Swim & Merz, 2016). The following professional dispositions are recommended by the New York Early Childhood Professional Development Institute in this policy brief.
To explore your role as a coach, start by building a culture of collaboration based on trusted relationships. Set goals for practice. This will lead to pedagogical knowledge and professional dialogue, and end in effective and meaningful programming for children. That is my theory. Will this work for you? Can you see opportunities in your practice to coach? I look forward to your feedback and input. What do you think? | https://articles.colormesafe.net/category/child-care/5526291/06/01/2021/exploring-the-coaching-role-in-early-childhood-education-professional-learning |
Empower your researchers with less time spent searching for information and more time devoted to research. ProQuest databases provide a single source for scholarly journals, newspapers, reports, working papers, and datasets along with millions of pages of digitized historical primary sources and more than 450,000 ebooks. Renowned abstracting and indexing make this information easily navigable, while content tools, including instant bibliography and citation generators, simplify management and sharing of research.
The GeoRef database, established by the American Geological Institute (AGI) in 1966, provides access to the geoscience literature of the world.
Global Breaking Newswires is a library news database which provides timely full-text access to the best newswire content available globally as well as growing archive of news that may not be captured in any of the traditional print sources.
Global Newsstream provides today's global news content – with archives that stretch back into the 1980s – from over 2,800 news sources including newspapers, newswires, transcripts, video, and digital-first content in full-text format.
Goethe (1749-1832) was Germany's supreme poet and a writer, and he exercised a profound influence on the German language of today. Goethes Werke contains the complete text of the 143 volumes of the definitive Weimar Edition.
The Health & Medical Collection is a comprehensive medical information resource providing full-text journal content, reference ebooks, and evidence-based resources, including dissertations and systematic reviews. It includes MEDLINE®, which contains journal citations and abstracts for biomedical literature from around the world.
The Health Research Premium Collection provides access to the latest medical information essential for medical students and researchers. The collection offers a central access point to a variety of essential medical resources.
The Healthcare Administration Database is ideal for researchers studying health administration. It provides the most reliable and relevant information on a range of topics, including hospitals, insurance, law, statistics, business management, ethics, and public health administration.
HeritageQuest Online combines digital, searchable images of U.S. federal census records with the digitized version of the popular ProQuest Genealogy & Local History collection and other valuable content.
Click here for the home of Historic Map Works Library Edition, one of the most extensive digital map collections available, with over 1.5 million high-resolution, full color historic maps.
Historical Statistical Abstracts of the United States allows researchers to discover, work with, and analyze unaltered data on economic trends, social climate, and demographic makeup over time. | https://www.proquest.com/products-services/databases/?page=10 |
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims all benefits accruing under 35 U.S.C. §119 from China Patent Application No. 200910104954.6, filed on Jan. 7, 2009 in the China Intellectual Property Office.
BACKGROUND
1. Technical Field
The present disclosure relates to a thermal interface material based on carbon nanotubes and a method for manufacturing the same.
2. Description of the Related Art
Electronic components such as semiconductor chips are becoming progressively smaller, while at the same time heat dissipation requirements are increasing. Commonly, a thermal interface material is utilized between the electronic component and a heat sink in order to efficiently dissipate heat generated by the electronic component.
A conventional thermal interface material is made by diffusing particles with a high heat conduction coefficient in a base material. The particles can be made of graphite, boron nitride, silicon oxide, alumina, silver, or other metals. However, a heat conduction coefficient of the thermal interface material is now considered to be too low for many contemporary applications, because it cannot adequately meet the heat dissipation requirements of modern electronic components.
A new kind of thermal interface material has recently been developed. The thermal interface material is obtained by fixing carbon fibers with a polymer. The carbon fibers are distributed directionally, and each carbon fiber can provide a heat conduction path. A heat conduction coefficient of this kind of thermal interface material is relatively high. However, the heat conduction coefficient of the thermal interface material is inversely proportional to a thickness thereof, and the thickness is required to be greater than 40 micrometers. In other words, the heat conduction coefficient is limited to a certain value corresponding to a thickness of 40 micrometers. The value of the heat conduction coefficient cannot be increased, because the thickness cannot be reduced.
An article entitled “Unusually High Thermal Conductivity of Carbon Nanotubes”, authored by Savas Berber (page 4613, Vol. 84, Physical Review Letters 2000), discloses that a heat conduction coefficient of a carbon nanotube can be 6600 W/mK (watts per meter kelvin) at room temperature.
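For orientation, the conductive thermal resistance of a uniform layer can be estimated as R = t/(kA). The sketch below uses the 40 micrometer thickness and the 6600 W/mK nanotube value quoted above; the other conductivity figures are assumed purely for comparison:

```python
# Illustrative comparison of conductive thermal resistance R = t / (k * A) for a
# thermal interface layer. The 40 um thickness and the 6600 W/mK nanotube value
# come from the text above; the other conductivities are assumed for comparison.
def layer_resistance(thickness_m: float, k_w_per_m_k: float, area_m2: float) -> float:
    """Return conductive thermal resistance in K/W for a uniform layer."""
    return thickness_m / (k_w_per_m_k * area_m2)

area = 1e-4          # 1 cm^2 chip contact area (assumed)
thickness = 40e-6    # 40 micrometer layer, as mentioned above

for name, k in [("filled polymer (assumed)", 5.0),
                ("carbon-fiber composite (assumed)", 80.0),
                ("single carbon nanotube (Berber et al.)", 6600.0)]:
    print(f"{name:40s} R = {layer_resistance(thickness, k, area):.2e} K/W")
```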
U.S. Pat. No. 6,407,922 discloses another kind of thermal interface material. The thermal interface material is formed by injection molding and has a plurality of carbon nanotubes incorporated in a matrix material. The longitudinal axes of the carbon nanotubes are parallel to the heat conductive direction thereof. A first surface of the thermal interface material engages with an electronic device, and a second surface of the thermal interface material engages with a heat sink. The longitudinal axes of the carbon nanotubes are perpendicular to the first and second surfaces. The second surface has a larger area than the first surface, so that heat can be uniformly spread over the larger second surface.
The first and second surfaces need to be processed to remove matrix material to expose two ends of each of the carbon nanotubes by chemical mechanical polishing or mechanical grinding, thereby improving heat conductive efficiency of the thermal interface material. However, surface planeness of the first and second surfaces can be decreased because of the chemical mechanical polishing or mechanical grinding, which can increase thermal contact resistance between the thermal interface material and the heat source, thereby further decreasing dissipating efficiency. Furthermore, the polishing or grinding process can increase the manufacturing cost.
What is needed, therefore, is a thermal interface material, which can overcome the above-described shortcomings.
DETAILED DESCRIPTION
The disclosure is illustrated by way of example and not by way of limitation in the figures of the accompanying drawings in which like references indicate similar elements. It should be noted that references to “an” or “one” embodiment in this disclosure are not necessarily to the same embodiment, and such references mean at least one.
Referring to FIG. 1, one embodiment of a thermal interface material 10 includes a carbon nanotube array 20, a matrix 40, a plurality of heat conductive particles 60, and a polymer 80. The carbon nanotube array 20 includes a plurality of carbon nanotubes. The matrix 40 is formed on at least one end of the carbon nanotube array 20 along longitudinal axes of the carbon nanotubes. The heat conductive particles 60 are uniformly dispersed in the matrix 40 and contact the carbon nanotubes. The polymer 80 is injected in among the carbon nanotubes of the carbon nanotube array 20.
The carbon nanotube array 20 includes a first end 21 and a second end 22 opposite to the first end 21 along the longitudinal axes of the carbon nanotubes. There is no restriction on the height of the carbon nanotube array 20 and its height can be set as desired. The carbon nanotubes of the carbon nanotube array 20 may be single-walled carbon nanotubes, double-walled carbon nanotubes, or multi-walled carbon nanotubes or their combinations. In one embodiment, the carbon nanotubes are multi-walled carbon nanotubes. The carbon nanotube array 20 is a super-aligned carbon nanotube array. The term “super-aligned” means that the carbon nanotubes in the carbon nanotube array 20 are substantially parallel to each other.
The matrix 40 may be formed on one end of the carbon nanotube array 20 or two ends. In one embodiment, the matrix 40 includes a first matrix 42 and a second matrix 44. The first matrix 42 is formed on the first end 21 of the carbon nanotube array 20. The second matrix 44 is formed on the second end 22 of the carbon nanotube array 20. The first and second ends 21, 22 of the carbon nanotubes of the carbon nanotube array 20 are respectively inserted into the first and second matrixes 42, 44. The matrix 40 may be made of phase change material, resin material, heat conductive paste, or the like. The phase change material may be paraffin or the like. The resin material may be epoxy resin, acrylic resin, silicon resin, or the like. In one embodiment, the matrix 40 is made of paraffin. When a temperature of the matrix 40 is higher than the melting point of the matrix 40, the matrix 40 will change to a liquid state.
The heat conductive particles 60 may be made of metal, alloy, oxide, non-metal, or the like. The metal may be tin, copper, indium, lead, antimony, gold, silver, bismuth, aluminum, or any alloy thereof. The oxide may be metal oxide, silicon oxide, or the like. The non-metal particles may be graphite, silicon, or the like. The heat conductive particles 60 may be set as desired to have diameters of about 10 nanometers (nm) to about 10,000 nm. In one embodiment, the heat conductive particles 60 are made of aluminum powder and have diameters of about 10 nm to about 1,000 nm. There is no particular restriction on shapes of the heat conductive particles 60, and they may be appropriately selected depending on the purpose.
When the matrix 40 is formed on only one end of the carbon nanotube array 20, the polymer 80 is filled into the remaining portion of the carbon nanotube array 20. When the matrix 40 is formed on the first and second ends 21, 22 of the carbon nanotube array 20, the polymer 80 is filled in between the first and second matrixes 42, 44. In one embodiment, the polymer 80 is filled in between the first and second matrixes 42, 44 as shown in FIG. 1. The polymer 80 may directly contact the first and second matrixes 42, 44 or be spaced from the first and second matrixes 42, 44. The polymer 80 may be made of silica, polyethylene glycol, polyester, epoxy resin, anaerobic adhesive, acryl adhesive, rubber, or the like. Understandably, the polymer 80 can be made of the same material as the matrix 40. In one embodiment, the polymer 80 is directly contacting the first and second matrixes 42, 44 and made of two-component silicone elastomer.
Referring to FIG. 3, the thermal interface material 10 is applied between a first element 32, such as an electronic component, and a second element 34 such as a heat sink. The thermal interface material 10 is heated up by the heat generated by the electronic component. When the temperature of the matrix 40 is higher than its melting point, the matrix 40 changes to a liquid state, and along with the heat conductive particles 60, flow and fill the contact surface of the first element 32 and the second element 34 that has low surface planeness, thereby increasing the actual contact area between the thermal interface material 10 and the first element 32 and between the thermal interface material 10 and the second element 30. Thus, thermal contact resistance between the thermal interface material 10 and the first element 32, and between the thermal interface material 10 and the second element 34 are decreased. Furthermore, the heat conductive particles 60 directly contact the carbon nanotubes of the carbon nanotube array 20, thereby increasing heat dissipating efficiency. The heat conductive particles 60 flow inwards into intervals defined between every adjacent two carbon nanotubes of the carbon nanotube array 20, filling in any space between the first element 32 and the second element 30. Thus, the heat dissipating efficiency of the thermal interface material can be further increased.
Depending on the embodiment, certain of the steps described in the methods below may be removed, others may be added, and the sequence of steps may be altered. It is also to be understood that the description and the claims drawn to a method may include some indication in reference to certain steps. However, the indication used is only to be viewed for identification purposes and not as a suggestion as to an order for the steps.
One embodiment of a method for fabricating the thermal interface material is shown. The method includes:
step S10: providing the carbon nanotube array 20;
step S11: forming the matrix 40 on the first and second ends 21, 22 of the carbon nanotube array 20; and
step S12: adding a plurality of heat conductive particles 60 into the matrix 40 and contacting the heat conductive particles 60 with the carbon nanotubes of the carbon nanotube array 20 to obtain the thermal interface material 10.
In step S10, the carbon nanotube array 20 may be acquired by the following method. The method employed may include, but is not limited to, chemical vapor deposition (CVD), Arc-Evaporation Method, or Laser Ablation. In one embodiment, the method employs high temperature CVD. Referring to FIG. 2, the method includes:
step S101: providing a substrate 12;
step S102: forming a catalyst film 14 on the surface of the substrate 12;
step S103: treating the catalyst film 14 by post oxidation annealing to change it into nano-scale catalyst particles;
step S104: placing the substrate 12 having catalyst particles into a reaction chamber; and
step S105: adding a mixture of a carbon source and a carrier gas for growing the carbon nanotube array 20.
In step S101, the substrate 12 may be a glass plate, a multiporous silicon plate, a silicon wafer, or a silicon wafer coated with a silicon oxide film on the surface thereof. In one embodiment, the substrate 12 is a multiporous silicon plate, that is, the plate has a plurality of pores with diameters of less than 3 nm.
In step S102, the catalyst film 14 may have a thickness in a range from about 1 nm to about 900 nm and the catalyst material may be Fe, Co, Ni, or the like.
In step S103, the treatment is carried out at temperatures ranging from about 500° C. to about 700° C. for about 5 hours to about 15 hours.
In step S104, the reaction chamber is heated up to about 500° C. to about 700° C. and filled with protective gas, such as inert gas or nitrogen, for maintaining purity of the carbon nanotube array 20.
In step S105, the carbon source may be selected from acetylene, ethylene or the like, and have a velocity of about 20 standard cubic centimeters per minute (sccm) to about 50 sccm. The carrier gas may be inert gas or nitrogen, and have a velocity of about 200 sccm to about 500 sccm.
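The process window described in steps S101 to S105 can be summarized as a simple configuration check; the ranges below are transcribed from the preceding paragraphs, while the candidate recipe values are hypothetical:

```python
# Process window for the CVD growth steps, transcribed from the text above;
# the candidate recipe values are hypothetical and only illustrate the check.
CVD_WINDOW = {
    "catalyst_film_nm": (1, 900),
    "anneal_temperature_c": (500, 700),
    "anneal_time_h": (5, 15),
    "growth_temperature_c": (500, 700),
    "carbon_source_sccm": (20, 50),
    "carrier_gas_sccm": (200, 500),
}

def check_recipe(recipe: dict) -> list:
    """Return the parameters that fall outside the stated window."""
    out_of_range = []
    for key, (low, high) in CVD_WINDOW.items():
        value = recipe.get(key)
        if value is None or not (low <= value <= high):
            out_of_range.append(key)
    return out_of_range

candidate = {"catalyst_film_nm": 5, "anneal_temperature_c": 600, "anneal_time_h": 10,
             "growth_temperature_c": 650, "carbon_source_sccm": 30,
             "carrier_gas_sccm": 300}
print(check_recipe(candidate) or "recipe is inside the stated window")
```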
In step S11, as described above, the matrix 40 includes the first matrix 42 and the second matrix 44. The first and second matrixes 42, 44 are respectively formed on the first and second ends 21, 22 of the carbon nanotube array 20. The method of forming the first and second matrixes 42, 44 is described in the following. The method includes:
step S110: injecting the polymer 80 among the carbon nanotubes of the medium portion of the carbon nanotube array 20;
step S111: coating the first matrix 42 on the exposed first end 21 of the carbon nanotube array 20;
step S112: removing the substrate 12 connected to the second end 22 of the carbon nanotube array 20; and
step S113: coating the second matrix 44 on the second end 22 of the carbon nanotube array 20.
In step S110, a method of injecting the polymer 80 among the carbon nanotubes includes the following steps:
forming a protective layer on the first end 21 of the carbon nanotube array 20;
immersing the carbon nanotube array 20 having the protective layer into a solution of the polymer 80;
curing the liquid state based polymer 80 filled in interstices between the carbon nanotubes to form a composite material of the polymer 80 and the carbon nanotube array 20; and
removing the protective layer from the composite material.
The protective layer may be made of polyresin or the like. The protective layer can be directly pressed on the end of the carbon nanotube array 20 to tightly contact with it. The liquid state based polymer 80 is placed in the air or stove to cure and dry it or is placed into a cool room to dry it. Understandably, if a height of the carbon nanotube array 20 immersed by the solution of the polymer 80 can be predetermined as desired, the protective layer can be omitted.
In step S111, the first matrix 42 can be coated on the first end 21 of the carbon nanotube array 20 via a brush or printed on that end via a printer. In step S112, the substrate 12 can be directly stripped or etched via a chemical etch method. In step S113, a method of coating the second matrix 44 may be similar to that of coating the first matrix 42. Understandably, when only the first end 21 of the carbon nanotube array 20 is coated with the first matrix 42, the step S113 can be omitted.
In step S12, a method of adding the heat conductive particles 60 includes distributing a number of the heat conductive particles 60 on a surface of the first matrix 42 and heating the first matrix 42 to a temperature higher than the melting point of the first matrix 42. When the temperature of the first matrix 42 is higher than the melting point thereof, the first matrix 42 will change to a liquid state. The liquid-state first matrix 42 may not easily flow because of surface tension. The heat conductive particles 60 can fall into the liquid-state first matrix 42 due to gravity. Understandably, the method of adding the heat conductive particles 60 into the second matrix 44 is similar to that as described above. There is no particular restriction on the quantity of the heat conductive particles 60 as long as the heat conductive particles 60 can thermally connect to the carbon nanotubes of the carbon nanotube array 20.
Understandably, in step S11, the matrix 40 can be formed on one of the first and second ends 21, 22 of the carbon nanotube array 20.
In the above method of fabricating the thermal interface material, a good heat-conductive channel is formed through the thermal interface material by the heat conductive particles and the carbon nanotubes. In order to decrease the thermal contact resistance between the thermal interface material and the electronic components, the surface of the thermal interface material does not need to be treated, such as through chemical mechanical polishing or mechanical grinding, because the matrix can be melted into a liquid state. Therefore, the manufacturing cost can be decreased.
It is to be understood, however, that even though numerous characteristics and advantages of embodiments have been set forth in the foregoing description, together with details of the structures and functions of the embodiments, the disclosure is illustrative only, and changes may be made in detail, especially in matters of shape, size, and arrangement of parts within the principles of the disclosure to the full extent indicated by the broad general meaning of the terms in which the appended claims are expressed.
BRIEF DESCRIPTION OF THE DRAWINGS
Many aspects of the embodiments can be better understood with reference to the following drawings. The components in the drawings are not necessarily drawn to scale, the emphasis instead being placed upon clearly illustrating the principles of the embodiments. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
FIG. 1 is a schematic view of an embodiment of a thermal interface material.
FIG. 2 is a schematic view of a carbon nanotube array used in the thermal interface material of FIG. 1.
FIG. 3 is a schematic, cross-sectional view of an electronic assembly having the thermal interface material of FIG. 1.
Having been recognized as a resilient and vibrant artist, I am inspired by the warmth of nature that surrounds me. As a creator of nature on canvas, panel, wood, wall or whatever the surface is, it takes discipline, will power, courage and determination to represent nature in its true form, pure! I am inspired by some of the greatest painters who have ever lived. I favor impressionism because it's a true representation of nature. Currently, as I evolve in my craft, I am experimenting with the combination of Modern Art and Cubism intertwined with Impressionism. Inherently, I have been a painter from creation, and as I grew older I realized that when you are chosen for a specific purpose you hold the key to your destiny.
My process is a very simple one, which I believe leads to a great end result. After careful observation of the motif, whether for time of day, weather or season, and most of all the narrative within, I then prepare the surface, canvas, panel, wood or wall murals, and lay in the color temperature that represents the light key. I begin by sketching the subject wet into wet or on a dry medium grey background. As I proceed, I paint what I see within the light key, which transforms the motif to life.
Currently, my overall body of work is represented in a total of twelve series; Modern Art meets Cubism, Cuban Experience, Coal Miners, A Look at DC, Water Lilies at Giverny and The Trench Town Rock Experience, just to name a few. These are mainly from life experience throughout my travels as I capture the anatomy of the human body and nature at its best. | https://www.mckinsongallery.com/artist-statement |
Series: Tales of the Otori
Volume: 1
Genre: Fantasy
ISBN: 1573222259
Pages: 304 pages
Publisher: Riverhead Books
Price: $24.95
Reader Rating: 9 out of 10
Votes: 18
Across the Nightingale Floor by Lian Hearn
Description:
Every now and then a novel appears, completely unlike anything that has appeared before. Across the Nightingale Floor is such a work: a magical creation of a world beyond time.
Set in an imaginary, ancient Japanese society dominated by warring clans, Across the Nightingale Floor is a story of a boy who is suddenly plucked from his life in a remote and peaceful village to find himself a pawn in a political scheme, filled with treacherous warlords, rivalry, and the intensity of first love. In a culture ruled by codes of honor and formal rituals, Takeo must look inside himself to discover the powers that will enable him to fulfill his destiny.
A work of transcendent storytelling with an appeal that crosses genres, genders, and generations, Across the Nightingale Floor is a rich and brilliantly constructed tale, mythic in its themes and epic in its vision. It is poised to become the most captivating novel of the year.
Also in this series are Grass for His Pillow, Brilliance of the Moon, and The Harsh Cry of the Heron. | https://www.sfbookcase.com/viewbook.asp?bookno=5272
Why patent professionals should take a second look at re-examination
Australian patent law allows for re-examination of granted Australian patents by IP Australia, the national Patent Office, as an alternative to seeking revocation before the courts. If the re-examined patent is found to lack novelty or inventive step, it may then be revoked by the Patent Office. The re-examination provisions are similar to those provided under US law, in that they allow any person to request re-examination of a granted patent.
Re-examination is under-utilised as a tool in Australia because it has a number of significant disadvantages compared with revocation proceedings before the courts. However, it is still significantly cheaper and also faster than revocation proceedings. Despite this, the Australian patent profession has so far been hesitant to use re-examination to attack the validity of an Australian patent, as few requests for re-examination have ultimately succeeded in invalidating a patent.
A prior art search is essential to any re-examination in order to identify prior art that clearly anticipates the claims of the Australian patent. However, unless a single document is found to support this, there is a low likelihood of the patent ultimately being found invalid under re-examination. That is, while re-examination may succeed on the ground of lack of novelty, it is unlikely to succeed on the ground of lack of inventive step. This is at least partially due to the nature of Australian re-examination proceedings.
The only grounds of invalidity considered in re-examination are novelty and inventive step. Information that became publicly available only through the doing of an act may not be considered during re-examination. While this does not prevent the lodging of expert evidence regarding the common general knowledge of a person skilled in the art (which is highly relevant to determining inventive step), it does result in the need for each element of the common general knowledge to be shown as a documentary disclosure.
Further, while the person requesting re-examination must provide prior art and a statement of relevance with the re-examination request, he or she then has no further involvement in the re-examination process. However, the patentee has (multiple) opportunities to respond to the examiner and to supply argument and evidence regarding common general knowledge and inventive step. In practice, patent office examiners typically are not in a position to sustain an inventive step objection in the face of evidence and argument submitted by the patentee as to the state of the common general knowledge of a person skilled in the art. As the person requesting examination has no right of reply or further involvement in the re-examination process, any arguments or evidence submitted by the patentee cannot be refuted. Accordingly, it is only in a prima facie case of lack of novelty that re-examination is a truly worthwhile exercise in a commercial context.
Re-examination should always be considered as a potential alternative (or preliminary) to initiating revocation proceedings in the Federal Court. However, a granted patent that survives re-examination can (in a commercial context, if not a legal one) be considered to have a higher presumption of validity and give the patentee additional leverage in commercial negotiations. The re-examination can also identify for the patentee any weaknesses in the patent prior to any litigation, giving it the opportunity to amend the patent to put it in better order. Accordingly, re-examination should be approached with caution, used only when very highly relevant prior art is available (preferably a prima facie case of invalidity on the ground of novelty), and the potential risks should be considered prior to requesting re-examination.
Re-examination is potentially an effective strategic tool where a patent of clearly dubious validity affects business activities in the Australian market.
This is an insight article whose content has not been commissioned or written by the IAM editorial team, but which has been proofed and edited to run in accordance with the IAM style guide. | https://www.iam-media.com/article/why-patent-professionals-should-take-second-look-re-examination |
Lactase was produced by Streptococcus salivarius subsp. thermophilus grown on deproteinized whey. Maximum lactase production in 6.85 percent whey supplemented with certain nutrients was 54.5 U/ml. Lactase was purified by ammonium sulfate and acetone fractionation and gel filtration. It was immobilized on chitin and chitosan by covalent binding. Lactase immobilized on chitosan had the highest activity for hydrolysis of whey lactose. The maximum amount of glucose produced by the immobilized lactase was 20 percent of lactose during conversion of 71 percent lactose in 6 h. Xanthomonas campestris was able to convert glucose, galactose, a mixture of them, lactose, or whey to xanthan; the yield was 86, 67, 83, 2, and 1.8 percent, respectively, after 72 h in batch fermentation. When deproteinized whey that had been hydrolyzed by immobilized lactase was used for xanthan production, the synthesis of xanthan was generally as good as with comparable conventional media, and the yield reached 65 percent.
Introduction
In recent years there has been a great deal of interest in using industrial wastes as nutrient sources in bioconversions. Whey is a by-product of the dairy industry. In Egypt, about 500,000 tons of whey are produced per year; most of this amount is run to drain without use, constituting a serious pollution problem (Facia, 1983). Whey is a fluid containing very low quantities of milk solids and a high concentration of lactose. A typical sweet dairy whey contains 7 percent solids and is composed of about 70 percent lactose, 12 percent protein and smaller amounts of organic acids, minerals and vitamins (Glass and Hedrick, 1977). Given that it contains nutritionally valuable substances, whey is a good candidate to be utilized as an economic substrate by certain microorganisms for production of essential and beneficial compounds such as ethanol, lactic acid, amino acids and SCP (Mann, 1986).
Lactase can be obtained from several microorganisms, including lactose-fermenting molds, yeasts and bacteria. Kluyveromyces fragilis (Mahoney et al., 1975) and Streptococcus thermophilus (Rao and Dutta, 1981) are the best sources of the enzyme. The latter, which is widely used as a starter organism for yogurt manufacture, is a food-safe organism and a promising source of lactase. Recently, there has been an effort to produce this enzyme in large amounts because of its application for hydrolyzing lactose in milk products, alleviating problems associated with whey disposal.
Xanthomonas campestris is industrially interesting for its ability to produce xanthan gum, an extracellular high molecular weight polysaccharide, which is used in a variety of applications as a stabilizing, viscosifying, emulsifying, thickening and suspending agent (Becker et al., 1998). Many researchers have studied the fermentation conditions required for optimal gum production (Moraine and Rogovin, 1973; Kennedy, 1982; Torrestiana et al., 1990; Garca-Ochoa et al., 1992; De Vuyst and Vermeire, 1994; Rajeshwari et al., 1995). In industrial production of xanthan gum, whey may offer a cheap cultivation medium. Due to the low level of lactase present in X. campestris (Frank and Somkuti, 1979; Walsh, 1984; Fu and Tseng, 1990), the bacterium is not able to grow well and produce xanthan gum in lactose medium. Xanthan gum can be successfully produced in a whey-containing medium if X. campestris is genetically engineered or if lactose is first hydrolyzed to glucose and galactose so that whey can be utilized more efficiently.
The goal of this article, avoiding genetic manipulation, is the latter, more economical approach: to achieve this production by cultivating the original strain (X. campestris) in whey pretreated with immobilized lactase.
Materials and Methods
Bacterial strains: Streptococcus salivarius subsp. thermophilus EMCC 10509 was obtained from the microbial culture collection center of Cairo MIRCEN, Fac. Agric., Ain Shams Univ., Egypt and maintained on YPG medium containing (g/L): yeast extract, 5.0; peptone, 10.0; glucose, 5.0; NaCl, 5.0 and agar 20.0. The pH was adjusted to 7.2-7.4. Xanthomonas campestris NRRL B-1459 was used throughout this study. The strain was maintained on slants with YPGM medium (Rajeshwari et al., 1995).
Whey Preparation: Whey from bovine was supplied as sweet dried whey from Sigma Co., USA. It contained (w/w) 73 percent lactose, 10.8 percent protein and 0.75 percent phosphorus. Deproteinized whey solutions were prepared by acidification (pH 4.5) and heat treatment (90°C for 30 min). Then, the preparation was cooled and filtered to remove the precipitated protein.
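As a rough orientation, complete hydrolysis of lactose (lactose + H2O giving glucose + galactose) slightly increases the total sugar mass because a water molecule is incorporated. The sketch below uses the 73 percent (w/w) lactose content stated above and assumes a 6.85 percent whey solution as in the abstract; everything else is standard stoichiometry:

```python
# Rough stoichiometric sketch of complete lactose hydrolysis in deproteinized whey.
# Lactose content (73% w/w of whey solids) is taken from the text; the rest is
# standard stoichiometry (C12H22O11 + H2O -> C6H12O6 + C6H12O6).
M_LACTOSE = 342.3   # g/mol
M_HEXOSE = 180.2    # g/mol (glucose or galactose)

def hydrolysis_products(lactose_g: float):
    """Return (glucose_g, galactose_g) for complete hydrolysis of lactose_g grams."""
    mol = lactose_g / M_LACTOSE
    return mol * M_HEXOSE, mol * M_HEXOSE

whey_solids_g_per_l = 68.5          # e.g. a 6.85% (w/v) whey solution
lactose_g_per_l = 0.73 * whey_solids_g_per_l
glc, gal = hydrolysis_products(lactose_g_per_l)
print(f"lactose {lactose_g_per_l:.1f} g/L -> glucose {glc:.1f} g/L + galactose {gal:.1f} g/L")
```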
Lactase Production: Lactase was produced under optimum conditions as described by Menshawy (1997). The medium employed for lactase production contained 5.48 percent sweet dried whey (Sigma Co.) in water together with “supplementary nutrients” [0.5 percent NaH2PO4, 0.5 percent yeast extract, and 0.3 percent (NH4)2SO4]. The initial pH was adjusted to 7.0. The strain was grown in shake flasks (120 rev/min) for 24 h at 40°C.
Extraction and Purification of Lactase: Harvested cells were washed twice with 0.067 M potassium phosphate buffer, pH 6.8. Washed cells were suspended in 25 ml of 0.067 M potassium phosphate buffer, pH 6.8, containing 0.5 mM MgSO4 and 0.1 mM MnCl2. The suspension was treated with toluene (2 percent v/v) at 37°C for 10 min. Broken cells were removed and discarded by centrifugation at 15 000 rpm for 30 min at 4°C and the supernatant was used as an enzyme source. Cell free extract (1000 ml) was collected and precipitated by the addition of one volume of acetone at 4°C. Then, the precipitate was collected by centrifugation and the residual acetone was evaporated using an air stream. The precipitate was dissolved in phosphate buffer at pH 7.0 and dialyzed at 4°C against the same buffer. Solid (NH4)2SO4 up to 60 percent saturation was added slowly to the dialyzed solution, the precipitate which formed was collected by centrifugation at 15000 rpm for 20 min in refrigerated centrifuge, resuspended in phosphate buffer, pH 7.0 and dialyzed extensively against the same buffer at 4°C. The partially purified enzyme was applied on a Sephadex G-100 column to remove the (NH4)2SO4, then active fraction was collected and used as a source of lactase.
Preparation of Immobilized Lactase: Chitosan beads were prepared by shaking 0.5 g chitosan in 10 ml of 0.1 M HCl containing 2.5 percent glutaraldehyde for 2 h at 30°C. The beads were precipitated by the addition of 0.1 M NaOH to neutrality. The beads were collected by filtration and washed with water. The wet chitosan beads were mixed with 5.0 ml of the lactase solution (1000 units). After being shaken for 1 h at 30°C, the unbound lactase was removed by washing with distilled water. In the case of chitin, 0.5 g chitin was shaken in 10 ml of 2.5 percent glutaraldehyde in 0.1 M acetate buffer (pH 5.5) for 2 h. The solid material was filtered and washed. The treated chitin was mixed with 5.0 ml of the lactase solution (1000 units) for 1 h at 30°C. The unbound lactase was removed by washing with distilled water (Ohtakara, 1988).
Assay of Immobilized Lactase Activity: Immobilized lactase (approximately g of protein) was suspended in 50 mmol/l of phosphate buffer (pH 6.9) containing 4.56 percent whey lactose at 40°C with magnetic stirring. After 10 min of incubation, or at various times, a portion of the reaction mixture was removed and the amount of lactose hydrolyzed was assayed. One unit of immobilized lactase was defined as the amount of immobilized lactase required to hydrolyze 1 μmol of lactose per min at 40°C and pH 7.0.
Whey Hydrolysis with Immobilized lactase: Whey permeate, adjusted to pH 7.0, was hydrolyzed using immobilized lactase. Hydrolyzed whey permeate was supplemented with a basal medium (per liter: K2HPO4, 7 g; KH2PO4, 2 g; NH4NO3, 0.6 g; FeSO4.7H2O, 0.01 g; MgCl2.6H2O, 0.1 g; MnCl2, 0.001 g; Na-citrate, 2 g and 0.125 percent tryptone). The pH of the solution was finally adjusted to 7.0 with 2N H2SO4 and 2N KOH. 50 ml of whey medium was dispensed into 250 ml Erlenmeyer flasks, sterilized, then 5 ml of 24 h old inoculum was added (for preparing inocula, a fresh culture grown on the YPGM agar slant at 28°C for 48 h was transferred to 500 ml Erlenmeyer flasks containing 100 ml of the YPGM broth and incubated at 28°C and 250 rev/min for 24 h).
Xanthan Production: In shake-flask (250 rev/min on a rotary shaker) experiments, cultures were grown at 30°C using 250-ml Erlenmeyer flasks containing 50 ml of liquid xanthan production medium (a basal medium amended with different sugar sources) and inoculated with 10 percent (v/v) inoculum. Inocula were prepared as previously mentioned.
Sugar determination: Glucose was determined by glucose oxidase peroxidase. Galactose was determined by galactose oxidase peroxidase. Lactose was determined by the method of Nickerson et al. (1976).
Xanthan Determination: Cultures were centrifuged at 10,000 xg for 10 min. Ethanol was added to aliquots of the supernatant to a concentration of 70 percent with agitation, followed by standing in the cold for 3 hr. The precipitates were collected by centrifugation and redissolved in aliquots of distilled water. The modified Anthrone method (Trevelyan and Harrison, 1952) was used for quantitative determination. The standard curve was constructed by using pure xanthan.
Viscosity Measurement: Viscosity was measured using a rotational viscometer at a constant shear rate of 10.71 s-1 and 30°C.
pH Determination: The pH value was measured using laboratory pH-meter with glass electrodes (Knick-Digital-pH meter 646).
Protein determination: Protein was determined by the method of Lowry et al. (1951), using crystalline bovine serum albumin as standard.
Lactase activity determination: The activity of lactase was determined at 35°C using 45 mM lactose as substrate (in 50 mM phosphate buffer, pH 7.0). Glucose evolution was analyzed using the glucose oxidase/peroxidase system (Bergmeyer and Bernt, 1974). One unit of lactase activity is defined as the amount of enzyme that hydrolyses 1 μmol lactose/min at 35°C.
Results and Discussion
Lactase production: Lactase production by S. salivarius subsp. thermophilus, grown under optimum conditions in different concentrations of whey supplemented with essential nutrients, was investigated. Data represented in Fig. 1 show that after 24 h of cultivation, total lactase activity (54.5 units/ml) was obtained with 6.85 percent whey (5.0% lactose). Consumption of whey lactose reached 93.44 percent after 24 h. The enzyme level and remaining lactose in whey medium are within the range of values reported previously for S. salivarius subsp. thermophilus (Rao and Dutta, 1981; Menshawy, 1997).
Purification of lactase: Purification parameters are summarized in Table 1. A 29.25-fold purification of the lactase was achieved with an overall recovery of 59.29 percent.
The results are in line with Rao and Dutta (1981).
Immobilization of lactase: The immobilization of S. salivarius subsp. thermophilus lactase by covalent binding through glutaraldehyde onto chitin and chitosan was carried out (Table 2). A considerably good loading efficiency (354 units/g) was obtained for lactase immobilized on chitin, but a low immobilization yield was detected (70 percent). Lactase immobilized on chitosan by covalent binding showed higher activity (425 units/g) and a better immobilization yield (85%). Therefore, chitosan was used for lactase immobilization for the purpose of whey hydrolysis. Immobilization of lactase on chitosan might be useful in processing milk by-products into valuable sweeteners, or into nutritional media for fermentation processes. Its possible application to solving many problems related to the use of lactose should also be considered (Leuba and Widmer, 1977; Greenberg and Mahoney, 1981).
Whey lactose hydrolysis: Quite high activity for the hydrolysis of lactose was obtained with the immobilized lactase. Table 3 indicates the concentrations of glucose produced during hydrolysis of 5.48 percent whey (4.0 percent lactose) with the lactase immobilized onto chitosan. A maximum of 20 percent glucose (of the total lactose) was obtained at 71 percent conversion of lactose, which is slightly less than with the free lactase. The results are in line with Huffman and Harper (1985) and Mozaffar et al. (1986).
Xanthan production: The basal medium was supplemented with glucose, galactose, or both (a 50/50 mixture of glucose and galactose) and used as the xanthan production medium for X. campestris. The results given in Fig. (2a-c) revealed that glucose and galactose, either alone or in combination, were utilized for xanthan synthesis. Glucose was consumed more rapidly than galactose. The results also showed simultaneous utilization of both sugars. After 72 h of batch fermentation, 17.22 g/L of xanthan accumulated in the broth. This maximum yield, obtained in glucose medium, corresponded to 86 percent after 72 h.
When lactose was used in the production medium as the sole carbon source, only a slight amount of lactose was assimilated by the tested organism. Only 0.4 g of xanthan accumulated per liter of broth after 72 h of batch fermentation (Fig. 2d).
The low level of lactase present in X. campestris may explain this behavior. It was expected according to Stauffer and Leeder (1978), who found that no significant amount of xanthan gum is produced when X. campestris is grown in lactose medium. Frank and Somkuti (1979), Walsh (1984) and Fu and Tseng (1990) reported that X. campestris was not able to grow in a lactose medium and to synthesize high xanthan yields because of the low concentration of produced lactase. Moreover, Drahovska and Turna (1995) found that the original strain X. campestris 1069 produced 10.5 g of xanthan gum per liter of glucose medium after 16 h and no detectable amount of xanthan in lactose medium.
Only slight amounts of lactose from whey were assimilated by X. campestris in whey medium; the yield was very low (1.75 percent), with only 0.35 g xanthan/L after 72 h (Fig. 2e). This result is in agreement with the previous data on xanthan production in lactose medium; therefore, an attempt to use a lactose-based substrate (such as whey) for gum production by X. campestris would be difficult. Some attempts have been previously made by several groups to construct lactose-utilizing X. campestris strains for xanthan production. Fu and Tseng (1990) were able to select a strain which could utilize lactose for xanthan production, but the strain was not stable.
Therefore, in this study, immobilized lactase was first used to hydrolyse the lactose in whey to the fermentable sugars glucose and galactose. In that case, the bacterium was able to grow and successfully produce xanthan from pretreated whey in amounts near to those produced by galactose-grown cells, reaching 12.99 g xanthan/L after 72 h (Fig. 2f), 40-fold higher than in a medium containing untreated whey. The results obtained indicate the possibility of using whey pretreated with immobilized lactase in the industrial production of xanthan.
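The yield percentages quoted in this section appear to be grams of xanthan produced per 100 grams of sugar initially supplied. A minimal arithmetic check, assuming an initial sugar concentration of 20 g/L in the production medium (this concentration is not stated in the excerpt and is inferred from the reported yields):

```python
def yield_percent(xanthan_g_per_l: float, initial_sugar_g_per_l: float = 20.0) -> float:
    """Xanthan yield as a percentage of the sugar initially supplied (assumed 20 g/L)."""
    return 100.0 * xanthan_g_per_l / initial_sugar_g_per_l

print(round(yield_percent(17.22), 1))  # glucose medium: 86.1, matching the reported 86 percent
print(round(yield_percent(0.35), 2))   # untreated whey medium: 1.75, matching the reported 1.75 percent
```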
Inequalities and privileges : middle-class mothers and employment
O'Hagan, Clare
URI: http://hdl.handle.net/10344/1649
Date: 2010
Publication type: Doctoral thesis
Supervisor: O'Connor, Pat; Gray, Breda
Abstract:
This thesis explores the inequalities and privileges women experience by combining motherhood with paid employment. Examining the experiences of thirty ‘working mothers’ through an intersectional1 lens, this thesis reveals complex patterns of inequality and privilege, which arise at the intersection of motherhood with paid work because in contemporary Ireland the normative construction of an ideal worker is one without care responsibilities, and an ideal mother works full time in the home. Applying a feminist, intersectional research methodology, a case study was conducted with thirty women in a middle class Irish suburb. During focus group discussions and interviews, women reveal they experience different relations of privilege and penalty, because the social relations of gender, motherhood and class intersect with the institutional domains of family, workplace and society and at these intersections, women experience privileges or inequalities which vary according to each woman’s individual circumstances. Through the concepts of choice, care and time, this study reveals the power operating through the dominant discourses of neo-liberalism, individualism, feminism and motherhood, which encourage women to both devote significant effort to developing their children, while also to commit themselves to productive paid work. Women navigate the terrain between motherhood and paid work with little social support and each woman’s decision to combine motherhood with paid work is configured as her individual ‘choice’, thus the dilemmas which arise are her own responsibility. This intersectional approach reveals relationships between discourses which are interdependent and create new complex patterns of inequality for ‘working mothers’. By privileging some women sometimes, enduring inequalities are created for all.
Description: peer-reviewed
This item appears in the following Collection(s)
Doctoral theses
Doctoral theses (AHSS)
Sociology
Related items
Showing items related by title, author, creator and subject.
Who cares?: ‘Working mothers’, childminders and childcare. O'Hagan, Clare (Manchester University Press, 2012)
Childcare is central to women’s ability to participate in paid work. Drawing on empirical research conducted with middle class ‘working mothers’1 in an Irish suburb2, this article examines these women’s childcare arrangements ...
Who cares? The economics of childcare in Ireland. O'Hagan, Clare (Journal of Motherhood Initiative for Research and Community Involvement, 2012)
Childcare is central to women’s ability to participate in paid work. This article explores the increasing demand for childcare and women’s ability to source and retain childcare in the context of the Irish State’s ...
Ideologies of motherhood and single mothers. O'Hagan, Clare (Edwin Mellen Press, 2006)
This chapter examines the situation of lone parents in contemporay Ireland, in particular the workings of ideologies of motherhood and the family through different sites, contexts and institutions in order to determine the ...
FESTA expert report 4.1 gendering decision making and communication processes. O'Hagan, Clare; O'Connor, Pat; Veronesi, Liria; Mich, Ornella; Sağlamer, Gulsun; Tan, Mine G.; Çağlayan, Hülya (EU, 2015)
Executive Summary: The purpose of this action-research project is to effect structural and cultural change in higher level education and research institutes, and particularly in their decision-making bodies and processes ...
Perpetuating academic capitalism and maintaining gender orders through career practices in STEM in universities. O'Hagan, Clare; O'Connor, Pat; Myers, Sophia Eva; Baisner, Liv; Apostolov, Georgi; Topuzova, Irina; Sağlamer, Gulsun; Çağlayan, Hülya (Taylor and Francis, 2019)
Academic capitalism is an outcome of the interplay between neoliberalism, globalisation, markets and universities. Universities have embraced the commercialisation of knowledge, technology transfer and research funding ... | https://ulir.ul.ie/handle/10344/1649 |
Possible “topological superconductor” could overcome industry’s problem of quantum decoherence.
A potentially useful material for building quantum computers has been unearthed at the National Institute of Standards and Technology (NIST), whose scientists have found a superconductor that could sidestep one of the primary obstacles standing in the way of effective quantum logic circuits.
Newly discovered properties in the compound uranium ditelluride, or UTe2, show that it could prove highly resistant to one of the nemeses of quantum computer development — the difficulty with making such a computer’s memory storage switches, called qubits, function long enough to finish a computation before losing the delicate physical relationship that allows them to operate as a group. This relationship, called quantum coherence, is hard to maintain because of disturbances from the surrounding world.
The compound’s unusual and strong resistance to magnetic fields makes it a rare bird among superconducting (SC) materials, which offer distinct advantages for qubit design, chiefly their resistance to the errors that can easily creep into quantum computation. UTe2’s exceptional behaviors could make it attractive to the nascent quantum computer industry, according to the research team’s Nick Butch.
“This is potentially the silicon of the quantum information age,” said Butch, a physicist at the NIST Center for Neutron Research (NCNR). “You could use uranium ditelluride to build the qubits of an efficient quantum computer.”
Research results from the team, which also includes scientists from the University of Maryland and Ames Laboratory, appear today in the journal Science. Their paper details UTe2’s uncommon properties, which are interesting from the perspectives of both technological application and fundamental science.
One of these is the unusual way the electrons that conduct electricity through UTe2 partner up. In copper wire or some other ordinary conductor, electrons travel as individual particles, but in all SCs they form what are called Cooper pairs. The electromagnetic interactions that cause these pairings are responsible for the material’s superconductivity. The explanation for this kind of superconductivity is named BCS theory after the three scientists who uncovered the pairings (and shared the Nobel Prize for doing so).
What’s specifically important to this Cooper pairing is a property that all electrons have. Known as quantum “spin,” it makes electrons behave as if they each have a little bar magnet running through them. In most SCs, the paired electrons have their quantum spins oriented in a single way — one electron’s points upward, while its partner points down. This opposed pairing is called a spin singlet.
A small number of known superconductors, though, are nonconformists, and UTe2 looks to be among them. Their Cooper pairs can have their spins oriented in one of three combinations, making them spin triplets. These combinations allow for the Cooper-pair spins to be oriented in parallel rather than in opposition. Most spin-triplet SCs are predicted to be “topological” SCs as well, with a highly useful property in which the superconductivity would occur on the surface of the material and would remain superconducting even in the face of external disturbances.
“These parallel spin pairs could help the computer remain functional,” Butch said. “It can’t spontaneously crash because of quantum fluctuations.”
All quantum computers up until this point have needed a way to correct the errors that creep in from their surroundings. SCs have long been understood to have general advantages as the basis for quantum computer components, and several recent commercial advances in quantum computer development have involved circuits made from superconductors. A topological SC’s properties — which a quantum computer might employ — would have the added advantage of not needing quantum error correction.
“We want a topological SC because it would give you error-free qubits. They could have very long lifetimes,” Butch said. “Topological SCs are an alternate route to quantum computing because they would protect the qubit from the environment.”
The team stumbled upon UTe2 while exploring uranium-based magnets, whose electronic properties can be tuned as desired by changing their chemistry, pressure or magnetic field — a useful feature to have when you want customizable materials. (None of these parameters are based on radioactivity. The material contains "depleted uranium," which is only slightly radioactive. Qubits made from UTe2 would be tiny, and they could easily be shielded from their environment by the rest of the computer.)
The team did not expect the compound to possess the properties they discovered.
“UTe2 had first been created back in the 1970s, and even fairly recent research articles described it as unremarkable,” Butch said. “We happened to make some UTe2 while we were synthesizing related materials, so we tested it at lower temperatures to see if perhaps some phenomenon might have been overlooked. We quickly realized that we had something very special on our hands.”
The NIST team started exploring UTe2 with specialized tools at both the NCNR and the University of Maryland. They saw that it became superconducting at low temperatures (below -271.5 degrees Celsius, or 1.6 kelvin). Its superconducting properties resembled those of rare superconductors that are also simultaneously ferromagnetic – acting like low-temperature permanent magnets. Yet, curiously, UTe2 is itself not ferromagnetic.
“That makes UTe2 fundamentally new for that reason alone,” Butch said.
It is also highly resistant to magnetic fields. Typically a field will destroy superconductivity, but depending on the direction in which the field is applied, UTe2 can withstand fields as high as 35 tesla. This is 3,500 times as strong as a typical refrigerator magnet, and many times more than most low-temperature topological SCs can endure.
While the team has not yet proved conclusively that UTe2 is a topological SC, Butch says this unusual resistance to strong magnetic fields means that it must be a spin-triplet SC, and therefore it is likely a topological SC as well. This resistance also might help scientists understand the nature of UTe2 and perhaps superconductivity itself.
“Exploring it further might give us insight into what stabilizes these parallel-spin SCs,” he said. “A major goal of SC research is to be able to understand superconductivity well enough that we know where to look for undiscovered SC materials. Right now we can’t do that. What about them is essential? We are hoping this material will tell us more.”
Learn more: Newfound Superconductor Material Could Be the ‘Silicon of Quantum Computers’
Nordberg, D. and Katelouzou, D., 2015. Alternatives within corporate ‘ownership’. In: Society for the Advancement of Socio-Economics, 2--4 July 2015, London. (Unpublished)
Full text available as: PDF, AlternativesWithinOwnership SASE 2015 final.pdf - Accepted Version (296kB)
Copyright to original material in this document is with the original owner(s). Access to this content through BURO is granted on condition that you use it only for research, scholarly or other non-commercial purposes. If you wish to use it for any other purposes, you must contact BU via [email protected].
Any third party copyright material in this document remains the property of its respective owner(s). BU grants no licence for further use of that third party material.
Abstract
Institutional investors have long played a central role in corporate governance but no more so than since the financial crisis of 2007-09. To counteract short-termism, the UK Stewardship Code (Financial Reporting Council, 2010) encouraged investors to engage with the companies in which they invest and to develop a sense of ownership. France (Commission Europe, 2010; ORSE, 2011) and Germany (discussed in Roth, 2012) took similar actions. The European Union (European Commission, 2011, 2013) included investor engagement in its review of corporate governance, while in the US the Dodd-Frank Act (Library of Congress, 2010) gave shareholders new voting powers and made it easier to raise shareholder resolutions. Some funds that favour this approach now call themselves "shareowners" rather than "shareholders" (Butler & Wong, 2011). This approach assumes shareholders are able to prevent corporate excess and might want to. But obstacles arise from the changing structure and power balances in institutional investment: hedge funds, funds-of-funds, sovereign wealth, and the revival of shareholder activism. This paper takes its cue from a parallel debate about changes in structure and power in corporations. In his paper "After the corporation", Davis (2013) provocatively argues that scholarship on organizations and industrial policy are based on an outdated conceptualization of the corporation. He describes how companies including Apple, Google, Facebook and Amazon are now giants in the eyes and portfolios of institutional investors. They are giants by market capitalization, but pigmies by employment. The disaggregation of production functions across industries makes the corporation of yore a relic of a previous industrial age. In the US at least, the old giants made up a large part of the social structure and services that have held society together. What happens to the structure of society "after the corporation", he asks? This paper turns that spotlight on investors. The policy push towards stewardship evokes both a bygone era of family-owned enterprises and corporations controlled by grand financiers. But the patient capital of Warren Buffett is a model few follow, or could. New money from end-investors flows instead into funds-of-funds, detaching the end beneficiary even further from control. Setting public policy to make finance serve the whole economy as envisaged in the "universal owner" (Hawley & Williams, 2007; Urwin, 2011), modelled on the large pension fund, seems a laudable goal. The economic interests of these investors lie more in long-term social advances than short-term trading profits. But such policy prescriptions may privilege a dying class of investor against other more vibrant ones. Moreover, they may legitimate shareholder primacy at a time when scholars and the rest of the policy framework question it (Armour, Deakin, & Konzelmann, 2003; Bainbridge, 2010; Stout, 2013). We – scholars, policymakers and practitioners alike – need to consider alternatives. Within the system of wealth creation and like the corporation, the traditional investor – that is, the universal owner – remains an important economic force. But what alternatives within the system will work as these investors decline as a social force? What alternatives arise "after the owner"?
Justice for the Luddites
The modern West is perhaps unique in world history for its adulation of the future. Throughout history, most writers have been at best uninterested in, and at worst fearful of, what the future will bring. The whole linear view of history is perhaps a Western idea, but the idea of ‘progress’, of the upward, usually exponential, trend of human history is certainly our mistake. If one chooses to view history in terms of technological progress, one must look over millennia, at countless civilisations all over the world, most rising to a similar technological stage, and ruling for hundreds if not thousands of years, then collapsing into dust. One may imagine the Romans as the most advanced civilisation before the advent of ours, but of course ‘progress’ didn’t continue on from there; society regressed. Perhaps China would be a better candidate. China achieved incredible technological advancement around the time of Qin Shi Huangdi, building monumental canals, tombs and the early stages of a well-known wall. They set up a complicated professional government, built around an educated civil service. They maintained a similar system for the next 1900 years. Events continued, China was conquered, it split, it had coups and was at times rendered lawless by wandering warlords. But one would struggle to identify any serious cultural or technological revolution. At the end of that 1900 years, China was felled by a people that had, only 1500 years previously, consisted of pagan warbands.
China isn’t the only civilisation that got caught in a ‘rut’. Ancient Egypt existed for almost three millennia, the Greco-Roman world for 1400 years, the Mayans for 3500 years. None of these civilisations achieved what we have now. We are not an accumulation of these societies. We are something different. If we were to graph human technological development properly, it would not look like exponential growth; progress would look digital, not analog. This is important, as it has completely twisted our view of the world. It has moved us away from the old cyclical nature of history towards one where the future is idealised. Our culture is formed around the concept of progress, of outdoing the past and driving ever ‘forward’, towards a goal no one is quite sure of. It is a view that affects our science and it affects our politics. Anything that is old is ‘backward’. If something was used in the past and is no longer used, it must be because it has been improved upon. Despite our own problems, we look at past societies with disdain.
This brings me to the Luddites. Luddite has become a byword for the ignorant and the ‘backward’. They were a people who misunderstood the world, feared the future and, now we can look back at them with hindsight, were completely wrong. Or were they? The future did prevail and the ancestors of those Luddites are undoubtedly more materially wealthy than they ever were before. Whether they are happier is less clear. What is also less clear, is how it is that the Luddites could be wrong. The mathematics is fairly simple. Say one tailor can make a shirt per day, and a shirt lasts a man a year. To keep a town of 3650 people in shirts would require ten tailors. If one of those tailors can make a machine that allows him to produce ten shirts per day, not only can he keep up with the whole town’s demand on his own, but he will also be able to produce the shirts cheaper as each one requires less labour, thus outselling his competitors. The only result is 9 unemployed tailors.
This has to be the way it works. People often talk about industrialisation and automation as if those 9 tailors can be employed somewhere else in the shirt industry, perhaps as shirt-machine mechanics. ‘The jobs of the future’. Of course this makes no economic sense at all. If, when our tailor buys this machine, he finds it requires 9 mechanics at any one time, the cost of the shirts returns to normal. He can no longer outsell his competitors. He has, instead, the same sales but nine men on the payroll and the interest on the machine eating into his profit. No matter how you rearrange it, the goods, after industrialisation and certainly after automation, are made with fewer man hours than before and therefore people will lose their jobs. But that didn’t happen and, despite the clear maths, the Luddites were proved wrong and will forever be categorised as reactionary fools. This is because they didn’t predict consumerism.
If those ten tailors all want to keep their jobs, there are two ways they can go about it. First, they can all agree to restrictions on the price they sell their goods at. This was how the guild system worked and it stifles progress a great deal. It would prevent any of them getting a machine; they would all continue as normal. Needless to say this didn’t happen in the end. The second solution is that all ten of them get machines. They reach equilibrium again and all are competitive with one another. But now they have 36500 shirts to sell. This is material progress. This is how industrialisation and automation does not lead to (much) unemployment. The production of goods and services must grow exponentially to keep up with rising efficiency.
We all know this; it’s rudimentary economics. But it is the shifting of this goods surplus that causes a great deal of the dissatisfaction that industrial society inspires. The first way to shift it is to find new markets. That village of 3650 is not the whole world. Tailors can start selling shirts in neighbouring towns and countries, but this of course is likely to outcompete and destroy the industry of the native tailors. On the other hand, the native tailors may develop superior manufacturing methods and outcompete our village tailors. One can either institute protectionist trade restrictions to prevent this (but in the Anglo-American tradition that is unprogressive), or one can conquer, destroy the native industry and force the natives to consume your goods. This was the primary drive behind imperialism, a desperate thirst for new consumers and a desperate fear of new competitors.
The second way is to encourage faddism. The tailors can come up with very slightly different styles of shirt, say, every month. Using advertising and other tricks of the trade, the tailors can convince the townsfolk that they simply must have a new shirt every month, or they will look like old-fashioned fools. Not only will the same 3650 people consume the full 36500 shirts, but in a stroke of genius there is now an inadequate supply, meaning prices can be driven up higher than the original, more practical, shirt had.
This has the same result as the third way. An easier way to increase shirt consumption, without the marketing costs associated with encouraging a fad, is to purposely lower the quality of the shirts so that they wear through. This has a double benefit; the shirts are likely to be cheaper to make. Profits expand in every direction. As with faddism, the great victim of this, other than the consumer who is made a mug, is the planet. More pollution, more material waste, lower quality, higher price. The clever tailors can, of course, shirk responsibility for this with some simple actions; perhaps with an ‘eco’ brand or an apologetic twitter account.
There is an even more catastrophically short sighted way to keep up with supply. Population increase. The more people in the town, the more shirts they can buy. Of course the real tragedy of this is that, when the population has doubled, they import another ten tailors. Now the citizens of the village live in a rather smoggy textile town, their houses have been turned into flats to accommodate the population growth and of course, with twenty tailors, the problem of selling surplus shirts has not been alleviated at all in the long term.
People tend to blame corporations or politicians for the ills of capitalism, but they can’t be seriously blamed. The issues we are suffering from are not a by-product of progress, they are progress. If technological progress continues, consumption must. The economy has to grow or it will utterly collapse. We are not a people bravely hacking through the jungle towards a brighter future, we are a people trapped on a hurtling train with no idea where it is going and no means to stop it going there. We are the shark civilisation; if we stop moving we will die. Capitalism is far and away the best way to achieve progress, and progress has been good to us in most spheres of our lives. But we need control. We mustn’t be slaves to it. Perhaps if people could see more clearly what the Luddites were rejecting when they destroyed machinery, they would have a little more sympathy for their movement.
Editor’s Note: This piece was written by two Democratic congressional staffers on behalf of a 12-member organizing committee. Since they face possible retaliation until the House vote takes place, The New Republic has agreed to keep their identities private.
The Democratic Party platform states: “Democrats will make it easier for workers, public and private, to exercise their right to organize and join unions.” One hundred days after House Speaker Nancy Pelosi promised to support staff unionization—as the groundswell of labor organizing across the country reached the halls of Congress—the credibility of lawmakers is being put to the test.
Will our bosses lead by example by passing the resolution granting labor protections to their own workers—or is Congress above the laws it creates?
The cruel ironies of our jobs in Congress are hard to swallow. We advocate for livable wages while qualifying for food stamps due to low pay. We write speeches condemning corporations’ failure to protect against sexual harassment in the workplace, even as we too, lack sufficient recourse. We assure our constituents they’re being represented, even if we are the only person of color in the room. We fight for working families while questioning whether we can financially survive another year in public service.
We deal with abusive bosses, constant pressure, and brutal burnout from 60-, even 70-hour workweeks. We work in an environment where discrimination abounds and where we’re made to feel powerless when responding to sexual or psychological abuse by management. One year after a mob of domestic terrorists and white supremacists attacked our workplace, many of us do not feel safe at work. As one staffer stated, “They wouldn’t care if I was dead.’’
In a time of heightening inequality and racial disparity, staff who come from nonwhite and working-class communities can’t make it in such low-paying jobs—which exacerbates the lack of low-income and minority representation. In 2020, 89 percent of top Senate aides and 81 percent of top House aides were white. If Congress is advised by workers far whiter and wealthier than the communities we represent, how can we ever hope to achieve our promise of equal justice under the law?
Our bosses are keenly aware of the plight of workers across the country. Every House Democrat but one voted to pass the Protecting the Right to Organize Act this term—proudly declaring their support for the greatest expansion of labor protections in nearly 100 years. Yet we, the dedicated public servants who wrote and work tirelessly to advance this and other laws to protect workers’ rights, are not afforded the same basic protections.
When Congress passed the Congressional Accountability Act in 1995, it did so on the premise that the legislative branch should not carve its own employees out of labor protections. However, Congress never granted its workers legal protections to organize and bargain collectively for a better workplace—exempting itself from its own legislation for the past 26 years, with little pushback.
That changed when we, the Congressional Workers Union, went public this February with our drive to secure staffers’ right to form a union and bargain for livable wages, safer work conditions, and equity on Capitol Hill. Our organizing drive inspired Michigan Congressman Andy Levin to introduce a resolution finally to extend House staff basic protections to form a union, and it inspired Speaker Pelosi to announce a $45,000 minimum salary and a vote on the resolution next week, ahead of the 100-day milepost. With this vote, our bosses will have the opportunity to make good on their promises to protect all workers, including their own.
During the February hearing on our right to organize, Chair Zoe Lofgren said, “This institution could not run without our staff.” But workers who have the option to leave bad jobs do—and our democracy is paying the price. The average tenure on the Hill is just three years, and 2021 marked the worst staff turnover in the House in at least two decades. Our lack of legal protections to collectively bargain have left Congress ill-equipped to meet the needs of the American people.
It’s no secret that these shameful working conditions are causing a brain drain from Congress to the powerful special interests trying to influence it. In 2021, industry spent $2.74 billion and employed approximately 11,772 staff on lobbying efforts—compared to the $1.48 billion spent on 9,034 congressional staff. As a result, staff are incentivized to become the lobbyists their less experienced replacements have to rely on. Collective bargaining will help Congress retain the talent it needs to serve the American people.
All across America, workers are standing up for their rights, banding together, and winning. Public support for labor unions is at a nearly 60-year high. Now that this national reckoning has reached the halls of Congress, lawmakers have an opportunity—and those with strong labor records have an obligation—to make good on their pro-union campaign promises in their own workplaces. Every worker deserves the right to unionize and bargain collectively. If Democrats are for the people, we are people too. | https://newrepublic.com/article/166401/house-democrats-let-staffers-unionize?utm_medium=Social&utm_campaign=EB_TNR&utm_source=Twitter |
2nd May, 2017
While Malcolm Turnbull might be cooling it with his use of the word ‘innovation’ lately, the sector remains vitally important for Australia’s future.
Where and how the government allocates money to innovation could either see Australia slip further behind on the world stage or begin an ascent to the truly clever country.
We spoke to leading incubator BlueChilli, which mentors and help grow the next wave of startups, about what could help nurture baby companies.
Alan Jones, BlueChilli’s Startup Evangelist, isn’t exactly optimistic about the chances of the government resetting the debate with this budget.
“Looking at the recent tightening of the R&D tax concessions and axing of the 457 skilled immigration scheme, it feels more like there’s an undeclared policy of active discouragement of a viable startup industry,” he told The Pulse.
He also pointed to a planned cut of $2.8 billion to tertiary education as a possible knock on the industry when Australia is competing to both attract and keep tech talent.
But there are some things that Jones thinks would help the startup sector if they were implemented in the budget.
Jones would love to see a government-backed fund established which would match private sector investment in early-stage startups.
“Unlike export and research grant schemes, this would give the taxpayer an equity stake in potentially successful investments while also putting Australian startups on a level playing field,” said Jones.
He said a similar scheme exists in Israel. It has helped the country become a startup hub – largely as a response to positive government policy.
In fact, Israel now produces more startups than nations such as China, the UK and Korea.
Supporting the organisations that support Australia’s startups seems like a no-brainer, so luckily there is grant support available from the government.
However, according to Jones, the actual criteria for applying for a grant is fuzzy at best.
“[The government should] work with the ATO to clarify the qualifying criteria for recently announced grant funding for startup accelerator and incubator programs, as well as funding for entrepreneurs-in-residence,” he said.
“While funding is available, it remains mostly unallocated because qualifying criteria haven’t yet been established.”
The brain drain of Australia’s tech talent is an enormous problem that is holding back the sector.
While Jones thinks that the government’s recently announced changes to higher education fees aren’t going to help this, there is something it can do to help correct the ship.
“The government could resist brain drain by providing grants or scholarships to Australian startups to assist them in providing internships and employment to our best final year and postgraduate computer science and engineering students,” he proposed.
Currently, Australia’s best and brightest are simply heading overseas to chase opportunity, with a lack of market maturity in Australia.
By providing a reason for them to stick around, Jones is hoping that the talent can help grow Australia’s market rather than helping another country’s.
At the moment, everybody’s trying to work out how the 457 changes are going to affect the employment landscape, and the innovation sector’s no different.
Jones, and the innovation sector more broadly, is seeking what every sector is seeking – clarity.
“Whatever replaces the axed 457 visa system must reflect the needs of Australia’s tech startup industry,” said Jones.
He also pointed to an interesting side note on how skills gaps are calculated.
At the moment, the Australian Computer Society alone defines the areas of shortage in the ICT sector.
Instead, Jones would like to see startup involvement in the definition of skills shortages.
“It must be determined in consultation with bodies such as StartupAus and TechSydney which more closely represent the interest of startups,” said Jones. | https://www.myob.com/au/blog/federal-budget-2017-what-does-the-innovation-sector-want/ |
Fundamental internal and external changes coupled with digitalisation have enabled new market entrants, FinTechs, to innovate services, creating competitive solutions to incumbents' offering. The purpose of this article is to understand the service innovation approach of FinTech companies. The complexities of service innovation are explained with a theoretical concept of service innovation stack, which presents the multiple components needed for successful service innovation. The usefulness of this construct is observed with a longitudinal case study of 10 FinTech startup from Finland using interviews and other data. These are shown with a visual representation, which ties in the internal activities with the external ones and shows the interplay between them. With the representation of the service innovation stack, the service innovation within financial industry can be better understood and further developed. The authors further suggest that though the framework is based on cases from FinTech startups, it might be relevant also for the incumbents.
Riikkinen, Mikko ; Saraniemi, Saila ; Still, Kaisa. / FinTechs as service innovators : Understanding the service innovation stack. In: International Journal of e-Business Research. 2019 ; Vol. 15, No. 1. pp. 20-37.
Elanor Retail Property Fund is an externally managed real estate investment fund investing in Australian retail property, focusing predominantly on quality, high yielding neighbourhood and sub-regional shopping centres. The strategy of Elanor Retail Property Fund is to acquire and unlock value in these assets to provide attractive cash flows and capital growth potential, and to grow its investments under management through establishing new managed investment funds.
Biography
Hannah's research portfolio is broadly based around the role of nutrition in ocular disease, but has included the development and evaluation of ophthalmic instrumentation, clinical trials, the development of hand-held technologies for people with low vision, and investigations of the psychology of nutritional behaviour. This range of research has been made possible through collaborations with engineers, computer scientists, clinicians and health psychologists and is linked by the aim to impact on the lives of those people living with ocular diseases.
Areas of Expertise (6)
Clinical Education
Ophthalmic Instrumentation
Low Vision
Ocular Nutrition
Macular Pigment
Ocular Physiology
Education (3)
Aston University: MEd 2017
Aston University: PhD, Ocular Nutrition 2005
Aston University: BSc, Optometry 2000
Affiliations (6)
- College of Optometrists : Member
- General Optical Council : Member
- Higher Education Academy : Member
- American Academy of Optometry : Fellow
- Higher Education Academy : Senior Fellow
- Higher Education Academy : Principal Fellow
Media Appearances (2)
Curious Kids: how do eyes grow?
The Conversation online
2018-12-31
Each different type of cell is the starting point for the different parts of our bodies. So one type of cell might help to grow our ears, while another will help to grow our hearts, and so on. There are three different types of cell that work to make our eyes. When we have been growing inside mum for about three weeks, our eyes start to be created.
A feast for the eyes: how to improve your eyesight with food
Daily Express online
2015-11-02
“Some studies suggest that maintaining a healthy diet, including oily fish, nuts, fruit and vegetables in your meals could reduce your eye disease risk in the future,” says Dr Hannah Bartlett of Aston University’s School of Life & Health Sciences in Birmingham.
Articles (5)
Agreement in clinical decision-making between independent prescribing optometrists and consultant ophthalmologists in an emergency eye department (Eye)
2020 The specialty-registration of independent prescribing (IP) was introduced for optometrists in 2008, which extended their roles including into acute ophthalmic services (AOS). The present study is the first since IP’s introduction to test concordance between IP optometrists and consultant ophthalmologists for diagnosis and management in AOS.
Comparison of the eating behaviour and dietary consumption in older adults with and without visual impairment (British Journal of Nutrition)
2020 Globally, a high prevalence of obesity and undernutrition has been reported in people with visual impairment (VI) who have reported multi-factorial obstacles that prevent them from achieving a healthy diet, such as having restricted shopping and cooking abilities. The present study is the first to investigate the relationship between VI and dietary consumption using a representative sample size, standardised methods to categorise VI and a detailed analysis of dietary consumption.
Colour contrast sensitivity in eyes at high risk of neovascular age-related macular degeneration (European Journal of Ophthalmology)
2019 To generate the first published reference database of colour contrast sensitivity in eyes at high risk of neovascular age-related macular degeneration and to explore this important feature in quality of vision.
An analysis of the impact of visual impairment on activities of daily living and vision-related quality of life in a visually impaired adult population (British Journal of Visual Impairment)
2018 Previous research has shown that people with visual impairment are more likely to be malnourished and have reported to have difficulty shopping for, preparing, and eating food. They are also reported to have a poor quality of life. The present study aims to investigate the impact of visual impairment on activities of daily living and Vision-Related Quality of Life (VR-QoL) in a sample of adults with visual impairment who are living in the United Kingdom.
Testing the impact of an educational intervention designed to promote ocular health among people with age-related macular degeneration (British Journal of Visual Impairment)
2018 Research has shown that individuals affected by age-related macular degeneration (AMD) do not always consume foods or supplements known to be beneficial for ocular health. This study tested the effectiveness of an educational intervention designed to promote healthy eating and nutritional supplementation in this group. A total of 100 individuals with AMD completed baseline measures of several variables: confidence that diet affects AMD, motivation to engage in health-protective behaviours, knowledge about which nutrients are beneficial, and intake of kale, spinach, and eggs. Participants were allocated to either intervention or control conditions. Intervention participants received a leaflet and prompt card that contained advice regarding dietary modification and supplementation. Control participants received a leaflet created by the Royal College of Optometrists. | https://expertfile.com/experts/drhannah.bartlett/dr-hannah-bartlett |
Which restaurant serves up the best soup in Santa Fe? You be the judge!
About the Food Depot
The Food Depot is committed to ending hunger in Northern New Mexico. As the food bank for nine Northern New Mexico counties, The Food Depot provides food to 145 nonprofit agencies including emergency food pantries, hot meal programs, homeless shelters, youth programs, senior centers, homes for the mentally disabled and shelters for battered persons. This service enables these agencies to stay focused on their primary missions such as sheltering homeless families, providing hot meals to the homebound and offering life skills development to youth. The food bank distributes an average of 400,000 pounds of food and household products each month, providing more than 500,000 meals to people in need — the most vulnerable of our community — children, seniors, working families, and those in ill health. | https://www.flow3d.com/flow-science-sponsors-souper-bowl-2017/ |
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to method and apparatus for generating printed documents, in particular, for generating printed documents with security features for detecting unauthorized copying and alterations.
2. Description of Related Art
It is a well-known problem that documents printed on paper or other physical media are subject to duplication (copying) and potential alteration, and it can be difficult to guard and protect against unauthorized duplication and alterations. It is often difficult to verify whether a printed document is original or a copy, as most documents are printed by printers and can be copied by copiers. It is often also difficult to check whether the printed document has been altered or changed by a computer. In some applications, the paper of the original document is provided with built-in and often hidden security components so that a document without such security components can be discerned as a copy.
There are many known methods aimed at preventing unauthorized copying and alteration, or at least making them more difficult. In one method, a barcode is printed on the document to store data that can be used to authenticate the document. This type of method adds extraneous visible content to the document. Other methods use invisible security features, but many such methods are difficult to implement in real time, and the cost of producing the original document can be high. In addition, some methods may make the secured documents difficult to handle as regular paper documents because the processing methods change the flexibility and weight of the paper.
SUMMARY

The present invention is directed to a method and apparatus of generating printed documents with security features for detecting unauthorized copying and alterations.
Additional features and advantages of the invention will be set forth in the descriptions that follow and in part will be apparent from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
To achieve these and/or other objects, as embodied and broadly described, the present invention provides a method for producing a secured document, which includes: printing a visible content of the document on a medium; and printing a security layer over the visible content on the medium, the security layer comprising a layer of patterned transparent conductive material.
In one embodiment, the patterned transparent conductive material forms a memory circuit storing a security data. The step of printing the security layer may include: obtaining image data representing the visible content of the document; processing the image data to generate the security data; generating a memory circuit pattern based on the security data; and printing the memory circuit pattern using the transparent conductive material.
The step of printing the security layer may alternatively include: obtaining a document ID corresponding to the document as the security data; generating a memory circuit pattern based on the security data; and printing the memory circuit pattern using a transparent conductive material.
In another embodiment, the method further includes: measuring electrical properties of the printed security layer to obtain reference electrical property values; and performing one of the following steps: storing the reference electrical property values and a document ID in a storage device; printing a barcode on the medium which encodes the reference electrical property values, or printing a memory circuit pattern storing the reference electrical property values using the transparent conductive material. In another aspect, the present invention provides a method for authenticating a target printed document having a security layer printed over a visible content, the security layer comprising a layer of patterned transparent conductive material forming a memory circuit storing a security data, the method including: (a) scanning the document to generate a target image representing the visible content; (b) transmitting a probe signal to the memory circuit printed on the document and receiving any response from the memory circuit; (c) if no response is received, determining that the document is not authentic; and (d) if a response is received, (d1) obtaining the security data from the RF response; and (d2) determining whether the target document is authentic based on the target image and the security data.
In one embodiment, the security data stored in the memory circuit has been generated by processing an original image of the document using a predetermined algorithm, and wherein step (d2) includes: processing the target image generated in step (a) using the predetermined algorithm to generate target security data; and comparing the target security data and the security data obtained in step (d1) to determine whether they match each other.
In another embodiment, the security data stored in the memory circuit contains a document ID, and wherein step (d2) includes: obtaining the document ID from the security data; retrieving archived data from a storage device using the document ID, the archived data being descriptive of an original image of the document; and comparing the target image and archived data to determine whether the original image and the target image match each other.
In another aspect, the present invention provides a method for authenticating a target printed document having a security layer printed over a visible content, the security layer comprising a layer of patterned transparent conductive material, the method including: measuring electrical properties of the printed security layer; and obtaining reference values of the electrical properties, including obtaining the reference values from a barcode printed on the target document or retrieving the reference values from a storage device using a document ID obtained from the target document; and comparing the measured values of the electrical properties with the reference values to determine whether the target document is authentic.
In another aspect, the present invention provides printing system which includes: a first print engine for printing a visible content on a medium; a second print engine for printing a layer of patterned transparent conductive material on the medium to form a security layer; and a control section coupled to the first and second print engines, comprising one or more processors and memories having a computer readable program code embedded therein, the computer readable program code configured to cause the control section to execute a printing process including: controlling the first print engine to print a visible content of the document on a medium; obtaining image data representing the visible content of the document; processing the image data to generate a security data; generating a circuit pattern based on the security data, the circuit pattern including a memory circuit storing the security data; and controlling the second print engine to print the circuit pattern over the visible content using the transparent conductive material.
In another aspect, the present invention provides a system for authenticating a target printed document having a security layer printed over a visible content, the security layer comprising a layer of patterned transparent conductive material, the system including: a measurement device for measuring electrical properties of the printed security layer; a processing section coupled to the measurement device, comprising one or more processors and memories having a computer readable program code embedded therein, the computer readable program code configured to cause the processing section to execute an authentication process comprising: obtaining reference values of the electrical properties; and comparing the measured values of the electrical properties with the reference values to determine whether the target document is authentic.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

Embodiments of the present invention provide methods of generating a secured printed document on paper or other medium, by printing a security layer made of a transparent conductive material on top of the original printed document, i.e., over the ink or toner layer that forms the visible content of the document. The security layer is printed using a transparent conductive material such as transparent conductive toner or ink and is invisible to human eyes. In a first group of embodiments, the transparent conductive material forms a pattern of a radio frequency (RF) transponder circuit which can be read by a contactless RF reader. In a second group of embodiments, the transparent conductive material forms a pattern of a memory circuit which can be read by a contact type reader. In a third group of embodiments, the transparent conductive material is patterned but does not form a functional circuit, and its electrical properties can be measured with a contact type measuring device.
In the first group of embodiments, the patterned transparent conductive layer forms a printed circuit which acts as both a digital memory and a radio frequency antenna electrically coupled to each other. The circuit can be activated by a radio frequency signal from a radio frequency reader, and respond by returning a stream of data based on the printed pattern. This way, data stored in the digital memory of the printed circuit, referred to herein as security data, can be read by an RF signal processing reader for purpose of authentication.
In one embodiment, the security data stored in the printed circuit corresponds to the visible content of the printed document and can be processed by a computer for authentication (verification) purposes. Such data may be generated, at the time of printing, from the image data representing the visible content of the document. For example, the security data may be a hash code generated from the bitmap image of the visible content. As another example, the security data may be a compressed image of the document (e.g. a JPEG image). In another embodiment, the stored security data includes a document ID which can be used to retrieve archived data for purposes of authentication.
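As a rough, purely illustrative sketch of the hash-code option, security data could be derived from the scanned bitmap along the following lines. The use of Python, the Pillow imaging library, and SHA-256 here are assumptions for illustration only, not choices specified by the invention; the optional document ID argument corresponds to the archive-based variant.

# Illustrative sketch only: derive "security data" from the bitmap image of the
# visible content, either as a short hash code or combined with a document ID.
# Python, Pillow and SHA-256 are assumed; the text allows any suitable hash,
# encryption or compression algorithm.
import hashlib
from PIL import Image

def make_security_data(bitmap_path, document_id=None):
    image = Image.open(bitmap_path).convert("1")        # binarize the scanned page
    digest = hashlib.sha256(image.tobytes()).digest()   # hash of the binary bitmap
    if document_id is not None:
        # Archive-based variant: embed a document ID for later data retrieval.
        return document_id.encode("utf-8") + b"|" + digest
    return digest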
In the second group of embodiments, the transparent conductive layer forms a memory circuit pattern, but not a radio frequency antenna pattern. The memory circuit stores security data such as a hash code or a document ID similar to those in the first group of embodiments.
For the third group of alternative embodiments, the transparent conductive layer forms a conductive pattern, and electrical properties such as resistance, capacitance or inductance of the security layer may be measured by a suitable detector, for purpose of authentication.
Because the security layer printed over the document is transparent, a conventional copier cannot read its pattern to reproduce it during copying. Conventional copiers or printers also do not have the ability to print transparent conductive ink or toner to duplicate the patterned conductive material on a copy. Thus, if a document carrying the security layer is reproduced using a conventional copier, the resulting copy will not carry a patterned conductive material. Therefore, using a suitable reader or other detector, it is possible to determine whether a target printed document is an original printed document or a copy produced by a copier, as will be described in more detail later.
FIG. 1 illustrates an exemplary printed document carrying a security layer with a patterned circuit, which is schematically indicated by the dotted pattern. The pattern in this example is arbitrary and not an actual functional pattern. It should be noted that in the actual document the security layer is not visible to human eyes; the circuit pattern is made visible in this figure for purposes of illustration only.

In the example illustrated in FIG. 1, the patterned RF material extends substantially over the entire area of the printed document. Alternatively, it may extend over only a part of the printed document.
FIG. 2 schematically illustrates a printing system 10 that may be used to generate a printed document carrying the security layer. The printing system includes first and second printing sections (also referred to as print engines) 11 and 12, for printing the visible content of the printed document using regular ink or toner, and printing the invisible security layer using transparent conductive ink or toner, respectively. The system 10 also includes an image processing section 13 for performing functions such as raster image processing (RIP), etc.; a control section 14 for controlling the print engines and other components of the printing system as well as performing data processing functions such as generating the security data, as will be described in more detail later; and a pattern generator 15. The pattern generator 15 generates the RF circuit pattern or memory circuit pattern based on the security data for the first and second groups of embodiments, or generates the conductive pattern for the third group of embodiments. The image processing section 13, the control section 14, and the pattern generator 15 may be implemented by one or more processors executing program code stored in memories, or other suitable electronic circuitry. The above mentioned components are connected to each other, for example, by a bus or other wired or wireless communication link. Other components of the printing system, such as an I/O section, are not illustrated.
As described earlier, the invisible security layer is printed over the normal ink/toner layer that forms the visible content of the document. Thus, the printing process is carried out in two steps, the first to print the visible content, and the second to print the invisible security layer. In one embodiment, the first and second printing sections 11, 12 are physically located within one printer unit, so that the printer can perform both printing steps. A mechanical transport system may be provided so that the medium such as paper is automatically transported between the first and second print engines without operator intervention. In an alternative embodiment, the first and second printing sections 11, 12 are located in separate physical units, in which case an operator may be required to transport the medium from the location of the first printing section after the first printing step to the location of the second printing section.
More generally, the various components of the printing system 10 may be distributed in various physical units as desired. For example, the pattern generator 15 and the portion of the control section 14 that generates the security data may be located on a separate computer connected to the unit(s) that contains the print engines.
In a preferred embodiment, the printing system 10 also includes an optical scanning section 16. Suitable transport mechanisms may be provided to transport the medium among the scanning section 16 and the first and second print engines 11 and 12. Using such a printing system, an existing printed document (unsecured) may be scanned by the scanning section 16 to generate a document image, from which the security data is generated. Then, the printing system prints the document image on a medium using the first print engine 11 and prints the security layer using the second print engine 12 to generate a printed document carrying the security layer. This way, a secured printed document can be generated using an existing unsecured printed document. Alternatively, the printing system 10 may print the security layer directly over the existing printed document, whereby a security layer is added to the existing unsecured document to generate a secured document. In the latter case, the first print engine 11 is not necessary and may be omitted from the printing system 10.
Another function of the scanning section 16 is to scan the document that has been printed with the visible content by the first print engine but before the security layer is printed, as will be described in more detail later.
FIG. 3 schematically illustrates a processing system 20 which may be used to read and process a target printed document to determine whether it is an original document. The processing system 20 includes an optical scanning section 21 for scanning the visible content of the target document, and a reader/tester 22 for reading the security data stored in the printed circuit of the security layer or measuring the electrical properties of the security layer. For the first group of embodiments, the reader 22 may be a contactless RF reader, which transmits an RF probe signal to the printed RF circuit on a secured document and receives the RF signal returned from the RF circuit. For the second group of embodiments, the reader may be a contact type signal processing reader described in more detail later. For the third group of embodiments, the tester may be a contact type detector such as an LCR (Inductance (L), Capacitance (C), and Resistance (R)) tester described in more detail later. The processing system 20 further includes a data processing section 23 for processing the security data and the scanned image to determine whether the target document is authentic, as well as performing various other functions, as will be described in more detail later. The data processing section 23 may be implemented by one or more processors executing program code stored in memories, or other suitable electronic circuitry. The above mentioned components are connected to each other, for example, by a bus or other wired or wireless communication link. Other components of the processing system, such as an I/O section, are not illustrated. The various components of the processing system 20 may be distributed in various physical units as desired.
FIG. 4 schematically illustrates an overall system in which embodiments of the present invention may be implemented. The system includes the printing system 10, the processing system 20, one or more computers 30 (e.g. servers or client computers), and storage devices 40, connected via a network or other communication links. It should be noted that the printing system 10 and the processing system 20 are not required to be connected to the same network; they may be separately connected to respective servers which in turn are connected to the storage device, or not connected to any network at all. In particular, the printing system that prints a secured document and the processing system that reads a target document are not required to be at the same location or belong to the same organization.
FIG. 5 schematically illustrates a method for generating a secured document carrying a security layer containing an RF transponder circuit using the printing system 10.
First, source data is received which represents the document to be printed (step S101). The source data is in electronic form and may be of any suitable format, such as PDF, JPG, text format, a page description language such as PDL, etc. Based on the source data, a document is printed using the first print engine 11 on a medium (e.g. paper) (step S102). The document printed by step S102 carries the visible content of the document. This step includes any necessary data processing by the image processing section 13 and the control section 14.
Then, the printed document is scanned back using the scanning section 16 to generate a document image, preferably a bitmap image (step S103). The document image is processed by the control section 14 to generate security data (step S104). In a preferred embodiment, the security data includes a hash code generated from the binary document image. The security data may be encrypted and/or compressed to reduce the data size. Any suitable hash algorithm, encryption algorithm and compression algorithm may be used. Then, the pattern generator 15 generates an RF circuit pattern, which includes a memory circuit pattern based on the security data and an RF antenna pattern (the antenna pattern may be pre-stored in the printer or another processing system) (step S105). The second print engine 12 prints the RF circuit pattern, using transparent conductive ink or toner, on the medium over the visible content that was printed by the first print engine (step S106). The printed RF circuit pattern including the antenna pattern and the memory circuit pattern constitutes the security layer.
Optionally, the finished document may be read by an RF reader to verify that the security layer has been correctly printed and the security data is intact (step S107).
FIG. 6 schematically illustrates a method for authenticating a target printed document using the processing system 20. The target document is purported to have been printed using the process shown in FIG. 5.
First, the target document is read with the reader 22 such as an RF reader (step S201). If the document contains a security layer, the RF signal transmitted by the RF reader will activate the printed circuit pattern of the security layer, which will respond by transmitting the data stored in the memory circuit of the RF circuit pattern. If the RF reader does not receive a response from the target document or if the response is meaningless ("N" in step S202), it is determined that the target document is not original (e.g., it is copied or otherwise tampered with) (step S208). A response may be meaningless if, for example, a certain data format is expected but the received signal does not satisfy the format. If a meaningful RF signal is received ("Y" in step S202), the security data contained in the RF signal is extracted and stored as recovered security data (step S203). If the security data has been encrypted during the printing process, it is decrypted in this step.
Then, the target document is scanned with the optical scanning section 21 to generate a target image (step S204). The target image is preferably a bitmap image. The target image is then processed, using the same algorithms as step S104 during the printing process, to generate target security data (step S205). The target security data is compared with the stored recovered security data (step S206). If they do not match ("N" in step S207), it is determined that the target document is not original (step S209). If they match ("Y" in step S207), it is determined that the target document is authentic (step S210).
In the embodiments shown in FIGS. 5 and 6, the authentication (i.e. determining whether the target document is an original document) is carried out based solely on the target document itself, without referring to any data not contained in the target document. In such a method, the document is referred to as self-authenticating. To implement a self-authenticating system, the printing system 10 and the processing system 20 are not required to be able to access a common storage device. In the preferred embodiment, the security data stored in the printed RF circuit is a hash code which is relatively short. Comparing the hash code recovered from the RF circuit with the hash code generated from the scanned target image can indicate whether the visible content of the document has been altered, but cannot indicate what the alterations are.
Alternatively, the security data may be a compressed image of the visible content, which can then be compared to the scanned target image to determine whether the images are the same. This alternative may be more difficult to implement because the data amount of the compressed image is relatively large and it may be difficult to print an RF pattern to store such a large amount of data. As another alternative, the security data may contain a compressed image of small but critical areas of the document, such as signatures, names, dates, numbers and other key contents.
In lieu of a self-authenticating scheme, the RF pattern may be used in an authentication scheme in which archive data is stored in external storage and used to authenticate a target document. In such a method, archive data descriptive of the original document is stored in a storage device 40 during the printing process. Preferably, the archive data includes the document image. It may also include desired document management information such as author, time of creation, etc. A document ID is assigned to each archived document for data retrieval later. The document ID is stored in the RF circuit pattern printed on the document, and is later used to retrieve the archived data to authenticate the document. Such an authentication scheme is shown in FIG. 7 (printing process) and FIG. 8 (authentication process).
In the printing process shown in FIG. 7, steps S301 through S307 are generally the same as steps S101 through S107 of FIG. 5, except that in step S304, the document ID is used as a part of the security data. In step S308, the archive data with the document ID is stored in the storage device for later retrieval.
In the authentication process shown in FIG. 8, steps S401 through S410 are generally the same as steps S201 through S210 of FIG. 6, except that: in step S403, the recovered security data includes the document ID; in step S405, the document ID is used to retrieve archived data of the document from the storage device; and in step S406, the target image is compared with the archived data to determine whether the target document is authentic. For example, if the archived data includes the document image, the archived image and the target image may be compared in step S406. Because image comparison is used, this method can not only determine whether the target document is authentic, but also indicate what changes have been made.
Step S406 may be performed using a suitable image comparison algorithm. For example, image comparison may be performed on a pixel-by-pixel basis, or done by comparing various descriptive characteristics of the images. Alternatively, image comparison may be done manually by displaying the images to a user. Image comparison may be performed by a server connected to the processing system 20, as it tends to be computationally intensive.
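One possible form of the pixel-by-pixel comparison is sketched below. It assumes the archived and target images have already been registered to the same size and resolution (a practical requirement the text does not detail); NumPy and Pillow, and the 1% tolerance for scan noise, are assumed example choices.

# Illustrative pixel-by-pixel comparison for step S406. NumPy and Pillow are
# assumed; the tolerance for scanner noise is an arbitrary example value.
import numpy as np
from PIL import Image

def images_match(archived_path, target_path, tolerance=0.01):
    a = np.asarray(Image.open(archived_path).convert("1"), dtype=bool)
    b = np.asarray(Image.open(target_path).convert("1"), dtype=bool)
    if a.shape != b.shape:
        return False                       # different sizes: treat as non-matching
    changed_fraction = np.mean(a != b)     # fraction of differing pixels
    return changed_fraction <= tolerance   # small differences attributed to noise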
The printing process shown in FIG. 5 starts with source data in electronic form. Alternatively, as mentioned earlier, a secured document may be generated from an existing hard copy document. In one scenario, the existing document is first scanned (not shown in FIG. 5) to generate the source data (e.g. a bitmap image), and then steps S102 through S106 are carried out. This results in a secured copy of the existing (unsecured) document, while the existing document can be preserved. In another scenario, the existing document is scanned in step S103 (steps S101 and S102 are bypassed), and steps S104 through S106 are carried out by printing the RF circuit pattern on the existing document itself. As a result, a security layer is added to the existing document over the existing visible content. The same modification can be made to the printing process shown in FIG. 7.
In the printing process shown in FIG. 5, the document image used to generate the security data (step S104) is obtained by scanning the printed document (step S103) after the visible content is printed by the first print engine (step S102). Alternatively, the document image may be generated directly from the source data. For example, if the source data is a bitmap image, it can be used directly. Otherwise, a bitmap image can be generated from the source data using available programs. However, scanning back the actual printed document (step S103) may offer the advantage that the scanned document image will be closer to the target image generated later in the authentication process (step S204). This is because the scanned image data will contain various effects due to noise present in the printed document or other factors such as the color and reflectivity of the paper or other print medium. Thus, the hash code generated in the printing process (step S104) and the hash code generated in the authentication process (step S205) will match better.
As mentioned earlier, in a second group of embodiments, the transparent conductive layer forms a memory circuit pattern for storing security data, but not a radio frequency antenna pattern. The memory circuit pattern preferably includes two or more contact pads. A contact type signal processing reader, equipped with contact terminals that can be placed in contact with the contact pads of the printed circuit, sends an electrical probe signal to the memory circuit. The memory circuit is designed so that it will respond to the probe signal by returning an electrical signal representing the stored digital data. The number of contact pads may be two for serial data transfer, or more for parallel data transfer. The contact pads are preferably located at predetermined locations of the printed document, and the contact type reader has contact terminations at corresponding locations to form electrical contact with the contact pads.
According to the second group of embodiments, a secured document carrying such a security layer may be generated using the printing system 10 in a process similar to that shown in FIG. 5 or FIG. 7, with the following modifications. In modified steps S105 and S305, a memory circuit pattern is generated based on the security data. Steps S106 and S306 are not changed, but the pattern does not include an antenna pattern. In modified steps S107 and S307, the pattern is read by a contact type reader to verify the security data stored in the memory circuit.
According to the second group of embodiments, a target printed document may be read and authenticated using the processing system 20 in a process similar to that shown in FIG. 6 or FIG. 8, with the following modifications. In modified steps S201 and S401, the target document is read using a contact type reader rather than an RF reader. In modified steps S202 and S402, it is determined whether the security data can be read from the target document by the contact type reader. In steps S203 and S403, the security data is contained in the electrical signal transmitted by the memory circuit rather than the RF signal.
When the printed document is in circulation, the layer of transparent conductive material may be susceptible to damage either due to normal handling or due to deliberate tampering such as rubbing with an eraser. However, due to redundancy in the pattern (e.g., a conductive line may be damaged but not completely broken; some lines in the antenna pattern may be broken without losing the antenna function), the printed pattern can sustain a certain amount of damage without losing its ability to correctly respond to the RF activation. Because the embodiments of FIGS. 5-8 and variations thereof rely on the content of the security data stored in the printed circuit to perform authentication, a certain amount of damage or alteration of the printed circuit pattern will not change the authentication result.
This may be advantageous in practice (i.e. the security layer is not overly sensitive to normal handling), but may be disadvantageous in some situations. The line thickness of the circuit pattern may be designed based on practical considerations such as the amount of damage likely to occur due to normal handling of the printed document (which may, for example, depend on the property of the medium the circuit is printed on). One way to mitigate the potential problem of insensitivity to deliberate alteration is to design the memory circuit pattern or the RF circuit pattern (in steps S105 and S305) such that the memory circuit pattern or the memory circuit part of the RF circuit pattern is located over important areas of the document, such as a signature. Because the memory part of the RF circuit pattern is more sensitive to physical tampering, this can better protect the important areas of the document. When such areas are tampered with, it will likely result in the memory circuit pattern being damaged, leading to detectable errors in the response from the circuit.
As mentioned earlier, in a third group of embodiments, the electrical properties of the printed conductive materials are used as an indication of the integrity of the document. The electrical properties include conductivity (or impedance or resistance), capacitance and/or inductance. When the document is tampered with, for example, when an eraser or a sharp object is used to remove original printed content, the conductive materials over the content that is tampered with will be also removed or destroyed, so the electrical properties of the document will likely be changed. Resistance, capacitance and inductance may be measured using a contact type measurement device such as an LCR tester, which may be equipped with contact terminals arranged in a suitable pattern.
The conductive pattern printed over the document preferably includes contact pads for resistance, capacitance or inductance measurement using the LCR tester. The tester's terminals will contact the contact pads at pre-determined locations, and the resistance, capacitance and/or inductance of the printed conductive pattern can be measured. In a simple example, the conductive pattern is a set of parallel lines extending across the document with contact pads at both ends of each line. In another example, the conductive pattern includes one or more meandering lines with contact pads at both ends of each line. The track resistance R of such a line pattern is a function of the total printed circuit length L divided by the cross section of the pattern A: R = K*ρ*L/A, where K is a constant and ρ is the resistivity of the conductive material. A change in the width or thickness of the printed line pattern, or removal of segments of the lines, will result in a change in the measured resistance.
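As a rough worked example of the relation R = K*ρ*L/A (with made-up numbers, not material constants from the patent), halving the cross-section of a printed track, for instance through abrasion, doubles its resistance:

# Worked example of R = K * rho * L / A with assumed, illustrative values.
def track_resistance(rho, length_m, cross_section_m2, k=1.0):
    return k * rho * length_m / cross_section_m2

rho = 2.0e-4    # assumed resistivity of the transparent conductive ink, ohm*m
area = 1.0e-9   # assumed track cross-section (about 10 um x 100 um), m^2
print(track_resistance(rho, 0.5, area))        # intact 0.5 m meandering line: 1.0e5 ohm
print(track_resistance(rho, 0.5, area / 2))    # cross-section halved: 2.0e5 ohm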
During the printing process according to the third group of embodiments, illustrated in FIG. 9, after the visible content and the conductive pattern layer are printed (steps S501 through S503), the conductivity and/or capacitance of the printed document are measured using the contact type measurement device (step S504). The measured values (reference values) are stored in a storage device as a part of the archived data (step S505). Alternatively, the reference values may be coded in a barcode and printed on the document itself (step S505, which is a third printing step using the first print engine). If the reference data is stored externally in an archive, a document ID is stored in the RF circuit (in step S503) or in a barcode printed on the document (in step S502).
During the authentication process, illustrated in FIG. 10, the conductivity and/or capacitance of the target document are measured using a contact type measurement device such as an LCR tester (step S601). The reference conductivity and/or capacitance values are obtained, e.g., read from the RF circuit or the barcode, or retrieved from the archive using the document ID read from the RF circuit or the barcode (step S602). The measured conductivity and/or capacitance values are compared to the reference values to determine whether the document has been tampered with (steps S603 through S606).
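A hedged sketch of this comparison follows; the 5% tolerance that absorbs normal wear and measurement error is an assumed figure, and the dictionary keys are placeholders rather than names used by the patent.

# Illustrative comparison for steps S603-S606: measured L/C/R values are checked
# against stored reference values within an assumed tolerance.
def values_match(measured, reference, tolerance=0.05):
    for key, ref in reference.items():
        if key not in measured:
            return False
        if abs(measured[key] - ref) > tolerance * abs(ref):
            return False
    return True

reference = {"R_ohm": 1.0e5, "C_pF": 12.0}
print(values_match({"R_ohm": 1.02e5, "C_pF": 11.8}, reference))  # True: within tolerance
print(values_match({"R_ohm": 2.4e5, "C_pF": 11.8}, reference))   # False: track likely damaged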
When the electrical properties are used for document authentication, the printed conductive material may form a non-functional pattern, i.e., one that does not form an RF circuit or a memory circuit. In such cases, the electrical properties alone are used for document authentication. In alternative embodiments, both functional circuit patterns and non-functional patterns may be used. For example, a non-functional conductive pattern may be placed over an area of the document such as a signature area, and estimated or measured reference values of electrical properties of the non-functional pattern are stored in a memory circuit also printed on the document. This eliminates the need to print a barcode storing the reference values or to store the reference values in an archive. In other words, step S505 of FIG. 9 may be modified to printing a memory pattern storing the reference values. If the reference values are actually measured from the non-functional pattern after it is printed, a two-pass printing of the transparent conductive material may be required, once to print the non-functional circuit, the second to print the memory circuit storing the measured reference values. During authentication, in step S602 of FIG. 10, a reader is used to read the stored data from the printed circuit pattern. The other steps of FIG. 10 are unchanged.
As seen from the above descriptions, a common feature of the various methods according to various embodiments of the present invention is that a patterned transparent conductive layer is printed over the visible content of the document as a security layer. In the first and second groups of embodiments (FIGS. 5-8 and modified versions thereof), the patterned transparent conductive layer forms an RF transponder circuit or a memory circuit which stores security data to be used for document authentication. In the third group of embodiments (FIGS. 9-10), electrical properties of the transparent conductive layer are used to authenticate the document.
It will be apparent to those skilled in the art that various modification and variations can be made in the document authentication method and apparatus of the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention cover modifications and variations that come within the scope of the appended claims and their equivalents.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates an exemplary printed document carrying a security layer of patterned conductive material according to embodiments of the present invention.

FIG. 2 schematically illustrates a printing system for generating a printed document carrying the security layer according to an embodiment of the present invention.

FIG. 3 schematically illustrates a processing system for processing a target printed document to determine whether it is authentic according to an embodiment of the present invention.

FIG. 4 schematically illustrates an overall system in which embodiments of the present invention may be implemented.

FIGS. 5 and 6 schematically illustrate a method for generating a secured document carrying a security layer containing an RF transponder circuit, and for authenticating such a printed document, respectively, according to an embodiment of the present invention.

FIGS. 7 and 8 schematically illustrate a method for generating a secured document carrying a security layer containing an RF transponder circuit, and for authenticating such a printed document, respectively, according to another embodiment of the present invention.

FIGS. 9 and 10 schematically illustrate a method for generating a secured document carrying a layer of transparent conductive material, and for authenticating such a printed document, respectively, according to an alternative embodiment of the present invention. |
MyGov, the Indian government’s citizen engagement platform, in partnership with the Ministry of Higher Education, has launched an innovation challenge for the creation of an Indian language learning app. This innovation challenge was launched to advance Prime Minister Narendra Modi’s vision of celebrating India’s cultural diversity through greater interaction between its constituent parts.
MyGov launched the Innovation Challenge to create an app that will allow individuals to learn simple sentences of any Indian language and gain a working knowledge of that language. The objective of this challenge is to create an app that will promote regional language literacy, thus creating better cultural understanding within the country. The key parameters to be examined include ease of use, simplicity, graphical user interface, gamification features, user interface, user experience, and premium content that makes learning an Indian language easy and fun.
The Innovation Challenge is open to individuals, startups and Indian companies. MyGov envisions the app as being multi-modular, with the ability to teach through written, voice and video/visual modes. Application developers can provide multiple interfaces for learner engagement. The Innovation Challenge page can be accessed at https://innovateindia.mygov.in/indian-language-app-challenge/.
The Innovation Challenge ends on May 27, 2021. After evaluation of the prototype submissions, the 10 best teams will be invited to make presentations and the best 3 will be selected by a jury. The top 3 will receive funding of INR 20, 10 and 5 lakh respectively to improve their applications. The solutions will be evaluated on general parameters such as innovation, scalability, usability, interoperability, ease of deployment and campaign. | https://indiancountrynm.org/innovation-challenge-launched-by-mygov-for-the-creation-of-an-indian-language-learning-application/
Elise is a producer, dance artist and nature lover. She holds a BA (Hons) Psychology (University of Sussex) and Postgraduate Diploma Community Dance (Laban Conservatoire of Dance). Elise has worked as a dance teacher in community settings since 2010 and has been dancing and performing since age four. Elise has a background managing projects in a range of sectors (including the arts, education, technological innovation and corporate) since 2002, specialising in producing contemporary dance since 2014.
Elise has always been interested in supporting people to be the best version of themselves, whether by helping them to access social/health services, manage their day-to-day lives, or engage in activities that give enjoyment and meaning. Elise discovered that dance could be a tool for achieving the same through a company called Dance United, who worked very successfully with young offenders, reducing rates of reoffending and increasing rates of return to education, employment and training. This triggered a shift to a life dedicated to dance and the arts.
There is a growing body of research* that demonstrates the positive and lasting impacts dance can have on physical, mental and emotional health, on group and community cohesion, and on developing transferrable skills for education and work. Beyond the obvious physical benefits (increased fitness, mobility, flexibility, stability, coordination, etc.), dance can: build self-confidence, give a sense of achievement through creating and sharing, help people access and process traumatic emotions, build trust and leadership, and make and share meaning.
Whether working as a dancer, teacher or producer, Elise ‘s practice is underpinned by a desire to share the benefits of dance and arts engagement and to support strong collaborative working relationships with and between individuals.
* People Dancing and Arts Council England have excellent databases of resources on the multiple benefits of dance, you can find them here: | http://elisephillipsdance.co.uk/about/about-me/ |
The Inca Quarry Trail offers hikers incredible Andean scenery, challenging mountain passes and the opportunity for local interactions with the trail winding through villages in the Sacred Valley.
If that’s not enough for you to lace up your boots, you’ll get to see some of the lesser known archaeological sites and learn about the impressive building techniques used by the Incas. While many travellers visit Peru to see Machu Picchu (and hiking the Quarry Trail you still get to see this amazing site), there’s so much more to the Sacred Valley region, scattered with Inca ruins and relics to explore.
1. Choquequilla
Before the Quarry Trail hike begins, you’ll visit Choquequilla – a small ceremonial place where Incas worshipped the moon and the sun according to the change in seasons. It was a place dedicated to Pachamama (Mother Earth) and has beautifully carved edges as part of an altar. Nobody knows for sure its real name – some locals call it naupa iglesia.
2. Q’orimarca
Found within a clearing along the trail, the site was used as a checkpoint during the times of the Incas, and a resting spot for passing travellers and pilgrims on their way to Machu Picchu. A significant amount of the site is made up of storerooms, with plenty of space to keep food and supplies. Following the first pass, the path overlooks grasslands; arriving to the top of the second pass you will see the Urubamba mountain range and – if you’re lucky – a condor flying through the valley.
RELATED: 6 REASONS TO TAKE THE INCA QUARRY TRAIL TO MACHU PICCHU
3. Inti Punku
In the local language, Inti Punku means Sun Gate; the Incas built these structures throughout the Andes in honour of the sun god ‘Inti’ (not to be mistaken for the well-known Sun Gate at Machu Picchu). On the Quarry Trail, this marks the location of a smaller archaeological site with spectacular views of the snow-capped mountain ‘Veronica’, and into the Sacred Valley below overlooking the ancient town of Ollantaytambo.
4. Choqetacarpo
Further down the hill, you’ll come across small buildings that appear to be the remains of residences – experts have suggested these may have been used by stone masons recruited from all over South America to work in the quarry during the height of the Inca empire.
RELATED: QUARRY TRAIL FAQ: THE ‘OTHER’ ROUTE TO MACHU PICCHU
5. Kachiqata quarry
Making your way downhill on the last day of the trek, one hour from the campsite, you will get to learn more about the ancient excavating techniques of the Incas. After seeing some of the ruins strewn throughout the Sacred Valley, and understanding that these structures are a mark of engineering ingenuity (built with no mortar or metal tools), you’ll walk away even more impressed having seen part of an ancient quarry and discovering some of the techniques used to excavate and haul the blocks to build the town of Ollantaytambo.
6. Ollantaytambo
You get a real sense of how the Inca civilisation once thrived in this ancient town, home to two impressive ruins, sitting in the Sacred Valley. This is where the Quarry Trail ends, walking up the cobblestone lane that takes you into the main square where you can rest your wobbly legs and relish in the moment of completing the journey.
There’s more than one hiking option in Peru’s Sacred Valley. We offer Inca Trail and Quarry Trail itineraries. | https://www.intrepidtravel.com/adventures/quarry-trail-ruins/ |
If you have comments, questions and/or feedback, please email [email protected].
2019 Grant Application
January Workshop Presentation (PDF)
Special meetings of the Advisory Committee may be called in conformance of the Brown Act, including 24 hour notice of the meeting posted at the regular meeting location, and in those local newspapers that have requested to be informed of Advisory Committee Meetings.
On April 10, 2012. the Napa County Board of Supervisors signed a resolution creating an Arts & Culture Advisory Committee (ACAC) for the primary purpose of conducting a capacity building grant program and making recommendations to County staff regarding grant awards to be made to non-profit art and cultural organizations in Napa County. Funding for grants comes from the Special Projects Fund, which the Board of Supervisors established after the Transient Occupancy Tax (TOT) increased from 10.5% to 12% on January 1, 2005.
Each year the ACAC conducts a Request for Proposals application process, awarding arts grants to deserving arts-based nonprofit groups in Napa. The ACAC forwards its recommendations for deserving applicants to the Board of Supervisors, which makes the final decisions on grant funding. The goal of the arts grant program is to: provide more focus on growing arts and culture in Napa, include more diversity of cultures and voices, and foster more collaboration among Napa's art groups, as reflected in Napa's arts plan: A Community Cultural Plan for Napa County (PDF).
Read the Committee's Bylaws (PDF), including purpose, membership, terms and more. | https://www.countyofnapa.org/1391/Arts-Culture-Advisory-Committee
Our Philosophy:
Our approach in Biological Dentistry is first and foremost to acknowledge that the Body functions as a single organism and any alteration of a part can impact the entire system. There are numerous studies that demonstrate the impact of intrusive dental practices on the health of the individual. The body is an energetic entity, designed to function optimally with all that it was given genetically, to fulfill its life function. Our goal is to understand this and to maintain that integrity wherever possible. When necessary, we intervene to restore equanimity. Only when this balance is restored can the normal bodily functions return.
Often with interventions from well-meaning healthcare practitioners, procedures are performed that may prove to be inappropriate, to name a few: tonsillectomies, certain orthodontic procedures, vaccinations and more specifically, mercury containing vaccines and mercury fillings. Each creates its own potential liability, but collectively they can create a completely compromised immune system.
Nature’s way is for an infant to be breastfed, which results in a sucking action and develops a jaw capable of accommodating all the teeth to come in properly. Coarse, healthy, organic wholesome food, further leads to proper chewing (mastication), digestion and the functional development of the bite, a good prominent jaw, and a healthy body.
Terrain is a simple concept that is inviolable.
What is meant by Terrain?
Terrain is the natural structure (geography) of the body. If this is altered, through intrusive methods, or materials, it can negatively affect the energetic system or meridian flow along the organs of the body altering their capacity to function.
Once we violate the terrain, through intrusive intervention, the energetic flow within the body breaks its rhythmic patterns, blocks get created and breakdown occurs, creating pathology in weak areas in the body and certain immune functions become compromised.
An Integrative Approach in Dentistry to locate and correct interfering blocks.
At the Biological Dental Center, based on the principles of general dentistry and an integrative approach, we first establish where blocks have occurred due to inappropriate dental materials, dead teeth or poorly healed extraction sites, as well as poor bites or malocclusions. We then outline a course to correct these blocks in the energetic system. It is not unusual to integrate the principles of Traditional Chinese Medicine and Homeopathy in our approach while maintaining sound dental concepts. Recommendations can vary from replacement of mercury fillings to possible extraction of dead teeth, as well as replacement of missing teeth with appropriate materials and/or possible referrals for orthodontic consultation. We will also consider the influence of tonsils in the energetic schemata and what appropriate care may be necessary, with recommendations for such care.
We respect your Philosophy.
In evaluating your individual circumstance, we will consider your philosophy and make respective recommendations. If you choose to maintain your mercury fillings because they seem to be serviceable, or if you are interested in root canals, then we will refer you to someone to fill these needs. However if you are seeking an evaluation, and treatment or a second opinion with consideration of energetic principles relating to inappropriate electrical blocks in the oral cavity caused by multiple metals in the mouth and its effects on your overall health then this may be the office for you, and we invite you to come see us.
Our Approach : Considering the effects of the Oral Cavity on the Whole Body.
Generally, patients came to a dental office with concerns about aesthetics or pain caused by a toothache and were treated symptomatically; however, over the last 30 years, times have changed. There is a growing awareness of the effects of the oral cavity on the overall health of the body. Gross fatigue, pain, mental depression or confusion and other non-specific complaints may be coming from the oral cavity. To investigate this possibility and/or rectify it by replacement with bio-compatible materials, our office will be glad to be of service. We are a fully equipped practice, conscious of protecting the patient from noxious substances such as mercury, and we offer an environmentally friendly office. Our Center has 100% wool carpets (hence without fire retardants), organic paint and cleanable surface floors (linoleum). During the removal of materials, we are committed to protecting you, the patient, as much as humanly possible.
Although we do offer same-day appointments when available, we encourage making appointments as far in advance as possible. Our friendly staff will be happy to answer any questions you have regarding scheduling and payment arrangements. | http://www.drlanderman.com/our-practice/ |
UT San Antonio gets CISA grant to develop high-value asset cybersecurity
The Cybersecurity and Infrastructure Security Agency on Monday announced a $1.2 million grant to a think tank at University of Texas at San Antonio that will launch a pilot program to help state and local governments improve the cyber defenses of their most critical systems.
Using the grant, the university’s Center for Infrastructure Assurance and Security, which studies cybersecurity and critical infrastructure, will develop methods by which state, local, tribal and territorial agencies can better identify their high-value assets, which CISA identifies as information or an IT system “so critical to an organization that the loss or corruption of this information or loss of access to the system would have serious impact to the organization’s ability to perform its mission or conduct business.”
A 2018 Department of Homeland Security directive focused on federal high-value assets urged greater malware defense, access controls, authentication protocols and network segmentation for U.S. government systems containing personally identifiable information, classified data or financial data, which are tempting targets for criminal and state-sponsored malicious actors.
The recently reported exploitation of Microsoft’s Exchange Server email program, allegedly by Chinese hackers, potentially affected tens of thousands of organizations across the United States, including many state and local governments. Many of those organizations are still evaluating their exposure to other recent hacks, like the Accellion data breach and compromise of network monitoring software from SolarWinds.
But more recently, CISA has been encouraging a High Value Asset Program for state and local governments, adapting the 2018 directive. In a recent “CISA Insights” document, the agency recommended that public sector organizations establish high-value asset governance programs of their own. Those programs, the document claims, should include evaluations of the interconnectivity of identified systems and prioritize them based on how essential they are to an agency’s mission.
In an interview Tuesday, Natalie Sjelin, associate director of training programs at the Center for Infrastructure Assurance and Security, told StateScoop the CISA grant will be used to make the 2018 DHS guidance fit the size and needs of state and local entities.
“What we’re doing is taking that guidance and making it more scalable and flexible so the state, tribal, territorial and local governments can actually look at it and utilize it,” she said. “We’re looking at all the way from rural most small town, and how does the guidance fit them and benefit them, to the most robust state.”
Sjelin said her group’s research will look to help state and local agencies identify assets that, if they “could impact health and safety, prevent injury or protect property,” would be the most impactful if they fell victim to a cyberattack.
“Cyberattacks are going to continue to come, and they’re getting more sophisticated all the time,” she said. “If it impacts those critical systems, that’s worse. The idea behind this is that these attacks continue to happen, and we want to get to a point that anything is preventable.”
Over two years, UTSA researchers will use the grant money to develop best practices for state and local agencies identifying, categorizing and prioritizing high-value assets.
The center’s team includes 22 full-time staff, Sjelin, plus a roster of part-time cybersecurity professionals who will be part of the high value asset project, including serving as subject matter experts and interviewing state and local officials. | https://statescoop.com/university-texas-san-antonio-cisa-grant-high-value-asset/ |
Beautiful Hand Carved Solid Sterling Silver Anchor PendantID#452
This is a beautiful hand carved solid sterling silver anchor pendant with rope wrapped around the anchor. This is a unique pendant that depicts strength, courage, stability and power. It can be worn by both men and women. This pendant comes without a chain. | http://sealofsolomon.net/product/Beautiful-Hand-Carved-Solid-Sterling-Silver-Anchor-Pendant/ |
Interview With Ren Li, Vice President Of HSBC Business School, Peking University: The Long-Term Development Of Gazelle Enterprises Is Inseparable From The Local Business Environment
The so-called "Gazelle enterprise" refers to the small and medium-sized enterprises that have successfully crossed the early stage of entrepreneurship and entered the stage of high growth and rapid expansion after "dying all their lives". They often have the common characteristics of "Gazelle" - small size, agility and high vigilance. These enterprises have not been established for a long time, but they can continue to grow at an unconventional or even doubling speed.
The more gazelle enterprises a region has, the stronger its innovation vitality and the faster its development. Gazelle enterprises also play an important role in rapidly expanding employment. Against the macro backdrop of the "six stabilities" and "six guarantees" policies, identifying and cultivating gazelle enterprises is especially valuable.
To explore the development path of gazelle enterprises and the macro environment behind them, a reporter from Southern Finance and Economics interviewed Ren Li, vice president of Peking University HSBC Business School and director of its Institute of Enterprise Development.
Innovation is the core driving force of gazelle enterprises
Southern Finance and Economics: How should we understand gazelle enterprises? How are they distributed regionally?
Ren Li: Gazelle enterprises are generally young companies that nevertheless sustain extraordinary, sometimes doubling, growth rates. To achieve that pace, they are usually driven by cutting-edge technology and concentrated in high-tech industries. They either possess original technical capabilities in a niche field that let them develop unique products, or they apply frontier technology skillfully to deliver the best solution for their customers.
According to available data, China has 21,538 gazelle enterprises spread across 32 provinces and municipalities and 86 industry categories, with more than 60% located in Beijing. The leading gazelle enterprises cluster in Beijing and Shanghai, which reflects those cities' strong R&D foundations and dense pools of high-tech talent.
In Guangzhou and Shenzhen, the leading gazelle enterprises are concentrated in financial technology, distance education, e-commerce and enterprise services, while artificial intelligence, aerospace and healthcare are relatively underrepresented. To some extent this reflects the traditional commercial character of the Guangdong, Hong Kong and Macao region. That said, the innovation capacity of Shenzhen companies in applied technology remains very prominent.
Southern Finance and Economics: Gazelle enterprises often "run fast," but running fast does not guarantee running far. What risks should a company watch for on the path from rapid growth to becoming a mature unicorn?
Ren Li: Internally, there are three main risks. First, founding teams are typically strong on technology but short on operational and management experience. Because gazelle enterprises generally rely not on platforms or business models but on unique advantages in high-tech fields, once the product takes shape and the market opens up quickly they often face operational challenges such as supply chain management and cash flow management. Second, gazelle enterprises attract the attention of the capital market, and some investors may compete for control of the company, so the founding team must pay close attention to retaining control. Third, after the company has achieved a degree of success, core members of the founding team may disagree about its future direction; at that point the lead founders need to make a firm judgment about the company's prospects and stick to it.
Externally, because gazelle enterprises emerge suddenly as fast-growing competitors, they easily attract the attention of established market leaders and may face competitive attacks or takeover offers. The founding team then has to make a business judgment: maintain its own growth logic, mount an effective defense and push on toward unicorn status and an IPO, or be integrated into another entity.
Southern Finance and Economics: What role does innovation play in the development of gazelle enterprises? How should they balance R&D investment, innovation, and survival and growth?
Ren Li: Innovation is the core driving force behind gazelle enterprises' development. Their R&D cycles are short and their technology and product iteration is very fast, so they must stay sensitive to shifts in market demand and respond in real time. Innovation capability comes from investment in R&D and in human capital, which directly determines a company's prospects. Enterprises should therefore stay focused and sustain high-intensity investment in R&D and talent.
In a closely linked market environment, gazelle enterprises can effectively find demand, polish key technologies and achieve rapid growth. Photo by Zheng Dikun
Pay attention to the employment significance of gazelle enterprises
Southern Finance and Economics: What role does the government play in the development of gazelle enterprises?
Ren Li: The government's role is mainly to provide policies and an environment. The policies include broad industrial guidance policies and innovation support policies; the environment covers both the business environment and the market environment. These areas require government investment to supply public goods and services, adequate intellectual property protection, and innovation funding for small, medium and micro innovative enterprises.
For small and micro technology-based enterprises in particular, inclusive, unconditional innovation funding should be provided to help them achieve technological breakthroughs quickly. In the past, government departments worried that enterprises would stop investing their own money in R&D, falling into the so-called moral hazard. In practice this depends on the overall local market environment. As long as the market atmosphere encourages innovation and the support offered to enterprises is open, fair and impartial, it creates positive incentives and lets government subsidies leverage additional corporate investment in innovation. Shenzhen's "innovation voucher," for example, is an effective form of inclusive funding: it helps small, medium and micro innovative enterprises purchase science and technology services, human resources services and business planning services from specialized intermediaries, and it has produced good results.
At the same time, the government should play its role well in industrial agglomeration and industrial parks. Many gazelle enterprises are small and medium-sized suppliers supporting the core large enterprises in an industrial chain, and stronger agglomeration effects help reduce their operating costs. Industrial parks should therefore develop smarter public services for gazelle enterprises and for small and medium-sized enterprises generally, reduce their burdens and do a good job of business incubation.
Southern Finance and Economics: Why has Shenzhen produced such a large number of gazelle enterprises? Is this related to its business environment?
Ren Li: Shenzhen's business environment has two defining characteristics. The first is the combination of an effective market and a proactive government. When we talk about the business environment, we are really talking about the government's willingness, speed and ability to respond to the needs of market players. Shenzhen has done well here and created a good market environment.
Second, Shenzhen has paid close attention to industrial policy guidance and has kept promoting industrial upgrading and effective agglomeration. It fosters strategic emerging industries, creates conditions for incubation and cultivates companies into leading enterprises with global influence. Those leaders exert a strong pull, drawing in supporting companies that provide high-quality services in niche technical fields. In such a closely linked market environment, gazelle enterprises can readily find demand, rapidly refine and polish their key technical capabilities, and ultimately achieve fast growth.
Southern Finance and Economics: How should we view the role of gazelle enterprises in employment?
Ren Li: When economists put forward the concept of "gazelle enterprises" in the 1980s, their first concern was actually these firms' role in expanding employment. From the perspective of a region's stable development, and especially given the "14th Five-Year Plan" goal of making expanded domestic demand a strategic anchor while balancing it with deeper supply-side structural reform, employment must be a guaranteed priority. In this sense gazelle enterprises can play a very important role, and local governments should pay extra attention to them and offer fiscal and tax incentives to those that meet employment-expansion goals.
How Social Anxiety Is Killing Your Cells and Why the Internet Can Help
Just over 19 percent of US adults experienced an anxiety disorder at some point last year (that figure jumps to nearly a quarter when looking at US women in particular) and over 12 percent of people suffer from social anxiety disorder at some point in their lives. So needless to say, quite a few present readers are about to get some bad news: it’s not just your retinue or lack thereof that’s feeling the consequences of sub-functional mental health. No matter how well you’ve co-opted your mental illness and colored it as an endearing eccentricity, if you’re still chronically distressed, impaired or both, then there’s a very high likelihood that nearly every cell in your body is losing the will to go on. What does it look like when a cell reacts to your mood or anxiety disorders? While exact mechanisms are unclear, there’s an observable drop in two enzymes key for keeping your cells beautiful and long-replicating: one is essentially an antioxidant and the other serves to persuade your telomeres (those caps on the ends of chromosomes that degrade with each cell division, beckoning the inexorable march towards natural cell death) to not degrade so quickly. In 2015, one of the largest studies relating cell aging to mental disorders found that for among 1,200 participants, those suffering from anxiety disorders had consistently shorter telomere lengths than their non-anxious counterparts.1 For those learning about telomeres for the fi...
Source: Psych Central - Category: Psychiatry Authors: Greg Hughes, PharmD Tags: Aging Anxiety Neuroscience Social Networking Technology Treatment Brain Social Anxiety telomeres Source Type: news
Related Links:
AbstractSelective mutism (SM) is an anxiety disorder in which a child fails to speak in some situations (e.g., school) despite the ability to speak in other situations (e.g., home). Some work has conceptualized SM as a variant of social anxiety disorder (SAD) characterized by higher levels of social anxiety. Here, we empirically tested this hypothesis to see whether there were differences in social anxiety (SA) between SM and SAD across behavioral, psychophysiological, self-, parent-, and teacher-report measures. Participants included 158 children (Mage = 8.76 years, SD = 3.23) who were cla...
Source: European Child and Adolescent Psychiatry - Category: Psychiatry Source Type: research
This study used path analysis to examine longitudinal therapy outcomes with 423 college students. Having a stronger therapeutic bond predicted decreased symptoms of depression, social anxiety, and academic distress. Findings support continued attention to developing a working relationship.
Source: Journal of College Counseling - Category: Universities & Medical Training Authors: Alexander K. Tatum, Elizabeth Vera Tags: Research Source Type: research
You have finally found a medication to treat your depression that your body tolerates well. It has taken your psychiatrist months to find the optimal dose (after two failed medication trials). The COVID-19 pandemic hit, but in spite of your new daily stressors, you seem to be doing relatively well. That is, until you hear that your antidepressant medication is now in short supply. What can you do? Mental health treatment during COVID-19 With the increased stress of the COVID-19 pandemic, prescriptions for medications to treat mental illnesses have increased more than 20% between February and March 2020. Sertraline, or Zolo...
Source: Harvard Health Blog - Category: Consumer Health News Authors: Stephanie Collier, MD, MPH Tags: Behavioral Health Mental Health Source Type: blogs
Characterization and Prediction of Anxiety in Adolescents with Autism Spectrum Disorder: A Longitudinal Study.
Abstract Anxiety is one of the most common comorbidities in youth with autism spectrum disorder (ASD). The current study's aims were: To examine the frequency of elevated anxiety symptoms in adolescents diagnosed with ASD in toddlerhood; To explore the impact of comorbid anxiety in adolescents on clinical presentation; To evaluate variables in toddlerhood that associate with anxiety symptom severity in adolescence. The study included 61 adolescents (mean age = 13:8y) diagnosed with ASD in toddlerhood (T1). Participants underwent a comprehensive assessment of cognitive ability, adaptive skills and aut...
Source: Journal of Abnormal Child Psychology - Category: Psychiatry & Psychology Authors: Ben-Itzchak E, Koller J, Zachor DA Tags: J Abnorm Child Psychol Source Type: research
Condition: Social Anxiety Intervention: Behavioral: Telehealth CBT Sponsor: Stanford University Not yet recruiting
Source: ClinicalTrials.gov - Category: Research Source Type: clinical trials
This study aimed to examine the psychometric properties of the SAQ-A30 in Iran.
Source: Health and Quality of Life Outcomes - Category: International Medicine & Public Health Authors: Mahdieh Mosarezaee, Azadeh Tavoli and Ali Montazeri Tags: Research Source Type: research
IJERPH, Vol. 17, Pages 4561: Is Lockdown Bad for Social Anxiety in COVID-19 Regions?: A National Study in The SOR Perspective
In conclusion, under the SOR framework, the lockdown measures had a buffer effect on social anxiety in pandemic regions, with the mediating role of psychological distancing.
Source: International Journal of Environmental Research and Public Health - Category: Environmental Health Authors: Zheng Miao Lim Li Nie Zhang Tags: Article Source Type: research
We often hear of self-worth as necessary for forming a healthy sense of self-esteem and a solid self-identity. Self-worth is at the foundation for the concepts of self-acceptance and self-love. Without feeling a solid sense of worth or value it is difficult, if not impossible to feel worthy of love or acceptance from others. The implications for a lack of self-worth are many. Those with limited self-worth are more vulnerable to experiencing toxic relationships and self-defeating behaviors which can include negative self-talk, avoidance of intimacy, comparing themselves to others or sabotaging relationships because of feeli...
Source: World of Psychology - Category: Psychiatry & Psychology Authors: Dr. Annie Tanasugarn Tags: Relationships Self-Esteem chronic shame Intimacy Positive Self Talk self-worth Toxic Relationships Source Type: blogs
Time course of attentional bias in social anxiety: the effects of spatial frequencies and individual threats - Dong X, Gao C, Guo C, Li W, Cui L.
Hypervigilance and attentional bias to threat faces with low-spatial-frequency (LSF) information have been found in individuals with social anxiety. The vigilance-avoidance hypothesis posits that socially anxious individuals exhibit initial vigilance and l...
Source: SafetyLit - Category: International Medicine & Public Health Tags: Risk Factor Prevalence, Injury Occurrence Source Type: news
Sociability and extinction of conditioned social fear is affected in neuropeptide S receptor-deficient mice.
Abstract Being cautious of unfamiliar conspecifics is adaptive because sick or aggressive conspecifics may jeopardize survival and well-being. However, prolonged or excessive caution, i.e. fear related to social situations, is maladaptive and may result in social anxiety disorder. Some anxiety disorders in humans are associated with polymorphisms of the neuropeptide S receptor (NPSR) gene. In line with this finding, animal studies showed an important role of NPS and NPSR in anxiety and fear. The present study investigated the role of NPSR deficiency in social behavior under non-aversive and aversive conditions. Fo... | https://medworm.com/757514874/how-social-anxiety-is-killing-your-cells-and-why-the-internet-can-help/ |
Sleek Web Design Tutorial
It looks like the sleek, Web 2.0 style website design was the most popular request on last week’s poll. This is not surprising, since these types of designs are very popular right now, or at the very least their core components are: simplicity, typography, whitespace, and close attention to proximity.
Although I’m going to be making this PSD design in Photoshop, I understand that not everyone has (a.k.a. can afford!) Photoshop or the other Adobe software. Keep in mind that while the specifics may change, you can probably adapt this tutorial to work in a variety of graphics programs.
So let’s get started.
Research
It is important to understand exactly what a Web 2.0 style web design is, and what the core components of a design like this are. Research is a big part of any design process. I found a page that outlines the main features of a sleek/Web 2.0 website design and also covers the smaller details of this style.
Scan through it quickly, and keep the main points in mind as we go through this PSD tutorial. I’d also bookmark that page if you plan to create your own Web 2.0 style design, because it’s a great guide to follow.
Basic Parts
So let’s start thinking of what we need in our design. This is a good initial step to go through so you don’t leave anything out when creating the final draft, and are then forced to squish a forgotten piece in. In our design we’re going to need space for the following components:
- Logo
- Primary Navigation (About, Portfolio, Services, Contact)
- Secondary Navigation (Sidebar menu, perhaps archives or categories)
- Content
- Header
- Footer
Generic Rules of Design
One of the next steps is to create a grid in Photoshop to place all of our components in. Before we do that, though, we need to understand some of the generic rules of good design: alignment, proximity, repetition, whitespace, and the Rule of Thirds, all of which we’ll lean on throughout this layout.
The Grid
Let’s get started making the grid. I’m not going to use it in this tutorial, but for future reference, the 960 Grid System is a great tool for creating web design grids and for following the design principles listed above.
1. Start by opening up Photoshop and creating a new 1100px by 1100px document.
2. Next we’re going to use the rulers to create guidelines, or the grid. If your rulers are not already open, press ‘Ctrl R’ and you should see rulers on the top and left-hand side of the workspace. Make sure your ruler is in pixels by right-clicking the ruler and selecting ‘Pixels’.
3. I’ve filled in the background of my image with a light gray (#EEEEEE). This just helps to see the bright grids a bit better.
4. A guide can be created by selecting the Move tool (usually at the top of the toolbar) and dragging a bright blue guide off of the numbered ruler on the side of the workspace. Create vertical guides at 100, 400, 700, and 1000. Similarly, create horizontal guides at 100, 300, and 800.
Notice we followed the Rule of Thirds with the three even sections in the middle, left room for margins on the left and right of the document, and planned a basic outline for two headers (logo area and main header) and a footer. There is also, of course, the main content area, still split into thirds.
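If you would rather calculate the guide positions than eyeball them, the arithmetic behind this grid is simple: subtract the two side margins from the canvas width and split what remains into equal columns. Here is a small Python sketch of that calculation; the helper name and the default canvas size, margin, and column count are just the values used in this tutorial, so swap in your own.

```python
# Minimal sketch: compute vertical guide positions for a "margins + equal columns" grid.
def vertical_guides(canvas_width=1100, margin=100, columns=3):
    """Return x-positions (in px) for vertical guides, including both margin lines."""
    content_width = canvas_width - 2 * margin   # usable width between the margins
    column_width = content_width / columns      # width of each equal column
    return [round(margin + i * column_width) for i in range(columns + 1)]

if __name__ == "__main__":
    print(vertical_guides())            # [100, 400, 700, 1000], matching step 4
    print(vertical_guides(960, 80, 3))  # the same idea adapted to another canvas
```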
Logo Design
One of the biggest elements in a sleek, Web 2.0 style design is the logo. It should be a major focal point within the design while still holding Web 2.0 properties. A proper logo should use a clean font and smooth edges, and may additionally feature gradients, shine, and bright color.
To help with the logo design process, there are plenty of logo design tutorials and articles available.
Now I’ll create my own logo for Webitect. You can copy the logo, or use your own text with the same techniques. Ideally, logos are created with Illustrator because they are vector based, but since this is just a mockup, it can be created in Photoshop.
1. I’d like to stick the logo in our grid on the top row, second column over. This area is 300px by 100px, so we’re going to create a new Photoshop document with these dimensions.
2. Select the text tool by pressing ‘T’ on your keyboard. Now go through a list of your available fonts and find an appropriate font for a ‘Web 2.0 Style’. It should be sleek, perhaps a bit funky, and modern. I’m using a downloaded font called “Brie Light”.
3. Now we need to resize our font to fit our layout. In terms of alignment, remember that the edges of this document will be the edges of the section on the grid we’re working with. In order to align other elements with the grid, we’ll need to make this logo stretch out to the edge of this document through the use of font size and kerning.
4. Next, I changed the color of the text to a bright pink (#f30078). Brighter colors usually look better with Web 2.0 designs, as they provide contrast to the excess whitespace and simplicity. I’ve also added a few more Web 2.0 style features: a white gradient (glossy look), a slight 3D effect, and other details.
You can play around with effects like this and go through Web 2.0 tutorials to create your own logo. I listed a few logo tutorials above that match this style, although there are plenty more out there. Here is what I ended up with:
Backgrounds
Simple backgrounds are key in sleek web design. A bright use of color is what’s really needed to make things interesting. For this design, we want to keep the main wrapper white, and add a small bit of style to the main background.
1. Go to StripGenerator.com to make a simple striped pattern for our background. I decided to go with bright blue to stick with the bright color theme. These are the settings I’ve chosen below:
2. Download the pattern and open up the resulting image in a new Photoshop document. Go to Edit > Define Pattern. A dialog box will come up asking you to name your pattern. You can rename it appropriately or just leave the default.
3. Using the Paint Bucket tool, select Pattern from the dropdown at the top, and then choose the pattern you just created. (Your new pattern should be near the bottom of all the other patterns.) Fill in the document with this new striped pattern. (If you’d rather script the stripe tile yourself instead of using StripGenerator, see the Python sketch after these steps.)
4. Now we’re going to add some white background areas for the content, footer, and headers in the layout. We’re using the Rounded Rectangle Tool with a 15px radius, with the fill color (foreground color) set to white. I’m going to add a few more guides to create some margins between what are going to be my white areas; I’ll add about 10px of space between them.
5. Now we can place our logo in our design. I’ve downsized our logo by 10px in width in order to compensate for our added margin on the left.
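As promised above, here is a rough Python alternative to StripGenerator for making the stripe tile yourself. It uses the Pillow imaging library (pip install Pillow); the function name, colors, and stripe width are illustrative assumptions rather than the exact settings from the tutorial, so adjust them to taste.

```python
from PIL import Image, ImageDraw

def stripe_tile(stripe_width=10, colors=("#dff2ff", "#bfe6ff"), height=10):
    """Build one seamless tile of vertical stripes; Photoshop repeats it as a pattern."""
    tile = Image.new("RGB", (stripe_width * len(colors), height))
    draw = ImageDraw.Draw(tile)
    for i, color in enumerate(colors):
        x0 = i * stripe_width
        # Each stripe is a solid rectangle; side-by-side stripes tile seamlessly.
        draw.rectangle([x0, 0, x0 + stripe_width - 1, height - 1], fill=color)
    return tile

if __name__ == "__main__":
    stripe_tile().save("stripes.png")  # then import via Edit > Define Pattern, as in step 2
```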
Primary Navigation
Next we’re going to add the primary navigation to the same top header as the logo. For this tutorial, we’re going to include ‘About’, ‘Portfolio’, ‘Services’, and ‘Contact.’
1. Select the Text tool again by pressing ‘T’ on your keyboard. You can put them all in one text layer, or in separate ones. I’m going to put them in separate layers so I can move them around separately.
2. We’re going to put them in rounded tabs in our top header, to the right of the logo. Choose an appropriate font (Web 2.0 style–modern & sleek) and appropriate font size. I’m using 24pt Century Gothic. Place the navigation text to the right after adding some more 10px margins.
Note: I’ve also lowered the logo on the left to line up with the bottom margin of the primary navigation. It’s small changes like this throughout the design process that lead to a good overall sense of alignment in a design.
3. Next we’re going to add rounded tabs behind each piece of primary navigation using the Rounded Rectangle Tool. Choose this tool, and then create a new layer under all of the text layers. This is so the tabs will be under the text.
4. Feel free to add 10px margin guides around each navigation piece. In the newly created layer, add a rounded rectangle (with a color of #ffc960) behind each navigation piece, respecting the 10px margin. Note that the rectangles will extend further down than what’s actually needed, to account for the bottom corners. We’ll trim them off next.
5. Now, with the Rectangular Marquee tool, cut off the bottoms of the rounded rectangles up to the bottom edge of the first header to create tabs.
6. Add any Web 2.0 effects you like to the tabs, but be sure to keep it simple. I’ve just added a transparent to white gradient, which gives the tabs a bit more depth.
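For anyone who wants to prototype this outside Photoshop, below is a rough Pillow sketch (Python, requires Pillow 8.2+ for rounded_rectangle) of the same tab treatment: a rounded rectangle in #ffc960, squared off along the bottom as in step 5, with a white-to-transparent gradient laid over it for depth. The function name, tab size, and corner radius are assumptions for illustration only.

```python
from PIL import Image, ImageChops, ImageDraw

def nav_tab(width=140, height=60, radius=15, fill="#ffc960"):
    tab = Image.new("RGBA", (width, height), (0, 0, 0, 0))
    draw = ImageDraw.Draw(tab)
    # Draw the rounded rectangle taller than the canvas so the bottom corners
    # fall off the edge: the same "trim the bottom" trick as step 5.
    draw.rounded_rectangle([0, 0, width - 1, height + radius], radius=radius, fill=fill)

    # Build a white-to-transparent vertical gradient for the glossy effect.
    gloss = Image.new("RGBA", (width, height), (0, 0, 0, 0))
    gloss_draw = ImageDraw.Draw(gloss)
    for y in range(height):
        alpha = int(120 * (1 - y / height))  # strongest at the top, fading downward
        gloss_draw.line([(0, y), (width - 1, y)], fill=(255, 255, 255, alpha))

    # Keep the gloss inside the tab shape by masking it with the tab's alpha channel.
    gloss.putalpha(ImageChops.multiply(gloss.getchannel("A"), tab.getchannel("A")))
    return Image.alpha_composite(tab, gloss)

if __name__ == "__main__":
    nav_tab().save("tab.png")
```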
Distinct Header
Next it’s time to create the second header. One of the primary uses of a header is to add visual appeal, and connect the different parts of the whole layout visually. I see we’re starting to lose our Rule of Thirds because the primary navigation did not fit neatly into our preset margins. This is ok, because we can now redefine them in our main header.
1. Add three large rounded rectangles into the even thirds marked by our original guides, with 10px margins for repetition. For another bright color, we’ll go with a light green: #cdffbf.
2. I’ve chosen three random vector images from 123Vectors to act as ‘portfolio’ pieces in our design. You can use portfolio pieces or random images you create or find.
The images going across the entire design add a sense of unity between the various elements already present, and among the elements we’re going to add.
Typography
Next we’re going to add some dummy text and headers. This is all basic CSS stuff, but it is oftentimes beneficial to add the dummy text to see how different fonts and sizes will look.
1. Choose a generic web font for the dummy text. I’m using 11pt Verdana, with a 24pt line height and 50 kerning. You can choose your own font specifications, but a sans-serif font is best for Web 2.0 style web design. Make sure the spacing and proximity reflect a sleek look, although the text should still be legible and flow.
2. Next, choose header and sidebar fonts, as well as extra details like the header backgrounds I’ve chosen below. Note that there is repetition with the rounded corners and color in the header backgrounds I’ve created.
3. Add any Web 2.0 elements (gradient, gloss, shadow, etc.) to smaller elements in the typography. I’ve added the same gradient that I’ve been using throughout the design in my header backgrounds.
Footer
And finally, the footer. We’ll imitate what we did in the header by complementing the Rule of Thirds.
1. This time, we’ll put the rounded rectangles in dark gray (#333333) and give them a similar gradient as in the rest of the design.
2. Add some dummy text to the three areas of the footer. Because this is what the visitor will see after viewing your content, there are some ideal choices for what should go in these three sections: a contact form, links to other content on your site, or recent comments/testimonials. Basically, any content that will help the visitor stay on your website or lead them to a desired action.
Conclusion
What’s great about this design is that it’s simple, trendy, and really outlines the most important components: Logo, primary navigation, and portfolio pieces.
Of course, a portfolio is obviously not the only direction you could go with this. It would also work great as a blog or other type of generic website because of the amount of content it could hold.
Usually a sleek, Web 2.0 style design is a lot simpler than this, but this design still works because all the elements are simple individually, creating an overall minimal and sleek feel.