This course provides student teachers with opportunities to develop the professional knowledge, skills and dispositions required to work in a range of early childhood services and settings.
- Level: 5
- Credits: 20
- EFTS: 0.1667
- Teaching weeks: 18 weeks
- Workload: Approximately 12 hours per week
- Prerequisites: Check entry and other requirements under the qualification you are studying
The first practicum introduces student teachers to pedagogical skills, bi-cultural practices, and professional behaviours to ensure inclusive and respectful relationships with young children, their families and whānau. There is a focus on providing safe, welcoming and interesting environments for young children through the development of pedagogical skills such as observation, and a sound knowledge of early childhood regulatory requirements.
This course includes a compulsory 5-week full-time practicum placement in an early childhood centre and a 2-day workshop.
To submit your practicum placement details please fill out this form.
ECE Practicum Organisation Form
On successfully completing Year 1 Practicum in a licenced early childhood centre, you will be able to:
Practice
- Demonstrate a range of appropriate pedagogical practices, professional behaviours and qualities during teaching practicum
- Demonstrate a developing ability to build inclusive, respectful, ethical, and professional relationships with children and their families and whānau
- Use a range of observation and assessment methods to gather information about children to support their learning and development
- Contribute to the maintenance of a safe physical environment including predicting potential hazards and acting to minimise risk of harm
- Outline a range of teaching strategies and resources to support children’s learning and development in te reo me ngā tikanga Māori
Pedagogy
- Discuss own pedagogical practice when working with children and their families, with reference to theory, current research, early childhood curriculum and teaching experiences in early childhood settings
- Demonstrate an understanding of the roles and responsibilities of the early childhood teacher, including culturally relevant approaches to working with children from a range of ethnic, socio-economic and cultural backgrounds.
- Demonstrate knowledge and understanding of teaching responsibilities under the statutory and regulatory requirements for early childhood services and the role of government agencies.
Assessment
This course is 100% internally assessed.
Computer and internet requirements
To complete this course you will need access to a laptop or desktop computer, reliable broadband internet connection and a data plan able to support online learning such as streaming of videos (including YouTube), downloading content, and writing and submitting online assessments. If you are unsure if your current computer or internet access allows you to complete your online learning with us, please contact us before applying to enrol.
Learn more about our online learning and study tools.
How to enrol
Before enrolling in this course you need to:
- choose the qualification you will study the course under
- check the order that courses in the qualification should be studied in the Qualification Structure table. This is in the Choose courses and apply tab on the qualification page. | https://www.openpolytechnic.ac.nz/qualifications-and-courses/65105-professional-practice-and-pedagogy/ |
Analysis 61: Unit Equivalent Sales Price Needed to Justify Marginal Capacity Addition
EXHIBITS:
HOW TO INTERPRET THE ANALYSIS: This exhibit demonstrates the price, through a business cycle, that would be necessary to justify new capacity addition in the industry. There are three types of capacity addition shown. The first is the most expensive, a new greenfield facility. The second is a less expensive form of capacity addition, where the company would add an additional production line at an already existing facility. Finally, there is the lowest-cost, easiest-to-justify capacity addition, a conversion of a manufacturing line producing another product. The cost of this line does not reflect the loss in contribution margin from the product that will no longer be produced on the line.
There are two sources of cost that must be covered to justify each type of capacity addition. The first is the operating cost, the sum of the costs of people and purchases. In a new facility, the company will need $647 per year to pay for people and purchases. The second is the cost of capital: the company will need a further $546 per year to amortize its capital investment in the new facility at its cost of capital. Adding the costs of people, purchases and capital together yields a price of $1220 per unit. If the company concludes that, through the business cycle, its average price will be $1220 per unit, it can safely invest in a new facility with the assurance that it will cover all its operating costs (i.e., its people and purchases) and earn enough to pay its depreciation and all capital charges at its cost of capital.
At the other end of the scale, perhaps the company is living through a low-price market. In that case, the company need satisfy itself only that through the business cycle, it will see a price per unit sold of $760 in order to justify conversion of a line producing another product to the current product. With a $760 average price through the business cycle, the company will cover its full people and purchases costs of $700 and still have $60 left over to pay for depreciation and all capital charges on its marginal investment.
PURPOSE: This analysis helps the company predict the direction and likely limits of future prices. It also helps the company plan its own capacity additions. Most new capacity will come on-stream only when the company and its competition have incentive to build it. This analysis projects the price at which the company should expect to see new capacity coming into the market. It also projects the type of capacity growth the company should monitor and evaluate further.
APPROACH: The company would determine for itself, as well as for its major competition: 1) the price level that will bring this capacity on stream and pay all of the new capacity's cash operating and capital carrying costs and 2) the amount of capacity, stated as a percentage of annual demand, that is available to be added with each source.
The operations and finance functions estimate the cost of additional capacity, as well as the amount of capacity that could become available, in a defined period of time, using each of the potential capacity-adding methods, starting with the least costly and moving to the most costly method.
The cost of each form of new capacity is expressed in the equivalent of a price for a unit of product. In the example shown, each new capacity option has its cost stated in terms of the average market price per unit of product the industry would have to realize, through a business cycle, in order to pay for the new capacity's capital and operating costs.
The company would compare the rate of this potential capacity addition to unit demand growth in the marketplace to estimate the direction of future prices. If predicted demand grows faster than the rate at which inexpensive new sources of capacity (i.e., those justified at prices below today's) can come on stream, prices should rise to bring on more expensive forms of new capacity. If, on the other hand, sufficient capacity can come on stream, at or below current prices, to meet projected levels of future demand, industry prices are likely to stay flat or decline in real terms.
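The unit-equivalent price test described above can be sketched as a small calculation. The numbers below follow the conversion-line example in the text ($700 per unit of people-and-purchases cost plus a $60 capital charge yields a $760 justification price); any other thresholds would come from the company's own exhibit.

```python
# Sketch of the unit-equivalent price test for a marginal capacity addition.
# Inputs are per-unit, through-the-cycle figures from the text's conversion
# example; the test itself is generic.

def unit_equivalent_price(operating_cost_per_unit, capital_charge_per_unit):
    """Average through-the-cycle price needed to justify a capacity addition:
    it must cover people-and-purchases costs plus the amortized capital charge."""
    return operating_cost_per_unit + capital_charge_per_unit

def justified(expected_cycle_price, operating_cost_per_unit, capital_charge_per_unit):
    """True if the expected average price covers all costs of the new capacity."""
    return expected_cycle_price >= unit_equivalent_price(
        operating_cost_per_unit, capital_charge_per_unit)

# Conversion of an existing line: $700 operating + $60 capital = $760 per unit
print(unit_equivalent_price(700, 60))   # 760
print(justified(760, 700, 60))          # True: the conversion pays for itself
print(justified(759, 700, 60))          # False: price falls short of full cost
```

The same function applies to the line-addition and greenfield options; only the two cost inputs change.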
The company would use this analysis to project future prices and to plan its own capacity additions.
Recommended Reading
For a greater overall perspective on this subject, we recommend the following related items:
Analyses:
Symptoms and Implications: Symptoms developing in the market that would suggest the need for this analysis.
Perspectives: Conclusions we have reached as a result of our long-term study and observations. | https://www.strategystreet.com/tools/analyses/pricing-3/analysis_61_unit_equivalent_sales_price_needed_to_justify_marginal_capacity_addition/ |
New method for analyzing microplastics in soil
Microplastic pollution has attracted attention worldwide. One of the challenges when studying this type of pollution in soil and sediments is separating the microplastic particles from the remainder of the sample. A new study recently published online in the peer-reviewed journal Science of the Total Environment presents a possible solution.
The first author of the study is Chengtao Li from College of Environmental Science and Engineering, Shaanxi University of Science & Technology, Xi'an, and chemistry professor and CBA leader group member Rolf D. Vogt is one of the co-authors.
The authors have created a method for analyzing soil samples for microplastic pollution that can be executed at any lab, without the need for advanced instruments. It uses commonly available devices to extract non-degradable and biodegradable microplastics from soil samples in a NaBr solution, based on density flotation. The device has a combined circulation and recovery system for the salt solution, which improves its environmental friendliness.
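The density-flotation principle behind the method can be illustrated with a simple check: a particle floats, and can be skimmed off, when its density is below the salt solution's. The densities below are approximate literature values, not figures from this study, and the assumed NaBr solution density is illustrative.

```python
# Illustrative density-flotation check (not the study's own code).
# All densities are approximate, commonly cited values used as assumptions.

NABR_SOLUTION_DENSITY = 1.55  # g/cm^3, near-saturated NaBr solution (assumed)

PLASTIC_DENSITIES = {  # approximate polymer densities, g/cm^3
    "PE": 0.95, "PP": 0.90, "PS": 1.05, "PET": 1.38, "PVC": 1.40, "PLA": 1.25,
}

def floats(particle_density, solution_density=NABR_SOLUTION_DENSITY):
    """A particle floats when it is less dense than the flotation medium."""
    return particle_density < solution_density

for polymer, rho in sorted(PLASTIC_DENSITIES.items()):
    print(polymer, "floats" if floats(rho) else "sinks")
```

Denser salt solutions recover a wider range of polymers, which is one reason NaBr is preferred over plain NaCl brine in some protocols.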
The paper can be read here. | https://www.mn.uio.no/cba/english/news-and-events/news/new-method-for-analyzing-microplastics-in-soil.html |
Having a set of .NET Framework installations that works correctly is important for all Windows users, and developers are no exception. Through the course of development, it is easy to accidentally overwrite needed files. In an effort to minimize troubleshooting, Microsoft provides the .NET Framework Repair Tool. This tool can scan a Windows system for errors with any of the .NET Framework packages that are supposed to be installed.
The program may be executed via the command line or through a graphical wizard. Command line switches are available which allow the tool to be run unattended and to enable proper .NET packages to be located on a network share (versus requiring Internet access). There is also an option to disable the default behavior of sending a diagnostic log file to Microsoft after the tool has been executed.
The typical operation of the utility takes the following steps:
- Scan the machine for known errors with .NET Framework installations and, if found, provide a list to the user for review
- Take any of the following corrective measures:
- Ensure the Windows Installer service is properly operational
- Reset discretionary access control lists on system folders
- Verify and correct update registration
- If the actions in step 2 have not been successful in correcting the issue, provide the user with an option to perform a full repair of the installed .NET Frameworks.
- Optionally send a CAB file containing system logs for transmission to Microsoft.
In a trial run on this author's machine, the program's operation took approximately 20 minutes. When completed, a CAB file was left in the %TEMP% directory which contained all of the log files from my user directory's AppData. The CAB also included registry dump files for HKCR (HKEY_CLASSES_ROOT) and HKLM (HKEY_LOCAL_MACHINE).
Full details on the program’s operation are available in a blog post by The .NET Fundamentals Team and in its corresponding KB article. The latest version, V1.3, includes support for all .NET Frameworks through 4.6.1. It supports operation on Windows operating systems through Windows 7 Service Pack 1 and Windows Server 2008 R2 Service Pack 1.
| https://www.infoq.com/news/2016/05/net-framework-repair-tool/ |
[X] QUARTERLY REPORT PURSUANT TO SECTION 13 OR 15(d) OF THE SECURITIES EXCHANGE ACT OF 1934
For the quarterly period ended March 31, 2015
OR
[ ] TRANSITION REPORT PURSUANT TO SECTION 13 OR 15(d) OF THE SECURITIES EXCHANGE ACT OF 1934
For the transition period from to
Commission File Number 0-28000
PRGX Global, Inc.
(Exact name of registrant as specified in its charter)
Georgia (State or other jurisdiction of incorporation or organization)
58-2213805 (I.R.S. Employer Identification No.)
600 Galleria Parkway, Suite 100, Atlanta, Georgia 30339-5986
(Address of principal executive offices) (Zip Code)
Registrant’s telephone number, including area code: (770) 779-3900
Indicate by check mark whether the registrant (1) has filed all reports required to be filed by Section 13 or 15(d) of the Securities Exchange Act of 1934 during the preceding 12 months (or for such shorter period that the registrant was required to file such reports), and (2) has been subject to such filing requirements for the past 90 days. Yes [X] No [ ]
Indicate by check mark whether the registrant has submitted electronically and posted on its corporate Web site, if any, every Interactive Data File required to be submitted and posted pursuant to Rule 405 of Regulation S-T during the preceding 12 months (or for such shorter period that the registrant was required to submit and post such files). Yes [X] No [ ]
Indicate by check mark whether the registrant is a large accelerated filer, an accelerated filer, a non-accelerated filer, or a smaller reporting company. See definitions of “large accelerated filer,” “accelerated filer,” and “smaller reporting company” in Rule 12b-2 of the Exchange Act. (Check One):
Large accelerated filer [ ]   Accelerated filer [X]   Non-accelerated filer [ ] (Do not check if a smaller reporting company)   Smaller reporting company [ ]
Indicate by check mark whether the registrant is a shell company (as defined in Rule 12b-2 of the Act). Yes [ ] No [X]
Common shares of the registrant outstanding at April 30, 2015 were 25,659,719.
The accompanying Condensed Consolidated Financial Statements (Unaudited) of PRGX Global, Inc. and its wholly owned subsidiaries have been prepared in accordance with accounting principles generally accepted in the United States of America for interim financial information and with the instructions for Form 10-Q and Article 10 of Regulation S-X. Accordingly, they do not include all of the information and footnotes required by accounting principles generally accepted in the United States of America for complete financial statements. In the opinion of management, all adjustments (consisting of normal recurring accruals) considered necessary for a fair presentation have been included. Operating results for the three-month period ended March 31, 2015 are not necessarily indicative of the results that may be expected for the year ending December 31, 2015.
Except as otherwise indicated or unless the context otherwise requires, “PRGX,” “we,” “us,” “our” and the “Company” refer to PRGX Global, Inc. and its subsidiaries. For further information, refer to the Consolidated Financial Statements and Footnotes thereto included in the Company’s Form 10-K for the year ended December 31, 2014.
Beginning with the second quarter of 2014, we reclassified certain information technology expenses within our Recovery Audit Services — Americas segment from Selling, General and Administrative expenses to Cost of Revenue to better reflect the nature of the work performed. We have revised the presentation of our Selling, General and Administrative expenses and Cost of Revenue for all relevant prior periods.
New Accounting Standards
A summary of the new accounting standards issued by the Financial Accounting Standards Board (“FASB”) and included in the Accounting Standards Codification (“ASC”) that apply to PRGX is set forth below:
FASB ASC Update No. 2015-03. In April 2015, the FASB issued Accounting Standards Update No. 2015-03, Interest—Imputation of Interest (Subtopic 835-30) (“ASU 2015-03”). ASU 2015-03 simplifies presentation of debt issuance costs by requiring that debt issuance costs related to a recognized debt liability be presented in the balance sheet as a direct deduction from the carrying amount of that debt liability, consistent with debt discounts. ASU 2015-03 is effective for annual periods beginning after December 15, 2015 with early adoption permitted. The guidance also requires retrospective application to all prior periods presented. We are currently evaluating the impact of ASU 2015-03 on our consolidated financial statements.
FASB ASC Update No. 2014-15. In August 2014, the FASB issued Accounting Standards Update No. 2014-15, Presentation of Financial Statements—Going Concern (Subtopic 205-40) (“ASU 2014-15”). ASU 2014-15 provides guidance on management's responsibility to evaluate whether there is substantial doubt about an entity’s ability to continue as a going concern and related disclosure requirements. ASU 2014-15 is effective for annual periods beginning after December 15, 2016 with early adoption permitted. We do not expect the adoption of ASU 2014-15 to have a material impact on our consolidated financial statements.
FASB ASC Update No. 2014-09. In May 2014, the FASB issued Accounting Standards Update No. 2014-09, Revenue from Contracts with Customers (Topic 606) (“ASU 2014-09”). ASU 2014-09 supersedes the revenue recognition requirements in Revenue Recognition (Topic 605), and requires an entity to recognize revenue when it transfers promised goods or services to customers in an amount that reflects the consideration to which the transferring entity expects to be entitled in exchange for those goods or services. ASU 2014-09 allows for adoption using either of two methods: retrospectively to each prior reporting period presented, or retrospectively with the cumulative effect of application recognized at the date of initial adoption. It is effective for annual periods beginning after December 15, 2016. Early adoption is not permitted. We are currently evaluating the impact of ASU 2014-09 on our consolidated financial statements.
The following tables set forth the computations of basic and diluted earnings (loss) per common share for the three months ended March 31, 2015 and 2014 (in thousands, except per share data):
                                                    Three Months Ended March 31,
Basic earnings (loss) per common share:                 2015            2014
Numerator:
  Net loss                                          $  (2,958)      $  (3,674)
Denominator:
  Weighted-average common shares outstanding            26,394          30,159
Basic earnings (loss) per common share              $   (0.11)      $   (0.12)
                                                    Three Months Ended March 31,
Diluted earnings (loss) per common share:               2015            2014
Numerator:
  Net loss                                          $  (2,958)      $  (3,674)
Denominator:
  Incremental shares from stock-based compensation
    plans                                                    —               —
  Denominator for diluted earnings (loss) per
    common share                                        26,394          30,159
Diluted earnings (loss) per common share            $   (0.11)      $   (0.12)
For the three months ended March 31, 2015, weighted-average common shares outstanding excludes from the computation of diluted earnings (loss) per common share antidilutive shares underlying options that totaled 3.3 million shares and antidilutive Performance Units issuable under the Company's 2006 Management Incentive Plan that totaled less than 0.1 million shares. For the three months ended March 31, 2014, weighted-average common shares outstanding excludes from the computation of diluted earnings (loss) per common share antidilutive shares underlying options that totaled 2.4 million shares and antidilutive Performance Units related to the Company's 2006 Management Incentive Plan that totaled less than 0.1 million shares. As a result of the net loss for the three months ended March 31, 2015 and March 31, 2014, all shares underlying stock options and Performance Units were considered antidilutive. The number of common shares we used in the basic and diluted earnings (loss) per common share computations include nonvested restricted shares of 0.5 million and 0.7 million for the three months ended March 31, 2015 and 2014, respectively, and nonvested restricted share units that we consider to be participating securities of 1.4 million and 0.2 million for the three months ended March 31, 2015 and 2014, respectively.
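The loss-per-share arithmetic above can be checked directly (amounts in thousands). Because both periods show a net loss, all potentially dilutive shares are antidilutive and excluded, so diluted loss per share equals basic loss per share.

```python
# Sketch of the basic/diluted loss-per-share computation from the tables
# above (net loss and share counts in thousands).

def loss_per_share(net_loss, weighted_avg_shares, incremental_shares=0):
    """EPS numerator over denominator, rounded to cents.
    In a loss period incremental_shares is 0 (antidilutive)."""
    return round(net_loss / (weighted_avg_shares + incremental_shares), 2)

print(loss_per_share(-2958, 26394))  # -0.11  (Q1 2015, basic and diluted)
print(loss_per_share(-3674, 30159))  # -0.12  (Q1 2014, basic and diluted)
```

Adding the 3.3 million antidilutive option shares to the denominator would shrink the per-share loss, which is why they must be excluded under the antidilution rule.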
We repurchased 1,129,932 shares of our common stock during the three months ended March 31, 2015 for $5.5 million, and 6,700 shares of our common stock during the three months ended March 31, 2014 for less than $0.1 million.
Pursuant to exercises of outstanding stock options, we issued 12,863 shares of our common stock having a value of less than $0.1 million in the three months ended March 31, 2015 and 563,514 shares of our common stock having a value of $2.2 million in the three months ended March 31, 2014. Stock option exercises during the three-month period ended March 31, 2014 primarily consisted of exercises by a former executive officer of the Company.
In partial satisfaction of a business acquisition obligation, we issued 187,620 shares of our common stock having a value of $1.3 million in the three months ended March 31, 2014. There were no shares issued to satisfy business acquisition obligations in the three months ended March 31, 2015.
The Company currently has two stock-based compensation plans under which awards are outstanding: (1) the 2006 Management Incentive Plan (“2006 MIP”) and (2) the 2008 Equity Incentive Plan (“2008 EIP”) (collectively, the “Plans”). We describe the Plans in the Company’s Annual Report on Form 10–K for the fiscal year ended December 31, 2014. For all periods presented herein, awards outside the Plans are referred to as inducement awards.
2008 EIP Awards and Inducement Awards
Stock options granted under the 2008 EIP generally have a term of seven years and vest in equal annual increments over the vesting period, which typically is three years for employees and one year for directors. There were no stock option grants during the three months ended March 31, 2014. The following table summarizes stock option grants during the three months ended March 31, 2015:
Grantee Type               # of Options Granted   Vesting Period   Weighted Average Exercise Price   Weighted Average Grant Date Fair Value
2015
Director                   2,849                  1 year or less   $4.07                             $0.97
Director                   8,546                  3 years          $4.07                             $1.46
Employee inducements (1)   75,000                 3 years          $5.81                             $1.45
(1)
The Company granted non-qualified stock options outside its existing stock-based compensation plans in the first quarter of 2015 to two employees in connection with the employees joining the Company.
Nonvested stock awards, including both restricted stock and restricted stock units, granted under the 2008 EIP generally are nontransferable until vesting and the holders are entitled to receive dividends with respect to the nonvested shares. Prior to vesting, the grantees of restricted stock are entitled to vote the shares, but the grantees of restricted stock units are not entitled to vote the shares. Generally, nonvested stock awards with time-based vesting criteria vest in equal annual increments over the vesting period, which typically is three years for employees and one year for directors. There were no nonvested stock awards (restricted stock and restricted stock units) granted during the three months ended March 31, 2014. The following table summarizes nonvested stock awards granted during the three months ended March 31, 2015:
Grantee Type               # of Shares Granted   Vesting Period   Weighted Average Grant Date Fair Value
Employee group (1)         1,325,000             2 years          $4.00
Employee inducements (2)   10,000                3 years          $5.29
(1)
The Company granted nonvested performance-based stock awards (restricted stock units) in the first quarter of 2015 to eight executive officers.
(2)
The Company granted nonvested stock awards (restricted stock) outside its existing stock-based compensation plans in the first quarter of 2015 to two employees in connection with the employees joining the Company.
2006 MIP Performance Units
On June 19, 2012, seven executive officers of the Company were granted 154,264 Performance Units under the 2006 MIP, comprising all of the then remaining available awards under the 2006 MIP. The awards had an aggregate grant date fair value of $1.2 million and vest ratably over three years. Upon vesting, the Performance Units will be settled by the issuance of Company common stock equal to 60% of the number of Performance Units being settled and the payment of cash in an amount equal to 40% of the fair market value of that number of shares of common stock equal to the number of Performance Units being settled. During the three months ended March 31, 2015, an aggregate of 6,200 Performance Units were settled, which resulted in the issuance of 3,720 shares of common stock and cash payments of less than $0.1 million. There were no Performance Units settled during the three months ended March 31, 2014. Since the June 19, 2012 grant date to March 31, 2015, an aggregate of 127,410 Performance Units were settled by two current executive officers and four former executive officers, and 16,524 Performance Units were forfeited by one former executive officer and currently are available to be granted.
Such settlements resulted in the issuance of 73,158 shares of common stock and cash payments totaling $0.3 million. As of March 31, 2015, a total of 10,330 Performance Units were outstanding, none of which were vested.
Performance-Based Restricted Stock Units
On March 30, 2015, eight executive officers of the Company were granted 1,325,000 performance-based restricted stock units (“PBUs”) under the 2008 EIP. Upon vesting, the PBUs will be settled by the issuance of Company common stock equal to 50% of the number of PBUs being settled and the payment of cash in an amount equal to 50% of the fair market value of that number of shares of common stock equal to the number of PBUs being settled. The PBUs vest and become payable based on the cumulative adjusted EBITDA that the Company (excluding the Healthcare Claims Recovery Audit business) achieves for the two-year performance period ending December 31, 2016. At the threshold performance level, 35% of the PBUs will become vested and payable; at the target performance level, 100% of the PBUs will become vested and payable; and at the maximum performance level, 200% of the PBUs will become vested and payable. If performance falls between the stated performance levels, the percentage of PBUs that shall become vested and payable will be based on straight line interpolation between such stated performance levels (although the PBUs may not become vested and payable for more than 200% of the PBUs and no PBUs shall become vested and payable if performance does not equal or exceed the threshold performance level).
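The PBU payout schedule above (35% at threshold, 100% at target, 200% at maximum, straight-line in between, nothing below threshold, capped at 200%) can be sketched as a function. The actual cumulative adjusted EBITDA levels are not disclosed in this excerpt, so the function takes them as parameters and the sample levels below are hypothetical.

```python
# Sketch of the PBU vesting schedule described above. The threshold/target/
# maximum EBITDA levels passed in the examples are illustrative assumptions.

def pbu_payout_pct(ebitda, threshold, target, maximum):
    """Percentage of PBUs that vest, given cumulative adjusted EBITDA."""
    if ebitda < threshold:
        return 0.0                                   # below threshold: nothing vests
    if ebitda <= target:                             # threshold..target: 35% -> 100%
        frac = (ebitda - threshold) / (target - threshold)
        return 35.0 + frac * (100.0 - 35.0)
    if ebitda <= maximum:                            # target..maximum: 100% -> 200%
        frac = (ebitda - target) / (maximum - target)
        return 100.0 + frac * (200.0 - 100.0)
    return 200.0                                     # capped at 200%

# Hypothetical performance levels (threshold=80, target=100, maximum=120):
print(pbu_payout_pct(80, 80, 100, 120))    # 35.0
print(pbu_payout_pct(90, 80, 100, 120))    # 67.5  (halfway threshold->target)
print(pbu_payout_pct(110, 80, 100, 120))   # 150.0 (halfway target->maximum)
print(pbu_payout_pct(130, 80, 100, 120))   # 200.0 (capped)
```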
Selling, general and administrative expenses for the three months ended March 31, 2015 and 2014 include $1.1 million and $1.0 million, respectively, related to stock-based compensation charges. At March 31, 2015, there was $14.9 million of unrecognized stock-based compensation expense related to stock options, restricted stock awards, restricted stock unit awards, and Performance Unit awards which we expect to recognize over a weighted-average period of 1.8 years. The unrecognized stock-based compensation expense related to restricted stock unit awards with performance vesting criteria is based on our estimate of both the number of shares of the Company's common stock that will ultimately be issued and cash payments that will be made when the restricted stock units are settled.
Note D – Operating Segments and Related Information
We conduct our operations through the following four reportable segments:
Additionally, Corporate Support includes the unallocated portion of corporate selling, general and administrative expenses not specifically attributable to the four reportable segments.
We evaluate the performance of our reportable segments based upon revenue and measures of profit or loss we refer to as EBITDA and Adjusted EBITDA. We define Adjusted EBITDA as earnings from continuing operations before interest and taxes (“EBIT”), adjusted for depreciation and amortization (“EBITDA”), and then further adjusted for unusual and other significant items that management views as distorting the operating results of the various segments from period to period. Such adjustments include restructuring charges, stock-based compensation, bargain purchase gains, acquisition-related charges and benefits (acquisition transaction costs, acquisition obligations classified as compensation, and fair value adjustments to acquisition-related contingent consideration), tangible and intangible asset impairment charges, certain litigation costs and litigation settlements, certain severance charges and foreign currency transaction gains and losses on short-term intercompany balances viewed by management as individually or collectively significant. We do not have any inter-segment revenue.
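The chain of segment measures defined above (EBIT, then EBITDA, then Adjusted EBITDA) reduces to two small calculations. The figures used below are hypothetical; the Company's actual adjustment items are the ones listed in the text.

```python
# Sketch of the EBIT -> EBITDA -> Adjusted EBITDA definitions above.
# All input amounts are hypothetical illustrations, in $ millions.

def ebitda(ebit, depreciation_and_amortization):
    """EBITDA: earnings before interest and taxes, plus D&A."""
    return ebit + depreciation_and_amortization

def adjusted_ebitda(ebitda_value, adjustments):
    """Adjusted EBITDA: EBITDA with items management views as distorting
    results (restructuring, stock-based compensation, impairments, etc.)
    added back (or deducted, if negative)."""
    return ebitda_value + sum(adjustments)

e = ebitda(ebit=5.0, depreciation_and_amortization=2.0)
print(e)                                 # 7.0
print(adjusted_ebitda(e, [1.0, 0.5]))    # 8.5, e.g. stock comp + restructuring
```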
Cash and cash equivalents include all cash balances and highly liquid investments with an initial maturity of three months or less from date of purchase. We place our temporary cash investments with high credit quality financial institutions. At times, certain investments may be in excess of the Federal Deposit Insurance Corporation (“FDIC”) insurance limit or otherwise may not be covered by FDIC insurance. Some of our cash and cash equivalents are held at banks in jurisdictions outside the U.S. that have restrictions on transferring such assets outside of these countries on a temporary or permanent basis. Such restricted net assets are not significant in comparison to our consolidated net assets.
Our cash and cash equivalents included short-term investments of approximately $6.6 million as of March 31, 2015 and $12.2 million as of December 31, 2014, of which approximately $2.9 million and $2.5 million, respectively, were held at banks outside of the United States, primarily in Brazil and Canada.
Note F – Debt
On January 19, 2010, we entered into a four-year revolving credit and term loan agreement (the “2010 Credit Agreement”) with SunTrust Bank (“SunTrust”). Subsequent modifications of the 2010 Credit Agreement were entered into with SunTrust. Most recently, on December 23, 2014, we entered into an amended and restated revolving credit agreement (the “Credit Facility”) with SunTrust. The Credit Facility, together with provisions of the 2010 Credit Agreement where applicable, is guaranteed by the Company and all of its material domestic subsidiaries and secured by substantially all of the assets of the Company.
The amount available for borrowing under the Credit Facility is $20.0 million, and as of March 31, 2015 we had no outstanding borrowings. The Credit Facility provides for a fixed applicable margin of 1.75% plus a specified index rate based on one-month LIBOR; the interest rate that would have applied at March 31, 2015, had any borrowings been outstanding, was approximately 1.92%. We also must pay a commitment fee of 0.25% per annum, payable quarterly, on the unused portion of the Credit Facility.
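The facility pricing above can be reproduced with a short sketch. The implied index rate is backed out from the stated ~1.92% all-in rate and the 1.75% margin, so treat it as an illustration rather than a disclosed figure.

```python
# Sketch of the Credit Facility pricing described above. The implied LIBOR
# value is inferred from the stated figures, not disclosed directly.

MARGIN = 0.0175             # fixed applicable margin (1.75%)
COMMITMENT_FEE = 0.0025     # 0.25% per annum on the unused portion
FACILITY_SIZE = 20_000_000  # $20.0 million available for borrowing

def all_in_rate(index_rate, margin=MARGIN):
    """Borrowing rate: one-month-LIBOR-based index rate plus fixed margin."""
    return index_rate + margin

def quarterly_commitment_fee(unused_amount, annual_fee=COMMITMENT_FEE):
    """Commitment fee accrues annually but is payable quarterly."""
    return unused_amount * annual_fee / 4

implied_libor = 0.0192 - MARGIN                    # ~0.17%, inferred
print(round(all_in_rate(implied_libor), 4))        # 0.0192 (~1.92%)
print(quarterly_commitment_fee(FACILITY_SIZE))     # 12500.0 per quarter, fully undrawn
```

With no borrowings outstanding at March 31, 2015, the entire $20.0 million is "unused," so the commitment fee accrues on the full facility.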
The Credit Facility includes customary affirmative, negative, and financial covenants binding on the Company, including delivery of financial statements and other reports, maintenance of existence, and transactions with affiliates. The negative covenants limit the ability of the Company, among other things, to incur debt, incur liens, make investments, sell assets or declare or pay dividends on its capital stock. The financial covenants included in the Credit Facility, among other things, limit the amount of capital expenditures the Company can make, set forth maximum leverage and net funded debt ratios for the Company and a minimum fixed charge coverage ratio, and also require the Company to maintain minimum consolidated earnings before interest, taxes, depreciation and amortization. In addition, the Credit Facility includes customary events of default. The Company was in compliance with the covenants in its Credit Facility as of March 31, 2015.
Note G – Fair Value of Financial Instruments
We state cash equivalents at cost, which approximates fair market value. The carrying values for receivables from clients, unbilled services, accounts payable, deferred revenue and other accrued liabilities reasonably approximate fair market value due to the nature of the financial instrument and the short term maturity of these items.
We had no debt outstanding as of March 31, 2015 and December 31, 2014. We consider the factors used in determining the fair value of debt to be Level 3 inputs (significant unobservable inputs).
We had no business acquisition obligations as of March 31, 2015 and December 31, 2014. We determine the estimated fair values of business acquisition obligations based on our projections of future revenue and profits or other factors used in the calculation of the ultimate payment to be made. The discount rate that we use to value the liability is based on specific business risk, cost of capital, and other factors. We consider these factors to be Level 3 inputs (significant unobservable inputs).
Note H – Commitments and Contingencies
Legal Proceedings
We are party to a variety of legal proceedings arising in the normal course of business. While the results of these proceedings cannot be predicted with certainty, management believes that the final outcome of these proceedings will not have a material adverse effect on our financial position, results of operations or cash flows.
Note I – Income Taxes
Reported income tax expense in each period primarily results from taxes on the income of foreign subsidiaries. The effective tax rates generally differ from the expected tax rate due primarily to the Company’s deferred tax asset valuation allowance on the domestic earnings and taxes on income of foreign subsidiaries.
Significant judgment is required in evaluating our uncertain tax positions and determining our provision for income taxes. In addition, we are subject to the continuous examination of our income tax returns by the Internal Revenue Service in the U.S. and other tax authorities. We regularly assess the likelihood of adverse outcomes resulting from these examinations to determine the adequacy of our provision for income taxes.
We apply a “more-likely-than-not” recognition threshold and measurement attribute for the financial statement recognition and measurement of a tax position taken or expected to be taken in a tax return. We refer to GAAP for guidance on derecognition, classification, interest and penalties, accounting in interim periods, disclosure, and transition. In accordance with FASB ASC 740, our policy for recording interest and penalties associated with tax positions is to record such items as a component of income before income taxes. A number of years may elapse before a particular tax position is audited and finally resolved or when a tax assessment is raised. The number of years subject to tax assessments also varies by tax jurisdiction.
Note J – Subsequent Events
On April 28, 2015, the Company announced its decision to exit its Healthcare Claims Recovery Audit Services business due to the continued challenges in the Medicare RAC business and our lack of a diversified client base in that segment.
Item 2. Management’s Discussion and Analysis of Financial Condition and Results of Operations
Overview
We conduct our operations through four reportable segments: Recovery Audit Services - Americas, Recovery Audit Services - Europe/Asia-Pacific, Adjacent Services and Healthcare Claims Recovery Audit Services. The Recovery Audit Services - Americas segment represents recovery audit services (other than Healthcare Claims Recovery Audit Services) we provide in the U.S., Canada and Latin America. The Recovery Audit Services - Europe/Asia-Pacific segment represents recovery audit services (other than Healthcare Claims Recovery Audit Services) we provide in Europe, Asia and the Pacific region. The Adjacent Services segment represents data transformation, data analytics and associated advisory services. The Healthcare Claims Recovery Audit Services segment represents recovery audit services that involve the identification of overpayments and underpayments made by healthcare payers to healthcare providers such as hospitals and physicians’ practices and includes services we provide as a subcontractor to three of the four prime contractors in the Medicare Recovery Audit Contractor program (the “Medicare RAC program”) of the Centers for Medicare and Medicaid Services (“CMS”). We include the unallocated portion of corporate selling, general and administrative expenses not specifically attributable to the four reportable segments in Corporate Support.
Recovery auditing is a business service focused on finding overpayments created by errors in payment transactions, such as missed or inaccurate discounts, allowances and rebates, vendor pricing errors, erroneous coding and duplicate payments. Generally, we earn our recovery audit revenue by identifying overpayments made by our clients, assisting our clients in recovering the overpayments from their vendors, and collecting a specified percentage of the recoveries from our clients as our fee. The fee percentage we earn is based on specific contracts with our clients that generally also specify: (a) time periods covered by the audit; (b) the nature and extent of services we are to provide; and (c) the client’s responsibilities to assist and cooperate with us. Clients generally recover claims by either taking credits against outstanding payables or future purchases from the relevant vendors, or receiving refund checks directly from those vendors. The manner in which a claim is recovered by a client is often dictated by industry practice. In addition, many clients establish client-specific procedural guidelines that we must satisfy prior to submitting claims for client approval. Our recovery audit business also includes contract compliance services which focus on auditing supplier billings against large and complex services, construction and licensing contracts. Such services include verification of the accuracy of third party reporting, appropriateness of allocations and other charges in cost or revenue sharing types of arrangements, adherence to contract covenants and other risk mitigation requirements and numerous other reviews and procedures to assist our clients with proper monitoring and enforcement of the obligations of their contractors. For some services we provide, such as certain of our services in our Adjacent Services segment, we earn our compensation in the form of a fixed fee, a fee per hour, or a fee per other unit of service.
We earn the vast majority of our recovery audit revenue from clients in the retail industry due to many factors, including the high volume of transactions and the complicated pricing and allowance programs typical in this industry. Changes in consumer spending associated with economic fluctuations generally impact our recovery audit revenue to a lesser degree than they affect individual retailers due to several factors, including:
• Diverse client base – our clients include a diverse mix of discounters, grocery, pharmacy, department and other stores that tend to be impacted to varying degrees by general economic fluctuations, and even in opposite directions from each other depending on their position in the market and their market segment;
• Motivation – when our clients experience a downturn, they frequently are more motivated to use our services to recover prior overpayments to make up for relatively weaker financial performance in their own business operations;
• Nature of claims – the relationship between the dollar amount of recovery audit claims identified and client purchases is non-linear. Claim volumes are generally impacted by purchase volumes, but a number of other factors may have an even more significant impact on claim volumes, including new items being purchased, changes in discount, rebate, marketing allowance and similar programs offered by vendors and changes in a client’s or a vendor’s information processing systems; and
• Timing – the client purchase data on which we perform our recovery audit services is historical data that typically reflects transactions between our clients and their vendors that took place 3 to 15 months prior to the data being provided to us for audit. As a result, we generally experience a delayed impact from economic changes that varies by client and the impact may be positive or negative depending on the individual clients’ circumstances.
While the net impact of the economic environment on our recovery audit revenue is difficult to determine or predict, we believe that for the foreseeable future, our revenue will remain at a level that will not have a significant adverse impact on our liquidity, and we have taken steps to mitigate the adverse impact of an economic downturn on our revenue and overall financial health. These steps include devoting substantial efforts to develop an improved service delivery model to enable us to more cost-effectively serve our clients. Further, we continue to pursue our ongoing growth strategy to expand our business beyond our core recovery audit services to retailers by growing the portion of our business that provides recovery audit services to enterprises other than retailers, such as our offerings to commercial clients; contract compliance service offerings; expansion into new industry verticals, such as oil and gas; and growth within our Adjacent Services segment.
Our Adjacent Services business targets client functional and process areas where we have established expertise, enabling us to provide services to finance and procurement executives to improve working capital, optimize purchasing leverage in vendor pricing negotiations, improve insight into product margin and true cost of goods for resale, identify and manage risks associated with vendor compliance, improve quality of vendor master data and improve visibility and diagnostics of direct and indirect spend. Our Adjacent Services also include the CIPS Sustainability Index, an Internet-based supplier sustainability assessment offered in the UK through our strategic alliance with the Chartered Institute of Purchasing & Supply (“CIPS”). As our clients’ data volumes and complexity levels continue to grow, we are using our deep data management experience to develop new actionable insight solutions, as well as to develop custom analytics and data transformation services. Taken together, our deep understanding of our clients’ procure-to-pay data and our technology-based solutions provide multiple routes to help our clients achieve greater profitability.
During 2013, auditing under our current Medicare RAC program subcontracts became subject to significant additional restrictions imposed by CMS on all Medicare recovery auditors, including deadlines for requesting medical records from providers and submitting claims and the types of claims that may be audited. These restrictions began to limit our Medicare RAC program revenue in the third quarter of 2013 and had a significant negative impact on our fourth quarter 2013 and annual 2014 Medicare RAC program revenue. For a number of reasons, including the significant uncertainties and financial risks inherent in the Medicare RAC program, we withdrew from the Medicare RAC program rebid process in February 2014. On April 28, 2015, the Company announced its decision to exit its Healthcare Claims Recovery Audit Services business due to the continued challenges in the Medicare RAC business and our lack of a diversified client base in that segment. The Company will likely incur employee termination and other exit costs as a result of this strategic decision.
Non-GAAP Financial Measures
EBIT, EBITDA and Adjusted EBITDA are all “non-GAAP financial measures” presented as supplemental measures of the Company’s performance. They are not presented in accordance with accounting principles generally accepted in the United States, or GAAP. The Company believes these measures provide additional meaningful information in evaluating its performance over time, and that the rating agencies and a number of lenders use EBITDA and similar measures for similar purposes. In addition, a measure similar to Adjusted EBITDA is used in the restrictive covenants contained in the Company’s secured credit facility. However, EBIT, EBITDA and Adjusted EBITDA have limitations as analytical tools, and you should not consider them in isolation, or as substitutes for analysis of the Company’s results as reported under GAAP. In addition, in evaluating EBIT, EBITDA and Adjusted EBITDA, you should be aware that, as described above, the adjustments may vary from period to period and in the future the Company will incur expenses such as those used in calculating these measures. The Company’s presentation of these measures should not be construed as an inference that future results will be unaffected by unusual or nonrecurring items. We include a reconciliation of net loss to each of EBIT, EBITDA and Adjusted EBITDA and a calculation of Adjusted EBITDA by segment below in “–Adjusted EBITDA”.
Three Months Ended March 31, 2015 Compared to the Corresponding Period of the Prior Year
Revenue. Revenue was as follows (in thousands):
                                                Three Months Ended March 31,
                                                   2015         2014
Recovery Audit Services – Americas             $   22,417   $   24,798
Recovery Audit Services – Europe/Asia-Pacific       9,305        9,702
Adjacent Services                                   1,263        2,283
Healthcare Claims Recovery Audit Services             147        1,118
Total                                          $   33,132   $   37,901
Total revenue decreased for the three months ended March 31, 2015 by $4.8 million, or 12.6%, compared to the same period in 2014.
Below is a discussion of our revenue for our four reportable segments.
Recovery Audit Services – Americas revenue decreased by $2.4 million, or 9.6%, for the first quarter of 2015 compared to the first quarter of 2014. Changes in the average value of the U.S. dollar relative to foreign currencies during the period also affected our reported revenue. On a constant dollar basis, adjusted for changes in foreign exchange (“FX”) rates, revenue for the first quarter of 2015 decreased by 6.7%, compared to the decrease of 9.6% as reported.
In addition to the impact of the change in FX rates, the year over year net decrease in our Recovery Audit Services – Americas revenue in the three months ended March 31, 2015 was due to a number of factors. Revenue at our existing clients declined 11.9% in the three-month period primarily due to lower contingency fee rates at several clients and a change in position from primary auditor to secondary auditor at a large client. Partially offsetting these declines, revenue increased 2.3% in the three-month period due to new clients.
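The constant-dollar comparisons used throughout this discussion restate prior-period revenue at current-period FX rates so that the year-over-year change excludes currency movement. A minimal sketch of the mechanics, using invented per-currency figures (not the Company's actual data):

```python
# Hypothetical sketch of a constant-dollar comparison: restate prior-period
# revenue at current-period FX rates so the year-over-year change excludes
# currency movement. All figures below are invented for illustration.

def revenue_changes(current_usd, prior_local, current_fx, prior_fx):
    """Return (as-reported change, constant-dollar change) vs. the prior period.

    prior_local: prior-period revenue by currency, in local-currency units
    current_fx / prior_fx: USD per local-currency unit in each period
    """
    prior_as_reported = sum(v * prior_fx[c] for c, v in prior_local.items())
    prior_constant = sum(v * current_fx[c] for c, v in prior_local.items())
    return (current_usd / prior_as_reported - 1.0,
            current_usd / prior_constant - 1.0)

# Invented figures (millions of local currency; USD per local-currency unit).
prior = {"CAD": 10.0, "BRL": 20.0}
fx_2014 = {"CAD": 0.91, "BRL": 0.43}
fx_2015 = {"CAD": 0.80, "BRL": 0.31}
reported, constant = revenue_changes(15.0, prior, fx_2015, fx_2014)
# A stronger U.S. dollar makes the as-reported change worse than the
# constant-dollar change, as in the segment discussion above.
```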
Recovery Audit Services – Europe/Asia-Pacific revenue decreased by $0.4 million, or 4.1%, for the three months ended March 31, 2015 compared to the same period in 2014. The changes in the value of the U.S. dollar relative to foreign currencies in Europe, Asia and the Pacific region negatively impacted reported revenue for the first quarter compared to the same period in 2014. On a constant dollar basis, adjusted for changes in FX rates, revenue increased by 10.0% during the first three months of 2015 compared to a decrease of 4.1% as reported. The 10.0% net increase on a constant dollar basis for the three-month period included net increases in revenue of 8.6% attributable to existing clients, 0.9% attributable to cyclical clients and 0.5% attributable to new clients.
Adjacent Services revenue decreased by $1.0 million, or 44.7%, for the three months ended March 31, 2015 compared to the same period in 2014 primarily due to the sale of our Chicago, Illinois-based consulting practice on October 1, 2014.
Healthcare Claims Recovery Audit Services revenue decreased by $1.0 million, or 86.9%, for the three months ended March 31, 2015 compared to the same period in 2014. The decrease in revenue in the three-month period is primarily due to restrictions imposed on all Medicare RAC program contractors, which negatively impacted our revenue. As disclosed in our Form 10-K for the year ended December 31, 2013, we withdrew from the Medicare RAC program rebid process in February 2014. On April 28, 2015, the Company announced its decision to exit its Healthcare Claims Recovery Audit Services business.
Cost of Revenue (“COR”). COR consists principally of commissions and other forms of variable compensation we pay to our auditors based primarily on the level of overpayment recoveries and/or profit margins derived therefrom, fixed auditor salaries, compensation paid to various types of hourly support staff and salaries for operational and client service managers for our recovery audit services and our Adjacent Services businesses. COR also includes other direct and indirect costs incurred by these personnel, including office rent, travel and entertainment, telephone, utilities, maintenance and supplies and clerical assistance. A significant portion of the components comprising COR is variable and will increase or decrease with increases or decreases in revenue.
COR was as follows (in thousands):
                                                Three Months Ended March 31,
                                                   2015         2014
Recovery Audit Services – Americas             $   14,971   $   16,000
Recovery Audit Services – Europe/Asia-Pacific       6,437        7,417
Adjacent Services                                   1,759        3,035
Healthcare Claims Recovery Audit Services             611        2,380
Total                                          $   23,778   $   28,832
COR as a percentage of revenue for Recovery Audit Services – Americas was 66.8% and 64.5% for the three months ended March 31, 2015 and 2014, respectively. The increase in COR as a percentage of revenue for the three months ended March 31, 2015 compared to the same period in 2014 is primarily due to the fixed portion of our costs not decreasing in line with the lower revenue, and increased compensation expense associated with personnel added to deliver contract compliance services.
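The COR-as-a-percentage-of-revenue figures quoted in this discussion are simple ratios of the segment amounts in the tables above; a minimal check using the Recovery Audit Services – Americas amounts (in thousands):

```python
# COR as a percentage of revenue for Recovery Audit Services – Americas,
# using the segment amounts (in thousands) from the tables above.
revenue = {"2015": 22417, "2014": 24798}
cor = {"2015": 14971, "2014": 16000}

cor_pct = {year: cor[year] / revenue[year] for year in revenue}
# Rounds to the 66.8% (2015) and 64.5% (2014) cited in the discussion.
```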
COR for Recovery Audit Services – Europe/Asia-Pacific decreased $1.0 million for the three months ended March 31, 2015 compared to the same period in 2014 due primarily to the increase in the value of the U.S. dollar relative to foreign currencies in Europe, Asia and the Pacific region between the 2015 and 2014 periods. On a constant dollar basis, adjusted for changes in foreign currency rates, COR for the first quarter of 2015 did not significantly change compared to the first quarter of 2014.
COR as a percentage of revenue for Recovery Audit Services – Europe/Asia-Pacific is generally higher than COR as a percentage of revenue for Recovery Audit Services – Americas primarily due to differences in service delivery models, scale and geographic fragmentation. The Recovery Audit Services – Europe/Asia-Pacific segment generally serves fewer clients in each geographic market and on average generates lower revenue per client than those served by the Company’s Recovery Audit Services – Americas segment.
COR as a percentage of revenue for Adjacent Services was 139.3% and 132.9% for the three months ended March 31, 2015 and 2014, respectively. COR declined 42.0% in the three months ended March 31, 2015 compared to the same period in 2014 due primarily to the sale of the Chicago, Illinois-based consulting practice on October 1, 2014 as well as reductions in compensation costs for the remaining business as we rationalized and refined our service offerings.
Healthcare Claims Recovery Audit Services COR relates primarily to costs associated with the Medicare RAC program subcontracts. COR decreased $1.8 million for the three-month period ended March 31, 2015 compared to the same period in 2014 due primarily to personnel reductions and reduced direct costs associated with the Medicare RAC program such as costs for medical records and other costs associated with the generation of claims. These reductions were not sufficient to enable us to achieve revenue in excess of COR for our services under the Medicare RAC program, resulting in COR exceeding revenue in the 2015 period.
Selling, General and Administrative Expenses (“SG&A”). SG&A expenses for all segments other than Corporate Support include the expenses of sales and marketing activities, information technology services and allocated corporate data center costs, human resources, legal, accounting, administration, foreign currency transaction gains and losses other than those relating to short-term intercompany balances and gains and losses on asset disposals. Corporate Support SG&A represents the unallocated portion of SG&A expenses which are not specifically attributable to our segment activities and include the expenses of information technology services, the corporate data center, human resources, legal, accounting, treasury, administration and stock-based compensation charges.
SG&A expenses were as follows (in thousands):
                                                Three Months Ended March 31,
                                                   2015         2014
Recovery Audit Services – Americas             $    1,521   $    2,848
Recovery Audit Services – Europe/Asia-Pacific       1,566        1,804
Adjacent Services                                     200          566
Healthcare Claims Recovery Audit Services             225          624
Subtotal for reportable segments                    3,512        5,842
Corporate Support                                   4,657        4,134
Total                                          $    8,169   $    9,976
Recovery Audit Services – Americas SG&A decreased by $1.3 million, or 46.6%, for the three months ended March 31, 2015 from the comparable period in 2014 due primarily to lower compensation expense that resulted from our transformation efforts and reductions in bad debt expense.
Recovery Audit Services – Europe/Asia-Pacific SG&A decreased $0.2 million, or 13.2%, for the three months ended March 31, 2015 compared to the same period in 2014. This decrease is primarily due to an increase in the value of the U.S. dollar relative to foreign currencies in Europe, Asia and the Pacific region between the 2015 and 2014 periods. On a constant dollar basis, adjusted for changes in foreign currency rates, SG&A for the first quarter of 2015 decreased by less than $0.1 million compared to the first quarter of 2014.
Adjacent Services SG&A decreased $0.4 million, or 64.7%, in the three months ended March 31, 2015 compared to the same period in 2014. This decrease is primarily due to lower compensation, travel and office-related expenses that declined as a result of personnel reductions that were made as we rationalized our service offerings in this segment.
Healthcare Claims Recovery Audit Services SG&A decreased $0.4 million, or 63.9%, in the three months ended March 31, 2015 compared to the same period in 2014. This decrease is primarily due to reductions in overhead charges and occupancy costs that resulted from staff reductions in this segment necessitated by the decline in Healthcare Claims Recovery Audit Services revenue.
Corporate Support SG&A increased $0.5 million, or 12.7%, for the three months ended March 31, 2015 compared to the same period in 2014. This increase is primarily due to increases in equity compensation expenses and costs associated with our information technology infrastructure initiatives.
Depreciation of property and equipment. Depreciation of property and equipment was as follows (in thousands):
                                                Three Months Ended March 31,
                                                   2015         2014
Recovery Audit Services – Americas             $      969   $    1,256
Recovery Audit Services – Europe/Asia-Pacific         153          146
Adjacent Services                                     157          160
Healthcare Claims Recovery Audit Services              13          120
Total                                          $    1,292   $    1,682
The overall decrease in depreciation relates primarily to the mix and timing of our capital expenditures and the associated useful lives for such purchases.
Amortization of intangible assets. Amortization of intangible assets was as follows (in thousands):
                                                Three Months Ended March 31,
                                                   2015         2014
Recovery Audit Services – Americas             $      441   $      500
Recovery Audit Services – Europe/Asia-Pacific         273          307
Adjacent Services                                      32           96
Total                                          $      746   $      903
The decrease in amortization expense for the three months ended March 31, 2015 compared to the same period in 2014 is primarily due to the end of the finite lives of certain intangible assets, no new intangible assets since 2013, and the disposition of intangible assets as a result of the sale of our Chicago, Illinois-based consulting practice on October 1, 2014. We have not recorded any amortization of intangible assets in our Healthcare Claims Recovery Audit Services segment because there have been no business acquisitions in this segment. Unless we complete an acquisition in any of our reportable segments in 2015, we anticipate that amortization expense will continue to decrease in 2015 compared to 2014.
Foreign Currency Transaction (Gains) Losses on Short-Term Intercompany Balances. Foreign currency transaction gains and losses on short-term intercompany balances result from fluctuations in the exchange rates for foreign currencies and the U.S. dollar and the impact of these fluctuations, primarily on balances payable by our foreign subsidiaries to their U.S. parent. Substantial changes from period to period in foreign currency exchange rates may significantly impact the amount of such gains and losses. The strengthening of the U.S. dollar relative to other currencies results in recorded losses on short-term intercompany balances receivable from our foreign subsidiaries while the relative weakening of the U.S. dollar results in recorded gains. In the three months ended March 31, 2015 and 2014, we recorded foreign currency transaction losses of $1.7 million and less than $0.1 million, respectively, on short-term intercompany balances.
Net Interest Expense (Income). Net interest income was less than $0.1 million for the three months ended March 31, 2015 and net interest expense was $0.1 million for the three months ended March 31, 2014. Net interest income in the three months ended March 31, 2015 is primarily due to interest from a time deposit that exceeded total interest expense for the period.
Income Tax Expense. Our income tax expense amounts as reported in the accompanying Condensed Consolidated Financial Statements (Unaudited) do not reflect amounts that normally would be expected due to several factors. The most significant of these factors is that for U.S. tax reporting purposes we have net operating loss carryforwards and other tax attributes which created deferred tax assets on our balance sheet. We reduce our deferred tax assets by a valuation allowance if it is more likely than not that some portion or all of a deferred tax asset will not be realized. Generally, these factors result in our recording no net income tax expense or benefit relating to our operations in the United States. Reported income tax expense for the three months ended March 31, 2015 and 2014 primarily results from taxes on the income of certain of our foreign subsidiaries.
Adjusted EBITDA. We evaluate the performance of our reportable segments based upon revenue and measures of profit or loss we refer to as EBITDA and Adjusted EBITDA. We define Adjusted EBITDA as earnings from continuing operations before interest and taxes (“EBIT”), adjusted for depreciation and amortization (“EBITDA”), and then further adjusted for unusual and other significant items that management views as distorting the operating results of the various segments from period to period. Such adjustments include restructuring charges, stock-based compensation, bargain purchase gains and other similar items.
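The layered definition above can be written as a simple reconciliation. The sketch below uses hypothetical placeholder inputs (in thousands); it illustrates the structure of the measure, not the Company's actual reconciliation schedule:

```python
# Sketch of the EBIT -> EBITDA -> Adjusted EBITDA layering defined above.
# All input figures are hypothetical placeholders (in thousands).

def ebit(net_loss, interest_expense, income_taxes):
    """Earnings before interest and taxes: add back interest and taxes."""
    return net_loss + interest_expense + income_taxes

def ebitda(ebit_value, depreciation, amortization):
    """EBIT further adjusted for depreciation and amortization."""
    return ebit_value + depreciation + amortization

def adjusted_ebitda(ebitda_value, adjustments):
    """EBITDA plus items management views as distorting operating results,
    e.g. restructuring charges, stock-based compensation, bargain purchase gains."""
    return ebitda_value + sum(adjustments.values())

e = ebit(net_loss=-2958, interest_expense=-20, income_taxes=900)
eb = ebitda(e, depreciation=1292, amortization=746)
adj = adjusted_ebitda(eb, {"stock_comp": 800, "transformation_severance": 300})
```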
Transformation severance and related expenses decreased $0.2 million for the three months ended March 31, 2015 compared to the same period in 2014. Transformation severance and related expenses fluctuate with staff reductions and lease expenses associated with vacating office space across all segments in order to reduce our cost structure.
Stock-based compensation increased $0.1 million, or 10.9%, for the three months ended March 31, 2015 compared to the same period in 2014 due to relatively higher expenses associated with equity awards granted during the 2014 fiscal year.
We include a detailed calculation of Adjusted EBITDA by segment in Note D of “Notes to Consolidated Financial Statements” in Item 1 of this Form 10-Q. A summary of Adjusted EBITDA by segment for the three months ended March 31, 2015 and 2014 is as follows (in thousands):
                                                Three Months Ended March 31,
                                                   2015         2014
Recovery Audit Services – Americas             $    5,981   $    5,958
Recovery Audit Services – Europe/Asia-Pacific       1,367          560
Adjacent Services                                    (680)      (1,156)
Healthcare Claims Recovery Audit Services            (689)      (1,731)
Subtotal for reportable segments                    5,979        3,631
Corporate Support                                  (3,516)      (3,113)
Total                                          $    2,463   $      518
Recovery Audit Services – Americas Adjusted EBITDA did not significantly change for the three months ended March 31, 2015 compared to the same period in 2014. The revenue declines experienced by this segment were offset by reductions in COR and SG&A expenses.
Recovery Audit Services – Europe/Asia-Pacific Adjusted EBITDA increased by $0.8 million, or 144.1%, for the three months ended March 31, 2015 compared to the same period in 2014. The increase is due to greater reductions in COR and SG&A expenses than revenue.
Adjacent Services Adjusted EBITDA improved $0.5 million for the three months ended March 31, 2015, compared to the same period in 2014. Excluding the impact of the sale of the Chicago, Illinois-based consulting practice on October 1, 2014, the improvement in Adjusted EBITDA is primarily due to a decrease in SG&A expenses.
Healthcare Claims Recovery Audit Services Adjusted EBITDA improved $1.0 million, or 60.2%, for the three months ended March 31, 2015 compared to the same period in 2014. This improvement is due to reductions in COR and SG&A expenses that exceeded the reductions in revenue.
Corporate Support Adjusted EBITDA declined by $0.4 million, or 12.9%, for the three months ended March 31, 2015 compared to the same period in 2014. This decrease is due primarily to increased equity compensation and expenses associated with our information technology infrastructure initiatives.
Liquidity and Capital Resources
As of March 31, 2015, we had $23.4 million in cash and cash equivalents and no borrowings outstanding against our $20.0 million revolving credit facility.
Operating Activities. Net cash provided by operating activities was $5.4 million and $2.9 million during the three months ended March 31, 2015 and 2014, respectively. These amounts consist of two components, specifically, net loss adjusted for certain non-cash items (such as depreciation, amortization, stock-based compensation expense, and deferred income taxes) and changes in assets and liabilities, primarily working capital, as follows (in thousands):
                                                Three Months Ended March 31,
                                                   2015         2014
Net loss                                       $   (2,958)  $   (3,674)
Adjustments for certain non-cash items              5,078        3,428
Subtotal                                            2,120         (246)
Changes in operating assets and liabilities         3,304        3,158
Net cash provided by operating activities      $    5,424   $    2,912
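The two-component decomposition in the table above ties out arithmetically; a quick check using the table amounts (in thousands):

```python
# Operating cash flow = (net loss adjusted for non-cash items)
#                       + (changes in operating assets and liabilities),
# using the amounts (in thousands) from the table above.

def operating_cash_flow(net_loss, non_cash_adjustments, working_capital_changes):
    subtotal = net_loss + non_cash_adjustments
    return subtotal, subtotal + working_capital_changes

sub_2015, ocf_2015 = operating_cash_flow(-2958, 5078, 3304)
sub_2014, ocf_2014 = operating_cash_flow(-3674, 3428, 3158)
```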
The change in net cash provided by operating activities primarily resulted from changes in operating assets and liabilities as well as the relatively lower net loss compared to the 2014 period. We include an itemization of these changes in our Condensed Consolidated Statements of Cash Flows (Unaudited) in Item 1 of this Form 10-Q.
Investing Activities. Net cash used for property and equipment capital expenditures was $1.1 million and $0.8 million during the three months ended March 31, 2015 and 2014, respectively. These capital expenditures primarily related to investments we made to upgrade our information technology infrastructure.
Capital expenditures are discretionary and we currently expect to continue to make capital expenditures to enhance our information technology infrastructure and proprietary audit tools in 2015. Should we experience changes in our operating results, we may alter our capital expenditure plans.
Financing Activities. Net cash used by financing activities was $5.5 million and net cash provided by financing activities was $0.1 million for the three months ended March 31, 2015 and 2014, respectively. The increase in net cash used by financing activities in the three months ended March 31, 2015 compared to same period in 2014 is primarily due to the $5.5 million of common stock repurchased during the first three months of 2015.
Secured Credit Facility
On January 19, 2010, we entered into a four-year revolving credit and term loan agreement with SunTrust Bank (“SunTrust”). The SunTrust credit facility initially consisted of a $15.0 million committed revolving credit facility and a $15.0 million term loan. The SunTrust term loan required quarterly principal payments of $0.8 million beginning in March 2010, and a final principal payment of $3.0 million due in January 2014 that we paid in December 2013. The SunTrust credit facility is guaranteed by the Company and all of its material domestic subsidiaries and secured by substantially all of the assets of the Company.
On January 17, 2014, we entered into an amendment of the SunTrust credit facility that increased the committed revolving credit facility from $15.0 million to $25.0 million, lowered the applicable margin to a fixed rate of 1.75%, eliminated
the provision limiting availability under the revolving credit facility based on eligible accounts receivable, increased our stock repurchase program limits, and extended the scheduled maturity of the revolving credit facility to January 16, 2015 (subject to earlier termination as provided therein).
On December 23, 2014, we entered into an amendment of the SunTrust credit facility that reduced the committed revolving credit facility from $25.0 million to $20.0 million. The credit facility bears interest at a rate per annum comprised of a specified index rate based on one-month LIBOR, plus an applicable margin (1.75% per annum). The credit facility includes two financial covenants (a maximum leverage ratio and a minimum fixed charge coverage ratio) that apply only if we have borrowings under the credit facility that arise or remain outstanding during the final 30 calendar days of any fiscal quarter. These financial covenants also will be tested, on a modified pro forma basis, in connection with each new borrowing under the credit facility. This amendment also extended the scheduled maturity of the revolving credit facility to December 23, 2017 and lowered the commitment fee to 0.25% per annum, payable quarterly, on the unused portion of the revolving credit facility.
As of March 31, 2015, we had no outstanding borrowings under the SunTrust Credit Facility. With the provision of a fixed applicable margin of 1.75% per the amendment of the SunTrust credit facility, the interest rate that would have applied at March 31, 2015 had any borrowings been outstanding was approximately 1.92%.
The SunTrust credit facility includes customary affirmative, negative, and financial covenants binding on the Company, including delivery of financial statements and other reports, maintenance of existence, and transactions with affiliates. The negative covenants limit the ability of the Company, among other things, to incur debt, incur liens, make investments, sell assets or declare or pay dividends on its capital stock. The financial covenants included in the SunTrust credit facility, among other things, limit the amount of capital expenditures the Company can make, set forth maximum leverage and net funded debt ratios for the Company and a minimum fixed charge coverage ratio, and also require the Company to maintain minimum consolidated earnings before interest, taxes, depreciation and amortization. In addition, the SunTrust credit facility includes customary events of default.
We believe that we will have sufficient borrowing capacity and cash generated from operations to fund our capital and operational needs for at least the next twelve months.
Stock Repurchase Program
On February 21, 2014, our Board of Directors authorized a stock repurchase program under which we may repurchase up to $10.0 million of our common stock from time to time through March 31, 2015. On March 25, 2014, our Board of Directors authorized a $10.0 million increase to the stock repurchase program. On October 24, 2014, our Board of Directors authorized a
$20.0 million increase to the stock repurchase program, increasing the total stock repurchase program since its inception to
$40.0 million, and extended the duration of the program to December 31, 2015. From the February 2014 announcement of the Company’s current stock repurchase program through March 31, 2015, the Company has repurchased 4.7 million shares, or 15.7%, of its common stock outstanding on the date of the announcement, for an aggregate cost of $28.2 million. For the three months ended March 31, 2015, we repurchased 1.1 million shares under this plan for an aggregate cost of $5.5 million. These shares were retired and accounted for as a reduction to Shareholders' equity in the Condensed Consolidated Balance Sheet (Unaudited). Direct costs incurred to acquire the shares are included in the total cost of the shares.
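As a rough consistency check, the remaining authorization implied by these figures can be reproduced with simple arithmetic. The sketch below uses only the rounded dollar amounts quoted in this section; variable names are illustrative:

```python
# Consistency check on the repurchase-program figures quoted above ($ millions).
authorized = 10.0 + 10.0 + 20.0        # Feb 2014 + Mar 2014 + Oct 2014 authorizations
aggregate_cost = 28.2                  # spent from inception through March 31, 2015
remaining = authorized - aggregate_cost

q1_cost = 5.5                          # three months ended March 31, 2015
q1_shares = 1.1                        # millions of shares repurchased in Q1 2015
q1_avg_price = q1_cost / q1_shares     # ~$5.00 per share on these rounded inputs

print(round(remaining, 1))             # 11.8
```

The $11.8 million result matches the remaining dollar value disclosed in the issuer-purchases table in Item 2.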
The timing and amount of future repurchases, if any, will depend upon the Company’s stock price, the amount of the Company’s available cash, regulatory requirements, and other corporate considerations. The Company may initiate, suspend or discontinue purchases under the stock repurchase program at any time.
Off-Balance Sheet Arrangements
As of March 31, 2015, the Company did not have any material off-balance sheet arrangements, as defined in Item 303(a)(4)(ii) of the SEC’s Regulation S-K.
We describe the Company’s significant accounting policies in Note 1 of Notes to Consolidated Financial Statements of the Company’s Annual Report on Form 10-K for the year ended December 31, 2014. We consider certain of these accounting policies to be “critical” to the portrayal of the Company’s financial position and results of operations, as they require the application of significant judgment by management. As a result, they are subject to an inherent degree of uncertainty. We identify and discuss these “critical” accounting policies in the Management’s Discussion and Analysis of Financial Condition and Results of Operations section of the Company’s Annual Report on Form 10-K for the year ended December 31, 2014. Management bases its estimates and judgments on historical experience and on various other factors that management believes to be reasonable under the circumstances, the results of which form the basis for making judgments about the carrying values of assets and liabilities that are not readily apparent from other sources. Actual results may differ from these estimates under different assumptions or conditions. On an ongoing basis, management evaluates its estimates and judgments, including those considered “critical”. Management has discussed the development, selection and evaluation of accounting estimates, including those deemed “critical,” and the associated disclosures in this Form 10-Q with the Audit Committee of the Board of Directors.
Some of the information in this Form 10-Q contains “forward-looking statements” within the meaning of Section 27A of the Securities Act of 1933 and Section 21E of the Securities Exchange Act of 1934, which statements involve substantial risks and uncertainties including, without limitation, statements regarding: (1) future results of operations or of the Company’s financial condition, (2) the adequacy of the Company’s current working capital and other available sources of funds, (3) the Company's goals and plans for the future, including its strategic initiatives and growth opportunities, (4) expectations regarding future revenue trends, and (5) the expected impact of the Company’s decision to exit the Company's Healthcare Claims Recovery Audit Services business. All statements that cannot be assessed until the occurrence of a future event or events should be considered forward-looking. These statements are forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995 and can be identified by the use of forward-looking words such as “may,” “will,” “expect,” “anticipate,” “believe,” “estimate” and “continue” or similar words. Risks and uncertainties that may potentially impact these forward-looking statements include, without limitation, those set forth under Part I, Item 1A “Risk Factors” in the Company’s Annual Report on Form 10-K for the year ended December 31, 2014 and its other periodic reports filed with the Securities and Exchange Commission. The Company disclaims any obligation or duty to update or modify these forward-looking statements.
There may be events in the future, however, that the Company cannot accurately predict or over which the Company has no control. The risks and uncertainties listed in this section, as well as any cautionary language in this Form 10-Q, provide examples of risks, uncertainties and events that may cause our actual results to differ materially from the expectations we describe in our forward-looking statements. You should be aware that the occurrence of any of the events denoted above as risks and uncertainties and elsewhere in this Form 10-Q could have a material adverse effect on our business, financial condition and results of operations.
Foreign Currency Market Risk. Our reporting currency is the U.S. dollar, although we transact business in various foreign locations and currencies. As a result, our financial results could be significantly affected by factors such as changes in foreign currency exchange rates or weak economic conditions in the foreign markets in which we provide our services. Our operating results are exposed to changes in exchange rates between the U.S. dollar and the currencies of the other countries in which we operate. When the U.S. dollar strengthens against other currencies, the value of foreign functional currency revenue decreases. When the U.S. dollar weakens, the value of the foreign functional currency revenue increases. Overall, we are a net receiver of currencies other than the U.S. dollar and, as such, benefit from a weaker dollar. We therefore are adversely affected by a stronger dollar relative to major currencies worldwide. During the three months ended March 31, 2015, we recognized $1.7 million of operating income from operations located outside the U.S., virtually all of which was originally accounted for in currencies other than the U.S. dollar. Upon translation into U.S. dollars, such operating income would increase or decrease, assuming a hypothetical 10% change in weighted-average foreign currency exchange rates against the U.S. dollar, by $0.2 million for the three months ended March 31, 2015. We currently do not have any arrangements in place to hedge our foreign currency risk.
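The disclosed translation sensitivity follows directly from the figures above; a minimal sketch of the calculation, using the rounded amounts quoted in the text:

```python
# Illustrative FX sensitivity from the figures above ($ millions).
foreign_operating_income = 1.7   # operating income from non-U.S. operations, Q1 2015
fx_shock = 0.10                  # hypothetical 10% change in weighted-average rates

sensitivity = foreign_operating_income * fx_shock
print(round(sensitivity, 1))     # 0.2
```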
Interest Rate Risk. Our interest income and expense are sensitive to changes in the general level of U.S. interest rates. In this regard, changes in U.S. interest rates affect the interest earned on our cash equivalents as well as interest paid on amounts outstanding under our revolving credit facility, if any. As of March 31, 2015, we had no borrowings outstanding against our $20.0 million revolving credit facility. Interest on our revolving credit facility is payable monthly and accrues at an index rate using the one-month LIBOR rate plus an applicable margin of 1.75%. Assuming full utilization of the revolving credit facility, a hypothetical 100 basis point change in interest rates applicable to the revolver would result in an approximate $0.2 million change in annual pre-tax income.
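The all-in rate and the disclosed rate sensitivity can be sketched the same way. Note that the one-month LIBOR input below is inferred from the approximately 1.92% figure quoted above, not a value stated in the filing:

```python
# Illustrative revolver rate arithmetic. The one-month LIBOR input is inferred
# from the ~1.92% rate disclosed above, not a stated value.
margin = 0.0175                        # fixed applicable margin per annum
one_month_libor = 0.0017               # inferred index rate at March 31, 2015
all_in_rate = one_month_libor + margin

facility = 20.0                        # $ millions, committed revolving facility
rate_change = 0.01                     # hypothetical 100 basis point move
annual_pretax_impact = facility * rate_change  # assumes full utilization

print(round(all_in_rate * 100, 2))     # 1.92
print(round(annual_pretax_impact, 1))  # 0.2
```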
The Company carried out an evaluation, under the supervision and with the participation of its management, including the Chief Executive Officer and Chief Financial Officer, of the effectiveness of the design and operation of the Company’s “disclosure controls and procedures” (as defined in the Exchange Act Rule 13a-15(e)) as of the end of the period covered by this report. Based upon that evaluation, the Chief Executive Officer and Chief Financial Officer concluded that the Company’s disclosure controls and procedures were effective as of March 31, 2015.
There were no changes in the Company’s internal control over financial reporting during the quarter ended March 31, 2015 that have materially affected, or are reasonably likely to materially affect, the Company’s internal control over financial reporting.
We are party to a variety of legal proceedings arising in the normal course of business. While the results of these proceedings cannot be predicted with certainty, management believes that the final outcome of these proceedings will not have a material adverse effect on our financial position, results of operations or cash flows.
Item 1A. Risk Factors
There have been no material changes in the risks facing the Company as described in the Company’s Form 10-K for the year ended December 31, 2014.
Item 2. Unregistered Sales of Equity Securities and Use of Proceeds
The Company’s current credit facility prohibits the payment of any cash dividends on the Company’s capital stock.
The following table sets forth information regarding the purchases of the Company’s equity securities made by or on behalf of the Company or any affiliated purchaser (as defined in Exchange Act Rule 10b-18) during the three-month period ended March 31, 2015:
| 2015 | Total Number of Shares Purchased (a) | Average Price Paid per Share | Total Number of Shares Purchased as Part of Publicly Announced Plans or Programs (b) | Maximum Approximate Dollar Value of Shares that May Yet Be Purchased Under the Plans or Programs (millions of dollars) |
|---|---|---|---|---|
| January 1 - January 31 | 340,738 | $5.38 | 331,774 | — |
| February 1 - February 28 | 300,540 | $5.34 | 300,540 | — |
| March 1 - March 31 | 497,618 | $4.22 | 497,618 | — |
| Total | 1,138,896 | $4.86 | 1,129,932 | $11.8 |
(a) Shares purchased during the quarter include shares surrendered by employees to satisfy tax withholding obligations upon vesting of restricted stock and shares from the Company's stock repurchase program.
(b) On February 21, 2014, our Board of Directors authorized a stock repurchase program under which we may repurchase up to $10.0 million of our common stock from time to time through March 31, 2015. On March 25, 2014, our Board of Directors authorized a $10.0 million increase to the stock repurchase program, bringing the total amount of its common stock that the Company may repurchase under the program to $20.0 million. On October 24, 2014, our Board of Directors authorized a $20.0 million increase to the stock repurchase program, increasing the total share repurchase program to $40.0 million, and extended the duration of the program to December 31, 2015. From the February 2014 announcement through March 31, 2015, the Company repurchased a total of 4,735,074 shares under this program for an aggregate purchase price of $28.2 million. The timing and amount of repurchases, if any, will depend upon the Company’s stock price, economic and market conditions, regulatory requirements, and other corporate considerations. The Company may initiate, suspend or discontinue purchases under the stock repurchase program at any time.
Restated Articles of Incorporation of the Registrant, as amended and corrected through August 11, 2006 (restated solely for the purpose of filing with the Commission) (incorporated by reference to Exhibit 3.1 to the Registrant’s Form 8-K filed on August 17, 2006).
3.1.1
Articles of Amendment of the Registrant effective January 20, 2010 (incorporated by reference to Exhibit 3.1 to the Registrant’s Form 8-K filed on January 25, 2010).
3.2
Amended and Restated Bylaws of the Registrant (incorporated by reference to Exhibit 3.1 to the Registrant’s Form 8-K filed on December 11, 2007).
4.1
Specimen Common Stock Certificate (incorporated by reference to Exhibit 4.1 to the Registrant’s Form 10-K for the year ended December 31, 2001).
4.2
See Restated Articles of Incorporation and Bylaws of the Registrant, filed as Exhibits 3.1 and 3.2, respectively.
10.1
Form of PRGX Performance-Based Restricted Stock Unit Agreement (incorporated by reference to Exhibit 10.1 to the Registrant's Form 8-K filed on April 1, 2015).
31.1
Certification of the Chief Executive Officer, pursuant to Rule 13a-14(a) or 15d-14(a), for the quarter ended March 31, 2015.
31.2
Certification of the Chief Financial Officer, pursuant to Rule 13a-14(a) or 15d-14(a), for the quarter ended March 31, 2015.
32.1
Certification of the Chief Executive Officer and Chief Financial Officer, pursuant to 18 U.S.C. Section 1350, for the quarter ended March 31, 2015.
Introduction {#s1}
============
There is substantial evidence that subcortical white matter lesions are associated with cognitive deficits [@pone.0013567-GunningDixon1]--[@pone.0013567-Raz1]. However, the majority of investigations involve persons aged over 60 years, and it is rare for research to focus on middle-aged adults. From a lifespan perspective this is a notable omission as it is important to identify when age-related cognitive deficits begin to appear, and to pinpoint factors that may explain such deficits. Not only is such work theoretically important, but practically, it may provide valuable information concerning when screening and assessment for age-related neuropathology should begin, and thereby facilitate early intervention. Here, we investigated white matter lesions and performance on a range of cognitive measures. Importantly, we focused on healthy, community-dwelling adults aged between 44 and 48 years. Our objective was to assess whether associations between white matter lesions and cognitive deficits typically reported in the over 60 s, were evident in this comparatively younger age group.
Our particular focus was on white matter hyperintensities (WMH). WMH refer to white matter lesions that appear as high signal intensities on T2-weighted MRI. Their neuropathological origins are wide-ranging and include demyelination, gliosis, destruction of axons, and eventual cavitation and infarction. As the myelinated axons within white matter form connective pathways within and between different brain structures, damage to these pathways is likely to have consequences for the efficiency of information transfer within the brain, and therefore, for cognitive function. Indeed, it is likely that white matter alterations differentially affect cognitive function depending on the brain regions involved [@pone.0013567-Sullivan1].
Age, together with vascular risk factors, is one of the strongest predictors of WMH burden, and research shows WMH are associated with deficits in a range of cognitive domains including processing speed, executive function and episodic memory [@pone.0013567-GunningDixon1]--[@pone.0013567-Raz1]. As noted though, the vast majority of the studies to date have focused on older ages, and it is relatively rare for WMH and cognition to be investigated in healthy adults below 50 years of age. Three recent studies that did include persons aged under 50 years [@pone.0013567-Kennedy1]--[@pone.0013567-Raz2] all demonstrated associations between white matter degradation and cognitive deficits, but did so in relatively small samples, the largest comprising 52 persons.
A major objective of the present study, therefore, was to address the paucity of research investigating associations between WMH and cognition in large population-based samples of adults aged below 50 years. Additionally, it was important to establish how far vascular risk factors account for WMH-cognitive associations in middle age. One of the aforementioned studies [@pone.0013567-Raz2] clearly implicated vascular health as a major influence on white matter degradation, and by extension, cognitive function. Therefore, we utilized data for 428 persons aged 44 to 48 years participating in the *PATH Through Life Project*, a large-scale population-based study of age, cognition, and a range of health, biological, and individual difference variables [@pone.0013567-Anstey1]. On the basis of the established link between cognition and WMH, we expected cognitive deficits where WMH were present. We anticipated that such associations would be specific to the cognitive domain. Specifically, tasks drawing upon executive processes would predict frontal WMH, while memory measures would predict temporal white matter lesions. Because our focus was on non-periventricular WMH which are thought to be related to ischaemia \[e.g., [@pone.0013567-Fazekas1],[@pone.0013567-Wen1]\], we also anticipated that vascular risk factors would account for any WMH-cognition associations that were identified.
Methods {#s2}
=======
Ethics statement {#s2a}
----------------
All aspects of the study were approved by the Australian National University Human Research Ethics Committee. Written informed consent was obtained from all participants in the study.
Participants {#s2b}
------------
This cohort of the *PATH Through Life Project* comprised 2530 individuals aged 44--48 years who were residents of the city of Canberra and surrounding areas, and were recruited randomly through the electoral roll. Enrolment to vote is compulsory for Australian citizens. A randomly selected subsample of 656 participants was offered an MRI scan, of which 503 accepted, and 431 (85.7%) eventually completed. There were no differences in age, sex and years of education between those who had an MRI scan and those who did not (p\>0.05). In the present study, WMH data for three participants were unavailable, and so the analyses reported below are based on 428 persons (232 women) with a mean age of 46.69 years (SD = 1.43). For those participants, apart from WMH reported here, MRI scans did not indicate any additional neuropathology.
Health variables {#s2c}
----------------
Health histories were obtained through an interview, and included details (prevalence rates in parentheses) of cancer (n = 10; 2.3%), heart disease (n = 13; 3.0%), stroke (n = 4; 0.9%), diabetes (n = 9; 2.1%), thyroid problems (n = 19; 4.4%), and head injury (n = 67; 15.7%). All were coded 1 = Yes, 2 = No. In a minority of cases (\<1.7%), missing data were coded '2'. This represents a conservative approach to the estimation of disease for missing data. Two readings of resting blood pressure were taken two hours apart, with participants seated, by an interviewer who had received specific training in the use of the Omron M4 automatic blood pressure monitor. For present purposes, high blood pressure (BP) was defined as either mean systolic BP\>140 mm Hg, or mean diastolic BP\>90 mm Hg [@pone.0013567-Joint1]. Values above those thresholds were coded '2', and those below '1'.
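The blood-pressure coding rule described above could be expressed as a small helper (a sketch; the function name is ours, not the study's):

```python
def code_high_bp(mean_systolic, mean_diastolic):
    """Code high BP as in the text: 2 if mean systolic > 140 mm Hg
    or mean diastolic > 90 mm Hg, else 1 (thresholds are exclusive)."""
    return 2 if (mean_systolic > 140 or mean_diastolic > 90) else 1

print(code_high_bp(150, 85))  # 2
print(code_high_bp(130, 80))  # 1
```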
Psychomotor tasks {#s2d}
-----------------
Simple and choice reaction tasks were administered, and for both tasks measures of intraindividual mean RT and variability were computed.
### Simple and choice RT tasks: {#s2d1}
These tasks were administered using a small box held with both hands, with left and right buttons at the top to be depressed by the index fingers. The front of the box had three lights: two red stimulus lights under the left and right buttons respectively and a green get-ready light in the middle beneath these. There were four blocks of 20 trials measuring simple reaction time (SRT), followed by two blocks of 20 trials measuring choice reaction time (CRT). For SRT everyone used their right hand regardless of dominance. The interval between the 'get-ready' light and the first light of the trial was 2.3 s for both SRT and CRT.
### Computation of intraindividual mean RT {#s2d2}
Means were calculated after removing outliers. This was done by firstly eliminating any values over 2000 ms. Next, means and standard deviations were calculated for each individual for each block and values were eliminated which lay outside three standard deviations for each individual. A number of very slow individuals still retained RT scores greater than 1000 ms. In a final step, these values were dropped before the final means per block were calculated for each participant. Here we present the grand mean across blocks for the respective tasks.
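The three trimming steps above can be sketched for a single block as follows. This is an illustrative reimplementation, not the study's actual code; the sample-SD convention (ddof = 1) is an assumption:

```python
import numpy as np

def trimmed_block_mean(rts):
    """Mean RT (ms) for one block after the three trimming steps described above."""
    rts = np.asarray(rts, dtype=float)
    rts = rts[rts <= 2000]                 # step 1: drop values over 2000 ms
    m, s = rts.mean(), rts.std(ddof=1)     # per-person, per-block mean and SD (assumed sample SD)
    rts = rts[np.abs(rts - m) <= 3 * s]    # step 2: drop values beyond 3 SD
    rts = rts[rts <= 1000]                 # step 3: drop remaining values over 1000 ms
    return rts.mean()
```

The grand mean reported in the text would then be the average of these per-block means across the blocks of each task.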
### Computation of intraindividual variability {#s2d3}
Mean absolute residuals (in ms) were calculated for each individual by averaging the deviations from regression models of RT against trial number and block number in each of the simple and choice RT series (Blocks 1--4 inclusive for simple RT, and Blocks 5--6 for choice RT). A quadratic function of trial number was also entered into the model because the decline in RT with practice is not linear. Block number was treated as categorical. These models were designed to remove both intra-block practice effects and the effect of the short rest periods between blocks, leaving residuals that measure only random variation. By contrast, simply using each person\'s 'raw' standard deviation of RT would inflate the apparent variability for participants who showed substantial improvement over the course of their trials.
This procedure is similar to that used in the cognitive aging literature more broadly [@pone.0013567-Hultsch1] and follows a precedent within the PATH Through Life Project specifically [@pone.0013567-Anstey1]. The procedure takes into account individual differences in RT when determining outliers, and the absolute cut-off at the final step ensures that intermittent unusually slow responses for this age group (for the respective tasks) are excluded. This results in a conservative measure of variability and in consequence, there is an increased likelihood that any effects found in relation to the variability measure are robust.
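The residual-based variability measure described above could be sketched roughly as follows, using ordinary least squares with a quadratic trial term and categorical block dummies (an illustration; the original model-fitting details may differ):

```python
import numpy as np

def mean_abs_residual(rt, trial, block):
    """Mean absolute residual (ms) from a regression of RT on trial number
    (linear + quadratic) and block number (treated as categorical)."""
    rt = np.asarray(rt, dtype=float)
    trial = np.asarray(trial, dtype=float)
    block = np.asarray(block)
    levels = np.unique(block)
    # design matrix: intercept, trial, trial^2, block dummies (first level as reference)
    X = np.column_stack(
        [np.ones_like(trial), trial, trial ** 2]
        + [(block == b).astype(float) for b in levels[1:]]
    )
    beta, *_ = np.linalg.lstsq(X, rt, rcond=None)
    return np.abs(rt - X @ beta).mean()
```

Applied separately to each person's simple RT series (Blocks 1-4) and choice RT series (Blocks 5-6), this removes smooth practice effects and between-block shifts, leaving residuals that index trial-to-trial variability.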
Other cognitive measures {#s2e}
------------------------
In addition to the RT tasks, a battery of cognitive tests was administered to participants. This included the *backward digit span test* from the Wechsler Memory Scale, which requires participants to repeat strings of three to six digits in reverse order [@pone.0013567-Wechsler1]. *Immediate* and *delayed recall* were assessed using the first trial of the California Verbal Learning Test, which requires participants to remember 16 shopping list items and to recall them immediately and again after a delay of twenty minutes [@pone.0013567-Delis1]. In a *face recognition task* [@pone.0013567-Crane1], 12 photographs of faces were presented for 45 s. After a 90 s delay, the 12 target faces were re-presented together with 13 distracter faces. Finally, *lexical decision making* was measured with the Spot-the-Word test [@pone.0013567-Baddeley1], which comprises 60 items, each requiring participants to indicate which of two letter strings is a valid word.
MRI acquisition {#s2f}
---------------
MRI data were acquired on a 1.5 Tesla Gyroscan scanner (ACS-NT, Philips Medical Systems, Best, The Netherlands). T1-weighted 3-D structural MRI images were acquired in the coronal plane using a Fast Field Echo (FFE) sequence. About mid-way through this study, for reasons beyond the researchers\' control, the original scanner (Scanner A) was replaced with an identical Philips scanner (Scanner B) and single-channel RF head coils. The scanning parameters were kept essentially the same. The first 163 subjects were scanned on Scanner A with TR = 8.84 ms, TE = 3.55 ms, a flip angle of 8°, matrix size = 256×256, 160 slices, and a field of view (FOV) of 256×256 mm. Slices were contiguous with a slice thickness of 1.5 mm. For the remaining 268 subjects scanned on Scanner B, the TR = 8.93 ms and TE = 3.57 ms values were slightly different in order to improve image quality, but all other parameters were exactly the same. The fluid-attenuated inversion recovery (FLAIR) sequence was the same for both scanners and was acquired with TR = 11,000 ms, TE = 140 ms, TI = 2,600 ms, number of excitations = 2, matrix size = 256×256, and FOV = 230×230 mm. Slice thickness was 4.0 mm with no gap between slices, and in-plane spatial resolution was 0.898×0.898 mm/pixel. To ensure the reliability and compatibility of the data, we compared the subjects scanned on the two scanners on sociodemographic and imaging parameters. There were no differences in age (p = 0.377) or years of education (p = 0.588), but more women were inadvertently scanned on Scanner B than on Scanner A (p = 0.003). The volumetric measures of total intracranial volume, gray matter volume, white matter volume, and cerebrospinal fluid volume obtained from the two scanners did not differ significantly [@pone.0013567-Wen2]. For the old and new scanners respectively, mean volumes were as follows: grey matter = 0.72 vs 0.72; white matter = 0.47 vs 0.46; cerebrospinal fluid = 0.27 vs 0.27.
Image analysis {#s2g}
--------------
The image analysis of WMH has been described in detail elsewhere [@pone.0013567-Wen2]. Briefly, the FLAIR and 3D T1 structural images of the same subject were co-registered [@pone.0013567-Wells1], and the T1-weighted structural images were segmented into three separate tissue components (grey matter, white matter, and cerebrospinal fluid) [@pone.0013567-Ashburner1], [@pone.0013567-Ashburner2]. Nonbrain tissue was removed from both the T1-weighted and co-registered FLAIR images using the brain mask transformed from the average mask originally defined in Talairach space, by inverting the deformation matrix generated from its own spatial normalization [@pone.0013567-Ashburner3]; the spatial normalization transformation used to produce the brain masks and white matter probability maps in the individual imaging space was inverted for the WMH detection and non-brain tissue removal. Both FLAIR and T1-weighted images were intensity corrected after the removal of nonbrain tissue [@pone.0013567-Ashburner2]. Finally, a parametric method [@pone.0013567-Wen3] was adapted and applied to the initial WMH detection. Candidate WMH clusters were extracted from the brain and further investigated using a non-parametric k-nearest neighbor rule, consisting of a training procedure on a small portion of the dataset and a testing procedure applied to the whole dataset. Candidate clusters were then classified into deep WMH, periventricular WMH, and false WMH clusters. This method was validated against visual ratings conducted by two independent clinicians experienced in examining MRI scans, using a modified Fazekas scale [@pone.0013567-Fazekas2]. Periventricular white matter hyperintense signals were rated as 0 = absence, 1 = "caps" or pencil-thin lining, 2 = smooth "halo," 3 = irregular white matter hyperintensity extending into the deep white matter.
Deep white matter hyperintense signals were rated as 0 = absence, 1 = punctate foci, 2 = beginning confluence of foci, 3 = large confluent areas. There was a strong association between visual ratings and computed WMHs volumes (r = .823, p = 0.001).
Missing data and statistical analyses {#s2h}
-------------------------------------
For missing data on a minority of cognitive variables (mean RT and variability measures for SRT and CRT tasks), values were imputed with the EM algorithm in SPSS [@pone.0013567-Schafer1]. Missing data frequencies before imputation were less than 3% for all variables.
Hierarchical multiple regression was used in the main analyses. WMH variables were regressed onto intracranial volume and total white matter volume at Step 1 in order to take into account individual differences in neuroanatomical structure. At Step 2, the main effects for the cognitive variables and gender were entered into the equation. As several gender effects involving cognitive variables were evident in the bivariate correlations, at Step 3 the Gender × Cognitive variable interaction was entered (variables were centered prior to this procedure). Due to the number of regression equations run, alpha was set conservatively at p\<.01.
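The three-step hierarchy can be illustrated on synthetic data (a sketch only; the data are simulated and the variable names are placeholders, not the study's measures):

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Synthetic illustration of the three-step hierarchy described above.
rng = np.random.default_rng(0)
n = 428
icv = rng.normal(size=n)              # stand-in for intracranial volume (Step 1)
total_wm = rng.normal(size=n)         # stand-in for total white matter volume (Step 1)
gender = rng.integers(0, 2, size=n)   # entered at Step 2
cog = rng.normal(size=n)              # centered cognitive variable (Step 2)
wmh = 0.3 * total_wm - 0.2 * cog + rng.normal(size=n)  # simulated outcome

step1 = np.column_stack([icv, total_wm])
step2 = np.column_stack([step1, gender, cog])
step3 = np.column_stack([step2, gender * cog])  # Gender x Cognitive interaction

r2 = [r_squared(X, wmh) for X in (step1, step2, step3)]
# R^2 cannot decrease across steps; the increment at each step is the quantity
# of interest in a hierarchical analysis
```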
Results {#s3}
=======
Descriptive statistics for the WMH variables by laterality and gender are presented in [Table 1](#pone-0013567-t001){ref-type="table"}. In the light of the associations that we report with the cognitive variables below, it is important to note that a relatively small percentage of WMH were recorded across the various brain regions, and that percentages were largely similar for men and women (for further details of WMH in this sample, see 16).
10.1371/journal.pone.0013567.t001
###### Descriptive data for white matter hyperintensities variables by gender[1](#nt102){ref-type="table-fn"}.
{#pone-0013567-t001-1}
| Region | % of sample (M) | % of sample (W) | Mean vol (SD) (M) | Mean vol (SD) (W) | Range (M) | Range (W) |
|---|---|---|---|---|---|---|
| Left Frontal | 7.1 | 6.9 | 2.23 (12.20) | 2.77 (19.06) | 127.5 | 255 |
| Right Frontal | 8.7 | 11.6 | 5.03 (30.98) | 5.46 (24.31) | 346.5 | 244.5 |
| Left Temporal | 1.5 | 0.4 | 0.45 (3.97) | 0.04 (0.59) | 45 | 9 |
| Right Temporal | 2.0 | 0.4 | 0.37 (2.76) | 0.19 (2.86) | 27 | 43.5 |
| Left Parietal | 15.3 | 15.5 | 8.66 (37.21) | 16.28 (57.95) | 337.5 | 415.5 |
| Right Parietal | 16.8 | 20.7 | 7.52 (26.00) | 18.18 (94.59) | 190.5 | 1290 |
| Left Occipital | 1.5 | 0.4 | 0.39 (3.78) | 0.06 (0.69) | 49.5 | 10.5 |
| Right Occipital | 1.0 | 0.9 | 0.08 (0.91) | 0.81 (9.67) | 12 | 139.5 |
Notes. All values computed within-gender (men = 196; women = 232). Mean volumes (SD) are in mm^3^. Ranges are reported as maxima; the lowest value in each case = 0. M = Men; W = Women.
Bivariate correlations for all the main variables in the study are presented in [Table 2](#pone-0013567-t002){ref-type="table"}. Gender differences were observed for some of the cognitive variables; women outperformed men on immediate and delayed recall, whereas the opposite was true for spot-the-word and backward digit span tasks. Apart from that expected for total white matter volume, the associations between gender and WMH were all statistically unreliable. With one exception, significant correlations indicated WMH to be associated with poorer cognitive performance. The exception concerned right temporal WMH, which were positively associated with spot-the-word scores.
10.1371/journal.pone.0013567.t002
###### Bivariate correlations between biographical, cognitive and white matter variables.
{#pone-0013567-t002-2}
M SD 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
------------------ ------- ------- ----------------------------------------- ---------------------------------------- ----------------------------------------- ----------------------------------------- ----------------------------------------- ----------------------------------------- ---------------------------------------- ---------------------------------------- ----------------------------------------- ----------------------------------------- ---------------------------------------- ---------------------------------------- ------ ---------------------------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- ------ ------
1.Gender − − −
2.Years Educ 14.56 2.40 −.08 −
3.Immed Rec 8.18 2.27 .24[\*\*](#nt107){ref-type="table-fn"} .10[\*](#nt106){ref-type="table-fn"} −
4.Del Rec 7.54 2.48 .22[\*\*](#nt107){ref-type="table-fn"} .09 .84[\*\*](#nt107){ref-type="table-fn"} −
5.Digit Back 5.80 2.22 −.10[\*](#nt106){ref-type="table-fn"} .12[\*](#nt106){ref-type="table-fn"} .21[\*\*](#nt107){ref-type="table-fn"} .16[\*\*](#nt107){ref-type="table-fn"} −
6.Face Recog 9.44 1.44 .09 .11[\*](#nt106){ref-type="table-fn"} .14[\*\*](#nt107){ref-type="table-fn"} .13[\*\*](#nt107){ref-type="table-fn"} .10[\*](#nt106){ref-type="table-fn"} −
7.Lex Dec Making 51.65 4.85 −.12[\*](#nt106){ref-type="table-fn"} .41[\*\*](#nt107){ref-type="table-fn"} .20[\*\*](#nt107){ref-type="table-fn"} .19[\*\*](#nt107){ref-type="table-fn"} .28[\*\*](#nt107){ref-type="table-fn"} .09 −
8.ISD SRT 0.044 .019 .06 −.07 −.06 −.05 −.14[\*\*](#nt107){ref-type="table-fn"} −.05 −.09 −
9.Mn SRT (ms) 240 42.5 .17[\*\*](#nt107){ref-type="table-fn"} −.08 −.04 −.01 −.18[\*\*](#nt107){ref-type="table-fn"} −.07 −.09 .59[\*\*](#nt107){ref-type="table-fn"} −
10.ISD CRT 0.046 .015 .07 −.02 −.02 −.04 −.03 −.01 −.03 .37[\*\*](#nt107){ref-type="table-fn"} .29[\*\*](#nt107){ref-type="table-fn"} −
11.Mn CRT (ms) 292 41.4 .15[\*\*](#nt107){ref-type="table-fn"} .01 .00 .02 −.08 −.02 −.06 .38[\*\*](#nt107){ref-type="table-fn"} .67[\*\*](#nt107){ref-type="table-fn"} .61[\*\*](#nt107){ref-type="table-fn"} −
12.ICV 1449 136 −.66[\*\*](#nt107){ref-type="table-fn"} .14[\*\*](#nt107){ref-type="table-fn"} −.16[\*\*](#nt107){ref-type="table-fn"} −.13[\*\*](#nt107){ref-type="table-fn"} .12[\*](#nt106){ref-type="table-fn"} −.13[\*\*](#nt107){ref-type="table-fn"} .15[\*\*](#nt107){ref-type="table-fn"} .00 −.14[\*\*](#nt107){ref-type="table-fn"} −.08 −.12[\*](#nt106){ref-type="table-fn"} −
13.Tot WM vol 463 55.1 −.65[\*\*](#nt107){ref-type="table-fn"} .13[\*\*](#nt107){ref-type="table-fn"} −.16[\*\*](#nt107){ref-type="table-fn"} −.12[\*\*](#nt107){ref-type="table-fn"} .11[\*](#nt106){ref-type="table-fn"} −.09 .11[\*](#nt106){ref-type="table-fn"} −.02 −.17[\*\*](#nt107){ref-type="table-fn"} −.14[\*\*](#nt107){ref-type="table-fn"} −.21[\*\*](#nt107){ref-type="table-fn"} .86[\*\*](#nt107){ref-type="table-fn"} −
14.Front WMH L 2.52 16.26 .02 .01 .01 .01 −.02 −.03 .03 .11[\*](#nt106){ref-type="table-fn"} .01 .12[\*](#nt106){ref-type="table-fn"} .06 .09 .08 −
15.Front WMH R 5.26 27.53 .01 −.01 −.03 −.04 −.01 .09 −.02 −.01 .01 −.04 −.01 −.04 −.03 .12[\*](#nt106){ref-type="table-fn"} −
16.Temp WMH L 0.23 2.72 −.08 −.05 −.06 −.09 .00 −.12[\*](#nt106){ref-type="table-fn"} −.11[\*](#nt106){ref-type="table-fn"} −.02 −.05 −.01 −.01 .11[\*](#nt106){ref-type="table-fn"} .09 −.01 .15[\*\*](#nt107){ref-type="table-fn"} −
17.Temp WMH R 0.27 2.81 −.03 .05 −.10[\*](#nt106){ref-type="table-fn"} −.11[\*](#nt106){ref-type="table-fn"} .06 .01 .12[\*](#nt106){ref-type="table-fn"} .03 .08 .00 .04 .12[\*](#nt106){ref-type="table-fn"} .07 −.02 .07 .02 −
18.Par WMH L 12.79 49.63 .08 −.07 .06 .06 .05 −.01 .02 −.06 .00 −.02 .03 −.06 −.06 .19[\*\*](#nt107){ref-type="table-fn"} .19[\*\*](#nt107){ref-type="table-fn"} .00 .05 −
19.Par WMH R 13.30 71.95 .07 .04 −.01 .03 .00 .03 −.01 −.06 −.02 −.05 .00 −.04 −.04 .01 .10[\*](#nt106){ref-type="table-fn"} −.02 .01 .26[\*\*](#nt107){ref-type="table-fn"} −
20.Occ WMH L 0.20 2.61 −.07 .01 −.03 −.06 −.04 −.01 .06 −.05 −.03 −.04 −.03 .02 .02 −.01 .12[\*](#nt106){ref-type="table-fn"} .13[\*\*](#nt107){ref-type="table-fn"} .35[\*\*](#nt107){ref-type="table-fn"} −.01 −.01 −
21.Occ WMH R 0.48 7.15 .05 .02 .07 .04 .06 .05 .02 −.04 −.06 −.04 −.08 −.05 −.04 −.01 −.01 −.01 .00 −.02 .00 −.01
\*p\<.05,
\*\*p\<.01.
ISD = Intraindividual variability; SRT = Simple RT; CRT = Choice RT; ICV = Intracranial volume; WMH = white matter hyperintensities (ICV and WMH = mm^3^); Gender, 1 = male, 2 = female.
The results of the hierarchical regressions are presented in [Table 3](#pone-0013567-t003){ref-type="table"}. Three features of the findings should be highlighted. First, where associations with cognitive variables exist, they involve the frontal and temporal lobes, but not the parietal and occipital lobes. Second, associations are predominantly with left hemisphere WMH volumes. Third, greater left frontal lesioning was associated with higher intraindividual variability in choice RT, and greater temporal WMH burden was associated with poorer Spot-the-Word scores. However, both of these effects were modified by significant Gender × Cognitive variable interactions. Additionally, that interaction was significant for face recognition, although the primary effects for that regression were statistically unreliable.
10.1371/journal.pone.0013567.t003
###### White matter hyperintensities regressed on cognitive variables.
{#pone-0013567-t003-3}
Left Frontal Right Frontal Left Temp Right Temp Left Parietal Right Parietal Left Occ Right Occ
------------------------ -------------------------------------- --------------- --------------------------------------- --------------------------------------- --------------- ---------------- ---------- -----------
1^a^.WM vol .03 .02 −.02 −.13 −.04 −.02 .03 .04
IC vol .07 −.06 .13 .23 −.02 −.02 −.01 −.09
2^b^. Immed Rec (IR) .01 −.03 −.04 −.10 .04 −.03 −.01 .06
Gender .14 −.03 .00 .09 .06 .10 −.09 .02
3^c^. Gender × IR −.12 .05 .03 .06 −.03 −.03 .03 .06
2^b^. Del Rec (DR) .00 −.04 −.07 −.11 .04 .01 −.05 .03
Gender .14 −.02 .01 .09 .06 .09 −.09 .03
3^c^. Gender × DR −.11 .09 .06 .03 −.02 −.01 .07 .02
2^b^. Digit Back (DB) −.03 .00 −.01 .05 .06 .01 −.04 .07
Gender .14 −.03 −.01 .06 .07 .09 −.10 .04
3^c^. Gender × DB −.05 .02 .00 .05 .02 .00 .04 .06
2^b^. Face Rec (FR) −.03 .08 −.10 .03 −.01 .02 −.01 .04
Gender .14 −.03 −.01 .06 .07 .09 −.10 .03
3^c^. Gender × FR −.04 −.03 .14[\*](#nt110){ref-type="table-fn"} −.04 −.01 .02 .01 .03
2^b^. Lexical DM (LDM) .02 −.01 −.13[\*](#nt110){ref-type="table-fn"} .10 .03 .00 .05 .03
Gender .14 −.03 −.02 .07 .07 .09 −.09 .04
3^c^. Gender × LDM −.02 .05 .13[\*](#nt110){ref-type="table-fn"} −.02 −.01 .00 −.07 .04
2^b^. ISD SRT (ISRT) .11 −.01 −.02 .02 −.06 −.06 −.04 −.04
Gender .14 −.03 −.01 .06 .08 .10 −.09 .04
3^c^. Gender × ISRT .09 .01 .02 −.06 −.04 −.03 .05 −.03
2^b^. MSRT .02 .01 −.03 .08 −.02 −.03 −.02 −.07
Gender .14 −.03 −.01 .05 .07 .09 −.09 .04
3^c^. Gender × MSRT .08 .06 .03 −.14[\*](#nt110){ref-type="table-fn"} −.05 −.02 .05 −.05
2^b^. ISD CRT (ICRT) .13[\*](#nt110){ref-type="table-fn"} −.04 .00 .01 −.02 −.05 −.04 −.05
Gender .14 −.03 −.01 .06 .07 .10 −.10 .03
3^c^. Gender × ICRT .16[\*](#nt110){ref-type="table-fn"} .05 −.02 −.02 −.05 −.03 .05 −.03
2^b^. MCRT .07 −.01 .00 .04 .02 −.01 −.02 −.08
Gender .14 −.03 −.01 .06 .07 .09 −.09 .04
3^c^. Gender × MCRT .09 .02 .00 −.08 .02 −.01 .03 −.07
Notes:
\*p\<.01; a = df 2, 425; b = df 2, 423; c = df 1, 422.
WM vol = White matter volume; IC vol = Intracranial volume; ISD = Intraindividual standard deviation; MSRT = Mean Simple RT; MCRT = Mean Choice RT; Temp = Temporal; Occ = Occipital.
Therefore, where significant Gender × Cognition interactions were found, regressions were rerun for men and women separately. With one exception, all interactions stemmed from stronger effects in men. The exception was the association between left frontal WMH and variability in the choice RT task. Although this was the only primary effect to attain significance across the whole sample (14 men and 16 women exhibited left frontal WMH), when the regression was rerun for men, it was statistically unreliable. However, for women, that regression was significant, df = 1,228, beta = .23, p\<.001, indicating the association between WMH and variability to be stronger in this group (see [Figure 1](#pone-0013567-g001){ref-type="fig"}). Consideration of [Figure 1](#pone-0013567-g001){ref-type="fig"} indicates that an outlier may have influenced this effect. Therefore, we reran the regression having removed this extreme value. The association remained positive and significant (beta = .13, p\<.05) as in the original analysis.
{#pone-0013567-g001}
When the remaining regressions producing significant Gender × Cognitive variable interactions were rerun within gender, for women, all associations were nonsignificant. By contrast, however, for men, significant associations indicated that greater WMH burden was associated with poorer cognitive performance: Left temporal lobe and face recognition, df = 1,192, beta = −.17, p = .014; left temporal lobe and Spot-the-Word scores, df = 1,192, beta = −.19, p = .008; right temporal lobe and simple mean RT, df = 1,192, beta = .24, p\<.001.
We then repeated the analyses taking years of education into account. This had little bearing on the initial regression findings. Importantly, as health status, and in particular vascular risk factors, may influence white matter-cognition relations, we then statistically controlled (by entering health factors individually at Step 1 of the hierarchical multiple regression) for histories of cancer, thyroid problems, head injury, diabetes, stroke, heart disease, and high blood pressure (blood pressure variables were entered as both dichotomous and continuous variables). Notably, none of these variables altered our original findings.
Finally, WMH data are highly skewed and this, together with outliers, may have influenced the findings. In order to reduce the influence of these sources of variance, we reran the main analyses using sequential logistic regression having recoded the WMH variables (0 = no WMH, and 1 = \>0 WMH). For left frontal WMH and variability in the CRT task, entry of variability significantly raised the probability of the presence of WMH, B = 0.38, OR = 1.46, CI = 1.09--1.96, p = .011. As subsequent entry of the Gender × Variability interaction term approached significance at conventional levels (p = .068), we ran logistic regression within men and women. For men, the regression was nonsignificant. For women however, greater variability was associated with the presence of left frontal WMH, B = 0.61, OR = 1.83, CI = 1.19--2.82, p = .006. These findings are consistent with the earlier linear regressions.
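Two mechanical details of these logistic analyses can be illustrated: the presence/absence recoding, and the fact that the reported odds ratios are the exponentiated coefficients (OR = exp(B)). The volumes below are invented for illustration, not study data:

```python
import numpy as np

# (1) Recode skewed WMH volumes to presence/absence, as described above
wmh_volume = np.array([0.0, 0.0, 12.5, 0.0, 3.1, 140.2])  # illustrative mm^3 values
wmh_present = (wmh_volume > 0).astype(int)                 # 0 = no WMH, 1 = >0 WMH
print(wmh_present.tolist())  # [0, 0, 1, 0, 1, 1]

# (2) An odds ratio is the exponentiated logistic coefficient
print(round(np.exp(0.38), 2))  # ~1.46, matching the OR reported for CRT variability
```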
For left temporal WMH and face recognition, the findings were similar to the earlier analyses. When face recognition was entered into the equation, statistics indicated that the presence of left temporal WMH was associated with poorer face recognition, B = −0.69, OR = 0.50, CI = 0.26--0.97, p = .039. Although entry of the Gender × Face recognition interaction was nonsignificant, further analysis revealed the trend was stronger in men. By contrast, logistic regressions examining left temporal WMH and spot-the-word, and right temporal WMH and simple mean RT, were inconsistent with the earlier analyses.
Discussion {#s4}
==========
This is one of the first investigations to focus on WMH and cognitive function in a large population-based sample of middle-aged adults. Several important findings suggested the possible presence of neuropathology in this relatively young and independently functioning group of 44 to 48 year olds living in the community. First, frontal lobe white matter lesions were associated with increased intraindividual variability, and temporal lobe WMH with deficits in face recognition. Second, these findings were left-lateralized, and the frontal lobe associations stronger in women, while the temporal lobe associations were stronger in men. Finally, statistically controlling for a range of health variables, including vascular risk factors, made no difference to those findings.
That WMH were associated with cognitive deficits was not in itself unusual, and is consistent with findings elsewhere [@pone.0013567-GunningDixon1]--[@pone.0013567-Raz1]. What is of note, however, is that this association was evident in a community-based sample of functioning persons in midlife. Although the effect sizes were relatively small, the findings are consistent with work elsewhere that has found an association between white matter integrity and cognition in persons less than 50 years of age [@pone.0013567-Kennedy1]--[@pone.0013567-Raz2]. From a lifespan perspective, these findings are important as they add to evidence that the deleterious effects of neurobiological disturbance may manifest at an earlier age than is suggested by the broader literature. Not only is this of note theoretically as it points to a possible neuropathological basis for cognitive decline in middle age, but also practically, as it suggests that preventative programs and early intervention may benefit community-dwelling adults in their 40s and upwards.
The results were selective in that left frontal lobe lesions were associated with within-person variability, while left temporal WMH were associated with spot-the-word and face recognition performance. The former finding is in line with the proposition that intraindividual variability indexes neurobiological disturbance [@pone.0013567-Hultsch1] and executive and attentional control mechanisms supported by the frontal cortex \[e.g., [@pone.0013567-Bunce1]--[@pone.0013567-West1]\]. This finding builds upon our earlier work showing frontal lesions to predict within-person variability in adults aged 60 to 64 years [@pone.0013567-Bunce3], and the left lateralization of the association is consistent with functional imaging work showing an association between left middle frontal (BA 46) activity and within-person variability [@pone.0013567-Bellgrove1]. Together, these studies suggest the neural correlates of intraindividual variability in young and middle-aged persons to include the left dorsolateral prefrontal cortex.
It is of note that, although the association between left frontal WMH and intraindividual variability was stronger in women, the primary effect was also significant for this regression (see [Table 3](#pone-0013567-t003){ref-type="table"}) indicating the trend to be present in men too (although this effect was nonsignificant when tested in men alone). One factor that may have contributed to the stronger effect in women is the likelihood that individuals in this age group were perimenopausal, and hormonal factors may have influenced the strength of this association. Indeed, there is evidence that estrogen may moderate variability over time in women [@pone.0013567-Wegesin1].
It is important to emphasize that while the measure of within-person variability was sensitive to WMH presence, the alternative measure of central tendency (mean RT) for the same choice RT task, was not. This finding adds to work showing a dissociation between measures of mean RT and within-person variability from the same task, with the latter variable being sensitive to possible neuropathology [@pone.0013567-Bunce3] and mild psychopathology [@pone.0013567-Bunce4], [@pone.0013567-Bunce5]. Given the apparent sensitivity of this measure to subtle effects when other cognitive measures are not, and evidence that intraindividual variability predicts conversion to mild cognitive impairment over several years [@pone.0013567-Cherbuin1], it is possible that these measures may serve as a valuable "early warning" screening tool in community and healthcare settings.
The association between left temporal WMH and spot-the-word and face recognition performance was in line with, respectively, work implicating that lobe in the processing of nouns [@pone.0013567-Perani1], and lesion studies showing that damage to the amygdala impairs face and emotion recognition [@pone.0013567-Adolphs1]. Also, right temporal lobe WMH were associated with slower responding in the simple RT task. However, some caution is appropriate in interpreting these temporal lobe findings as they stemmed from three and four men for left and right temporal lobe WMH respectively. Moreover, for spot-the-word and mean simple RT, the logistic regression failed to confirm earlier associations found using linear regression, suggesting that the skewed distribution and outlying WMH values may have influenced the initial finding. As both lexical decision making and perceptual speed measures may provide valuable insights into possible neuropathology in midlife, it is important that further work investigates these associations in middle-aged samples.
It is important to note that statistically controlling for a range of health variables, including histories of cancer, heart disease, thyroid problems, diabetes, stroke, head injury, and high blood pressure, had no bearing on the findings. The analyses controlling for vascular risk factors were of particular note as evidence suggests that non-periventricular WMH are associated with ischaemia \[e.g., [@pone.0013567-Fazekas1],[@pone.0013567-Wen1]\], and the finding is contrary to that reported by Raz and colleagues [@pone.0013567-Raz2]. Although the prevalence of health risk factors in this relatively young sample may have been too low to statistically account for the WMH-cognition associations, the present findings suggest that non-vascular influences, such as age and genetics, affect WMH-cognition associations in this relatively young and predominantly healthy group. Additionally, we cannot rule out the possibility that these adults were in the preclinical phase of, as yet, undetected neurological disorder. So as to inform healthcare intervention strategy and policy, it is clearly important that further research investigates WMH-cognition relations in midlife samples, and delineates between age and health factors in accounting for associations where they are found. Importantly, longitudinal research is required that examines midlife status on a range of health, cognitive and neuropathological (e.g., WMH) markers in relation to long-term mental health outcomes, including cognitive impairment and dementia.
There are a number of limitations to the present research that should be acknowledged. First, the study was cross-sectional, and we are therefore unable to give any indication of causality. Moreover, the use of a narrow cohort design allows individual differences in characteristics such as WMH to be investigated without the confounding of age differences. However, this means that we cannot generalise our results beyond the ages of 44 to 48 years. Second, at present we do not have any information concerning the future neurological status of participants. Planned long-term follow-ups in this group will provide valuable information on how far the present findings represent the early manifestation of eventual age-related neurological conditions. Finally, due to the young age of the sample, there were relatively few participants with significant white matter lesion load. Therefore, despite the sample being arguably the largest to investigate WMH and cognition in persons in their mid-40s, even larger samples are required to enable detailed evaluation of the small group of individuals who demonstrate significant pathology in this age group.
To conclude, the finding that cognitive deficits were associated with non-periventricular WMH in a community sample aged 44 to 48 years having taken into account a range of health variables has important implications. From a lifespan perspective, the findings suggest that cognitive deficits may have a neuropathological basis that manifests in some individuals during middle age. From a healthcare perspective this underlines the view that population-based preventative strategies should start in early adulthood and not wait until mid or later life. Not only are the costs of such initiatives likely to be offset by long-term healthcare savings, but also by associated benefits to the quality of life and extended independence of vulnerable persons living in the community.
The authors are grateful to Anthony Jorm, Bryan Rodgers, Chantal Reglade-Meslin, Patricia Jacomb, Karen Maxwell, and the PATH interviewers.
**Competing Interests:**The authors have declared that no competing interests exist.
**Funding:**David Bunce\'s collaboration in this work was supported by the Leverhulme Trust and the British Academy. The study was funded by NHMRC of Australia Unit Grant No. 973302, Program Grant No. 179805, NHMRC project grant No. 157125, grants from the Australian Rotary Health Research Fund and the Australian Brewers Foundation. Nicolas Cherbuin is funded by NHMRC Research Fellowship No. 471501. Kaarin Anstey is funded by NHMRC Research Fellowship No. 366756. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
[^1]: Conceived and designed the experiments: DB KA NC HC PSS. Analyzed the data: DB. Contributed reagents/materials/analysis tools: DB KA RB WW. Wrote the paper: DB KA NC RB HC WW PSS.
Learning objectives:
Assignment:
A professor at Hardtack University has an unusual method of grading. The students may or may not all take the same number of tests. The individual tests are weighted, and these weights are used to compute the student’s average. Important: the weights for all of the tests for a given student must add to 100. Assuming that w1 … wn are the weights, and g1 … gn are the grades, the average is computed by the formula:
((w1 * g1) + (w2 * g2) + … + (wn * gn)) / 100
The names of the students and their grades, with weights, are stored in a text file in the format firstName lastName w1 g1 w2 g2 … wn gn
Write the program weightedAverage.py. Ask the user to enter the name of a file of grades. Compute and print each student’s average, using the formula given above. Then compute and print the class average, an average of the individual averages. Format all averages to one decimal place. Note the weights for each student will always add up to 100.
For example, if the text file contains:
Billy Bother 20 89 30 94 50 82
Hermione Heffalump 40 93 60 97
Kurt Kidd 20 88 30 82 40 76 10 99
Then the output might look like this:
Billy Bother’s average: 87.0
Hermione Heffalump’s average: 95.4
Kurt Kidd’s average: 82.5
Class average: 88.3
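One possible sketch of weightedAverage.py, with the sample lines inlined rather than read from a file (the helper names are my own; prompting the user for the filename and opening it are left to the student):

```python
def weighted_average(pairs):
    """pairs: list of (weight, grade) tuples whose weights sum to 100."""
    return sum(w * g for w, g in pairs) / 100

def parse_line(line):
    """Parse 'First Last w1 g1 w2 g2 ...' into (name, [(w, g), ...])."""
    tokens = line.split()
    name = " ".join(tokens[:2])
    nums = [float(t) for t in tokens[2:]]
    return name, list(zip(nums[0::2], nums[1::2]))  # (weight, grade) pairs

# Sample data from the assignment text; a real solution would read these
# lines from the file the user names.
lines = ["Billy Bother 20 89 30 94 50 82",
         "Hermione Heffalump 40 93 60 97",
         "Kurt Kidd 20 88 30 82 40 76 10 99"]

averages = []
for line in lines:
    name, pairs = parse_line(line)
    avg = weighted_average(pairs)
    averages.append(avg)
    print(f"{name}'s average: {avg:.1f}")
print(f"Class average: {sum(averages) / len(averages):.1f}")  # 88.3
```

Run against the sample file contents, this reproduces the expected output shown above.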
Submission:
Submit weightedAverage.py to your class account.
Policies:
The policies given in Program 1 are in effect for this and all assignments. Do not forget to include your name and the Certification of Authenticity.
Humanities, Social Science, Education, Kinesiology, and Athletics (HSSEKA) Division
We offer comprehensive instructional programs, with high academic standards, providing you an opportunity to complete transfer, associate degree, and certificate programs. We encourage our students and our faculty to collaborate across disciplines fostering a robust educational experience with multiple perspectives. Throughout all of our courses, respect for cultural diversity and ethical behavior is a priority and appreciation for historical perspectives is valued.
Subjects We Offer
- Administration of Justice (A J)
- Anthropology (ANTHR)
- Athletics (ATH)
- Child Development (CDEV)
- Civilization (CIVIL)
- Economics (ECON)
- Education (EDUC)
- Foster and Kinship Care (FKC)
- Health Education (H ED)
- History (HIST)
- Humanities (HUM)
- PE Activity (PEACTIV)
- PE Theory (PETHEORY)
- Philosophy (PHILO)
- Political Science (POLSC)
- Psychology (PSYCH)
- Religion (RELGN)
- Sociology (SOCIO)
Our Programs
About the Division
Our mission is to provide all students with a strong liberal arts program, which gives students the opportunity to transform into adult citizens, both domestically and globally. The three aspects of adult citizenship that our division focuses on are:
- Knowledge: Civic education begins with fundamental knowledge of people and their cultures, the US Constitution, American history, current events, persistent public problems, and the capacity to analyze them.
- Values: Value judgments and the ability to evaluate fairness, social justice, freedom, equality, mutual rights and responsibilities in various contexts.
- Action: Using one's knowledge and values to take action in one's community, government, military, local and/or state politics.
Matt and Richard talk to Prof Goodhill from the Queensland Brain Institute about his work mapping the brains of zebrafish to learn how neural networks develop.

Professor Goodhill's lab, which Matt was lucky enough to see first hand, is looking at how brains process information, particularly during development. Part of this research is about how growing nerve fibres use molecular cues to make guidance decisions, how map-like representations of visual inputs form in the optic tectum and visual cortex, and how these maps code sensory information. This is done partly with an amazing new microscope the lab is using, which produces a high-definition, real-time look into a working brain.
For more on the professor you can find him here https://researchers.uq.edu.au/researcher/1519
For more about Prof Goodhill’s lab you can find that here https://qbi.uq.edu.au/goodhillgroup
Also, the Brain Basic Bundle we mentioned in the show is here https://www.thescienceofpsychotherapy.net/bundles/brain-basics-bundle
Thanks for listening!
Please leave an honest review on iTunes and please subscribe to our show.
The evolution of L-Bridge Capital begins with a rebrand.
L-Bridge Capital can be considered the engine of a successful family office. Our mission is simple: to grow, protect, and transfer – upholding our clients’ legacies through the test of time.
As we evolve our strengths and capabilities, we believe that this is the best time to introduce our new brand identity to take us forward on this journey.
SOPHISTICATEDLY SIMPLE
We embraced a minimalist brand identity design to embody our simple, yet important, values: Trust, Integrity, and Transparency. The minimalist design also enabled us to address some of the challenges that we faced with the previous logo.
1. The lower-case typography in bold represents humility and bravery; values that the founders embraced during L-Bridge Capital’s inception.
2. The blue square represents integrity and stability; qualities that are integral to the company's function.
3. The new logo is also designed to be practical and versatile by allowing it to function as an icon.
The Voice of SpaceMarcus Chown
The black hole merger detected by its gravitational waves on 14 September 2015 pumped out 50 times more power than all the stars in the Universe combined.
If you ask me whether there are gravitational waves or not, I must answer that I do not know. But it is a highly interesting problem. – Albert Einstein
Ladies and gentlemen, we did it. We have detected gravitational waves. – David Reitze, 11 February 2016
At Livingston in Louisiana is a 4-kilometre-long ruler made of laser light. Three thousand kilometres away in Hanford, Washington State, is an identical 4-kilometre-long ruler made of laser light. At 5.51 a.m. Eastern Daylight Time on 14 September 2015, a shudder went through the Livingston ruler. Seven milliseconds later – less than a hundredth of a second afterwards – an identical shudder went through the Hanford ruler. It was the unmistakable signature of a passing gravitational wave – a ripple in the fabric of space-time itself, predicted to exist by Einstein almost exactly 100 years ago.
The source of the gravitational waves was an extraordinary event. In a galaxy far, far away, at a time when the most complex organism on Earth was a bacterium, two monster black holes were locked in a death-spiral. They whirled about each other one last time. They kissed and coalesced. And, in that instant, three times the mass of the Sun vanished. It re-appeared a split-second later as a tsunami of tortured space-time, propagating outwards at the speed of light.
The power in these gravitational waves exceeded the power output of all the stars in the Universe put together by a factor of 50. Or, to put in another way, had the black hole merger produced visible light rather than gravitational waves, it would have shone 50 times brighter than the entire Universe. This is the single most powerful event ever witnessed by human beings.
Gravitational waves are produced whenever mass is accelerated. Wave your hand in the air. You just generated gravitational waves. They are spreading outwards like ripples on a lake. Already, they have left the Earth. In fact, they have passed the Moon and are well on their way to Mars. And, in four years’ time, they will ripple through the nearest star system to the Sun. We know that one of the three stars of the Alpha Centauri system is orbited by a planet. If that planet happens to be home to a technological civilisation that has built a gravitational wave detector, in four years’ time it will pick up the ripples in space-time that you made with your hand a moment ago.
The only problem is that they will be very weak. Imagine a drum. It is easy to vibrate it because a drum skin is flexible. But space-time is a billion billion billion times stiffer than steel. Imagine trying to vibrate a drum skin that is a billion billion billion times stiffer than steel. This is why only the most violent cosmic events such as the merger of black holes create significant vibrations of space-time.
But those vibrations, like ripples spreading on a lake, die away rapidly. When they arrived on Earth on 14 September 2015, they had been travelling for 1.3 billion years across space and were fantastically tiny. As they passed the 4-kilometre rulers at Hanford and Livingston, they alternately stretched and squeezed them – but by only a hundred-millionth of the diameter of an atom! To give you some idea of how small that is, it would take about 10 million atoms laid end to end to span the full stop at the end of this sentence. The fact that the twin rulers of the “Laser Interferometer Gravitational-Wave Observatory”, or LIGO, could detect such a small effect is extraordinary. LIGO is a technological tour de force. At each site, there are actually two tubes 1.2 metres in diameter, which form an L-shape down which a megawatt of laser light travels in a vacuum better than interplanetary space. At each end, the light bounces off 42-kilogram mirrors, suspended by glass fibres just twice the thickness of a human hair and so perfectly smooth that they reflect 99.999 per cent of all incident light. It is the microscopic movement of these suspended mirrors that signals a passing gravitational wave. So sensitive is the machine that it was knocked off kilter by an earthquake in China.
To detect gravitational waves, the LIGO physicists had to do something extraordinary: spot a change in length of their 4-kilometre ruler by just 1 part in 1,000,000,000,000,000,000,000. No wonder the 2017 Nobel Prize was awarded to three of the physicists who had pioneered the experiment: Rainer “Rai” Weiss, Kip Thorne and Barry Barish.
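To make the scale of that measurement concrete, here is a rough back-of-the-envelope sketch. The strain of 1 part in 10^21 and the 4-kilometre arm length come from the text; the atom diameter of 10^-10 metres is an assumed typical value, so the result is order-of-magnitude only, consistent with the "one-hundred millionth of an atom" figure quoted above.

```python
# Rough order-of-magnitude sketch of the LIGO measurement challenge.
arm_length_m = 4_000   # each LIGO "ruler" (arm) is 4 km long
strain = 1e-21         # fractional stretch/squeeze of space-time quoted above

# A strain is a fractional change in length, so the absolute change is:
displacement_m = strain * arm_length_m
print(f"Change in arm length: {displacement_m:.1e} m")

# Compare with the size of an atom (~0.1 nanometres, an assumed typical value).
atom_diameter_m = 1e-10
print(f"As a fraction of an atom's diameter: {displacement_m / atom_diameter_m:.1e}")
```

The arms move by a few billionths of a billionth of a metre, a few hundred-millionths of an atom's width.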
Since that first detection, gravitational waves have been picked up from a total of five events. Four were from the merger of pairs of black holes and one from the merger of super-compact neutron stars. The latter event, observed on 17 August 2017, is the most significant because, as well as gravitational waves, it created light, which was picked up by telescopes all over the world. Analysis of the light revealed that the fireball forged at least ten times the mass of the Earth in pure gold.
It has long been known that we were made in Heaven – that we are stardust made flesh. The iron in your blood, the calcium in your bones, the oxygen that fills your lungs each time you take a breath – all were forged inside stars which lived and died before the Earth was born. But scientists have long wondered where gold came from. Now, at last, they know. If you have a gold bracelet or watch, its material was forged in the cataclysmic collision of neutron stars billions of years ago. Can there be a more striking example of the intimate connection between the mundane and close-to-home and the cosmic and far away?
The significance of directly detecting gravitational waves cannot be over-stated. Imagine you have been deaf since birth, then, suddenly, overnight, you are able to hear. This is how it is for physicists and astronomers. For all of history they have been able to “see” the Universe. Now, at last, they can “hear” it. Gravitational waves are the voice of space. It is not too much of an exaggeration to say that their detection is the most important development in astronomy since the invention of the astronomical telescope by Galileo in 1609.
On 14 September 2015, at the very edge of audibility, we heard a faint sound like the rumble of distant thunder. But we have yet to hear the gravitational wave equivalent of a baby crying or music playing or a bird singing. Over the next few years, as LIGO increases its sensitivity and other detectors come online in Europe, Japan and eventually India, our ability to detect gravitational waves will get better. Who knows what we will hear as we tune into the cosmic symphony?
“The Voice of Space” is just one of the topics included in Marcus Chown’s new book, Infinity in the Palm of Your Hand: 50 Wonders that Reveal an Extraordinary Universe, which is out now.
Marcus Chown is an award-winning writer and broadcaster. Marcus was formerly a radio astronomer at the California Institute of Technology in Pasadena, where he studied under Richard Feynman and gained a Master of Science in Astrophysics. He is cosmology consultant of New Scientist and has written a number of best-selling popular science books including We Need to Talk About Kelvin and Afterglow of Creation. He is @marcuschown on Twitter.
Introduction: The term “digital healthcare professional” refers to a health professional with additional digital capabilities in information and technology. The assumption that attaining technical knowledge and skills to meet the available professional standards in digital healthcare will engage and empower healthcare users, and thus deliver person-centered digital healthcare (PCDHc), is flawed. Identifying where digital healthcare and technologies can genuinely support person-centered care may lead to future discourse and practical suggestions to build person-centered integrated digital healthcare environments. This review examines current digital health and informatics capability frameworks and identifies the opportunity to include additional or alternative principles.
Methods: A scoping review was conducted. Literature addressing person-centered digital healthcare requirements, digital health capabilities, and competencies published between 2000 and 2019 inclusive was identified, collated, and considered. Using a PRISMA approach for eligibility screening, thirteen articles met the study inclusion criteria. Analysis used a thematic framework approach, which assisted in data management, abstraction and description, and finally explanation.
Results: Analysis indexed fifty-nine (n=59) capabilities, charted thirteen (n=13) categories, and mapped four (n=4) themes, which were then interpreted as findings.
Findings: The four themes identified were Change Management; User Application; Data, Information, and Knowledge; and Innovation. The themes recognize the opportunity to align the application of technical skills towards the capabilities required to deliver authentic PCDHc.
Discussion: Holistic mindsets are imperative in maintaining the objective of PCDHc. The authors propose that debates regarding professional digital capability persist in being “siloed” and “paternalistic” in nature. They also recommend that the transition to authentic PCDHc requires refocusing (rather than rewriting) current capabilities. The realignment of capabilities towards individual healthcare outcomes, rather than professional obligation, can steer the perspective towards a genuine PCDHc system.
Conclusion: This scoping review confirms that the assumption that digital skills will empower all healthcare stakeholders is incorrect. This review also draws attention to the need for more research to enable digital healthcare systems and services to be designed to accommodate complex human behaviors and multiple person-centered care requirements. Now more than ever, it is imperative to align healthcare capabilities with technologies to ensure that the practice of PCDHc is the empowering journey for the healthcare user that theory implies.
Keywords: person-centered, digital healthcare, capability, professional practice
Introduction
The term “digital healthcare professional” refers to a health professional with additional digital capabilities in information and technology. The digital healthcare professional in this instance satisfies the role of a healthcare professional currently required to use informatics as part of their daily routine. There is growing consensus, however, that the expansion of healthcare into digital is humanistic and aligned with person-centered care, rather than merely a growth of technological capability.1,2 The assumption that attaining technical knowledge and skills to meet the available professional standards in digital healthcare will engage and empower healthcare users (hereafter referred to as individuals), and thus deliver person-centered digital healthcare (PCDHc), is flawed. This assumption exposes an oversight of the complexities of healthcare delivery, empowerment, engagement, and ultimately self-efficacy of the individual.3–5
Internationally, the focus is on the need for contemporary healthcare research to revisit and embrace methods that facilitate genuine participation of all healthcare stakeholders.1,2 Taking a holistic approach, which regards empowerment, engagement, and self-efficacy as fundamental to healthcare provision, aims to make use of contextual evidence, including the social determinants of health, which support individuals within their broader communities.6
Three decades ago, the underlying ideas behind person-centered care were born of Bandura’s seminal work on Self-Efficacy4 and Wagner’s attempts to shift away from paternalistic care by introducing the Chronic Care Model (CCM).5 In both bodies of work, improvement in healthcare outcomes for the individual was identified as a fundamental benefit that should motivate adoption in healthcare practice. The aspirations of person-centered care continue today in contemporary healthcare with models in which the individual ought to be considered the focal point of care: engaged, enabled, and empowered, actively involved in the decision-making of their healthcare journey.7 This is in contrast to the individual merely being invited to participate or considered equal to the healthcare professional in the management of their healthcare journey.8 However, person-centered care continues to be interpreted to “fit” the needs of the healthcare professional without consideration of these models.9 The continuing discourse regarding person-centered care delivery appears idealistic rather than established practice. This poses a question: has contemporary healthcare moved from traditional professional-centric healthcare towards delivering person-centered practices?
The umbrella term “digital health” incorporates, but is not limited to, technological areas such as eHealth, mHealth, telehealth, wearable devices, and personalized medical devices.10 This emphasis on technology suggests a priority in information technology knowledge and skills rather than personal digital health or care capability.11 Digital health research regarding the development and use of shared decision-making tools, for example, the electronic health record (EHR) or technologies for in-the-home interventions, continues to define the end-user of such systems as the healthcare professional, rather than embracing the role of the individual.12 In digital healthcare, there should be equal opportunities for all participants, in this case the healthcare professional and the individual receiving care, to be engaged, enabled, and empowered. To achieve this, healthcare professionals need to shift their healthcare delivery from traditional and paternalistic to that of an enabler and collaborator for the individual on their health journey.8,13 A PCDHc framework can provide a process that increases self-efficacy and improves health outcomes.4,5
Technology is driving a shift toward empowering the individuals of healthcare services.1,2,14 Development and application of these services is leading to a demand and growth in research regarding professional competence and capability frameworks. In 2018, Brunner et al11 recognized the need to bridge existing and emerging digital health capabilities for health professionals by offering a framework which aimed to better prepare entrants to this evolving workforce. Likewise, other frameworks have suggested supporting skilled healthcare professionals by increasing their capability and practice in the “digital skills” required for contemporary delivery.11,15 However, upon closer reading, there is a large variance in the rationale for developing these frameworks. The “why, how, and what” that authors choose to frame as broad concepts can obscure the concept of delivering digital healthcare in a complex contemporary environment that is no longer able to exclude the role or input of the individual.1,16
New technologies in healthcare delivery have been credited with engaging, enabling, and empowering the individual by welcoming their involvement in their healthcare.14 For example, promoting a shared digital health record and shared decision-making between healthcare professionals and individuals assumes improved access, and thus empowerment, for the healthcare individual. This is an assumption of empowerment by association with involvement. Healthcare demands on professionals are complex, inconsistent, and context-dependent.17 In the delivery of digital healthcare, generalizing principles for guiding communication behaviors has limitations.17 Evidence shows that, when technology is used in the delivery of healthcare services, the satisfaction of the therapeutic relationship between the healthcare professional and the individual may decrease.18 Without professional capability in digital approachability, technology has the potential to negatively affect the individual, lessening social interaction and increasing feelings of anxiety, loneliness, and disconnection.18 These negative impacts contradict the impression that any type of improved involvement with the individual automatically results in empowerment.
The authors suggest digital health and technology capabilities have been established with a continued focus on siloed traditional methods of healthcare delivery. Further, digital capability frameworks focus on the delivery of the digital healthcare technologies. These capabilities overlook the behavioral changes, goals, and outcome-focused skills required of professionals working in a digital healthcare environment.19 For example, the descriptions of the Healthcare Digital Capabilities Framework of the United Kingdom’s National Health Service address confidence and competence in using digital technologies independently of the purpose of the task at hand:15
I actively lead on and champion equitable access for all to digital teaching, learning and self-development, and I can create solutions to solve complex problems relating to individual and collaborative teaching and learning across a wide range of digital devices, tools, technologies, systems and learning environments15
The focus is the professional’s decisions and capabilities in using technologies, rather than considering the intended health outcome of having such technologies integrated into health or care delivery.11,15,20 The evidence fails to represent the shift in communication behaviors and responsibilities required of PCDHc delivery. There is a continuing discourse around professional frameworks and capabilities for effective use of digital health,11,15 yet as recently as 2015, Gammon et al recognized that the advancements in technology applied in health are still not being effectively bridged with effective development of models of care.20
Scoping the literature and mapping the capabilities currently identified for contemporary digital healthcare professionals delivering authentic PCDHc may offer insight into how the gap between theory and practice could be addressed. Identifying where digital healthcare and technologies can support person-centered care in a contemporary complex healthcare environment may lead to future discourse and practical suggestions to build person-centered integrated digital healthcare environments. This review examines current digital health and informatics capability frameworks and identifies the opportunity to include additional or alternative principles that can underpin future development in this field of research and healthcare delivery.
This scoping review was conducted by the two authors over six months, from September 2019 to February 2020. The objective was to scope and map available evidence of capabilities relevant and fundamental to the delivery of PCDHc, followed by identification of themes.
Method
After scoping and identifying relevant literature, a PRISMA approach21 was used to screen for eligibility. Finally, a thematic framework analysis approach22 assisted in a rigorous, iterative process required for the identification of PCDHc capabilities.
Scoping the Literature
This scoping review recognizes the “source” of information as any existing literature. For example, primary research studies, systematic reviews, meta-analyses, letters, guidelines, and websites (hereafter collectively referred to as articles). Unlike other reviews, scoping facilitated mapping of key concepts underpinning a research area, assisted in clarifying working definitions, and provided conceptual boundaries for a subject matter.23 Leaving the “source” of information open allowed the authors the inclusion of a diverse range of articles.24
The search included articles composed in the English language, published between 2000 and 2019 (inclusive). Databases and search engines included PubMed, Web of Science, and Google Scholar. The key search terms digital health, health professional, allied health, digital, workforce, capability, competency, standards, and practice guidelines were used. The decision to scope the literature in this manner was made because digital healthcare professional capability frameworks are a relatively new concept, and there is great variation in nomenclature and descriptions by authors, which made it difficult to identify the content being sought.11,25,26
This scoping review offers a preliminary assessment of available literature,27 to identify whether further attention is warranted.23 To fulfil this objective, iterative combinations of search terms were applied. Screening was limited to the first hits per search, in the interest of available screening resources and article relevance, and continued in this manner.28 Where there was prior knowledge of articles, these were also included for screening. An example search consisting of the terms “Digital health AND professional AND capability” applied in Google Scholar identified three articles of potential relevance according to their keywords, title, and abstract, of which two succeeded the full screening process11,29 (Figure 1).
Figure 1 Example of combination of search terms used in scoping literature to identify articles.
Eligibility Screening
Following PRISMA21 as a guide, the authors screened all identified articles for duplications, content eligibility, relevance, and finally full-text screening. Duplications were identified and removed. The remaining article titles, keywords, and abstracts were screened for content eligibility. Articles deemed irrelevant to digital healthcare practice, competencies, and capabilities were excluded. Full text of the remaining articles was screened. Those which met the inclusion criteria were agreed (Table 1).
Table 1 Inclusion and Exclusion Criteria for Screening
Thematic Framework Analysis Approach
Data analysis was structured using a thematic framework analysis approach.22 Analysis was achieved in stages. Data management: 1) thorough immersion in the identified literature, constructing a framework by identifying descriptive characteristics, then 2) indexing capabilities. Abstraction and description: 3) charting the capabilities to categories and 4) mapping the categories to themes. Explanations: 5) interpreting the findings. In summary, the approach aimed to clearly map themes, identify findings, and form a discussion. It should be noted that, although the stages of analysis are depicted as linear, in practice data management, abstraction and description, and explanations were iterative.
Results
A PRISMA approach was used21 (Figure 2). After identifying relevant articles from database searches (n=22), the duplicates were removed (n=1). The authors then screened titles, abstracts, and keywords for eligibility by applying the inclusion/exclusion criteria (Table 1). Eight further articles were excluded; they lacked capabilities for delivering digital healthcare and were deemed not to represent appropriate capabilities or a professional competency framework. For example, there was a lack of direct focus on capabilities,7,30 description of educational elements or pedagogical approaches,31 recommendations,32–34 or discussion of intervention categories29 despite keywords and titles that implied relevance. The full-text articles of the remaining articles (n=13) were retrieved. The authors independently reviewed these. Any differences in selection during screening and review were resolved by consensus.
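The screening flow described above reduces to simple arithmetic. The counts come from the text; the code itself is purely illustrative, not part of the study's method:

```python
# Tally of the PRISMA-style screening described above.
identified = 22            # records identified through database searches
duplicates = 1             # duplicate records removed
screened = identified - duplicates

excluded_on_abstract = 8   # excluded at title/abstract/keyword stage
full_text_reviewed = screened - excluded_on_abstract
included = full_text_reviewed  # all retrieved full-text articles met the criteria

print(f"Screened: {screened}, full-text retrieved: {full_text_reviewed}, included: {included}")
```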
Figure 2 PRISMA approach flow diagram illustrating the scoping and screening process. Notes: Adapted with permission from Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339:b2700.21
Thirteen articles were included in the final selection. These were considered to contain relevant, appropriate, and diverse contexts regarding digital healthcare capabilities required of a healthcare service or professional (Table 2).
Table 2 Articles Eligible for Inclusion in the Review (N=13)
Scoping the literature found the terms capability, skill, characteristic, and competence were used interchangeably. Henceforth, the term capability is used to represent all of these terms; it is considered to encompass competency, extending beyond implied skills by emphasizing adaptability, continued learning, and self-efficacy, and addressing a wider view of professionalism.11,26,35 Capability is described as the aptitude to fulfill a task or type of work, rather than focusing on the limitations applicable to specific roles.25 The authors agree with this point of view and have adopted it. The complex, changing, and adaptable nature of healthcare delivery is a reason to keep an open mind when considering the role of PCDHc capabilities.11,25,26
Analysis: Identification of Capability, Categories, and Themes
Following thorough immersion in the identified articles and construction of a framework by identifying descriptive characteristics, the authors proceeded with their analysis.
Indexing Capability
The initial index contained 111 capabilities (both authors’ capability lists merged). After repetitions were removed, this reduced to 59 capabilities.11,36–46 Indexing the capabilities in this way made it possible to re-group the capabilities according to category if required.
Where multiple capabilities were identified as a group, these were individually labeled to reflect them as singular parts. For example, if “communication, collaboration, and participation” and “information, data, and media literacies” were cited as single capabilities in the source articles the authors separated them into individual capabilities of “Communication”, “Collaboration”, “Participation”, “Information literacy”, “Data literacy”, and “Media literacy”. Each capability was cross-checked and agreed between the authors.
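The splitting and de-duplication steps described above can be sketched as follows. The helper function and its splitting rule are hypothetical illustrations of the general procedure, not the authors' actual method, and the capability strings are the examples given in the text:

```python
import re

def split_capabilities(raw_items):
    """Split compound capability strings into individual, de-duplicated capabilities."""
    seen, result = set(), []
    for item in raw_items:
        # Split compound entries on commas and the word "and",
        # as in the "communication, collaboration, and participation" example above.
        for part in re.split(r",|\band\b", item):
            capability = part.strip().capitalize()
            if capability and capability.lower() not in seen:
                seen.add(capability.lower())
                result.append(capability)
    return result

raw = [
    "communication, collaboration, and participation",
    "communication",  # a repetition, removed during indexing
]
print(split_capabilities(raw))
```

Running the sketch on the two example entries yields the three singular capabilities with the repetition removed.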
Charting Categories
Charting the capabilities identified 13 categories. To assist in charting to categories, each category required a research-specific description. Category descriptions assisted in demonstrating a research-focused understanding of why each capability had been charted to a particular category, and later why the category was mapped to a theme.47
Mapping Themes
Once all the data had been labeled and categorized, further refinement was required. The data in each category were reviewed together with the research objective (to scope and map available evidence of capabilities relevant and fundamental to the delivery of PCDHc). From these, connections were made which created the themes. At this point, the authors looked beyond the article context and toward the category descriptions and the developing themes.47 Finally, 59 capabilities were charted to 13 categories, which were mapped to four themes: Change Management, User Application, Data Information & Knowledge, and Innovation (Figure 3). Validating the mapping towards generating the themes involved the authors’ analysis of the key characteristics as laid out in the capabilities and categories (Table 3).
Table 3 Capabilities Charted into Categories
Figure 3 Data synthesis figure of thematic analysis from labels to categories to themes.
Findings
This section provides a description of the four themes: Change Management, User Application, Data, Information & Knowledge (DIK), and Innovation. They have been identified and validated through an iterative process of data analysis and consideration of the research objective. The four themes recognize the opportunity to align the application of technical skills towards healthcare professional capabilities required to deliver PCDHc.
Change Management
Change Management is described as the process, tools, and techniques used to manage the individual’s role in the change required to achieve an outcome.48 The theme Change Management is mapped from: professionalism, which recognizes the conduct, behavior, and attitude of an individual within a healthcare environment; education, which facilitates learning, the acquisition of knowledge, skills, values, beliefs, and habits; professional standards, which require ethical behaviors that the healthcare professional must adhere to; and the use of non-technology skills, which are relevant to the role, task, or responsibilities of the individual that are not defined by technology.
User Application
The theme, User Application, relates to tasks, processes, responsibilities, and objectives that are designed or designated for operation by the healthcare individual (specifically for the healthcare individual to apply). Despite a long-standing focus on the individual in person-centered healthcare, at times referred to as the “user”, the authors found no consistent language or terminology to reflect the focus on the supporting elements for the individual to be able to apply themselves or the technologies of person-centered healthcare. Therefore, the term “User Application” is considered a novel term for the purpose of this article. User Application differs from the theme Change Management in the person-centered focus of the healthcare individual as opposed to the focus on change itself. The theme is mapped from: User-development, suggesting the preparation of inexperienced individuals, those who require pre-requisite skills, methods, and tools to allow them to create, manage or use a given task or tool; and holistic care, described as greater than the sum of its parts. This theme benefits from consideration or inclusion of diverse elements; the third category in this theme is partnership, collaboration to advance mutual interests of more than one individual/profession.
Data, Information, Knowledge
The term Data, Information, Knowledge (DIK)49 represents the ability to identify data, interpret information, and create knowledge. The theme emphasizes insight, technical ability, knowledge, and understanding rather than “hands-on” healthcare application. DIK is mapped from: technology skills, which require the acquisition of ability and knowledge needed to perform specific mechanical, mathematical, or scientific tasks; technology literacy, which requires the ability to use technology appropriately and effectively to access, manage, integrate, evaluate, create, and communicate information; and managing technology, which requires the ability to integrate planning, design, optimization, operation, and control of technological products, processes, and services.
Innovation
Innovation is defined by the authors as fundamental to the creation, development, and adoption of contemporary models of care. This theme is mapped from: innovative practice, which requires valuing the opportunity of applying a new mindset; innovation behavior, requiring the ability to conceptualize, consider, attempt or apply new ideas, processes, and procedures; and applied innovation, which aims to close the gap between the theory and the practice. Technology is rapidly evolving. The development of new tools, applications, and opportunities provided by technology continually challenges current processes and technical proficiencies which in turn encourages innovation behavior.
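The theme-to-category mapping described across the four findings subsections above can be summarized as a simple data structure. The theme and category names are taken from the text; the structure itself is an illustrative sketch, not the authors' Table 3:

```python
# Illustrative summary: 13 categories mapped to the four themes,
# using the category names given in the Findings section.
themes = {
    "Change Management": [
        "Professionalism", "Education",
        "Professional standards", "Non-technology skills",
    ],
    "User Application": [
        "User-development", "Holistic care", "Partnership",
    ],
    "Data, Information, Knowledge": [
        "Technology skills", "Technology literacy", "Managing technology",
    ],
    "Innovation": [
        "Innovative practice", "Innovation behavior", "Applied innovation",
    ],
}

category_count = sum(len(categories) for categories in themes.values())
print(f"{len(themes)} themes, {category_count} categories")  # 4 themes, 13 categories
```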
Whilst evidence-based practice is a vital tenet of maintaining professional standards for high-quality healthcare, and is represented in the capabilities of Change Management, these are directly impacted by the ability to conceptualize new forms of evidence, new mindsets for which evidence may be needed, and practices that may need review in light of new evidence and technology. Innovation practice is therefore a key factor in allowing the important function of continuing innovation to occur and in turn supporting and complementing the other themes identified.
Summary of Themes
The capabilities that mapped together to form each of the four themes shaped the priorities of each theme, reflected in their definitions. The authors then considered whether the capability themes identified, collectively and individually, genuinely support PCDHc. Common focal points from each theme were found to provide complementary as well as clashing interests in how they support PCDHc. The common points were: technology, focusing on its new possibilities and required literacy; contemporary models of care, considering their objectives as well as the goal to develop and adopt such models; and finally the role of the individual in and for the given capability. The role of the individual is complemented between the themes of Change Management and User Application to achieve an outcome, whereby Change Management addresses the healthcare professional and User Application addresses the individual. These two themes equally stand apart from each other in that Change Management prioritizes the change itself that needs managing, whereas User Application prioritizes the person at the center above all else.
The objectives of contemporary models of care are reflected as priorities in the themes of Innovation and User Application, focusing respectively on how these objectives are applied and on supporting the individual in achieving their appropriate outcomes of care. The theme of Innovation fundamentally focuses on the innovations required to do so, whereas User Application places the person as the priority instead. Innovation and Change Management converge regarding the development and adoption of contemporary models of care: Innovation prioritizes the earlier stages of development, whereas Change Management addresses the elements needed to see a successful adoption of such models of care, and so they represent related yet distinct parts of the same goal.
Technology brings together DIK, Innovation, and User Application in using technology literacy to allow an individual to achieve an outcome that may be defined or created by innovations in technology and/or the model of care. These themes stand apart from each other in prioritizing, respectively, the technology itself over the purpose of its use; the capacity to create new methods and outcomes; or the support needed for the individual to achieve an outcome.
Discussion
Do the Current Capabilities Support PCDHc?
Four themes were identified from mapping the capabilities, with common focal points and points of difference among the scope and definitions of each theme. A clear understanding of the capabilities required to genuinely support PCDHc is dependent on a clear understanding of how to define PCDHc. The term PCDHc is comprised of: person-centered, which can be considered as the individual with a role in the healthcare journey; digital, which can be represented as the technology that is shaping contemporary healthcare; and lastly healthcare itself, which can be thought to represent the contemporary model of care being used. The common points of interest across the four themes of capabilities thus support that the capabilities found in current literature do collectively support PCDHc. The NHS capability framework listed capabilities that mapped across all four themes; however, there was at least one category within each theme that did not contain any capabilities from this framework.15 The digital capability framework for health professionals published by Brunner et al in 2018 did not provide any capabilities that mapped to the theme of Innovation.11 The available discourse around capabilities for PCDHc shows great variation in the recognition of required capabilities from across the four themes identified in this article. Siloed discussion of the capabilities overlooks opportunities for alignment that could lead to more productive conversations and a better understanding of healthcare outcomes for contemporary models of care.
Realizing needed change in the adoption of PCDHc, for the health professional and individual alike, can be supported by the categories that were mapped to Change Management. The theme of User Application recognizes that an individual’s healthcare journey should address how healthcare technology and services are applied to deliver benefit in health outcomes. The delivery of healthcare, especially with the use of technology, should maintain a focus on the delivery of care (encouragement, collaboration, and attitude) along with individual digital user outcomes (digital wellbeing and My Health Record). The theme recognizes the healthcare individual, together with the application of professional skills, as necessary components of successful PCDHc.
While the focus of DIK is the technical proficiency required in its application, its value is fundamental to enabling successful Change Management and User Application. Diagnosis and delivery are essential components of any health process; DIK is an essential partner to all themes, allowing PCDHc to be holistic in design and delivery. The basic concept of informed decisions requires the ability to understand the information needed to make a decision.
Effective change of tools, behaviors, and potential outcomes benefits from a thorough understanding of how any change impacts any part of the healthcare journey. Holistic mindsets are imperative in maintaining healthcare practice and delivery and must align with the objective of PCDHc. Applied innovation thus supports the objective of achieving PCDHc by bringing theory and practice closer together. The four themes, together with the research aims and objectives, form the scaffold for the following discussion of PCDHc.
Future Directions and Considerations
The authors suggest that, despite the urgent need for transitional models of healthcare, debates regarding professional digital capability persist in being “siloed” and “paternalistic”.13,20,47 The authors propose that transitioning to PCDHc requires refocusing (rather than rewriting) current capabilities on Change Management; User Application; Data, Information, and Knowledge; and Innovation.
Personal Cost in Change Management
Emphasis on Change Management11,37–46 implies high regard for capabilities that leverage technical skills and knowledge as markers of effective performance in a professional role. A common occurrence in the workplace is the expectation that digital literacy and capability skills will be developed without appropriate investment of either time or support.50,51 There is a recognized expectation that healthcare professionals entering the workforce will be future-ready and proficient for PCDHc.11,36
The investment in time and effort needed to become suitably proficient in new and evolving technical and procedural skills results in hugely variable cognitive and professional load. This impact is represented in the literature as a professional burden.36,41,51 The burden affects an individual’s capacity to develop capability, creating a simultaneous cause and effect of being overwhelmed and of compromised performance.51 Insufficient or ineffective change management is thus an important concern for individuals, healthcare professionals, and organizations. For all individuals, the potential consequence of digital illiteracy is disengagement.52 This multifaceted effect alienates individuals rather than valuing their interaction.18
Capability frameworks place a priority on achieving new and evolving technical proficiencies; for example, “digital learning and development”.36,41 Professional standards for accreditation, registration, and other professional obligations dictate the change management needs and investments of the healthcare professional. If these standards are perceived as the motivation for a professional who invests time and effort into gaining new capabilities, a reactive rather than proactive culture toward change management can be created. The nuances of change management alone cannot truly reflect the intentions of PCDHc, which relies on a proactive, innovative mindset.
Perspective and Priorities
The application of Bandura’s theory of self-efficacy in the context of health behavior remains highly regarded for improving health outcomes.4 When considering User Application and development, the practice of self-efficacy should be prominent in creating and supporting elements of self-development and empowerment. However, the capabilities identified in this review aligned with the healthcare professional’s perspective of skills and objective goals, rather than with the individual’s health outcome. This subtle but important difference in underlying perspective implies that the practice of self-efficacy in healthcare, which should be the foundation of person-centered care, does not reflect the theory and thus does not promote a truly PCDHc. The authors suggest that this lack of alignment between the theory and practice of person-centered care exemplifies the disconnect surrounding digital capabilities in healthcare.
DIK is technical in its focus on observation, description, and instruction.49 This focus misses the point of how to influence effective performance. The review identifies a strong consensus on the value of technology, rather than on the capability of using technology to improve health outcomes. Further, the repeated emphasis on information management,11,15,40,42,43 rather than on understanding the management of data,15 knowledge,42 and interoperability,45 suggests that information is deemed more valuable than knowing what to do with it. This restricted perspective of healthcare limits the ability to transform toward PCDHc and to envision a holistic view of the contemporary landscape. The inevitable growth of new technology-related knowledge constantly requires resources, such as time and professional development support, to be allocated accordingly. This iterative demand impacts the professional’s ability to remain open to the broader reaches of healthcare knowledge and needs.13 Specialist wisdom adds value to knowledge, and this requires judgment that is unique and personal to the individual.49 The appropriate application of wisdom should be regarded as innovation that bridges knowledge and capabilities, and it has the potential to benefit the holistic perspective that PCDHc seeks to attain.
The National Health Service, Health and Care Digital Capability Framework15 describes digital safety and security as the responsibility of championing safe and secure digital creation. The framework requires the professional to reflect on and evaluate the unforeseen or unintended consequences their “digital methods and use of technology” may have on safety and security, rather than vice versa.15 The fundamental priority of exploring undefined knowledge, rather than conforming to current knowledge, is an innovative value that achieves more than technical proficiency or professional merit alone.
In 2015, Gammon et al identified a fundamental disconnect between technology capabilities and healthcare service development and delivery.20 This review reawakens their concerns, which remain present and need greater attention in discourse regarding PCDHc capabilities and professional practice. Treating digital capabilities as merely the ability to use technology, rather than as integral to care, overlooks the fact that such capabilities impact the professional functions required of healthcare delivery. These include clinical and ethical decision-making, empowerment, suitability for services, promotion of health and wellness, and new models of care.11,53
Chronic disease management is a growing sector of healthcare,54 involving greater input from the healthcare individual over time than any other sector. Complex chronic care therefore represents a good fit for discussing PCDHc as the future model of care. The authors refer to Grover and Joshi’s55 review of Chronic Disease Models (CDMs), including the Chronic Care Model (CCM).5 The CCM has proven a robust model for supporting and improving chronic care. The integration of information systems is deemed the weakest attribute of the CCM;5 however, developments in technical capability and the related literature compensate for this. The CCM defines the outcomes of the healthcare journey rather than defining the role of a healthcare professional within that journey. This contrasts with the digital capabilities articles identified in this scoping review, which predominantly positioned the role of the healthcare professional within a paternalistic model of care, one which oversees the health outcome. This is a subtle distinction, but the authors consider it a fundamental nuance in understanding the role of capabilities in PCDHc.
Digital tools and technologies can assist and support the delivery of person-centered healthcare. Understanding the purpose of using digital tools and technologies can support more productive conversations on how to define professional capabilities in PCDHc. This review affirms a previously held conviction that discourse on healthcare practices and professional capabilities does not reflect truly person-centered healthcare.13,20 Realigning capabilities toward the person’s healthcare outcomes, rather than the professional’s immediate obligations, can steer the perspective toward a real PCDHc system.
The consistent reference to data, information, and knowledge, with an absence of wisdom, reaffirms the gap between the appreciation of digital capabilities and their place in PCDHc. Valuing holistic wisdom across skills, rather than skills in isolation, may be one way to address the technical focus that displaces sought-after person-centered health outcomes. In PCDHc delivery, the unpredictable nature of the healthcare individual and professional will always impact the use and application of any task, tool, or technology. While healthcare tools and technology continue to evolve, professional capabilities must also evolve to address the complex behavior of the person at the center of their healthcare journey.
Limitations
Our study has several limitations. Firstly, the literature search was not systematic, so appropriate articles may have been missed in the scoping exercise; any continuation of this project would be advised to conduct a systematic review as the primary step. The project was also completed over a relatively short six-month period, within the parameters of routine work and without any funding or additional time sought or received.
Given the relative newness of the field being investigated, and the correspondingly limited resources available, there were limited opportunities to develop frameworks or guidelines. This is, in turn, why a smaller project was conducted to gauge the feasibility and scope of further potential investigation or research. Only two researchers conducted this project; however, it should be noted that they represent very different health disciplines, which itself is a benefit, supporting a cross-discipline perspective of the subject matter. All data and findings were primarily validated by each author moderating the other’s work. Final validation was sought from one further expert in the field of health, care, and education.
Conclusion
This scoping review confirms that the assumption that digital skills will empower all healthcare stakeholders is incorrect. Bandura4 and Wagner5 have repeatedly discussed the need to appreciate complex human behavior change. The need to enable and empower the healthcare individual in their digital healthcare journey is equally part of this conversation. Achieving these goals for the individual requires supporting self-efficacy4 and delivering PCDHc for safe, quality health outcomes for both the healthcare individual and the health service.
As recently as 2015, the continued disconnect between (or siloed delivery of) health, care, and the accepted innovations of available technology was identified.20 However, articles continue to fail to acknowledge the impact of behavior4,11 on the delivery of PCDHc.11,20 The appropriate application of reflective practice, PCDHc delivery, and use of the capabilities discussed in Brunner et al and other articles40–44 remains open and important to achieving a truly PCDHc. This scoping review suggests healthcare models continue to be interpreted to fit the needs of the healthcare professional without consideration of person-centered care. Healthcare professionals and researchers need to work together to address the intrinsic behaviors that could potentially allow for effective change in healthcare practice and health outcomes.
The root of this issue may be the gap that Gammon et al20 identified between models of care and evolutions in digital technology. The authors propose it is time to stop generating ubiquitous silos and start bridging the gap between technology and the practice of healthcare. This review also draws attention to the need for more research to enable digital healthcare systems and services to be designed around complex human behaviors and multiple person-centered care requirements. Clarity and consistency regarding the objectives of PCDHc, and the appropriate mindset for truly encompassing them, may be beneficial in revising and/or further exploring the effective capabilities for PCDHc.
The authors acknowledge that people are complex, technology is constantly adapting, and care will always need to evolve to meet chronicity and changing behaviors. Any investment in resources such as time and effort into gaining new digital capabilities and professional development needs to be allocated appropriately. Now more than ever, it is imperative to align healthcare capabilities with technologies to ensure that the practice of PCDHc is the empowering journey for the healthcare user that theory implies.
Abbreviations
EHR, electronic health record; PCDHc, person-centered digital healthcare; DIK, data, information and knowledge; DIKW, data, information, knowledge, and wisdom; CCM, chronic care model; CDM, chronic disease model.
Acknowledgments
The authors would like to acknowledge Dr Ian Almond, who contributed his expertise and time to reviewing and editing the paper.
Disclosure
The authors report no conflicts of interest in this work. No funding or resources were sought or used in the preparation of this work.
References
1. Huckvale K, Wang CJ, Majeed A, Car J. Digital health at fifteen: more human (more needed). BMC Med. 2019;17(1):62. doi:10.1186/s12916-019-1302-0
2. Ovretveit J. Digital technologies supporting person-centered integrated care - a perspective. Int J Integr Care. 2017;17(4):6. doi:10.5334/ijic.3051
3. Sturmberg J. Person-centered medicine from a complex adaptive systems perspective. Eur J Person Centered Healthcare. 2014;2(1):85–97. doi:10.5750/ejpch.v2i1.711
4. Bandura A. Self-efficacy and human functioning. In: Schwarzer R, editor. Self-Efficacy: Thought Control of Action. Abingdon, Oxon: Routledge; 1992:3–38.
5. Wagner E, Austin B, Von Korff M. Improving outcomes in chronic illness. Managed Care Q. 1996;4(2):12–25.
6. Greenhalgh T, Snow R, Ryan S, Rees S, Salisbury H. Six ‘biases’ against patients and carers in evidence-based medicine. BMC Med. 2015;13:200. doi:10.1186/s12916-015-043
7. Chute C, French T. Introducing care 4.0: an integrated care paradigm built on industry 4.0 capabilities. Int J Environ Res Public Health. 2019;16(12):2247. doi:10.3390/ijerph16122247
8. Mesko B, Drobni Z, Benyei E, Gergely B, Gyorffy Z. Digital health is a cultural transformation of traditional healthcare. Mhealth. 2017;3:38. doi:10.21037/mhealth.2017.08.07
9. Cronenwett L, Sherwood G, Barnsteiner J, et al. Quality and safety education for nurses. Nurs Outlook. 2007;55(3):122–131. doi:10.1016/j.outlook.2007.02.006
10. Scholz N Focus on digital health events [Blog]. European Parliament Research Service. European Parliament Research Service;2016. Available from: https://epthinktank.eu/2016/06/07/focus-on-digital-health-events/.
11. Brunner M, McGregor D, Keep M, et al. An eHealth capabilities framework for graduates and health professionals: mixed-methods study. J Med Internet Res. 2018;20(5):e10229. doi:10.2196/10229
12. Arsand E, Demiris G. User-centered methods for designing patient-centric self-help tools. Inform Health Soc Care. 2008;33(3):158–169. doi:10.1080/17538150802457562
13. Maddocks I. Silo mentality bad for our patients. Med J Austr. 2016;41.
14. Topol E. The topol review: preparing the healthcare workforce to deliver the digital future [Report]. Health Education England, NHS. 2019. Available from: https://topol.hee.nhs.uk/.
15. NHS. A health and care digital capabilities framework [Framework]. 2018. Available from: https://www.hee.nhs.uk/sites/default/files/documents/Digital%20Literacy%20Capability%20Framework%202018.pdf.
16. Koshy K, Limb C, Gundogan B, Whitehurst K, Jafree D. Reflective practice in health care and how to reflect effectively. Int J Surg-Oncol. 2017;2:6. doi:10.1097/IJ9.0000000000000020
17. Salmon P, Young B. Creativity in clinical communication: from communication skills to skilled communication. Med Educ. 2011;45(3):217–226. doi:10.1111/j.1365-2923.2010.03801.x
18. Lengacher L. Mobile technology: its effect on face-to-face communication and interpersonal interaction. URJHS. 2015;14:1.
19. Hadziomerovic A. Competencies, Capabilities and Skills. What’s the difference and how are they used?: Linkedin; October 2017. Available from: https://www.linkedin.com/pulse/competencies-capabilities-skills-whats-difference-how-aida/.
20. Gammon D, Berntsen GK, Koricho AT, Sygna K, Ruland C. The chronic care model and technological research and innovation: a scoping review at the crossroads. J Med Internet Res. 2015;17(2):e25. doi:10.2196/jmir.3547
21. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. J BMJ. 2009;339:b2700. doi:10.1136/bmj.b2700
22. Ritchie J, Lewis J, Nicholls CM, Ormston R, editors. Qualitative Research Practice: A Guide for Social Science Students and Researchers. Sage; 2013 Nov; 1.
23. Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32. doi:10.1080/1364557032000119616
24. Peters MDJ, Godfrey C, McInerney P, Munn Z, Tricco AC, Khalil H. Chapter 11: scoping Reviews (2020 version). In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis. JBI; 2020. Available from: https://wiki.jbi.global/display/MANUAL/Chapter+11%3A+Scoping+reviews.
25. Lester S. Professional standards, competence and capability. Higher Educ Skills Work-Based Learning. 2014;4(1):31–43. doi:10.1108/heswbl-04-2013-0005
26. Phelps RHS, Ellis A. Competency, capability, complexity and computers: exploring a new model for conceptualising end-user computer education. Br J Educ Technol. 2005;36(1):67–84. doi:10.1111/j.1467-8535.2005.00439.x
27. Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info Libr J. 2009;26(2):91–108. doi:10.1111/j.1471-1842.2009.00848.x
28. Pham M, Rajic A, Greig J, Sargent J, Papadopoulos A, McEwen S. A scoping review of scoping reviews: advancing the approach and enhancing consistency. Res Syn Meth. 2014;5:371–385. doi:10.1002/jrsm.1123
29. World Health Organization. Report on the public consultation to inform development of the Framework on integrated people-centred health services [Report]. World Health Organization; 2016. Available from: https://apps.who.int/iris/bitstream/handle/10665/246252/WHO-HIS-SDS-2016.ng.pdf.
30. Eysenbach G, Jadad A. Evidence-based patient choice and consumer health informatics in the Internet age. J Med Int Res. 2001;3(2):e19. doi:10.2196/jmir.3.2.e19
31. Herrington J, Oliver R. An instructional design framework for authentic learning environments. Etr&d-Educ Tech Res. 2000;48(3):23–48. doi:10.1007/Bf02319856
32. Christopherson TA, Troseth MR, Clingerman EM. Informatics-enabled interprofessional education and collaborative practice: a framework-driven approach. J Interprof Ed Prac. 2015;1(1):10–15. doi:10.1016/j.xjep.2015.03.002
33. Mantas J, Ammenwerth E, Demiris G, et al. Recommendations of the International Medical Informatics Association (IMIA) on education in biomedical and health informatics. Meth Inf Med. 2010;49(02):105–120. doi:10.3414/ME5119
34. Beetham H, McGill L, Littlejohn A, Joint Information K Systems Committees (JISC). Thriving in the 21st century: learning literacies for the digital age (LLiDA project): executive Summary, Conclusions and recommendations [Study Report]. Open Research Online. June 2009. Available from: http://oro.open.ac.uk/52237/1/llidaexecsumjune2009.pdf.
35. O’Connell JGG, Coyer F. Beyond competencies: using a capability framework in developing practice standards for advanced practice nursing. J Adv Nurs. 2014;70(12):2728–2735. doi:10.1111/jan.12475
36. Adelaide University. Developing Digitally Capable Graduates [Framework]. Digital Capabilities @ Adelaide. 2017. Available from: https://universityofadelaide.app.box.com/s/d65cpexld49xbhxp0u7lws48v0ghtxis.
37. APS Ltd. Telehealth measure to improve access to psychological services for rural and remote patients under the Better Access initiative [Considerations for Providers]. Australian Psychological Society. 2017. Available from: https://www.psychology.org.au/getmedia/4dd9dd91-1617-421b-928c-531d019f05c2/17APS-Telehealth-Web.pdf.
38. Eysenbach G. What is e-health? J Med Internet Res. 2001;3(2):E20. doi:10.2196/jmir.3.2.e20
39. van Houwelingen CT, Moerman AH, Ettema RG, Kort HS, Ten Cate O. Competencies required for nursing telehealth activities: a Delphi-study. Nurse Educ Today. 2016;39:50–62. doi:10.1016/j.nedt.2015.12.025
40. Honey M, Collins E, Britnell S. Guidelines: Informatics for Nurses Entering Practice [Guidelines]. Auckland, New Zealand: University of Auckland; 2018. Available from: https://auckland.figshare.com/articles/Guidelines_Informatics_for_nurses_entering_practice/7273037.
41. Jisc. Building digital capability: the six elements defined [Framework]. Jisc. September 2018. Available from: http://repository.jisc.ac.uk/6611/1/JFL0066F_DIGIGAP_MOD_IND_FRAME.PDF.
42. Nagle LM, Crosby K, Frisch N, et al. Developing entry-to-practice nursing informatics competencies for registered nurses. Stud Health Technol Inform. 2014;201:356–363.
43. Martin-Sanchez F, Rowlands D, Schaper L, Hansen D. The Australian Health Informatics Competencies Framework and Its Role in the Certified Health Informatician Australasia (CHIA) program. Stud Health Technol Inform. 2017;245:783–787.
44. Royal College of Nursing. Every Nurse an E-Nurse: Insights from a Consultation on the Digital Future of Nursing [Report]. London: Royal College of Nursing; 2018. Available from: https://www.rcn.org.uk/professional-development/publications/pdf-007013#detailTab.
45. Australian Digital Health Agency. Framework for action: how Australia will deliver the benefits of digitally enabled health and care [Framework]. ADHA. 2018. Available from: https://conversation.digitalhealth.gov.au/sites/default/files/framework_for_action_-_july_2018.pdf.
46. Donato D. The untapped potential in digital health. J Allied Health Professionals. 2019. Available from: https://www.hisa.org.au/wp-content/uploads/2019/08/Allied-HI-PositionStatement.pdf?x30583.
47. Almond H Exploring the experiences of and engagement with Australia’s shared digital health record by people living with complex chronic conditions in a rural community. [Thesis]. University of Tasmania; 2018.
48. Prosci. An Introduction to Change Management [Article] Thought Leadership Articles. Prosci. Available from https://www.prosci.com/resources/articles/change-management-definition#:~:text=Change%20management%20is%20the%20process,adoption%20and%20realization%20of%20change.
49. Rowley J. The wisdom hierarchy: representations of the DIKW hierarchy. J Info Com Sci. 2007;33:2. doi:10.1177/0165551506070706
50. Li S, Bamidis PD, Konstantinidis ST, Traver V, Car J, Zary N. Setting priorities for EU healthcare workforce IT skills competence improvement. Health Informatics J. 2019;25(1):174–185. doi:10.1177/1460458217704257
51. Morrison JaL P. When no one has time: measuring the impact of computerization on health care workers. Workplace Health Saf. 2008;56(9):373–378. doi:10.1177/216507990805600902
52. van der Vaart R, Drossaert CH, Taal E, Drossaers-Bakker KW, Vonkeman HE, van de Laar MA. Impact of patient-accessible electronic medical records in rheumatology: use, satisfaction and effects on empowerment among patients. BMC Musculoskelet Disord. 2014;15(1):102–110. doi:10.1186/1471-2474-15-102
53. Gray K. Public health platforms: an emerging informatics approach to health professional learning and development. J Public Health Res. 2016;5(1):665. doi:10.4081/jphr.2016.665
54. AIHW. Chronic Disease [Homepage]. AIHW, Australian Government. 2019. Available from: https://www.aihw.gov.au/reports-data/health-conditions-disability-deaths/chronic-disease/overview. | https://www.dovepress.com/health-professional-digital-capabilities-frameworks-a-scoping-review-peer-reviewed-fulltext-article-JMDH |
Yoga talks about cleanliness, but when it comes to your sweaty self after practice, the suggestion is to wait at least 30 minutes before taking a shower. This is because you actually want to let your body cool down and re-absorb some of your lost essential minerals.
How long after yoga can I shower?
Do not shower
It also drains away essential energy that was built up in your body during the yoga routine, so it is important to wait before taking a bath after a yoga session. Similarly, it is advisable not to take a bath for at least 2 hours before a yoga session.
Can you shower after yoga?
Always take a shower after yoga class, especially if you’ve just taken an extra sweaty class like Bikram or Ashtanga yoga. Your body releases toxins when you sweat, and if you don’t shower after class, those toxins will stay on and eventually be absorbed back into your skin.
Can we do yoga before bath?
One must always practice yoga early in the morning, after taking a bath and without eating anything. You can even perform yoga before bathing, but afterwards you must wait for some time before taking a bath. Keep the doors and windows open for fresh air and light while performing yoga.
Can we drink water immediately after yoga?
A bottle of water after your practice is a great way to replenish the water that your muscles have consumed or that you have sweated out during class. A glass or two right after class should be enough to help you recover and keep your muscles from tightening or cramping.
When should you not do yoga?
- Yoga should not be performed in a state of exhaustion or illness, in a hurry, or under acute stress.
- Women should refrain from regular yoga practice, especially asanas, during their menses. …
- Don’t perform yoga immediately after meals. …
- Don’t shower or drink water or eat food for 30 minutes after doing yoga.
Is it better to do yoga in the morning or at night?
In general, yoga practice is recommended in the morning or the early evening. A morning yoga session can be quite active and consist of a full practice. Always finish with Savasana (Corpse Pose), no matter what time of day or season your practice. You may choose to do a different type of practice in the afternoon.
What is the best time to do yoga?
The very best time to practice yoga is first thing in the morning before breakfast. Upon waking, empty the bowels, shower if you wish, then commence the day with your regime of yoga practices. The second most conducive time is early evening, around sunset.
Can we eat immediately after yoga?
When To Eat Post-Yoga
The first two hours after your practice are key, because this is when your body is most receptive to receiving nutrients. Eating within this window can have an impact on your next practice and your overall improvement.
Does yoga tone your body?
The connective tissue and muscle fibers get longer and the added resistance creates tension that helps the body build and maintain a toned appearance. For these reasons, yoga is an excellent way to tone virtually every major muscle group including the booty and abs.
Is it good to do yoga everyday?
If you do yoga every day, you will get stronger
“You will become stronger and physically fit because you use your body weight to strengthen all your major muscle groups,” she explained to The List. “It takes great strength to build up to certain poses that require coordination, poise, and power.”
Is it okay to do yoga before bed?
Some forms of exercise, like yoga, can even be calming when done right before you crawl into bed. You might want to take a class or two before you start a new bedtime routine, of course, but making it a point to perform yoga before bed can reduce restlessness and help you achieve deeper, more refreshing sleep.
Why is yoga so healthy?
Yoga’s incorporation of meditation and breathing can help improve a person’s mental well-being. “Regular yoga practice creates mental clarity and calmness; increases body awareness; relieves chronic stress patterns; relaxes the mind; centers attention; and sharpens concentration,” says Dr.
Can we eat banana after yoga?
Eat a smart snack.
A handful of almonds, quinoa or oatmeal are good choices – especially for a more athletic style of yoga like power vinyasa or hot yoga. An avocado and chia pudding are also easy on the stomach. Eat fruit like a banana, apple, pear or dried fruit before you practice.
Can we do yoga in empty stomach?
Experts agree that practicing yoga on an empty stomach is one of the most important preparations for practice. Generally, it’s best to avoid eating for 1 – 2 hours before asana or pranayama (breathing exercises). For most people, it’s okay to have a heavy meal four hours before practice.
Should we drink water before doing yoga?
Strictly limit your water intake half an hour prior to yoga practice. If you still feel thirsty take few sips of water at room temperature (not cold water)before starting yoga. … For that matter keep drinking water throughout the day and do not rely on drinking one to two glasses of water just before the yoga class. | https://centeryourhealth.net/yoga/how-long-should-you-wait-to-shower-after-yoga.html |
St. Patrick's Day Traditions & Events
Posted 14/03/2019
St. Patrick’s Day
Saint Patrick is the patron saint of Ireland and his feast day is celebrated on the 17th March every year. Saint Patrick’s Day is a public holiday in the Republic of Ireland and Northern Ireland. It is also a holiday in the Canadian province of Newfoundland and Labrador and in the Caribbean territory of Montserrat, due to Ireland’s close ties to these regions.
Saint Patrick’s Day Today
Saint Patrick’s Day is one of the biggest events of the year in Ireland. The St. Patrick’s Festival takes place over five days every year in Dublin city centre. This year the event takes place from the 14th to the 18th March and will welcome over 1 million people to the streets of the city centre.
Saint Patrick’s Day is the one national holiday celebrated in more countries around the world than any other. It is most definitely a day when everybody wants to be Irish.
There is a whole host of events taking place over the weekend. With traditional Irish singing and dancing sessions, walking tours, light shows and even a pop-up Gaeltacht.
The Festival is a great celebration of wonderful Irish traditions mixing with the modern culture of 21st Century Ireland.
The Saint Patrick’s Day Parade will take place on Sunday this year. The theme of the parade is Storytelling. Street theatre groups, pageant companies and marching bands will take to the streets of Dublin to charm what is expected to be a record number of attendees in 2019. The atmosphere will be drummed up with bands from the United States of America, Germany and of course Ireland.
The Parade will take its usual route, starting at Parnell Square and heading across O’Connell Bridge before finishing up just past St. Patrick’s Cathedral on Cuffe Street. To fit in with this year’s storytelling theme, Irish comedians Deirdre O’Kane and Jason Byrne have been selected as this year’s Grand Marshals.
The History of St. Patrick
Saint Patrick was born in Roman Britain in the 5th Century, believed to be somewhere in modern-day Wales. At the age of 16, he was kidnapped and taken to Ireland to work as a slave. In the Declaration, allegedly written by Patrick, it is said that he spent the following six years working as a shepherd. It was during this time that he ‘found God’.
According to the Declaration, God told Patrick to head for the Irish coast where a boat would be waiting to bring him back home to safety. Back at home, Patrick became a priest.
He would then go on to return to Ireland to convert the pagans to Christianity and legend goes that he drove the ‘snakes’ out of Ireland.
According to tradition, Saint Patrick died on the 17th March in Downpatrick and that is why we celebrate the life of Ireland’s Patron Saint on that day every year.
St. Patrick’s Day Traditions
St. Patrick’s Day is celebrated around the world with some of the largest events taking place outside Ireland. Some examples of St. Patrick’s Day traditions and events are outlined below.
Parades
Almost every town and village in Ireland has its own St. Patrick’s Day parade. It is common for local clubs, societies, schools and business to arrange floats for the annual parade in their hometown.
As mentioned already, the largest parade takes place in the capital, Dublin. Approximately 500,000 people attended the parade in 2018.
There are parades held across the world too, with some of the largest coming in North America. One of the largest parades outside Ireland is in Montreal, which has been running St. Patrick’s Day parades since 1824. There is also a weeklong festival in Saint John, New Brunswick to celebrate St. Patrick.
Elsewhere, New York and Chicago are known for their lively parades. New York’s edition runs down the famous 5th Avenue, with almost 150,000 people taking part in the parade every year. Another city with close links to Ireland, Chicago, colours its famous river green every year to mark St. Patrick’s Day.
The White House
It has been tradition since 1956 that a member of the Irish government visits the White House on St. Patrick’s Day to present a Waterford Crystal bowl of shamrock to the President. Every Taoiseach since 1990 has visited the White House and this year Leo Varadkar will meet Donald Trump for the annual gift giving.
It has also become tradition for a relaxed evening event to be hosted by the President in the White House for a number of Irish travelling delegates.
A number of other Irish ministers travel abroad on St. Patrick’s day to experience the festivities and meet with foreign leaders.
Global Greening
2019 marks the tenth year of Tourism Ireland’s Global Greening campaign. Landmarks around the world are lit up green to mark St. Patrick’s Day. The campaign has been a monumental success, with nearly 500 landmarks going green in over 50 countries. Most notably, the likes of the Colosseum in Rome, the Leaning Tower of Pisa, the Great Wall of China, the Sydney Opera House and the Christ the Redeemer statue in Rio de Janeiro have all gone emerald for the day.
Shamrock
According to legend, St. Patrick used a three-leaf shamrock to explain the Holy Trinity to pagans, and the sprig still holds tradition on St. Patrick’s Day. As already mentioned, the President of the United States is presented with a bowl of shamrock every year.
In Ireland, men and women will wear shamrock in their lapels or hats on the day. In times gone by, the shamrock would be removed at the end of the day for the ‘drowning of the shamrock’. The small plant is one of the national symbols of Ireland and is represented on the logo of the national airline Aer Lingus and the crest of the Irish rugby team as well as many other organisations around the world, including the Boston Celtics.
Sporting Events
As it is a bank holiday in Ireland, many sporting events traditionally take place on St. Patrick’s Day. The All-Ireland Senior Club Football and Hurling finals take place in Croke Park on St. Patrick’s Day every year. The provincial schools finals in rugby, Gaelic football and hurling are also usually held on the 17th March in grounds around the country. The most storied of these would be Ulster’s MacRory Cup, which is broadcast on the BBC and online and played in front of a crowd of thousands.
The final round of the annual Six Nations tournament usually takes place around St. Patrick’s Day. In 2018, Ireland defeated England on the 17th March to claim their third-ever Grand Slam. This year, they face Wales on the 16th March with a chance to claim another title.
Sporting organisations around the world also celebrate St. Patrick’s Day, particularly in North America. The New York Knicks, Toronto Raptors and Toronto Maple Leafs along with a whole host of teams have been known to wear special-edition uniforms for the day.
BACKGROUND OF THE INVENTION
1. Field of the Invention
In many types of apparatus, such as two-way radios, pagers, etc., specific signals are utilized to establish a communications link between two remote pieces of equipment. One common type of specific signal utilized includes one or more low frequency tones. It is, therefore, a necessity to include circuitry in these pieces of apparatus which will recognize specific signals or tones and provide an output signal when the correct signals or tones are received. This output signal is then used to activate audio or visual indicators, turn on receivers, etc.
Further, in some instances it is desirable to send the tones or specific signals along with audio or data signals. To do this, a portion of the audio or data is notched out and the tones or specific signals are multiplexed into the notch. In these instances it is imperative that the notch remove only a very small amount of the audio or data, but still pass enough of the tone or specific signal to be detected.
2. Description of the Prior Art
In prior art devices, circuitry capable of recognizing specific signals or tones includes mechanical vibrating devices (reeds or crystals) or electrical filters which allow only the desired signals or tones to pass. These prior art devices are effective but the signal or tone must be present for a relatively long period of time before recognition or detection can occur. Further, the signal or tone must be monitored continuously by the prior art devices for the relatively long period of time.
SUMMARY OF THE INVENTION
The present invention pertains to apparatus for correlating a few cycles of an input signal with a few cycles of the same input signal received at a previous interval, varying the interval when correlation occurs, and providing a detection signal after a predetermined number of successive correlations. A number of successive correlations are required to distinguish the desired frequency from other frequencies and to compensate for any noise or other interference that may have caused a single correlation.
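The overall strategy can be sketched in software. The following Python model is illustrative only (the patent describes hardware, not code); the 3200 Hz sampling rate, 64-bit register length, 8-error threshold and shrinking intervals are taken from the embodiment described below, while the tone frequencies, phases and function names are our own example values:

```python
import numpy as np

FS = 3200.0   # sampling clock used in the patent's example
N = 64        # shift-register length

def sample_bits(freq, t0, phase):
    """Hard-limit N samples of a tone starting at time t0 (1-bit quantization)."""
    t = t0 + np.arange(N) / FS
    return np.sin(2 * np.pi * freq * t + phase) >= 0

def best_alignment(unknown, reference):
    """Try all N circular shifts; return (best shift index, error count there)."""
    errs = [int(np.sum(unknown != np.roll(reference, k))) for k in range(N)]
    k = int(np.argmin(errs))
    return k, errs[k]

def detect(unknown_freq, ref_freq,
           intervals=(0.32, 0.14, 0.05, 0.04),   # shrinking intervals, seconds
           max_errors=8, unk_phase=0.3, ref_phase=0.1):
    """Detect only if correlation recurs, at the same alignment, at every interval.

    The phases are chosen so that no sample lands exactly on a zero crossing.
    """
    t = 0.0
    k0, e = best_alignment(sample_bits(unknown_freq, t, unk_phase),
                           sample_bits(ref_freq, t, ref_phase))
    if e > max_errors:
        return False
    for dt in intervals:
        t += dt
        k, e = best_alignment(sample_bits(unknown_freq, t, unk_phase),
                              sample_bits(ref_freq, t, ref_phase))
        if e > max_errors or k != k0:
            return False
    return True
```

Here `best_alignment` plays the role of circulating the registers through all 64 shifts, and requiring the same alignment index at each successive, shrinking interval mirrors the patent's requirement that correlations occur during the same circulation.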
It is an object of the present invention to provide a new and improved sampled signal detector for detecting a periodically recurring signal.
It is a further object of the present invention to provide a sampled detector which is capable of detecting periodically recurring signals in a relatively short period of time and without the necessity of monitoring the signal continuously.
These and other objects of this invention will become apparent to those skilled in the art upon consideration of the accompanying specification, claims and drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Referring to the drawings,
FIG. 1 is a block diagram of a sampled signal detector embodying the present invention; and
FIG. 2 is a block/schematic diagram of a portion of another embodiment of a sampled signal detector.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Referring to FIG. 1, a first input terminal 10, adapted to receive a reference signal, is connected through an inverter 11 to an input of a transmission gate 12. The transmission gate 12 has a second input for receiving a gating signal, which input is labeled F to indicate that it is connected to an output signal to be described presently. The output of the transmission gate 12 is connected to an input of a shift register of 64 stages 15, the output of which is connected to the input of a transmission gate 16 and a transmission gate 17. The shift register 15 has a second input for receiving clock pulses thereon, which is designated C in accordance with the common practice in this art. The transmission gate 16 has a second input for receiving gating signals thereon, which is designated F, the further connection of which will be described presently, and an output that is connected to the input of the shift register 15. The transmission gate 17 has a second input for receiving gating impulses thereon which is designated F and an output which is connected to the input of a shift register of 64 stages 20. The shift register 20 has a second input for receiving clock pulses thereon, designated C which is connected to the clock pulse input of the shift register 15 and to the output of an inverter 21. The output of the shift register 20 is connected to the input thereof through a transmission gate 22, which has an input, labeled F, for receiving gating pulses thereon. The output of the shift register 15 is also connected to one input of an exclusive OR gate 25 and the output of the shift register 20 is connected to one input of a second exclusive OR gate 26. The outputs of the exclusive OR gates 25 and 26 are labeled A and B, respectively, and will be described in more detail presently.
A second signal input, designated 30, is adapted to receive an unknown signal thereon which is a periodically recurring signal to be analyzed. The signal on the input 30 is applied through an inverter 31 to a transmission gate 32. The transmission gate 32 has a second input for receiving gating signals thereon, which is designated F. The output of the transmission gate 32 is applied to an input of a shift register of 64 stages 35. A second input of the shift register 35, designated C, is adapted to receive clock pulses thereon. The output of the shift register 35 is coupled to the input thereof through a transmission gate 36 and is also connected to a transmission gate 37. The transmission gates 36 and 37 each have second inputs for receiving gating signals thereon which are labeled F and F, respectively. The output of the transmission gate 37 is connected to an input of a shift register of 64 stages 40, the output of which is coupled to the input thereof through a transmission gate 41. The transmission gate 41 has another input labeled F for receiving gating pulses thereon. The shift register 40 has a second input labeled C, for receiving clock pulses thereon, which is connected to the C input of the shift register 35 and to the output of a NAND gate 42. The output of the shift register 35 is connected to a second input of the exclusive OR gate 25 and the output of the shift register 40 is connected to a second input of the exclusive OR gate 26. The shift registers 15, 20, 35 and 40 and the above described circuitry associated therewith are utilized as signal storage means, the operation of which will be described presently.
The output of the exclusive OR gate 25, labeled A, is applied to a similarly labeled input of a NAND gate 45. A second input of the NAND gate 45 is connected to the output of an inverter 46. The output of the NAND gate 45 is connected through an inverter 47 to an input of a counter 48. A plurality of outputs of the counter 48 are connected through a plurality of diodes 50 to an input of a NOR gate 55. The input of the NOR gate 55 is also connected to a resistor 56 to ground. The specific outputs of the counter 48 connected through the diodes 50 to the input of the NOR gate 55 are chosen so that the number of pulses applied to the input thereof must exceed a predetermined count before the circuit will provide an output signal at the NOR gate 55. A reset input, labeled R, of the counter 48 is connected to an output of a NOR gate 57. The output, labeled B, of the exclusive OR gate 26 is connected to one input of a NAND gate 60. A second input of the NAND gate 60 is connected to the output of the inverter 46. The output of the NAND gate 60 is connected through an inverter 61 to an input of a counter 62. A reset input, labeled R, of the counter 62 is connected to the output of the NOR gate 57. A plurality of outputs of the counter 62 are connected through a plurality of diodes 63 to a second input of the NOR gate 55. The second input of the NOR gate 55 is also connected through a resistor 64 to ground. The particular outputs of the counter 62 connected to the NOR gate 55 through the diodes 63 are chosen so that at least a predetermined count in the counter 62 supplies a signal to the second input of the NOR gate 55. The exclusive OR gates 25 and 26 and the counters 48 and 62 with their associated circuitry comprise correlation means, the operation of which will be described in detail presently.
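The correlation means just described can be modeled behaviorally: each exclusive OR gate emits a pulse for every mismatched bit pair, the counters tally those pulses, and the NOR gate 55 reports correlation only while both tallies stay below the selected counter tap. A minimal Python sketch (the threshold of 8 corresponds to the lowest tap used in the embodiment; the function name is ours):

```python
def correlation_flag(ref_a, unk_a, ref_b, unk_b, threshold=8):
    """One circulation's verdict from the two exclusive-OR / counter chains."""
    errors_a = sum(x ^ y for x, y in zip(ref_a, unk_a))  # gate 25 -> counter 48
    errors_b = sum(x ^ y for x, y in zip(ref_b, unk_b))  # gate 26 -> counter 62
    # NOR gate 55: correlation is declared only while neither count reaches its tap
    return errors_a < threshold and errors_b < threshold
```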
Timing means for controlling the operation of the signal storage means and the correlation means are constructed, in this embodiment, as follows. A clock or basic oscillator 65 is connected through an amplifier 66 to the input of a divide by N divider 67, an inverter 68 and one input of a NAND gate 70. A divide by 2 output of the divider 67 is connected to a second input of the NAND gate 70 and to one input of a NAND gate 71. The output of the NAND gate 70 is connected to the input of the inverter 46, and the output of the inverter 68 is connected to a second input of the NAND gate 71. This circuit provides two clock signals which are one-half the frequency of the clock signals produced by the clock 65 and which are 180° out of phase with each other. The output of the divider 67, the frequency of which is one 128th of the frequency at the input, is connected to the input of a second divide by N divider 75, the D input of a D-type flip-flop 76, one input of the NOR gate 57 and one input of a NOR gate 77. A clock input, labeled C, of the flip-flop 76 is connected to the output of the amplifier 66 and a reset input, labeled R, is connected to the source of signals labeled F, to be explained presently. The output of the flip-flop 76 is connected to a second input of the NOR gate 57. A divide by 32 output of the divider 75 is connected to a reset input, labeled R, of a D type flip-flop 79, the clock inputs, labeled C, of four D type flip-flops 80, 81, 82 and 83 and the signal input of a divide by 10 divider 85. The divide by 10 output of the divider 85 is connected to the signal input of a second divide by 10 counter 86.
The divider 85 has 10 output taps representative of units measurements of time and the divider 86 has 10 output taps representative of tens measurements of time in the production of a predetermined interval of time. Four NAND gates 90, 91, 92 and 93 each have two inputs connected to the two counters 85 and 86 so that each is representative of a predetermined time interval. For example, the two inputs of the NAND gate 90 are connected to the divide by 8 tap of the divider 85 and the divide by 2 tap of the divider 86 so that the output of the NAND gate 90 is an interval of 280 milliseconds, with a clock input to the counter 85 of 100 Hz. Further, the two inputs of the NAND gate 91 are connected to the divide by 0 tap of the divider 85 and the divide by 4 tap of the divider 86 to provide a 400 millisecond interval (a 120 millisecond interval after the end of the first interval), the two inputs of the NAND gate 92 are connected to the divide by 5 tap of the divider 85 and the divide by 4 tap of the divider 86 to provide a 450 millisecond interval (a 50 millisecond interval after the end of the second interval), and the two inputs of the NAND gate 93 are connected to the divide by 9 tap of the divider 85 and the divide by 4 tap of the divider 86 to provide a 490 millisecond interval (a 40 millisecond interval after the end of the third interval).
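The interval arithmetic follows directly from the tap selections: with a 100 Hz clock into the units counter, a gate wired to units tap u and tens tap t fires after (10·t + u) × 10 ms. A quick check of the four gates, in Python (function name ours):

```python
def interval_ms(units_tap, tens_tap, clock_hz=100):
    """Time at which a NAND gate wired to the given divider taps fires."""
    ticks = tens_tap * 10 + units_tap   # counts of the 100 Hz clock
    return ticks * 1000 / clock_hz      # milliseconds

# NAND gate 90: taps (8, 2) -> 280 ms; gate 91: (0, 4) -> 400 ms;
# gate 92: (5, 4) -> 450 ms; gate 93: (9, 4) -> 490 ms
```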
The outputs of the NAND gates 90, 91 and 92 are connected to 3 inputs of a NAND gate 95, the output of which is connected to the D input of the flip-flop 83. The output of the NAND gate 93 is connected through an inverter 96 to an output terminal for the detector, labeled 100, and to a disable input, labeled D, of the counter 85. Since each of the flip-flops 80, 81, 82 and 83 is clocked at the same rate as the frequency of the input signal to the dividers 85 and 86, each of the flip-flops represents one unit of time, or 10 milliseconds if the frequency of the clock signal is 100 Hz. The output of the flip-flop 83 is connected to the D input of the flip-flop 82, one input of a NOR gate 105 and the reset input, labeled R, of a set-reset type flip-flop 106. The output of the flip-flop 82 is connected to a second input of the NOR gate 105 and the output thereof is connected to the D input of the flip-flop 81 and to a second input of the NOR gate 77. The output of the flip-flop 81 is connected to the D input of the flip-flop 80. The Q output of the flip-flop 80 is the source of signals labeled F in the previous description and the Q output is the source of signals labeled F in the previous descriptions. In addition to the connections previously described, the F signal from the flip-flop 80 is connected to the clock input, labeled C, of the flip-flop 79 and one input of a NOR gate 108. The flip-flop 79 has a set input, labeled S, with an input terminal 107 connected thereto for purposes of restarting the detector once an output signal has been produced, as will be described presently. The output of the flip-flop 79 is connected to the reset inputs of the dividers 85 and 86. The D input of the flip-flop 79 is connected to the Q output of the flip-flop 106.
The set input, labeled S, of the flip-flop 106 is connected to the output of a NOR gate 110, one input of which is connected to the Q output of a D type flip-flop 111 and the other input of which is connected through an inverter 112 to the output of the NOR gate 57. The output of the inverter 112 is also connected to one input of the NAND gate 42 in the storage means. The output of the correlation means, which is present at the output of the NOR gate 55, is applied to the D input of the flip-flop 111. The output of the NOR gate 57, which is applied to the reset inputs, labeled R, of the counters 48 and 62, is also applied to the clock input, labeled C, of the flip-flop 111. The output of the NAND gate 71 is applied to a second input of the NOR gate 108 and the outputs of the NOR gates 77 and 108 are applied to two inputs of a NOR gate 113. The output of the NOR gate 113 is connected to the input of the inverter 21 and a second input of the NAND gate 42 in the storage means.
Operation
For purposes of describing the operation of the above-described circuit, it will be assumed that the frequency of the clock 65 is 409,600 Hz., the frequency of the signals at the outputs of the NAND gates 70 and 71 is 204,800 Hz., the frequency at the output of the first divider 67 is 3200 Hz., and the frequency at the output of the divider 75 is 100 Hz. It will of course be understood by those skilled in the art that many other frequencies and dividing or multiplying schemes might be utilized and the present circuitry and frequencies are simply for purposes of explanation. Assuming that the apparatus has just been turned on or the dividers 85 and 86 have just been reset, nothing occurs until 280 milliseconds have gone by. At 280 milliseconds the output of the NAND gate 95 generates a pulse. This pulse is delayed 10 milliseconds by flip-flop 83 and is stretched into a 20 millisecond pulse by flip-flop 82 and NOR gate 105. The 20 millisecond low pulse that appears at the output of NOR gate 105 allows 64 clock pulses of the 3200 Hz signal to pass through the NOR gate 77. The output of the NOR gate 108 is low because the F signal is high. Because the NOR gate 113 has a low signal on one input and the 3200 Hz. signal on the other input, the 3200 Hz signal will be passed through the NOR gate 113, through the inverter 21 to the clock inputs of the shift registers 15 and 20. Simultaneously, the high F signal is being applied to the reset input of the flip-flop 76, which produces a high output that is applied to the NOR gate 57. With a high input to the NOR gate 57, the output is low and after being inverted by the inverter 112 appears as a logical high at the input of the NAND gate 42. Thus, the 3200 Hz. signal passes through the NAND gate 42 and is applied to the clock inputs of the shift registers 35 and 40. Since the 3200 Hz. signal is applied for 20 milliseconds, 64 pulses are applied to the clock inputs of the shift registers 15, 20, 35 and 40.
Also, since the F signal is high the transmission gates 12, 17, 32 and 37 are activated to pass information therethrough while the low F signal maintains the transmission gates 16, 22, 36 and 41 inactivated. Thus, 64 sampled bits of the reference signal are clocked into the shift register 15 and 64 sampled bits of the unknown are clocked into the shift register 35.
After the delay introduced by the two flip-flops 80 and 81, 20 milliseconds in this example, the two outputs of the flip-flop 80 change in accordance with the signal applied to the D input of the flip-flop 81, i.e. the F signal goes high while the F signal goes low. With a high signal at the output of the NOR gate 105 and a low F signal applied to the input of the NOR gate 108, the 204,800 Hz. signal applied to the other input of the NOR gate 108 passes therethrough. Thus, during the 20 milliseconds that F is low 64 × 64 pulses of the 204,800 Hz. signal pass through the NOR gate 113 and are applied through the inverter 21 and NAND gate 42 to the clock inputs of the shift registers 15, 20, 35 and 40. Also, since the F signal is low the transmission gates 12, 17, 32 and 37 are deactivated while the high F signal activates the transmission gates 16, 22, 36 and 41. Thus, the sample bits in the shift registers 15, 20, 35 and 40 are circulated within the registers at a relatively high rate. As the sample bits are circulated, the sample bits in the shift register 15 are compared to the sample bits in the shift register 35 by means of the exclusive OR gate 25 and all errors, or non-correlations, appear as pulses which are counted by the counter 48. The sample bits in the shift register 20 are compared to the sample bits in the shift register 40 by means of the exclusive OR gate 26 and all errors, or non-correlations, appear as pulses which are counted by the counter 62. When the count in the counter 48 and/or the counter 62 reaches at least a predetermined value, determined by the connection of the diodes 50 and 63, respectively, (counts 8, 16, 32 or 64 in this embodiment) a high signal is applied to one or both of the inputs of the NOR gate 55. This appears as a low signal at the output thereof and is applied to the flip-flop 111 which in turn produces a high signal at the Q output thereof when a clock pulse is applied to the C input. 
By allowing up to eight errors before the counters 48 or 62 produce an output pulse, the present detector has a relatively wide bandwidth and unknown signals which have a frequency close to that of the reference signal may be detected. To widen the bandwidth, the number of errors required to produce an output from the counters 48 or 62 is increased, and vice versa. If substantially no, or very narrow, bandwidth is desired, the counters 48 and 62 could be removed and a simple memory circuit substituted therefor, which would provide the proper timing for operation of the circuit.
The combination of the flip-flop 76 and the NOR gate 57 provides a single narrow positive pulse at the output of the NOR gate 57 for each cycle of the 3200 Hz signal, which appears as an additional pulse at the 204,800 Hz. frequency. The single positive pulse at the output of the NOR gate 57 appears after the sample bits in the shift registers 15, 20, 35 and 40 have been shifted through each entire cycle (64 clock pulses applied to the clock inputs of the shift registers). This additional pulse resets the counters 48 and 62, clocks the flip-flop 111 and, after being inverted by the inverter 112, is applied through the NAND gate 42 to the shift registers 35 and 40 to shift the sample bits therein one additional position. Thus, by circulating the information in the shift registers 15, 20, 35 and 40 a number of times equal to the number of bits stored in each register, 64 in this embodiment, during each predetermined time interval, all of the sample bits in the registers 15 and 20 are compared to all of the sample bits in the registers 35 and 40, respectively.
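The effect of the extra shift pulse is that each of the 64 circulations compares the reference pattern against a different circular alignment of the unknown pattern, so every alignment is examined exactly once. A small Python illustration of that bookkeeping (function name ours; a 4-bit register is used for brevity):

```python
def circulation_errors(ref_bits, unk_bits):
    """Error count for each circulation; the unknown register receives one
    extra shift per pass, so every circular alignment is examined once."""
    n = len(ref_bits)
    unk = list(unk_bits)
    per_circulation = []
    for _ in range(n):
        # exclusive-OR and tally, as in the counters of FIG. 1
        per_circulation.append(sum(r ^ u for r, u in zip(ref_bits, unk)))
        unk = unk[1:] + unk[:1]   # the additional clock pulse: one more shift
    return per_circulation
```

For example, `circulation_errors([1, 0, 1, 0], [0, 1, 0, 1])` finds perfect agreement on the second and fourth circulations only.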
Each time the sample bits in the shift registers 15, 20, 35 and 40 are circulated, the sample bits in the shift registers 15 and 35 and the sample bits in the shift registers 20 and 40 are compared, respectively, and the errors therebetween are counted by the counters 48 and 62, respectively. In each of the 64 comparisons or circulations in which the count in either of the counters 48 or 62 exceeds a predetermined value, a high signal is applied to one or both of the inputs of the NOR gate 55 and a low appears at the output thereof. However, if the count in both of the counters 48 or 62 does not exceed the predetermined value in any one of the 64 comparisons or circulations, a low signal is applied to both of the inputs of the NOR gate 55 and a high appears at the output thereof. The signal at the output of the NOR gate 55 is applied to the D input of the flip-flop 111 and is clocked through the flip-flop 111 by the additional pulse from the NOR gate 57 at the end of each of the circulations. A low at the input of the flip-flop 111 appears as a high at the Q output, which supplies a low through the NOR gate 110 to the set input of the flip-flop 106. A high at the input of the flip-flop 111 appears as a low at the Q output, which supplies a high through the NOR gate 110 to the set input of the flip-flop 106. Once a high set pulse is applied to the flip-flop 106 a low output is available at the Q output and this output will not change until a new reset pulse is applied to the flip-flop 106. However, if no high is applied to the set input of the flip-flop 106, the Q output will remain high. The high or low pulse at the output of the flip-flop 106 is applied to the D input of the flip-flop 79 but is not clocked therethrough until the beginning of the next F pulse, which occurs at some time after all of the information has been circulated 64 times in the shift registers.
In the present embodiment the beginning of the F pulse occurs 320 milliseconds after the counters 85 and 86 have been started or reset (a 280 millisecond interval produced by NAND gate 90 and 10 millisecond delays in each of the flip-flops 80, 81, 82 and 83). When the flip-flop 79 is clocked, if a high is present at the D input, a high appears at the output and is applied to reset the dividers 85 and 86. Because the dividers 85 and 86 are reset the next F pulse will appear 320 milliseconds later. However, if a low is present at the D input of the flip-flop 79 when it is clocked, the output thereof is low and the dividers 85 and 86 are not reset so that the next F pulse or time interval is only 140 milliseconds long.
In the above description of the operation, the shift registers 15 and 35 have sample bits of information clocked therein, but the shift registers 20 and 40 only have noise stored therein so that correlation between the sample bits in the shift registers 20 and 40 will not occur and the counters 85 and 86 will be reset. Thus, after the first predetermined interval of time, which in this embodiment is 320 milliseconds, the F signal will again go high and the F signal will go low so that 3200 Hz. clock pulses are again applied to the shift registers 15, 20, 35 and 40 with the transmission gates 12, 17, 32 and 37 activated to clock the sample bits in the shift registers 15 and 35 into the shift registers 20 and 40. Simultaneously, new sample bits of the input signals will be clocked into the shift registers 15 and 35. Now 64 sample bits of the unknown signal are stored in the shift register 40 and 64 sample bits of the unknown signal, taken a predetermined interval later (320 milliseconds), are stored in the shift register 35.
Correlations which occurred between the sample bits stored in the shift registers 15 and 35 will now appear, at the same time (during the same circulation of sample bits), as correlations between the sample bits stored in the shift registers 20 and 40. If the frequencies of the reference signal and the unknown signal are equal, or approximately equal, a correlation will again appear, at approximately the same time (during the same circulation), between the sample bits stored in the shift registers 15 and 35. This is true, assuming that the phases of the unknown signal and the reference signal, relative to each other, have not changed. By comparing the specific circulation in the shift registers 15 and 35 in which correlation occurred with the same specific circulation in the shift registers 20 and 40, the comparing means is essentially correlating a portion of the unknown signal with a portion of the unknown signal received at a previous interval. When correlations occur in the shift registers 15 and 35, and in the shift registers 20 and 40, simultaneously, the counters 85 and 86 are not reset, as previously described, and new information is clocked into the shift registers 15 and 35 after a shorter interval of time. If the frequencies of the reference signal and the unknown signal are the same, simultaneous correlations will again occur between the shift registers, the counters 85 and 86 will again not be reset and new information will be clocked into the shift registers 15 and 35 after an even shorter interval of time.
If the frequencies of the reference signal and the unknown signal are identical, another simultaneous correlation will occur between the sample bits stored in the shift registers 15 and 35 and in the shift registers 20 and 40 and, this time, the count in the dividers 85 and 86 will have progressed to the point that two highs will be applied to the input of the NAND gate 93, producing a low at the output thereof which will be inverted and appear as a detect signal at the output 100. This high signal will also disable the divider 85 so that no additional pulses will be accepted therein. Thus, the entire circuitry will cease operation. If a second tone or signal is to be detected, a new reference signal is applied to the input terminal 10 and a set pulse is applied to the input terminal 107 to set the flip-flop 79, reset the dividers 85 and 86, and start the entire cycle again.
It is necessary to check for a number of successive correlations because there are a number of frequencies which can cause correlations and, thus, appear to be the desired frequency. A waveform which consists of several widely separated short bursts of a periodically recurring signal, such as a tone, can be represented by a Fourier series. The spectral lines are spaced around f_c at frequencies N/T cycles and the envelope of the amplitudes of the spectral lines is a sin x/x function with the first zero at 1/t away from f_c, where T is the spacing between bursts, t is the length of the bursts and f_c is the desired frequency. By storing a portion of an unknown signal during a first period t_1 and a second portion of the unknown signal during a second period t_2, the total signal can be correlated against a reference signal, as described above. If the frequency of the stored signal is the same as, or close to, the reference signal and the time T is an integer number of cycles, the signal stored during the period t_1 will be in phase with the signal stored during the period t_2. The signal stored during the period t_1 will again be in phase with the signal stored during the period t_2 when the frequency of the stored signal is 1/T cycles away from the reference signal. This in-phase relationship will repeat every N/T cycles away from the reference signal and the correlation of the unknown signal to the reference signal will follow the sin x/x envelope of the Fourier series. By checking a number of successive correlations with different intervals T therebetween, the spectral lines change and only the reference-signal spectral line will provide repetitive correlation.
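This ambiguity, and why varying T breaks it, can be demonstrated numerically. In the sketch below (Python; the burst length, sampling rate, tone frequencies and phase are our own example values, not the patent's), two hard-limited 20 ms bursts of a tone spaced T apart are compared bit for bit. A tone offset from 200 Hz by exactly 1/T = 3.125 Hz stays in phase across T = 0.32 s just like the true tone, but falls out of phase once the spacing changes to T = 0.14 s, while the true 200 Hz tone stays in phase at both spacings:

```python
import numpy as np

def burst_correlation(freq, T, t_burst=0.02, fs=3200.0, phase=0.3):
    """Fraction of agreeing sign bits between two bursts of a tone spaced T apart."""
    n = int(t_burst * fs)
    b1 = np.sin(2 * np.pi * freq * (np.arange(n) / fs) + phase) >= 0
    b2 = np.sin(2 * np.pi * freq * (T + np.arange(n) / fs) + phase) >= 0
    return float(np.mean(b1 == b2))
```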
Referring specifically to FIG. 2, a portion of a second embodiment of the present invention is illustrated including modified signal storage means and correlation means. In FIG. 2, eight clocked shift registers 125-132 form the signal storage means. The unknown signal is applied at an input terminal 135, which is the signal input to the first shift register 125, and sample bits are clocked into the shift register 125 by clock pulses applied to a clock input 136. Each of the shift registers 125-132 is clocked by the same clock pulses from the source, not shown, applied to the input 136. Also, each of the shift registers 125-132 is a four-stage shift register with an output from each stage and a final output for information that is clocked completely through the register. The final outputs of each of the shift registers 125-131 are connected to the signal inputs of the shift registers 126-132, respectively, by means of inverters 140-146. The four outputs from each of the stages of each of the shift registers 125-132 are connected through resistors, which are not numbered, to a common output line 150.
In the operation of the circuitry illustrated in FIG. 2, a first 32 sample bits of the input signal are clocked into the shift registers 125-128. A predetermined time interval later a second 32 sample bits of the unknown signal are clocked into the shift registers 125-128 and the sample bits which were previously in the registers are clocked into the shift registers 129-132. After all of the sample bits are clocked into the shift registers 125-132 a comparison is made by way of the output line 150 and, if a correlation between the first set of sample bits and the second set of sample bits occurs, a correlation signal is applied by way of the line 150 to electronic circuitry, such as that previously described in conjunction with FIG. 1, which reduces the time interval before the next set of sample bits is taken. If a correlation between the first two sets of sample bits does not occur, the time interval remains constant. As in the description of FIG. 1, after a predetermined number of correlations has occurred a detect signal is generated at an output of the circuitry.
Thus, an improved sampled signal detector is described which is capable of detecting a periodically recurring signal in a relatively short period of time without the necessity of monitoring the signal continuously. Further, the apparatus described is relatively simple to construct in integrated circuit form, and noise falsing, shock falsing, and many other problems prevalent in prior art signal detectors are substantially reduced relative to these prior art signal detectors if the time for detecting therein is limited. It should be noted that the embodiment illustrated in FIG. 1 requires a reference which is at the same frequency, or periodically recurs in the same manner, as the signal to be detected. However, the time interval between samples is not critical. In the embodiment illustrated in FIG. 2, the time interval between samples must be a whole integer multiple of the period of the signal to be detected, but no reference signal is required. Other advantages of each of the embodiments will be readily appreciated by those skilled in the art. While I have shown and described two embodiments of the present invention, further modifications and improvements will occur to those skilled in the art. I desire it to be understood, therefore, that this invention is not limited to the particular form shown and I intend in the appended claims to cover all modifications which do not depart from the spirit and scope of this invention.
I believe it is impossible for neuroscience to ever show with certainty that consciousness is a production of the brain, because for every bit of evidence that neuroscience can show that points to consciousness being linked to the brain, there is always an alternative way to view that evidence that points to the opposite.
When you really get down to it, most of the evidence we have that the brain produces consciousness is correlational.
If you hit your head, you may lose some memory and cognitive function. If you take a drug, you may experience an altered state of consciousness. We can relate certain brain states to states of consciousness.
However, if you take the opposite of the conventional view – that the brain regulates and limits consciousness rather than produces it – you effectively have an alternative working explanation for every supposed piece of evidence that the brain creates consciousness.
This view is summarized very well by Cyril Burt:
“The brain is not an organ that generates consciousness, but rather an instrument evolved to transmit and limit the processes of consciousness and of conscious attention so as to restrict them to those aspects of the material environment which at any moment are crucial for the terrestrial success of the individual”
Now, there are always at least two ways to view any correlate involving the mind and the brain. For example, is brain activity supposedly correlating to conscious states the cause of those conscious states, or is it merely the measure of the brain's response to those conscious states?
When you take a drug, is it your brain that alters the way it produces consciousness, or is it the brain's regulatory function over consciousness that is out of whack?
Did hitting your head cause your brain to mechanically be unable to produce consciousness in the same way, or did you damage your regulatory unit’s ability to open itself to consciousness?
Not only does this alternative view not conflict with modern neuroscience, but I believe it actually explains many neurological mysteries better than the conventional view.
For example, in ‘Acquired Savant Syndrome’, people can suffer brain damage, and rather than lose cognitive function, they gain Savant-like abilities. This is easily explained if you believe consciousness to be external to the brain. The limiting function of the regulatory system of the brain was damaged in such a way to allow more consciousness to be experienced. The materialist is in a much weaker position in explaining how brain damage leads to such a radical increase in mental ability.
There was a case of severe hydrocephalus circulating in the news. I believe the article was called ‘Tiny brain, normal life’, and it was about a French civil servant who had lived a normal life with only a small fraction of a normal brain.
In severe cases of hydrocephalus, patients can be left with less than 5% of the brain mass of a normal person. Even in these severe cases there are people who have above average IQs – some actually have very high IQs and seemingly no mental deficits. Again, this is easily explainable for those who believe consciousness to be separate from the brain, and much harder to explain for materialists, who often chalk it up to redundancy.
The last example I’ll cite here is ‘Terminal Lucidity’ and dementia. In severe cases of dementia, as a result of progressive brain damage, some patients will be unable to remember the faces and names of family members. They won’t be able to find words, or hold a proper conversation.
Sometimes when these patients approach death, they’ll enter a lucid state where they are able to remember names, faces, and hold a proper conversation, as if they did not have dementia.
I would say this is no problem if you believe the brain to be a regulatory system for consciousness, because that implies consciousness and memory are external from the brain. The memory still exists, and while under duress the brain’s regulatory ability to limit itself is diminishing, allowing what once was blocked to be experienced.
If you view consciousness as material and memory as physically stored in the brain, this is more difficult, because the brain degeneration supposedly responsible for their inhibited consciousness and memory is still very much there when they enter this lucid state.
Q:
Calculate interest rate from monthly payment in SQL
Is there a way to calculate the interest rate knowing the loan amount, monthly payment, and term in SQL (Oracle)? I can easily calculate the payment knowing the interest rate; however, the opposite direction seems to be much more difficult.
Monthly payment calculation (Interest rate = 0.1 (10%), Loan size = 1000, Term = 24):
select (0.1/12 * 1000) / (1 - power(1 + 0.1/12, -24)) as mpayment
from dual;
46.1449263375165
The question is how to go from the $46.14 monthly payment, $1,000 loan size, and 24-month term back to 10% as the interest rate.
E.g. in MS Excel the function to use would be RATE()
A:
As I said in a Comment below your post: what you are looking for is called the "internal rate of return". Actually you are looking for a very special case - an amortized loan, with equal payments at regular intervals. Oracle offers an IRR function in add-on packages; if you want to use basic SQL and PL/SQL only, you will have to use a UDF (user-defined function).
Here is one way to code it, using Newton's method. I demonstrate a few things at the same time. Notice the numeric data types (which are specific to PL/SQL and can't be used in plain SQL; however, the runtime will convert the inputs from NUMBER to the PL/SQL data types, and the return value back to NUMBER, transparently). Using these data types in the code makes the function much faster - especially if you use native compilation (which is done as I show in the first line of code below).
So far everything should work in older versions of PL/SQL. Since version 12.1 only, and only if you are going to call the function primarily from SQL, you can use the pragma udf declaration - which will speed up plain SQL code that calls the function.
The function returns an "annualized" mortgage rate (it computes the monthly rate and then simply multiplies by 12 - no compounding - since that's how mortgage interest rates work, at least in the U.S.). The rate is returned as a decimal number, not multiplied by 100; that is, not as a percentage. If the rate returned by the function is 0.038, that means 3.8% (ANNUAL mortgage interest rate). In the brief demo at the end, I show how you can wrap the function call within other SQL code to beautify the answer.
For the example at the end, I took a 200,000 principal value and calculated the monthly payment over 30 years (360 months) at 6.5% interest rate; I got a monthly payment of 1,264.14. Then I compute the interest rate from the other values.
The function requires the principal amount and the monthly payment, both NOT NULL and assumed positive. The term (IN MONTHS) is also needed, but I coded a default of 360. (Perhaps it would be better to code no default for this and make it required as well.) Optionally you can enter a desired precision; I coded a very high precision as default, since the computations are super-fast anyway.
I didn't code any kind of error handling; obviously that will have to be done, if you choose to use this function (or anything similar to it) for any purpose other than training/learning.
alter session set plsql_code_type = native;
create or replace function mortgage_rate(
p_principal simple_double
, p_monthly_payment simple_double
, p_term simple_integer default 360
, p_precision simple_double default 0.00000001
)
return number
as
pragma udf; -- Comment out this line if Oracle version is < 12.1
z simple_double := p_monthly_payment/p_principal;
u simple_double := 1 / (p_term * z);
v simple_double := 0;
delta simple_double := 0;
begin
for i in 1 .. 100 loop
v := power(u, p_term);
delta := ( z * u * ( v - 1) - u + 1 ) / ( z * (p_term + 1) * v - z - 1 );
u := u - delta;
exit when abs(delta) < p_precision;
end loop;
return 12 * (1/u - 1);
end;
/
select to_char( 100 * mortgage_rate(200000, 1264.14, 360), 'fm990.000')
|| '%' as interest_rate
from dual;
INTEREST_RATE
----------------
6.500%
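Outside the database, the same root-finding can be sanity-checked in a few lines of Python (a hedged sketch, not part of the original answer: it uses bisection rather than Newton's method, and the function names are made up for illustration). It uses the same convention as the PL/SQL function - the monthly rate is simply multiplied by 12, with no compounding.

```python
def monthly_payment(principal, annual_rate, months):
    # Standard amortized-loan payment formula
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

def implied_annual_rate(principal, payment, months, lo=1e-9, hi=1.0, tol=1e-12):
    # Bisection works because the payment increases monotonically
    # with the interest rate for fixed principal and term.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if monthly_payment(principal, mid, months) < payment:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Same numbers as the PL/SQL demo: 200,000 over 360 months at 1,264.14/month
rate = implied_annual_rate(200_000, 1264.14, 360)   # comes out at about 0.065
```

Bisection converges more slowly than Newton's method but cannot diverge, which makes it a convenient cross-check for the UDF's output.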
Description: Hold a light resistance band, palms facing up, elbows against your sides and hands about shoulder width apart. Flex your elbows to 90 degrees so your forearms are parallel to the floor.
Begin by retracting your scapula (squeezing your shoulder blades back) and externally rotating your shoulders, pulling the band apart while keeping your elbows tight to your sides.
Pause at your end range of motion and slowly return to the start. Repeat for the desired number of repetitions.
Keep your elbows at 90 degrees, wrists straight and shoulders down throughout the movement. | https://www.caliverse.app/exercises/band-no-money-drill-236
The aim of this study was to determine if patients with Borderline Personality Disorder (BPD) present higher emotional response than healthy controls in a laboratory setting. Fifty participants (35 patients with BPD and 15 healthy controls) underwent a negative emotion induction procedure (presentation of standardized unpleasant images). Subjective emotional responses were assessed by means of self-reported questionnaires while biological reactivity during the procedure was measured through levels of salivary cortisol (sCORT) and alpha-amylase (sAA). Patients with BPD exhibited significantly lower cortisol levels and higher sAA levels compared to controls. Self-reported emotional reactivity did not give rise to differences between groups, but participants with BPD did present higher levels of negative emotional intensity at baseline and during the entire procedure. The findings do not give support to the emotional hyperreactivity hypothesis in BPD. However, BPD patients presented heightened negative mood intensity at baseline, which should be considered a hallmark of the disorder. Further studies using more BPD-specific emotion inductions are needed to confirm the trends observed in this study. © 2012 Asociación Española de Psicología Conductual.
Government departments responsibility for construction
This article sets out the responsibilities of central government departments for different aspects of the construction industry.
Department for Business Innovation and Skills (BIS)
- BIM Task Group.
- Construction Sector Unit.
- Green Construction Board.
- Technology Strategy Board (TSB), a non-departmental public body (NDPB).
- Construction 2025 (jointly with industry).
HM Treasury
Cabinet Office:
- Efficiency and Reform Group (ERG).
- Major Projects Authority (MPA).
- Government Construction Strategy.
- Government Construction Board.
Infrastructure UK (IUK), now the Infrastructure and Projects Authority.
Department for Work and Pensions (DWP)
- Health and Safety Executive (HSE), a non-departmental public body (NDPB).
- Construction (Design and Management) Regulations (CDM).
Department for Communities and Local Government (CLG)
- Building regulations.
- Planning permission.
- Planning policy.
- Homes and Communities Agency, a non-departmental public body (NDPB).
- Fire and Rescue Service.
- Architecture (transferred from DCMS in April 2015)
Home Office
- Special licences.
Department for Environment, Food & Rural Affairs (DEFRA)
- Environment Agency (EA), a non-departmental public body (NDPB).
- Natural England, a non-departmental public body (NDPB).
Department for Culture, Media and Sport (DCMS)
- Historic England, a non-departmental public body (NDPB).
Department of Energy & Climate Change (DECC)
- Energy saving and climate change policy.
Local Authorities
Devolution
- Following devolution in the UK, responsibility for many aspects of construction industry policy, oversight and regulation have been passed to the authorities in Scotland, Wales and Northern Ireland. See UK for more information. This includes building regulations and planning policy.
A major U.S. bank in the Midwest contracted with Prescio to validate their Advanced Measurement Approaches (AMA) based Operational Risk Model.
This SAS based application was developed internally by the statistical and quantitative modelers of the Bank. Our team included business domain experts, statisticians, mathematicians, SAS experts and project managers. Discussions between the Prescio team and the executives of the risk management and the quantitative groups of the Bank were held. From these discussions, it was decided that a validation project would examine different facets of the model according to standard validation practices of the industry.
Prescio performed a multi-step validation process which included analysis of the theoretical fundamentals of the model, analysis of the business fundamentals, analysis of the internal data, analysis of the structured scenario data, determination of the distributions applicable to individual loss events, the combining of internal and structured scenario data, addressing issues related to Extreme Value Theory, and validation of their SAS code. Prescio was able to deliver the client's requirements within a very tight delivery schedule. Prescio's performance resulted in follow-on work for other Basel-related models in commercial and retail credit.
Legal resources related to disasters in Texas.
Winter Storm 2021
Disclaimer: The State Law Library is unable to give legal advice, legal opinions or any interpretation of the law. It is strongly recommended that you contact an attorney for advice specific to your situation. If you have questions about anything in this guide, please ask a librarian.
Find a Shelter
Texas Warming Centers
The Texas Department of Emergency Management has created a statewide map of warming centers. This map is also available in Spanish.
Find an Open Shelter (Red Cross)
The Red Cross provides this list of emergency shelters.
Let Family Know You're Safe
Information from the American Red Cross on ways to contact loved ones after a disaster to determine if they are safe and well.
Disaster Relief: Emergency Housing and Other Needs Assistance (TexasLawHelp.org)
TexasLawHelp.org has compiled various resources for renters and homeowners in need of emergency housing after a disaster.
Can I bring my pet or service animal to a shelter?
The ADA and Emergency Shelters: Access for All in Emergencies and Disasters (ADA.gov)
Section D of this publication addresses the admittance of service animals into emergency shelters.
Are Hotels Required to Accept Pets During Natural Disasters? (Snopes)
Snopes, a fact checking organization, investigates rumors that hotels are required to accommodate pets after a disaster.
U.S. Public Law 109–308 - Pets Evacuation and Transportation Standards Act (PETS Act), 2006 [PDF]
This federal law states that the director of the Federal Emergency Management Agency (FEMA) must ensure that states "take into account the needs of individuals with household pets and service animals" when creating disaster or emergency plans.
More Help
FreeLegalAnswers.org
This website allows you to ask a lawyer a legal question in writing for free. You can even upload documents for an attorney to review.
Get Help From an Attorney
Our Legal Help guide has information on free legal hotlines, legal clinics, and legal aid organizations, as well as information on how to find a lawyer who could represent you.
Ask a Librarian
Questions? Ask us! Librarians at the State Law Library can provide information about the law, but cannot give legal advice.
Last Updated: Apr 15, 2021 4:15 PM
URL: https://guides.sll.texas.gov/weather-emergencies
Take a look at the person next to you. From the side, is their ear in line with their shoulder or when they walk through a door is their head getting through before the rest of their body? Now have a friend take a look at your posture from the side.
In the poster down below, the first sketch (on the left) represents “perfect” head posture. A line dropped from the centre of the external auditory meatus (EAM) would land directly in the centre of the shoulder (the tip of the acromion process). The graphic demonstrates the progression of forward head posture (occasionally referred to as “anterior head translation”).
According to Kapandji (Physiology of the Joints, Volume III), for every inch your head moves forward it gains 10 pounds in weight, as far as the muscles in your upper back and neck are concerned, because they have to work that much harder to keep the head (chin) from dropping onto your chest. This also forces the suboccipital muscles (they raise the chin) to remain in constant contraction, putting pressure on the three suboccipital nerves. This nerve compression may cause headaches at the base of the skull. Pressure on the suboccipital nerves can also mimic sinus (frontal) headaches.
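Kapandji's "10 pounds per inch" figure is a lever-arm effect, which a toy torque-balance model makes concrete. The numbers below (head weight, extensor moment arm) are illustrative assumptions for the sketch, not values taken from Kapandji or Cailliet:

```python
# Toy first-class lever model of the neck: the extensor muscles, acting a
# short distance behind the pivot, must balance the head's weight acting
# at its (forward-translated) moment arm. All values are illustrative.
HEAD_WEIGHT_LB = 12.0    # assumed average adult head weight
MUSCLE_ARM_IN = 1.5      # assumed extensor moment arm behind the pivot

def extensor_force(head_arm_in):
    # Torque balance about the pivot: F * MUSCLE_ARM = W * head_arm
    return HEAD_WEIGHT_LB * head_arm_in / MUSCLE_ARM_IN

neutral = extensor_force(1.0)   # ear roughly over the shoulder
forward = extensor_force(3.0)   # head translated two inches forward
extra_load = forward - neutral  # added muscular load from posture alone
```

Even with these rough numbers, two inches of forward translation triples the required extensor force, which is the sense in which the head "gains weight" as it moves forward.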
Rene Cailliet M.D., famous medical author and former director of the department of physical medicine and rehabilitation at the University of Southern California states: “Head in forward posture can add up to 30 pounds of abnormal leverage on the cervical spine.
This can pull the entire spine out of alignment. Forward head posture (FHP) may result in the loss of 30 per cent of vital lung capacity. These breath-related effects are primarily due to the loss of the cervical lordosis, which blocks the action of the hyoid muscles, especially the inferior hyoid responsible for helping lift the first rib during inhalation.”
Persistent forward head posture puts compression on the area of the spine through the shoulders. It is also associated with the development of Upper Thoracic Hump, which can evolve into Dowager Hump when the vertebrae develop compression fractures (anterior wedging). A recent study found this hyperkyphotic posture was associated with a 1.44 times greater rate of mortality.
Would you be surprised that your neck and shoulders hurt if you had a 20-pound watermelon hanging around your neck?
That’s what forward head posture can do to you. Left uncorrected, FHP will continue to get worse. Have your spine checked by a qualified corrective chiropractor to see if you have misalignments that are causing you to have a declining level of health. Our specialty is in reversing misalignments in your spine to prevent degeneration and decay and in reinvigorating the muscles that normally retract the head.
For more information, please contact [email protected] or call (02) 9418 9031. | https://arborage.com.au/perfect-posture/ |
When the application of a pesticide fails to deliver the desired control of a pest, the immediate thoughts are that something is wrong with the pesticide or that the pest has developed resistance. However, the cause may be something else entirely: water quality.
Water is the most common ingredient in pesticide applications. Water is an effective solvent for many pesticides and enables small amounts of pesticides to be applied uniformly over large areas.
Water makes up over 90% of most pesticide spray mixtures and is often taken for granted. Water quality is not usually considered given that water used for mixing pesticides comes from a variety of sources including rivers, lakes, ponds, and wells. The quality of water from these sources can vary widely in three critical areas: water pH, water hardness, and water turbidity.
Water pH
- pH is a value that describes the relative acidity or alkalinity of a solution. The pH scale runs from 0 – 14 where:
- A pH less than 7 is considered acidic
- A pH of 7 is neutral
- A pH greater than 7 is considered alkaline.
- Most pesticides perform best in water with a pH between 4 and 6.5. When the spray solution is outside this range, the pesticide may be hydrolyzed and degraded, and will not work as desired.
Water hardness
- Water hardness is a measure of the total concentration of positively charged calcium, magnesium, iron, sodium, and aluminum ions in water.
- Water hardness is measured in milligrams per liter (mg/l), parts per million (ppm), or grains per gallon (grains/gal).
- Hard water can be found in over 85% of US water resources.
- When hard water is used to mix pesticides, negatively charged pesticide molecules will combine with positively charged ions in hard water to create molecules that will either precipitate out of the solution, enter target pests at a slow rate, or cannot enter the target pest at all.
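The three hardness units above are easy to convert between: for water, 1 mg/L is effectively 1 ppm, and one grain per US gallon is about 17.1 mg/L. A quick sketch (the 7 grains/gal "hard water" threshold used below is a commonly cited rule of thumb, not a figure from this article):

```python
# Standard conversion factors: for water, 1 mg/L is effectively 1 ppm,
# and 1 grain per US gallon = 64.79891 mg / 3.78541 L ≈ 17.118 mg/L.
MG_PER_L_PER_GRAIN = 17.118

def grains_to_mg_per_l(grains_per_gal):
    return grains_per_gal * MG_PER_L_PER_GRAIN

def mg_per_l_to_grains(mg_per_l):
    return mg_per_l / MG_PER_L_PER_GRAIN

# A commonly cited "hard water" threshold of about 7 grains/gal:
hard = grains_to_mg_per_l(7)   # roughly 120 mg/L (ppm)
```

Knowing the reading in mg/L (or ppm) makes it easier to compare a lab report against adjuvant label rates, which are often stated per unit of hardness.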
Water turbidity
- Turbidity is a measure of the total suspended solids in water. It is the haziness of a liquid caused by suspended solids.
- Turbidity is usually caused by soil and organic matter, which can reduce the effectiveness of the active ingredients of some pesticides.
- Positively charged pesticide molecules are attracted to negatively charged particles found in water, making them unavailable for plant uptake.
- In addition, soil particles and organic matter will plug nozzles and screens leading to uneven spray patterns and time lost repairing equipment.
How to check water quality
- Determine the pH of your spray water before adding the pesticide. Check it several times a year.
- Determine the hardness of your water. Knowing only the hardness of the water being used for spraying may not be adequate. The pH level determines the amount of ions in the solution which can be tied up by hard water minerals. For an accurate hardness reading, take a water sample to a local lab.
- Always use clear water in spray tanks. Applicators can easily test turbidity by dropping a quarter into a five-gallon bucket filled with water. If the water is too cloudy to see the quarter, seek an alternative source of water for the spray mixture.
Some rules to follow to overcome issues with water quality
- Use pesticides that are least affected by water quality. For example, if an applicator wants to apply 2,4-D and the water pH is high, using 2,4-D ester instead of 2,4-D amine will yield the best result.
- Use nonionic adjuvants to solve water quality problems. For example:
  - AquabupH™ with Nitrogen is a nonionic buffering and conditioning agent that can:
    - Lower the pH of the spray solution to less than 4.
    - Combat all hard water cations that negatively affect pesticides.
  - Water Conditioner™ is a nonionic buffering and conditioning agent that can:
    - Lower the pH of the spray solution to less than 4.
    - Sequester cations like iron, calcium, and magnesium salts.
- Spray as soon as possible after adding the pesticide to the spray tank, or include an adjuvant that can maintain the spray mixture pH desired by an applicator. For example:
  - AquabupH™ with Nitrogen and Water Conditioner™ can maintain pesticide spray solution at a low pH for over 8 hours.
Further Reading
- Brewer 90-10 Surfactant for Excellent Vegetation Management
- Optimize Your Weed and Brush Control with Silnet 200 – A Superior Nonionic Surfactant
- Best Practices of Utility Vegetation Management
- Control of Woody Vegetation in Rights-Of-Way using Basal Applications
- Basal Bark Treatment: What is it and How is it Used?
References
- Whitford, F. et al. The Impact of Water Quality on Pesticide Performance: The Little Factor that Makes the Difference. Purdue Univ. Ext. Bull. PPP-86. https://www.extension.purdue.edu/extmedia/PPP/PPP-86.pdf [Accessed July 27, 2022].
Brewer International has been a leader in land and water chemistry since the 1980s and for over 40 years has proudly served its national and regional distributors.
Our products are used widely across the United States in agriculture, aquatics, forestry, rights of way, and land management.
Our customers trust our dedication to quality ingredients, tried and true formulas, and positive outcomes. | https://brewerint.com/news-insights/101-guides/how-does-water-quality-affect-pesticide-effectiveness/ |
This season, themed Unlikely, will focus on moving from a conversation about other faiths – like isolated posts on Twitter – to a conversation with other faiths: a conversation that allows us to hear from those of different faiths, different worldviews, and different ideas that shape the way we communicate. Whether you’re a religious leader like a Pastor, Imam or Rabbi, or a person of faith, this is your chance to understand the realities of faith in the 21st century.
Episode 2 is with renowned Christian author Christine Caine. Christine has a heart for reaching people, strengthening leadership, and championing the cause of justice. Together with her husband, Nick, she founded the anti-human trafficking organization The A21 Campaign. They also founded Propel Women, an organization designed to celebrate every woman’s passion, purpose, and potential. Her most recent book, How Did I Get Here?, is available now.
Christine will be one of the keynote speakers at the Global Faith Forum Unlikely conference in March 2022. It's a delightful conversation that we feel you will find valuable.
Notes:
A21 – a21.org
Register for the Global Faith Forum – globalfaithforum.com
MultiFaith Neighbors Network – mfnn.org
About Pastor Bob Roberts Jr
Dr. Bob Roberts, Jr. is the founder of GlocalNet, a non-profit dedicated to mobilizing the church for transformation in the public square, founder and chairman of Glocal Ventures Inc (GVI) and co-founder of Multi-Faith Neighbors Network (MFNN), a multifaith organization committed to creating international religious freedom through intentional cross-cultural relationships. He is also currently the Senior Global Pastor at Northwood Church and host of the Bold Love podcast.
Bob has contributed or been featured on the World Economic Forum, Fox Business Channel, Washington Post, New York Times, Huckabee Show, Religious News Service, C-Span, Templeton Religions Trust, El-Hibri, Christianity Today, Outreach Magazine and more.
Bob is a graduate of Fuller Theological Seminary (Doctorate of Ministry), Southwestern Baptist Theological Seminary (Masters of Divinity), and Baylor University (BA). He and his wife Niki have two children and three grandchildren.
2 mentions in Florence, Venice & Milan: Day Trips & Local Hangouts (food & music)?
Jaclyn R. said, "...San Lorenzo Market. Too touristy and busy. Too many gypsies. My favorite square is Piazza della Signoria, Piazza della Repubblica, and Piazzale Michelangelo. Good shopping near Santa Croce, Borgo Pinti, and on the back streets near the Ponte Vecchio. Day trips--..."
Russell G. said, "...many sights are closed on Mondays, plan accordingly. Florence: If you have an especially clear day, take the city bus to Piazzale Michelangelo. Try Trattoria Cibrèo for lunch or dinner. Mercato Centrale Firenze Srl is well worth the visit (get supplies..."
1 mention in How expensive is Rome and Florence?
Alessia C. said, "...is a small city so you can explore it also by walk (from the inner centre you can reach the panoramic Piazzale Michelangelo passing through the Oltrarno district. In the inner centre of Florence there are a lot of nightclubs where the..."
1 mention in What to do, see, and eat in Florence on a honeymoon?
Caterina P. said, "...those restaurants because I like them). 2) Since you'll be on your honeymoon, here is also a list of romantic spots: Piazzale Michelangelo (don't take the bus, enjoy a walk from Via di San Niccolò so that you can also visit the..."
1 mention in Which towns in Tuscany should we see?
Tiffany W. said, "...you enjoy art. The museums are amazing. Do walk to the other side of the river and see the view from Piazzale Michelangelo and stop to eat at Borgo Antico . (My Florence blog post) We really enjoyed San Gimignano . You..."
1 mention in Fun daytime and nightlife places and activities in Florence
Jillian G. said, "...You must walk the Ponte Vecchio, and picnic atop Piazzale Michelangelo for beautiful views. Stop and grab a coffee at Caffè Le Torri and for dinner try one of my favories Golden View Open Bar -their gnocchi is out of this world!..."
Teamwork Starts at the Top
When the topic of teams and collaboration comes up, many think that this is the domain of people in the middle and at the bottom of the organization. Nothing could be further from the truth. Certainly teamwork happens on the Quality Committee, with direct patient care or in the marketing department. But senior-level collaboration is required to foster and sustain a well-aligned organization able to withstand the pressures evident in today’s competitive, heavily regulated long-term care environment.
This article challenges you, the senior executive, to view your leadership role as that of the organization’s premier team leader, setting the example for collaboration that positively impacts census, employee retention, and revenue.
Collaborative leadership requires certain core competencies, the first of which is humility. While you may hold the highest position in the organization, collect the largest bonus, and have the ability to hire and fire at will, you must develop the capacity to listen, learn, reflect, and acknowledge mistakes. It is through these actions that you will be able to connect with the other senior managers who execute strategy, the middle managers who interpret it, and the care providers who make it real on a daily basis.
The next core competency is the aptitude to build, communicate, and sustain alignment. Collaboration does not happen in organizations that do not have alignment between core values and modus operandi. Alignment, the intentional congruence between all aspects of the long-term care organization, is a constant struggle for many executives. They are pulled between what is expedient to meet demands of regulators, what is best for the bottom-line, and what is needed to meet high standards of care. Reaching the delicate balance, and maintaining it, requires a support system characterized by mutual commitment to core values and consensus on the strategic approach for making those values actions in every part of the organization.
The recently released book Rules of Engagement: Timeless Tips for Team Leaders provides 46 strategies for team leaders at every level.1 They are all applicable in the long-term care environment. Several are most applicable to senior executives interested in transforming stagnant, hierarchical cultures to be more collaborative, communicative, and customer-driven. Three of those rules are discussed here.
Rules of engagement
It is logical that we begin with the rule that addresses alignment, which is critical to successful collaboration. The rule is stated as such: Align behaviors with core values. What do you believe in? What do you, at the core of your being, know to be a truth? What are the essential components of your character? Take the time to answer these questions. They reveal your core values.
When you answer them you may come up with things like honesty and integrity. You may come up with hard work, sacrifice, and humility. A focus on family, maintaining meaningful relationships and connectedness may surface as your critical values.
Whatever your values, your behavior should be congruent with them. Your decisions should reflect the beliefs that you hold true. Your relationships with colleagues must also reflect these principles. This alignment benefits you, your team, and everyone else you encounter. People know what to expect from you. They know that you will be consistent. More important, they know that when they interact with you they will not experience hypocrisy or deceit. Your colleagues will know that everything you do is clearly aligned with who you are at your core.
This principle-based alignment occurs when our values are consistent with the organization’s values. When we believe in the mission, values, and strategic intent of the enterprise, we are more likely to experience a connectedness and congruence. Alignment between your values and your actions is the first step.
The next step is being certain that you are working in an organization where your beliefs are consistent with those of the organization. Absent congruence, you will be working at cross-purposes internally and within the larger context of your organization. How can you possibly lead a long-term care organization without empathy for the elderly? You may succeed at revenue generation but you will never be able to take the company to the highest level of care, compassion, and customer service. You will also be very limited in setting a corporate vision that adequately addresses the needs of this very vulnerable population.
The next rule, Have a strategic focus, connects collaboration to the bottom-line intent of the enterprise. The organization’s strategy is the essential guide for any and every meaningful activity. For you to adequately direct strategy, you must first understand it from multiple perspectives. You must understand the differences between the ways a DON, a housekeeper, and an administrative person perceive the organization and its direction. Interpretations of strategy vary based on a person’s position within the organization. As the senior executive, you must be able to communicate strategy in terms that everyone can comprehend. Once staff comprehends strategy, they can make the necessary adjustments to enact it through their daily work.
Executives often assume that if no one else grasps strategic intent, at least senior managers get it. Nothing could be further from the truth. Many CEOs feel frustration when they step back and watch how their direct reports execute strategy. They find that interpretations are askew and, as a result, misalignments abound. They find departments operating at cross-purposes, wasted resources, and widespread inefficiency.
A common example is the long-term care facility that touts care and service but only shapes up its operation in preparation for surveys and other regulatory inspections. While the CEO who is espousing care and service as core values for the company may truly believe this, it is the actions of his direct reports and their teams that create disconnects and misalignments. These disconnects build a level of accepted hypocrisy that negatively impacts not only the reputation of that company, but the esteem of the long-term care profession.
A key leadership responsibility, which can be overwhelming, is to both communicate strategy consistently throughout the organization and to ensure its integration in every activity. Many senior leaders make the mistake of launching hundreds of projects and processes without making clear connections with the organization’s strategy. If that sounds familiar there are corrective measures that you can take.
The first is an organizational analysis that identifies, examines, and evaluates each and every process. The second is recruiting a seasoned organizational development professional who has the capacity to partner with you on creating sustainable alignment. The third, and most risky, is a cease and desist order. When executives issue cease and desist orders, they stop all projects and activities that cannot be directly linked to supporting organizational strategy.
The final rule of engagement that will increase your chances of success at collaborative leadership is simply put: Respond. This means, don’t just respond to attorneys requesting medical records or state regulators and disgruntled families threatening litigation. Get in the habit of responding to inquiries from staff, no matter what their title or position. Respond to suggestions from colleagues. “Responding” doesn’t mean endorsing, agreeing, or committing. A response is simply an acknowledgment.
Skillful collaborative leaders understand that the mere act of responding is a sign of respect. They also are smart enough to take it a bit further by considering requests carefully and providing informed responses. This means that when your Staff Development Coordinator stops you in the hall with her idea for a new in-service session, you will listen with an encouraging attitude. She may not be blindsiding her boss, as a suspicious mind may think. She may, in fact, be excited about the possibilities that this in-service can offer for the company. Your ability to be approachable keeps you in the loop of creative, innovative developments that can not only foster strategic intent, but also position your company as a trendsetter in the long-term care industry.
Another element of the responsiveness that builds collaboration is honesty. Honesty includes everything from admitting that you don’t understand a concept, to saying that you are too busy to deal with an issue at this time. It also includes clearly stating reservations, objections, or feelings of discomfort. Every response will not be in the affirmative. Leaders able to collaborate understand the importance of not just being responsive, but of being authentic in those responses.
Conclusion
Collaboration is as much the domain of senior executives as of middle managers. It requires executives to rethink their interpretations of team development and its importance to strategy and alignment. The rules of collaborative engagement provide a framework for connecting a more cooperative approach to leadership with indicators of corporate success. Executives with the ability to make connections at the peer level and beyond have greater chances of creating enterprises that are not only financially successful, but also able to maximize utilization of the human element. Purposeful collaboration can deliver customer satisfaction, enhanced reputation, and employee retention.
Joanne L. Smikle is a consultant specializing in leadership development and collaboration. Clients include Miller’s Health Systems, Opis Management Resources, Arizona Health Care Association, and more. For further information, call (301) 596-3140 or visit
To send your comments to the editor, e-mail [email protected].
Reference
- Smikle JL. Rules of Engagement: Timeless Tips for Team Leaders. Simpsonville, MD: The Practical Press. Available at: https://www.thepracticalpress.net.
These were just some of the headlines that were being floated around by almost every single football news outlet back in February of 2021. Marcos Llorente was on top of the world, and had become a superhuman midfielder that blended incredible physical attributes with a brilliant footballing brain and superb end-product.
But up until his move to Atlético, the midfielder had gone from promising youngster to unhappy benchwarmer at Real Madrid, and seemed destined for a respectable – if unspectacular – career in top-flight Spanish football.
Then came that fateful night on 12th March of 2020, when Los Rojiblancos faced off against Liverpool in a cut-throat Champions League knockout game. The first leg at the Wanda Metropolitano was a tight affair decided by an early Saúl goal, with the Spanish side taking a 1-0 lead to Anfield. Liverpool went on to dominate the second leg for 90 minutes, and deservedly drew level on aggregate with a customary Wijnaldum header in the 43rd minute. But they failed to find a winner as Jan Oblak helped Atlético weather the storm and take the tie to extra time.
And that’s when all hell broke loose.
Firmino scored his first goal of the season at Anfield with 94 minutes on the clock to give Liverpool the lead in the tie, sparking raucous celebrations from the players and fans alike. Then out of nowhere, an Adrian mistake allowed Llorente to receive the ball outside the box and smash one in the far corner to subdue Anfield, driving the tie in favour of Atléti owing to the away goals rule. 9 minutes later he scored again, attacking the box, dropping his shoulder left to wrong-foot Jordan Henderson, then nestling it in the same corner to hand his side the lead outright. And with Liverpool in all-out-attack mode, he and Morata combined exquisitely to capitalise on acres of space to produce a 3rd for Atléti in stoppage time. Marcos promptly named his new dog 'Anfield' a few days later, infuriating the Liverpool faithful and making his pet a permanent personal reminder of his exploits that momentous night.
Without context, Llorente’s performance would only seem very good. But the fact that these were just his 7th and 8th goals and 9th assist as a 25-year-old professional footballer made it something special, a coming-of-age match on the biggest stage that vindicated Atléti’s £35M investment in him. This looked to be a turning point, as the goals and assists slowly started to pick up after football’s return from lockdown, and the future looked bright. But what transpired the next season was beyond anyone’s expectation.
Llorente contributed 23 goals and assists in La Liga alone as Atlético pipped Real to the title by two points. His 12 goals were without penalties and his 11 assists were without corners or free kicks. He played everywhere, with Simeone starting him at right-back, right wing-back, right-mid, right-wing, centre defensive-mid, centre-mid, centre attacking-mid, second-striker, and striker over the course of the season.
He had finally established himself as a superstar after years of being consistently unspectacular, and it was just the start. Llorente was in peak physical condition and in his footballing prime, at a team tipped to finally usurp Barça and Real and become Spain's premier sporting superpower.
Yet almost exactly a year after all those sensational headlines flooded the news, Llorente looks to have reverted to his pre-Atletico days. His creative numbers are suffering slightly, he is less productive defensively, and his goal threat has gone from elite to appalling.
There are two main reasons that appear to explain this: injuries and his position. Llorente has not had an extended run of full fitness from October onwards, and after that point he seems to have shifted a little deeper on the pitch. Simeone has also yet to utilise him close to the striker, which is where his goalscoring and assisting was at its absolute peak.
But if you look a little closer, a pattern starts to emerge. Llorente’s underlying and expected numbers have dipped a little, but hardly as much as his cold, hard output in front of goal has.
fbref.com
Almost every single underlying creative number has decreased, but not by a significant margin. And the fact that his touches and crosses have increased is consistent with a change driven by position rather than performance; full-backs tend to have a lot of the ball and a big hand in getting crosses in, so their touches and crosses are naturally high, whilst their other creative numbers are usually lower than a midfielder's.
His underlying goal threat has also not suffered too significantly.
fbref.com
The drop off might seem large, but his xG (Appendix One) per 90 has fallen by 0.10 and his xA (Appendix One) per 90 by 0.06, which is the equivalent of only 3 goals and 2 assists over a full season. Again, this can be put down to position rather than performance.
But then why is he currently 13 goal contributions behind where he was at the same point last season?
fbref.com
The simple answer: he overperformed his expected numbers in 20/21. Massively. His goals, assists, goal-creating actions, shots on target, shot on target %, goals/shot, and goals - xG were all in the 92nd percentile or above. It is simply impossible to sustain this for an extended period of time, and the fact that he managed to do it for an entire year is an incredible feat in and of itself. Just for context, if Lewandowski overperformed his expected stats in the same way Llorente did, he would have ended 20/21 with 84 goals (Appendix Two) in 29 games in the Bundesliga, more than double his already record-breaking 41, and almost a hat-trick a match. Those are the sort of numbers elite clubs produce as a whole, forget individuals.
Llorente scored 12 off of just 4.3 expected goals that year. This means that, over the course of the season, an average finisher would have scored around 4 goals from the shots he took – instead he scored 8 more. The story is the same for his assist numbers; 11 off of 5.2 expected. The cumulative quality of the chances he created was 5.2, which means that if his teammates were average finishers, they would have converted his passes into 5 goals. Instead, they did much better than they should have, scoring 6 more.
fbref.com
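This over/underperformance is just actual output minus expected output. A quick Python check of the 20/21 figures quoted above (12 goals on 4.3 xG, 11 assists on 5.2 xA):

```python
def overperformance(actual: float, expected: float) -> float:
    """Goals (or assists) beyond what an average finisher
    would produce from the same chances."""
    return actual - expected

goals_above_xg = overperformance(12, 4.3)    # Llorente's own finishing
assists_above_xa = overperformance(11, 5.2)  # his teammates' finishing
finishing_ratio = 12 / 4.3                   # goals per expected goal

print(round(goals_above_xg, 1))    # 7.7 -> the "8 more" in the text
print(round(assists_above_xa, 1))  # 5.8 -> the "6 more" in the text
print(round(finishing_ratio, 2))   # 2.79, the figure used in Appendix Three
```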
So the conclusion is clear; Marcos Llorente did not magically become a midfielder with world-class attacking output overnight. He happened upon an (extremely) extended period of unbelievable finishing, and relied on his teammates to do much the same from his passes. It is also not like he became significantly worse the next season. The drop off in some of his numbers is mostly down to a shift to a more defensive position, and his finishing underwent a foreseeable regression to the mean.
This is not to take away from his 20/21 stats. Llorente might have had a little luck, but the quality of his finishing for that one season was a once-in-a-lifetime event.
Once-in-a-lifetime is not thrown around as a casual term either – there is substantial evidence to show it is true. According to barcanumbers, Messi’s ridiculous 12/13 had him scoring 2.39 goals/expected goal, a one in 1.6 million event. Llorente scored 2.79 goals/expected goal. It is so far off the mean that using its z-score (the number of standard deviations it is from the mean) would be basically pointless because it is such an anomaly. But to humour this path of logic anyways, using Llorente’s finishing as 6.27 standard deviations from the mean it can be calculated that this was a one in 2,769,715,066 event (Appendix Three). The caveat here is that his numbers are of a relatively small sample size. Although it was over the course of the entire season, he only accumulated 4.3 expected goals; not a big number by any stretch of the imagination.
Taking all of this into account, one can definitively say that Marcos Llorente did not really ‘rise’ or ‘fall’. Yes, on face value his goal contribution numbers went from mediocre to astonishing and back to mediocre again, but the actual quality of his performances have continued to be largely the same.
Marcos Llorente is top of the world no more, but he remains an excellent midfielder that blends incredible physical attributes and a brilliant footballing brain without superb end-product.
.
(one) – For those unfamiliar with xG (expected goals), it basically measures the quality of a shot by viewing thousands of shots from similar locations and in similar scenarios over the past few years and seeing how often they hit the back of the net. If 3 in a 100 similar shots are scored, the xG of that shot will be 0.03. xG per 90 would be the sum of all the xG that came about from each shot during an average 90 minutes for a player, so if a player takes 3 shots with xG values of 0.01, 0.23, and 0.17, his xG for that match will be 0.41.
xA (expected assists) works in much the same manner. It measures the quality of a chance, by viewing thousands of chances that resulted from similar passes in similar scenarios over the past few years and seeing how often that pass directly results in a goal. If 5 in a 100 similar chances from a specific pass were scored, the xA of that pass would be 0.05. xA per 90 would be the sum of all the xA that came about from each pass during an average 90 minutes for a player, so if a player makes 50 passes in a match, 10 of which create chances worthy of a value (passing to your goalkeeper technically has an xA value, but it is negligible since a goal resulting from that pass is a one-in-a-million event), the xA of that match will be the sum of the xA values from each of those 10 passes.
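The per-90 bookkeeping described in this appendix is just a running sum of per-event values. A minimal sketch in Python, using the made-up shot values from the example above rather than real fbref data:

```python
def per_90_total(event_values: list[float]) -> float:
    """Sum per-event expected values (xG for shots, xA for passes)
    to get a player's total for one 90-minute match."""
    return round(sum(event_values), 2)

# The three shots from the example above, worth 0.01, 0.23 and 0.17 xG
match_xg = per_90_total([0.01, 0.23, 0.17])
print(match_xg)  # 0.41
```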
(two) – 84 goals was calculated by using Llorente's overperformance per shot, multiplying that number by Lewandowski's xG per Shot, then multiplying the result by his Total Shots per 90, then finally by the 90s he played in total.
Llorente's overperformance per shot is equal to his (Goals per Shot)/(Expected Goals per Shot), which was (0.24/0.09), or 2.66.
Lewandowski's xG per Shot is equal to his (xG per 90)/(Total Shots per 90), which was (1.16/4.76), or 0.244.
Multiply Llorente's overperformance per shot (2.66) by Lewandowski's xG per Shot (0.244), and you get Lewandowski's Goals per Shot if he were finishing with the same quality as Llorente (0.645).
Multiply Lewandowski's new Goals per Shot (0.645) by his Total Shots per 90 (4.76) and the result is his Goals per 90 if the Polish international had Llorente's quality of finishing (3.09).
The final step is to multiply the new Goals per 90 (3.09) by the total 90s Lewandowski played in the Bundesliga that season, which is equal to Minutes Played (2458) divided by 90, or 27.31.
The end result is 84.48, and once you round down, gives you the hypothetical goals Lewandowski would have scored in the Bundesliga in 2020/21 with Llorente's finishing, at 84.
∴ (Lewandowski Minutes Played/90) * ((Llorente Goals per Shot/Llorente xG per Shot) * (Lewandowski xG per 90/Lewandowski Total Shots per 90) * Lewandowski Total Shots per 90) = Lewandowski’s total goals when shooting with Llorente’s efficiency
∴ (2458/90) * ((0.24/0.09)*(1.16/4.76)*4.76) = 84.48
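The chain of multiplications above is easy to verify in code; all inputs are the fbref 2020/21 figures quoted in this appendix:

```python
# Llorente's finishing efficiency: goals per shot over xG per shot
llorente_overperf = 0.24 / 0.09        # ≈ 2.67x an average finisher

# Lewandowski's 2020/21 Bundesliga shot profile
lewa_xg_per_shot = 1.16 / 4.76         # xG per 90 divided by shots per 90
lewa_shots_per_90 = 4.76
lewa_90s_played = 2458 / 90            # minutes played divided by 90

# Hypothetical total: Lewandowski's shots finished at Llorente's level
goals = lewa_90s_played * (llorente_overperf * lewa_xg_per_shot * lewa_shots_per_90)
print(round(goals, 2))  # 84.48
print(int(goals))       # rounds down to the 84 quoted in the article
```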
(three) – To convert standard deviations to probabilities, the empirical rule was applied. The Expected Fraction of Population inside Range formula (erf(x/√2) was utilised, where x = 6.27.
The result is 0.999999999638952, and from there to find the event probability simply plug in the result into (1/(1-x), to find an answer of approximately 2,769,715,066.
Therefore Llorente's quality of finishing relative to his Expected Goals over the course of the 2020/21 La Liga season was a one in 2,769,715,066 event. | https://www.thefootballnotebook.com/post/the-rise-and-fall-of-marcos-llorente |
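The probability in this appendix can be reproduced with the standard library's error function. Digits past the first few depend on floating-point precision, so treat the trailing figures loosely:

```python
import math

z = 6.27  # standard deviations above the mean finisher
fraction_within = math.erf(z / math.sqrt(2))  # share of a normal population inside ±z
one_in_n = 1 / (1 - fraction_within)

print(f"{one_in_n:.3e}")  # on the order of 2.8e9 -> roughly a one in 2.8 billion event
```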
"Happy families are the same because their children constantly improve their lives."
Parents always want their children to lead a healthy and beautiful life and step into a successful future. For this reason, they devote both their material and emotional resources to making that happen.
In the growth phase, children still need care - parents should pay special attention to their sleeping patterns, feeding style, and psychological condition.
As children's personalities form, they begin to understand the events they see around them, draw conclusions from those events, and ultimately react to their environment in their own way, developing either helpful or unhelpful approaches.
For this reason, it is very important to monitor and understand your children's relationship with the world around them at home. If they live in an apartment with a comfortable, calm, and beautiful aura, they will be cheerful and happy. Of course, kindness in the family and the positive characteristics of the house they live in will play a unique role in this result.
Having their own rooms allows children to spend time independently of adults and discover themselves in a quiet environment. Research suggests that when children spend time in their rooms during the day, their imagination and free-thinking skills develop. The games that continue throughout the day, the tasks they complete, and a comfortable room where they can relax all give them invaluable moments.
When buying a new house or expanding an apartment, parents may have questions about how to properly design their children's rooms and how to ensure their healthy growth.
You can design the ideal room for your children by taking into account the nuances mentioned below:
1) Choice of color and quality of wallpapers
Sometimes, when decorating children's rooms, parents make mistakes in the choice of wallpaper, which matters greatly to the look of the room. Here, the right choice may be to move away from plain, dark, classic tones and prefer livelier, more unusual, beautifully illustrated wallpapers. When making the final choice, finding out which color your child likes best will lead to success.
Another issue when choosing wallpaper is its quality. Although this point is not taken into account by many, it plays a key role in children's health. Wallpapers made of natural substances, made with water-based paints, and at the same time environmentally friendly will not harm your children's health.
Finally, using antibacterial and breathable wallpaper will reduce the possibility of babies getting respiratory diseases in the future.
2) Gender-appropriate color selection in wallpapers
In recent years, choosing colors common to both genders has been recommended when decorating children's rooms, so that children feel equal to one another. If you have a girl and a boy, it is better to choose a color that suits both of them.
3) It is not necessary to fill the children's room with many things.
Filling their room with as many items as possible can cause distraction and make the room look cluttered. At the same time, beds should be suitable for children's height and weight. The choice of orthopedic beds will have a positive effect on the spinal and muscular systems of children. In addition, large wardrobes should be abandoned, a minimalist style should be adopted.
4) Location of the children's room
Children's access to fresh air and plenty of sunlight for their development is the greatest contribution we can make to their healthy lives. Frequent changes in the air, lighting, and spaciousness of the rooms will lead to positive changes in their mood. Therefore, when choosing an apartment, it is not a bad idea to pay attention to the area where the children's rooms are located, to ensure that the place is airy.
5) Building a small library in the children's room
Adding a bookshelf to the children's room will play a role in the advancement of children's reading abilities, and will make it possible for them to value their time positively. Children will use their time effectively, away from activities that may be harmful to them.
6) The spaciousness of the children's room allows children to move comfortably and freely and spend their free time.
A spacious room will make your child feel comfortable, free from the cramped space, and enthusiastically pursue their pursuits. No matter how old they are, their room will always be their playground. No doubt that comfort will always be their priority.
Sometimes, the small rooms allocated to children can hurt their imagination and ability to dream. Children cannot move comfortably in cramped rooms, and as they grow up they often argue with each other and long for a better space.
The hour is a unit of time. It is not an SI unit but is accepted for use with the SI.
Definition
In modern usage, an hour is a unit of time 60 minutes, or 3,600 seconds, in length. It is approximately 1/24 of a median Earth day.
Etymology
Middle English"ure" first appears in the 13th century, as a loanword from Old French"ure, ore", form Latin "hora", ultimately from Greek _gr. ὥρα "season, time of day, hour". Middle English "ure", Anglo-French"houre" replaced Old English"tíd" (which survives as Modern English " tide") and "stund" ( Old High German"stunta", from a Germanic "*stundō" "time, interval, while").
Greek ὥρα is cognate to English "year", both from a PIE root "*i̯ēro-" "year, summer".
History
The hour was originally defined in ancient civilizations (including those of Egypt, Sumer, India, and China) as either one twelfth of the time between sunrise and sunset or one twenty-fourth of a full day. In either case the division reflected the widespread use of a duodecimal numbering system. The importance of 12 has been attributed to the number of lunar cycles in a year, and also to the fact that humans have 12 finger bones (phalanges) on one hand (3 on each of 4 fingers). [Nishikawa, Yoshiaki. "ヒマラヤの満月と十二進法 (The Full Moon in the Himalayas and the Duodecimal System)", 2002. http://www.kankyok.co.jp/nue/nue11/nue11_01.html. Accessed 2008-03-24.] (It is possible to count to 12 with your thumb touching each finger bone in turn.) There is also a widespread tendency to make analogies among sets of data (12 months, 12 zodiacal signs, 12 hours, a dozen).
The Ancient Egyptian civilization is usually credited with establishing the division of the night into 12 parts, although there were many variations over the centuries. Astronomers in the Middle Kingdom (9th and 10th Dynasties) observed a set of 36 decan stars throughout the year. These star tables have been found on the lids of coffins of the period. The heliacal rising of the next decan star marked the start of a new civil week, which was then 10 days. The period from sunset to sunrise was marked by 18 decan stars. Three of these were assigned to each of the two twilight periods, so the period of total darkness was marked by the remaining 12 decan stars, resulting in the 12 divisions of the night. The time between the appearance of each of these decan stars over the horizon during the night would have been about 40 modern minutes. During the New Kingdom, the system was simplified, using a set of 24 stars, 12 of which marked the passage of the night.
Earlier definitions of the hour varied within these parameters:
* One twelfth of the time from sunrise to sunset. As a consequence, hours on summer days were longer than on winter days, their length varying with latitude and even, to a small extent, with the local weather (since it affects the atmosphere's index of refraction). For this reason, these hours are sometimes called "temporal", "seasonal", or "unequal hours". Romans, Greeks and Jews of the ancient world used this definition, as did the ancient Chinese and Japanese. The Romans and Greeks also divided the night into three or four night watches, but later the night (the time between sunset and sunrise) was also divided into twelve hours. When, in post-classical times, a clock showed these hours, its period had to be changed every morning and evening (for example by changing the length of its pendulum), or it had to keep to the position of the Sun on the ecliptic (see the Prague Astronomical Clock).
* One twenty-fourth of the apparent solar day (between one noon and the next, or between one sunset and the next). As a consequence hours varied a little, as the length of an apparent solar day varies throughout the year. When a clock showed these hours it had to be adjusted a few times in a month. These hours were sometimes referred to as "equal" or "equinoctial" hours.
* One twenty-fourth of the mean solar day. See mean sun for more information on the difference from the apparent solar day. When an accurate clock showed these hours it virtually never had to be adjusted. However, as the Earth's rotation slows down, this definition has been abandoned. See UTC.
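The seasonal-hour arithmetic in the first definition above is easy to make concrete: one temporal hour is one twelfth of the daylight span. A hedged sketch (the function name is mine):

```python
from datetime import datetime, timedelta

def seasonal_hour_length(sunrise, sunset):
    """Length of one 'temporal' (seasonal) daytime hour: 1/12 of daylight."""
    return (sunset - sunrise) / 12

# A long summer day: 16 h of daylight -> each seasonal hour lasts 80 min.
summer = seasonal_hour_length(datetime(2016, 6, 21, 5, 0),
                              datetime(2016, 6, 21, 21, 0))
assert summer == timedelta(minutes=80)

# A short winter day: 8 h of daylight -> each seasonal hour lasts 40 min.
winter = seasonal_hour_length(datetime(2016, 12, 21, 8, 0),
                              datetime(2016, 12, 21, 16, 0))
assert winter == timedelta(minutes=40)
```

This is why, as the text notes, summer hours were longer than winter hours, with the spread growing at higher latitudes.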
Counting hours
There are different ways of counting the hours:
* In ancient and medieval cultures, in which the division between night and day mattered far more than in societies with widespread use of artificial light, the counting of hours started with sunrise. So sunrise was always exactly at the beginning of the first hour (the "zero" hour), noon at the end of the sixth hour and sunset exactly at the end of the twelfth hour. This meant that the length of hours varied with the season. This type of counting is sometimes referred to, on astrolabes and astronomical clocks for example, as "Babylonian" or "temporal" hours. It is also the system used in Jewish religious law (Halakha) and frequently called the "Talmudic hour" ("Sha'a Zemanit") in a variety of texts. The Talmudic hour is one twelfth of the time elapsed from sunrise to sunset, and is therefore longer in summer than in winter.
* In so-called "Italian time", or "Italian hours", the first hour started with the Angelus at sunset (or the end of dusk, i.e., half an hour after sunset, depending on local custom and geographical latitude). The hours were numbered from 1 to 24. For example, in Lugano the Sun rose in December during the 14th hour and noon fell during the 19th hour; in June the Sun rose during the 7th hour and noon fell in the 15th hour. Sunset was always at the end of the 24th hour. The clocks in church towers struck only from 1 to 12, thus only during night or early morning hours. This manner of counting hours had the advantage that everyone could easily see how much time they had to finish their day's work without artificial light. It was already widely used in Italy by the 14th century and lasted until the mid-18th century (it was officially abolished in 1755), or in some regions customarily until the mid-19th century. [There is a trace of that system, for instance, in Verdi's operas: in "Rigoletto" and in "Un ballo in maschera" midnight is announced by the bell striking 6 times (not 12 as we are accustomed to today). But in his last opera, "Falstaff", he abandoned that style, perhaps under the influence of contemporary trends at the end of the 19th century when he composed it, and the midnight bell strikes 12 times.] It was also used in Poland and Bohemia until the 17th century. The system of Italian hours can be seen on a number of clocks in Italy, where the dial is numbered from 1 to 24 in either Roman or Arabic numerals. The St Mark's Clock in Venice is a famous example.
* The medieval Islamic day began at sunset. The first prayer of the day (maghrib) was to be performed between sunset and the end of twilight.
* In the modern 12-hour clock, counting the hours starts at midnight and restarts at noon. Hours are numbered 12, 1, 2, ..., 11. Solar noon is always close to 12 noon, differing according to the equation of time (by up to about fifteen minutes either way). At the equinoxes sunrise is around 6 A.M. ("ante meridiem", "before noon"), and sunset around 6 P.M. ("post meridiem", "after noon").
* In the modern 24-hour clock, counting the hours starts at midnight and hours are numbered from 0 to 23. Solar noon is always close to 12:00 (again differing according to the equation of time). At the equinoxes sunrise is around 06:00 and sunset around 18:00.
* For many centuries, up to 1925, astronomers counted the hours and days from noon, because it was the easiest solar event to measure accurately. An advantage of this method (used in the Julian Date system, in which a new Julian Day begins at noon) is that the date doesn't change during a single night's observing.
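The different counting conventions above amount to small pieces of modular arithmetic. A sketch, with illustrative function names of my own, converting a 12-hour reading to the 24-hour clock and locating a moment within the 1-24 "Italian hours" that run from sunset to sunset:

```python
def to_24h(hour12, pm):
    """12-hour clock (hours 12, 1..11 plus AM/PM) -> 24-hour clock (0..23)."""
    return hour12 % 12 + (12 if pm else 0)

assert to_24h(12, pm=False) == 0    # 12 AM -> midnight, 00
assert to_24h(9, pm=False) == 9     # 9 AM  -> 09
assert to_24h(12, pm=True) == 12    # 12 PM -> noon, 12
assert to_24h(6, pm=True) == 18     # 6 PM  -> 18

def italian_hour(hours_since_sunset):
    """'Italian hours': the first hour begins at sunset and
    sunset falls at the end of the 24th hour."""
    return int(hours_since_sunset) % 24 + 1

assert italian_hour(0.5) == 1       # shortly after sunset: the 1st hour
assert italian_hour(23.9) == 24     # just before the next sunset: the 24th hour
```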
Sunrise and sunset are much more conspicuous points in the day than noon or midnight; starting to count at these times was, for most people in most societies, much easier than starting at noon or midnight. However, with modern astronomical equipment (and the telegraph or similar means to transfer a time signal in a split-second), this issue is much less relevant.
Astrolabes, sundials, and astronomical clocks sometimes show the hour length and count using some of the older definitions and counting methods.
See also
* Canonical hours
References
* "Astronomy before the telescope". Ed. Christopher Walker. London: British Museum Press, 1996.
Further reading
* Gerhard Dohrn-van Rossum, "History of the Hour: Clocks and Modern Temporal Orders", University of Chicago Press, 1996. ISBN 0226155102.
Zoning regulations; when authorized; powers; manufactured homes; limitation of jurisdiction.
(1) The county board shall have power: (a) To create a planning commission with the powers and duties set forth in sections 23-114 to 23-114.05, 23-168.01 to 23-168.04, 23-172 to 23-174, 23-174.02, 23-373, and 23-376; (b) to make, adopt, amend, extend, and implement a county comprehensive development plan; (c) to adopt a zoning resolution, which shall have the force and effect of law; and (d) to cede and transfer jurisdiction pursuant to section 13-327 over land otherwise subject to the authority of the county board pursuant to this section.
(2) The zoning resolution may regulate and restrict: (a) The location, height, bulk, number of stories, and size of buildings and other structures, including tents, cabins, house trailers, and automobile trailers; (b) the percentage of lot areas which may be occupied; (c) building setback lines; (d) sizes of yards, courts, and other open spaces; (e) the density of population; (f) the uses of buildings; and (g) the uses of land for agriculture, forestry, recreation, residence, industry, and trade, after considering factors relating to soil conservation, water supply conservation, surface water drainage and removal, or other uses in the unincorporated area of the county. If a zoning resolution or regulation affects the Niobrara scenic river corridor as defined in section 72-2006, the Niobrara Council shall act on the measure as provided in section 72-2010.
(vi) The home shall have wheels, axles, transporting lights, and removable towing apparatus removed.
(b) The county board may not require additional standards unless such standards are uniformly applied to all single-family dwellings in the zoning district.
(c) Nothing in this subsection shall be deemed to supersede any valid restrictive covenants of record.
(4) For purposes of this section, manufactured home shall mean (a) a factory-built structure which is to be used as a place for human habitation, which is not constructed or equipped with a permanent hitch or other device allowing it to be moved other than to a permanent site, which does not have permanently attached to its body or frame any wheels or axles, and which bears a label certifying that it was built in compliance with National Manufactured Home Construction and Safety Standards, 24 C.F.R. 3280 et seq., promulgated by the United States Department of Housing and Urban Development, or (b) a modular housing unit as defined in section 71-1557 bearing a seal in accordance with the Nebraska Uniform Standards for Modular Housing Units Act.
(5) Special districts or zones may be established in those areas subject to seasonal or periodic flooding, and such regulations may be applied as will minimize danger to life and property.
(6) The powers conferred by this section shall not be exercised within the limits of any incorporated city or village nor within the area over which a city or village has been granted or ceded zoning jurisdiction and is exercising such jurisdiction. At such time as a city or village exercises control over an unincorporated area by the adoption or amendment of a zoning ordinance, the ordinance or amendment shall supersede any resolution or regulation of the county.
Laws 2012, LB709, § 1.
Nebraska Uniform Standards for Modular Housing Units Act, see section 71-1555.
Uniform Standard Code for Manufactured Homes and Recreational Vehicles, see section 71-4601.
If the mode or manner by which a certain action is to be taken is prescribed in a statute or charter, that method must generally be followed. State ex rel. Musil v. Woodman, 271 Neb. 692, 716 N.W.2d 32 (2006).
If there is a conflict between a comprehensive plan and a zoning ordinance, the latter is controlling when questions of a citizen's property rights are at issue. Stones v. Plattsmouth Airport Authority, 193 Neb. 552, 228 N.W.2d 129 (1975).
City zoning plan covering property within two miles of city limits supersedes county zoning regulations respecting that area. Deans v. West, 189 Neb. 518, 203 N.W.2d 504 (1973).
Owner's right to use property is subject to reasonable regulation; the burden is on one who attacks the validity of a zoning ordinance to prove facts which establish its invalidity. Stahla v. Board of Zoning Adjustment of Hall County, 186 Neb. 219, 182 N.W.2d 209 (1970).
County board has authority to adopt zoning resolution. City of Grand Island v. Ehlers, 180 Neb. 331, 142 N.W.2d 770 (1966).
Counties are empowered to adopt a comprehensive zoning plan by resolution. Crane v. Board of County Commissioners of Sarpy County, 175 Neb. 568, 122 N.W.2d 520 (1963).
Zoning resolution adopted by county board must be published. Board of Commissioners of Sarpy County v. McNally, 168 Neb. 23, 95 N.W.2d 153 (1959).
Does the National Interest Waiver (a self-sponsored green card application) require that you be working for or funded by the US Government? The short answer is no, it most certainly does not.
Is it as hard as the Extraordinary Ability application? Again, the short answer is no, definitely not. The National Interest Waiver is actually a very appropriate application for many researchers and others who either cannot be or do not want to be sponsored by their employers. It allows you to sponsor yourself, and to change jobs and employers fairly easily throughout the process.
So what are the actual qualifications for this application and how do you show your work is in the national interest? Unfortunately, this is a case in which both Congress and USCIS did not issue any guidance as to what the standard should be, so it was left to the courts. Specifically, the Administrative Appeals Office (AAO), in a precedent case (Matter of New York State Department of Transportation, 22 I&N Dec. 215 (Comm. 1998)(NYSDOT)) did explain what is needed to show that your work is in the national interest. After the AAO issued this decision, USCIS formally adopted the decision as their standard.
The NYSDOT case laid out a three part test to determine if your work is in the national interest: 1) you must be seeking work in an area that has substantial intrinsic merit; 2) you must demonstrate that the proposed benefit to be provided by your work will be national in scope; and, 3) you must demonstrate that it would be contrary to the national interest to potentially deprive the prospective employer of your services by making your position available to US Workers. While the above can seem daunting in theory, it is not quite so daunting in practice. What it comes down to is showing you are, and will be, working in an important area and that you have already made a significant impact on your field. What type of documentation can show this?
If you are a scientist you can show this through publishing and presenting your work, citation history, peer reviewing, being accepted for oral presentation or invited to talk, having a paper highlighted at a journal website or elsewhere, having press about your findings. Please note, the above is a list of documents that CAN be used to show eligibility, and it is not a list of ALL documents that are needed, as you can be approved with less than all the above documents. In fact, many of our clients may have 30-50 citations total, they may have anywhere from 2-5 papers, or more. Sometimes they have peer reviewing activities, sometimes they do not. Sometimes they have oral presentations, sometimes not. Every case is different and has to be judged on the totality of the evidence to show whether the evidence shows that the impact of your work has been substantial.
For areas other than the sciences, such as foreign relations, health policy, etc, while the type of documentation can be much the same as above – publications, press, etc, you also have the opportunity to look at your role within projects, programs, or other initiatives. It can also be much more letter focused with letters from government officials or NGOs about the use and implementation of your work, etc. It all depends on whether your work is more academic related or applied in the field.
In essence, US interests are broad in nature, and thus, depending upon the extent of your standing within a specific area, you may very well be a good candidate for this type of visa application.
Please remember, always get your legal advice from an attorney and not a blog. Call and talk to an attorney to get the specifics of this application and your ability to qualify.
On the occasion of the visit of Hubert Shum from Northumbria University (UK), Ludovic Hoyet and the MimeTIC team are organising a seminar on Friday 18 November 2016, 10:30-12:00, in room Les Minquiers.
Abstract: Planar shape morphing, also known as metamorphosis or shape blending, is the gradual transformation of one shape into another. Shape morphing techniques have been used widely in animation and special effects packages, such as Adobe After Effects and HTML5. With these morphing methods, we can transform a human into a bird or some other objects that people may never experience in real life. Thus, we want to build an interactive system that blends the human silhouette and other shapes so that users can see these interesting transformations. To build such a system, (1) we need to employ a compatible triangulation method to compute the correspondence between two shapes; (2) we need to apply a shape interpolation method to transform one shape into another; and (3) we need to use a posture reconstruction method to address transformations that involve self-occlusion.
We propose a new method to compute a compatible triangulation of two polygons in order to create a smooth geometric transformation between them. We also present an efficient scheme to fix the inconsistent rotation problems that a rigid shape interpolation algorithm may suffer from. Lastly, we propose a new real-time probabilistic framework to enhance the accuracy of live captured postures that belong to one of the action classes in the database, which can be used to handle shapes with self-occlusion.
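As a toy illustration of the interpolation step: once a compatible triangulation has put the two shapes' vertices in correspondence, the simplest possible blend is a linear interpolation of corresponding vertices. This sketch is mine and is not the rigid interpolation method described in the talk:

```python
def lerp_shape(src, dst, t):
    """Linearly blend two shapes given as lists of corresponding
    (x, y) vertices; t=0 gives src, t=1 gives dst."""
    return [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
            for (x0, y0), (x1, y1) in zip(src, dst)]

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
diamond = [(1, -1), (3, 1), (1, 3), (-1, 1)]

assert lerp_shape(square, diamond, 0.0) == square
assert lerp_shape(square, diamond, 1.0) == diamond
# Halfway through the morph, each vertex sits at the midpoint.
assert lerp_shape(square, diamond, 0.5) == [(0.5, -0.5), (2.5, 0.5),
                                            (1.5, 2.5), (-0.5, 1.5)]
```

Plain linear blending can collapse or flip triangles mid-morph, which is precisely why rigid interpolation schemes like the one in the talk are needed.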
Bio: Zhiguang Liu received the PhD degree in computer science from the City University of Hong Kong in 2016. His research interests include character animation and machine learning.
Abstract: Due to the recent advancement in motion capture hardware and motion-based applications, human motion analysis has become an increasingly popular research area. Its core problem is to model human motion in a meaningful way, such that we can generalize knowledge to recognize, analyze and synthesize movement. Motion related applications nowadays such as motion-based gaming, 3D character animation, autonomous surveillance and smart robots are the results of the area.
The problem of human motion analysis is important as it connects different research fields. Taking an example of motion gaming with the Microsoft Kinect, the system first applies computer vision techniques to identify human body parts. Then, artificial intelligence is introduced to understand the meaning of the movement and perform human-computer interaction. Virtual reality techniques based on movement are sometimes used to enhance gaming immersiveness. Character animation and graphical rendering algorithms are implemented to render the controlled virtual character.
In this talk, I will discuss on the importance of human motion analysis in computer science. With the support of my research projects, I will demonstrate how motion analysis can connect different research fields, including computer graphics, games and vision. I will show how my projects achieve impact in research and the society, and conclude my presentation with future opportunities and potential directions.
Bio: Hubert P. H. Shum is an Associate Professor (Reader) in Computer Science and the Programme Leader of BSc (Hons) Computer Animation and Visual Effects at Northumbria University. He leads a research team focusing on computer graphics and computer vision, utilizing and managing the Motion Capture and Virtual Reality Laboratory. Before this, he worked as a Senior Lecturer at Northumbria University, a Lecturer at the University of Worcester, a postdoctoral researcher at RIKEN Japan, and a research assistant at the City University of Hong Kong. He received his PhD degree from the School of Informatics at the University of Edinburgh, as well as his Master and Bachelor degrees from the City University of Hong Kong. He has received £124,000 from EPSRC for a project on human motion analysis, and has been a core contributing researcher in a €3.03 million Erasmus Mundus project. On top of these, he has received more than £210,000 from Northumbria University to hire PhD students and purchase research equipment.
1 edition of Subunits in biological systems found in the catalog.
Subunits in biological systems
Published 1971 by Marcel Dekker in New York.
Written in English
Edition Notes
Statement: edited by Serge N. Timasheff and Gerald D. Fasman. Part A.
Series: Biological macromolecules -- vol. 5
Contributions: Fasman, Gerald D.; Timasheff, Serge N.
ID Numbers: Open Library OL14391319M
Reactive Oxygen Species in Biological Systems covers reactive oxygen species (ROS). Introduction to macromolecules: types of large biological molecules; monomers, polymers, dehydration synthesis, and hydrolysis.
Neilands, J.B., "Chemistry of Iron in Biological Systems", in: Dhar, S.K. (ed.), Metal Ions in Biological Systems, Advances in Experimental Medicine and Biology. How to prepare for the MCAT Biological and Biochemical Foundations of Living Systems test: the topics covered in each section contain important terms and concepts. To prepare for the MCAT, it is advantageous to know what each term is and understand how it relates to other terms.
Answers to all problems are at the end of this book. Detailed solutions are available in the Student Solutions Manual, Study Guide, and Problems Book. The table presents some of the many known mutations in the genes encoding the α- and β-globin subunits of hemoglobin. An Introduction to Feedback Control in Systems Biology (Case Study XII: reverse engineering a cell-cycle regulatory network) includes introductory descriptions of many of the biological systems considered in the book, in the hope of enticing many more control engineering researchers into the field.
Subunits in Biological Systems, Part B (Biological Macromolecules Series, Vol. 6). Hardcover. Edited by Gerald D. Fasman and Serge N. Timasheff.
Additional Physical Format: Online version: Timasheff, Serge N., Subunits in biological systems. New York, M. Dekker (OCoLC).
Subunits in other biological polymers, such as nucleic acids and proteins, are also linked by condensation reactions in which water is expelled. The bonds created by all of these condensation reactions can be broken by the reverse process of hydrolysis, in which a molecule of water is consumed (see Figure ).
"...warmly recommended to all colloids and surface scientists involved with or interested in biological interfaces." (Journal of Dispersion Science and Technology)
This book serves as an introduction to the continuum mechanics and mathematical modeling of complex fluids in living systems. The form and function of living systems are intimately tied to the nature of surrounding fluid environments, which commonly exhibit nonlinear and history-dependent responses to forces and displacements.
Uri's book covers a rather wide patchwork of biological systems. If you want a more comprehensive coverage of biological processes, I'd look elsewhere: Rob Phillips and Ron Milo's new intro to cell biology, Cell Biology by the Numbers. It's free online, but the print edition is worth owning.
Rob Phillips and Ron Milo's new intro to cell biology, Cell Biology by the Numbers. It's free online, but the print edition is worth owning. In this book, however, attention is focused up on the biological aspects of silicon and siliceous structures, with emphasis on the evolutian, phylogeny, morphology, and distribution of siliceaus structures, on the cellular as peets of silica deposition.
The molecular mimicry contributes to the efficiency of enzymes. Molecular symbiosis means that interactions attraction or repulsion) between biopolymer molecules greatly differing in conformation (globular and rod-like) favor the biological efficiency of one of them at least.
A biological system is a complex network of biologically relevant entities. Biological organization spans several scales and are determined based different structures depending on what the system is. Examples of biological systems at the macro scale are populations of the organ and tissue scale in mammals and other animals, examples include the circulatory system.
The study of the roles of metal ions in biological systems represents the exciting and rapidly growing interface between inorganic chemistry and the living world.
The water-splitting centre of green plants, which produces oxygen, is based on the ...
Because enzymes function in cells, the optimum conditions for most enzymes are moderate Size: KB. Proteins perform essential functions throughout the systems of the human body. In the respiratory system, hemoglobin (composed of four protein subunits) transports oxygen for use in cellular metabolism.
Additional proteins in the blood plasma and lymph carry nutrients and metabolic waste products throughout the body. From the assembly of results obtained in this work, it is possible to infer the toxic potential of TH against several systems involving hemoglobin as a biological target.
This finding is based on the fact that TH forms a supramolecular complex with hemoglobin in a stoichiometry ratio (TH:Hb) due to the transfer of EtHg + from TH to human Hb-Cys93 : Marina de Magalhães Silva, Maria Dayanne de Araújo Dantas, Reginaldo Correia da Silva Filho, Marcos.
In particular, the subunits in biological systems acquire information about the local properties of the system and behave according to particular ge- netic programsthat have been subjected to natural selection. This adds an extraFile Size: 3MB.
Degeneracy, the ability of elements that are structurally different to perform the same function or yield the same output, is a well known characteristic of the genetic code and immune systems.
Here, we point out that degeneracy is a ubiquitous biological property and argue that it is a feature of complexity at genetic, cellular, system, and population by: The organization and integration of biological systems has long been of interest to scientists.
Systems biology as a formal, organized field of study, however, emerged from the genomics revolution, which was catalyzed by the Human Genome Project (HGP; –) and the availability to biologists of the DNA sequences of the genomes of humans and many other.
The general term for a large molecule made up of many similar subunits is polymer Dehydration and hydrolysis reactions involve removing or adding ______to macromolecule subunits.
1st BCAM Workshop on Nonlinear dynamics in Biological Systems BOOK OF ABSTRACTS Oral contributions: Jacobo Aguirre Centro de Astrobiología (CSIC-INTA) Since biological homochirality of living systems involves large macromolecules, we have designed a and linker length between subunits.
The simplicity and generality of the model facilitate a. Ion channels are pore-forming membrane proteins that allow ions to pass through the channel pore. Their functions include establishing a resting membrane potential, shaping action potentials and other electrical signals by gating the flow of ions across the cell membrane, controlling the flow of ions across secretory and epithelial cells, and regulating cell volume.We consider all reaction systems in which subunits X0 and X1 form stable complexes as defined in Eq.
(7), where the cloud indicates that all system components can arbitrarily affect the .MOLECULES IN LIVING THINGS FALL INTO FOUR MAJOR CLASSES: There are four basic types of molecules that are the major players in biological systems: carbohydrates, lipids, proteins, and nucleic molecule types each have at least two major functions and all interact in complex ways, sometimes producing combined molecules as well. | https://qyviqemitefexug.sheepshedgalleryandtearoom.com/subunits-in-biological-systems-book-30106ye.php |
As those of us who have actually seen the movie know, Pitch Perfect 2 isn't just about singing songs, like some critics have suggested. So much of the movie deals with the young girls going through their last year of college, and their last year as Barden Bellas. Because of this, they face a deeper level of uncertainty, challenge, and anxieties than ever before. But all of these struggles give the film great heart, and watching Beca, especially, work through this uncertain time in her life can teach us a lot about growing up, finding ourselves, and being brave.
Throughout the movie, there are many major life experiences that Beca faces, and it is through those tough times that she (and inevitably, the audience) learns the most. Through it all, she's brave, independent, strong, and determined to not take anybody's crap — but she's also vulnerable, and proves it's OK to be scared about the future.
So take a tip (or twelve) from Beca's book — because the lessons she learned in the Pitch Perfect films are too important to ignore.
1. Learn When To Move Forward
Unlike many of the Bellas who are totally focused on winning the World Championship, Beca is the first of the girls to think about her future. It's their senior year of college, after all, and she is the only one who gets a job that will carry her through graduation. She knows it's critical to her professional success to always be thinking ahead and putting stock in her own future, and so should you. Never be complacent — be proactive, and take charge of your future.
2. Speak Up
At her new job, people are hesitant to offer ideas when the boss asks for them. Maybe because they have none — but probably because her boss is bound to ridicule them if it's bad. But, when Beca finally speaks up and offers a suggestion to save Snoop Dogg's Christmas album that the studio is trying to produce, she impresses her boss and is given the opportunity to move up in the company.
3. Be Honest

For a majority of the film, Beca hides her new job from the Bellas. She is afraid that if they know she is moving on and moving toward her future, they will question her commitment to the group. But this secrecy only hurts her in the long run. It isn't until Amy pulls the truth out of her that Beca's creative head space clears and she is better able to find her own voice.

4. Don't Be Afraid To Ask For Help

Amy might not be the most obvious choice for creative inspiration, but it's only after talking to her and recruiting the help of the adoring new girl, that Beca is able to truly create something special. Until she confided in and collaborated with her friends, she was unable to accomplish anything creatively. There's no shame in asking for help when you need it — in fact, it might be just the thing you need to get where you need to go.

5. Find Your Own Voice

At work, Beca's boss is unimpressed by her demo tape. It's her usual mashup stuff, and he says that any kid with a computer and basic music skills could do that. He pushes her to find her own unique style — and even though she struggles to find it, she knows that there are things that set each of us apart from everyone else. We just have to find out what those things are, and embrace them fully.
6. Try Something New
After being discouraged by her boss for her first demo tape, Beca struggles to find something that will better define who she is as a music producer. For the first time maybe ever, she branches out from her usual mashup of popular songs, and mixes an original song by the newest member of the Bellas, Emily. Although original music is frowned upon in a cappella land, the mix is a total hit, and the original song (SPOILER ALERT) is what wins the World Championship.
7. Don't Forget Your Past
At the retreat, Beca breaks down and tells the group about her new job. Even though she thought they would be mad (and some of them are), what she ultimately learns is that, even though they all need to move on from the Bellas, they will always have each other and the memories of this amazing time in their lives. In the end, this lesson is reinforced when former Bellas come back to perform at the World Championship and support the team in achieving their victory.
This paper presents a comparison of a least mean square time-domain equalizer (LMS-TEQ) and a decision feedback time-domain equalizer (DF-TEQ) used to reduce the cyclic prefix (CP) length for direct-detection optical orthogonal frequency division multiplexing (O-OFDM) over 6960 km of single mode fiber (SMF). Both TEQs are used immediately after the channel. Numerical modeling results show that they can cancel the residual inter-symbol interference (ISI) and inter-carrier interference (ICI) caused both by group velocity dispersion (GVD) and by the CP length being shorter than the channel impulse response (CIR). Using these TEQs allows the CP length to be reduced, consequently improving system performance. On the other hand, each TEQ adds complexity to the system. Therefore, the aim of this paper is to analyze and compare the performance of the LMS-TEQ and DF-TEQ while considering different CP lengths and complexity.
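For readers unfamiliar with the LMS-TEQ, the core least mean square update is compact enough to sketch. The toy Python example below is only an illustration of a generic baseband LMS equalizer, not the paper's simulation: the 2-tap channel, step size, and tap count are invented for the demo and are unrelated to the 6960 km O-OFDM system. The filter taps w are nudged along the error gradient, w ← w + μ·e·x, until the residual ISI shrinks.

```python
import numpy as np

def lms_equalizer(received, desired, num_taps=8, mu=0.01):
    """Adapt an FIR equalizer with the LMS rule w <- w + mu * e * x."""
    w = np.zeros(num_taps)
    out = np.zeros(len(received))
    for n in range(num_taps - 1, len(received)):
        x = received[n - num_taps + 1:n + 1][::-1]  # newest sample first
        y = np.dot(w, x)                            # equalizer output
        e = desired[n] - y                          # error vs. training symbol
        w += mu * e * x                             # LMS coefficient update
        out[n] = y
    return w, out

# Toy demo: a 2-tap dispersive channel smears a random +/-1 training stream.
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=5000)
channel = np.array([1.0, 0.4])                      # simple ISI channel
received = np.convolve(symbols, channel)[:len(symbols)]

w, out = lms_equalizer(received, symbols)
early = np.mean((symbols[8:500] - out[8:500]) ** 2)
late = np.mean((symbols[-500:] - out[-500:]) ** 2)
print(f"MSE early: {early:.4f}  late: {late:.4f}")  # error shrinks as taps converge
```

A DF-TEQ differs in that past *decisions* (sliced symbols) are fed back through a second filter to cancel trailing ISI; the adaptation rule for both filters is typically the same LMS update shown here.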
'Treasure belongs to the temple and nobody else'
Ever since treasure worth crores of rupees was inventoried in the secret cellars of the Sree Padmanabhaswamy temple in Thiruvananthapuram, noted historian Professor MG Sashibhushan has been a very busy man with journalists from all over the world seeking his opinion on the historical background of the temple and the city.
An authority on the history of the Travancore royal family, his extensive area of work has been on the murals and history of the temples of Kerala.
Though newspapers have assessed the worth of the assets found in the cellars, Prof Sashibhushan says it is mere speculation, and nobody, not even those who made the inventory, has calculated the value yet.
In this interview to rediff.com's Shobha Warrier, he goes back centuries to trace how the riches reached the erstwhile Travancore royal family and the Sree Padmanabhaswamy temple.
Image: The Sree Padmanabhaswamy temple (inset) M G Sashibhushan
'Temple's history is as old as the Sangam period'
It is said that the Mathilakam records (written on cadjan [cocoa-palm leaves]) mention about the secret cellars and the treasures of the Padmanabhaswamy temple. Does that mean there was knowledge of this wealth earlier itself?
All those who have some knowledge of the history of Kerala know about the wealth of the Padmanabhaswamy temple.
But only the eldest member of the Travancore royal family knew exactly how much wealth was there in the temple.
How far back can we go if we were to trace the history of the temple and its wealth?

There are many who say that the temple's history is as old as the Sangam period. In Silappathikaram (epic poem in Tamil, written in the 5th-6th century AD by Prince Ilango Adigal), a sea-side golden temple called Adagamadam is mentioned.
It also says the deity of the temple is Lord Vishnu in a reclining pose. Kannagi (central character of Silapathikaram) is said to have come to the temple.
Many historians say that the temple mentioned is the Padmanabhaswamy temple, as its deity is Lord Vishnu in a reclining pose and it is near the seaside. It was described as a golden temple because it was a rich temple and also one with golden thazhikakudams (domes on top of the gopuram). That is why Adagamadam is the Padmanabhaswamy temple itself.
Image: Relief sculpture of epic poet Ilanko Adikal
'Records show people like Chaitanya, Guru Nanak visited the temple'
Even in the puranas (religious texts) like Varaha puran, the temple is mentioned.
The first historical evidence about the temple is available in the Vaishnava Azhvar poet Nammazhvar's creations. These were written in the 9th century.
He had written 10 kirtanas in praise of this temple and the deity. His contemporary Thirumanga Azhvar also had written kirtanas about Padmanabhaswamy. These poems show without any doubt that this temple was in existence in the 9th century.
It is also mentioned in the 12th century in a Sanskrit poem by an unknown poet. In the 13th century, there is a Malayalam creation, Ananthapura Varnanam.
Records show that people like Ramanuja Acharyar, Chaitanya, Guru Nanak, etc visited the temple. Guru Nanak had not started the Sikh religion then; he was a Vaishnava Goswami. He had even written a poem on Sree Padmanabha and it is included in the Adi Granth.

In short, Padmanabhaswamy temple was known all over India long ago. Vaishnavites see this as one of the 108 Tirupatis.
Image: Padmanabhaswamy temple
'Nobody was aware of the extent or the worth of assets'
Was it from the offerings of the devotees that the temple got its wealth, and were they kept in the secret chambers then also?
The assets of the temple were safely kept in the secret chambers all the time, that is, from the time the temple was built. Those who have learnt about the history of the temple knew there were assets but nobody was aware of the extent or the worth.
Yes, the assets were mainly the offerings of devotees and also from the owners of the temple, that is, the Travancore royal family.
That was because it was believed that if you offer anything to the God, you become the dasa of Sree Padmanabhaswamy. Later, the family gave the permission to the public to give offerings.
'Travancore royal family followed the matrilineal system'
Was it after Marthanda Varma surrendered the state to the temple and became a dasa of Sree Padmanabha?
No, it is much before that itself. Not only the family but other kings who were their guests also started putting money in the hundi.
Did money from the state also go to the temple treasury?
No, there were three different treasuries, one for the temple, one for the state, and another personal. The personal treasury is inside the palace. State treasury was kept outside the walls of the temple while the temple treasury was inside.
Another important thing to note is, the Travancore royal family followed the matrilineal system. So, the money from the royal treasury did not go to the wife and children of the King but to his nephews and nieces.
When I say royal treasury, I don't mean revenue collection but the personal property that also went directly to the temple treasury.
Image: Kowdiar Palace, Trivandrum
'The temple also had a lot of property'
How did they differentiate between the state and the temple treasury?
While revenue from the state property went to state treasury, revenue from the temple property went to the temple treasury.
And, whatever the royal family got from their own land was their revenue. The family also contributed to the temple treasury.
As rulers, they had the authority to spend one sixth of the state revenue but this family did not spend that much because they led a very simple life.
Other than this, the Travancore family got revenue from exporting pepper to the world and they lived from what they earned from this though business from pepper started only in the 17th century.
The Travancore kingdom also collected land tax and that was added to the state exchequer. In short, it was a rich state. Land revenue went directly to the state treasury. The temple also had a lot of property and the income from that property accrued to the temple treasury.
'Assets are the property of the temple'
So, who does the assets found in the cellars belong to, the family or the temple?
It is the property of the temple. Though the wealth belonged to the temple, in some emergency situations, rulers could avail this for the benefit of the state but they were bound to make restitution as soon as possible.
There was a recession in the 1930s, and it is mentioned in many places that they had taken some wealth during that period, though there is no evidence to show that.
Who are the real owners of the temple property? Is there a trust?
In the old days, it belonged to the Travancore royal family. Before that, the five branches of the family had ownership rights. The senior-most member of these five families headed the trust.
Later on, disputes arose between the members and the right went to two families.
'The secret chambers have always been there'
There was a group of spiritual advisors to the temple, a sort of board called the Ettara Yogam which consisted of Pottis. There was also one Nair in the yogam. The maharaja was above the Ettara Yogam.
The tax collectors of the temple property were called Ettu Veettil Pillas.
And they were the children of the maharaja from his Nair wives. There were conflicts between the rulers and the Ettu Veettil Pillas due to which the temple was closed for 50 years or so and it was set fire to in the 17th century.
It was reopened when Umayamma Rani ruled the state as the regent Maharani.
Is there any truth in the talk that the wealth was kept in the secret chambers when Tipu Sultan started moving towards Travancore?
It was only a rumour and perpetuated by those who have no knowledge of history. Like I said, the secret chambers have always been there and they contained the wealth of the temple.
Image: Picture of Umayamma Rani of Travancore
'It is a natural tendency for all to ask for a part of the wealth'
How important is the treasure found in the chambers if you look at it from the historical, religious and cultural point of view?
From what we hear about the assets, there are no inscriptions. Yes, we can study the diamonds, jewellery, rubies and pearls; they have also found movable idols, crowns, and gold and silver bars.
You said earlier that the royal family conducted trade in pepper. Kerala also sold various other kinds of spices to many countries. Do you think this treasure will throw a light to Kerala's trade with the world?
The coins from those countries may be there. There are a lot of gems also in the chambers, which in all probability can be from the Deccan. As far as I know, they have not so far found anything that throws light on the kind of trade Kerala had with the world.
The head-less archaeology department of the state government (it does not have a director for some time now) says it will take care of the wealth. It is a natural tendency for all to ask for a part of the wealth!
Today, all those who have no competence or knowledge of ancient temple history are spouting all kinds of nonsense.
'The treasure is a symbol of Indian pride'
What should be done now with such a huge treasure, since it is important that it be kept very safe?
Yes, it has to be safe, but you must remember that the treasure belongs to the temple and nobody else.
Is it not part of our history?
It is a part of our culture. It is a part of our pride. I consider it as a symbol of Indian pride or Hindu pride. All the other things come only later.
Should it remain inside the cellar?
For the time being, let it remain there. It should not go to wrong hands. We should keep it safely. Anyway, they are making an inventory under the instructions of the Supreme Court. Once the court comes out with a decision, as per the inventory, we must study those things which are of archaeological importance. This should be done under heavy security inside the temple.
'Temple cannot sell this wealth and create a museum there'
Do you envisage a museum of international standards coming up here like what we see in London or Paris?
Yes, it is possible. It has to be there. But you may need at least Rs 50 crore to construct such a museum. Who will fund it? The temple cannot sell this wealth and create a museum there. They should not, too. Let them take an appropriate decision at the appropriate time.
Who can take the decision?
The custodians of wealth, which is the temple right now. Also, the royal family.
But it is not the personal property of the royal family?
No, it is not.
Do you see the outcome of the Supreme Court action as a good thing, since it is how we came to know about such huge wealth?
I don't see it as a good thing, as Thiruvananthapuram has become an unsafe place.
'It is the glitter of the yellow metal that has dazzled the world'
Rationalists and atheists also have entered the scene and they are clamouring for using the wealth to construct schools and hospitals. As a historian, how do you react to this?
Rationalists in Kerala have had no voice till now. That is why they have entered the fray.
My question is, why do they not talk about using the wealth of a church or a mosque for such purposes? These people are not rational, they are irrational.
If the government of Kerala had not taken the wise decision of saying that the treasure belongs to the temple, there would have been a communal division in Kerala. I would say the chief minister of Kerala (Oommen Chandy) deserves all compliments for declaring it as the property of the state and providing security to it.
Do you think the interest world over is due to the possible history behind the wealth?
By Susanna Carman for Enlivening Edge Magazine
Introduction by Jean-Paul Munsch, Guest Editor of EE Magazine’s Education edition:
A sophisticated article that focuses on one of the most unpopular themes in the discourse of school innovation: evidence. How do we know that our beloved work is effective? By combining design thinking, up-to-date technology, and a well-founded strategy for school innovation, this crisp piece offers a model for innovating schools in rapid innovation cycles on a solid foundation of data.
As a strategic designer, I find myself connecting ideas, people, and organisations in unexpected ways. My most recent discovery happened whilst working on three independent projects in seemingly divergent fields. At the time I had no idea how health research analytics, innovation in school design, and evaluation metrics were connected. It was only when I paused to look at the entirety of these projects through the prism of a comprehensive, strategic design framework that I was able to connect the dots.
The first signal came from an article by Stefanie Di Russo, Senior Consultant of Customer Strategy at Deloitte, Australia. In her article, Di Russo defined strategic design as an approach that “utilizes the best of a design process with the best of strategic frameworks to create super problem-solving approaches for particularly complex problems.”
Of particular interest was Di Russo’s implication that a well-considered strategic framework could take on complexity and actually win.
I read Di Russo’s article while working with a South Australian software company called GoAct. GoAct has created a platform for health researchers and clinicians that liberates fast, easy, and actionable data so they can deliver better health care models to their patients. What makes GoAct unique is its ability to pull data from over fifty apps, correlate qualitative and quantitative data, and provide fast and immediate dashboard views in ways that help inform the efficacy of an intervention.
Synchronously, I’d also been speaking with a team member at Transcend Education, a USA-based non-profit dedicated to accelerating innovation in the core design of schools. At Transcend, redesign pioneers are applying the best of rapid cycle, iterative design processes to test innovative ideas. When a great idea transforms into a working model worth scaling, research-based evidence of project efficacy is expected in order to secure ongoing support. Consequently, I discovered that rapid iterative cycles require new metric strategies and technologies to keep pace with the speed and immediacy indicative of perpetual beta testing.
The design challenge that emerged was strategic – something the school redesign movement was already considering: ‘How Might We’ create a strategy for measuring what we most value in a way that satisfies AND redefines current systemic parameters so true innovation can take place in school model design?
My intuitive response to the challenge:
- Integrate strategic design frameworks with agile technologies
- Build the system’s expressive capacity to talk about, measure, and analyse data in ways that satisfy itself and benefit the learner
Agile analytic platforms like GoAct resolve the speed and immediacy imperatives associated with gathering and interpreting complex metrics in fast, easy and actionable ways. Analytics tools like market leader Tableau and the technology-for-education platform, Brightbytes, are also gaining traction. However, GoAct’s mastery at correlating quantitative and qualitative data is facilitating game-changing results in the health research sector.
Super problem-solving strategic frameworks like the above image of Dr. Sean Esbjorn-Hargens’ MetaCapital Framework (MCF) build the expressive and receptive capacities for stakeholders to talk about metrics as formative instruments that facilitate “virtuous cycles” of learning. The MCF has the potential to link metrics with impact in ways that allow designers of new school models to:
- Determine what tacit value in a system is worth measuring
- Recognise the value bias already present in the system
- Talk about and evaluate program efficacy in terms of both explicit and implicit value
- Include quantitative and qualitative metrics in their evaluations
- Match the right type of evaluation instruments with what is considered most valuable by multiple stakeholders in the system
In a nutshell, MCF contextualizes the metrics side of systems design in a way that generates stakeholder engagement and carefully considers tacit value. Agile technologies, like GoAct, do all the heavy lifting so that multiple data sets are fast, easy, and actionable. When combined, these tools have the potential to meet the demands of rapid iterative design cycles and accelerate innovation in the core design of schools.
Strategic designers are always paying careful attention to everything they see, hear, and experience for the purpose of weaving together an imagined future. In the context of design, connecting the dots is a compulsion put to good use. This quality of inherent curiosity is more than a practice; it is a way of being that facilitates the connection of ideas, people, and experiences to produce unexpected outcomes. Perpetual curiosity, receptivity, and timeliness are primary conditions for emergence, and they are why engaging a strategic designer to find super problem-solving approaches for particularly complex problems is worth the investment.
Susanna Carman is a strategic designer, researcher, facilitator, and writer specialising in Design Leadership and Creative Intelligence. With backgrounds in adult development, design, brand, business strategy, and the arts, Susanna works with leaders of enterprise and organisations to embed Design Thinking into the cultural fabric of human systems. For more information about how the best of design processes and the best of strategic frameworks can help you solve complex problems, contact SC Design or email [email protected]. Please visit www.susannacarman.com.
There are two questions that, as a customer, you’re probably quite used to hearing in a restaurant. Do you have a reservation and how is everything? Yet, for me, both of these are the hospitality equivalent of nails down a blackboard. On the face of things, you may ask why. “Do you have a reservation?” seems like an obvious question for a host, receptionist or maitre d’ to ask when you arrive at a restaurant.
But let’s take it back a step. You walk into a restaurant; a simple, “hello, how are you?” along with a welcoming smile wouldn’t go amiss first. The second, and by far the more infuriating, is the server check back that goes along the lines of, “how is everything?” It may seem the most pertinent choice of question to ask a diner but in fact it is a pointless, open-ended question. Anyone from the Corbin & King school knows all it requires is a simple, “is there anything else I can get?” when checking on diners.
Those of us working in it – along with customers – cannot deny that there has been a revolution in the industry over the past decade. Not only has what we eat and drink changed – sharing plates, poké bowls and oat milk in our flat whites – but the way we eat has also changed.
Restaurants like POLPO and Dishoom have made queuing for our supper the norm and lengthy meals in hushed dining rooms with stuffy service have been replaced with the informal option of dining at counters, communal tables and the recent rise in popularity of even being able to do this on your own sofa, thanks to Deliveroo. But with huge growth in the restaurant industry come challenges – most notably staffing and service.
One of the aforementioned conversations occurred at a recent industry drinks party where someone mentioned that the norm in this country is to experience poor customer service – anything better is often a welcome surprise. Of course, it should not be like this. Yet I’m always aware that it’s not the individual in question’s fault – the majority of the time it comes down to management, training and attitude.
With the swathe of openings in the capital, it’s no surprise that a lot of the new restaurant and bars are being led by young restaurateurs and chefs keen to make their mark on the London food scene. But that presents a dilemma. Who’s training the people that are now working in these restaurants and how are they being trained?
What drew me into this industry was the people. Food can be flawless – but if the service is not on point, it can ruin a meal. Memorable meals are often just that because you’ve been made to feel special by the restaurant while breaking bread.
I remember finding myself on the terrace at the Groucho Club a while back. I was surrounded by some of the best maitre d’s in London. We chatted for hours about the industry and the people involved in it. The one thing everyone had in common in this conversation was that they’d all worked for some of the finest restaurateurs in the industry, including Jeremy King and Chris Corbin, during their careers. It struck me that this group of individuals are a somewhat dying breed – apart from the odd establishment, restaurants are not run like this anymore and people are not trained in this way.
The skill of seating a dining room – juggling tables for regulars, making sure certain advertising CEOs are not seated near each other and keeping a room from being a boring throng of suits is a dying skill. Much like a Savile Row tailor – it’s been replaced by the more affordable high street option.
Speaking to Matt Hobbs, now managing director of the Groucho Club in Soho, he recalled his time at the Ivy in the 90s. “The internet hadn’t gripped everything like it does now. The result was there was a one hundred percent emphasis on personal relationships. Customers and staff knew each other and especially the maitre d’ team were aware of customers’ lives and careers. Now it’s far more transient in the wider industry.”
This article was first published in Issue 15 of CODE Quarterly.
As most kids prepare to return to in-person school in the fall, and as we navigate a gradual return to “normal” life after COVID, parents can help by mastering some strategies to help calm anxiety in their children.
You may have seen anxiety in your child manifest as irrational fears or incessant worrying. Young children can suffer from anxiety in one form or another – whether an anxiety disorder or simply a phase in their life.
In fact, the last year or so in this pandemic has dramatically impacted children’s mental health as well as adults’.
Validate kids’ feelings and anxiety
First things first, don’t diminish or demean their feelings – even if they aren’t necessarily rational. As parents, we naturally want to comfort fears in our children. But sometimes, we inadvertently brush away their feelings as wrong or unimportant.
Instead, show that their feelings are valid: “I understand you’re feeling a little anxious. The first day of school is a big, new thing.” Then follow that with encouragement. Explain that they can still be brave through the anxiety – that courage is not the absence of fear but the determination to continue on, despite the fear.
Help kids sort out stressful thoughts
When stress overwhelms us (or our children), it can be challenging to think clearly. Help your child distinguish what’s real and what’s not. For example, recognize whether a fear of theirs is indeed a threat or not (ex: monsters under the bed vs. getting sick at school).
Illustrate the importance of sorting out the things you can control from the things you can’t. If they’re worried about a situation that’s out of their hands, help them acknowledge that they can only control how they respond to events that occur, not the events themselves. On the other hand, if their stress is due to something they can control, such as a test in school, they can do something about it – like put in extra study time.
Help your child catch their negative thoughts, and then challenge them. If the thought is, “I’m bad at sports,” guide your child to challenge that with what they know is true. “Have there been times I performed well in sports? I’m still learning, and I can continue to improve.” Support your child in learning to talk as kindly to themselves as they would to a good friend.
Teach age-appropriate calming and coping strategies for anxiety
Share coping techniques for children. Here are a few examples:
- Deep breathing. They can imagine blowing bubbles, birthday candles, or smelling pizza and then blowing out to cool it off.
- A calm down area. Find a corner or quiet nook and make it cozy with blankets, stuffed animals, books, or whatever else calms your child. Then when their emotions start to rise, remind them to take some time in their calm down spot.
- Music. They can listen to music, sing, or play. Music has the ability not only to improve connections in our brains – it can also be quite effective at calming us.
- Imagine a favorite place.
- Write in a journal (or have them dictate for you to write in their journal), or color pictures.
Address specific anxiety triggers one step at a time
Also called the stepladder approach, this means helping your child face their fear in small increments at a time. For example, if they are afraid to go in the backyard because they got a bee sting the last time, help them take small steps to resume playing outside.
- Step outside the door for a minute, with you next to them.
- Sit on the patio with you for 5 minutes.
- Sit on the patio by themselves for 10 minutes.
- Play a game for a half-hour.
Depending on your child’s fear, the steps you take may need to be bigger or smaller than this.
Be aware of how your parenting affects kids’ anxiety
Your parenting style can affect your child’s anxiety levels. Too controlling (authoritarian) or too hands-off (permissive) are both styles that tend to raise anxiety in children. Strive for a healthy balance (authoritative) of keeping a positive relationship yet still enforcing rules.
Even if your parenting is spot on, the way you handle your own anxiety could be affecting your children. They sense your stress and notice the way you handle it. This doesn’t mean you have to eliminate all stress from your life. It means learning to cope with it in a healthy way that demonstrates for your kids positive ways to manage anxieties.
If needed, seek help from a professional
Don’t be afraid to reach out for help. If your child’s anxiety is ongoing, interferes with their ability to function, or if an anxiety disorder is suspected (rather than just a phase), talk with a pediatrician or mental health professional. They may recommend a type of therapy to fill your child’s needs.
Consider an emotional support animal
Animals can have a calming effect on adults and children alike. If it could be helpful, you might consider getting an emotional support animal (ESA). Or you could even make your current pet an official ESA. Doing so can allow your child access to more places with the company of their ESA, giving them a comforting presence to encourage them throughout the day.
Anxiety in children can feel daunting – especially if you’re dealing with your own anxiety as well. Use these strategies to support and empower your child as they learn to self-calm and manage big emotions.
1. First hypothesis: Monophyletic: These phyla are related, as shown by the following common structures: pseudocoelom, cuticle, muscular pharynx, and adhesive glands.
2. Second hypothesis: Polyphyletic: This hypothesis describes that these phyla are not related to each other; they are polyphyletic. No single unique feature is found in all groups, which strongly suggests that each phylum evolved independently. These animals are adapted to similar environments. Therefore, the similarities among them are due to convergent evolution.
3. Both monophyletic and polyphyletic: The correct phylogeny lies between the two hypotheses. All phyla share some common anatomical and physiological features; thus they are distantly related to each other. Convergent evolution has also produced some analogous similarities. But each phylum arose from a common acoelomate ancestor that diverged very early in evolutionary history. This ancestor was a primitive ciliated acoelomate turbellarian. Therefore, it is concluded that the first ancestor was ciliated, acoelomate, marine, and monoecious, and it lacked a cuticle.
The most recent element of the ongoing global dispute resolution process is the late November 2016 release of the so-called multilateral instrument (MLI), a cornerstone of the base erosion and profit shifting (BEPS) project. It is an ambitious effort of the Organization for Economic Cooperation and Development (OECD) to impose its will on as many countries as possible. The explanation comprises 85 single-spaced pages and 359 paragraphs. The MLI draft itself is 48 similar pages. The purpose of the MLI is to facilitate implementation of the BEPS Action items without having to go through the tedious process of amending approximately two thousand treaties.
In essence, the MLI implements the BEPS Action items in treaty language. While consistency is obviously an intended result, the MLI recognizes the reality that many countries will not agree to all of the provisions. Accordingly, countries are allowed to sign the agreement, but then opt out of specific provisions or make appropriate reservations with respect to specific treaties. This process is to be undertaken via notification of the “depository” (the OECD). Accordingly, countries will be able to make individual decisions on whether to update a particular treaty using the MLI.
There are a variety of initial questions to be addressed by each country, including:
- Does it intend to sign the MLI?
- Which of its treaties will be covered?
- Will treaty partners agree?
- What provisions will be included or opted out of? If there is an opt out, the country is supposed to advise the depository of how this impacts each of its treaties. This will be a time-consuming process.
- How will it negotiate with specific treaty partners with respect to the various technical provisions of the MLI?
The arbitration provisions are intended to implement the BEPS Action 14 recommendations, focused on mandatory binding arbitration. These provisions would apply to a bilateral treaty only if both parties agree. The arbitration articles provide an outline of arbitration procedures, allowing the competent authorities to vary the procedures by mutual agreement. The form of the proceeding provides a default for “last best offer” (or “baseball style”). The parties may also agree to a “reasoned decision” process, which is stated to have no precedential value. If the parties do not agree on either of these forms of proceeding, the competent authorities should endeavor to reach agreement on a form. If there is no agreement, then the arbitration provisions are inapplicable.
Whether or not the US and other countries sign the MLI, it seems apparent that the net result will be a period of chaos in treaty relationships, as there will inevitably be: (1) signers and non-signers; (2) reservations; (3) opt-outs; etc.
In a world in which the list of countries zealously seeking to protect their tax bases and proposing to increase domestic tax revenues (following BEPS and related guidance) continually expands, it seems apparent that dispute resolution processes will need to evolve to resolve the tsunami of disputes that are expected to materialize. Otherwise, countries and MNEs alike will suffer prejudice to their respective interests.
Accordingly, these dispute resolution issues should be on the agenda for consideration as effective tax rate strategies are revisited in the post-BEPS world. | https://www.lexology.com/library/detail.aspx?g=390dd93d-0fa8-44df-916d-b1469a1c30bc |
Advice & guidance resources
Advice and guidance from the NUJ and partner organisations.
NUJ extra Welfare Officer Handbook
These guidelines were written as a result of NUJ extra's commitment to provide training for its volunteers.
NUJ guidelines on LGBT+ reporting
Gay, lesbian, bisexual and transgender people have the right to fair, accurate and inclusive reporting of their life stories and concerns. As with all...
TUC guide April 2021. A safe return to the workplace.
The union approach to keeping workers safe as the UK Government eases restrictions following the third lockdown.
NUJ extra Covid brochure
Extra help with Covid nightmare
2021 Nominations to NUJ Councils: Guidance notes
Guidance for members on nominations, elections and operation of the NUJ's councils and other bodies.
TUCG: Trade unions fighting racism and the far-right
Building solidarity in workplaces and communities.
Updated guidance on statutory sick pay, CJRS, SEISS
Latest guidance on Covid-19 financial packages for freelances.
Reporting poverty: a guide for media professionals
This guide is for journalists who want to report on these complex issues accurately, sensitively and powerfully.
Redundancy FAQ August 2020
This document covers a range of issues and the rights involved in a redundancy process.
ICTU working from home guide
The Irish Congress of Trade Unions has published a useful guide to working from home, setting out the legal entitlements and the law governing health ...
ICTU: Covid-19 Local Workplace Representative Complaints Procedure
This one-page Complaints Procedure provides guidance on how to deal with non-compliance with the Return to Work Safely Protocol.
ICTU Role of Lead Worker Representative
ICTU document focussing on the role of the Lead Worker Representative and provides advice on how it should be undertaken.
NUJ guidance – Covid-19 Health, safety, wellbeing & work
Health and Safety Committee notes for members, reps, chapels and branches.
NUJ guidance – home working inspections
Health and Safety Committee notes for members, reps, chapels and branches.
Mental Health Awareness Week 2020
This year’s mental health awareness week (18-24 May) finds us in strange and difficult times – in the midst of the coronavirus pandemic.
Preparing for the return to work outside the home: A trade union approach
This TUC report sets out what we believe the government must do now to ensure a safe transition from lockdown, looking at how to safely return to work...
Coronavirus - Help for union reps from the TUC
Guidance from the TUC to give you an understanding of the workplace issues in the context of COVID-19 and to provide support in being effective at neg...
NPCC: Working with journalists during Covid-19 outbreak
National Police Chiefs Council guidance on working with journalists during the Covid-19 outbreak.
Coronavirus - advice for freelances
Freelances make up around a third of the NUJ membership and the union is here to help its freelance members during the coronavirus pandemic.
Coronavirus/Covid 19: Health and Safety Committee Notes for members, reps, chapels and branches
The NUJ’s Health and Safety Committee has published a short guide which should be read in conjunction with the official guidance provided by the statu... | https://www.nuj.org.uk/advice/advice-guidance-resources.html |
Here's How To Know If Your Meat Is Recalled Before Stocking Up For Taco Night
We’ve got some bad news: You may want to think about postponing your upcoming taco (or fajita) night because a major ground beef recall is hitting supermarkets across the country. The United States Department of Agriculture announced a recall of thousands of pounds of fresh and frozen raw beef on March 31 after it didn’t undergo the proper federal inspection process. Yikes. If you’re wondering how to know if your meat is recalled, be sure and check the label.
Workers at Texas Meat Packers, a Fort Worth, Texas-based meat processing plant, produced 7,146 pounds of fresh and frozen beef products on March 24 that didn't undergo routine and proper federal inspection by officials at the USDA's Food Safety and Inspection Service. Whoops. The incident wasn't discovered until six days later, on March 30, 2018, triggering the USDA's recall the following day, March 31, 2018.
The recalled beef was sold in grocery stores throughout nine states including Alabama, Arkansas, Indiana, Louisiana, Mississippi, Missouri, Oklahoma, Texas, and Wisconsin. So if you’ve taken a trip to the grocery store recently, do yourself a favor and take a peek in your refrigerator to see if you may have purchased any of the meat. Check the label on the product and look for a pack date of March 23-24, 2018. The following red meat goods are included in this recall:
- 5-pound, vacuum-packed frozen packages of “Beef Skirt Diced for Tacos” with a case code of 1470.
- 5-pound, vacuum-packed frozen packages of “Pre-seasoned Beef for Fajitas” with a case code of 36989.
- Varying weights of vacuum-packed packages of fresh "USDA Choice Angus Beef, Fajita Seasoned Steak, Beef Flank Steak for Fajitas" with a case code of 567248261.
- Varying weights of vacuum-packed packages of fresh "USDA Choice Angus Fajita Seasoned Strips, Beef Flank Strips for Fajitas" with a case code of 567248253.
To locate the item codes, check the upper left-hand corner of the packaging label. Also, even though these products weren't cleared by the USDA's Food Safety and Inspection Service (FSIS), they still bear the organization's mark of inspection, with the establishment number "34715" printed inside the label.
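The label check described above boils down to matching three things: a case code, a pack date, and the establishment number. As a quick illustration only, those criteria can be collected into a small script; the function and variable names below are our own, not from the USDA.

```python
# Illustrative sketch of the label check for this specific recall.
# The names here are hypothetical, not an official USDA tool.

RECALLED_CASE_CODES = {"1470", "36989", "567248261", "567248253"}
RECALL_PACK_DATES = {"2018-03-23", "2018-03-24"}  # March 23-24, 2018
RECALL_ESTABLISHMENT = "34715"

def is_recalled(case_code: str, pack_date: str, establishment: str) -> bool:
    """Return True only if the label matches every recall criterion."""
    return (case_code in RECALLED_CASE_CODES
            and pack_date in RECALL_PACK_DATES
            and establishment == RECALL_ESTABLISHMENT)
```

For example, a package with case code 1470, a pack date of March 24, 2018, and establishment number 34715 matches all three criteria and should be discarded or returned.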
If you’re wondering how this could’ve possibly happened, well, there hasn’t been an official explanation issued just yet. After all, there are laws in place to prevent this exact scenario. Thanks to the Federal Meat Inspection Act, which first took effect in 1906, all commercially sold meat must undergo inspection by federal officials to ensure that it is OK for public consumption. Thankfully, despite this lapse in inspection, these instances are few and far between – so it’s probably safe to keep shopping at your local market for packaged red meat (you should still check the label if you live in one of the states previously listed).
Luckily, no illnesses have been reported in relation to this recall – but the USDA’s Food Safety and Inspection Service is still taking the matter very seriously. The USDA has given this recall its highest classification – Class 1 – which means that there could be “serious, adverse health consequences or death” if you consume the meat. Anyone who may have eaten the meat should contact their primary care physician just to be safe.
Officials are concerned that consumers may have tossed the beef in their freezer to be enjoyed at a later date. So, yeah, if you think you may have purchased the recalled meat, you should take extra precautions and check your fridge and freezer for the items. Because no taco is worth potentially dying for. Go ahead and toss out the tainted meat or return it to the place of purchase for a full refund.
Now, back to the grocery store to start planning your next taco night. | https://www.elitedaily.com/p/heres-how-to-know-if-youre-meat-is-recalled-before-stocking-up-for-taco-night-8691540 |
Not only was I new on the Council, but I was also new to my position as a Public Health Nurse with Wright County (and new to the area!). I had been searching for a way to marry my passion for local food with my work in public health, which made this opportunity with the Council a great fit. What makes this possible is my work on the Statewide Health Improvement Partnership (SHIP), a grant from the Minnesota Department of Health that focuses on obesity prevention and tobacco cessation.
The Council was established by a few like-minded people who wanted to intentionally strengthen the local, healthy food environment in Wright County. With the support of SHIP since its inception, the Council has done just that; this is a reliable group of people who work to ensure consumers have access to healthy food from local producers. SHIP funds staff positions on the Council to coordinate projects at places like farmers markets, local food retailers, and food shelves. My goal, and the goal of SHIP, is to make the healthiest choice the easiest choice by adapting the policies, systems, and environments that impact the availability and accessibility of healthy food.
Since 2009, Community Health Boards across the state have received SHIP funding to promote active living, healthy eating, and tobacco cessation. We know that good health is created where we live, work, learn, and play; therefore, Wright County Public Health partners with schools, workplaces, child care providers, hospitals, and communities to make long-term, sustainable changes. My work, and that of my colleagues, is all about using local resources to enhance opportunities for our community to live happier and healthier lives.
If you’d like to learn more about SHIP and other local health initiatives, please visit www.livewright.org or feel free to contact me at [email protected]. | http://crowriverfoodcouncil.org/crfc-a-marriage-of-passions-in-local-food-and-public-health/ |
ABN Finsights Academy is a leading educator and training provider, supplying highly skilled professionals to an increasingly global marketplace.
We are proud to be closely connected with top-tier corporations, education institutions, leading industry associations, government organisations, professional bodies and the wider business community through employee education & training programs, Executive recruitment initiatives and strategic partnerships internationally.
Sydney Institute of ERP (SiERP) is a SAP Education Partner since 2009 and Australia’s pioneering educational institute dedicated to offer SAP training and certification to individuals who would like to pursue their career within SAP professional environment. SiERP strongly believes that having SAP skills and relevant business knowledge provide competitive advantage to succeed in the SAP ecosystem. | https://www.abnfinsightsacademy.com.au/knowledge-partners/ |
You must notify the insurance company of the accident.
Here are the most typical cases of denial of insurance reimbursement:
- Damage due to a factory defect
- Deliberate actions of the car owner (staging an accident to collect the insurance money)
- Driving under the influence
- Driving by a person not named in the policy
- The car did not pass its technical inspection (TO)
- Incorrectly stated information about the circumstances of the accident or damage (insurance fraud)
- Negligence by the insured or their family members (for example, carrying an oversized object such as a ladder and striking the wing of the car).
In fact, only those requirements of the insurance company that are listed in the Insurance Act and the Civil Code are certain to be legitimate:
- The driver did not notify the insurer in a timely manner about the occurrence of the insurance case (and the late notification affected the insurance company’s ability to make a payment)
- Driver intentionally failed to take available measures to reduce damages
- Deliberate actions of the driver or third parties for profit or grossly reckless actions
- The accident occurred as a result of force majeure: nuclear strike, civil war, strikes and riots, etc. (at the same time, if such circumstances are marked in the contract as insurance cases – the company is obliged to pay compensation)
- You have repaired or disposed of the car prior to examination, which does not allow you to determine the presence of an insurance case and determine the extent of the damage.
All other reasons, if you were denied an insurance payment, can be challenged by filing a claim with the insurance company, and in the absence of a proper response to it, you have every right to defend your rights through the court. What to do in case of refusal to pay under OSAGO or CASCO is detailed in the relevant subsections on our website, which also provides specific examples of incorrect refusals by insurance companies.
What to do?
But claims can be made not only by you against insurance companies, but also by insurance companies against you. Most often this happens when there has been a fairly large accident and you have been found to be at fault. At the same time:
- The second participant has CASCO insurance
- The amount of damage caused to him exceeds the limit of your liability under OSAGO
Then the victim's insurer will try to collect from you the amount still needed to cover the cost of repairs, and a claim for this amount will come to you from the insurer by way of so-called subrogation. What should you do in a situation like this? Don’t go to extremes.
First, do not pretend that you did not notice the claim.
It is best to contact specialists in such matters immediately – for example, at our law firm – who will be able to assess the situation.
- The amount should be calculated taking into account the natural wear and tear of the parts
- The claim must be supported by a complete package of supporting documents.
Your fault in the accident, the amount of damage caused, and the insurer’s right to subrogation must all be confirmed.
With a competently built line of defense, you can either have the claim withdrawn or significantly reduce the amount of the payment.
The employees of our firm – lawyers for insurance disputes – will always be happy to help you with this.
Xining industry and commerce authorities carry out special rectification of the scrap car recycling and dismantling market
Recently, the Xining Municipal Bureau of Industry and Commerce carried out special rectification work targeting the city's scrapped-automobile recycling market.
During the special rectification, law enforcement officers of the Xining Municipal Bureau of Industry and Commerce focused their inspections on scrap metal recycling companies, recycling shops, and auto repair factories engaged in the unlicensed and illegal recycling and dismantling of scrapped automobiles, as well as on vehicle modifications that violate state regulations or exceed the permitted scope. While inspecting a scrap metal recycling company in the Song Village North area of Xining that was illegally recycling and dismantling scrapped vehicles, officers found that the company had no business license or relevant qualifications, and seized on the spot one illegally recycled scrap car and nearly 5 tons of already-dismantled scrap auto parts.
Through this special rectification, regulation of Xining's scrap car recycling and dismantling market has been further strengthened, maintaining normal order in the recycling and dismantling market.
There are more than 186,000 clinical laboratories in the United States, in which clinical laboratory scientists, pathologists, medical technologists, and laboratory technicians perform 7 billion or more diagnostic tests annually (Centers for Medicare and Medicaid Services, 2004).
Handbook of Biosurveillance. ISBN 0-12-369378-0. Elsevier Inc. All rights reserved.
TABLE 8.1 Clinical Laboratory Tests that Contribute to the Diagnosis of Anthrax

| Type of Test | Specimen | Expected Result |
| --- | --- | --- |
| Nonspecific: | | |
| White blood count | Whole blood | Elevated count |
| Cerebrospinal fluid (CSF) analysis | CSF | Normal |
| Presumptive: | | |
| Growth on sheep blood | Blood, CSF, lesion | Growth within 24 hours |
| Colony morphology | Bacterial growth | Gray-white colonies, flat or convex, ground glass appearance |
| Gram stain | Bacterial growth | Large Gram-positive rods |
| Hemolysis | Bacterial growth | No hemolysis |
| Motility | Bacterial growth | Nonmotile |
| Sporulation | Bacterial growth | Visible spores with malachite green stain |
| Confirmatory: | | |
| Capsular stain | Bacterial growth | Visible capsules with M'Fadyean stain |
| Gamma phage | Bacterial growth | Lysis by gamma phage |
| Direct fluorescent antibody (DFA) | Bacterial growth | Positive fluorescence |
| Polymerase chain reaction (PCR) | Bacterial growth | Positive PCR |
| Time-resolved fluorescence (TRF) | Bacterial growth | Positive TRF assay |
| Molecular characterization | Bacterial growth | |
The Centers for Medicare and Medicaid Services (CMS) registers all clinical laboratories in the United States that examine materials derived from the human body for diagnosis, prevention, or treatment. CMS administers the program for the Secretary of Health and Humans Services in conjunction with the Centers for Disease Control and Prevention (CDC) and the Food and Drug Administration (FDA). CMS regulates laboratories and establishes criteria for other organizations, such as state health departments, that also regulate laboratories to ensure compliance with the federal Clinical Laboratory Improvement Act (CLIA). CLIA was first enacted by Congress in 1967 and set guidelines for large independent laboratories. In 1988, Congress amended CLIA 67 to expand the type of laboratories that must comply; CLIA 88 further established quality standards for laboratories to ensure accuracy, reliability, and timeliness of test results.
In August 2004, 186,734 laboratories were registered with the CMS (Centers for Medicare and Medicaid Services, 2004). Table 8.2 shows the distribution of these laboratories by type. More than 55% of these laboratories are located in physician offices. Skilled nursing facilities (7.9%), hospitals (4.6%), and home health agencies (4.4%) accounted for an additional 20% of laboratories. The remaining clinical laboratories are found in community health clinics, health maintenance organizations, blood banks, and industrial facilities. The American Society for Clinical Pathology (ASCP) currently certifies more than 280,000 laboratory professionals who primarily work in clinical diagnostic and research laboratories. Clinical laboratory services in the United States are delivered either by commercial clinical laboratories or by "in-house" laboratories at healthcare facilities (hospitals, clinics, physician offices), departments of health, veterinary hospitals, and clinics. Individual veterinarians and physicians and the staff within their offices also conduct laboratory testing and produce results that are important for biosurveillance.
Professional laboratorians provide services that include simple, rapid screening tests; more advanced diagnostic tests; and complex confirmatory analyses. Clinicians use the information provided by laboratories to establish diagnoses and to make treatment decisions on virtually every patient. The demand for testing is increasing as the population ages and requires more health care, including analytical services. New tests are frequently introduced that improve diagnosis and care. The emergence of new diseases, the threat of bioterrorism, and the need for better biosurveillance systems have increased the demand for qualified laboratory professional in all fields, especially infectious disease testing. Although the demand for more laboratory professionals is increasing, the number of established laboratory professional training programs is decreasing.
| Type of Laboratory | Number | Percentage |
| --- | --- | --- |
| Ambulatory surgical centers | 3,229 | |
One of Breakin’ Convention’s key priorities is supporting professional hip hop artists. We are extremely passionate about providing opportunities and platforms for artists to learn, experiment and develop from. We have several initiatives that you can engage in to progress as a dancer, performer, choreographer or theatre maker.
To learn more or to arrange a course contact:
Higher Learning
Higher Learning is a hip hop theatre education and development programme committed to the development of the genre, led by a range of industry specialists through intensive workshops, courses and educational days.
Open Art Surgery
Open Art Surgery is a one-week development course for hip hop theatre artists and companies, working alongside cutting-edge mentors to create new work to be performed in front of a live audience.
Back To The Lab
Back To The Lab is a two-week intensive course for experienced choreographers to explore new ways of approaching the creation of new work, culminating in the development of a new piece performed in front of a live audience.
Elizabeth Miner, successful Author, Life and Business Coach, Speaker and Founder and CEO of Thrive This Day, shares insights and hope from her own journey from poverty to self-sufficiency. In her conversation with Carol, the listener can learn from Elizabeth’s experiences and how her deliberate choices and goals led her to a soul-satisfying career.
“I Appreciate You” – Practicing Appreciation To Create An Abundant Life
Lately, I’ve been reading about appreciation. The phrase “I appreciate you” is often used by my dearest friend. I’ve witnessed the impact and weight of these words…
Ep. 26 – The Power To Change Rests Within Us
An Interview With Jennifer Spor, Transformational Coach, Spiritual Mentor, Writer and Podcast Host. In this episode, Carol interviews Jennifer Spor, a transformational coach, spiritual mentor, writer, and host of the Awake & On Purpose Podcast. Jennifer firmly...
Ep. 25 – The Amazing Allure of a “Gentle Rebel”
Andy discusses how his path evolved and the impact of listening and feedback on guiding his next steps in building a community. Andy is a storyteller, a philosopher, and an example of coping in a busy world through planning, self-awareness, and creativity.
Ep. 24 – Undaunted By Failure: Lessons Learned
In this episode, Carol interviews Godwin Chan as he takes us on his journey of failure and discovery. Godwin shares the role of self-awareness…
Ep. 20 – The Story Behind “Emergence Of The Total Woman”
Dr. Shelley Negelow and Lynnis Woods-Mullins each experienced powerful lessons in healing from past life traumas that led…
Ep. 13 – The Beautiful Gifts of Highly Sensitive Introverts
As a child, Jas Hothi thought everyone was like him…
Subscribe to our Newsletter
Every Monday you’ll get a personalized "Note from Your Higher Self." Every Tuesday you'll receive our latest updates to our podcast and blog and at month's end a digest of the posts. We’ll share a dose of what we’re reading, learning, and collaborating on to help you continue to Elevate to a Higher Level.
Unsubscribe anytime. We will never sell or share your personal data. | https://heartsriseup.com/category/relationships-self-others/ |
The Broader Benefits of Transportation Infrastructure
Assessments of the economic benefits of transportation infrastructure investments are critical to good policy decisions. At present, most such assessments are based on two types of studies: micro-scale studies in the form of cost-benefit analysis (CBA) and macro-scale studies in the form of national or regional econometric analysis. While the former type takes a partial equilibrium perspective and may therefore miss broader economic benefits, the latter type is too widely focused to provide much guidance concerning specific infrastructure projects or programs. Intermediate (meso-scale) analytical frameworks, which are both specific with respect to the infrastructure improvement in question and comprehensive in terms of the range of economic impacts they represent, are needed. This paper contributes to the development of meso-scale analysis via the specification of a computable general equilibrium (CGE) model that can assess the broad economic impact of improvements in transportation infrastructure networks. The model builds on recent CGE formulations that seek to capture the productivity penalty on firms and the utility penalty on households imposed by congestion (Meyers and Proost, 1997; Conrad, 1997) and others that model congestion via the device of explicit household time budgets (Parry and Bento, 2001, 2002). The centerpiece of our approach is a representation of the process through which markets for non-transport commodities and labor create derived demands for freight, shopping and commuting trips. Congestion, which arises due to a mismatch between the derived demand for trips and infrastructure capacity, is modeled as increased travel time along individual network links. Increased travel time impinges on the time budgets of households and reduces the ability of transportation service firms to provide trips using given levels of inputs. These effects translate into changes in productivity, labor supply, prices and income.
A complete algebraic specification of the model is provided, along with details of implementation and a discussion of data resources needed for model calibration and application in policy analysis.
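One way to make the congestion mechanism in the abstract concrete is a link travel-time function paired with a household time budget. The following is an illustrative sketch only; the functional form and all symbols are our own assumptions, not necessarily the paper's specification:

```latex
% Illustrative sketch (assumed notation, not the paper's exact specification).
% Travel time on network link a rises with the ratio of trip volume v_a to
% link capacity k_a, e.g. via a BPR-type congestion function:
t_a \;=\; t_a^{0}\left[\,1 + \alpha \left(\tfrac{v_a}{k_a}\right)^{\beta}\right]
% Each household exhausts a fixed time endowment \bar{T} across labor supply L,
% leisure \ell, and the n_a trips it takes on each link a:
\bar{T} \;=\; L + \ell + \sum_{a} n_a \, t_a
```

Under this sketch, a higher ratio $v_a/k_a$ raises $t_a$, which tightens the household time budget (crowding out labor supply or leisure) and raises the input cost of providing trips, which is the channel through which congestion translates into the productivity, labor supply, price, and income effects the abstract describes.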
Year of publication: 2007-12
Authors: Wing, Ian Sue; Anderson, William P.; Lakshmanan, T.R.
Institutions: International Transport Forum, Organisation de Coopération et de Développement Économiques (OCDE)
Extent: text/html
Type of publication: Book / Working Paper
Notes: Number 2007/10
Persistent link: https://www.econbiz.de/10004962993
Similar items by person
- The Wider Economic Benefits of Transportation: An Overview (Lakshmanan, T.R., 2007)
- ARTICLES - Increasing Returns to Scale in Affluent Knowledge-Rich Economies (Ray, Gautam, 2001)
- E-commerce, Transportation, and Economic Geography (Anderson, William P., 2003)
This is where you will submit the revised version of your research paper.
Task: Write a research paper using evidence to support a thesis that addresses your research question examining a current issue or event in the news from the perspective of your field of study. The audience is people who are generally educated but do not have extensive knowledge of your field.
Length: At least 2000 words
Sources: Minimum of 6. At least 3 of these must be from scholarly journals, and all sources should be selected based on reliability, currency, and level of information/analysis.
Topic
This paper is the culmination of your research project, in which you are examining a current issue or event in the news from the perspective of your field of study. Before drafting your paper, you will have chosen a topic, developed a research question, and identified several potential sources in an annotated bibliography. You should write on the same topic for this paper, unless your professor has asked you to make changes to your topic.
Organizing and supporting your paper
As you write your paper, be sure to include the following:
an engaging introductory paragraph that includes an effective and clear thesis statement
any definition of terms or background information that your reader is likely to need to understand your paper
unified, supported, and coherent body paragraphs that defend the thesis
an effective conclusion
Research is a key element of this paper. Take care to support your claims with research throughout the paper. Include APA in-text citations whenever you use sources, whether through quote, paraphrase, or summary. An APA reference list at the end of the paper should list all of the sources cited in the text of the paper.
Point of view
This paper will be written in an academic style. Use third person point of view. Do not use “I” or “you.”
Formatting your assignment
Incorporate these elements of APA style:
Use one-inch margins.
Double space.
Use an easy-to-read font between 10-point and 12-point.
Include a title page with the title of your paper, your name, and the name of your school
Essay Writing Service Features
Our Experience

No matter how complex your assignment is, we can find the right professional for your specific task. ACME Homework is an essay writing company that hires only the smartest minds to help you with your projects. Our expertise allows us to provide students with high-quality academic writing, editing & proofreading services.
Disclaimer: It’s possible that the product’s actual price and release date will differ from those shown. The accuracy of the aforementioned information cannot be guaranteed.
The Google Pixel 7A is expected to cost 34,990 rupees in India. It is thought that the Google Pixel 7A will go on sale on December 4, 2022. The phone will be available in a variety of colors.
Google Pixel 7A Specifications
SUMMARY
|Processor Chipset||Google Tensor|
|RAM||6 GB|
|Rear Camera||Dual (12.2MP + 12MP)|
|Internal Memory||128 GB|
|Screen Size||6.1 inches (15.49 cms)|
|Battery Capacity||4410 mAh|
PERFORMANCE
|Chipset||Google Tensor|
|No Of Cores||8 (Octa Core)|
|CPU||2.8 GHz Dual-core Cortex X1, 2.25 GHz Dual-core Cortex A76, 1.8 GHz Quad-core Cortex A55|
|Architecture||64-bit|
|Secondary Processor||Titan M2|
|Fabrication||5 nm|
|RAM||6 GB|
|Graphics||Mali-G78 MP20|
DESIGN
|Screen Unlock||Fingerprint, Face unlock|
DISPLAY
|Resolution||1080 x 2400 pixels|
|Aspect ratio||20:9|
|Display Type||OLED|
|Size||6.1 inches (15.49 cms)|
|Bezel-less display||Yes, with Punch-hole|
|Pixel Density||431 pixels per inch (ppi)|
|TouchScreen||Yes, Capacitive, Multi-touch|
|Color Reproduction||16M Colors|
CAMERA
|Rear camera setup||Dual|
|Rear camera(Primary)||12.2 MP resolution, f/1.7 aperture|
|Rear camera(Secondary)||12 MP resolution, ultra-wide-angle lens, f/2.2 aperture|
|Front camera setup||Single|
|Front camera(Primary)||8 MP resolution, f/2.0 aperture|
|Flash||LED Rear flash|
|Video Resolution(Rear)||1920×1080 @ 30 fps|
|Video Resolution(Front)||1920×1080 @ 30 fps|
|Camera Features||Auto Flash, Auto Focus, Face detection, Touch to focus|
|Shooting Modes||Continuous Shooting, High Dynamic Range mode (HDR)|
BATTERY
|Type||Li-Polymer|
|Capacity||4410 mAh|
|Removable||No|
|Fast Charging||Yes|
|Wireless Charging||Yes|
STORAGE
|Internal Memory||128 GB|
|Expandable Memory||No|
SOFTWARE
|Operating System||Android v12|
|Custom UI||No|
CONNECTIVITY
|SIM Configuration||Dual SIM (SIM1: Nano, SIM2: Nano)|
|Network||SIM1: 5G, 4G, 3G, 2G; SIM2: 4G, 3G, 2G|
|5G Support||Yes, only SIM1|
|Voice over LTE(VoLTE)||Yes|
|Wi-Fi||Yes, with b/g/n|
|Wi-fi features||Mobile Hotspot|
|Bluetooth||Yes|
|USB||USB Type-C, Mass storage device, USB charging|
|GPS||Yes with A-GPS, Glonass|
|NFC Chipset||Yes|
|Infrared||Yes|
SOUND
|Speaker||Yes|
|Audio Jack||Yes, USB Type-C|
|Video Player||Yes, Video Formats: MP4|
SENSORS
|Fingerprint sensor||Yes, On-screen|
|Face Unlock||Yes|
|Other Sensor||Light sensor, Proximity sensor, Accelerometer, Compass, Gyroscope|
Design, Display, and Security
The Pixel 7A adopts a design similar to the standard Pixel 7 series. The phone has a punch-hole front design and a dual camera setup on the rear panel, along with a 6.1-inch OLED display with a 20:9 aspect ratio.
The screen has a resolution of 2400 x 1080 pixels and a pixel density of 431 ppi. The phone has a face recognition system and an on-screen fingerprint sensor for security purposes.
Performance, Camera, and UI
The company’s own Google Tensor chipset powers the most recent Google smartphone. The octa-core chipset pairs two Cortex X1 cores running at 2.8 GHz with two Cortex A76 cores at 2.25 GHz and four Cortex A55 cores at 1.8 GHz.
Additionally, the handset has a Mali-G78 MP20 GPU for graphics processing. The Pixel 7A has 128GB of internal storage and 6GB of RAM.
On the rear panel, the phone has a dual camera setup with a 12.2MP primary sensor that has an aperture of f/1.7 and a 12MP ultra-wide-angle lens that has an aperture of f/2.2. The phone has an 8MP camera with an f/2.0 aperture on the front. The Pixel device runs the Android 12 operating system.
Battery and Connectivity
A 4,410mAh battery powers the Google Pixel 7A, which supports fast charging.
In addition, it has stereo speakers and a variety of interesting connectivity options, such as 5G, a USB Type-C port, NFC, GPS, Wi-Fi, and 4G VoLTE. | https://www.geeksultd.com/2022/11/google-pixel-7a-price-in-india-2022-specs-release-date-amazing-features-detailed-information/ |
Ways That Understanding Your Colleagues can Make You a Better Leader
As a business owner, you spend a lot of time trying to achieve a certain outcome while at work, whether this is winning a large client or perfecting your products. However, there may be a situation that's out of your control which puts a spanner in the works and leaves you feeling disappointed. It may be that your colleague does the opposite of what you were expecting and it can leave you feeling confused about why they did what they just did.
If you don't resolve these problems, the situation may end up recurring, which can lead to frustration. To improve your results you need to see what went wrong and address it from every angle. But how do you do this?
Uncover the missing stories
There's often a difference between the story we tell and what actually happened. The first step in making sense of what happened is to find the difference between these two narratives.
For example: A new manager feels as though they're openly sharing what should be done and ways in which they think the workflow can be improved. They've been asking for regular status updates and sharing suggestions on the next steps to be completed. Their team has started to get chilly with them even though they feel as though they're being a good leader by fostering open communication and keeping the team engaged with the work.
Now let's look at why the team may be feeling chilly towards the new manager. The oversharing of work and the suggestions being made feel like micromanagement, and the regular communication is stifling and intrusive, which is counterproductive to the work they're trying to accomplish.
These two different perspectives can show how there's a different narrative depending on whose perspective you're looking from.
If the manager in this situation is defensive and feels as though their way is the right way, the team may not voice their concerns. This can soon lead to resentment within the team towards their manager.
The job of a leader is to uncover the stories that add a greater sense of meaning to a shared experience. If a member of the team stood up to the manager and respectfully voiced their concerns about the management style, this can open a dialogue for change.
To uncover any hidden stories in your business, think about a recent experience that concerns you and reflect on the following:
- Are there any unknown stories that could provide important information?
- Are there any stories that others are choosing not to tell, if so, why?
- Are there any stories that have been told but have been discounted?
- Are there stories that may be perceived as taboo or off-limits?
Once you identify a potentially missing story, go directly to the person who can tell you about it and invite open and honest conversation. Going back to the manager, they could ask "Have I asked the team about what's going on? Do they know I'm open to honest feedback? What am I doing that's making people uncomfortable about speaking up?"
Look for reasons behind actions
There are several reasons why someone acts the way that they do. For instance, a manager may act a certain way because they feel that's how they "should act", this could be based on previous experience. They may have worked somewhere with an absent boss who left them feeling isolated and unsupported, leaving them to vow they would never be that kind of manager.
If your colleagues are aware of your back-story and why you act the way you do, they may feel willing to cut you some slack over it. This can be one good reason to uncover the hidden story and the reasons why people may act the way that they do.
Try to understand, not judge
When someone acts in a way that isn't expected or wanted, it can lead to some quick, and often negative, assumptions. However, these assumptions may be incorrect and the reason for the action is neither good nor bad. If you stay neutral in your attitude you won't waste energy blaming others for undesired outcomes or lamenting something you can't change.
It's more productive to focus on the atmosphere you want to create in the business and what it'll take to get there. Internally reflecting on what other people are thinking and what they're likely to do next can help.
It can be difficult to carry out things like this when your to-do list is stretched to its limit. But it can be crucial to slow down and carve out space to adopt such habits as they can lead to a grounded and strategic approach to what needs to be done.
Summing Up
The next time you're in a situation that's gone awry, start thinking about why it went the way it did. Don't guess at what went wrong, or assume the worst from those around you. Ask anyone else involved for their outlook on what happened and look to how the situation can be changed for the future and to try and prevent it from happening again. | http://www.micro-ink.cn/blog/how-understanding-your-colleagues-improve-leadership |
The Causes of Revolution: A Case Study of Iranian Revolution of 1978-79
Description: This study investigates the causes of the Iranian revolution of 1978-79. To this end, the different theories of revolution are reviewed in Chapter One. Chapter Two provides a discussion of the historical background of the country and the role the clergy played in shaping its political development. Socioeconomic and political factors which contributed to the outbreak of this revolution are examined in the following two chapters. Finally, an attempt is made to draw some conclusions on whether existing theories of revolution can fully explain the Iranian upheaval of 1978-79 or not. For the preparation of this study United States government documents and Iranian and English language scholarly works were consulted. | https://digital.library.unt.edu/search/?q5=&searchType=advanced&fq=str_year%3A1982&fq=str_location_region%3AMiddle+East |
Course unit details:
Marketing and Society
|Unit code||BMAN31621|
|Credit rating||20|
|Unit level||Level 3|
|Teaching period(s)||Semester 1|
|Offered by||Alliance Manchester Business School|
|Available as a free choice unit?||No|
Overview
The course is designed to provide students with the opportunity to explore the broader function and impact of marketing. It will encourage students to consider the role of marketing in shaping society and the role of society in shaping marketing.
Pre/co-requisites
Pre-requisite course units have to be passed by 40% or above at the first attempt unless a higher percentage is indicated below.
Pre-requisites:
- BMAN10101 Marketing Foundations
- BMAN24281 Marketing Management OR BMAN24352 Marketing communications in the Digital Age
Aims
The course aims to introduce students to contemporary issues within consumption, social marketing and not-for-profit marketing. A particular focus will be the growing sense of responsibility within the marketing discipline to address issues at the interface between marketing and society.
Learning outcomes
- To understand research which addresses negative consumer outcomes and the realisation of positive consumer outcomes.
- To discuss the growing sense of responsibility within the marketing discipline and address issues at the interface between marketing and society.
- To develop understandings of the potential impacts that marketing and markets can have on consumers
- To identify relevant theoretical frameworks with which to explore a range of consumer settings related to consumer wellbeing
- To develop understanding of the nature of non-commercial marketing particularly how it differs from other marketing activities
- To explore and assess the challenges marketers in not-for-profit organisations face.
- To appraise the application of marketing techniques and concepts in not-for-profit and social marketing contexts
- To understand and evaluate the application of theories of behaviour change to social marketing challenges.
- To work in a group to develop critical understanding and appreciation of practical and theoretical issues in actioning a non-commercial marketing campaign
Syllabus
· The application of marketing principles to non-commercial marketing contexts
· The purpose, scope and design of social marketing
· Key theories of behaviour change
· The challenges facing marketers in not-for-profit organisations
· Consumer vulnerabilities and associated consumption contexts
· Anti-consumption and consumer activism
· ‘Problematic’ consumption and antisocial behaviours
· Consumption ethics and sustainability
Teaching and learning methods
Methods of delivery:
Lecture hours: 26 (1 hour in week 1, then 3 hours per week delivered as 1x1 hour and 1x2 hours in weeks 2,3,4,5,7,8,9,10, then 1 hour in week 11).
Seminar hours: 8 (1 hour per weeks in weeks 3,4,5,7,8, 9, 10 & 11)
Poster presentation session: 2 hours (week 11 – at the time of the 2-hour lecture)
Assessment methods
Group coursework project equivalent to 4000 words, assessed through a poster presentation and project report (50%), with an option of peer assessment made available to students.
2-hour Examination (50%)
Feedback methods
· Informal advice and discussion during lectures and seminars.
· Ongoing formative verbal feedback to groups during project seminars
· Written group feedback on poster and project report
· Generic feedback on blackboard regarding overall exam performance
In addition to the course unit evaluation questionnaire, students are encouraged to give feedback through emails and conversations at any time, and using the online questionnaire near the end of the semester
Recommended reading
Course material will be delivered through a mixture of lectures, seminars, Blackboard (for lecture slides, case questions and solutions, URLs of relevant material, etc.).
Andreasen, A. (2006) Social marketing in the 21st Century SAGE Publications
French, J. and Gordon, R. (2015), Strategic Social Marketing, SAGE Publications
Lee, N.R. and Kotler, P. (2015), Social Marketing: Changing behaviors for good, 5th Edition, SAGE Publications
Sargeant, A. (2009), Marketing Management for Nonprofit Organizations, 3rd Edition, Oxford: Oxford University Press
Murphy, P.E and Sherry Jr, JF (2014) Marketing and the Common Good: Essays from Notre Dame on Societal Impact, Taylor and Francis
Mick, D.G, Pettigrew, S, Pechmann and Ozanne, JL (eds.) (2012) Transformative consumer research for personal and collective well-being, Routledge
Study hours
|Scheduled activity hours|
|Lectures||26|
|Seminars||8|
|Independent study hours|
|Independent study||164|
Teaching staff
|Staff member||Role|
|Emma Banister||Unit coordinator|
|Anna Goatman||Unit coordinator|
Additional notes
Programme Restrictions: This course is available to final year students on BSc Management / Management (specialism), BSc International Management with American Business Studies, BSc International Management and BSc Information Technology Management for Business. | https://www.manchester.ac.uk/study/undergraduate/courses/2021/03514/bsc-international-management-with-american-business-studies/all-content/BMAN31621 |
#Recipeforafuture is more than a menu item: it is a recipe for planetary health and individual well-being, including cancer prevention. Building on the EAT-Lancet Commission report, which sets out guidance on what a healthy, sustainable diet looks like, we want to think about how we can get there – what eating well means in terms of everyday changes in food habits. Thinking about global health means thinking wider than our plate and considering how we produce, transport, consume and waste food and drink. It means planning meals where plants are the new main course, wholegrains are core and a huge variety of fruits and vegetables is provided alongside small amounts of meat, dairy and seafood. It means staying away from ultra-processed choices, saturated fats, refined grains and added sugar.
Throughout the month of January we asked friends and colleagues of the SCPN to suggest healthy meat-free recipes that we could share with our social media following; you can find these collated here. We also published two blogs in line with our campaign that highlighted its importance not only for cancer prevention but also for the health of our planet. | https://www.cancerpreventionscotland.org.uk/campaigns/recipesforafuture/ |
Wednesday, June 29, 2011
Wed, Jun 29, 2011 at 8:26 AM
Berkeley’s Firehouse Art Collective is trying to rally food vendors for an East Bay spinoff of San Francisco’s Underground Market, which lurched to an abrupt halt earlier this month after public health authorities slapped organizer Iso Rabins with a cease and desist order.
As the 2011 Berkeley Juneteenth Festival blazed out along a five-block stretch of Adeline Street last Sunday, Firehouse Art Collective directors Tom Franco and Julia Lazar debuted the bazaar, which they hope can become a weekly gathering of food, art, and crafts vendors in a barnlike space at 3192 Adeline. On Sunday, twenty-eight vendors (half of them hawking food) set up in the high-roofed former metal workshop, including Frozen Kuhsterd, Boffo Cart, Oaktown Jerk, Morph, Berlyn’s Eatery, A Humble Plate, and 23 Monkey Tree, Lazar and Franco’s kombucha business.
Bazaar or buzzar? Whatever the spelling, organizers hope it'll be a weekly event.
The couple also own the Firehouse North gallery in Berkeley’s Gourmet Ghetto, and the Firehouse East studios on Harmon Street in South Berkeley.
Lazar described Sunday’s market as a reaction to the despair of the mostly unlicensed, home-based food vendors after the Underground Market's suspension — despair, and maybe the whiff of opportunity. She and Franco reached out to the market’s Google Group to find food sellers for the Berkeley bazaar, slated for Saturdays in July and both weekend days in August, noon to 6 p.m.
“We thought, we’ll just use this space for now,” Lazar said of the Collective’s newly acquired venue on Adeline, “so members who are community-based have a place to bring their food to the community.” That’s a lot of community, though unfortunately, not much of it showed up for Sunday’s debut — Lazar thinks the Juneteenth festivities proved too much of a distraction. On Monday, she was trying to line up vendors for this Saturday’s bazaar, dropping the cost of a booth to $37.50.
As for the little problem of city and county permits, the issue that got the Underground Market in trouble? Lazar was vague. “I have to look at that again,” she said. “We have a resale permit [for the bazaar], and each vendor has to have their own permit.” But, she said, the Firehouse Art Collective is trying to find ways to help vendors get their permits, and it has access to a commercial kitchen in Emeryville that could serve as a commissary for bazaar vendors.
The study visit aims at fostering professional exchange and knowledge transfer between civic education experts from Germany, Jordan, Morocco, Egypt and Tunisia. It provides the participants with in-depth insights on the field of civic education in Germany through, i.e. expert rounds, field trips and excursions.
Extensive sessions will be dedicated to share and discuss both approaches and experiences from Germany and the MENA region.
The study visit is organised by Haus am Maiberg, Academy of Social and Political Education in Heppenheim/Germany in cooperation with Goethe-Institut Cairo, the German Agency for Civic Education (Bundeszentrale für politische Bildung, bpb) and NACE (Networking Arab Civic Education).
Who can apply?
Stakeholders in the field of civic education from Jordan, Morocco, Egypt and Tunisia can apply. Fluency in English is required as it will be the main language of the study trip.
We aim to include participants from different backgrounds and to have a balanced representation of women and men in the group. Moreover, we welcome applications from civic education stakeholders who work for organisations putting forward colleagues belonging to minorities in their countries, or who are migrants or refugees.
We will ensure a diversity of participants from established and smaller/new organisations and a diversity of urban and countryside based organisations; the value basis of the sending organisations will be compatible with civic education as a whole.
The expenses for international flights and accommodation as well as additional travel expenses will be funded through the Goethe-Institut’s “Dialogue & Transition” programme, funded by the German Federal Foreign Office. The programme empowers civil societies in Egypt, Tunisia, Libya, Morocco, Jordan, Lebanon and Iraq with different projects in the fields of culture and education. The projects under the “Dialogue & Transition” framework primarily target young people active in civil society and education, a key objective being the development of a well-trained civil society with a high level of confidence and self-understanding. Education is, furthermore, the key to shaping the future and to the active participation of all citizens, both men and women.
The deadline for application is 16 September 2018.
Please send your application with a letter of motivation and CV to [email protected]. Please include relevant information on the institution you are affiliated with and attach a short statement on your own professional experiences and/or practices you would like to share with the group.
Rigid chest wall support may be achieved with mesh, acellular dermal matrix, or autogenous material such as tensor fascia lata. Of these, alloplastic mesh is most prone to infection.
Soft tissue coverage can be achieved with local muscle flaps.
Proper treatment of mediastinitis includes debridement, rigid sternal fixation when possible, and soft tissue coverage.
Pectoralis muscle is the workhorse for sternal and anterior chest wall defects.
Latissimus muscle is known for its bulk and ability to reach intrathoracic defects. Caution is advised for patients with previous thoracotomy incisions as it may have been divided.
Serratus anterior supplies less bulk than the latissimus but will function to cover lateral chest wall defects and some intrathoracic needs.
Rectus abdominus is an excellent choice for sternal and anterior chest wall defects, especially the lower two-thirds. Furthermore, it can be used to fill space within the mediastinum.
The omentum can reach almost any chest wall defect. Its greatest advantage is its pedicle length, which can be extended by dividing the arcades. It does, however, require a laparotomy for harvest.
Introduction
Common etiologies for chest wall defects include tumor resection, deep sternal wound infections, chronic empyemas, osteoradionecrosis and trauma. Although each mechanism carries individual nuances, all will require adequate debridement and, when possible, replacement of like with like. Fundamentally, the chest wall must be restored to protect the underlying viscera, maintain respiratory mechanics, and serve as a base for the upper limb and shoulder.
Chest wall reconstruction can be generalized to include skeletal support and soft tissue cover. Skeletal support to prevent paradoxic chest wall motion is usually required when the defect exceeds 5 cm in diameter. Generally, this corresponds to defects exceeding a two-rib resection. This rule of thumb, however, is somewhat region dependent (Table 10.1). Posterior chest wall defects of up to twice the size of those in the anterior and lateral chest may be tolerated, owing to scapular coverage and support.1,2 Anecdotally, patients who have undergone radiation and have decreased chest wall compliance will tolerate larger resections without skeletal replacement due to an overall fibrosis of their viscera.
|Anterior||Between anterior axillary lines|
|Lateral||Between anterior and posterior axillary lines|
|Posterior||Between posterior axillary lines and the spine|
Options for skeletal support include various mesh products including PTFE (Gore-Tex®), polypropylene, and Mersilene (polyethylene-terephthalate)/methylmethacrylate,3 as well as acellular dermal matrix (Fig. 10.1). Furthermore, use of tensor fascia lata (TFL) as both graft and flap reconstruction has been described. Little data exists as to outcome comparisons between these options. However, in a retrospective review of 197 patients, PTFE and polypropylene appear to be equivalent in complications and outcomes.1 Another, smaller retrospective review of 59 patients prefers the Mersilene-methylmethacrylate sandwich to PTFE due to decreased paradoxic chest wall motion.4 As alloplastic implants trend towards an increased infection rate when compared with autogenous material or acellular dermal matrix, the authors prefer to avoid mesh when possible.
Fig. 10.1 Implantable mesh products including polypropylene, PTFE (Gore-Tex®), and acellular dermal matrix.
Chest wall reconstruction almost always requires some form of soft tissue coverage as very few defects will close primarily. Reconstructive goals include wound closure with maintenance of intrathoracic integrity, restoration of aesthetic contours, as well as minimization of donor site deformity.
Recruitment of local muscles with or without overlying skin is often the first-line of reconstructive offense. These muscles include pectoralis major, latissimus dorsi, serratus anterior, and rectus abdominus. The omentum may also be used. Commonly the ipsilateral latissimus muscle is divided during thoracotomy incisions and the authors encourage early communication between surgeons if there are multiple teams in order to mitigate against routine division. Muscle sparing thoracotomies help to preserve both the latissimus and serratus muscles while providing adequate intrathoracic access (Fig. 10.2).
Common flaps for reconstruction
Pectoralis major
Pectoralis major, a muscle overlying the superior portion of the anterior chest wall, is the workhorse for chest wall reconstruction, especially for defects of the sternum and anterior chest. Its main function is to internally rotate and adduct the arm. Additionally, this muscle serves as the foundation for the female breast and when absent, such as in Poland’s syndrome, reconstruction may be indicated for aesthetic reasons (Fig. 10.3). It originates from the sternum and clavicle and inserts along the superomedial humerus in the bicipital groove. Its dominant pedicle is the thoracoacromial trunk which enters the undersurface of the muscle below the clavicle at the junction of its lateral and middle third. Segmental blood supply is derived from internal mammary artery (IMA) perforators. Based on the thoracoacromial blood supply, it will easily cover sternal and anterior chest wall defects as an island or advancement flap. Division of the pectoralis major muscle insertion can also aid in advancing the muscle flap into a properly debrided mediastinal wound. The muscle can also be turned over based on the IMA perforators and with release of its insertion, cover sternal, mediastinal, and anterior chest wall defects. Importantly, when used as a turnover flap, the internal mammary vessels and their perforators must be examined and deemed intact particularly in the setting of post-sternotomy mediastinitis. This vessel may be absent (left more commonly used than right) due to harvest for coronary artery bypass grafting or damaged during wide debridement of a post-sternotomy wound. The muscle may also be placed intrathoracically, however, this will necessitate resection of a portion of the 2nd, 3rd, or 4th rib (Fig. 10.4). The muscle may be harvested with or without a skin paddle. Donor site deformity including scar placement and loss of anterior axillary fold may be aesthetically displeasing.5
Fig. 10.3 Pectoralis major serves as the foundation for the female breast and when absent, such as in Poland syndrome, reconstruction may be indicated for aesthetic reasons.
Latissimus dorsi
Latissimus dorsi, a large, flat muscle covering the mid and lower back, is often recruited for chest wall reconstruction, especially when significant bulk and mobility are required. It is easily placed into the chest for intrathoracic space-filling. It is known as the climbing muscle and adducts, extends, and internally rotates the arm. It originates from the thoracolumbar fascia and posterior iliac crest and inserts into the superior humerus at the intertubercular groove. Superiorly, it is attached to the scapula and care must be taken to carefully separate this muscle from the serratus at this point to avoid harvesting both muscles. Its dominant blood supply is the thoracodorsal artery, which enters the undersurface of the muscle five centimeters from the posterior axillary fold.6 Segmental blood supply is derived from the posterior intercostal arteries as well as the lumbar artery. Based upon its thoracodorsal pedicle, the muscle can easily reach the ipsilateral posterior and lateral chest wall, including defects involving the anterior chest wall, sternum, or mediastinum. It can also be turned over and based upon the lumbar perforators; in this fashion, it can reach across the midline back. Again, it can be moved intrathoracically with rib resection. Donor site morbidity can include shoulder dysfunction, weakness and pain, as well as unattractive scarring.7 However, our experience suggests these concerns are minimal. Also, transposition of this muscle can blunt or obliterate the posterior axillary fold, resulting in some asymmetry (Figs 10.5, 10.6).5 Care must be taken to properly drain the donor site, as seromas are common. Quilting or progressive tension sutures may mitigate against seroma formation.
Serratus anterior
Serratus anterior is a thin broad multi-pennate muscle lying deep along the anterolateral chest wall. It originates from the upper 8 or 9 ribs and inserts on the ventral-medial scapula. It functions to stabilize the scapula and move it forward on the chest wall such as when throwing a punch. It has two dominant pedicles including the lateral thoracic and the thoracodorsal arteries. Division of the lateral thoracic pedicle will increase the arc posteriorly and similarly division of the thoracodorsal will increase the arc anteriorly. The muscle will reach the midline of the anterior or posterior chest. More commonly, however, it is used for intrathoracic coverage, again requiring rib resection. An osteomyocutaneous flap may be harvested by preservation of the muscular connections with the underlying ribs. Donor site morbidity is related to winging of the scapula and can be avoided if the muscle is harvested segmentally and the superior five or six digitations are preserved (Fig. 10.7).5
Rectus abdominus
Rectus abdominus is a long, flat muscle which constitutes the medial abdominal wall. It originates from the pubis and inserts onto the costal margin. It can easily cover sternal and anterior chest wall defects and can also fill space within the mediastinum. It has two dominant pedicles, the superior and inferior epigastric arteries, and functions to flex the trunk. With division of the inferior pedicle, the muscle will cover the mediastinum and the anterior chest wall. It may be utilized despite previous IMA harvest based upon its minor pedicle, the 8th intercostal artery. It can be harvested with an overlying skin paddle, and usually the resulting cutaneous defect can be closed primarily. When taken with overlying fascia, there is a risk for resultant hernia, and at times, mesh reinforcement of the abdominal wall is necessary. Caution is also advised for patients with prior abdominal incisions as the skin perforators or intramuscular blood supply may have been previously violated (Fig. 10.8).5
Omentum
The omentum is composed of visceral fat and blood vessels arising from the greater curve of the stomach; it is also attached to the transverse colon. This flap can easily cover wounds in the mediastinum, anterior, lateral and posterior chest wall. It has two dominant pedicles, the right and left gastroepiploic arteries. The greatest benefit of this flap is the pedicle length, which can be easily elongated with division of internal arcades. The flap is mobilized onto the chest or into the mediastinum through the diaphragm or over the costal margin. Ideally, the flap is mobilized through a cruciate incision in the right diaphragm as the liver helps to buttress the incision and prevent diaphragmatic hernia. Furthermore, right-sided transposition obviates the need to navigate the flap around the heart. Care must be taken when interpolating the omentum as it is often of very little substance and can easily be avulsed during passage through the diaphragm. Strategies to protect the omentum during transposition include placing the omentum into a bowel bag. The empty bag can be passed from the mediastinum into the abdomen via the diaphragm incision, past the left lobe of the liver. The omentum is then gently packed into the bowel bag with tension transferred to the bowel bag rather than the omentum during interpolation. Caution is again advised for patients with prior laparotomy incisions as the omentum may have significant intra-abdominal adhesions or have been previously resected (Figs 10.9–10.11).5
Fig. 10.10 Omentum is passed through cruciate incision in diaphragm under the left lobe of the liver.
History
Throughout history, the ability to perform surgical resections has been limited by patients' ability to survive them. Chest wall resections, in particular, were difficult given the intimate relationship of the chest to the vital structures beneath – the heart, lungs, and great vessels. Sequelae such as pneumothorax were exceptionally challenging for surgeons in the era preceding positive pressure ventilation and tube thoracostomy.
Despite adversity, however, and as early as 1906, the latissimus dorsi was used for chest wall coverage following radical mastectomy.8 This was similarly performed by Campbell in 1950.9 The earliest use of fascia lata grafts appears in 1947.10 Axially based flaps regained popularity in the 1970s, and in 1986 Pairolero and Arnold published their series of 205 patients managed with muscle flaps, demonstrating their safety and durability.11
As surgical advances and innovations were made, the sequelae of postoperative infection followed close behind. Interestingly, the treatment of mediastinal infection has changed dramatically since the first description of the sternotomy incision in 1957.12 Open chest drainage fell out of favor quickly due to exposure of the heart and mediastinum and the subsequent risk of rupture. Mortality rates were as high as 50% with open packing.13 Throughout the 1960s, closed chest drainage with antibiotic catheter irrigation was advocated as the first-line therapy for deep sternal wound infections.14,15 This innovative technique reduced mortality to approximately 20%.16 Then, in 1980, Jurkiewicz published a landmark paper revealing that debridement and muscle flap coverage was significantly more successful than antibiotic catheter drainage alone.17 This advancement further reduced mortality rates to 10%.18 In recent times, mediastinitis treatment has advanced to include subatmospheric pressure wound therapy and rigid fixation of residual sternal bone. These techniques address the loss of chest wall integrity, paradoxical chest wall motion, and chronic pain.19,20
Patient selection/approach to patient
The importance of a multidisciplinary approach to chest wall reconstruction cannot be overstated. These patients, whether suffering from malignancy, infection, or trauma, are often also plagued with cardiac or respiratory insufficiency, diabetes, obesity, malnutrition, and generalized deconditioning. Thorough work-up including pulmonary function testing, physical therapy and nutritional assessment, and preoperative control of blood sugar may optimize outcomes. Furthermore, communication between the referring surgeon and the reconstructive plastic surgeon is crucial for properly defined preoperative reconstructive expectations as well as incision planning. For example, it may be advantageous to spare chest wall musculature, such as the latissimus dorsi, during thoracotomy.
Acquired chest wall deformities are commonly the result of iatrogenic injury. Usually encountered in conjunction with cardiac or thoracic surgery, wound infections, mediastinitis, osteoradionecrosis, refractory empyema, and bronchopleural fistulas can all necessitate chest wall reconstruction. | https://plasticsurgerykey.com/reconstruction-of-the-chest/
Each redistricting dataset merges the electoral data the SWDB collected and processed over the preceding decade with the most current census data (PL94-171). The result is a census block level dataset that allows for longitudinal analysis of electoral data over time on the same unit of analysis. Electoral data consist of the Statements of Vote (SOV) and Statements of Registration (SOR) for each statewide election. These data are collected from the Registrars of Voters for each of the 58 California counties with each election.
The SWDB collects the Statement of Vote and the Statement of Registration along with various geography files from each of the 58 counties for every statewide election. The Statement of Vote is a precinct level dataset and precincts in California change frequently between elections. The goal of the SWDB is to make election data available that can be compared over time, on the same unit of analysis – a precinct, a census block or a census tract.
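The conversion described above can be sketched in miniature. The following is a hypothetical illustration (the identifiers, shares, and vote counts are invented for the example and are not the SWDB's actual schema): precinct-level vote totals are apportioned onto census blocks via a precinct-to-block crosswalk, so that results from different elections can be compared on the same stable unit of analysis.

```python
# Precinct-level Statement of Vote for one election (toy data).
sov = {"P1": 1200, "P2": 800}

# Crosswalk: each tuple gives a census block, its parent precinct, and the
# block's assumed share of that precinct's voters (shares per precinct sum to 1).
crosswalk = [
    ("B001", "P1", 0.75),
    ("B002", "P1", 0.25),
    ("B003", "P2", 1.00),
]

# Apportion each precinct's votes to its blocks. Block-level totals can then
# be re-aggregated into any district plan or joined to PL94-171 census data.
block_votes = {
    block: sov[precinct] * share
    for block, precinct, share in crosswalk
}
print(block_votes)  # {'B001': 900.0, 'B002': 300.0, 'B003': 800.0}
```

Because blocks are the smallest census geography, totals built this way can be rolled up to any later precinct or district boundaries, which is what makes longitudinal comparison possible.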
Newsmax.com - Thursday, April 5, 2012
Author: Greg McDonald
Texas taxpayers have been saddled with a $750,000 legal bill so far in the state’s defense of a new legislative and congressional redistricting plan being challenged in court by the Justice Department and the Democratic Party.
According to records released to the Houston Chronicle under the Texas Public Information Act, the costs of state contracts for outside legal counsel through August are expected to grow even more as the court battle over whether the plan discriminates against minorities grinds on.
“The Attorney General’s Office is fulfilling its obligation to defend state redistricting laws enacted by the Texas Legislature, just as this office defends all duly enacted state laws when they are challenged in court,” Lauren Bean, a spokeswoman for state Attorney General Greg Abbott, told the Chronicle Wednesday.
Gov. Rick Perry signed the Republican-drawn plan resetting the boundaries for legislative and congressional districts into law last year. The state then submitted the plan for approval to U.S. District Court in Washington, D.C., as required by the U.S. Voting Rights Act.
Since then, the case has been up and down the federal court system, including a trip to the U.S. Supreme Court, where a federal judge’s attempt to re-write the plan was rejected.
Rice University political science professor Mark Jones says the state’s legal bill will likely grow by another 30 percent before the case is over, and he predicted its Democratic opponents would continue to attack the law by criticizing the cost of its defense.
“I expect the whole thing to top out over a million or so,” Jones told the Chronicle. “The political spin on it will depend on to what extent is that viewed as excessive.”
Democrats have accused Abbott, a potentially strong Republican candidate for governor, of wasting taxpayer money to defend the law.
“The maps that the state is trying to implement absolutely ignore the demographic realities of Texas,” said Rebecca Acuña, a spokeswoman for the Texas Democratic Party.
Recommended grading tools are clinical performance, Hurley staging, and abscess and inflammatory lesion counts. The Visual Analogue Scale (VAS) and the Dermatology Life Quality Index (DLQI) also can be considered. For research studies, the recommended grading systems are the Hidradenitis Suppurativa Clinical Response (HiSCR) score, Hidradenitis Suppurativa Physician Global Assessment (HS-PGA), Sartorius score, DLQI, and pain VAS. Others to consider are the Hidradenitis Suppurativa Impact Assessment (HSIA) and the Hidradenitis Suppurativa Symptom Assessment (HSSA).
Screening for comorbidities
The patient should undergo a physical examination and a review of systems, with screening for metabolic syndrome, diabetes, depression, anxiety, polycystic ovarian syndrome, and tobacco abuse. If patients have additional risk factors for diabetes (eg, obesity, hyperlipidemia, hypertension, acanthosis nigricans), refer them for HbA1c and/or fasting glucose testing. Based on the review of systems, other screening considerations are depression, inflammatory bowel disease, autoinflammatory syndromes, and inflammatory arthropathy.
Lifestyle modifications and alternative treatments
Patients should be counseled to quit smoking. If the patient is obese, weight loss should be recommended. As for alternative treatments, zinc supplementation may be recommended, but the evidence is weak. Vitamin D supplementation lacks sufficient evidence, as does avoidance of dairy or brewer's yeast, friction, deodorant, and shaving/depilation.
Surgical modalities
Deroofing or excision is recommended for recurrent nodules and tunnels. Incision and drainage should only be used for pain relief from acute abscesses. For extensive chronic lesions, use wide local scalpel, carbon dioxide, or electrosurgical excision (with or without reconstruction). Secondary intention healing, primary closure, delayed primary closure, flaps, grafts, or skin substitutes are all appropriate for wound healing. It is likely beneficial to continue medical treatment during the perioperative period; this poses minimal risk for increased postoperative complications.
Pain management
Disease control is paramount for pain management. Pain management involves considering the multidimensional aspects of pain. Short-acting opioids may be needed in select cases; dosing should be individualized and drugs carefully prescribed. Use of the World Health Organization pain ladder is recommended for chronic pain management.
Wound care
Follow the principles of best-practice individualized wound care for local wound care in surgical and nonsurgical wounds. Consider periwound skin condition, location, amount of drainage, cost, and patient preference when choosing the type of dressing. While it carries a low risk of contact dermatitis, use of antiseptic washes is generally supported by expert opinion. Negative-pressure therapy may be beneficial for selected large open wounds (for 1-4 wk), followed by delayed reconstruction.
Light, laser, and energy sources
Nd:YAG laser is recommended in those with Hurley stage I disease based on expert consensus. It is also recommended in those with Hurley stage II or III disease based on randomized controlled trial and case series data. Lower quality evidence suggests other wavelengths that are used for follicular destruction may be helpful. In patients with Hurley stage II or III disease with fibrotic sinus tracts, carbon dioxide laser is recommended. Photodynamic therapy and external beam radiation have limited roles.
Topical and intralesional therapies
Topical clindamycin can be used to reduce pustules; however, it carries a high risk of bacterial resistance. Resorcinol 15% cream is recommended; however, it may induce contact dermatitis. Expert opinion supports using antibacterial washes such as chlorhexidine, zinc pyrithione, or others. Intralesional corticosteroid injections can be considered for short-term control of inflamed lesions, but the evidence for this recommendation is weak.
Systemic antibiotics
For mild-to-moderate disease, tetracyclines are recommended for a 12-week course or for long-term maintenance when appropriate. For mild-to-moderate disease or for a first-line or adjunctive treatment in severe disease, combination therapy with clindamycin and rifampin is an effective second-line treatment. For moderate-to-severe disease, combination therapy with metronidazole, moxifloxacin, and rifampin is recommended as second- or third-line treatment. A minority of patients with Hurley stage I or II disease may benefit from dapsone as long-term maintenance therapy. For severe disease as a one-time rescue treatment or a bridge to surgery or other long-term maintenance, intravenous ertapenem is recommended. Balance the benefit achieved for each patient against the antibiotic resistance risk when determining the frequency and duration of antibiotic use. Disease recurrence is common following cessation of antibiotic therapy.
Hormonal agents
All described evidence for hormonal therapies has major limitations, based on variable outcome measures and methods, small sample sizes, and reporting bias. Estrogen-containing combined oral contraceptives, cyproterone acetate, spironolactone, finasteride, and metformin can be considered in appropriate female patients. These can be used as monotherapy for mild-to-moderate disease or in combination with other agents for more severe disease. Progestogen-only contraceptives should likely be avoided, as anecdotal data suggest they may worsen hidradenitis suppurativa.
Retinoids
Owing to mixed results from isotretinoin studies, it should be considered only as a second- or third-line treatment or in patients with severe concomitant acne. Acitretin should also be considered a second- or third-line treatment; it may be superior to isotretinoin, but robust comparative studies are lacking. While not available in the United States, alitretinoin is supported by a single study in women. It is available in Canada and other countries.
Immunosuppressants
Based on the available limited evidence, methotrexate or azathioprine is not recommended. While the evidence is weak, combination colchicine/minocycline can be considered for refractory mild-to-moderate disease; avoid colchicine monotherapy. In patients with recalcitrant moderate-to-severe disease in whom standard therapies have failed or who are not candidates for standard therapy, consider cyclosporine. In acute flares or as a bridge to other treatments, short-term pulse steroid therapy can be considered. In cases of severe disease, consider using long-term corticosteroids, tapered to the lowest dose, as adjunctive therapy when the response to standard therapy has been suboptimal.
Biologics
Adalimumab and infliximab are recommended for moderate-to-severe disease. Adalimumab should be administered at the dosage approved for hidradenitis suppurativa. Dose-ranging studies are needed to determine the optimal dosage for infliximab. Agents that may be effective include anakinra (100 mg daily) and ustekinumab (45-90 mg q12wk). Dose-ranging studies are needed for anakinra, and placebo-controlled dose-ranging studies are needed for ustekinumab. Etanercept use is not supported by the limited available evidence.
Pediatric and pregnant patients
For pediatric patients with hidradenitis suppurativa, a laboratory evaluation for precocious puberty should be performed in those aged 11 years or younger when other suggestive physical examination findings are present. Additionally, avoid tetracyclines in children younger than 9 years. Avoid administration of acitretin to female patients during childbearing years. Agents to be avoided by pregnant patients with hidradenitis suppurativa include hormonal agents, retinoids, most immunosuppressive medications, and most systemic antibiotics. Topical treatments, procedures, and safe systemic agents are acceptable for use in pregnant patients.
European Dermatology Forum guidelines
In the published guidelines for hidradenitis suppurativa developed by the Guidelines Subcommittee of the European Dermatology Forum, it is recommended that hidradenitis suppurativa be treated based on the subjective impact and objective severity of the disease, as follows:
-
Locally recurring lesions can be treated surgically
-
Medical treatment either as monotherapy or in combination with surgery is more appropriate for widely spread lesions
-
Medical therapy may include antibiotics and immunosuppressants
A Hurley severity grade‒relevant treatment of hidradenitis suppurativa is recommended by the expert group with the following treatment algorithm:
-
Limited surgery such as deroofing and laser ablation techniques are especially suited for recurrent hidradenitis suppurativa lesions at fixed locations in Hurley 1/mild Hurley II stage
-
Wide surgical excision is appropriate for moderate Hurley II/Hurley III stage
-
Topical clindamycin is recommended for localized Hurley I stage
-
Systemic treatment (clindamycin + rifampin/tetracycline or acitretin) with adjuvant therapy (pain management, treatment of superinfections) is proposed for Hurley II stage
-
Systemic biologics (adalimumab/infliximab) are reserved for treatment-resistant, moderate-to-severe hidradenitis suppurativa (moderate Hurley II/Hurley III stage)
-
General measures are offered for all patients and include weight loss and tobacco abstinence
Based on expert opinion it is recommended that adjuvant therapy is offered to all patients in the form of general measures such as weight reduction, cessation of cigarette smoking, and specific help with bandaging lesions in order to improve the patients’ quality of life. Hidradenitis suppurativa‒specific bandages are not currently available. Choice of dressing is based on clinical experience. In addition, adhesive tape should be avoided to minimize trauma to inflamed skin, which can be overcome by using tubular net bandages or superabsorbent pads or materials in the seams of clothing.
Regarding local wound care, superabsorbent dressings are best to treat actively draining lesions or postoperative wounds, but there are no trials or studies to support this recommendation. To prevent the primary dressing from sticking to the wound, white petrolatum, zinc oxide paste, or film-forming liquid acrylate should be applied generously to the marginal skin; these are the best ways to keep the wound dressings in place.
Algorithm
In 2016, an algorithm with respect to all aspects of hidradenitis suppurativa therapy included in the aforementioned guidelines was developed by using Grading of Recommendations Assessment and Evaluation (GRADE) methodology based on the Category of Evidence and Strength of Recommendation. [1, 54]
The need for surgical intervention should be assessed in all patients depending on the type and extent of scarring.
The proposed dosing regimen as a first-line treatment option in patients with mild hidradenitis suppurativa PGA or localized Hurley I/mild Hurley II stage, especially when there are no deep inflammatory lesions (abscesses), is topical clindamycin 1% solution/gel twice daily for 12 weeks and/or tetracycline 500 mg orally twice daily for 4 months.
If the patient fails to respond to treatment, or for moderate-to-severe disease (moderate-to-severe hidradenitis suppurativa PGA or Hurley II stage), consider clindamycin 300 mg orally twice daily with rifampin 600 mg orally twice daily for 10 weeks.
If the patient is not improved, then adalimumab is recommended as a first-line treatment option in patients with moderate-to-severe hidradenitis suppurativa who were unresponsive to or intolerant of oral antibiotics. Dosing is adalimumab 160 mg at week 0, 80 mg at week 2, and then 40 mg subcutaneously weekly.
If improvement occurs, then therapy should be maintained as long as hidradenitis suppurativa lesions are present. If the patient fails to exhibit response, then consideration of second- or third-line therapy is required.
The second-line therapies include the following:
-
Zinc gluconate
-
Resorcinol
-
Intralesional corticosteroids
-
Infliximab at 5 mg/kg at weeks 0, 2, and 6, and then every 2 months thereafter for 12 weeks; recommended only after failure of adalimumab, in patients with moderate-to-severe hidradenitis suppurativa
-
Acitretin
-
Etretinate
If clinical response is not achieved after 12 weeks of treatment, other treatment modalities must be considered.
Third-line therapies evaluated include the following:
-
Colchicine
-
Botulinum toxin
-
Isotretinoin
-
Dapsone
-
Cyclosporine
-
Hormones
Benefit-to-Risk Ratio
A relevant benefit-to-risk ratio analysis can be performed only for the phase 2 trial of adalimumab, since that is the only randomized controlled trial with an appropriate safety analysis that provides the basis to recommend adalimumab as the first-line treatment option in patients with moderate-to-severe hidradenitis suppurativa who were unresponsive to or intolerant of oral antibiotics. [54, 102]
There is very limited or absent randomized controlled trial data in hidradenitis suppurativa for antibiotic therapy, retinoids, and oral immunomodulators, and, in particular, there are no randomized controlled trial data investigating the timing of surgery or type of surgical procedure. Interventions currently under investigation include topical antiseptics, the Nd:YAG and carbon dioxide lasers, anakinra (a newer biological treatment that inhibits IL-1), and the PIONEER I and II studies of adalimumab therapy.
One of the most difficult situations to face is having to tell a guy that the feeling is not reciprocated. You certainly don't want to string him along, but you don't want to hurt his feelings either. Don't despair: this can be resolved with an honest conversation in which you talk about how you feel.
Steps
Part 1 of 3: Preparing for the conversation
Step 1. Find out if he really likes you
Don't do anything without being sure, or you could end up ruining a friendship based on assumptions or rumors that others have told you. Follow these tips to find out if he's into you:
- He constantly asks you out.
- He tries to make physical contact whenever he can.
- He always prefers to be alone with you.
Step 2. Don't put it off
The longer you wait, the more intense his feelings for you will become, and you may even end up losing your friendship when you say you have no interest.
Step 3. Don't run away from him forever
You can fool yourself as much as you like, hoping he will simply "give up" if you avoid him for long enough, but it just doesn't happen. You will have to make time for him, and the conversation should be private – you don't want to humiliate him in front of everyone.
Step 4. Make a plan
Write down what you want to say before starting. Stuttering will make the conversation more awkward and take longer. Write down the topics you want to cover, such as what makes you not attracted to it. Don't offend him or say rude things to justify yourself; just be honest about it. For example:
- You can't forget your ex-boyfriend.
- You are not attracted to him.
- You like someone else.
Step 5. Talk over the phone
Talking over the phone or by text is also an option, as long as you are firm and make it clear that there is no chance of a relationship between you.
Part 2 of 3: Having the conversation
Step 1. Make the seriousness of the situation clear
That way, the boy will know that the conversation will not be superficial. If you skip this part, he may not realize that the issue being discussed is important.
Step 2. Be kind
He's about to be let down, and the best thing is to make it as painless as possible. If you want, give him a compliment or two, while making it clear that this is not enough to want something more with him.
- "You're a great friend, but we can't be together."
- "You're still going to make some girl really happy, it just won't be me."
Step 3. Give him the message to back off
Even though you're objective about your reasons for not wanting to date him, it's possible that he doesn't understand right away. In that case, take the opportunity to get your message across, right after stating your reasons.
- "We are not boyfriends."
- "We can remain friends if you agree to be my friend."
- "We don't have chemistry with each other."
Step 4. Make it clear that your feelings will not change
It's possible he'll keep his hopes up if you don't make this part clear. Leave no doubt that you won't develop romantic feelings for him in the future, and establish some ground rules for your friendship if it continues.
Step 5. Be honest
Let him ask questions if he wants to and answer them frankly. It doesn't make much sense to protect his feelings with lies, tell the truth. This will help you move forward faster.
Step 6. Listen to him
Mentally rehearsing helps, but it can also frustrate you if the result is not what you expected. Instead of burying him with your ideas, sit across from each other and listen to what he has to say – then he will listen too.
Step 7. End the conversation
Ask him what he has to say once you've finished presenting your side. Stand firm and don't leave the conversation until you're absolutely sure he understands you're not interested in him. Put an end to the story.
Part 3 of 3: Moving on after the conversation
Step 1. Be polite
Disliking him doesn't mean you should be rude or ignore him. Don't treat him as a fragile being who can't handle what you say; he will carry on like any human being, so treat him as such. Ignoring him forever would take away his chance to recover.
Step 2. Make room for the boy
Don't bend over backwards to find out if he's alright. In addition to sending the wrong message, he will think of you all the time and may even develop problems with self-esteem, anger and even aggression – you don't want to be responsible for that.
Step 3. Don't deceive him
If he decides he can be your friend after the conversation, set a limit on what's appropriate and what's not. If necessary, have a separate conversation, especially if you both need time to think. This will help them to put the proper end to all of this.
- Discuss whether you can comment on each other's appearance.
- Talk about physical contact, such as hugs, holding hands, etc., and whether it is still appropriate for the nature of your relationship.
Tips
- Use praise to make him feel better about himself.
- Don't expect him to be friendly – he may get annoyed and defensive. It's not easy to be rejected.
- Before deciding whether or not to talk to the guy, make absolutely sure that you don't feel anything for him and that you won't change your mind – know that they will always be just friends. | https://how-what-do.com/13168515-how-to-tell-a-guy-you-don39t-like-him-15-steps |
Location: Chilliwack, BC | Start Date: 03/09/2011
Organisation: University of Calgary | Web Site:
Project Description
The University of Calgary’s Faculty of Environmental Design is conducting a workshop in Chilliwack, BC with Lee & de Ridder Architects, Calgary.
Our workshop, called Healthy and Sustainable Buildings, will instruct 30 registrants about Healthy Building Design and Construction. Included will be demonstrations of new wall and roof panels that do not off-gas, do not grow mould, and save energy. These Structural Insulated Panels (SIPs) are made from magnesium oxide board sandwiching rigid foam insulation. SIP panels can replace gypsum board in any building, including schools, to prevent mould growth, reduce energy costs and help reduce chemical off-gassing from building materials.
Participants will include representatives from the local native bands in the lower mainland of BC, CMHC, local building officials, as well as builders. The workshop is hosted by a local builder, Lacey Developments Ltd. | https://casle.ca/projects/healthy-and-sustainable-buildings/ |
During the work on the new version of HRB Portal the user interface (UI) has been completely redesigned, by using the modern developments in Web technology and addressing the needs of the target audience of the product.
Redesigned navigation saves the user from having to make unnecessary “clicks” and reduces the total time spent searching for information in the portal, while a specially designed menu saves space on the page and allows the user to focus directly on the tasks at hand.
In the design of the new UI a “minimal” (flat design) approach was used: minimal graphics and a focus on the content eliminate distraction and make interaction with the portal most convenient.
An important part of improving the efficiency of business processes in the modern company is their socialization. Within the framework of the new UI of HRB Portal this task is solved by one of the new features – activity feed. It shapes a uniform information environment for the users of the portal, provides an opportunity to share the latest news, and get information on different topics and documents, as well as organise discussions in the process of carrying out certain tasks and projects. | https://agroup.lv/hrb-portal-5-3-new-user-interface-efficient-intuitive-and-user-friendly/ |
Home » Technology » Why Use ASP.net Based CMS?
Content Management Systems have been developed in pretty much every language, be it PHP, Ruby on Rails, Java or even classic ASP. All these systems have worked fine without showing any major problems, but CMSs based on ASP.net can be built quickly and conveniently. The major advantage of an ASP.net CMS is security: the security system that is already part of the framework gives the CMS extra protection. Along with that, there are standard design templates in the form of master pages. Easy configuration of database connections and controls for displaying data are also built in. With Visual Studio, there is the huge advantage of rapid development techniques that speed up the time to deploy a usable application. ASP.net 2.0, along with its extensions ASP.net 3.0 and ASP.net 3.5, is not a programming language; rather, it is a framework within which to develop applications. ASP.net is language-independent, and any language that supports the .net framework can be used.
The simplest definition of a Content Management System is that it is a system that manages content. So what is the content? The content can be defined as the “stuff” found on the website. This “stuff” may be either information, such as text and images, or the applications or software that run on the web site’s servers and display the information. The management part of CMS refers to creating, editing, publishing, archiving, collaborating on, reporting, and distributing website content, data and information. The main purpose of a CMS is to provide the capability for multiple users with different permission levels to manage a website or a section of its content. Generally, a Content Management System consists of three main elements: the Content Management Application (CMA), the Metacontent Management Application (MMA) and the Content Delivery Application (CDA). The main function of the CMA is to manage the content components of the CMS. The function of the MMA, on the other hand, is to manage the information about the content components. Finally, the function of the CDA is to provide a way of displaying content components to the user of the web site. A CMS is made up of these three applications, and their purpose is to manage the full life cycle of content components and metacontent by way of a workflow in a repository, with the goal of dynamically displaying the content in a user-friendly fashion on a web site.
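The three elements described above can be sketched in miniature. This is a hypothetical illustration only (all class and field names are assumptions for the example, not part of any real CMS product or of the ASP.net framework), written in Python for brevity:

```python
from dataclasses import dataclass


@dataclass
class ContentComponent:
    """A single piece of content: the 'stuff' on the site."""
    slug: str
    body: str


class CMA:
    """Content Management Application: create and edit content components."""
    def __init__(self):
        self.components = {}

    def save(self, component):
        self.components[component.slug] = component


class MMA:
    """Metacontent Management Application: manage data *about* components."""
    def __init__(self):
        self.metadata = {}

    def tag(self, slug, **meta):
        self.metadata.setdefault(slug, {}).update(meta)


class CDA:
    """Content Delivery Application: display components to site visitors."""
    def __init__(self, cma, mma):
        self.cma, self.mma = cma, mma

    def render(self, slug):
        component = self.cma.components[slug]
        meta = self.mma.metadata.get(slug, {})
        return f"<h1>{meta.get('title', slug)}</h1><p>{component.body}</p>"


cma, mma = CMA(), MMA()
cma.save(ContentComponent("about", "We build things."))
mma.tag("about", title="About Us", author="editor")
print(CDA(cma, mma).render("about"))
# → <h1>About Us</h1><p>We build things.</p>
```

The point of the separation is that authors work only against the CMA, editors and workflow tooling against the MMA, and the public site only ever touches the CDA; a real ASP.net CMS layers master pages, membership-based security and database providers on top of the same division of responsibilities.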
It provides an extra layer of security from the framework itself making the website highly secure and reliable.
Even without any prior technical knowledge, an amateur with a computer and internet access can easily develop a website and run it efficiently.
The standard designs and templates helps the site to look attractive and beautiful with standard fonts, sizes, styles and layouts.
CMSs are client-server based, which allows multiple clients to access the server at the same time.
Another very useful benefit is that most CMSs provide the capability to add personalization; even a simple form of it can attract a lot of users.
With all these benefits and advantages provided by ASP.net based CMS, any organization or business can increase their productivity with a reliable, stable and efficient Content Management System. | http://terasrakenne.com/why-use-asp-net-based-cms/ |
Hertfordshire Local Enterprise Partnership’s (Hertfordshire LEP), Hertfordshire Growth Hub service processes personal data in order to provide our services effectively.
We may collect personal information from you, in the following ways
• paper, electronic or online forms
• email
• telephone
• website
• face-to-face, through one of our employees or partners
We collect personal and business data to enable us to deliver our services. This includes
• Your personal contact details or business contact details
• Details of your business
• Details of your enquiries
We will use your personal information for a limited number of purposes and at all times in accordance with the principles set out in data protection legislation. We process personal data
• In our legitimate interests to provide the services required or where you have consented to the processing of your information
• to allow us to communicate effectively with you
• to monitor and improve our performance and the delivery of our services
Data from the customers and users of the Hertfordshire Growth Hub service will be shared with the Department for Business, Energy and Industrial Strategy (BEIS) and any external evaluators acting on their behalf for research and evaluation purposes only. Data will be used to match to other public and commercial datasets for the purposes of evaluating and monitoring the ongoing impact of Growth Hubs.
The use of the business’s information may include matching to other data sources to understand more about organisations like yours and general patterns and trends, although the business’s data will not be published or referred to in a way which identifies any individual or business. If the business has any questions in relation to how the information the business provides, and in particular any personal data, will be processed and disclosed, please contact [email protected].
The information that we capture will be stored securely and kept for up to seven years. You have rights in respect of any personal information you provide to us or that we process. These are rights to access it, rectify inaccurate information, erase information, and restrict or object to what we do with it. If you wish to exercise a right please contact us at [email protected].
Cookies
Our website – www.hertsgrowthhub.com – automatically logs some information about your visit such as your browser type and the length of your visit. We use this information to understand how visitors interact with the site so we can make informed decisions about design improvements. Our website does not collect any personal information about you.
We use Google Analytics to help us understand how people use our website to ensure we offer the best content and user experience for our customers. This information is collected anonymously and we cannot identify you personally from the data. Hertfordshire LEP is the data controller for this information.
However, third parties we link to may collect personal data.
You have a number of rights over the data we collect and hold about you.
• You have the right to be informed about what information we hold about you and how we use it.
• You have the right to request copies of any information we hold about you by making a subject access request.
• If information we hold about you is factually inaccurate you have the right to have it corrected.
• You have the right to object to the way we are using your data.
• You have the right to request that your data is deleted. However we may be unable to delete your data if there is a need for us to keep it. In this case you will receive an explanation of why we need to keep the data.
• You can also request that we stop using your data while we consider a request to have it corrected or deleted. There may be some circumstances in which we are unable to do this; however, we will provide an explanation if this is the case.
• In certain circumstances you may also request data we hold about you in a format that allows it to be transferred to another organisation.
• In the event that decisions are taken using automated processes you have the right to request that these decisions are reviewed by a member of staff and to challenge these decisions.
If you would like to request copies of your data, request that your data is deleted or have any other queries in relation to data which the Council holds about you please contact [email protected].
If you are unhappy with the way that Hertfordshire LEP has used your data or with the way we have responded to a request you also have the right to contact the Information Commissioner’s Office: www.ico.org.uk.
Hertfordshire County Council is the accountable body for Hertfordshire LEP. | https://www.hertsgrowthhub.com/privacy-policy/ |
Saturday, October 13, 2007. 7:00-8:30 p.m.
Shamanism Around the World: lecture, demonstration and discussion, with Susan Grimaldi
Program: a PowerPoint, “Photographing Indigenous Visionary Healers,” shamanism in China and Siberian Asia (the Tuvan, Ulchi, Mongol, and Manchu shamans) and relating shamanism of Asia with shamanism of North and South American and African peoples. Susan will be bringing some shamanic regalia (costume) newly designed by herself to show and demonstrate. The program is free to the public. Currently in China, there is a growing interest in reintegrating shamanism back into contemporary culture. Susan Grimaldi, an internationally renowned Native American shaman, based in Vermont, has worked with communities in North China and Inner Mongolia, experiencing the living traditions of the Manchu and Mongol people, including ancient harvest rituals, healing ceremonies, and interviewing Asian shamans. Susan was invited to China to demonstrate her healing approach and help shamanism flourish in China again.
She was at the opening of the Shaman Culture Museum of Changchun University in northeast China, where she donated some of her shamanic regalia and was invited to participate in the formalities. The images of Susan show her holding a mask, demonstrating the drumming, and explaining the elaborate and heavy headpiece and other ornaments and their functions. The fringe of the headpiece is designed to cover the eyes. The lower two images are from her presentation here in Brattleboro in April 2007. | http://accvt.org/events/presentations/shaman-siberia/
“She stood at the gate, waiting; behind her the swamp, in front of her Colored Town, beyond it all, Maxwell.”
So begins Lillian Smith’s groundbreaking 1944 novel Strange Fruit. Its story focuses on Nonnie Smith and Tracy Deen, a couple in tiny Maxwell, Ga., who have fallen deeply in love but cannot possibly be together. Nonnie, we know by the end of the first page, is college-educated and highly respected, but she is a black woman in rural Georgia (Smith erred on the side of the colloquial slur), and Tracy Deen is a white man.
Indeed, Strange Fruit was the first celebrated novel that portrayed a sympathetic interracial romance. Released during the Second World War — Tracy is not only white but also a veteran — it was met with a great deal of controversy, which led to roaring sales but also to the book being banned in a number of locations.
Its banned-book status is perhaps part of what has earned Smith a place in this year’s Southern Women Authors: Writing America Between the Wars series, set to take place on 10 evenings between September and December, starting Wednesday, Sept. 12, at the West Asheville Library. The other authors on the docket are Caroline Miller (Lamb in His Bosom), Mildred Haun (The Hawk’s Done Gone) and Elizabeth Madox Roberts (The Time of Man).
As for Strange Fruit, it would be 1967 before anti-miscegenation laws were tossed with the Loving v. Virginia Supreme Court decision, making Lillian Smith remarkably ahead of her time and also keenly aware of what stories in American culture most needed to be told.
Jim Stokely, head of the Wilma Dykeman Legacy and organizer of the Southern Women Authors series, believes part of the injustice of Smith’s name being lost to history has to do with her having churned up a few enemies.
“One of Lillian Smith’s enemies was Ralph McGill, who’s generally seen as a great liberal journalist,” he explains, noting that McGill was among the many public figures who were “dragged into [being] progressive. They were not all bad, but they went from the gradualist stance, which was basically a dressed-up segregationist stance — what’s the difference between ‘Segregation now, segregation forever’ and ‘I believe in integration, we just can’t go too fast’? You’ll never go fast unless you’re going to push for it. Gradualist means you’re not going to push for it. That was the whole deal. They were in that camp.
“McGill was also, early on, for lynching and everything else,” he adds. “Smith had really let him have it early on, and just … probably called him out. Pushed him. He probably, out of shame as well as indignation, said, ‘To hell with her.’”
Smith wasn’t the only one who was pushed aside by male critics and others in the position of championing great women authors of that era.
A decade earlier, the year 1934 saw the publication of some highly celebrated works of fiction. F. Scott Fitzgerald published what was arguably the finest work of his career with Tender Is the Night. Fellow bearer of testosterone Henry Miller published Tropic of Cancer. But the Pulitzer Prize went to Caroline Miller for her portrayal of private matters — and female empowerment — in the rural antebellum South, via her stunning debut, Lamb in His Bosom.
You would be forgiven for wondering who Caroline Miller was.
Born in 1903 in Waycross, Ga., Miller wrote prolifically and published on occasion, yet Lamb in His Bosom was the only work for which she was celebrated in her lifetime. It earned her such success, in fact, that her marriage could not survive the attention. She eventually remarried and moved to Waynesville, where she wrote a number of other manuscripts and died in 1992.
This fall, her life and work will be given renewed interest when series attendees are treated to a lecture by Dr. Emily Powers Wright, followed by a book club discussion the following week.
Indeed, the series will feature lectures, documentary films, book club-style discussions and a “text-based musical performance” — all to celebrate and explore the work of these remarkable authors.
Stokely notes part of his motivation is based on the fact that his mother, Wilma Dykeman, “swam in gender discrimination” during her career, so he wanted to dedicate an event to other authors whose names — and perhaps also their work — have been forgotten.
Last year’s inaugural series focused on works by Zora Neale Hurston, Marjorie Kinnan Rawlings, Ellen Glasgow, Julia Peterkin and Olive Tilford Dargan, and each week saw attendance grow. Stokely is hoping for the same this year. He’s also hoping the series will encourage people not only to think more about these authors but also to read their books. To that end, used copies of each of the texts will be available for sale during the lectures, so folks can attend the lecture, go home and read the book, then return a few days later for discussion. | https://mountainx.com/arts/southern-women-authors-series-returns-to-west-asheville/ |
Speech and Language Therapy
If concerns are raised about your child’s communication, eating and/or drinking difficulties, these can be discussed at the Communication Forum.
The Communication Forum meets regularly and is attended by school staff and Speech and Language Therapists (SLT).
If at the Communication Forum, your child is identified as needing an assessment by a Specialist Speech and Language Therapist an appointment will be offered.
If your child is identified at the assessment appointment as requiring SLT intervention we would open an episode of care.
An episode of care will vary in length depending on the outcome of your child’s assessment and the clinical decision made by the SLT that intervention will effect change. The episode of care will reflect the identified impact of the presenting difficulty and desired outcomes and goals would be agreed with you and others involved in your child’s care.
Intervention may be provided via any one or a combination of:-
- Training being provided to school staff and parents/carers to support your child’s communication and/or eating and drinking skills
- A specific program developed by a Speech and Language Therapist to be carried out by the communication support team and/or school staff throughout the week.
- Direct input by a Specialist SLT for an agreed period of time. The duration of this episode will vary according to need.
It may be that the communication support team will continue to work with your child after the speech therapist has finished their episode of care. However should further advice be needed in the future a re – referral can be made at any time.
The completion of an episode of care is often referred to as discharge, which for parents/carers may seem worrying; however, your child may be re-referred to the SLT service whenever new targets have been identified. The maximum waiting time for an assessment is now 14 weeks and in some cases will be sooner.
Training opportunities for parents and carers will be advertised on the Aneurin Bevan Website, under Speech and Language Therapy and also displayed at schools.
How to refer
If you have concerns about your child’s communication and/or eating and drinking skills please contact the school directly. These concerns will be discussed during our regular Communication Forum meetings and a referral/re-referral will be made to the Speech and Language Therapy service where appropriate.
For further information please contact: | http://www.penycwm.com/therapy/speech-and-language-therapy/ |
What care should be taken in twin pregnancy?
If you’re pregnant with twins, you should take the same prenatal vitamins you would take for any pregnancy, but your physician will recommend extra folic acid and iron. The additional folic acid and extra iron will help ward off iron-deficiency anemia, which is more common when you’re pregnant with multiples.
How can I make sure my twins are healthy during pregnancy?
Eating the proper foods and the right amount of calories is critical in a twin pregnancy. Whereas single-born pregnancies require 300 extra calories a day, most experts agree that twin pregnancies need around 1,000 extra calories a day. Frequent and healthy snacks can help you reach your caloric goals each day.
How can I make my twin pregnancy easier?
What will help boost my chances of having twins?
- Being older rather than younger helps.
- Have fertility assistance such as in vitro fertilisation or take fertility drugs.
- Pick your own genetics carefully!
- Be of African/American heritage.
- Having been pregnant before.
- Have a big family.
Can folic acid cause twins?
Folic Acid Not Tied to Multiple Births. Jan. 31, 2003 — Women who take folic acid supplements before or during pregnancy are not any more likely to have a multiple birth, such as twins or triplets, according to new research.
Is giving birth to twins more painful?
That’s not all, Monga says. Moms pregnant with twins complain of more back pain, sleeping difficulties, and heartburn than moms who are carrying one child. Moms pregnant with twins also have a higher rate of maternal anemia and a higher rate of postpartum hemorrhage (bleeding) after delivery.
What are the complications of having twins?
The most common complications include the following:
- Preterm labor and birth. Over 60 percent of twins and nearly all higher-order multiples are premature (born before 37 weeks). …
- Gestational hypertension. …
- Anemia. …
- Birth defects. …
- Miscarriage. …
- Twin-to-twin transfusion syndrome.
When should you stop working when pregnant with twins?
The TAMBA (Twins and Multiple Births Association) recommends starting maternity leave between 28 and 30 weeks and earlier if you are carrying more than one baby or have any health complications.
How many twin pregnancies are successful?
Just 40% of twin pregnancies go full term. The average twin pregnancy is 35 weeks, compared to the average singleton pregnancy, which is 39 weeks. Prematurity may lead to a number of problems, including: Immature lungs, leading to difficulty in breathing.
Can twins be conceived on different days?
#1 Fraternal twins can be conceived as much as 24 days apart
For this reason, fraternal twins can be conceived a few weeks apart, though they generally will be born at the same time.
Who is more likely to have twins?
Age. According to the Office on Women’s Health , women who are aged 30 years or older are more likely to conceive twins. The reason for this is that women of this age are more likely than younger women to release more than one egg during their reproductive cycle.
Can you abort one twin and keep the other?
At least some recent studies suggest that while twin pregnancies are more difficult than singletons in many respects, aborting the other twin does not reduce the risks of the pregnancy – at least not to the same extent.
What should you not do when pregnant with twins?
It’s never advisable to drink alcohol excessively, smoke, or take drugs, whether you are pregnant or not. When you are pregnant, doing so exposes your unborn babies to toxic substances, raising their risk of birth defects and chronic illnesses. | https://mycoosada.com/pregnant/how-do-i-take-care-of-my-twin-pregnancy.html |
School of Rock Fort Lauderdale’s music teachers are experienced musicians devoted to helping students attain musical proficiency. From singing to drums to guitar, our Fort Lauderdale music instructors inspire and teach students to perform live.
Learn to play live on stage with others at School of Rock.
We offer a variety of music programs for different ages and skill levels.
School of Rock Students Perform "Middle of the Road" | https://locations.schoolofrock.com/fortlauderdale/our-school |
After a two-year layoff, the Summer Music Experience Camp has returned to Prince Albert, and the organizers and instructors couldn’t be happier.
A total of 64 students enrolled in the four-day camp, which ended with a concert at W.J. Berezowsky School on July 28. After being forced to go virtual the last two years, camp coordinator Pamela Cochet said it was great to be back with students in the classroom.
“I am overjoyed,” Cochet said. “COVID shut us down for two years, and we are so excited that Prince Albert is the location again this year.”
In a regular year, Summer Music Experience Camps would be held across the province; however, Prince Albert is the only city holding one in 2022. The camps typically focus on teaching students who haven’t been exposed to massed bands or choral music, and Cochet said it’s a good way to get them hooked.
“In order for them to learn some appreciation for the music and the different kinds of things, I think it’s important that they learn what the instruments are,” she explained. “They learn how much fun it can be to play it.”
Several local musicians like Dean Bernier and Lauren Lohneis were on hand to help out with the instruction. Students had two days to experience everything from strings to horns to choral music before choosing one area to focus on for the next two days.
The camp ended with a live concert for family, friends, and program sponsors. Cochet said that’s an important part of teaching music, and something they’ve sorely missed since the start of COVID-19.
“We had talked about doing a virtual concert rather than an in-person one, and there are so many kids who are saying, ‘no, we want to get back to doing it live. We want to get back to being able to show off all of the things that we’re learning to our families in a live setting.’”
Cochet said the last two years have been difficult for music teachers, especially those teaching musicians who perform in mass, and it’s not just the education aspect that got tougher. A big bonus for students taking part in music experience camps is the chance to meet new people and make new friends, while also developing a love for music. The former has been extremely difficult as long as music performances remained online.
“You can only do so much virtually,” Cochet explained. “When you have someone and their infectious energy for the love of the music, that certainly has made a difference, and we’ve lacked that. It’s disappeared over the last two years because we didn’t have concerts. We didn’t have the in-person things happening until it just started returning.”
That has made the return even more exciting for Cochet, who spent many years in Prince Albert as a music teacher. She’s optimistic that programs like this one can rebound, as long as sponsors keep supporting it, and music teachers keep providing their services.
SaskCulture has been a major supporter of the program since 2013. The Saskatchewan Rivers School Division also provided a major boost by allowing the use of Berezowsky School and the instruments inside for free.
Cochet said SIGA and the Firebird Music Program out of King George School were also vital in getting the music camp going again. | https://paherald.sk.ca/summer-music-experience-makes-long-awaited-return-to-prince-albert/ |
The name John D. Rockefeller elicits a wide range of emotional responses, but seldom succeeds in conveying banality. From the socialites of high society that tout his entrepreneurial spirit and philanthropic endeavors as nearly god-like, to circles in which he has been demonized under the conspiratorial view that his rise to such unprecedented levels of wealth and power was by means of Machiavellianism and usury. Within both of these extremes is an underlying respect for a man who was self-made and who realized the opportunities of his day amidst the growing American industrialism of the late 19th century, which fed on the petroleum over which his company, Standard Oil, held monopoly.
Rockefeller was a man who changed the world of business through an unmatched level of ambition steadily balanced with a level of shrewdness that can only come from humble beginnings. This balance is what came to define John D. Rockefeller as a businessman, known as the wealthiest of his time and recognized for showing a profit every single month he was in business. It was within this gray area between fiduciary conservatism and unmitigated drive that he developed the accounting principles for which he is credited. These principles, when applied with the same zeal and discipline as they were by Rockefeller himself, will serve any enterprising businessperson at any stage of their endeavor.
Rockefeller’s methods of strict accounting were, in fact, a means by which to prioritize his spending while always finding a way to save and amass capital. It was his ability to save money, pennies at a time, even on his very humble salary in his early days as an accounting clerk, which provided him with the capital he needed to begin his first business venture. It was in his own endeavors that, under the veil of secrecy, Rockefeller began to incorporate double-entry accounting methods and develop practices that would evolve into the modern management accounting techniques of cost and capital accounting still in use today.
Although the bigger objectives must always be kept in view, Rockefeller knew that it does not serve an entrepreneur to be blinded by his grand vision to such an extent that he fails to take notice of little opportunities to cut costs. His methods of accountancy, which he began to hone during his first job as a fledgling clerk while making entries into the now-famous Ledger A, are recognized for their nearly obsessive level of accuracy and detail, which showed every cent accounted for. What Ledger A has really revealed to historians and leaders of industry alike is that these strict accounting practices followed an attitude of great reverence for the value of money earned and the accumulative power of each and every cent. | https://www.deepsky.co/2012/06/the-rockefeller-guide-to-accounting-part-1/
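The double-entry method described above can be illustrated with a toy ledger: every transaction posts an equal debit and credit, so the books always balance to the cent. The account names and amounts below are invented for the example, not drawn from Ledger A:

```python
# Minimal double-entry ledger sketch. Decimal keeps cent-level
# accuracy, in the spirit of "every cent accounted for".
from decimal import Decimal

class Ledger:
    def __init__(self):
        self.entries = []  # (account, debit, credit)

    def post(self, debit_account, credit_account, amount):
        # Each transaction is recorded twice: once as a debit,
        # once as an equal credit.
        amount = Decimal(str(amount))
        self.entries.append((debit_account, amount, Decimal("0")))
        self.entries.append((credit_account, Decimal("0"), amount))

    def balance(self, account):
        return sum(d - c for a, d, c in self.entries if a == account)

    def trial_balance(self):
        # In double-entry bookkeeping, total debits must equal
        # total credits, or an entry was mis-posted.
        return sum(d for _, d, _ in self.entries) == sum(c for _, _, c in self.entries)

ledger = Ledger()
ledger.post("Cash", "Salary income", "25.00")  # wages received
ledger.post("Savings", "Cash", "10.50")        # pennies set aside
print(ledger.balance("Cash"))   # 14.50
print(ledger.trial_balance())   # True
```

The trial balance is the self-checking property that made the method valuable for prioritizing spending while amassing capital: an imbalance immediately flags an error.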
Jess and Lisa Origliasso of The Veronicas chatted with fans live for a special webcast at UStream.tv. The Brisbane twins talked about Lisa’s new necklace, their ‘Untouched’ 500k downloads plaque, plans to tour the U.S. in the summer, visiting the UK in March – where Jess hopes to find a British partner with a cute accent, having fun with the Jonas Brothers on tour, how ‘Twilight’ and the ‘Lord of the Rings’ trilogy are among their favorite movies, and much more. Watch the chat below. | http://popdirt.com/the-veronicas-live-chat/71519/ |
Mental Health Disparities Research
In the United States, there are striking differences in the prevalence, course, and severity of mental illnesses, access to quality health care, and health outcomes based on sex, gender, age, race, ethnicity, and geography.
The Disparities Team was formed in December 2018 to focus on research aimed at reducing and eliminating mental health disparities in U.S. communities. The team supports innovative and high-impact research aimed at:
- Enhancing our understanding of minority mental health and health disparities
- Reducing mental health disparities and its impact on individuals and communities
- Moving us toward achieving mental health equity.
In addition to research-related activities, the team launched the James Jackson Memorial Award in 2021 to recognize outstanding researchers who have demonstrated exceptional individual achievement and leadership in mental health disparities research, excellence in mentorship, influence in the field of mental health research, and support of students, particularly those who are Black, Indigenous, and other People of Color (BIPOC).
Recent Meetings and Events
Featured Funding Opportunity Announcements
- Addressing Mental Health Disparities Research Gaps: Aggregating and Mining Existing Data Sets for Secondary Analyses (R01 Clinical Trial Not Allowed)
- Systems-Level Risk Detection and Interventions to Reduce Suicide, Ideation, and Behaviors in Black Children and Adolescents (R34 Clinical Trial Optional)
- Systems-Level Risk Detection and Interventions to Reduce Suicide, Ideation, and Behaviors in Youth from Underserved Populations (R01 Clinical Trial Optional)
- Notice of Special Interest (NOSI) in Research on Risk and Prevention of Black Youth Suicide
- Transformative Health Disparities Research Funding Opportunities
Mental Health Disparities News and Events
Research Highlights
Explore research advances and ongoing research on mental health disparities supported by or conducted at NIMH.
Science News
Find NIMH science news related to mental health disparities.
Meetings and Events
Discover NIMH workshops and scientific meetings related to mental health disparities.
Team Leads
Lauren D. Hill, Ph.D.
Acting Director, Office for Disparities Research and Workforce Development
Email: [email protected]
Stacia Friedman-Hill, Ph.D. | https://www.nimh.nih.gov/research/priority-research-areas/mental-health-disparities-research |
It’s my passion to portray the intricate forms and details of the human body, to show them as I see and feel them. I’m in love with the shapes, textures, surfaces, the softest wrinkles of human skin, and similarly, the texture and movement of fabric. Roman style sculptures made me fall in love with art in general and especially sculpture. The intricacies and beauty of the carving on these sculptures influenced my perspective of all the different surfaces I encounter every day.
My preferred medium is clay for its ability to be manipulated and shaped into a variety of rich surfaces. Clay offers me spontaneity in the making process and indulges my love for carving. Working at an enlarged scale highlights the detail of my subjects. It provides freedom and space to show the tiniest of forms and inspires close looking and reflection. I am driven by the complexity of making figures and how, as sculptures, they can feel both alive and empty. | https://www.njcu.edu/academics/schools-colleges/william-j-maxwell-college-arts-sciences/departments/art/undergraduate-programs/2020-bfa-candidates/elariya-girgiss
RELATED APPLICATIONS
BACKGROUND
BRIEF DESCRIPTION OF EMBODIMENTS
DETAILED DESCRIPTION
This application is related to and claims the benefit of earlier filed U.S. Provisional Patent Application Ser. No. 61/522,965 entitled “DEFECTIVE PIXEL CORRECTION,” filed on Aug. 12, 2011, the entire teachings of which are incorporated herein by this reference.
In accordance with conventional image capturing, a device such as a camera can include an image sensor. Each of the multiple sensor elements in the image sensor detects a portion of light associated with an image being captured. Depending on an amount of detected light, a respective sensor element in the image sensor produces an output value indicative of the detected intensity of optical energy. Collectively, output values from each of the individual sensor elements in the array define attributes of an image captured by the camera. Based on output values from the sensor elements in the image sensor, it is possible to store, reconstruct, and display a rendition of a captured image.
It is not uncommon that an image sensor includes one or more defective sensor elements that do not produce accurate output values. This is especially true for lower cost image sensors. If a sensor element is defective, the respective output values of the defective sensor element can be excluded from a final version of an image to preserve the image's accuracy.
In some cases, sensor elements in an image sensor may not completely fail. For example, a defective sensor element may be able to partially detect optical energy and produce an output value that varies depending on detected optical intensity. However, the output values produced by the image sensor may be very inaccurate and therefore unusable in a final version of an image.
One way to manage defective sensor elements is to treat the defective sensor elements as being dead and to replace the output value of a defective sensor element with a value derived from one or more outputs of nearby sensor elements. If the algorithm to detect a bad sensor element is implemented incorrectly, for example by generating replacement values for sensor elements that are actually not defective, the result is a degraded image.
In general, methods for handling defective sensor elements can be divided into two categories: static and dynamic. Static methods can include use of a table to keep track of the defective sensor elements in an image sensor. Dynamic methods on the other hand attempt to determine defective sensor elements by looking for incongruous pixel data in each picture.
It may be desirable to use both methods (dynamic and static) at the same time, and possibly even use the dynamic defective pixel detection, to modify the list of static defective sensor elements.
One type of image sensor includes a patterned color filter such as a Bayer filter. Bayer filters are commonly used in single-chip digital image sensors installed in digital cameras, camcorders, and scanners to capture color images. A Bayer filter pattern can include 50% green pixels, 25% red pixels, and 25% blue pixels. A Bayer filter is sometimes called RGBG, GRGB, or RGGB.
In accordance with the Bayer pattern and filtering, each sensor element in a respective image sensor includes either a red, green, or blue filter to filter incoming light that strikes a corresponding sensor element. More specifically, for a sensor element including a respective red filter, the sensor element detects an intensity of red light that passes through the respective red filter. For a sensor element including a respective blue filter, the sensor element detects an intensity of blue light that passes through the respective blue filter. For a sensor element including a respective green filter, the sensor element detects an intensity of green that passes through the respective green filter.
Via the intensity of different colors detected by the sensor elements in different regions of the image sensor, it is possible to reproduce a respective image on a display screen.
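The repeating Bayer tile described above can be expressed as a small lookup. This sketch assumes the common RGGB arrangement; other orderings mentioned in the text (such as GRBG) differ only in the 2x2 tile:

```python
# Color of the filter at each photosite in a Bayer mosaic.
# The 2x2 tile repeats across the sensor; index into it by parity.

def bayer_color(row, col, pattern="RGGB"):
    tile = {"RGGB": [["R", "G"], ["G", "B"]],
            "GRBG": [["G", "R"], ["B", "G"]]}[pattern]
    return tile[row % 2][col % 2]

# One repeating RGGB tile: R G / G B
print([bayer_color(r, c) for r in range(2) for c in range(2)])

# Green sites make up half of any even-sized region,
# matching the 50% green / 25% red / 25% blue split above.
counts = {"R": 0, "G": 0, "B": 0}
for r in range(4):
    for c in range(4):
        counts[bayer_color(r, c)] += 1
print(counts)  # {'R': 4, 'G': 8, 'B': 4}
```

A useful consequence, relied on below, is that the four edge neighbors of any photosite always carry a different color filter than the photosite itself.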
Conventional image sensors and processing of image data can suffer from a number of deficiencies. For example, one problem with the static method of keeping track of and modifying bad sensor elements is the overhead associated with management and generation of a list indicating the defective elements. Sensor elements typically die as the image sensor ages such that the list of defective sensor elements needs to evolve over the life of the product. It may be difficult to generate an accurate list during use of the image sensor because most dead sensor elements aren't completely dead. Determining whether a sensor element is defective can be very difficult without repeated verification of the sensor elements using known test patterns. If the defective list is generated too slowly, the customer may perceive that the product is defective and return it to the original place of purchase.
One embodiment as discussed herein includes a novel way of determining whether the output value produced by a respective sensor element is defective based on an analysis of different colored sensor elements in a region of interest. This disclosure includes the discovery that a sensor element under test can monitor a first color type in an optical spectrum. The values of neighboring sensor elements that monitor a different color type can be used to determine whether the sensor element under test is defective.
More specifically, in one embodiment, an image sensor can include an array of multiple sensor elements that collectively operate to capture an image. The array of multiple sensor elements produces output values representing different intensities of colors detected in the image. To determine whether a sensor element in an image sensor is defective, an image-processing resource retrieves an intensity value produced by a sensor element under test in the array. From the same captured image, the image-processing resource further retrieves a respective intensity value for each of multiple sensor elements neighboring the sensor element under test. Each of the neighboring sensor elements in the image sensor can be fabricated to monitor a different color than a particular color that is monitored by the sensor element under test. For example, the sensor element under test can be configured to detect an intensity of a particular color of light. Sensor elements neighboring (i.e., in a vicinity of) the sensor element under test can be configured to detect a different color of light than the particular color of light.
To enhance the quality of a respective image produced by the image sensor, the image-processing resource as discussed herein selectively produces a substitute value for the sensor element under test depending on the intensity values produced by the multiple neighboring sensor elements that monitor the one or more different colors. Thus, embodiments herein can include selectively modifying an output value of a sensor element under test based at least in part on settings of the neighboring sensor elements of one or more different colors.
In accordance with further embodiments, the image-processing resource scales a range (such as based on minimum and maximum values in a region of interest) derived from intensity values for at least one different color with respect to a range of intensity values detected for the particular color. In response to detecting that the intensity value produced by the sensor element under test falls outside of the scaled range or newly enlarged range, the image-processing resource then produces a substitute value for the sensor element under test.
One or more additional embodiments as discussed herein include novel ways of enhancing image quality and reducing storage capacity requirements.
For example, an image-processing resource can be configured to access values stored in a buffer. The buffer stores data outputted by multiple sensor elements of an image sensor. In one embodiment, each of the values in the buffer represents a respective amount of optical energy detected by a corresponding sensor element in an array of multiple sensor elements. In a first moving window that traverses regions of stored values produced by respective sensor elements, the image-processing resource selectively modifies at least a portion of values produced by sensor elements residing in the first moving window. In a second window that traverses the buffer of values and trails behind the first moving window, the image-processing resource analyzes the settings of the sensor elements residing in the second moving window. The values in the second moving window can include one or more values as modified by the first moving window. The image-processing resource as discussed herein utilizes the values in the second window to produce a setting for a given sensor element in the second window. This process can be repeated to produce a final value for each of multiple sensor elements in an image sensor.
As discussed herein, this manner of detecting and modifying any defective elements via the first window and then utilizing the values in a trailing second window to produce finalized values reduces an overall need for storage capacity and provides speedy generation of finalized display element settings.
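As a rough illustration of the two-window idea, the 1-D sketch below runs a leading correction window a few samples ahead of a trailing finalization window over one line buffer, so finalized outputs consume already-corrected values without buffering the whole image. All names and the simple 3-tap operations here are illustrative simplifications; the patent operates on 2-D pixel windows.

```c
#define WIN 3  /* leading window runs WIN samples ahead of the trailing one */

/* Leading window: replace a value that lies outside the span of its two
 * neighbors by more than `tol` with the neighbor average. */
static void correct_at(int *buf, int i, int n, int tol)
{
    if (i <= 0 || i >= n - 1) return;
    int lo = buf[i - 1] < buf[i + 1] ? buf[i - 1] : buf[i + 1];
    int hi = buf[i - 1] < buf[i + 1] ? buf[i + 1] : buf[i - 1];
    if (buf[i] > hi + tol || buf[i] < lo - tol)
        buf[i] = (buf[i - 1] + buf[i + 1]) / 2;
}

/* Trailing window: a stand-in finalization step (3-tap average) that reads
 * values the leading window may already have corrected. */
static int finalize_at(const int *buf, int i, int n)
{
    if (i <= 0 || i >= n - 1) return buf[i];
    return (buf[i - 1] + buf[i] + buf[i + 1]) / 3;
}

/* Drive both windows across one line: correction leads, finalization trails. */
void process_line(int *buf, int *out, int n, int tol)
{
    for (int lead = 0; lead < n + WIN; lead++) {
        if (lead < n) correct_at(buf, lead, n, tol);
        int trail = lead - WIN;
        if (trail >= 0) out[trail] = finalize_at(buf, trail, n);
    }
}
```

Because the trailing window only ever reads positions the leading window has already passed, only a few lines of the image need to live in the buffer at once.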
These and other more specific embodiments are disclosed in more detail below.
Any of the resources as discussed herein can include one or more computerized devices, servers, base stations, wireless communication equipment, communication management systems, workstations, handheld or laptop computers, or the like to carry out and/or support any or all of the method operations disclosed herein. In other words, one or more computerized devices or processors can be programmed and/or configured to operate as explained herein to carry out different embodiments.
Yet other embodiments herein include software programs to perform the steps and operations summarized above and disclosed in detail below. One such embodiment comprises a computer program product including a non-transitory computer-readable storage medium (i.e., any computer readable hardware storage medium) on which software instructions are encoded for subsequent execution. The instructions, when executed in a computerized device having a processor, program and/or cause the processor to perform the operations disclosed herein. Such arrangements are typically provided as software, code, instructions, and/or other data (e.g., data structures) arranged or encoded on a non-transitory computer readable storage medium (i.e., a computer readable hardware storage resource or resources) such as an optical medium (e.g., CD-ROM), floppy disk, hard disk, memory stick, etc., or other a medium such as firmware or shortcode in one or more ROM, RAM, PROM, etc., or as an Application Specific Integrated Circuit (ASIC), etc. The software or firmware or other such configurations can be installed onto a computerized device to cause the computerized device to perform the techniques explained herein.
Accordingly, embodiments herein are directed to a method, system, computer program product, etc., that supports operations as discussed herein.
One embodiment herein includes a computer readable storage medium and/or system having instructions stored thereon to perform image processing. The instructions, when executed by a processor of a respective computer device, cause the processor or multiple processors of the system to: receive an intensity value produced by a sensor element under test, the sensor element under test selected from an array of multiple sensor elements that collectively capture an image; receive a respective intensity value for each of multiple sensor elements neighboring the sensor element under test, each of the neighboring sensor elements fabricated to monitor a different color than a particular color monitored by the sensor element under test; and selectively produce a substitute value for the sensor element under test depending on the intensity values produced by the multiple neighboring sensor elements that monitor the different color.
Another embodiment herein includes a computer readable hardware storage medium and/or system having instructions stored thereon to perform image processing. The instructions, when executed by a processor of a respective computer device, cause the processor or multiple processors of the system to: access values stored in a buffer, each of the values representing a respective amount of optical energy detected by a corresponding sensor element in an array of multiple sensor elements; in a first moving window, selectively modify at least a portion of values produced by sensor elements residing in the first moving window; and in a second window that traverses the array and trails behind the first moving window, analyze the settings of the sensor elements residing in the second moving window as selectively modified by the first moving window to produce a setting for a given sensor element in the second window.
The ordering of the steps above has been added for clarity's sake. Note that any of the processing steps as discussed herein can be performed in any suitable order.
Other embodiments of the present disclosure include software programs and/or respective hardware to perform any of the method embodiment steps and operations summarized above and disclosed in detail below.
It is to be understood that the system, method, apparatus, instructions on computer readable storage media, etc., as discussed herein also can be embodied strictly as a software program, firmware, as a hybrid of software, hardware and/or firmware, or as hardware alone such as within a processor, or within an operating system or within a software application.
As discussed herein, techniques herein are well suited for use in image processing. However, it should be noted that embodiments herein are not limited to use in such applications and that the techniques discussed herein are well suited for other applications as well.
Additionally, note that although each of the different features, techniques, configurations, etc., herein may be discussed in different places of this disclosure, it is intended, where suitable, that each of the concepts can optionally be executed independently of each other or in combination with each other. Accordingly, the one or more present inventions as described herein can be embodied and viewed in many different ways.
Also, note that this preliminary discussion of embodiments herein purposefully does not specify every embodiment and/or incrementally novel aspect of the present disclosure or claimed invention(s). Instead, this brief description only presents general embodiments and corresponding points of novelty over conventional techniques. For additional details and/or possible perspectives (permutations) of the invention(s), the reader is directed to the Detailed Description section and corresponding figures of the present disclosure as further discussed below.
FIG. 1 is an example diagram of an image sensing and processing system according to embodiments herein.
As shown, image sensor 103 includes a field of multiple sensor elements 112. Each sensor element of image sensor 103 monitors an intensity of detected optical energy in a respective region of an image. Any of one or more suitable lenses can be used to optically direct light energy to the image sensor 103.
In one embodiment, each of the sensor elements (as depicted by smaller squares) in the image sensor 103 is configured to monitor one of multiple different possible colors of incoming light such as RED, GREEN, BLUE, WHITE, etc. By way of a non-limiting example, the image sensor 103 can include any suitable type of color filter pattern such as a Bayer filter.
In one embodiment, the sensor elements assigned to monitor a color such as RED are fabricated to monitor a first wavelength range (e.g., red wavelengths of optical energy) in an optical spectrum. The sensor elements assigned to monitor a color such as GREEN are fabricated to monitor a second wavelength range (e.g., green wavelengths of optical energy) in the optical spectrum. The sensor elements assigned to monitor a color such as BLUE are fabricated to monitor a third wavelength range (e.g., blue wavelengths of optical energy) in the optical spectrum.
In one embodiment, the first wavelength range, second wavelength range, and third wavelength range are substantially non-overlapping such that each of the sensor elements monitors one of multiple different colors of interest.
In accordance with further embodiments, if desired, each of the sensor elements can be configured to monitor one or more colors.
By way of a non-limiting example, each of the sensor elements in the image sensor 103 produces an output (e.g., a voltage, current, etc.) that varies depending on an amount of detected incoming light (i.e., optical energy) for a respective region and color monitored by the sensor element. Note that the image sensor 103 can include any number of sensor elements, a density of which may vary depending on the embodiment.
During operation, sampling circuit 110 selects a sensor element in the image sensor 103 that is to be sampled and inputted to the analog to digital circuit 115. The analog to digital circuit 115 converts the output of the selected sensor element into a digital value. The digital value produced by the analog to digital circuit 115 is stored in buffer 125.
By way of a non-limiting example, the output of the analog to digital circuit 115 can be any suitable value such as a multi-bit intensity value whose magnitude varies depending on a magnitude of optical energy detected by a respective sensor element. Buffer 125 stores a respective intensity value for each of the different color-filtered sensor elements in the image sensor 103.
As more particularly discussed herein, image-processing resource 100 analyzes the data stored in buffer 125 to detect occurrence of defective sensor elements in the image sensor 103. A defective sensor element is deemed to be a sensor element that appears to produce one or more improper output values over time.
In one embodiment, the image-processing resource 100 analyzes the intensity values produced by a neighboring set of different color monitoring sensor elements in a region of interest with respect to a sensor element under test to determine whether the sensor element under test is defective.
As a more specific example, assume that the sensor element X is the selected sensor element under test processed by the image-processing resource 100. To determine whether the intensity value produced by the respective sensor element X is erroneous, the image-processing resource 100 analyzes a region of interest 120-X. The region of interest can include intensity values produced by a combination of the sensor element under test X and a field of neighboring sensor elements nearby the sensor element under test X. As previously discussed, buffer 125 stores data produced by the sensor elements in image sensor 103. As discussed below, the values for the sensor elements in the region of interest 120-X are analyzed to determine whether the sensor element under test X is defective.
In accordance with one or more embodiments, the defective sensor element detection algorithm as executed by the image-processing resource 100 takes advantage of the following observations:
1. A vast majority of defective sensor elements observed to date stand alone and are typically not located next to other defective sensor elements.
2. Intensity values produced by defective sensor elements are objectionable when they are significantly different from what they should be.
3. In an area of otherwise uniformly illuminated display elements (e.g., pixels on a display screen), rarely does a pixel have a very different value from its neighbors.
4. In highly textured areas, removing a single pixel value and replacing it with the average of its neighbors is seldom objectionable.
In one embodiment, the image-processing resource 100 individually processes a respective region of interest 120-X to determine if a respective center sensor element X is defective or not.
The image-processing resource 100 repeats the analysis in each of multiple different regions of interest in the image sensor 103 to determine whether a respective sensor element under test is defective. In this manner, the outputs of each of the sensor elements can be analyzed to determine whether a respective sensor element is defective. In other words, the intensity values of different color filtered sensor elements can be used to determine whether a sensor element under test is defective.
As discussed below, the algorithm applied by the image-processing resource 100 can differ slightly for green and non-green pixels (e.g., blue and red) due to the different resolutions in the color filter pattern.
FIG. 2 is a diagram illustrating an example of a region of interest processed by the image-processing resource to determine whether a respective sensor element under test is defective according to embodiments herein.
As shown in this example, the image-processing resource 100 uses pattern 220 to detect whether the sensor element X (i.e., the sensor element under test in the center of pattern 220) in a region of interest is defective when the element under test X is configured to detect green filtered optical energy.
During analysis, the values A, B, C, and c in the pattern 220 or region of interest are variables set to respective intensity values produced by corresponding sensor elements in the image sensor 103 in a neighboring region with respect to sensor element under test X. As previously discussed, the values used to populate the variables are retrieved from buffer 125. As further discussed below, based on settings of the intensity values for the neighboring sensor elements and the sensor element under test, the image-processing resource 100 determines whether the sensor element under test is defective or not.
FIG. 3 is a diagram illustrating an example region of interest processed by the image-processing resource to determine whether a respective sensor element under test is defective according to embodiments herein.
As shown, the image-processing resource 100 uses pattern 320 to detect whether the sensor element X (i.e., the sensor element under test in the center of pattern 320) is defective when the element X is configured to detect blue filtered optical energy (or red filtered optical energy as the case may be). The values A, B, C, and c in the pattern 320 are variables that are set to respective intensity values produced by corresponding sensor elements in the image sensor 103 in a neighboring region with respect to sensor element under test X. As further discussed below, based on settings of the intensity values for the neighboring sensor elements and the sensor element under test, the image-processing resource 100 determines whether the sensor element under test is likely defective or not.
In one embodiment, the threshold for determining defective sensor elements is a function of the ranges of values of one or more colors in the image sensor 103. As previously discussed, in one embodiment, the image sensor 103 includes a Bayer-type filter, although the image-processing resource 100 can apply the error detection algorithm as discussed herein to any suitable pattern of color filters.
The image-processing resource 100 uses the intensity values for the different colored sensor elements to determine the range of each color. For example, as mentioned, the pixels that are the same color as “X” are marked with “C” or “c”.
In one embodiment, if a magnitude of the original intensity value for the sensor element under test is deemed to be erroneous, the intensity values recorded for the “C” labeled pixels are used to determine the replacement value for defective sensor elements.
Sensor elements of one of the other colors are marked with “A” and those of the remaining color are marked with “B”.
In the example pattern 220 as shown in FIG. 2, the sensor elements or locations marked with a letter “A” detect red optical energy; the sensor elements marked with a letter “B” detect blue optical energy. Note that the actual color of the pixels marked with a particular letter changes depending on where the current pixel is in the color filter pattern. This creates two possible classes of green pixels for a Bayer pattern, those that are in line with blue pixels and those in line with red pixels, and two classes of non-green pixels, red and blue.
At the pixel level, independent of the color of “X”, the inputs to the algorithm are:
the intensity value for the sensor element under test X;
the minimum and maximum values of the sensor elements marked “A”;
the minimum and maximum values of the sensor elements marked “B”;
the minimum and maximum values of the sensor elements marked “C” or “c”; and
the sum and number of pixels marked “C”.
It is also possible that some of the pixels normally used by the error detection and correction algorithm executed by the image-processing resource 100 are not available because the pixel being processed is too close to the edge of the image. When calculating the inputs listed above for the sensor element under test X, any unavailable intensity values are simply ignored. The algorithm takes the lack of values into account and makes a decision based on the available values.
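A minimal sketch of that edge handling (the helper and its interface are illustrative, not from the patent text): min, max, sum, and count are accumulated only over the neighbor offsets that fall inside the image, so border pixels are processed with whatever neighbors remain.

```c
#include <limits.h>

/* Running statistics over the available neighbors of one pixel. */
typedef struct { int min, max, sum, count; } stats_t;

/* Accumulate min/max/sum/count over a set of (row, col) neighbor offsets,
 * silently skipping any offset that lands outside the image bounds. */
void gather_stats(const int *img, int width, int height,
                  int row, int col,
                  const int (*offsets)[2], int n_offsets,
                  stats_t *s)
{
    s->min = INT_MAX; s->max = INT_MIN; s->sum = 0; s->count = 0;
    for (int i = 0; i < n_offsets; i++) {
        int r = row + offsets[i][0];
        int c = col + offsets[i][1];
        if (r < 0 || r >= height || c < 0 || c >= width)
            continue;  /* neighbor unavailable: simply ignore it */
        int v = img[r * width + c];
        if (v < s->min) s->min = v;
        if (v > s->max) s->max = v;
        s->sum += v;
        s->count++;
    }
}
```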
In one embodiment, the image-processing resource 100 implements an algorithm that uses a fraction of the maximum of the scaled ranges of the different colors to calculate a range of values that the current sensor element under test, X, can take and still be considered not defective. Pixels with intensity values outside this range are considered defective or dead and, in one embodiment, are replaced by the average of the values of the pixels (i.e., sensor elements) marked “C”.
By way of a non-limiting example, in one embodiment, the range generated for a respective color is the difference between the maximum and the minimum values of the letter-marked pixels of that color. In one embodiment, the maximum range for determining whether the sensor element under test is defective or not is based on scaling the range of the colors that are not the same color as the sensor element under test to be comparable to that of the current pixel's (i.e., the sensor element X's) color. In other words, if the sensor element under test X is a green element, the ranges of different colored intensity values (e.g., non-green sensor elements such as blue and/or red sensor elements) can serve as a basis to determine whether the sensor element under test is defective.
In a case where the intensity value for the sensor element under test falls outside a range for nearby elements of the same color (i.e., green), a large variation in intensity values for nearby sensor elements of a different color can indicate that the intensity value generated for the sensor element under test is still acceptable. Further, it is desirable not to produce a substitute value for a respective sensor element under test if the originally produced value by the sensor element under test is likely to be accurate. As mentioned, erroneous modification to healthy sensor elements will degrade image quality.
The following example C code expresses the algorithm executed by the image-processing resource 100:
// Per pixel algorithm inputs
int currentPixel; // value of pixel “X”
int capCTotal; // sum of available “C” pixels
int capCCount; // number of “C” pixels available
int maxC; // maximum value of the “C” and “c” pixels
int minC; // minimum value of the “C” and “c” pixels
int maxA; // maximum value of the “A” pixels
int minA; // minimum value of the “A” pixels
int maxB; // maximum value of the “B” pixels
int minB; // minimum value of the “B” pixels
// parameters that adjust algorithm sensitivity
int overshootNumeratorParam; // sensitivity adjustment numerator
int overshootDenominatorParam; // sensitivity adjustment denominator (8)
int minRangeParam; // minimum range sensitivity parameter
// local variables
int cAverage; // average value of the available “C” pixels
int aScaledRange; // difference in the “A” pixels scaled to c's range
int bScaledRange; // difference in the “B” pixels scaled to c's range
int cRange; // difference in the “C” and “c” pixels
int maxRange; // max of all color ranges scaled to c's range
int overshoot; // maxRange adjusted by sensitivity parameters
int allowedMax; // largest value the pixel can have and still be good
int allowedMin; // smallest value the pixel can have and still be good
// calculate the value that will replace dead pixels
// note that this is used in scaling calculations also
cAverage=(capCTotal+capCCount/2)/capCCount; // average with rounding
// scale the ranges of the different colors to the range of the current pixel
aScaledRange=ScaleRanges(maxA, minA, cAverage); // See below
bScaledRange=ScaleRanges(maxB, minB, cAverage);
cRange=maxC-minC;
maxRange=max(minRangeParam, max(aScaledRange, max(bScaledRange, cRange)));
overshoot=maxRange*overshootNumeratorParam/overshootDenominatorParam;
allowedMax=min(1023, minC+overshoot);
allowedMin=max(0, maxC-overshoot);
// the pixel is “dead” so replace it
if (currentPixel>allowedMax || currentPixel<allowedMin) {
    currentPixel=cAverage;
}
In accordance with one embodiment, in the above code, the intent of the ScaleRanges function is to scale the range of a different color to the range of the color of the current sensor element under test. One implementation would be:
int ScaleRanges(int otherMax, int otherMin, int refAverage) {
    int otherAverage=(otherMax+otherMin+1)/2;
    return ((otherMax-otherMin)*refAverage)/otherAverage;
}
However, since division may be difficult to implement in hardware, the simpler implementation below has been used.
int divideLookup[16]={0, 1024/1, 1024/2, 1024/3, 1024/4,
                      1024/5, 1024/6, 1024/7, 1024/8, 1024/9,
                      1024/10, 1024/11, 1024/12, 1024/13, 1024/14, 1024/15};
// note that entry 0 is never used

int ScaleRanges(int otherMax, int otherMin, int refAverage) {
    int otherAverage=((otherMin+otherMax+1)>>1);
    // scale the averages up to a full 10 bits
    for (int zeroBits=0; zeroBits<10; zeroBits++) {
        if (otherAverage & 0x200 || refAverage & 0x200) {
            break;
        }
        otherAverage<<=1;
        refAverage<<=1;
    }
    // take the 4 most-significant bits of the scaled-up values
    otherAverage>>=6;
    refAverage>>=6;
    if (!otherAverage) {
        return 0;
    }
    else {
        return ((otherMax-otherMin)*refAverage*divideLookup[otherAverage])>>10;
    }
}
Another simplification can be made by constraining the parameter overshootDenominatorParam to be a power of two so the division in the algorithm can be reduced to a right shift.
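Concretely, with the denominator constrained to 2^overshootShiftParam (the shift-parameter name here is illustrative, not from the patent), the overshoot computation becomes:

```c
/* Division by a power-of-two denominator reduced to a right shift;
 * overshootShiftParam is log2 of the denominator (e.g., 3 for 8). */
int overshoot_shifted(int maxRange, int overshootNumeratorParam,
                      int overshootShiftParam)
{
    return (maxRange * overshootNumeratorParam) >> overshootShiftParam;
}
```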
It should also be noted that windowing operations and different sensors can move the phase of the Bayer pattern, so the implementation must provide programmability that allows the pixels to be processed with the correct color algorithm.
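One plausible way to provide that programmability (the identifiers are illustrative, not from the patent) is to fold a programmable phase offset into the per-pixel classification, yielding the four pixel classes described earlier:

```c
/* The four pixel classes of a Bayer mosaic described earlier: two green
 * classes (by row companion color) plus red and blue. */
typedef enum {
    GREEN_IN_RED_ROW,   /* green pixel sharing a row with red pixels  */
    GREEN_IN_BLUE_ROW,  /* green pixel sharing a row with blue pixels */
    RED_PIXEL,
    BLUE_PIXEL
} pixel_class_t;

/* phase_row/phase_col give the mosaic offset of the window origin, so a
 * cropped frame is still classified with the correct color algorithm. */
pixel_class_t pixel_class(int row, int col, int phase_row, int phase_col)
{
    int r = (row + phase_row) & 1;
    int c = (col + phase_col) & 1;
    if (r == 0)             /* red/green row of an RGGB mosaic */
        return (c == 0) ? RED_PIXEL : GREEN_IN_RED_ROW;
    else                    /* green/blue row */
        return (c == 0) ? GREEN_IN_BLUE_ROW : BLUE_PIXEL;
}
```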
In one embodiment, the minRangeParam parameter prevents noise in very uniform areas from causing large numbers of defective sensor elements. The optimal value for this parameter is a function of the average intensity of the image. This can be estimated beforehand from information gathered in other parts of the image-processing pipeline while processing previous frames.
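One plausible sketch of such an estimate (entirely illustrative; the patent does not give a formula, and base_floor and intensity_shift are hypothetical tuning parameters) derives the parameter from the previous frame's mean intensity, on the assumption that sensor noise grows with signal level:

```c
/* Derive minRangeParam from statistics gathered on a previous frame.
 * base_floor is a fixed noise floor; intensity_shift controls how strongly
 * the estimate tracks mean image brightness. Both are hypothetical. */
int estimate_min_range(long frame_intensity_sum, long pixel_count,
                       int base_floor, int intensity_shift)
{
    long mean = frame_intensity_sum / pixel_count;
    return base_floor + (int)(mean >> intensity_shift);
}
```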
Referring again to FIG. 2, assume in this example that the sensor element under test in the image sensor 103 is a green filtered sensor element. Based on detecting that the sensor element under test is a green colored filter, the image-processing resource 100 selects pattern 220 (as opposed to pattern 320) to perform an analysis of whether the intensity value produced by the sensor element under test X needs adjustment.
Assume that a magnitude of the intensity value produced by the sensor element under test X (green) is the value=115.
Assume that a magnitude of intensity values for the sensor elements in regions (e.g., element at row 2, column 2; element at row 4, column 2; element at row 2, column 4; element at row 4, column 4) labeled with an upper case C (green) are 90, 95, 105, and 110. Assume that a magnitude of intensity values for the sensor elements in pattern 220 labeled with a lower case c (green) are respectively 92, 93, 94, 106, 107, and 108. Image-processing resource 100 initially generates a range value for the color green based on regions labeled with a C and c.
In this case, based on minimum and maximum values, the C-range spans between 90 and 110 (e.g., C-range value=110-90=20). Since the magnitude of 115 for the sensor element under test is outside of the range between 90 and 110, the sensor element under test X may be defective. As mentioned, and as discussed further below, if variations in the magnitudes of intensity values for the other colors (e.g., blue, red) are substantially large, the image-processing resource may not flag the intensity value of 115 for the sensor element under test as being erroneous.
Assume that a magnitude of intensity values for the sensor elements in regions labeled A (red) are 8, 9, 10, 10, 11, and 12. Image-processing resource 100 generates a range value for the color red based on regions in pattern 220 labeled with an A. In this case, based on minimum and maximum values of A elements, the A-range is calculated as being between 8 and 12 (e.g., range value=12-8=4). A-average=10 (e.g., an average of the different values labeled A is a value of 10).
In one embodiment, the image-processing resource 100 scales the A-range (as produced for elements labeled A) to the C-range. In one embodiment, scaling the A-range to the C-range includes first normalizing the A-range to A-average. For example, (A-range value/A-average)=4/10=0.4. The scaling then includes multiplying (A-range value/A-average)*(C-average)=(0.4)*100=40. The aScaledRange with respect to C-average substantially equals the scaled range between 80 and 120. Scaling of the A-range (red elements) to the C-range (green elements) expands the range of acceptable values from an original range of 90 to 110 for testing the selected green sensor element under test X to a new threshold range of 80 to 120 for testing the green sensor element under test X.
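The arithmetic of this worked example can be checked with a small sketch; the integer-rounding choices here are illustrative simplifications of the hardware-oriented ScaleRanges listing shown earlier.

```c
/* Scale another color's range (max - min) to the reference color's average,
 * i.e. range * refAverage / otherAverage, with rounded integer average. */
int scale_range(int otherMax, int otherMin, int refAverage)
{
    int otherAverage = (otherMax + otherMin + 1) / 2;
    return (otherMax - otherMin) * refAverage / otherAverage;
}
```

For the example values, the A-range of 4 (values 8..12, average 10) scaled to the green average of 100 gives 40; centering that width about C-average, per the description in the text, yields the widened acceptance window of 80 to 120.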
Assume further in this example that a magnitude of intensity values for the sensor elements in regions labeled B (blue) are 490, 495, 500, 500, 505, and 510. Image-processing resource 100 generates a range value for the color blue based on regions labeled with a B. In this case, based on minimum and maximum values for elements labeled B, the B-range is between 490 and 510 (e.g., B-range value=510-490=20).
B-average=500 (e.g., average of elements labeled B). The image-processing resource 100 scales the B-range to the C-range. In one embodiment, scaling the B-range to the C-range includes normalizing the B-range to the B-average. For example, B-range value/B-average=20/500=0.04. The scaling then includes multiplying (B-range value/B-average)*(C-average)=(0.04)*100=4. The bScaledRange with respect to C-average substantially equals the range 98 to 102. This scaled B-range does not enlarge the C-range. Thus, the bScaledRange is not used to modify the already enlarged C-range 80 to 120.
Thus, summarizing the above example process of scaling the ranges of different colors: the image-processing resource 100 produces a first average value (e.g., C-average=100), the first average value derived from the C values produced by the neighboring sensor elements that monitor the color green; the image-processing resource 100 produces a second average value (e.g., A-average=10), the second average value derived from the values produced by the neighboring sensor elements that monitor the different color (e.g., the color RED); the image-processing resource 100 produces a gain value (e.g., ratio value), the gain value substantially equal to the first average value divided by the second average value (e.g., 100/10=10); the image-processing resource 100 effectively multiplies the second range (e.g., 4) by the gain value 10 to produce a new range 40; in response to detecting that the newly scaled range 40 is larger than a width of the first range (i.e., C-range value=110-90=20), the image-processing resource 100 scales the new range value (40) with respect to the first average value (100) to produce the threshold range. In one embodiment, the image-processing resource 100 centers the new range (40) about C-average to produce a threshold range between 80 and 120 for testing the sensor element under test.
In this example embodiment, the image-processing resource 100 compares the value (i.e., 115) of the sensor element under test X to the newly enlarged range 80 to 120. Although the intensity value of 115 falls outside the original C-range between 90 and 110, the intensity value of 115 is not considered to be an error (i.e., the sensor element is not considered to be defective) in this instance since there is a significant variance or gradient in intensity values with respect to the other color element (i.e., red color filtered pixels) in a vicinity of the sensor element under test. The variation in values produced by the red colored sensor elements effectively causes the acceptable range for the green element under test to be widened.
Thus, embodiments herein include: producing a first range (e.g., C-range), the first range defined by a range of values (e.g., minimum and maximum values) produced by neighboring sensor elements that monitor the particular color; producing a second range (e.g., A-range), the second range defined by a range of values (e.g., minimum and maximum values) produced by neighboring sensor elements that monitor the different color (e.g., RED); deriving a threshold range (e.g., new or enlarged range 80 to 120) based at least in part on the second range; and comparing the intensity value (e.g., 115) produced by the sensor element under test X to the derived threshold range (e.g., 80 to 120) to determine whether to produce the substitute value for the sensor element under test X.
Assume in the same example above that the intensity value for the sensor element under test is a value of 125 instead of 115 as in the previous example. In such an instance, because the value 125 for the sensor element under test X falls outside of the expanded range 80 to 120, the intensity value and/or sensor element under test X would be considered defective. Additionally, the image-processing resource 100 generates a substitute value for the sensor element under test X. More specifically, in response to detecting that the intensity value produced by the sensor element under test X is likely not representative of an actual amount of optical energy of the particular color (i.e., green in this example) inputted to the sensor element under test, the image-processing resource 100 produces the substitute value for the sensor element under test X based at least in part on intensity values produced by an additional set of neighboring sensor elements (e.g., intensity values in regions of pattern 220 that are marked with an upper case C and/or lower case c) with respect to the sensor element under test. As previously discussed, the set of neighboring sensor elements labeled c or C are fabricated to also monitor the color green.
In this example, because the intensity value 125 for the sensor element under test falls outside the acceptable range, the image-processing resource 100 produces the substitute value for the sensor element under test X to reside within a range derived from the intensity values produced by the additional set of neighboring sensor elements. As an example, the range of values for regions labeled C is between 90 and 110. In one embodiment, the image-processing resource 100 chooses a substitute value such as 100, an average of the surrounding green filtered sensor elements.
Assume in another example that the intensity value for the sensor element under test is a value of 100 instead of 115 as in the previous example. In such an instance, because the value of 100 for the sensor element under test falls within the range 90 to 110 for the other green sensor elements, the sensor element under test is deemed not defective.
FIGS. 4 and 5 are example diagrams illustrating an alternative way of image processing according to embodiments herein.
D=non-current component 1
E=non-current component 2
H,V,R,G,B,b,g=components as per diagrams
(#) maximum # of pixels in a computation (subject to availability)
A=allowed
O=overshoot
Except where otherwise noted, divides are by powers of 2 and are implemented by adding a rounding factor then shifting
Xrange=Xmax−Xmin
Error detection algorithm implemented by the image-processing resource 100:
In this example: C=current pixel component (i.e., Red, Green, or Blue)
Xscalerange(Refavg)=((Xmax−Xmin)*Refavg)/((Xmax+Xmin)/2)
(If desired, division can be approximated via table lookup)
if (currently on green pixel location) {
  compute Gmax(12) and Gmin(12) // use G and g pixels
  Crange=Gmax−Gmin
  Cavg=sum(G(4))/(4) // uses G but not g pixels
  compute Dmax(6) and Dmin(6) // use H pixels
  compute Emax(6) and Emin(6) // use V pixels
}
else { // currently on red or blue location
  compute Bmax(8) and Bmin(8) // use B and b pixels
  Crange=Bmax−Bmin
  Cavg=sum(B(4))/(4) // uses B but not b pixels
  compute Dmax(4) and Dmin(4) // use G pixels
  compute Emax(4) and Emin(4) // use R pixels
}
compute Dscalerange(Cavg)
compute Escalerange(Cavg)
Mergedrange=max(Parammin, Crange, Dscalerange, Escalerange)
Orange=(Mergedrange*Paramnum)/Paramden
Amax=min(1023, Orange+Cmin)
Amin=max(0, Cmax−Orange)
if ((currentPixel<Amin) || (currentPixel>Amax)) {
  currentPixel=Cavg // pixel is dead, replace with average
}
Implementation Enhancements—Reduced Buffering
In one embodiment, the described on-the-fly Defective Pixel Correction (DPC) algorithm is used just before a debayering step, where the R, G, and B pixels are converted into an RGB triplet at each pixel location. In one embodiment, it may be desired to implement the detection and correction algorithm without using additional line buffers. Assuming the image-processing resource operates on a 5×5 matrix of values as discussed herein, the buffer 125 can be a 5-line buffer used to process the image into a corrected pixel version before it is fed to a respective debayering algorithm.
In order to implement one embodiment without any additional line buffers, embodiments herein can include modifications. For example, FIG. 6 is an example diagram illustrating an image processing system 600 according to embodiments herein.
In one embodiment, in order to eliminate the need for any additional line buffers, the DPC algorithm as executed by the image processing system 600 can be configured to analyze 5 lines around the center pixel that we are trying to debayer and produce 5 lines that can go to the debayering step. Note that the difference between this and the previous description is that the DPC algorithm for the fifth line's pixels (dpc5 or X5-value generator 620-5) cannot look at the data below that pixel, and similarly, the DPC algorithm for dpc1 (or X1-value generator 620-1) cannot look at data above that pixel, because the data is not stored in the buffer 125. Since the debayering step typically weights the pixels farther from the center pixel less than the center pixel, we can degrade the performance of the DPC algorithm slightly for dpc1 (i.e., X1-value generator 620-1), dpc2 (i.e., X2-value generator 620-2), dpc4 (i.e., X4-value generator 620-4), and dpc5 (i.e., X5-value generator 620-5).
Thus embodiments herein include use of a structure where the algorithm for dpc3 (i.e., X3-value generator 620-3) is any DPC algorithm that looks at some amount of data around raw3 (5 lines in this example, but could be a different number), and each dpc-i is any DPC algorithm that looks at raw1-5, for each i.
We describe below specific instances of the dpc1-5 algorithms (i.e., X1-value generator 620-1, X2-value generator 620-2, X3-value generator 620-3, X4-value generator 620-4, and X5-value generator 620-5 in FIG. 6).
To minimize system buffering, DPC takes place in parallel with de-Bayering. When generating a line of de-Bayered data, 5 lines of raw Bayer input data, centered around the desired output line, are processed by DPC to create 5 lines of data that are used by de-Bayering to create a single line of output. Therefore, each line in the input is processed 5 times, each time using a slightly different algorithm due to the amount and position of surrounding data that is used to make defective pixel decisions.
The processing threads 620 use the patterns in FIGS. 9 and 10 to detect defective sensor elements. The algorithms and patterns in FIGS. 9 and 10 also differ slightly for green and non-green pixels due to their different resolutions in the Bayer pattern.
In one embodiment, the defective pixel decision for each pixel is derived from the values of nearby pixels such that a 5×5 block of input pixels is used to generate 5 output pixels that correspond to the 5-pixel vertical line in the center of the input block. Each of the 5 blocks shown in FIG. 9 labels the pixels involved in processing the pixel marked “X” when the center pixel of the block is green. Similarly, FIG. 10, below, labels the pixels involved when the center pixel is not green.
By way of a non-limiting example, the threshold for determining defective pixels is a function of the ranges of values of all three colors in the Bayer matrix. The pixels used to determine the range of each color are identified by letters. The pixels that are the same color as “X” are marked with “C” or “c”. Only the “C” pixels are used to determine the replacement value for defective pixels. Pixels of one of the other colors are marked with “A” and those of the remaining color are marked with “B”. The actual color of the pixels marked with a particular letter changes depending on where the “X” pixel is in the Bayer pattern. This creates two classes of green pixels, those that are in line with blue pixels and those in a line with red pixels, and two classes of non-green pixels, red and blue.
At the pixel level, independent of the color of “X”, the inputs to the algorithm are:
the value of pixel “X”
the minimum and maximum values of the pixels marked “A”
the minimum and maximum values of the pixels marked “B”
the minimum and maximum values of the pixels marked “C” or “c”
the sum and number of pixels marked “C”
It is possible that some of the pixels normally used by the algorithm are not available because the pixel being processed is too close to the edge of the image. When calculating the inputs listed above, the unavailable pixels are simply ignored, with only one exception. The sum and number of pixels marked “C” is used to calculate the average value that is used to replace defective pixels. When working on a non-green center pixel, as shown in pattern 1010-3 of FIG. 10, if one of the “C” pixels is not available, it is replaced by the “C” pixel directly across from it on the other side of the pixel marked “X”. This allows the average to be calculated by shifting by 2 instead of dividing by 3.
In one non-limiting example embodiment, the algorithm uses a fraction of the maximum of the scaled ranges of the different colors to calculate a range of values that the current pixel can have and still be called good (i.e., not defective). Pixels with values outside this range are considered defective and are replaced by the average of the values of the pixels marked “C”. The range of a color is the difference between the maximum and the minimum values of the letter marked pixels of that color. The maximum range is computed by scaling the range of the colors that are not the same as the current pixel to be comparable to that of the current pixel's color.
Note that the pattern of pixels used to determine whether or not pixel “X” is defective for the bottom two pixels in the center vertical line is the same as that used for the top two pixels, but mirrored over the center horizontal line.
The following C code expresses the algorithm:
// Per pixel algorithm inputs
int currentPixel; // value of pixel “X”
int capCTotal; // sum of available “C” pixels
int capCCount; // number of “C” pixels available;
int maxC; // maximum value of the “C” and “c” pixels;
int minC; // minimum value of the “C” and “c” pixels;
int maxA; // maximum value of the “A” pixels;
int minA; // minimum value of the “A” pixels;
int maxB; // maximum value of the “B” pixels;
int minB; // minimum value of the “B” pixels;
// parameters that adjust algorithm sensitivity
int overshootNumeratorParam; // sensitivity adjustment numerator
int overshootDenominatorParam; // sensitivity adjustment denominator (8)
int minRangeParam; // minimum range sensitivity parameter
// local variables
int cAverage; // average value of the available “C” pixels
int aScaledRange; // difference in the “A” pixels scaled to c's range
int bScaledRange; // difference in the “B” pixels scaled to c's range
int cRange; // difference in the “C” and “c” pixels
int maxRange; // max of all color ranges scaled to c's range
int overshoot; // maxRange adjusted by sensitivity parameters
int allowedMax; // largest value the pixel can have and still be good
int allowedMin; // smallest value the pixel can have and still be good
// calculate the value that will replace defective pixels
// note that this is used in scaling calculations also
cAverage=(capCTotal+capCCount/2)/capCCount; // average with rounding
// scale the ranges of the different colors to the range of the current pixel
aScaledRange=ScaleRanges(maxA, minA, cAverage); // See below
bScaledRange=ScaleRanges(maxB, minB, cAverage);
cRange=maxC−minC;
maxRange=max(minRangeParam, max(aScaledRange, max(bScaledRange, cRange)));
overshoot=maxRange*overshootNumeratorParam/overshootDenominatorParam;
allowedMax=min(1023, overshoot+minC);
allowedMin=max(0, maxC−overshoot);
if (currentPixel>allowedMax || currentPixel<allowedMin) {
    // the pixel is “defective” so replace it
    currentPixel=cAverage;
}
In one embodiment, in the above code, the intent of the ScaleRanges function is to scale the range of one color to the range of the color of the current pixel. The simplest implementation would be:
int ScaleRanges(int otherMax, int otherMin, int refAverage) {
    int otherAverage=(otherMax+otherMin+1)/2;
    return ((otherMax-otherMin)*refAverage)/otherAverage;
}
However, since division is hard to implement in hardware, the table-lookup implementation below has been used instead.
int ScaleRanges(int otherMax, int otherMin, int refAverage) {
    int divideLookup[16]={0, 1024/1, 1024/2, 1024/3, 1024/4,
                          1024/5, 1024/6, 1024/7, 1024/8, 1024/9,
                          1024/10, 1024/11, 1024/12, 1024/13, 1024/14, 1024/15};
    // note that entry 0 is never used
    int otherAverage=((otherMin+otherMax+1)>>1);
    // scale the averages up to a full 10 bits
    for (int zeroBits=0; zeroBits<10; zeroBits++) {
        if (otherAverage & 0x200 || refAverage & 0x200) {
            break;
        }
        otherAverage<<=1;
        refAverage<<=1;
    }
    // take the 4 ms bits of the scaled up values
    otherAverage>>=6;
    refAverage>>=6;
    if (!otherAverage) {
        return 0;
    }
    else {
        return ((otherMax-otherMin)*refAverage*divideLookup[otherAverage])>>10;
    }
}
Another simplification can be made by constraining the parameter overshootDenominator to be a power of two so the division in the algorithm can be reduced to a right shift.
It should also be noted that windowing operations and different sensors can move the phase of the Bayer pattern, so programmability is provided in the implementation that allows the pixels to be processed with the correct color algorithm.
The minRange parameter prevents noise in very uniform areas from causing large numbers of defective pixels. The optimal value for this parameter is probably a function of the average intensity of the image. Presumably this can be estimated beforehand from information gathered in other parts of the image-processing pipeline while processing previous frames.
More particularly, as shown in FIG. 6, the image processing system 600 includes a first stage image processor 610-1, a second stage image processor 610-2, buffer 125, optional buffer 625 to store substitute intensity values, and repository 680.
As previously discussed, a combination of the sampling circuit 110 and analog to digital circuit 115 produce digital values (e.g., intensity values) for storage in buffer 125. The first stage image processor 610-1 includes multiple processing threads 620 (e.g., X1-value generator 620-1, X2-value generator 620-2, X3-value generator 620-3, X4-value generator 620-4, and X5-value generator 620-5).
In one embodiment, as mentioned, each of the multiple processing threads 620 determines whether a respective element under test in a column under test is defective. In a manner as previously discussed, the processing threads can selectively generate substitute intensity values 625 for failing or defective sensor elements in the image sensor 103.
Second stage image processor 610-2 has access to the original intensity values stored in buffer 125 and the generated substitute intensity values (if any) stored in buffer 625. Second stage image processor 610-2 can be a debayer process in which the R, G, and B pixels are converted into an RGB triplet at each pixel location. As mentioned, in one embodiment, it may be desired to implement the detection and correction algorithm without using additional line buffers, or while reducing the number of line buffers that are needed in a respective integrated circuit or silicon chip in which all or part of processing system 600 resides.
FIG. 7 is an example diagram illustrating multi-stage processing according to embodiments herein.
As previously discussed, for a given image frame, buffer 125 stores intensity values produced by the respective sensor elements in the image sensor 103. In one embodiment, the sampling circuit 110 scans across each of the rows of sensor elements from left to right. After completing a row, the sampling circuit 110 starts sampling the next row of sensor elements in the image sensor 103. The sampling circuit 110 repeats this raster scanning for each row. Eventually the sampling circuit 110 reaches the end of the last row in the buffer 125 and overwrites the oldest stored data in buffer 125 as it samples the new data. Thus, the buffer 125 can be a rolling buffer of data.
In one embodiment, the size of buffer 125 is limited with respect to the number of sensor elements in the image sensor 103. For example, the buffer 125 can be sized to store intensity values (e.g., each value being a multi-bit value) for multiple lines (e.g., 5 lines) of sensor elements in the image sensor 103. In one embodiment, the image sensor 103 includes many thousands of rows of sensor elements. Thus, the intensity values temporarily stored in buffer 125 (such as a line buffer) can represent sensor element or pixel settings for only a small number of sensor elements in the image sensor 103.
As shown, at sample window time t1, the sampling circuit 110 samples the sensor element of image sensor 103 at row 5, column 15 and stores the detected value in buffer 125.
Also, during sample window time t1, the first stage image processor 610-1 implements multiple processing threads 620 to analyze settings for the sensor elements in window 710-1 and selectively produce substitute values 625 if it is determined that a respective intensity value recorded for the sensor element under test is defective.
More specifically, for sample time t1, and depending on a filter color of the X1 element under test, the X1-value generator 620-1 selectively applies the pattern 910-1 (in FIG. 9) or pattern 1010-1 (in FIG. 10) to window 710-1 of settings (i.e., intensity values) to analyze an original intensity value produced for the sensor element at row 1, column 12. In a manner as previously discussed, using the error detection and correction algorithm, the X1-value generator 620-1 determines whether the intensity value stored in buffer 125 at row 1, column 12 for element X1 needs to be corrected. If so, the X1-value generator 620-1 produces and stores a respective substitute value for the element under test X1 in buffer 625. As an alternative, the X1-value generator 620-1 can overwrite an original value in buffer 125 at row 1, column 12 with a substitute value.
For sample time t1, depending on a filter color of the X2 element under test, the X2-value generator 620-2 selectively applies the pattern 910-2 (in FIG. 9) or pattern 1010-2 (in FIG. 10) to window 710-1 of settings to analyze an original intensity value produced for the sensor element at row 2, column 12. In a manner as previously discussed, using the error detection and correction algorithm, the X2-value generator 620-2 determines whether the intensity value stored in buffer 125 at row 2, column 12 needs to be corrected. If so, the X2-value generator 620-2 produces and stores a respective substitute value for the element under test X2 in buffer 625. As an alternative, the X2-value generator 620-2 can overwrite an original value in buffer 125 at row 2, column 12 with a substitute value.
For sample time t1, and depending on a filter color of the X3 element under test, the X3-value generator 620-3 selectively applies the pattern 910-3 (in FIG. 9) or pattern 1010-3 (in FIG. 10) to window 710-1 of settings to analyze an original intensity value produced for the sensor element at row 3, column 12. In a manner as previously discussed, and using the error detection and correction algorithm, the X3-value generator 620-3 determines whether the intensity value stored in buffer 125 at row 3, column 12 needs to be corrected. If so, the X3-value generator 620-3 produces and stores a respective substitute value for the element under test X3 in buffer 625. As an alternative, the X3-value generator 620-3 can overwrite an original value in buffer 125 at row 3, column 12 with a substitute value.
For sample time t1, and depending on a filter color of the X4 element under test, the X4-value generator 620-4 selectively applies the pattern 910-4 (in FIG. 9) or pattern 1010-4 (in FIG. 10) to window 710-1 of settings to analyze an original intensity value produced for the sensor element at row 4, column 12. In a manner as previously discussed, and using the error detection and correction algorithm, the X4-value generator 620-4 determines whether the intensity value stored in buffer 125 at row 4, column 12 needs to be corrected. If so, the X4-value generator 620-4 produces and stores a respective substitute value for the element under test X4 in buffer 625. As an alternative, the X4-value generator 620-4 can overwrite an original value in buffer 125 at row 4, column 12 with a substitute value.
For sample time t1, and depending on a filter color of the X5 element under test, the X5-value generator 620-5 selectively applies the pattern 910-5 (in FIG. 9) or pattern 1010-5 (in FIG. 10) to window 710-1 of settings to analyze an original intensity value produced for the sensor element at row 5, column 12. In a manner as previously discussed, and using the error detection and correction algorithm, the X5-value generator 620-5 determines whether the intensity value stored in buffer 125 at row 5, column 12 needs to be corrected. If so, the X5-value generator 620-5 produces and stores a respective substitute value for the element under test X5 in buffer 625. As an alternative, the X5-value generator 620-5 can overwrite an original value in buffer 125 at row 5, column 12 with a substitute value.
In certain instances, very few sensor elements are defective. Thus, the first stage image processor 610-1 may only occasionally generate a substitute value for overwriting or replacing an original value as detected by analog to digital circuit 115.
In one embodiment, the first stage image processor 610-1 simultaneously executes the processing threads 620 to generate respective substitute values on an as-needed basis. Note that in one embodiment, the window 710-1 scans to the right for each successive sample time. In this example, the window 710-1 lags just behind the current sensor element (e.g., row 5, column 15) of image sensor 103 that is being sampled by the sampling circuit 110. The amount of lag between the current sample window of the sampling circuit 110 and the window 710-1 can vary depending on the embodiment.
Second stage image processor 610-2 processes settings (corrected or original settings) of values associated with the sensor elements in window 720-2. As previously discussed, in one embodiment, the second stage image processor 610-2 implements a debayering algorithm to produce a setting for element under test Y (e.g., row 3, column 9). Second stage image processor 610-2 has access to buffer 125, which stores the original intensity values produced by the sensor elements, as well as to the substitute values (if any) in buffer 625 as produced by the first stage image processor 610-1. The setting of element Y as produced by the second stage image processor 610-2 depends on magnitudes of the corrected or original settings in a manner as previously discussed.
The amount of lag between window 710-1 and window 720-1 can be 3 time samples as shown, such that the window 720-1 analyzes corrected elements. The amount of lag between window 710-1 and window 710-2 can vary depending on the embodiment.
Second stage image processor 610-2 stores the value produced for element under test Y in repository 680.
FIG. 8 is an example diagram illustrating multi-stage processing according to embodiments herein.
As shown, at sample window time t2, the sampling circuit 110 samples the sensor element of image sensor 103 at row 5, column 16. Also, during sample window time t2, the first stage image processor 610-1 implements multiple processing threads 620 to analyze settings for the sensor elements in window 710-2 and selectively produce substitute values 625 if it is determined that a respective intensity value recorded for the sensor element under test is defective.
More specifically, for time t2, and depending on a filter color of the X1 element under test, the X1-value generator 620-1 selectively applies the pattern 910-1 (in FIG. 9) or pattern 1010-1 (in FIG. 10) to window 710-2 of settings (i.e., intensity values) to analyze an original intensity value produced for the sensor element at row 1, column 13. In a manner as previously discussed, using the error detection and correction algorithm, the X1-value generator 620-1 determines whether the intensity value stored in buffer 125 at row 1, column 13 needs to be corrected. If so, the X1-value generator 620-1 produces and stores a respective substitute value for the element under test in buffer 625 or buffer 125.
For time t2, and depending on a filter color of the X2 element under test, the X2-value generator 620-2 selectively applies the pattern 910-2 (in FIG. 9) or pattern 1010-2 (in FIG. 10) to window 710-2 of settings to analyze an original intensity value produced for the sensor element at row 2, column 13. In a manner as previously discussed, using the error detection and correction algorithm, the X2-value generator 620-2 determines whether the intensity value stored in buffer 125 at row 2, column 13 needs to be corrected. If so, the X2-value generator 620-2 produces and stores a respective substitute value for the element under test in buffer 625 or buffer 125.
For time t2, and depending on a filter color of the X3 element under test, the X3-value generator 620-3 selectively applies the pattern 910-3 (in FIG. 9) or pattern 1010-3 (in FIG. 10) to window 710-2 of settings to analyze an original intensity value produced for the sensor element at row 3, column 13. In a manner as previously discussed, using the error detection and correction algorithm, the X3-value generator 620-3 determines whether the intensity value stored in buffer 125 at row 3, column 13 needs to be corrected. If so, the X3-value generator 620-3 produces and stores a respective substitute value for the element under test in buffer 625.
For time t2, and depending on a filter color of the X4 element under test, the X4-value generator 620-4 selectively applies the pattern 910-4 (in FIG. 9) or pattern 1010-4 (in FIG. 10) to window 710-2 of settings to analyze an original intensity value produced for the sensor element at row 4, column 13. In a manner as previously discussed, and using the error detection and correction algorithm, the X4-value generator 620-4 determines whether the intensity value stored in buffer 125 at row 4, column 13 needs to be corrected. If so, the X4-value generator 620-4 produces and stores a respective substitute value for the element under test in buffer 625 or buffer 125.
For time t2, and depending on a filter color of the X5 element under test, the X5-value generator 620-5 selectively applies the pattern 910-5 (in FIG. 9) or pattern 1010-5 (in FIG. 10) to window 710-2 of settings to analyze an original intensity value produced for the sensor element at row 5, column 13. In a manner as previously discussed, and using the error detection and correction algorithm, the X5-value generator 620-5 determines whether the intensity value stored in buffer 125 at row 5, column 13 needs to be corrected. If so, the X5-value generator 620-5 produces and stores a respective substitute value for the element under test in buffer 625 or buffer 125.
Second stage image processor 610-2 processes settings (corrected and/or original settings) of values in window 720-2. As previously discussed, in one embodiment, the second stage image processor 610-2 implements a debayering algorithm to produce a setting for element under test Y (e.g., row 3, column 10). Second stage image processor 610-2 has access to buffer 125, which stores the original intensity values produced by the sensor elements, as well as to the substitute values (if any) in buffer 625 as produced by the first stage image processor 610-1. The setting of element Y as produced by the second stage image processor 610-2 depends on magnitudes of the original settings and/or corrected settings in a manner as previously discussed.
Second stage image processor 610-2 stores the value produced for element under test Y at row 3, column 10 for time window t2 in repository 680.
In this manner, in each sample time window, the image processing system 600 is able to simultaneously sample a current sensor element via sampling circuit 110 and analog to digital circuit 115, detect and correct any defective sensor elements in a first window 710-1, and produce a final setting for the element under test Y in the second window 720-2.
Thus, the first stage image processor 610-1 can generate and store a respective substitute value produced for each of multiple sensor elements tested. Subsequent to analyzing intensity values produced by neighboring sensor elements with respect to the sensor elements under test, the second stage image processor 610-2 applies the debayering algorithm to produce a single element setting (for element under test Y) based on a window of values including: i) original intensity values produced by the neighboring sensor elements, and ii) a substitute value produced for any defective sensor elements.
FIG. 11 is an example block diagram of a computer system for implementing any of the operations as discussed herein according to embodiments herein.
As shown, computer system 850 of the present example can include an interconnect 811 that couples computer readable storage media 812 such as a non-transitory type of media (i.e., any type of hardware storage medium) in which digital information can be stored and retrieved, a processor 813, I/O interface 814, and a communications interface 817.
I/O interface 814 provides connectivity to a repository 480 and, if present, other devices such as a playback device 130, keypad, control device 1005, a computer mouse, etc.
Computer readable storage medium 812 can be any hardware storage device such as memory, optical storage, hard drive, floppy disk, etc. In one embodiment, the computer readable storage medium 812 stores instructions and/or data.
Communications interface 817 enables the computer system 850 and processor 813 to communicate over a resource such as network 190 to retrieve information from remote sources and communicate with other computers. I/O interface 814 enables processor 813 to retrieve stored information from repository 480.
As shown, computer readable storage media 812 is encoded with image analyzer application 140-1 (e.g., software, firmware, etc.) executed by processor 813. Image analyzer application 140-1 can be configured to include instructions to implement any of the operations as discussed herein. In one embodiment, the image analyzer application 140-1 is configured to perform any of the operations associated with the image-processing resource 100, image processing system 600, etc.
During operation of one embodiment, processor 813 accesses computer readable storage media 812 via the use of interconnect 811 in order to launch, run, execute, interpret or otherwise perform the instructions in image analyzer application 140-1 stored on computer readable storage medium 812.
Execution of the image analyzer application 140-1 produces processing functionality such as image analyzer process 140-2 in processor 813. In other words, the image analyzer process 140-2 associated with processor 813 represents one or more aspects of executing image analyzer application 140-1 within or upon the processor 813 in the computer system 850.
Those skilled in the art will understand that the computer system 850 can include other processes and/or software and hardware components, such as an operating system that controls allocation and use of hardware resources to execute image analyzer application 140-1.
In accordance with different embodiments, note that computer system 150 may be any of various types of devices, including, but not limited to, a mobile computer, a personal computer system, a wireless device, base station, phone device, desktop computer, laptop, notebook, netbook computer, mainframe computer system, handheld computer, workstation, network computer, application server, storage device, a consumer electronics device such as a camera, camcorder, set top box, mobile device, video game console, handheld video game device, a peripheral device such as a switch, modem, router, or in general any type of computing or electronic device. The computer system may reside at any location or can be included in any suitable resource in network environment 100 to implement functionality as discussed herein.
Functionality supported by the different resources will now be discussed via flowcharts in FIGS. 12-13. Note that the steps in the flowcharts below can be executed in any suitable order.
FIG. 12 is a flowchart 1200 illustrating an example method according to embodiments. Note that there will be some overlap with respect to concepts as discussed above.
In processing block 1210, the image processing resource 100 receives an intensity value produced by a sensor element under test X in the image sensor 103. The sensor element under test X is selected from an array of multiple sensor elements that collectively capture an image.
In processing block 1220, the image-processing resource 100 receives a respective intensity value for each of multiple sensor elements neighboring the sensor element under test X. In one embodiment, each of the neighboring sensor elements is fabricated to monitor a different color than a particular color monitored by the sensor element under test.
In processing block 1230, the image-processing resource 100 selectively produces a substitute value for the sensor element under test X depending on the intensity values produced by the multiple neighboring sensor elements that monitor the different color.
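The three processing blocks above can be sketched as a small routine. This is only an illustrative sketch: the tolerance threshold and the neighbor-average substitution rule are assumptions for demonstration, not values taken from the specification.

```python
# Hypothetical sketch of the FIG. 12 flow: receive the intensity of element
# under test X, receive its neighbors' intensities, and selectively produce a
# substitute value. Threshold and averaging rule are illustrative assumptions.

def selectively_substitute(test_value, neighbor_values, threshold=64):
    """Return (value, was_replaced) for the sensor element under test X."""
    lo, hi = min(neighbor_values), max(neighbor_values)
    if lo - threshold <= test_value <= hi + threshold:
        # Within tolerance of the neighboring intensities: keep the original.
        return test_value, False
    # Deemed defective: substitute an average of the neighboring intensities.
    return sum(neighbor_values) // len(neighbor_values), True
```

For example, `selectively_substitute(500, [100, 110, 120, 90])` flags the element as defective and returns the neighbor average, 105, while a value of 100 would be kept unchanged.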
FIG. 13 is a flowchart 1300 illustrating an example method according to embodiments. Note that there will be some overlap with respect to concepts as discussed above.
In processing block 1310, the first stage image processor 610-1 accesses values stored in buffer 125. Each of the values represents a respective amount of optical energy detected by a corresponding sensor element in an array of multiple sensor elements (e.g., image sensor 103).
In processing block 1320, in a first moving window 710-1 that traverses values stored in the buffer 125, the first stage image processor 610-1 selectively modifies at least a portion of values produced by sensor elements residing in the first moving window 710-1.
In processing block 1330, in a second window 710-2 that traverses the array (of buffer values) and trails behind the first moving window 710-1, the second stage image processor 610-2 analyzes the settings of the sensor elements residing in the second moving window 710-2 as modified by the first moving window 710-1 to produce a setting for a given sensor element Y in the second window.
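As a rough illustration of blocks 1310-1330, the following sketch runs a leading correction pass and a trailing output window over a one-dimensional buffer. Real embodiments operate on two-dimensional, color-aware windows; the window size, threshold, and averaging rules here are assumptions made purely for demonstration.

```python
# Illustrative two-stage pipeline over a 1-D buffer of sensor values:
# a first stage selectively replaces outlier values (cf. window 710-1),
# and a second stage derives a final setting for each element from the
# corrected values (cf. window 710-2). All parameters are assumed.

def two_stage_pipeline(buffer, win=3, threshold=64):
    corrected = list(buffer)
    # First stage: selectively modify defective values in a moving window.
    for i in range(len(buffer)):
        neighbors = [corrected[j]
                     for j in range(max(0, i - win), min(len(buffer), i + win + 1))
                     if j != i]
        lo, hi = min(neighbors), max(neighbors)
        if not (lo - threshold <= corrected[i] <= hi + threshold):
            corrected[i] = sum(neighbors) // len(neighbors)
    # Second stage: produce a final setting from the corrected window
    # (a simple mean here, standing in for the debayering computation).
    settings = []
    for i in range(len(buffer)):
        window = corrected[max(0, i - win):i + win + 1]
        settings.append(sum(window) // len(window))
    return settings
```

Running the sketch on a buffer containing one stuck-high element, e.g. `[10, 10, 10, 255, 10, 10, 10]`, corrects the outlier in the first stage so the trailing stage produces uniform settings.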
Note again that techniques herein are well suited for processing and enhancing captured images. However, it should be noted that embodiments herein are not limited to use in such applications and that the techniques discussed herein are well suited for other applications as well.
Based on the description set forth herein, numerous specific details have been set forth to provide a thorough understanding of claimed subject matter. However, it will be understood by those skilled in the art that claimed subject matter may be practiced without these specific details. In other instances, methods, apparatuses, systems, etc., that would be known by one of ordinary skill have not been described in detail so as not to obscure claimed subject matter. Some portions of the detailed description have been presented in terms of algorithms or symbolic representations of operations on data bits or binary digital signals stored within a computing system memory, such as a computer memory. These algorithmic descriptions or representations are examples of techniques used by those of ordinary skill in the data processing arts to convey the substance of their work to others skilled in the art. An algorithm as described herein, and generally, is considered to be a self-consistent sequence of operations or similar processing leading to a desired result. In this context, operations or processing involve physical manipulation of physical quantities. Typically, although not necessarily, such quantities may take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared or otherwise manipulated. It has been convenient at times, principally for reasons of common usage, to refer to such signals as bits, data, values, elements, symbols, characters, terms, numbers, numerals or the like. It should be understood, however, that all of these and similar terms are to be associated with appropriate physical quantities and are merely convenient labels. 
Unless specifically stated otherwise, as apparent from the following discussion, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining” or the like refer to actions or processes of a computing platform, such as a computer or a similar electronic computing device, that manipulates or transforms data represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the computing platform.
While this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present application as defined by the appended claims. Such variations are intended to be covered by the scope of this present application. As such, the foregoing description of embodiments of the present application is not intended to be limiting. Rather, any limitations to the invention are presented in the following claims.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and other objects, features, and advantages of the invention will be apparent from the following more particular description of preferred embodiments herein, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, with emphasis instead being placed upon illustrating the embodiments, principles, concepts, etc.
FIG. 1 is an example diagram illustrating an image sensor according to embodiments herein.
FIGS. 2-5 are example diagrams illustrating patterns for detecting one or more defective elements under test according to embodiments herein.
FIG. 6 is an example diagram illustrating an image-processing system according to embodiments herein.
FIG. 7 is an example diagram illustrating multiple processing windows at sample window time T1 according to embodiments herein.
FIG. 8 is an example diagram illustrating multiple processing windows at sample window time T2 according to embodiments herein.
FIGS. 9-10 are example diagrams illustrating patterns for detecting one or more defective elements under test according to embodiments herein.
FIG. 11 is a diagram illustrating an example computer architecture in which to execute one or more embodiments as discussed herein.
FIGS. 12 and 13 are example diagrams illustrating a method according to embodiments herein.
I am writing in response to the Missouri Department of Elementary and Secondary Education’s (MDESE) request to waive certain statutory and regulatory requirements of Title I, Part A of the Elementary and Secondary Education Act of 1965 (ESEA), as amended. In particular, MDESE requested a waiver of several statutory and regulatory provisions related to standards, assessments, and accountability to allow five elementary schools in the Kansas City 33 School District (KCSD) to assess students “out-of-level” (i.e., at their instructional level rather than at their grade level) and to use the results of such out-of-level testing to make adequate yearly progress (AYP) determinations. The request was for a period of four years beginning with assessments administered in school year 2010-2011.
Specifically, MDESE has requested this waiver to enable KCSD to pilot in five schools a student-centered system focused on instructing and assessing students based on standards appropriate to their instructional level. As explained in your waiver request, KCSD would determine each student’s baseline instructional level and establish a unique growth trajectory leading to proficiency by 2013-14 depending on the grade level at which a student’s baseline score is determined. KCSD would then develop an individualized educational plan for each student. For determining AYP, the percent proficient at the school and district levels would include the results of students tested at their instructional level. MDESE asserts that the pilot would provide students with more individualized attention, focus on the specific educational needs of each child, and ensure that each student masters concepts as he/she progresses through the learning program.
I understand and fully appreciate KCSD’s intent that the five pilot schools individualize instruction, focusing on the specific educational needs of each child. A school can and should differentiate instruction based on information from multiple sources, including the state assessments and other assessments that provide diagnostic, progress monitoring, and summative information. Nothing in the ESEA prevents KCSD or the pilot schools from pursuing these strategies. At the same time, I strongly support the ESEA requirement that instruction for all students must be based on grade-level academic content standards. To determine proficiency on grade-level content, students must be assessed on such content against grade-level academic achievement standards.
By using out-of-level assessments, KCSD would not be measuring all students in the pilot schools against grade-level achievement standards or holding those schools accountable for all students reaching grade-level proficiency. Accordingly, after reviewing and giving careful consideration to MDESE’s request, I am declining to exercise my waiver authority and am not granting your request.
If you have any questions, please contact Sharon Hall of my staff at 202-260-0998.
Computational chemistry has grown into a large field that entails the use of computers to study a very broad range of chemical problems, and includes both method development (often referred to as “theoretical chemistry”) and applications. This broad range of computational approaches includes electronic structure calculations, molecular dynamics simulations, and free energy relationships (e.g., QSAR, QSPR). The focus of Frontiers in Computational Chemistry is on the application of computational chemistry approaches to biological processes. While the overview of chapters included in this volume described below highlights exciting specific studies, general topics covered in this series include computer-aided molecular design, drug discovery and development, lead generation, lead optimization, database management, computer and molecular graphics, and the development of new computational methods or efficient algorithms for the simulation of chemical phenomena, including the analysis of biological activity.
This fourth volume presents five chapters spanning a diverse spectrum of approaches to biological processes:
Chapter 1 “Natural Lead Compounds and Strategies for Their Optimization as New Drugs” provides strategies to improve possible lead compounds for use in medicine. Dev Bukhsh Singh describes how a combination of approaches, including high-throughput screening, structure-activity relationships, and absorption, distribution, metabolism, excretion, and toxicity (ADMET) parameters, can be used to optimize lead compounds.
Chapter 2 “Computer-aided Drug Discovery Methodologies in the Modeling of Dual Target Ligands as Potential Parkinson’s Disease Therapeutics” by Yunierkis Perez-Castillo and co-authors, presents advances in the application of drug discovery methodologies in modeling dual target ligands for the discovery of potential Parkinson’s Disease therapeutics. A virtual screening method was developed to aid in the prioritization of potential dual binder candidates.
Chapter 3 “Molecular Studies of the Inhibition of Aminoacyl tRNA Synthetases in Microbial Pathogens”. In this chapter, Nilashis Nandi describes progress made towards molecular-level insight into the inhibition of aminoacyl tRNA synthetases (aaRSs), which are promising targets for the development of new inhibitors. Insight is gained using a number of methods including structural analysis based on crystallographic and NMR measurements, as well as mutation studies, kinetic methods, and molecular dynamics simulations.
Chapter 4 “Advances in the Computational Modelling of Halogen Bonds in Biomolecular Systems: Implications for Drug Design”. P.J. Costa and Rafael Nunes provide an overview of computational methods to model halogen bond interactions in biomolecular systems such as protein-ligand complexes. The practicality of these approaches in computer-aided drug design and discovery is discussed.
Chapter 5 “Molecular Classification of Caffeine, Its Metabolites, and Nicotine Metabolite” by Francisco Torrens and Gloria Castellano shows the use of structure-property relationships to model retention times for caffeine, its metabolites, and a nicotine metabolite.
We hope that the readers will find these reviews valuable and thought provoking so that they may trigger further research in the field. We are grateful for the timely efforts made by the editorial personnel, especially Ms. Mariam Mehdi (Assistant Manager Publications), Mr. Shehzad Naqvi (Editorial Manager Publications), and Mr. Mahmood Alam (Director Publications) at Bentham Science Publishers.
This volume is dedicated to Jeffry D. Madura, who was a professor at Duquesne University (USA) and an editor of the first three volumes of this series. We thank Jeff not only for his role as an editor of the prior volumes in this series, but also acknowledge his scientific contributions. He passed away too early and the chemistry community will miss him.
Canine, feline, birds.
PROBIOTIC contains Lactobacillus acidophilus, microorganisms that, once ingested, change the intestinal microflora and positively affect the animal's health: improving digestion, normalizing and supporting the development of beneficial intestinal flora, helping prevent diarrhea, improving fur quality (an indicator of good digestion), and benefiting animals subject to stress due to abrupt temperature changes, feeding changes or competition, as well as the use of antibiotics for long periods of time. Other additional benefits (due to its immunoregulatory effects) include improvement in cases of infectious diseases, chronic intestinal diseases and cardiovascular diseases. Probiotics are a safe, effective, non-antibiotic and potentially efficient treatment which, through the fermentation of bacteria or yeasts and the presence of amino acids, B-complex vitamins and active enzymes, provide additional antimicrobial-production, immunoregulatory (stimulating a beneficial, non-specific immune response), anti-inflammatory and anticarcinogenic effects, as well as direct effects on the intestinal mucosa. THE PROBIOTIC ACTION MECHANISM includes: competitive inhibition; suppression of inflammation; modification of the gastrointestinal medium; consumption of potentially harmful products; antimicrobial production factor; immune regulation; inactivation of procarcinogens.
Dash Shaw is a U.S. comic book writer/artist and animator. He is the author of the graphic novels Cosplayers, Doctors, New School, and Bottomless Belly Button, published by Fantagraphics. Additionally, Shaw has written Love Eats Brains published by Odd God Press, GardenHead published by Meathaus, The Mother's Mouth published by Alternative Comics, and BodyWorld published by Pantheon Books.
Shaw's comic short stories have appeared in many different anthologies, newspapers and magazines. His square-sized short stories were collected in the 2005 book GoddessHead published by Hidden Agenda Press. His comics are known for their emphasis on emotional, lyrical logic and innovative design. He was named one of the top ten artists to check out at the 2002 "Small Press Expo" when he was 19 years old. He also writes lyrics and plays with James Blanca in the weirdo pop band Love Eats Brains! and has co-written and acted in various short film projects.
Throughout college and since, Shaw has published sequential art short stories in a variety of publications in the United States and abroad, plus numerous magazine illustrations. Amy Taubin of Film Comment magazine writes:
Dash Shaw's comics are fearless, tender and smart. Shaw's drawings and texts turn the blank page into an imaginary friend — an alter-ego onto which he and the reader can project and try to make sense of dangerous, contradictory, consuming fantasies and ideas about life (especially that crazy thing called love) and its representation. Comics and movies have lots in common, but few movies are as inspired and intimate as 'Goddess Head'.
Shaw's Bottomless Belly Button was published by Fantagraphics in June 2008.[2] His BodyWorld webcomic was bought by Pantheon Books and published in a single printed volume in April 2010.[3]
Bottomless, an exhibition of Shaw's original drawings, storyboards, color background overlays and a new video animation, was on display at Duke University's John Hope Franklin Center from September 25 through October 31, 2008.[4]
Late 2009 saw the release of The Unclothed Man In the 35th Century A.D., a collection of short stories previously published in MOME,[5] along with several pages of storyboards and other ephemera from his animated shorts for IFC.
Shaw employs a combination of hand drawing, animation techniques and Photoshop to produce his artwork. Shaw started working on acetate sheets while studying at the School of Visual Arts. Pointing to pre-Photoshop comics that were colored via clear celluloid containing the black line art, under which would be placed a board with the painted colors, Shaw explains that he took this process and combined it with animation-style use of celluloid, where the backs of the acetate are painted with gouache and laid over a painted background, in addition to color separations where black line art is used to mark the different colors. In addition to using hand-drawing media such as crow quill pens, colored pencils, and markers, Shaw incorporates collage, Photoshop, and painting directly over photocopies, though he does not work with a separate line art layer, preferring to treat black as simply another color, and not a separate or more important element. On BodyWorld, for example, Shaw did the color separations by hand, used the paint bucket tool in Photoshop to color the shapes, and then printed it out and painted over the photocopy, before scanning it again and making final adjustments in Photoshop to achieve the final art.[1]
Shaw explains that his key motive is combining what he likes about hand drawing with the processes available in Photoshop. He has stated that he does not own a drawing tablet, and that his actual knowledge of Photoshop is limited, compared to most mainstream colorists who rely on it exclusively, explaining, "that coloring leaves me cold."[1]
| |
Latest articles in this journal
Published: 28 September 2022
Journal of Advanced Research in Social Sciences, Volume 5, pp 18-25; https://doi.org/10.33422/jarss.v5i3.786
Abstract:
This qualitative study aims to understand the food memory and food identity of a group who emigrated from Turkey to North Carolina for education or for work 30-40 years ago. For this purpose, one-to-one in-depth interviews were conducted with 14 persons, 12 women and 2 men, living in North Carolina using the Zoom application. In the interviews, participants said that they are connected to their roots through their homeland's food, that is, Turkish food. They had not given up on cooking and eating Turkish dishes. The memories they described effectively illustrate the concepts of food memory and food identity that are examined in migration studies; these are sub-topics of food anthropology. They also make a real effort to serve Turkish food to their non-Turkish friends and neighbours, and this effort seems to be an attempt to express their identity through their food. Besides, it can also be said that they were influenced by other cuisines and experienced cultural diffusion.
Published: 25 September 2022
Journal of Advanced Research in Social Sciences, Volume 5, pp 8-17; https://doi.org/10.33422/jarss.v5i3.785
Abstract:
Learning a new language is not simply memorizing grammar rules. It is a much deeper process of “being” in that language. Identity and belonging can be strong motivators to learn and practice a new language, but they can be detrimental in certain cases. When perceiving discrimination as an immigrant, one might move away from the local language, as a reaction to feeling unwelcome in that environment. A stronger connection to the identity as an “immigrant” may arise and, in some cases, it can even hinder language acquisition. In this article, we will explore the connections between perceiving xenophobic experiences as an immigrant and the impact they can have on the motivation to learn the local language.
Published: 24 September 2022
Journal of Advanced Research in Social Sciences, Volume 5, pp 1-7; https://doi.org/10.33422/jarss.v5i3.841
Abstract:
This paper evaluates the validity of the foundational ethical conception of Justice and its contesting, contradictory conception, Utilitarianism, in framing the institutional structure of a nation. The paper thereby asserts the importance of Justice and certain virtues to a nation's rise and fall and locates the greatest impediment to Justice in the idea of Utilitarianism. While Utilitarianism advances its own conception of Justice, the article, based on a critical-theoretical approach, advances a three-fold argument to refute Utilitarianism's claim. Utilitarianism is not a viable form of Justice since the theory represents a form of egoism, is structurally inconsistent with sound ethical doctrine, and has dissolved the end-and-means dichotomy.
Published: 20 September 2022
Journal of Advanced Research in Social Sciences, Volume 5, pp 26-31; https://doi.org/10.33422/jarss.v5i3.761
Abstract:
To date, myths are regarded as universal and enduring for their depiction of human understanding and knowledge. Myth presents clues and intimations about Man's origins of belief and life. Harry Potter, a series of stories written by J.K. Rowling, is a metaphoric presentation of myths and the cultural background behind each of them. This study investigates and explores how J.K. Rowling engages with cultural origins textually while sharing mythological ideas in modern literature as a creative way to give new senses to each of them. With its unique demonstration, Harry Potter occupies an outstanding position in giving myth a new dimension and tying the ancient to the present via a new style of mythmaking in modern literature. The study conducts an analytic explanation of the importance of mythmaking to literature in general and specifically in Harry Potter. The findings that the study arrives at are that myths are true replications of cultures and societies, and that Rowling's stories make a new connection with the depth of human superficiality, as well as rendering possible the revival of a mythological mentality in the modern era.
Published: 20 September 2022
Journal of Advanced Research in Social Sciences, Volume 5, pp 32-46; https://doi.org/10.33422/jarss.v5i3.776
Abstract:
The Oduduwa secessionist agitators are a group of social actors with the resolution of seceding from Nigeria. Meanwhile, in spite of their reminder that Nigeria's nationhood is still highly contested, there appears to be very little or no linguistic research on discourses produced by this emerging group of activists. Therefore, this study analyses the Oduduwa agitators' tweets to uncover their prevailing ideologies and highlight their strategies for representing themselves and those they oppose. An analytical and qualitative research design is used to interpret the data selected. From a corpus of 10,000 tweets on the Oduduwa secessionist agitators, a few tweets are purposively selected and analysed in this study. With insights from van Dijk's model of Critical Discourse Analysis, findings reveal that the Oduduwa secessionists' Twitter posts (tweets) are protest discourses, with positive "we" in-group representations and negative "they" out-group constructions imprinted on them. The agitators apply linguistic strategies such as code-switching, foregrounding and hashtags to express their solidarity as well as establish social interaction. The study concludes that the Oduduwa secessionist agitators' tweets are effectively used to describe the identities of the actors, express their arguments and demands, enunciate their activities and goals, and offer information updates to the agitators and supporters.
Published: 25 June 2022
Journal of Advanced Research in Social Sciences, Volume 5, pp 9-17; https://doi.org/10.33422/jarss.v5i2.792
Abstract:
The concept of disability has often been chained to that of animality as humanness is regarded as inherently marked by independence and rationality, the lack of which in animate beings is randomly associated with animality. The animality/humanity dualism, championed by anthropocentrism and ableism, not only affects the identity of humans with special needs by grouping them as Others but also disregards the agency of animals/nonhumans and nature by denying human dependency on and similarities with more-than-human entities. This research in its exploration of the connection between disability and ambiguous identity will focus upon the dynamics of the animality/humanity dualism in the context of an industrial disaster and ensuing disability as represented in Indra Sinha’s Animal’s People (2007). By understanding animality/humanity binary through the lens of local/global spatial distinction, the article scrutinises the way the animal/human ambiguous sense of place of the protagonist is mediated by his spatial relations. Building on both critical disability scholarship on animalisation of disabled humans and bioregional exploration of local/global spatial boundaries, the research, therefore, contends that the impact of environmental disasters on certain human groups creates a local (deformed humans as animals)/global (elite humans) spatial binary. The resolvability of such binaries, as the research further argues, is coterminous with developing a local bioregion, which is both connected to and dissociated from global/international places and is built upon humans–nonhumans/animals/nature interrelations that allow an agentic and inclusive human–nonhuman sense of belonging in the region.
Published: 20 June 2022
Journal of Advanced Research in Social Sciences, Volume 5, pp 36-41; https://doi.org/10.33422/jarss.v5i2.862
Abstract:
Race has been a prominent discourse in the contemporary world and in academic disciplines over the last few decades, as it bears on a variety of moral problems. In response to this discussion, race raises a couple of philosophical aspects concerning concepts and categories. The concept of race forms a debate about reality, whilst racial taxonomy gives a physical system of division such as black, white, Asian, Native American, and so forth. Correspondingly, there have been remarkably problematic issues with regard to biological realism, antirealism or eliminativism, alongside social constructivism. In addition, the term race predominantly embodies a pair of notions: it is a biological position, demarcated by observable physical characteristics in terms of certain ancestry and geographical territory, and it is a historical moral perspective, construed by ancient societies. Therefore, by employing a qualitative mode of enquiry, in this research I attempt to defend the thesis that race is not real and could be an upshot of social constructivism. I then look to illuminate a few substantial findings: the central claims of antirealism or eliminativism, and a critique of social constructivism along with a brief analysis of political and cultural constructionism. Notwithstanding these limited outcomes, this research suggests that further studies need to be carried out in order to explore the unreal nature of race.
Published: 20 June 2022
Journal of Advanced Research in Social Sciences, Volume 5, pp 18-35; https://doi.org/10.33422/jarss.v5i2.783
Abstract:
The article aims at analyzing the modality of contemporary political discourse, which has recently acquired the specific characteristics of highly emotional utterances based on deliberate or unintended violation of the principles of political etiquette. The article's generalized theorizing is illustrated by a case study of invectives in political discourse. This analysis aims at distinguishing "agonal" signs (the deliberate use of invectives in speech) from pragmatic borrowings (the inadvertent use of invectives) in their functioning, their pragma-semantic characteristics and discursive markers, which helps in the identification of both types of political discourse linguistic items. This research represents an integrative approach combining Critical Discourse Analysis, the Political Discourse Semiotics Theory, the Role Theory, the Communication Theory, and others, in order to discover the actual reasons for and consequences of these changes in society in general, and in political discourse in particular.
Published: 20 June 2022
Journal of Advanced Research in Social Sciences, Volume 5, pp 36-46; https://doi.org/10.33422/jarss.v5i2.791
Abstract:
In this paper, unit heterogeneity and the degree of system differentiation are taken as the independent variables that explain the differential characteristics of the international structure, which lead to differentiated modes of interaction between the hegemon and other rising powers. The paper further argues that globalization and nuclear deterrence drive dynamic changes in system differentiation, and that the heterogeneity between a rising power and the hegemonic power in geographical objectives, strategic culture, ideology, and polity are the conditions the hegemon must refer to when assessing the nature of a rising power and interacting with it. However, the logic of power distribution is implied in the degree of system differentiation, and the author finds that, in the process of globalization promoted by the hegemon, if the relative power of rising powers becomes unconstrained, the hegemon will instead slow down globalization and suppress rising powers. The degree of urgency relates not only to power distribution but also to unit heterogeneity. The paper therefore distinguishes four patterns of great-power competition: duopoly competition in orderly anarchy, alliance management in rigid hierarchy, dual-track embedded competition in loose hierarchy, and quasi-perfect competition in chaotic anarchy. In the end, the article verifies these common modes of great-power interaction, as reflected in the competition between the U.S. and the USSR, the differing interests between the hegemon and its allies inside the hegemonic alliance, and U.S.-China competition.
Published: 20 April 2022
Journal of Advanced Research in Social Sciences, Volume 5, pp 18-34; https://doi.org/10.33422/jarss.v5i1.583
Abstract:
Sexuality is a developmental milestone in one's life cycle, and each generation has its own struggles with it. It becomes more complex when the biological forces that accompany it initiate the sexual maturation process. The youth are very prone to risky sexual behaviour at this stage due to their perceptions of personal invulnerability, which leaves many exposed to HIV/AIDS infection, early pregnancies and abortion. The unmet need for contraceptive use in sub-Saharan Africa has left the youth exposed to the aforementioned risks, making this a matter of great public health concern. Through a qualitative approach, this article examines the social meanings that the youth bestow on two contraceptives (the condom and the E-pill) and assesses how these meanings influence their sexual behaviour. The study concludes that policy makers need to understand youth perceptions of the various contraceptive methods if effective campaigns on reproductive health are to be realised. | https://www.scilit.net/journal/4205046
In early 2017 I wanted to expand my skills and knowledge and learn more about other areas of the industry, so I completed an advanced course at the Cassie Lomas Makeup Academy in Manchester, specialising in hair and makeup for photo shoots. I loved it just as much and was inspired to become part of the fashion industry.
Since then, I have worked with amazing photographers, stylists, models, other makeup artists and clients.
I fulfil every brief I'm presented with thorough research. I'm very professional and reliable and do more than what is expected on every job.
I love all aspects of my work and love to make people feel and look special.
I am constantly researching, as well as ensuring my makeup kit and tools are up to date, so I can pursue new looks and makeup trends.
I take great pride in my work and portfolio, and my passion for makeup continues to grow! | https://www.kathrynrooneymakeup.co.uk/about-kathryn-makeup/
Dear Editor,
Recently, a contributed opinion column on March 5, 2021, in Food Safety News made a case concerning food safety issues in the plant arising from FDA inspections or FSMA out-of-compliance findings. It listed a Top Ten list of typical challenges based on the experience of the writer, an FDA and legal expert of 42 years. The writer, Joseph Levitt, said that from food companies facing food safety and compliance challenges, one repeated phrase came through: "I wish I'd acted sooner." He framed "acted sooner" as calling lawyers to advise how the manufacturer should defend themselves. The law is transparent concerning compliance. The FDA findings on the Top Ten list become relevant to reaching a lawyer only after an incident has occurred, and a lawyer, at that point, comes after the damage is already done. "Sooner rather than later" means following the legal requirements under the Food Safety Modernization Act (FSMA) to implement "preventive controls."
In this discourse, we will show how FSMA, if strategically implemented, can truly be called "sooner rather than later." Operationalizing the law should be done electronically as a system, with error-proofing and management by exception for executing the controls, leaving the lawyering as a last resort. The need is to have an implemented system.
Here is the Top Ten list and how each of the issues can be addressed in a thought-out preventive electronic system with controls, preventing the problems from occurring in the first place rather than after the fact, which then really does require a lawyer.
- Notification of Outbreak: You have been contacted by the FDA and/or CDC that your company’s product has been associated with an outbreak of foodborne illness. You need help right away, to help you determine if the product needs to be recalled and if your plant needs to be shut down, and if so, what will be needed to restart.
117.139 Recall Plan
You should have a written recall plan, which includes what will be needed to restart. According to the law, there is overlapping preventive control verification: the attendant, the internal PCQI, and the external PCQI. Three instances of review have to be bypassed before a problem reaches the customer and necessitates a recall. "Strategic" means auto alerts to management when binary or monitoring values are not met or not entered by the attendant at the point of application. Management immediately knows that issues are occurring and can nip them in the bud.
Restart will be triggered by an auto corrective action, which sends alerts when the corrective action is closed.
- Bad results from FDA swab-a-thon: You have been contacted by FDA and told that they took environmental swabs in your facility and found one or more positive findings of a food pathogen, such as Salmonella or Listeria. If not handled properly, this could be the beginning of bad things to come. That is because FDA will do DNA fingerprinting, called Whole Genome Sequencing (WGS), of your sample, keep it on file, and if they come back a year later and find the same thing, FDA could make you recall all product made in the intervening time under the “resident strain” theory.
117.135/117.150 Preventive Control / Corrective Action
The problem is not a positive environmental swab reading, but what is done about it. Therefore, you want a system that confirms in the first place that the prescribed test is performed, that immediately triggers a non-conformance and a corrective action when a result is positive, and that alerts when the corrective action is completed within the statutory time of seven days.
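The trigger-and-deadline logic just described can be sketched in a few lines of Python. This is a hypothetical illustration, not any real FSMA software product: the names `CorrectiveAction` and `check_swab` and the alert list are invented for the example, while the seven-day close-out window is the statutory figure discussed in this letter.

```python
from datetime import datetime, timedelta

CORRECTIVE_ACTION_WINDOW_DAYS = 7  # statutory close-out window cited in the text

class CorrectiveAction:
    """Tracks one corrective action from opening to close-out."""
    def __init__(self, finding, opened):
        self.finding = finding
        self.opened = opened
        self.closed = None

    @property
    def due(self):
        return self.opened + timedelta(days=CORRECTIVE_ACTION_WINDOW_DAYS)

    def is_overdue(self, now):
        return self.closed is None and now > self.due

def check_swab(result_positive, opened, alerts):
    """A positive swab triggers a corrective action and an immediate alert."""
    if not result_positive:
        return None
    ca = CorrectiveAction("positive environmental swab", opened)
    alerts.append(f"Corrective action opened; close by {ca.due:%Y-%m-%d}")
    return ca
```

A scheduler that calls `is_overdue` daily and escalates to management when it returns `True` gives the "management by exception" behaviour the letter argues for.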
- FDA sends you a warning letter: This means FDA has already determined that your product is legally adulterated or misbranded — you are officially on the naughty list — and if not remedied promptly, it could lead to any of the regulatory actions mentioned above. Note that FDA generally only sends one warning letter per facility, so just receiving it means you are in legal jeopardy.
117.135 Preventive Control
It also means your preventive controls are ineffective. You need preventive controls with binary and/or monitoring features that trigger a non-conformance when either an observed requirement is not met or a monitoring value is out of range. The system cannot proceed until the triggered non-conformances are treated and verified electronically by the internal PCQI (Preventive Controls Qualified Individual) and the external PCQI.
- FDA invites you to a Regulatory Meeting: This is an in-person version of the Warning Letter and carries all of the same cautions and risks.
117.301 Records
Records from the preventive control activity should be made available at the regulatory meeting, preferably electronically, time-stamped with the people conducting the activities. Provide evidence that employees are trained to perform the activities at specific points in the process. If an employee chooses not to treat a non-conformance with an auto corrective action, management is also electronically alerted.
- You receive a second 483 Inspectional Observations report in the same facility: This is a red flag for the FDA. It means they feel they cannot trust you to fix your problems on your own. An escalation is almost certain to follow if you do not immediately change course and nip this in the bud. How you respond to that second 483 will be very important, and an experienced food regulatory lawyer can help you put your best foot forward.
117.150 Corrective Action.
An escalation can be avoided because, immediately when a value is out of spec or a binary requirement is not met, all the elements of the corrective action become available, and, depending on what it is, validation (117.160) and reanalysis (117.170), if required, are included in the corrective action electronic format to enable improvements and a change of course.
- You receive your first 483 for a facility, but it is long or scary, or the inspection itself was verbally contentious. FDA can escalate its activities even after a single bad inspection if the agency feels it went badly enough. At a minimum, you need a second opinion from an experienced food regulatory lawyer.
117.135 Preventive Controls
Before the 483, a rule of thumb is to identify a master list of all assets and surfaces and determine whether a preventive control, including monitoring, covers each one. A system is needed to compare all assets/surfaces against the master list, determine whether the preventive control has been completed for each asset/surface per the prescribed frequency, and send an alert when it has not. A bad inspection triggers a corrective action; however, the preventive controls should minimize the occurrence of the issues that cause a bad inspection.
- Your finished product testing program shows a product positive for a food pathogen – usually Salmonella or Listeria. It is highly unusual to get even a single product positive, so this is an incredibly important warning signal. If an outbreak is a 5-alarm fire, a finished product positive is still a 3-alarm fire. You need to act quickly or the house could burn down. In addition to a food regulatory lawyer, you will also probably need an external scientific consultant to help you find the root cause and take necessary remedial action.
117.135/117.150 Preventive Control / Corrective Action
Once input into the system, your test results should trigger a corrective action for all product testing if a product is positive. There should be an alert on the seventh day to complete and close the corrective action. If you are within the seven days, it cannot be registered as an FDA finding. Nevertheless, you need irrefutable records to demonstrate the timeframe, because an FDA audit in the future, outside the statutory time frame, can result in a finding. Electronic real-time records will support the seven days.
- You have a series of positive environmental findings in your facility for Salmonella or Listeria. This is an example of: Where there’s smoke, fire may follow. Remember that FDA will have access to those testing records, so these findings will become immediately visible to an FDA inspector. You need an objective viewpoint to assess whether or not your corrective actions will be seen by the FDA as sufficient. Always best to act before the FDA inspector is in your plant.
117.135/117.150 Preventive Control / Corrective Action
The out-of-range test results should be immediately available to the attendant and should trigger a corrective action that cannot be bypassed and must be acted upon within seven days. The fields in the corrective action should direct the completion of the corrective action.
A subsequent alert is sent if it is late more than seven days and when the corrective action is closed.
- You have findings that make you question whether you need to file a Reportable Food Registry (RFR) report with FDA. This is sometimes a tricky decision. If you decide not to file, you should have clear written documentation of your rationale and an objective second opinion that it is legally defensible.
117.301 Recordkeeping
FSMA never states that there cannot be a defect or non-conformance; what matters is the capability to manage it and the documentation to prove it. If your system captures your documentation as stated, your submittal of a file to the Reportable Food Registry will show your methodology to be correct. With the correct method, the worst that could happen is that the FDA returns it, and you can be confident your methodology is intact. You are using the RFR to demonstrate competence, which means less intrusion in the future.
- You have findings that make you question if you need to recall a product, or if you should continue to ship a product. Often this will be related to the RFR decision-making above. You may have had an adverse incident at your plant, an unexpected spike in environmental test findings, or even a foreign material or quality issue. The same principle applies — make the right decision and document it well and get experienced advice in doing so.
117.139 Recall
Properly implemented, FSMA presents layered inspections or audits: 1) binary or monitoring parameters at the point of application that trigger a non-conformance; 2) the internal PCQI verifies; 3) the external receiving facility verifies. These three layers of verification should avoid recalls. If a recall is still needed, it can be done electronically.
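The three verification layers listed above can be sketched as a simple release gate. The function and layer keys below are illustrative names, not part of any real compliance system; the layer names follow the text (the attendant at the point of application, the internal PCQI, and the external receiving-facility PCQI).

```python
def release_for_shipment(checks):
    """Return (ok, action) given the pass/fail status of each verification layer."""
    layers = ("attendant", "internal_pcqi", "external_pcqi")
    failed = [layer for layer in layers if not checks.get(layer, False)]
    if failed:
        return False, "hold shipment; open non-conformance at: " + ", ".join(failed)
    return True, "release"
```

A shipment releases only when every layer has verified, so a problem reaches the customer only if all three checks were bypassed or failed.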
We hope to have demonstrated the use of technology to operationalize FSMA with layered preventive controls, alerts to top management, and auto corrective actions that minimize after-the-fact lawyering. The list of Top Ten issues can practically be eliminated "sooner rather than later."
— Jeffrey Lewis, Fellow Chartered Quality Institute, PCQI
Director of Safety In Your Hand Inc.
fsmafoodsafety.com
Editor's note: This letter is in response to a contributed column by Joseph A. Levitt, a former director of FDA's Center for Food Safety and Applied Nutrition. He is currently senior counsel in the Washington D.C. office of Hogan Lovells US LLP; the law firm handles FDA and USDA food safety and compliance matters. | https://www.foodsafetynews.com/2021/03/letter-to-the-editor-comply-with-fsma-sooner-rather-than-later/
Managing access rights and roles using zones
Zones enable you to grant specific rights to users in specific roles on specific computers. By assigning roles, you can control the scope of resources any particular group of users can access and what those users can do. For example, all of the computers in the finance department could be grouped into a single zone called “finance” and the members of that zone could be restricted to finance employees and senior managers, each with specific rights, such as permission to log on locally, access a database, update certain files, or generate reports.
Rights represent specific operations users are allowed to perform. A role is a collection of rights that can be defined in a parent or child zone and inherited. For example, a role defined in a parent zone can be used in a child zone, in a computer role, or at the computer level.
System and predefined rights
There are specialized login rights, called system rights. The system rights for Windows computers are:
- Console login is allowed: Specifies that users are allowed to log on locally using their Active Directory account credentials.
- Remote login is allowed: Specifies that users are allowed to log on remotely using their Active Directory account credentials.
- PowerShell remote access is allowed: Specifies that users are allowed to log on remotely to PowerShell.
There are additional predefined rights that allow access to specific applications. For example, there are predefined rights that allow users to run Performance Monitor or Server Manager without having an administrator’s password. You grant users permission to access computers by assigning them to a role that includes at least one login right. You can then give them access to specific applications or privileges using additional predefined or custom access rights.
Granting permission to log on
By default, zones always provide the Windows Login role to allow users to log on locally or remotely to computers in the zone. Users must have at least one role assignment that grants console or remote login access or they will not be allowed to access any of the computers in the zone.
Note: The Windows Login role grants users the permission to log on whether they are authenticated by specifying a user name and password or by using a smart card and personal identification number (PIN).
Because the Windows Login role only allows users to log on, it is often assigned to users in a parent zone and inherited in child zones. However, the Window Login role does not override any native Windows security policies. For example, most domain users are not allowed to log on to domain controllers. Assigning users to the Windows Login role does not grant them permission to log on to the domain controllers. Similarly, if users are required to be members of a specific Windows security group, such as Server Operators or Remote Desktop Users, to log on to specific computers, the native Windows security policies take precedence.
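The inheritance behaviour described here can be pictured with a small model. This is purely an illustrative sketch of the concept, not the Centrify API; the class, attribute, and right names are invented for the example.

```python
# Illustrative model of zone-based role inheritance (not the Centrify API).
LOGIN_RIGHTS = {"console_login", "remote_login", "powershell_remote"}

class Zone:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.assignments = {}  # user -> set of rights granted in this zone

    def effective_rights(self, user):
        """Union of rights assigned in this zone and in every ancestor zone."""
        rights, zone = set(), self
        while zone is not None:  # walk up: a child zone inherits parent assignments
            rights |= zone.assignments.get(user, set())
            zone = zone.parent
        return rights

    def can_log_on(self, user):
        """A user needs at least one login right to access computers in the zone."""
        return bool(self.effective_rights(user) & LOGIN_RIGHTS)
```

Assigning a login right in a parent zone then behaves as the text describes: it flows down to every child zone, while a user with only application rights and no login right is still denied access. Note that, as stated above, native Windows security policies would still take precedence in practice.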
There are additional predefined roles that grant specific rights, such as the Rescue - always permit login role, which grants users the "rescue" right to log on if auditing is required but the audit and monitoring service is not available. In general, at least one user should be assigned this role to ensure an administrator can log on if the audit and monitoring service fails or a computer becomes unstable. | https://docs.centrify.com/Content/auth-admin-win/ZonesManageAccessRightsRolesUsing.htm
VELLORE: The Vellore District Bus Owners’ Association (Association) has appealed to the Centre to annul the present National Highways toll policy and adopt the 1997 toll policy introduced by the NDA government.
The president of the Association, D Vijaya Govindarajan, said the indiscriminate hike in tolls by toll plazas in Vellore district has discouraged transportation by road and road use by commuters, in addition to raising the overall price index. Despite protests by transport operators, toll plazas continue to collect excess fees. He said that when the NDA government framed the toll policy in 1997, the base rates for the various categories of vehicles were in the range of 40 paise to Rs 3, with the fee revised each year on the basis of the wholesale price index (WPI).
The UPA-I government implemented a flat 3 per cent increase in fees every year on top of 40 per cent of the WPI-based toll fee. The NDA policy allowed revision of toll fees once in 5 years. As per the NDA policy, vehicles were allowed to cross the toll plaza an unlimited number of times in a 24-hour period if they paid one-and-a-half times the toll fee. A monthly pass cost 30 times the cost of a one-way journey, and was earlier restricted to 50 trips per month. Almost all eligible VIPs, such as MPs, MLAs and government officials, were exempted from paying toll.
The Association welcomed the announcement by Union Minister of Road Transport and Highways Nitin Gadkari that tolls should only be collected after 100 per cent completion of a road project, and that tolls should not be collected after the cost of the project has been recovered. | https://www.newindianexpress.com/states/tamil-nadu/2014/oct/16/Plea-To-Bring-Back-1997-Toll-Policy-672282.html
Career Prospects specialises in the provision of human resource management and development services. Its major areas of specialisation are as follows:
Career Prospects is an employment agency that helps organisations recruit suitable individuals for different positions, for both permanent and temporary requirements. Its database has been steadily growing and now includes professionals in a wide range of disciplines. Career Prospects offers full recruitment services as follows:
Career Prospects' goal is to provide consulting expertise in helping to create the right talent and the organisational structures and processes that will support the greater organisation. Its quality services are delivered with a flexible approach that exceeds customer expectations.
Career Prospects understands the value of ongoing development for every individual and organisation. Training your staff and keeping your skills and knowledge up to date means that you remain competitive and current in a fast-paced and evolving world.
The objectives of this program are to enhance the professional skills of the participants in order to assist them raise their efficiency and effectiveness levels and contribute better to the realization of their organisation's objectives.
Services provided: staff training in customer care, team building, total quality management, finance for non-finance managers, HR management for non-HR managers, supervisory skills, leadership skills, performance management and development, stress management, organisation development and change management. | https://bizbwana.com/orgs/career-prospects
Without a doubt, one of the greatest achievements of medicine in today’s world is having the knowledge and technology to provide safe and effective anesthesia for a patient undergoing surgery. Anesthesia must be administered with attention, precision, and care to avoid harm to the patient. Due to how complex anesthesia can be, there are medical personnel called anesthesiologists who have the responsibility to monitor the patient’s functioning during the surgical procedure.
Every surgery comes along with risks. Patients must be informed about these risks before agreeing to the surgery. And while a poor result after the procedure doesn’t automatically mean medical malpractice occurred, it is certainly possible that the anesthesiologist or doctor made a mistake that led to the negative outcome.
Common Errors For Anesthesia
A doctor or anesthesiologist who committed medical malpractice during surgery, ultimately leading to a patient getting hurt, may be held liable for losses and damages. Of all the ways that anesthesia malpractice may occur, it isn't uncommon for the patient to have been improperly evaluated from the very start. Other mistakes happen while the patient is undergoing the operation. Examples of common anesthesia errors include:
- Failing to administer oxygen properly during surgery
- Failing to react promptly to oxygenation issues
- Failing to monitor the patient
- Administering too much anesthesia
- Failing to inform the patient of pre-surgery care, such as not eating or drinking a certain number of hours before the procedure
- Administering anesthesia to a patient that is allergic
- Using defective medical tools during sedation
- Poor product labeling
Anesthesia Malpractice Injuries
When a patient is not properly taken care of by the doctor and/or anesthesiologist during a surgery, the patient can face very severe and long-term injuries. If you or someone you love awoke from a surgery harmed and you suspect an error was made by the surgical team, then it's time to meet with an attorney for guidance on what to do next. Anesthesia malpractice can lead to the following injuries:
- Stroke
- Birth Defects
- Nerve Damage
- Heart Attack
- Brain Injury
- Paralysis
- Coma
- Spinal Cord Injury
- Fatality
Emotional/Mental Injuries
A real possibility that is terrifying to imagine would be if the anesthesiologist did not use enough anesthesia to keep the patient sufficiently sedated. The patient may begin to wake up, gain awareness of what's going on around them, or feel pain during the surgery. As horrifying as this is to think about, victims must understand their rights to seek compensation for what they endured. Anesthesia awareness can result in life-long emotional and mental damage, including:
- Sleep Disorders (Insomnia, sleeping too much)
- Flashbacks
- Post Traumatic Stress Disorder (PTSD)
- Anxiety and Panic Disorders
- Extreme Fear (particularly in medical settings)
A mistake during surgery can have a profoundly negative impact on a patient who endured physical and/or emotional injuries due to the error. If this sounds like something you or a loved one went through, contacting a medical malpractice attorney Elizabeth, NJ at a law firm like Wade Suthhard, P.C., could help you better understand your legal options. | https://amlegal.org/is-an-anesthesia-injury-categorized-under-medical-malpractice/
Published by Steven McKinney. Modified over 4 years ago.
Experience with medium-size SRS for muon tomography. Michael Staib, Florida Institute of Technology.
Muon Tomography at Florida Tech: Eight 30 cm x 30 cm triple-GEM detectors enclosing an active area of ~1 ft³. Detector design similar to COMPASS GEMs: 3/2/2/2 mm gap configuration. Cartesian XY readout strips with 400 µm pitch. 1,536 readout channels per detector. Muon Tomography Concept. Main idea: multiple scattering is proportional to Z and the density of the material, allowing detection of nuclear contraband by measuring the scattering of cosmic ray muons.
SRS for Muon Tomography: Current station configuration with 8 detectors: 96 APV hybrids (48 M/S pairs), 6 ADC/FEC cards, 2 Gigabit network switches. Six 25 ns frames of data recorded for each APV per trigger yields an event size of ~200 kB @ 30 Hz. DATE for data acquisition. AMORE for data decoding, event monitoring and data analysis.
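The quoted event size can be roughly cross-checked. Assuming 128 channels per APV25 chip and 2-byte raw ADC samples (both assumptions; the slide states neither), the raw payload comes out just under the ~200 kB figure, with headers and framing plausibly accounting for the rest:

```python
# Back-of-envelope event size and data rate for the 8-detector station.
APV_HYBRIDS = 96          # from the slide (48 master/slave pairs)
CHANNELS_PER_APV = 128    # assumption: standard APV25 channel count
FRAMES_PER_TRIGGER = 6    # six 25 ns frames per trigger (from the slide)
BYTES_PER_SAMPLE = 2      # assumption: raw ADC sample width
TRIGGER_RATE_HZ = 30

payload_bytes = APV_HYBRIDS * CHANNELS_PER_APV * FRAMES_PER_TRIGGER * BYTES_PER_SAMPLE
rate_mb_per_s = payload_bytes * TRIGGER_RATE_HZ / 1e6
print(payload_bytes / 1e3, rate_mb_per_s)  # ~147 kB of raw payload, ~4.4 MB/s sustained
```

At several MB/s of sustained raw data, it is clear why the hardware-level zero suppression mentioned in the summary slide matters as systems grow.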
Muon Tomography at Florida Tech: Detector Characterization; Point of Closest Approach (POCA) Reconstruction. Note: preliminary detector alignment. [Image labels: Ta, Pb, Fe, U, W, Pb, Fe, Sn]
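The POCA reconstruction named above finds, for each muon, the point of closest approach between the incoming and outgoing straight-line tracks, plus the scattering angle between them. A minimal sketch of the geometry follows; this is illustrative only, not the group's actual reconstruction code.

```python
import numpy as np

def poca(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two 3D lines, plus the angle.

    Each track is a (point, direction) pair; returns (None, angle) for
    (near-)parallel tracks, where no unique closest point exists.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d1 = np.asarray(d1, float); d1 /= np.linalg.norm(d1)
    d2 = np.asarray(d2, float); d2 /= np.linalg.norm(d2)
    angle = np.arccos(np.clip(d1 @ d2, -1.0, 1.0))  # scattering angle (rad)
    w0 = p1 - p2
    b = d1 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = 1.0 - b * b          # dot products a = c = 1 for unit directions
    if denom < 1e-12:
        return None, angle       # parallel tracks: no unique POCA
    s = (b * e - d) / denom      # parameter along track 1
    t = (e - b * d) / denom      # parameter along track 2
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2)), angle
```

Binning POCA midpoints weighted by scattering angle builds the tomographic image: high-Z materials such as the U and W targets in the slide produce clusters of large-angle points.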
Some Minor Problems. Networking: Low-cost network switches are unable to support many FECs. 2 x Netgear JGS516 16-port switches are currently used, and each can support a maximum of 4 FECs. The issue is not well understood. HDMI Cables: A problem with HDMI channel mapping on the APV hybrid was identified. All HDMI cables are not the same! Inexpensive cables can be used, but it is important to test whether they work with the system.
Missing Triggers? Several data sets show some inconsistencies in the quality of the data. These data sets show anomalous station acceptance and improper tracking information. The corruption of data can start randomly in the middle of a data set. This could be caused by one of the FECs missing a trigger. Several checks have been implemented in the firmware, but they are not perfect. Desynchronization of FEC clocks makes it difficult to detect this problem reliably. For now we limit each run to 100k events to minimize the effect of bad data.
Summary and Outlook: We have successfully used the SRS to record data from eight 30 cm x 30 cm GEMs and processed this data to produce tomographic images of several high-Z scenarios. SRS operation is stable for a ~12k-channel system, with a few caveats: missing triggers (or packets) occasionally corrupt data, and the networking issue needs to be better understood. Zero suppression at the hardware level becomes very important for medium/large-scale systems; we are very interested in testing zero suppression on the FPGA. A clock/trigger fan-out unit may help with the problem of corrupted data.
| https://slideplayer.com/slide/7479463/
In most organizations, there is a three-tier checking process for every stress system to maintain the quality of the analyzed stress systems. Normally a stress system is analyzed by one engineer (junior or senior), checked by another (who must be experienced enough), and finally approved by the lead stress engineer. Even though the main points to consider are well known to every piping stress engineer, some important points can still be missed at specific moments during stress analysis or checking. So a piping stress analysis checklist can be prepared and referred to during the process for proper quality control.
The following article provides an insight into the main points which a stress engineer must check while analyzing a system. I request you to inform me of any additional points I may have missed while writing this article by replying in the comments section.
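The three-tier process described above (performed, checked, approved) can itself be tracked per stress system or checklist item. A minimal sketch of such a tracker follows; this is hypothetical tooling, with names invented for illustration.

```python
TIERS = ("performed", "checked", "approved")  # the three-tier review process

def review_status(items):
    """items: dict mapping checklist item number -> set of completed tiers.

    Returns the item numbers not yet signed off at every tier, i.e. the
    items a lead stress engineer should chase before final approval.
    """
    return [n for n, tiers in sorted(items.items()) if not set(TIERS) <= set(tiers)]
```

For example, `review_status({34: {"performed", "checked", "approved"}, 41: {"performed"}})` flags item 41 only.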
Important points to consider while checking any stress system (Piping Stress Analysis Checklist Points):
1. Whether the input for pipe material, pipe diameter, pipe wall thickness, pipe temperatures (operating, design and upset), pressures (design and hydro test), insulation thickness, corrosion allowance, fluid density, insulation density is correct?
2. Whether the input for the above design parameters for equipment and nozzles are correct?
3. Whether SIFs for tees, bends/elbows, crosses, and trunnions are taken correctly?
4. Whether the flanged elbow is considered where required?
5. Whether the actual weights of control valves/nonstandard rigid items/valve actuators are appropriately considered?
6. Whether equipment has been modeled with correct dimensions from general arrangement drawing?
7. Whether trunnion modeling is done following in-house work instructions?
8. Whether settlements/displacements have been considered where required? Normally settlement is used for storage tanks and thermal displacements are used for compressors, turbines, and packaged items?
9. Whether proper parameters have been used for seismic and wind analysis?
10. Whether friction has been included when significant?
11. Whether the expansion stress range has been checked in between maximum and minimum temperatures for which the piping system will be subjected?
12. Whether the effect of friction on sliding support loads been considered?
13. Whether the use of low friction pads been properly marked if used?
14. Whether the analysis is performed for the system with and without friction to check the effect of friction (to determine the worst case) as friction is not something that can be relied on? The harmful effects of friction need to be considered but not the benefits.
15. Whether the Caesar plot and isometric plot are matching with the 3D plot?
16. Whether the loads on connected equipment are within the allowable limits?
17. Whether the thermal effects of pipe supports, equipment support been considered?
18. Whether the flange weight includes the weight of bolting? In large size piping bolt weights become significant?
19. Whether all possible load cases (startup, shutdown, regeneration, any special process consideration) are considered in analysis?
20. Whether the proper ambient temperature is used for the location?
21. Whether spring is modeled properly and selected considering all operating temperature cases?
22. Whether adequate documentation in case of gapped restraints (or any special consideration) are mentioned in isometric clearly to assure that supports will be installed in that manner in the construction site?
23. Whether there is a possibility of elastic follow up or strain concentration condition?
24. Whether radial thermal expansion has been considered for line sizes greater than 24 inch NB?
25. Whether hot sustained check has been performed?
26. Whether the pressure thrust has been considered while using expansion joints?
27. Whether flanged elbows have been considered?
28. Whether sustained deflection and thermal displacements are within the limit specified by the project document?
29. Whether the SIF limitation has been considered for large D/t piping?
30. Whether pressure stiffening of bends has been considered in analysis?
31. Whether flange leakage checking has been performed as per the specification?
32. Whether the change in pipe length due to internal pressure has been considered?
33. Whether all stresses are within code limits?
34. Whether the variability of springs is within 10% near rotary/critical equipment and 25% for others?
35. Whether thermal displacements more than 50 mm are marked on isometric?
36. Whether support loads are checked and discussed with layout/design?
37. Whether feasibility of all supports has been checked?
38. Whether routing changes and special support requirements have been clearly marked on the stress isometric and communicated to the layout/design group?
39. Whether spiders are modeled properly at appropriate intervals for jacketed pipes?
40. Whether the weight of hot tapping machines and related equipment are considered in specific situations?
41. Whether alignment checking (WNC file) has been performed for all rotary equipment as per API RP 686?
42. Whether PSV forces are considered for open discharge PSV systems?
43. Whether Hot-Cold and Operating-Standby philosophy has been used when required?
44. Whether restrained and unrestrained piping is defined correctly for pipeline systems (ASME B31.4/B31.8)?
45. If a three-way support is provided near a tie-in point, whether the civil load information for that support considers the impact of the other side as well?
46. For interfacing with other EPC contractors/package equipment vendors/GRP vendors, data at tie-in points are transferred and a backup is kept for future reference.
47. For power plants, the steam-blowing activity is checked with the client and supports are designed for that activity.
48. The reference to the FIV/AIV study is clearly mentioned in the stress report.
49. Whether intermediate nodes are included for dynamic analysis.
50. Uplift forces for hold-down supports are checked and highlighted.
51. Proper supporting for two-phase/slug/surge/vibration forces is added. | https://whatispiping.com/stress-check-list/
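Several of the checks above are simple arithmetic that can be automated. As a hedged sketch, item 34's spring-hanger variability check might look like the following, assuming the usual definition of variability as the load change between cold and hot conditions divided by the hot load (the 10%/25% limits are taken from the checklist itself; the function names are illustrative):

```python
# Sketch of the spring-hanger variability check (checklist item 34).
# Assumes variability = |cold (installed) load - hot (operating) load| / hot load.
# Limits per the checklist: 10% near rotary/critical equipment, 25% elsewhere.

def spring_variability(hot_load: float, cold_load: float) -> float:
    """Return spring variability as a fraction of the hot (operating) load."""
    return abs(cold_load - hot_load) / hot_load

def variability_ok(hot_load: float, cold_load: float, near_rotary: bool) -> bool:
    """True if the variability is within the applicable limit."""
    limit = 0.10 if near_rotary else 0.25
    return spring_variability(hot_load, cold_load) <= limit

# e.g. a 1000 N hot load with a 1080 N cold load gives 8% variability,
# acceptable even near rotary equipment.
```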
After Placement of Dental Implants
Santa Maria, CA
Do not disturb the wound. Avoid rinsing, spitting, or touching the wound on the day of surgery. There may be a metal healing abutment protruding through the gingival (gum) tissue.
Bleeding
Some bleeding or redness in the saliva is normal for 24 hours. Excessive bleeding (your mouth fills up rapidly with blood) can be controlled by biting on a gauze pad placed directly on the bleeding wound for 30 minutes. If bleeding continues please call (805) 910-1213 for further instructions.
Swelling
Swelling is a normal occurrence after surgery. To minimize swelling, apply an ice bag (or a plastic bag or towel filled with ice) to the cheek in the area of surgery. Apply the ice continuously, as much as possible, for the first 36 hours.
Diet
Drink plenty of fluids. Avoid very hot liquids or food. Soft food and liquids should be eaten on the day of surgery. Return to a normal diet as soon as possible unless otherwise directed. However, avoid chewing hard foods on the implant sites for at least the first month after surgery. Chewing forces during the healing phase can decrease the body’s ability to heal around the implant.
Antibiotics
If prescribed, be sure to take the antibiotics as directed to help prevent infection.
Oral Hygiene
Good oral hygiene is essential to good healing. The night of surgery, use the prescribed Peridex Oral Rinse before bed. The day after surgery, the Peridex should be used twice daily, after breakfast and before bed, until your post-operative appointment. Be sure to rinse for at least 30 seconds then spit it out. Warm salt water rinses (teaspoon of salt in a cup of warm water) should also be used at least 5 times a day, especially after meals. Brush the area after the first 24 hours with a very soft toothbrush. Do not avoid brushing your teeth. Be gentle initially with brushing around the surgical areas.
Activity
Keep physical activities to a minimum immediately following surgery. If you exercise, throbbing or bleeding may occur; if this happens, discontinue the activity. Keep in mind that you are probably not taking in normal nourishment, which may weaken you and further limit your ability to exercise.
Wearing your Prosthesis
Partial dentures, flippers, or full dentures should not be used immediately after surgery and for at least 10 days unless discussed otherwise.
NOTE: Please call our office if you experience any unusual symptoms or excessive bleeding.
Contact Us
Wilson Oral Surgery is located at
2151 S College Dr Ste 104
Santa Maria, CA
93455 | https://www.centralcoastoms.com/santa-maria-ca/after-placement-of-dental-implants/
The recent ascendency of the term “deep state” presents an opportunity to explore some of the elements which may comprise the entities that lay behind the term. Despite the long history of concerned politicians and citizens attempting to warn the public of the dangers and potential harms of a growing “shadow government” influencing international policy, such a discussion has largely existed under the radar of the general public.
One of the earliest and most public warnings came from President Eisenhower during his farewell address in 1961, when he cautioned us to become aware of the growing power and influence of what he termed “the military-industrial complex”. While the entire speech can be seen on YouTube or read on the web, the following quote will give you the gist of his message:
“In the councils of government, we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex. The potential for the disastrous rise of misplaced power exists and will persist.”
Eisenhower believed that our freedoms and democracy were being threatened by the power and influence wielded by the growing surreptitious alliances between the military and corporate/commercial entities. Soon these “special interest groups” would take hold of the behind-the-scenes world of lobbyists, which forever pressure, seduce and bully our politicians into supporting their surreptitious goals and agendas.
Another major player in the shadow world of the deep state is the CIA and related intelligence agencies. In essence the intelligence world is an excessive perversion of the old maxim that beauty is in the eye of the beholder. The benefits and goodness of the intelligence community is totally based on one’s perspective. The bulk of the surreptitious and immoral actions of the intelligence community violates every tenet of an open and free democracy, but is posed as necessary and vital for the long term success of our nation. As long as they are working on our side and defending our freedoms and democracy we excuse their methods and total lack of transparency and moral ethics.
Yet, each and every decade since the inception of the CIA there are major scandals that emerge which question both their usefulness and their benefit to our democracy and freedoms of US citizens. Many of the unsavory tactics including propaganda, assassinations, torture, control of the media, perceptual management, rigging of elections and sabotaging of the actions of populist movements abroad not only open us up to blowback, but are often exposed by investigative journalists as being used domestically on our own citizenry. | http://guidoworld.com/blog/category/economics/ |
University of Auckland Audiology Masters’ student Gaby Surja had a major career change two years ago when she decided she wanted to do something that really made a difference to people’s lives.
The former transport engineer, who spent eight years working on major traffic infrastructure projects in and around Auckland, traded her hard hat for an audiometer and has never looked back.
She says audiology ticked all the right boxes – the nature of the work, the work-life balance it offers, the interaction with people, and the fact that the service could be tailored to fit the needs of clients and their whānau.
In choosing her thesis topic, Gaby sought the advice of her supervisor and Hearing House Clinical Director for audiology and rehabilitation, Dr. Holly Teagle, who encouraged her to look into improving and developing clinical practices that could reflect real-life challenges for clients.
The Hearing House works closely with the University of Auckland Audiology programme, providing clinical experience placements for students and collaborating on research.
Gaby’s chosen topic – Speech Perception in Noise Assessment – examines an area that is not currently a standard part of audiology testing in New Zealand, but Gaby is hoping her research might change that.
Most testing is done without background noise, which Gaby says isn’t a good representation of what we come across in real life – think noisy streets, busy cafes, and loud parties.
“In general, speech-in-noise tests are more realistic for estimating an individual’s real-life listening abilities compared to speech tests in quiet,” she says.
“However, the barriers need to be understood and addressed before speech-in-noise testing is accepted as an industry standard.”
Gaby is due to start a full-time role with Dilworth Hearing in Remuera and Takapuna in March once she’s submitted her thesis, but she views her time spent at The Hearing House as her most formative period. At first, she found academic research challenging after her years spent on busy design and construction projects but found her happy place when she discovered clinical application and dealing with people.
“Audiology is a well-balanced mix of science, technology, and people,” she says. “The highlight of my research has been the human interaction.”
Gaby says that poor speech perception in noise is one of the most common complaints of adults with sensorineural hearing loss – or loss caused by damage to the inner ear. She says that an understanding of a person’s speech perception in noise is crucial for deciding on an appropriate rehabilitation strategy, including the fitting of hearing devices such as cochlear implants and hearing aids.
In addition, she says the performance of hearing devices in noise is largely tested in controlled laboratory conditions, and not validated in realistic acoustic environments.
This can lead to persistent complaints of hearing difficulty in noisy situations, despite the use of a hearing device.
A hearing loss assessment that reflects everyday speech, modified to suit Kiwis
The Arizona Biomedical (AzBio) sentence test is a speech-in-noise test that was originally developed in the US as speech recognition assessment for cochlear implant recipients. To capture the nuances of the New Zealand accent, a Kiwi version of the test was recently recorded.
The construction and recording of the NZ AzBio sentences are intended to reflect that of conversational speech in everyday listening environments, as opposed to clear speech.
Consequently, the NZ AzBio sentences are more difficult and lead to comparatively lower scores than other tests. These scores may more accurately reflect a person’s ability to hear and understand in the real world.
Using the NZ AzBio sentence test, Gaby conducted a speech-in-noise assessment with a group of nearly 30 adults – half with normal hearing and half with hearing loss. Gaby’s results will be analysed early this year in order to assess the validity and reliability of the Kiwi version of the test -- an essential step in introducing the test to the range of audiology measurements available in New Zealand.
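As an illustration of the kind of group comparison such an analysis might start from (the scores below are invented for demonstration; this is not the study's data or its actual statistical method):

```python
# Illustrative only: summarizing NZ AzBio percent-correct scores for two
# listener groups. All score values are hypothetical.
from statistics import mean, stdev

normal_hearing = [82, 75, 88, 79, 84]   # hypothetical percent-correct scores
hearing_loss = [41, 55, 38, 47, 50]

def summarize(scores):
    """Basic descriptive statistics for one group's scores."""
    return {"mean": mean(scores), "sd": stdev(scores)}

# The gap between group means is one starting point for judging whether the
# test separates listeners with and without hearing loss.
gap = mean(normal_hearing) - mean(hearing_loss)
```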
Removing the barriers to performing speech-in-noise testing
In addition, as part of her research, Gaby surveyed 95 audiologists to better understand their perception and use of speech-in-noise testing in New Zealand audiology clinics.
Her results showed that a vast majority (98%) of audiologists think that speech-in-noise testing is important for diagnostics, counselling and rehabilitation, and yet only half (50%) use it more than 20% of the time in their current practice.
Gaby says that initial results show that the biggest barriers for using speech-in-noise testing appear to be lack of time, lack of guidance or experience, and the fact that speech-in-noise testing is not currently included in the NZAS Best Practice Guidelines for Speech Audiometry and Hearing Aid Fitting.
She hopes that her findings could add further weight to the move to have speech-in-noise testing adapted as best practice in New Zealand – and help create a better understanding and more realistic picture of how people with hearing loss cope in our noisy world. | https://www.loudshirtday.org.nz/post/student-research-into-speech-in-noise-testing-could-contribute-to-changing-practices-in-audiology |
Summer is right around the corner and there is nothing better than having a nice glass of refreshing lemonade at the pool. This recipe can be made with any fruit of your choice like raspberries, cherries, and watermelon.
For those of you watching your carbs, the sugar in the recipe can be replaced with substitutes such as stevia, sucralose, or more honey.
In this recipe, I used 2 ml of hemp CBD oil (a total of 30 mg of CBD). You can always add or subtract the amount of CBD based on your preference.
Strawberry Lemonade (CBD-Infused)
6 servings
5 minutes
5 minutes
Ingredients
2 cups fresh strawberries
7 cups water divided use
1 cup sugar
2 cups fresh squeezed lemon juice
2 ml of hemp CBD oil (30 mg of CBD)
Directions
- Combine the sugar with 2 cups of water. Microwave for 2 minutes or heat on the stove until very hot. Stir until sugar is dissolved.
- Place the strawberries in a blender with 1 cup of the water. Blend until smooth.
- Combine the strawberry puree, sugar water mixture, lemon juice, CBD, and remaining 4 cups of water in a pitcher.
- Stir thoroughly then chill until ready to serve. | https://counterfeitcook.com/2019/04/strawberry-lemonade-thc-cbd/ |
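For anyone adjusting the dose, the per-serving arithmetic is simple. This small sketch assumes the recipe's stated 30 mg of total CBD and 6 servings; change either number to match your batch:

```python
# Per-serving CBD dose for the lemonade above.
# Assumes the recipe's stated totals: 30 mg CBD across 6 servings.
TOTAL_CBD_MG = 30
SERVINGS = 6

def cbd_per_serving(total_mg: float, servings: int) -> float:
    """Milligrams of CBD in each serving."""
    return total_mg / servings

# 30 mg across 6 servings -> 5 mg per glass
```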
Endosomes are important in membrane trafficking and play a crucial role in regulating synaptic strength and plasticity by controlling recycling of presynaptic vesicles and postsynaptic receptors. Zinc finger protein 179 (Znf179) has been found to peripherally associate with the membrane and localize to endosomes to affect endosomal membrane dynamics. Previous studies, including ours, observed reduced paired-pulse facilitation and long-term potentiation in Znf179 knockout mice, suggesting pre- and postsynaptic contributions of Znf179. Our preliminary result showed that many synaptic proteins such as STXBP1 (syntaxin binding protein 1), drebrin, Rab11fip5 (RAB11 family interacting protein 5), and the kinesin superfamily proteins (KIFs) that have important pre- and postsynaptic functions such as endocytosis, exocytosis, microtubule dynamics, and synaptic transmission were identified by Znf179 IP-mass spectrometry analysis. Using co-immunoprecipitation coupled with Western blotting, we will further confirm the interaction of Znf179 and these possible interacting partners. GST pull-down and in vitro ubiquitination assays will be used to examine direct interactions between Znf179 and the possible interacting proteins and whether Znf179 can mediate their ubiquitination. Using FM dye imaging, we will examine the changes of synaptic vesicle exocytosis in wild type and Znf179 knockout neurons. We will also use biochemical fractionation coupled with Western blotting to detect the neurotransmitter receptors trafficking in wild type and Znf179 knockout mice after LTP induction. The interactions between potential Znf179 interacting proteins and their effectors will also be examined in the presence or absence of Znf179. Further investigation of the effects and underlying mechanisms of Znf179 in synaptic functions will enhance our understanding of the mechanisms of synaptic trafficking and synaptic plasticity.
Status: Finished
Effective start/end date: 8/1/17 → 7/31/18
Keywords
- Brain finger protein
- Znf179
- Rnf112
- synaptic trafficking
- synaptic plasticity | https://tmu.pure.elsevier.com/en/projects/investigation-of-the-novel-role-and-underlying-mechanism-of-znf17
Kidney disease affects more than 10% of the population in Canada, and treatments are cost and resource intensive. Members of the Kidney Clinical Research Unit (KCRU), within which this lab is based, are aggressively pursuing clinical research to identify effective therapies that can be implemented at various stages of the disease. Clinical research is necessary, but must be thoroughly justified. An integrative approach will help to uncover mechanisms of disease, dialysis treatment, and of pharmacological therapies.
I am an applied mathematician. I represent poorly understood but very important cell-to-organ-level processes in a mathematical framework as mechanistic models. I use and analyse these models to understand the inter-related dynamics of the complex organisation of electro-mechanical, fluid-flow, metabolic and other multi-physics processes that dictate life functions and failures thereof. In addition to mathematical physiology, I also work on computational imaging that is designed to pre-inform our clinical-imaging research. My work is targeted towards further improvement of healthcare research and practice, but it also serves developments in maths, computer science, and software engineering due to its nature.
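The kind of mechanistic model described here can be illustrated with a deliberately simple example: a single-pool urea kinetic model of the sort long used to reason about dialysis adequacy. The equation, the Euler scheme, and every parameter value below are illustrative assumptions, not part of the lab's actual PM3 platforms:

```python
# Minimal mechanistic-model sketch, for illustration only:
# single-pool urea kinetics during dialysis,
#   dC/dt = -(K/V) * C
# integrated with forward Euler. K (dialyser clearance, L/h) and
# V (urea distribution volume, L) are assumed example values.

def urea_concentration(c0: float, K: float, V: float, hours: float,
                       steps_per_hour: int = 100) -> float:
    """Urea concentration after `hours` of dialysis, starting from c0."""
    dt = 1.0 / steps_per_hour
    c = c0
    for _ in range(int(hours * steps_per_hour)):
        c += -(K / V) * c * dt   # forward Euler step
    return c

# e.g. urea_concentration(100, 15, 40, 4) falls to roughly 22,
# i.e. about a 78% reduction over a 4-hour session.
```

Even a toy model like this shows why mechanistic reasoning matters: the predicted reduction depends on the ratio K/V, which connects a treatment parameter to patient physiology.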
A complementary physical sciences approach will further sharpen the value of our clinical trial outcomes by:
comprehensively exploiting clinical data (imaging, BP, ECG, blood testing, signals) to the patient's advantage and understanding what the underlying processes are;
generating basic science mechanism based evidence and inform us how and why the therapy may be effective;
providing decision making tools in the mid-term which will be based on cause-effect mechanism investigations;
providing a pruning and hypothesis testing platform using the mechanistic and AI systems;
integrating extant knowledge within our University, Hospitals, as well as worldwide to further increase the pace of our clinical research, be able to share our expertise with other investigators;
developing a big data (AI) infrastructure geared specifically towards kidney disease patients' data;
developing an exciting team of investigators with expertise in maths, HPC, imaging techniques, computer science, engineering, clinical sciences; and
working by collaboration with researchers throughout the world.
To do so, we have the following PM3 platforms:
PM3-SimVascular, a customized blood flow simulation library for local use.
PM3-Chaste, a customized cardiac electrophysiology simulation library for local use.
Virtual Cardiac Physiological Laboratory (VCPL);
Dynamic blood flow in internal organs simulator;
CT and MRI simulator.
TensorFlow-based data analysis methods. | https://www.kidneyclinicalresearchunit.com/srk-computational-medicine-laborato
Mariyah & Cristina - famously known as "mahryska" and "tinayums" - are Dubai-based freelance photographers making a mark in the Dubai fashion industry as photographers and bloggers. Having met several years ago, they discovered that they shared a great passion for photography, beauty and fashion, and they now work together side by side as a powerful duo.
The photography pair does a wide array of work for various agencies including model portfolios, advertising and commercial work, a wide range of fashion lookbooks, and editorials for both local and international fashion designers. They regularly cover the Dubai Fashion Week, Bridal Show, and Dubai Fashion Fiesta and also attend the other biggest fashion & beauty events regularly in the city to share updates and news on their online blogs.
They are also community volunteers at a Dubai-based non-profit Photography Club called OPPPS (www.oppps.com) where they volunteer to serve the Filipino Community - sharing and teaching photography enthusiasts. Several of their works have been exhibited and featured at OPPPS, Gulf Photo Plus, and Illustrado magazine. They are also regular photography contributors for Illustrado and IN fashion magazine. | https://shop.westerndigital.com/en-ca/community/extreme-team/mahryska-tinayums |
Does your university as a body systematically measure/track women’s application rate, acceptance/entry rate and study completion rate at the university?
The University conducts long-term longitudinal tracking and analysis of students from the admission data (including analysis of student origins, recruitment strategies, and collection of data on recruitment effectiveness) to the development paths of graduates (including various types of evaluation results, students’ performance in further studies and employment, development paths of alumni, employer satisfaction, etc.). Based on the learning outcomes of students, a statistical analysis model is constructed to track the long-term learning process and effectiveness from the time of admission. The results are used to establish a predictive model which serves as a basis for adjusting the teaching and counselling mechanism and the allocation of school resources.
The University ensures equal opportunities in education and demonstrates social responsibility.
The University provides disadvantaged students with education opportunities.
1. Encouraging faculties to increase the admission quota proactively so that disadvantaged students have a greater chance of securing admission
(1) In recent years, the University has proactively provided additional quota to admit disadvantaged students via various channels.
(2) Faculties are granted additional operating funds to encourage them to actively increase the admission quota for disadvantaged students.
(3) To cater to the needs of disadvantaged students in all respects, the University has integrated supportive measures into the screening process in the second phase of ‘Individual Application’ for university admission in the academic year 2018 and relaxed the screening criteria in the second phase.
2. Reducing the financial burden for disadvantaged applicants – exemption from registration fee and provision of travel allowance for ‘Individual Application’.
(1) The University allows economically disadvantaged students to be exempted from the registration fee for the self-administered entrance examination.
(2) During the screening in the second phase of ‘Individual Application’, the University grants students a travel allowance ranging from NT$150 to NT$1,000 according to their places of residence. The allowance will be distributed by faculties to eligible candidates on the day of screening.
3. Supportive measures for disadvantaged students
In order to support disadvantaged students in focusing on their studies, the University actively raises funds and establishes various supportive measures to improve the supportive mechanism for disadvantaged students. Support for disadvantaged students can be divided into two categories according to the goal of assistance: poverty alleviation (subsidy for tuition and miscellaneous fees) and emergency assistance (emergency aid funds and campus meal vouchers).
(1) Subsidy for tuition and miscellaneous fees as prescribed by the Ministry of Education
The reduced or exempted items include tuition fees, miscellaneous fees, pre-credit fees, base tuition and miscellaneous fees and other fees.
(2) Active fundraising for emergency aid funds and campus meal vouchers
To motivate students from economically disadvantaged backgrounds to strive for excellence, develop self-confidence and independence, alleviate their financial burden during university studies and allow them to study without financial worries, the University has established the following supportive measures: ‘Guidelines on the Implementation of Student Emergency Aid Funds’, ‘Guidelines on the Application for Mr. Wang Jin-pyng Emergency Aid Fund and Campus Meal Vouchers for Impoverished Students’, ‘Guidelines on the Application for the Student Emergency Aid Fund by Acting Bai Sha Culture and Education Foundation’ and ‘Regulations on the Distribution of Charity Meal Vouchers’. | http://en.ncue.edu.tw/files/11-1038-2752.php?Lang=en |